
Introducing the EEMBC MLMark Benchmark

The EEMBC MLMark benchmark is a machine-learning (ML) benchmark designed to measure the performance and accuracy of embedded inference. The motivation for developing this benchmark grew from the lack of a standardized environment for analyzing ML performance. MLMark is targeted at embedded developers and aims to clarify that environment in order to facilitate not just performance analysis of today's offerings, but also the tracking of trends over time to improve new ML architectures. With a 20-year heritage of developing embedded benchmarks, EEMBC is excited to add machine learning to its repertoire and is committed to evolving with the industry.

Learn more about MLMark from this whitepaper.

The MLMark benchmark adopts the following philosophy:

Clearly Define the Metrics

"Latency" and "throughput" mean different things to different people. Similar can be said of model "accuracy", where ground-truths are used. MLMark clearly defines each term, exactly how the measurements are taken, and how accuracy is calculated. Over time, all new additions to the benchmark will be measured the same way creating a historic record of inference trends.

Publish the Implementations

Rather than setting rules and allowing individuals to perform optimizations in private, MLMark requires that implementations be made public in the repository. Version 1.0 includes source code (and libraries) for:

Select Specific Models

There are many variables that impact ML performance, with the neural-net graph (model) being the most important. Consider a residual network for image classification: many such networks are available online, and ResNet-50 is a popular choice. ResNet-50 itself has many variations and optimizations (such as different input-layer strides), as well as multiple model formats (Caffe, TensorFlow) and different training datasets. To provide a consistent measurement, EEMBC selected specific models that are the most common and well-documented at this point in time, as well as the most likely to run on edge hardware.

Educate and Enable Users

Publishing the implementations not only ensures transparency, but also helps educate people working on performance analysis. Many embedded engineers with decades of experience are being asked to tackle this new technology, and the learning curve is steep. By consolidating models and code for multiple targets, and by keeping the implementations as simple as possible, MLMark provides broad insight into the nuts and bolts of inference across different SDKs.

Licensing

MLMark is licensed to be freely available for use by anyone. Since EEMBC is a non-profit consortium funded by membership and license fees, we offer a corporate license for any company wishing to use MLMark scores in marketing and PR literature for a specific product. Everyone else, including academics and researchers, is free to publish results with no restrictions. More information and the license can be found at the GitHub repository.

Roadmap

Version 1.x - Over the next 6-8 months, the benchmark will expand to include additional model formats (e.g., TFLite) and new target source-code bundles for new accelerator architectures (RockChip, Kalray, Bitmain, Lattice, etc.).

Version 2.x - This release will include a new test harness and smaller models to accommodate TDP-limited and otherwise constrained devices meant to fit in smaller power and cost envelopes.

Copyright © EEMBC
