Supplier eBooks

Renesas - Bringing Intelligence to the Edge

Issue link: https://resources.mouser.com/i/1499313

Four Metrics You Must Consider When Developing TinyML Systems

ELDAR SIDO | PRODUCT MARKETING SPECIALIST, RENESAS ELECTRONICS

Recent advancements in machine learning (ML) have split the field into two scales: traditional large-scale ML (cloud ML), where models keep growing to achieve the best possible accuracy, and the nascent field of tiny machine learning (TinyML), where models are shrunk to fit into constrained devices and run at ultra-low power. Because TinyML is still a young field, this blog discusses the metrics to consider when developing systems that incorporate TinyML, as well as current industry standards for benchmarking TinyML devices. The four metrics discussed are accuracy, power consumption, latency, and memory requirements; the requirement for each metric will vary greatly depending on the use case being developed.

Accuracy has been the main metric for ML model performance over the last decade, with larger models tending to outperform their smaller predecessors. In TinyML systems, accuracy is also a critical metric, but it must be balanced against the other metrics more carefully than in cloud ML.

Power consumption is a critical consideration, as TinyML systems are expected to operate for prolonged periods on batteries, typically drawing power on the order of milliwatts. A TinyML model's power consumption depends on the hardware instruction sets available; for example, an Arm® Cortex®-M85 processor is significantly more energy efficient than an Arm Cortex-M7 processor thanks to the Helium instruction set. It also depends on the underlying software used to run the model (i.e., the inference engine); for example, using the CMSIS-NN library improves performance drastically compared to reference kernels.
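To make the milliwatt scale concrete, a battery-powered node's runtime can be roughly estimated as stored energy divided by average power draw. The sketch below is illustrative only; the specific numbers (a 220mAh coin cell at 3V, a 2mW average draw) are assumptions for the example, not figures from this article:

```python
def battery_life_hours(capacity_mah: float, voltage_v: float, avg_power_mw: float) -> float:
    """Rough runtime estimate: stored energy (mWh) divided by average draw (mW)."""
    capacity_mwh = capacity_mah * voltage_v  # energy stored, in milliwatt-hours
    return capacity_mwh / avg_power_mw

# Illustrative example: a CR2032-class coin cell (~220mAh at 3V)
# powering a TinyML node that averages 2mW.
hours = battery_life_hours(220, 3.0, 2.0)
print(f"Estimated runtime: {hours:.0f} hours (~{hours / 24:.1f} days)")  # 330 hours, ~13.8 days
```

In practice the average draw is dominated by how often the device wakes to run inference, which is why per-inference energy (power x latency) is the figure designers optimize.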
Latency is important because TinyML systems run inference at the endpoint and do not require cloud connectivity, so their inference speeds are significantly better than those of cloud-based systems. Furthermore, in some use cases, ultra-low inference latency (on the order of milliseconds) is critical for production readiness. As with power consumption, latency depends on the underlying hardware and software.

Memory is a big hurdle in TinyML, as designers squeeze ML models down to fit into size-constrained microcontrollers (often with less than 1MB of memory). Reducing memory requirements is therefore a central challenge, and many techniques, such as pruning and quantization, are used during model development. The underlying software also plays a large role, as better inference engines optimize models more effectively through better memory management and optimized libraries for executing layers.
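As a minimal illustration of why quantization shrinks models, the sketch below maps float32 weights to int8 using a simple symmetric scale scheme, cutting weight storage by 4x. This is a generic, self-contained example of the idea, not the quantization scheme of any particular inference engine:

```python
import struct

def quantize_int8(weights):
    """Symmetric-scale quantization: map floats to int8 using one shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float values."""
    return [v * scale for v in q]

weights = [0.5, -1.25, 0.03, 2.0, -0.75]
q, scale = quantize_int8(weights)

# Storage comparison: float32 uses 4 bytes per weight, int8 uses 1.
float_bytes = len(weights) * struct.calcsize("f")
int8_bytes = len(q)
print(f"float32: {float_bytes} B, int8: {int8_bytes} B (4x smaller)")
```

The trade-off is a small rounding error per weight, which is why quantized models are typically re-evaluated against the accuracy metric after conversion.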
