Renesas - Bringing Intelligence to the Edge

Issue link: https://resources.mouser.com/i/1499313

The four metrics are correlated: accuracy and memory tend to be inversely correlated, while memory, latency, and power consumption tend to be positively correlated, so improving one metric can degrade the others. When developing a TinyML system, it is therefore important to consider all four together. A general rule is to first define the model accuracy required by the use case, and then compare a variety of candidate models against the other three metrics. Figure 1 shows a dummy example of several trained models. The marker shapes represent different model architectures with different hyperparameters; accuracy tends to improve as architecture size grows, at the expense of the other three metrics. A typical region of interest, defined by the system's use case, is shown; within it, only one model reaches 90% accuracy. If higher accuracy is required, the entire system should be reconsidered to accommodate the increase in the other metrics.

Figure 1: Example of metrics to consider when developing systems incorporating TinyML. (Source: Renesas Electronics)

Benchmarking TinyML Models

Benchmarks are necessary tools for setting a reproducible standard against which to compare different technologies, architectures, software, and so on. In AI/ML, accuracy is the key metric used to benchmark different models. In embedded systems, common benchmarks include EEMBC's CoreMark and ULPMark, which measure performance and power consumption, respectively. For TinyML, MLCommons has been gaining traction as the industry standard; it measures the four metrics discussed previously. Because TinyML systems are heterogeneous, fairness is ensured by using four AI use cases with four different AI models, each of which must achieve a certain level of accuracy to qualify for the benchmark. Renesas benchmarked two of its microcontrollers, the RA6M4 and RX65N, using TensorFlow Lite for Microcontrollers as the inference engine; the results can be viewed here.
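The selection rule described above (fix the required accuracy first, then compare the surviving candidates on memory, latency, and power) can be sketched in a few lines of Python. The model names, metric values, and cost weights below are purely illustrative assumptions, not Renesas data:

```python
# Hypothetical sketch of TinyML model selection.
# All names and numbers are made up for illustration.

candidates = [
    # (name, accuracy %, memory KiB, latency ms, power mW)
    ("cnn_small",  88.0,  64, 12, 3.1),
    ("cnn_medium", 90.5, 128, 25, 4.8),
    ("cnn_large",  93.0, 512, 70, 9.5),
]

REQUIRED_ACCURACY = 90.0  # defined by the use case first

# Step 1: keep only models that meet the accuracy floor.
viable = [m for m in candidates if m[1] >= REQUIRED_ACCURACY]

# Step 2: rank the survivors on the other three metrics, here with a
# simple weighted cost (weights are an assumption; tune per system).
def cost(model):
    _, _, mem_kib, lat_ms, pwr_mw = model
    return 0.4 * mem_kib + 0.3 * lat_ms + 0.3 * pwr_mw

best = min(viable, key=cost)
print(best[0])  # -> cnn_medium
```

In this toy example, `cnn_small` is excluded for missing the accuracy floor, and `cnn_medium` wins over `cnn_large` because its lower memory, latency, and power outweigh the accuracy surplus, mirroring the region-of-interest idea in Figure 1.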
■ Learn More: RA6M3 32-Bit Microcontroller Group
■ Learn More: RA4E1 32-Bit Microcontroller Group
