Supplier eBooks

NXP - Imagine the Possibilities

Issue link: https://resources.mouser.com/i/1442826


AI platform, which can then run on a smaller target device at the edge. A trained model, or inference engine, is a set of mathematical operations that detects or recognizes objects, speech, or changes in expected behavior. TensorFlow and Caffe are examples of frameworks that use application programming interfaces (APIs) to abstract the complex math and make it easier to port trained models and applications across different platforms and hardware resource types.

The next step is to port the trained model to a platform selected from an extensive set of available options. ML for low-bandwidth sensor inputs can often be handled by very low-cost MCUs based on Arm Cortex®-M4 or Cortex-M7 technology, such as NXP's i.MX RT crossover MCUs. Typical functions include detecting an acoustic keyword, a distinctive sound, or an anomaly such as a vibration or environmental change from the norm. Facial and voice recognition can also run on Cortex-M technology for limited numbers of people or words.

As complexity increases, especially with camera sensor inputs, the application might shift up to a device with multiple processing units, such as the i.MX 8M Nano applications processor. It comes with one to four Arm Cortex-A53 cores, a Cortex-M7 core, a GPU with OpenCL capability, a MIPI-CSI camera input, and many other integrated features. For even more demanding object recognition, designers might use an applications processor such as the i.MX 8QuadMax, which integrates two Cortex-A72 cores, four Cortex-A53 cores, two Cortex-M4 cores, two GPUs with OpenCL and OpenVX vision extensions, a DSP, and eight lanes of MIPI-CSI that can handle up to eight one-lane cameras or other combinations of multi-lane cameras. Basler (Germany) and Congatec (Germany) demonstrated their combined machine-vision and object-recognition solution for shopping on the i.MX 8QuadMax, and they are porting it to the i.MX 8M family for further cost optimization.
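The kind of low-bandwidth anomaly detection mentioned above (flagging a vibration or environmental reading that departs from normal) can be illustrated with a minimal sketch. This is an illustrative toy, not NXP firmware: the rolling-baseline approach, the window, and the three-sigma threshold are all assumptions chosen for clarity.

```python
# Minimal sketch of sensor-anomaly detection of the sort a Cortex-M-class
# MCU might run on a low-bandwidth input: flag a sample that deviates
# sharply from the recent baseline. Threshold and data are illustrative.
from statistics import mean, stdev

def is_anomaly(history, sample, threshold=3.0):
    """Return True if `sample` lies more than `threshold` standard
    deviations from the mean of the recent `history` window."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Usage: a steady vibration baseline, then a sudden spike.
baseline = [0.98, 1.02, 1.00, 0.99, 1.01, 1.00, 0.97, 1.03]
print(is_anomaly(baseline, 1.01))  # -> False (within normal variation)
print(is_anomaly(baseline, 5.00))  # -> True  (spike flagged as anomaly)
```

On a real MCU this logic would run on fixed-point data in C, but the structure (maintain a baseline, compare each new sample against it) is the same.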
With all this scalability, designers next need tools to optimize performance, reduce system cost, improve response time and accuracy, and make the best use of on-chip resources for each trained model or inference engine.

Machine Learning Software Development Environments

As mentioned in the previous article, "Unlock Machine Learning on Edge Devices with eIQ™ Software Development Environment," one of the biggest challenges of migrating ML to the edge is optimizing and matching the correct models with an inference engine that supports the unique hardware set of each particular edge device.
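The matching problem described above, pairing a model with an inference engine that the target hardware can actually accelerate, can be sketched as a simple capability lookup. This is a hypothetical illustration of the concept, not the eIQ API; every feature name and backend name below is invented for the example.

```python
# Hypothetical sketch (not the actual eIQ software) of matching an
# inference backend to a device's hardware features, with a portable
# CPU fallback when no accelerator is present.
BACKEND_PREFERENCES = [
    # (required hardware feature, inference backend) -- names are invented
    ("npu", "neural-accelerator-runtime"),
    ("gpu_opencl", "opencl-runtime"),
    ("cortex_m", "cmsis-nn-kernels"),
]

def select_backend(hardware_features):
    """Return the first backend whose required feature the device has."""
    for feature, backend in BACKEND_PREFERENCES:
        if feature in hardware_features:
            return backend
    return "generic-cpu-runtime"  # portable fallback

# Usage: an i.MX 8M Nano-class device exposes a GPU with OpenCL,
# while a small MCU exposes only Cortex-M cores.
print(select_backend({"gpu_opencl", "cortex_a53"}))  # -> opencl-runtime
print(select_backend({"cortex_m"}))                  # -> cmsis-nn-kernels
print(select_backend(set()))                         # -> generic-cpu-runtime
```

The real tooling must also weigh model size, quantization, and memory limits, but the core idea is the same: enumerate what the silicon offers and bind the model to the engine that exploits it.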
