
NXP - Imagine the Possibilities

AI &amp; Machine Learning is at the Edge

By Alexandra Dopplinger, P.Eng., NXP Industrial Marketing Manager

"The main reason machine learning lives on the edge is to improve user experience."

Machine learning research and development started on the world's most powerful computers, and much of the training and inference still gets done on supercomputers and server farms in the cloud. However, technology has advanced to the point where it is now feasible to move machine learning (ML) out of the cloud and onto the edge, where it can be much more useful.

Why do we want machine learning to live on the edge? The main reason is to improve the user experience. People don't want to wait any noticeable length of time for a device to recognize them and obey their commands. Recognition and response that occur somewhere in the cloud bring potential delay, reduced reliability, and more opportunity for an incorrect response.

Machines in motion, such as service robots, drones, and autonomous vehicles, have an even greater need for ML at the edge, since they must continually collect sensor inputs, infer status, and decide their next actions within milliseconds. The laws of physics do not support such intense data collection and real-time response across a wireless connection to a distant cloud service. Autonomous machines in motion require untethered, reliable onboard learning and decision-making.

Most humans and machines share the need to safeguard privacy, secure valuable data and communications, and protect against hacking and cloning. We hear increasing demand to prevent unauthorized use of voice, video, and sensor data. Security and privacy are easier to maintain when raw data remains on the local edge machine, with minimal or no data transmitted to the cloud.

Finally, the cost of the technology has fallen enough to move some ML processing to edge devices in buildings, homes, and vehicles. The resulting reduction or elimination of ongoing cloud bandwidth, processing, and storage fees can offset the cost of migrating ML services out of the cloud and onto many types of edge devices or sensor nodes.

Enabling the Machine Learning Revolution

NXP and its partners continually collaborate to offer integrated hardware, software, and tools that allow the creation of affordable and practical ML solutions for the real world. NXP is building on the industry's most comprehensive and most scalable Arm® technology-based processing platform, which allows any developer to scale up or down to reach the cost- and power-optimized solution for each specific use case. The result is a high degree of software and platform reuse, with lower maintenance and ownership costs. Many neural network (NN) solutions generate a trained model or inference engine on a sophisticated centralized computing
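As a rough illustration of the pattern described above, where a model is trained on centralized computing and only inference runs locally, the following sketch loads a TensorFlow Lite model on an edge device and times a single on-device inference. This is a minimal sketch, not a prescribed NXP workflow: it assumes a Linux-capable edge device with the generic tflite_runtime Python package installed, and the model file name (keyword_model.tflite) and the use of dummy input data are hypothetical placeholders.

    import time
    import numpy as np
    import tflite_runtime.interpreter as tflite  # lightweight TFLite runtime for edge Linux devices

    # Hypothetical model exported after cloud-side training; the path is a placeholder.
    interpreter = tflite.Interpreter(model_path="keyword_model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Dummy input matching whatever shape and dtype the trained model expects.
    input_shape = input_details[0]["shape"]
    dummy_input = np.random.random_sample(input_shape).astype(input_details[0]["dtype"])

    # Run one local inference and measure its latency; the raw data and the
    # decision stay on the device instead of making a round trip to the cloud.
    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    start = time.perf_counter()
    interpreter.invoke()
    latency_ms = (time.perf_counter() - start) * 1000.0

    scores = interpreter.get_tensor(output_details[0]["index"])
    print(f"Inference ran in {latency_ms:.1f} ms, top class = {int(np.argmax(scores))}")

Timing the invoke() call directly on the device is one simple way to check whether a given model meets the millisecond-level response budgets that machines in motion require.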
