Supplier eBooks

Micron - 5 Experts on Addressing the Hidden Challenges of Embedding Edge AI into End Products

Issue link: https://resources.mouser.com/i/1529106


dynamically with their environment, processing data in real time for obstacle avoidance and path optimization. To support the AI inference required for such tasks, these AMRs leverage systems-on-a-chip (SoCs) with integrated neural processing units (NPUs) capable of running complex algorithms, such as simultaneous localization and mapping (SLAM), for real-time decision-making.

In these applications, split-second decisions based on AI inference can have significant consequences, so memory read and write speeds can be a make-or-break factor: faster data access allows faster processing. Bandwidth optimization is equally important because it directly affects how quickly data can be transferred between memory and the processing unit; higher bandwidth allows more efficient data movement, reducing bottlenecks and improving overall system performance. Designers must therefore carefully select memory technologies capable of keeping pace with the AI processor's demands. Technologies such as LPDDR and other specialized solutions are being developed to meet these demands by offering a combination of high performance and low power consumption.

In tension with these performance requirements, embedded AI systems, especially those powered by batteries, often operate under strict power constraints. Designers must balance the need for high-performance memory against those power limitations, which often means selecting low-power memory technologies or implementing power-saving features such as dynamic voltage and frequency scaling. Additionally, thermal management becomes a significant consideration, as high-performance memory can generate substantial heat, potentially affecting system reliability and performance.

Chapter 2 | Design Considerations for Embedded AI

"When integrating AI into hardware, I prioritize security from the start. I implement robust encryption for data both at rest and in transit, ensuring model integrity and protecting against adversarial attacks. I carefully consider the balance between edge processing for latency and privacy versus cloud offloading for more complex models."

Anurag Arora
Engineering Manager, Verkada
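One concrete piece of the model-integrity practice Arora describes can be sketched as a digest check before weights are handed to the inference engine. This is an illustrative sketch only, not Verkada's implementation; the function names and the provisioning-time digest workflow are assumptions for the example.

```python
import hashlib

def model_digest(weights: bytes) -> str:
    # SHA-256 digest of the model blob; any tampering with the
    # stored weights changes this value.
    return hashlib.sha256(weights).hexdigest()

def verify_before_load(weights: bytes, expected: str) -> bool:
    # Recompute the digest at boot and refuse to load on mismatch.
    return model_digest(weights) == expected

# At provisioning time, record the digest of the trusted model blob.
trusted_digest = model_digest(b"example-model-weights")

print(verify_before_load(b"example-model-weights", trusted_digest))   # True
print(verify_before_load(b"tampered-model-weights", trusted_digest))  # False
```

In a real device the recorded digest would itself be signed or stored in tamper-resistant storage, so an attacker cannot simply replace both the weights and the digest.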
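To make the bandwidth discussion above concrete, peak theoretical memory bandwidth is simply the data rate multiplied by the interface width. The sketch below uses a hypothetical LPDDR5-class configuration (6400 MT/s on a 32-bit channel); the numbers are illustrative, not the specification of any particular part.

```python
def peak_bandwidth_gbps(data_rate_mtps: float, bus_width_bits: int) -> float:
    # Peak theoretical bandwidth in GB/s:
    # (transfers per second) x (bytes per transfer), scaled to gigabytes.
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# Hypothetical LPDDR5-class interface: 6400 MT/s on a 32-bit channel.
print(peak_bandwidth_gbps(6400, 32))  # 25.6 GB/s
```

Sustained bandwidth in a real system is lower than this peak because of refresh cycles, access patterns, and controller overhead, which is why designers must budget headroom rather than sizing to the datasheet maximum.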
