Chapter 2
As AI models continue to grow in size and complexity, the demand for high-performance memory grows with them. Modern AI applications often require gigabytes of working memory, making memory a limiting factor in the AI compute pipeline.
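As a rough, back-of-the-envelope illustration (the model size and precisions below are hypothetical, not taken from the text), the memory needed just to hold a model's weights can be estimated from its parameter count and numeric precision:

```python
def model_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold a model's weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# Footprint of a hypothetical 1-billion-parameter model at common precisions:
for precision, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"{precision}: {model_memory_gb(1e9, nbytes):.1f} GB")
# FP32: 4.0 GB, FP16: 2.0 GB, INT8: 1.0 GB -- and that is before
# activations, I/O buffers, and the operating system are accounted for.
```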
In edge AI applications, where AI processing occurs close to the data source rather than in centralized data centers, memory requirements become even more specialized. Specifically, edge AI systems must balance performance needs with constraints such as power consumption, physical size, and cost. This balancing act often leads to design trade-offs between memory density, speed, and power efficiency.
AI algorithms, particularly those used for inference at the edge, demand high-speed data access and processing. The memory system must therefore deliver data to the AI processor with minimal latency and maximum bandwidth.
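A minimal sketch of why bandwidth matters (the figures are assumptions for illustration, not from the text): if every weight must be streamed from memory once per inference, the required bandwidth scales with model size times inference rate:

```python
def min_bandwidth_gbps(model_bytes: float, inferences_per_sec: float) -> float:
    """Lower bound on memory bandwidth if every weight is read once per inference."""
    return model_bytes * inferences_per_sec / 1e9

# Hypothetical: a 2 GB (FP16, ~1B-parameter) model running 30 inferences/s
print(f"{min_bandwidth_gbps(2e9, 30):.0f} GB/s")  # -> 60 GB/s
```

Even this simplified lower bound can approach or exceed what a single low-power DRAM channel sustains, which is one reason quantization and on-chip caching are common design levers at the edge.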
Consider a traditional warehouse robot that followed simple paths without real-time awareness. Modern autonomous mobile robots (AMRs), powered by edge AI, now use lidar, 3D cameras, and inertial measurement unit sensors to interact with their environment in real time.
DESIGN CONSIDERATIONS FOR EMBEDDED AI
"Balancing the cost of components with the performance and scalability needs of the AI project can be challenging. Developers must consider the total cost of ownership, including initial costs and ongoing maintenance."
Barry Chang
Director of Advantech Edge Server & AI Group,
Advantech