Chapter 1 | Unlocking the Power: Memory and Storage in AI Applications
exist, including standard DRAM as
components or modules and low-power
variants like LPDDR.
The choice of DRAM technology
depends on the specific requirements
of the AI application. For instance,
high-performance AI systems requiring
significant processing power (measured in
tera operations per second, or TOPS) may
use more advanced DRAM technologies.
In contrast, power-constrained embedded
AI devices may opt for LPDDR4 or
LPDDR5/x to balance performance with
energy efficiency. In fact, LPDDR5X
delivers peak speeds at 8.533 gigabits
per second (Gbps), which is up to 33%
faster than previous-generation LPDDR5.
Using LPDDR5X for edge AI systems
can reduce power consumption by 30%
while delivering higher bandwidth and
performance than LPDDR4.
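As a back-of-the-envelope check of those figures, the sketch below computes the per-pin speedup and peak package bandwidth. The 6.4 Gbps LPDDR5 baseline and the 64-bit bus width are common spec values assumed here, not taken from the text:

```python
# Per-pin data rates in Gbps. The LPDDR5X figure comes from the text;
# the LPDDR5 baseline (6.4 Gbps) is an assumed common spec value.
LPDDR5_GBPS = 6.4
LPDDR5X_GBPS = 8.533

# Relative per-pin speedup of LPDDR5X over LPDDR5.
speedup_pct = (LPDDR5X_GBPS / LPDDR5_GBPS - 1) * 100
print(f"LPDDR5X per-pin speedup: ~{speedup_pct:.0f}%")

# Peak bandwidth for an assumed 64-bit-wide (x64) package,
# converting gigabits to gigabytes (divide by 8).
peak_bw_gb_s = LPDDR5X_GBPS * 64 / 8
print(f"Peak package bandwidth: {peak_bw_gb_s:.1f} GB/s")
```

The ~33% result matches the figure quoted in the text; actual system bandwidth depends on channel configuration and controller efficiency.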
On the other hand, storage refers to
long-term data retention in a system,
even when the power is off. In edge AI
systems, storage is primarily needed to
retain large datasets and trained AI
models. For these applications, storage
usually comes in the form of managed
NAND (short for "NOT AND") solutions
like embedded MultiMediaCard
(eMMC) or flash memory.
Naturally, edge AI performance is
significantly impacted by storage. The
speed, capacity, and reliability of storage
directly affect how quickly and efficiently
AI models can be loaded, executed, and
updated on edge devices. Read and write
speeds, for example, are an impactful
performance specification of storage
devices. Embedded flash memory with
high read and write speeds allows quick
loading of AI models and rapid data
access during inference.
"As AI applications grow in
complexity and data volume,
scalable memory and storage
solutions become essential. They
allow for seamless expansion to
accommodate increasing data and
computational demands without
compromising performance."
Barry Chang
Director of Advantech Edge Server & AI Group,
Advantech
5 Experts on Addressing the Hidden Challenges of Embedding Edge AI into End Products