
How RISC-V Is Driving Edge ML

Power-Efficient RISC-V Processors Shift ML Workloads Back to the Edge


By Brandon Lewis for Mouser Electronics

Published February 20, 2024

Just about anyone starting in the machine learning (ML) field quickly learns how expensive cloud data storage and processing can be. Many organizations have attempted to limit these costs by adopting on-premises infrastructure to host their ML workloads. At scale, however, these local data centers come with tradeoffs of their own, chief among them increased power consumption, which raises utility bills, creates thermal management issues for equipment, and undermines sustainability efforts.

In the cloud and on premises alike, overhead is directly related to the volume of data ingested into the central hub. The solution is to filter as much data as possible before it gets there, using edge computing systems that analyze inputs as close to the point of data acquisition as possible.

Of course, these edge computing systems must warrant the switch by being power efficient themselves. As if on cue, a new generation of RISC-V processors delivers three times the power efficiency per unit of performance compared with devices based on other instruction set architectures (ISAs).

RISC-V in Edge ML: Fewer Instructions, Less Power

The convergence of ML and edge computing enables smart devices capable of autonomous decision-making and real-time adaptation. Demand for this type of data processing hierarchy coincides with advances in RISC-V processor technology, which is already being adopted in connected edge applications such as the following:

  • Wearables: Fitness trackers with RISC-V processors perform on-device activity recognition and health monitoring, providing real-time personalized insights and user feedback.
  • Smart buildings: RISC-V processors power building-automation devices that perform real-time object detection, anomaly recognition, intelligent automation, and security.
  • Robotics: Industrial robots equipped with RISC-V processors perform real-time image processing and object detection, enabling them to adapt to dynamic environments and perform complex tasks autonomously.

RISC-V will continue to play a key role in transforming these and other use cases, partly because of its open, standardized ISA and its computational efficiencies that streamline the implementation of complex AI algorithms on edge devices. This efficiency is derived from the RISC-V architecture's most fundamental building block: the instruction set.

RISC-V is based on a simplified ISA that features a base set of integer instructions (RV32I or RV64I) with optional extensions that processor architects can add to accommodate various use cases. Two critical extensions significantly boost ML operations on RISC-V processors:

  • Vector extension (V): This extension supports the vector operations essential for efficient matrix multiplication, a fundamental operation in many ML algorithms. Vector instructions significantly improve performance by allowing the processor to operate on multiple data elements simultaneously.
  • Compressed extension (C): This extension introduces compressed instructions that require fewer bits to encode, resulting in smaller code size and lower memory footprint. This is particularly beneficial for edge devices with limited memory resources.
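A back-of-the-envelope sketch can make these two benefits concrete. The Python model below is purely illustrative; the vector length, instruction counts, and compressed-instruction ratio are assumptions, not measurements from any real RISC-V core. It shows that a vector unit retires fewer instructions by operating on several elements per instruction, and that compressed 16-bit encodings shrink code size relative to fixed 32-bit instructions.

```python
import math

def vector_op_count(n_elements: int, vlen_elements: int) -> int:
    """Vector instructions needed to process n_elements when each
    vector instruction handles vlen_elements at once."""
    return math.ceil(n_elements / vlen_elements)

# Scalar baseline: one multiply-accumulate instruction per element.
n = 1024
scalar_ops = n                        # 1024 instructions
vector_ops = vector_op_count(n, 8)    # 128 instructions at 8 elements/op

def code_size_bytes(n_instr: int, compressed_fraction: float = 0.6) -> float:
    """Estimated code size if compressed_fraction of instructions use
    2-byte RVC encodings and the rest use standard 4-byte encodings.
    The 60% default is an illustrative assumption."""
    return n_instr * (compressed_fraction * 2 + (1 - compressed_fraction) * 4)

base_size = 10_000 * 4               # RV32I only: 40,000 bytes
rvc_size = code_size_bytes(10_000)   # with C extension: 28,000 bytes
```

Real savings depend on the workload's instruction mix and the hardware vector length, but the direction of the effect is what matters for constrained edge devices.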

Combining these extensions allows RISC-V processors to execute ML workloads with both high performance and high efficiency. RISC-V processor IP company SiFive, for example, is leveraging vector extensions and other microarchitectural innovations to achieve 30 to 40 percent better power efficiency than competing solutions.[1]

In fact, studies show that RISC-V devices routinely outperform devices based on most established ISAs in cycles per instruction (CPI), a measure of the average number of clock cycles required to execute one instruction.[2] These tests point to RISC-V devices' ability to sustain complex ML tasks for longer durations while preserving thermal efficiency.
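CPI feeds directly into the classic "iron law" of processor performance: execution time equals instruction count times CPI divided by clock frequency. The short sketch below uses made-up counter values, not benchmark data, to show how a lower CPI translates into shorter (and therefore lower-energy) runs at the same clock rate.

```python
def cpi(total_cycles: int, instructions_retired: int) -> float:
    """Average clock cycles per instruction; lower is better."""
    return total_cycles / instructions_retired

def execution_time_s(instructions: int, cpi_value: float, clock_hz: float) -> float:
    """Iron-law estimate: time = instructions * CPI / frequency."""
    return instructions * cpi_value / clock_hz

# Hypothetical performance-counter readings from two cores
# running the same 1M-instruction workload at 100 MHz.
time_a = execution_time_s(1_000_000, cpi(1_400_000, 1_000_000), 100e6)  # CPI 1.4
time_b = execution_time_s(1_000_000, cpi(2_100_000, 1_000_000), 100e6)  # CPI 2.1
```

With the same instruction count and clock, the CPI-1.4 core finishes a third sooner, which is exactly the thermal and energy headroom the studies describe.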

RISC-V Ecosystem and Tools for Edge ML

RISC-V's modular ISA is particularly beneficial when developing compact, energy-efficient processor implementations. The more straightforward instruction set contributes to an optimized design, reduced chip design and verification time, lower cost, and, of course, less power consumption.

Realizing all of those benefits in an end system ultimately falls to developers. The rapid adoption of RISC-V technology in edge computing environments can therefore be attributed in part to the parallel growth of the software and tools ecosystem emerging around the open, standards-based processor hardware.

Popular compilers such as LLVM and GCC now support RISC-V, ensuring that generated code is optimized for the target processor even when optional ISA extensions are in use. Meanwhile, popular frameworks like TensorFlow and PyTorch are being ported to RISC-V, and embedded software firms are contributing their own ML libraries, frameworks, and middleware.
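In practice, developers select the base ISA and extensions through the compiler's target flags. The invocations below are hypothetical examples, not commands from the article; exact flag spellings and extension support vary by toolchain version, and the source file and output names are placeholders.

```shell
# Hypothetical GCC cross-compilation for an RV32 target with the
# M, A, C, and V extensions enabled (requires a recent toolchain):
riscv64-unknown-elf-gcc -march=rv32imacv -mabi=ilp32 -O2 \
    -o model_runner.elf model_runner.c

# Roughly equivalent Clang/LLVM invocation:
clang --target=riscv32 -march=rv32imacv -mabi=ilp32 -O2 \
    -o model_runner.elf model_runner.c
```

Because the `-march` string names the extensions explicitly, the compiler can emit vector and compressed instructions only when the target actually implements them.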

For instance, Antmicro and Google Research have partnered on a rapid prototyping and pre-silicon development solution for RISC-V-based edge ML applications that consists of the former’s Renode simulation framework and the latter's Kenning bare-metal runtime (Figure 1). The joint solution helps developers accelerate the engineering lifecycle by permitting ML models to run on simulated RISC-V hardware. This ultimately allows for evaluating and optimizing the entire technology stack before ever committing to expensive silicon fabrication.
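As a rough illustration of that pre-silicon flow, a Renode session typically creates a simulated machine, loads a platform description, loads the cross-compiled firmware, and starts execution. The script below is only a sketch: the platform file and ELF name are placeholders, and exact commands depend on the Renode version in use.

```
# Illustrative Renode monitor script; paths are placeholders.
mach create "edge-ml-board"
machine LoadPlatformDescription @platforms/cpus/sifive-fe310.repl
sysbus LoadELF @ml_demo.elf
start
```

Because the whole stack runs in simulation, the same firmware binary can later be flashed to physical silicon once a suitable board is available.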

Figure 1: This RISC-V simulation framework is the product of a collaboration between Antmicro and Google Research. It provides a hardware/software co-design flow for accelerating ML development. (Source: Author)

There is, of course, still room for improvement in both RISC-V and ML development tools. As these ecosystems evolve in parallel, there are still challenges to overcome, including the following:

  • Maturity level: The RISC-V ecosystem is relatively young compared to established architectures, which means a smaller community of experienced developers and less comprehensive tool and library support.
  • Standardization issues: Although RISC-V International provides the foundation for inter-supplier cooperation and innovation, implementation diversity is to be expected in an open ecosystem. This always leaves the potential for architecture fragmentation and compatibility challenges. Ongoing standardization efforts are essential to maintaining a smooth and unified development process.
  • Hardware accessibility: The market for commercially available RISC-V processors optimized for ML tasks is still limited. However, as demand increases, the hardware landscape will evolve and scale in kind. 

Addressing these challenges requires collaboration and investment from industry leaders, research institutions, and open-source communities. With industry and the open source and open standards communities continuing to invest in the RISC-V ecosystem, there is every reason to expect RISC-V technology to keep making inroads in energy-efficient edge applications.

RISC-V Shaping the Future of ML at the Edge

From an implementation perspective, the debate between reduced instruction set computers (RISC) and complex instruction set computers (CISC) is largely outdated. The efficiency and performance of a CPU are now predominantly determined by its microarchitecture, which implements the ISA, and by the process node used to fabricate the physical chip.

As RISC-V and edge ML markets evolve, we will continue to see hardware innovation and more specialized RISC-V processors. These processors will likely contain dedicated accelerators, optimized memory architectures, and other features that enhance overall performance and efficiency in executing ML workloads.

These added features will continue to broaden the application areas for RISC-V-powered edge devices, paving the way for their deployment in everything from smart homes and healthcare devices to industrial automation systems and autonomous vehicles. With such a broad scope, RISC-V-based processors will continue to serve as fundamental building blocks for smart, connected edge ML systems, a trend already taking hold in real-world applications across multiple sectors.

As the RISC-V ecosystem continues to mature and developers become more familiar with the architecture, more innovative applications will emerge, pushing the limits of what's possible. The future is one of intelligent, interconnected, and efficient innovation.

 

Sources

[1] SiFive website, n.d., accessed February 16, 2024, https://www.sifive.com/.


[2] Wajid Ali, "Exploring Instruction Set Architectural Variations: x86, ARM, and RISC-V in Compute-Intensive Applications," Engineering: Open Access 1, no. 3 (2023): 157–162.

About the Author

Brandon has been a deep tech journalist, storyteller, and technical writer for more than a decade, covering software startups, semiconductor giants, and everything in between. His focus areas include embedded processors, hardware, software, and tools as they relate to electronic system integration, IoT/industry 4.0 deployments, and edge AI use cases. He is also an accomplished podcaster, YouTuber, event moderator, and conference presenter, and has held roles as editor-in-chief and technology editor at various electronics engineering trade publications. When not inspiring large B2B tech audiences to action, Brandon coaches Phoenix-area sports franchises through the TV.
