
Real-Time Linux Lives in RISC-V

 

(Source: MicroSemi.com)

 

 

From Simpler Beginnings

The earliest microprocessors were the simplest of architectures. The four-bit von Neumann approach let a processor spend multiple cycles fetching an instruction, decoding it, and executing it, then storing the result internally or externally at the cost of still more cycles. This architecture served us well; for many generations it enabled embedded designs and computerized systems that perform admirably, even today. In contrast, modern computers use multi-core gigahertz-speed processors with more memory and storage than anyone could have imagined.

 

Operating systems, too, have evolved from simpler times. Early kernels that tied the basic functions together included Control Program/Monitor (CP/M), the Tandy Radio Shack Disk Operating System (TRS-DOS), Apple DOS, Commodore's system software, and other pioneering efforts. These first closed-source, proprietary operating systems let parameters be passed as command-line arguments, giving users more flexibility and control.

 

As an open-source operating environment, Linux is well-positioned for both small, embedded microcontroller designs and full-fledged, high-end computer platforms. Embedded Linux has proven itself a scalable and flexible choice for designers. As an embedded operating system, it supports command-line operation and scripting with a kernel that can be kept small and compact, which is ideal for memory- and resource-limited designs.

 

On the higher end of the spectrum, full high-resolution graphical operations and user interfaces can run on more sophisticated processors with more memory and resources. Now, designers can choose fully tested and debugged blocks of code for file I/O, graphics control, user interface, communications, peripherals, and so on. It’s never been quicker to go from concept to prototype by choosing already-proven IP and stitching it together.

 

This is possible because scalable Linux eliminates costly licensing and comes in proven real-time flavors. That holds for high-end application-specific designs as well as for smaller, less complex ones such as Internet of Things (IoT) devices. The explosion of IoT is also changing the landscape.

 

As artificial intelligence (AI) hardware and software become more widespread, embedded intelligence blocks grow more important. This is increasingly true given the multitude of multicore hardware solutions available today. The open-source RISC-V processor architecture also provides a natural path for both embedded and high-end applications, especially as AI capabilities increasingly merge with multicore processor architectures.

 


 

The Base of Knowledge

 

Modern multicore hardware, ultra-high-resolution graphics and 3D rendering, and modern applications (especially AI) require vast amounts of memory and data, accessible either locally or globally. Reliable hardware functionality and error-free operating system behavior are crucial. Embedded designs dedicated to specific functions are now emerging faster than desktop personal computers, and for most of us embedded Linux has transparently become part of our digital infrastructure.

 

A key benefit of open-source technology is the expert base that forms around it. Rather than being locked into one manufacturer's unique characteristics, everyone plays on the same field by the same rules. This knowledge and experience grow with every new design, and the learning time for architectures, tools, and techniques keeps shrinking. Nor is this just about high-end graphics rendering, communications, and data processing in the traditional form: it was once a formidable task to weave together the pieces of an embedded system just for the standard peripherals.

 

Modern and next-generation smart systems use machine learning, deep learning, and neural networks as parts of their AI. Developing an AI algorithm for a specific task is one part of the process. Exposing it to data, watching it learn, determining whether successful intelligence has been achieved, then bundling, packaging, and using it is the modern challenge that introduces unknowns into budgets and schedules. The ability to incorporate an encapsulated, pre-trained AI stage by itself makes an open-source processor and operating system more desirable than ever, especially when multiple design teams are each solving a piece of your puzzle.

 

Linux AI libraries such as the Caffe deep learning framework already cover industrial applications as well as speech, vision, and graphics. Business AI applications can likewise take advantage of the MLlib machine learning library, which can be used from the Python, Java, Scala, and R programming languages to target AI for business development and trend analysis. Several open-source AI tools for Linux are available right now to embed in various designs.

 

Once an AI model has proven itself effective, the learned experience can be captured and reused. This makes it much easier for a design team to introduce newer versions of a design while preserving legacy work. Design reuse applies to hardware, firmware, application software, embedded data, and the embedded knowledge gained from learning.

 

Linux for RISC-V

 

Several distributions for Linux have been compiled and tested for RISC-V in 32-bit and 64-bit implementations. To provide a higher level of integration success, expect device and development toolmakers to bundle Linux packages that are more fully functional for the specific single or multicore environment. The prebuilt toolchains require less time and effort to set up and install and help get you designing and coding more quickly.
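To give a feel for what a prebuilt toolchain buys you, here is a minimal Python sketch that assembles a typical cross-compile command for a 64-bit Linux target. The `riscv64-unknown-linux-gnu-gcc` driver name and the `-march`/`-mabi` flags are common defaults for bundled RISC-V GNU toolchains, not specifics from this article, and `hello.c` is a hypothetical input file.

```python
import shutil
import subprocess

def cross_compile_cmd(source: str, output: str) -> list[str]:
    """Build a typical invocation of a prebuilt RISC-V GNU cross-compiler.

    The triple and flags below are common defaults for 64-bit Linux
    targets (RV64GC ISA, LP64D ABI); a vendor-bundled toolchain may
    use a different triple.
    """
    return [
        "riscv64-unknown-linux-gnu-gcc",  # cross-compiler driver
        "-march=rv64gc",                  # base ISA plus standard extensions
        "-mabi=lp64d",                    # 64-bit ABI with hardware floats
        "-O2",
        "-o", output,
        source,
    ]

cmd = cross_compile_cmd("hello.c", "hello.elf")
# Only invoke the compiler if the toolchain is actually on PATH.
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```

The point of the sketch is that a prebuilt toolchain reduces target setup to picking the right triple and two machine flags; everything else is the familiar GCC workflow.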

 

For example, the HiFive Unleashed development board was touted as the world's first and only Linux-capable multicore RISC-V development board. Its 8GB of error-corrected DRAM, 32MB of quad-SPI flash, and removable microSD storage let the company's featured Freedom U540 SoC work with Windows, macOS, and Linux hosts for rapid deployment of the prebuilt toolchains. Designers are free to use the GNU or OpenOCD embedded toolchains, which adds CentOS and Ubuntu to the mix. Design engineers don't even need target hardware: machine emulators such as QEMU and VirtualBox allow guest operating systems and code to be loaded, run, and debugged virtually, then downloaded to a target development machine without the high risk of crashing the entire development environment.
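The QEMU route can be sketched as follows. This Python snippet only assembles a plausible `qemu-system-riscv64` command line for QEMU's generic "virt" machine rather than running it; the kernel and disk-image names are placeholders you would supply from your own build, and a real boot also needs matching firmware and a root filesystem.

```python
import shutil

def qemu_riscv64_cmd(kernel: str, disk: str) -> list[str]:
    """Assemble a typical qemu-system-riscv64 invocation for the
    generic 'virt' machine. Paths are placeholders, not real files."""
    return [
        "qemu-system-riscv64",
        "-machine", "virt",      # generic virtual RISC-V platform
        "-smp", "4",             # four emulated harts (cores)
        "-m", "2G",              # guest RAM
        "-nographic",            # serial console on stdio
        "-kernel", kernel,
        "-drive", f"file={disk},format=raw,if=virtio",
        "-append", "root=/dev/vda ro console=ttyS0",
    ]

cmd = qemu_riscv64_cmd("Image", "rootfs.img")
if shutil.which(cmd[0]) is None:
    # QEMU isn't installed here; show the command for reference only.
    print(" ".join(cmd))
```

Developing against an emulated four-hart machine like this lets guest code crash and be restarted freely, which is exactly the low-risk loop the article describes.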

 

Using SiFive's HiFive 64-bit quad-core RISC-V processors such as the U54 or U74 and 8GB of DRAM, a team at AB Open took advantage of the mezzanine connector and the Microsemi expansion board to create a high-performance, modern desktop PC. The team combined SATA, Gigabit Ethernet, USB 2 and 3, PCI Express, and more on the Microsemi PolarFire FPGA.

 

This is not a unique scenario. Other leading distributions such as Debian, OpenEmbedded, Buildroot, openSUSE, and FreeBSD are also pairing with RISC-V hardware in both 32- and 64-bit versions, either as virtual machines on QEMU or physically running on off-the-shelf or homespun FPGA boards. Expect 128-bit and even 256-bit versions to arise as deeply data-intensive designs, such as ultra-high-resolution virtual reality and holographic projected imaging, become more widespread.

 

Soft Hardware

 

Right now, discrete RISC-V CPU chips such as SiFive's FE310 (found on the SparkFun DEV-15799 board) can effectively run code. The majority of pioneering designs, however, will take advantage of the instruction-set extensions and hardware flexibility of a field-programmable gate array (FPGA)-based core or group of cores. Discrete CPU chips have long been the mainstay of embedded designs, but high-speed, very dense FPGAs are increasingly supplanting them as the home of the computer hardware. This is especially true because multiple processor cores can live side by side and share the same peripherals and memory resources without taking up excess board space, connectors, and PCB traces.

 

In addition to the Microchip PolarFire SoC FPGA, several FPGA makers are featuring RISC-V-ready platforms. The iCE40 FPGA from Lattice has been demonstrated with the iCEcube2 tools running the Zephyr RTOS. Just as with the choice of a Linux distribution, RISC-V users need to decide on a core. Some prefer vendors such as Lattice, Microchip, and Microsemi that can offer core distributions; third parties will, too. For example, the Antmicro multicore VexRiscv core is a RISC-V implementation built with the Python-based soft-SoC generator called LiteX.

 

This 32-bit implementation supports a multitude of peripherals, including DRAM control, USB, Ethernet, PCI Express, and other vital system functions that FPGA designers can pare down or replicate as needed. Another core option is the RV64GC from Open Virtual Platforms. As you can imagine, integration service houses and design houses will pioneer the design flows and toolchain configurations to provide almost turnkey development and debugging.

 

Questions and Directions

 

A company selling hardware that contains an embedded open-source operating system is making a commercial distribution. This means it must either provide source code with the sale of the product or make a written offer, valid for at least three years, to give any third party a complete machine-readable copy of the source for no more than the cost of physically performing the distribution.

 

Anyone interested can make improvements and publish them, and those improvements can make it into future versions if they are released in a commercial product. By the same token, anyone can take the hardware and reprogram it to perform added or changed functionality. A case in point is the Linksys Wi-Fi routers for the b/g/n bands. Built on embedded Linux, they allowed ham radio operators to rewrite the firmware to use alternate frequencies outside the standard Wi-Fi channels.

 

This was a good thing for ham radio operators because it let them implement higher-power Wi-Fi links that ordinary consumers can't legally access. It's also positive for society, since amateur radio operators are among the first to chime in and help in emergencies, often relaying information for first responders. Ultimately it benefits the manufacturer, because amateur radio operators buy the product to re-flash with updated code. At the same time, it can be a bad thing for the manufacturer because competitors now know how it does its arbitration, buffer management, and device drivers.

 

Designers who rely on a secret sauce or proprietary algorithms might not like this. If the value they add to their products is code-driven proprietary problem-solving, then open source might not be for them.

What about the hardware and instruction set? This is as yet unknown. RISC-V allows extensions to the instruction set and encourages hardware designers, especially those using an FPGA, to build on the open-source core framework. If proprietary hardware is coupled to an extension of a standard instruction class, does that give away the recipe?
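To make the extension mechanism concrete, the sketch below packs the fields of RISC-V's fixed 32-bit R-type instruction format and shows the custom-0 opcode slot (0x0B) that the base ISA reserves for vendor-defined instructions. The "vendor instruction" at the end is purely hypothetical, included only to illustrate where an extension would live.

```python
def encode_rtype(funct7: int, rs2: int, rs1: int, funct3: int,
                 rd: int, opcode: int) -> int:
    """Pack the fields of a 32-bit RISC-V R-type instruction.

    Bit layout, MSB to LSB: funct7 | rs2 | rs1 | funct3 | rd | opcode
    """
    return ((funct7 & 0x7F) << 25 | (rs2 & 0x1F) << 20 |
            (rs1 & 0x1F) << 15 | (funct3 & 0x7) << 12 |
            (rd & 0x1F) << 7 | (opcode & 0x7F))

# Standard instruction: add x3, x1, x2 (opcode 0b0110011 = OP)
add_x3 = encode_rtype(0b0000000, 2, 1, 0b000, 3, 0b0110011)
assert add_x3 == 0x002081B3  # canonical encoding of add x3, x1, x2

# A hypothetical vendor instruction in the custom-0 opcode space
# (0b0001011 = 0x0B), one of the slots RISC-V reserves for extensions.
vendor_op = encode_rtype(0b0000001, 2, 1, 0b000, 3, 0b0001011)
```

Because the standard opcodes and the custom slots are fixed by the specification, a proprietary extension reveals only *that* an instruction exists at a given encoding, not what the hardware behind it does; that asymmetry is part of the open question the article raises.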

 

What about learned behavior from machine learning, or neural-network patterns from deep learning? Will all the teaching and training time be a giveaway to anyone who wants to take advantage of your designs? Will neural engrams become open source as well? This is still unknown, especially since RISC-V is a relatively young architecture just now gaining steam and popularity. These issues are expected to surface, and how they are resolved will depend on the voices of the member companies that make up the international group.

 

Looking Forward

 

More than ever, a rich set of high-level building blocks exists that system designers can take advantage of right away. These blocks include high-end multicore processors, graphics, networking, wireless communications, energy management, sensor interfaces, motor controls, mass storage, multiple protocols, encryption and decryption, and biometrics.

 

The scalability of both the hardware and the OS means smarter IoT designs can be deployed and updated. Toolchains let designers cut, paste, and thread together fully functional high-level features.

As intelligent machines emerge, better solutions will replace flawed, bug-laden versions of existing technologies. New creations will be scrutinized and improved upon by entire user populations, not just the manufacturers.

 

Don’t discount other processor architectures, companies, and closed source building blocks, though. Device makers are good at pioneering the most cutting-edge technologies and packaging them for design engineers’ consumption. Open-source and closed-source technology have their places, and everyone will have to decide for themselves the best choice for their unique requirements.

About the Author

After completing his studies in electrical engineering, Jon Gabay has worked with defense, commercial, industrial, consumer, energy, and medical companies as a design engineer, firmware coder, system designer, research scientist, and product developer. As an alternative energy researcher and inventor, he has been involved with automation technology since he founded and ran Dedicated Devices Corp. up until 2004. Since then, he has been doing research and development, writing articles, and developing technologies for next-generation engineers and students.
