Renesas Reality AI: Monitoring Equipment at the Edge
Introduction
When equipment such as heating, ventilation, and air conditioning (HVAC) systems or power generators suddenly ceases to function, the resulting downtime can be costly and time consuming to resolve, putting a significant burden on businesses. To mitigate the risk of downtime, predictive maintenance using machine learning (ML) algorithms can monitor equipment at the edge using a wide array of sensor and sensorless data. This information can help warn of potential issues, possibly preventing a much more significant failure from occurring.
This project will show how to create a sensorless ML model, using Renesas Reality AI software and data generated from the Renesas MCK-RA6T2 Motor Control Kit, to monitor the status of a motor. The ML algorithm will use current, voltage, and speed data to detect whether the motor is balanced or unbalanced and then output the proper feedback through LEDs mounted on the development board. This article describes the features of the Renesas Reality AI software and provides instructions that let you work through this process and test different features on your own.
Project Materials and Resources
This project requires only the Motor Control Kit and access to Renesas’s Reality AI, though the other options—such as the suggested parts for mounting the motor and access to the GitHub repository—add to the overall project experience.
Project BOM
- Renesas Electronics RA6T2 Motor Control Kit
- Machine Parts for Motor Mount Assembly from ServoCity
- 1120 Series U-Channel, 1120-0001-0048 – Qty 1
- 1120 Series U-Channel, 1120-0003-0096 – Qty 1
- Steel Channel-Connector Plate, 2803-0039-0022 – Qty 1
- 1309 Series Sonic Hub, 1309-0016-0005 – Qty 1
- M4-0.7 × 6mm screws – Qty 10
- M3-0.5 × 8mm screws – Qty 2
- DC 12~48V power supply, connected through either a terminal block or a barrel plug (center positive)
Resource Links
- Reality AI Software Tool (login required)
- e² studio IDE and Coding Tool
- Renesas Flash Programmer
- Mouser-Electronics/Renesas_RealityAI_Motor GitHub repository
- MCK-RA6T2 Quick Start Guide (renesas.com) PDF download
- MCK-RA6T2 User's Manual (renesas.com) PDF download
Project Technology Overview
Reality AI Tools
Renesas Reality AI is a software tool that enables developers to quickly and easily build intelligent, embedded artificial intelligence (AI) applications for edge devices. The suite of tools includes an intuitive drag-and-drop user interface, pre-trained ML models, and automatic code generation, allowing developers to build and deploy applications faster and with less effort.
Renesas Reality AI supports a wide range of use cases, including predictive maintenance, anomaly detection, and voice. Using its robust ML capabilities, developers can easily train and optimize models using their own data, and quickly deploy those models to their target devices. The tool also includes comprehensive support for a wide range of hardware platforms, making it easy to integrate with existing hardware and software systems.
Renesas Reality AI supplies a powerful and efficient solution for developers looking to build intelligent, embedded AI applications for edge devices. For more information and access to the software tool, contact Renesas at customersuccess@reality.ai or through Renesas Technical Support at +1 (347) 363 2200.
Renesas Electronics RA6T2 Motor Control Kit
The Renesas Electronics RA6T2 Motor Control Kit (Figure 1) is designed to evaluate the RA6T2 Motor Control microcontroller (MCU), a high-performance device with timer and safety functions. The RA6T2 MCU features an enhanced CPU and a hardware accelerator that can implement high-end motor control algorithms for home appliances and industrial automation.
Figure 1: Renesas Electronics RA6T2 Motor Control Kit (Source: Mouser Electronics)
The RA6T2 Motor Control Kit incorporates everything needed to evaluate the RA6T2 Motor Control MCU, including an inverter board, brushless DC (BLDC) motor, and cables.
e² studio IDE and Coding Tool
Renesas’s e² studio (Figure 2) is an Eclipse-based integrated development environment (IDE) that provides a code editor with a wide range of functions for designing with Renesas MCUs. In this project, you will use e² studio to program the RA6T2 Motor Control Kit. Downloads are available for both Windows and Linux and for both online and offline installation. Also available are support documentation, technical support, FAQs, and how-to videos to support new and seasoned users alike.
Figure 2: Renesas e² studio IDE and coding tool landing page (Source: Mouser Electronics)
Developing the Project
Hardware Assembly
This section will focus primarily on the assembly of the RA6T2 Motor Control Kit and not on the optional pieces listed for use with the motor assembly. For reference, Figures 3 and 4 show the completed motor assembly.
Figure 3: Side view of the motor mount assembly (Source: Mouser Electronics)
Figure 4: Front view of the motor mount assembly (Source: Mouser Electronics)
The RA6T2 Motor Control Kit comes with an inverter board, communication board, CPU board, BLDC motor, and cables and hardware. An exact description of each can be found in the MCK-RA6T2 User's Manual listed in the Resource Links section. Before beginning the project, install the standoffs and screws into the holes in the corners of each PCB (Figure 5).
- Connect the headers on the CPU board labeled INV1 to the CN4 and CN5 connectors on the inverter board. (The fit may be tight, so be careful to avoid bending pins.)
- Use the supplied four-pin cable to connect CN5 on the communication board to CN10 on the CPU board.
- Plug the BLDC motor into CN2 on the inverter board.
- Use the terminal block at CN1 on the inverter board to supply DC 12~48V power.
- Use the USB-C port (CN3) on the communication board to connect the development kit to the PC.
Figure 5: Completed board assembly (Source: Mouser Electronics)
As recommended in the Quick Start Guide, set the jumper pins to their default settings before applying power. A benchtop power supply was used in this project.
e² studio Installation
To collect new data from the Motor Control Kit, you must first program the device. For this, you will need to download and install Renesas's e² studio. (These instructions are based on installation on a PC running the Windows operating system.)
- Navigate to the e² studio IDE and Coding Tool (see the Resource Links section).
- Select and download the version associated with the operating system you are using. (Login required.)
- Locate the downloaded file and extract and run the installer, following the on-screen instructions.
Note: During the installation, if any prerequisite software is needed, you will be prompted to install those items.
- Select the proper device family or families you wish to install.
- In the additional software selection screen, select QE for Motor.
- Restart the computer if necessary to complete installation.
Data Acquisition
Data is required to generate any ML model. For this project, pre-recorded data files are provided in the Mouser GitHub repository listed in the Resource Links section, providing a quick and easy way to begin using the Reality AI Tool. As an alternative, you can output the motor feedback data to a terminal interface such as HyperTerminal and then convert it to CSV files for the balanced and unbalanced motor states. This data can also serve as new test data to further confirm the model generated from the supplied data files. In addition to the pre-recorded data files, a demo project file is available to help you collect your own data and to review and test the code.
With the code currently provided on GitHub, the device outputs its data as rows of readings without a classifier to indicate what the readings represent (Table 1).
Table 1: Output data from test device
Since each of the provided files is grouped as either balanced or unbalanced, one way to segregate the data and assign classifiers is to use a metadata file. The metadata file lists the individual file names and the classifier assigned to each (Table 2).
Table 2: Metadata file with file names and classifiers
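If you capture your own data, the conversion and labeling steps can be scripted. The following C sketch is one minimal, hypothetical way to do it: it copies a raw terminal capture (assumed to already contain only comma-separated rows of readings) into a CSV data file and appends the file name and its classifier to a metadata file in the file-name-plus-classifier layout shown in Table 2. The file names, the two-column metadata layout, and the balanced/unbalanced labels are assumptions for illustration; match them to your own captures and to the files in the GitHub repository.

```c
/*
 * Minimal sketch (not part of the provided demo code): copy a raw terminal
 * capture into a CSV data file and record its classifier in metadata.csv.
 * Assumes the capture already contains only comma-separated rows of readings.
 */
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <capture.log> <output.csv> <balanced|unbalanced>\n", argv[0]);
        return 1;
    }

    FILE *in  = fopen(argv[1], "r");
    FILE *out = fopen(argv[2], "w");
    if ((in == NULL) || (out == NULL)) {
        fprintf(stderr, "could not open input or output file\n");
        return 1;
    }

    /* Copy each captured row of readings into the CSV data file unchanged. */
    char line[256];
    while (fgets(line, sizeof line, in) != NULL) {
        fputs(line, out);
    }
    fclose(in);
    fclose(out);

    /* Append "<file name>,<classifier>" to the metadata file (see Table 2). */
    FILE *meta = fopen("metadata.csv", "a");
    if (meta == NULL) {
        fprintf(stderr, "could not open metadata.csv\n");
        return 1;
    }
    fprintf(meta, "%s,%s\n", argv[2], argv[3]);
    fclose(meta);

    return 0;
}
```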
(Note: e² studio will require Flexible Software Package (FSP) 3.5.0 to successfully build the provided demo software: https://github.com/renesas/fsp/releases.)
Exploring Reality AI
To begin training your new ML model, follow these basic steps within Reality AI Tools.
Creating the Project
The first step is to create the project.
- Log into Reality AI Tools.
- Click Projects on the left navigation menu (Figure 6).
- Click Add Project.
- Enter a suitable project name (Description is optional).
- Click the Add Project button in the lower right corner.
- Select the newly created project, which displays information about the project.
Figure 6: Reality AI Tools project creation view (Source: Mouser Electronics)
Data Source
Once you have created the project, the next step is to upload the collected data. The data for this project must be in CSV format. The data provided in Mouser's GitHub repository has already been formatted accordingly (Figure 7).
- Click Data on the left navigation menu.
- Select Source from the dropdown menu.
- Locate the saved location for the downloaded or generated CSV data files for this project.
- Ensure that the External Data tab is selected at the top of the new Data Source view.
- Drag and drop the group of files into the area that reads Drop files here to upload.
Depending on the number of files used, you may encounter a long upload time.
Figure 7: Reality AI Tools data upload view (Source: Mouser Electronics)
Once the data files have been uploaded, create sample lists using the Curate feature (Figure 8).
- Click Data on the left navigation menu.
- Click Curate from the dropdown menu.
- Select all the source files you intend to use for the project.
- Click the Action button in the upper right corner.
- From the dropdown menu, select Segment List from Selected.
- When the new window opens, set the Window Length to 512, and then click Submit in the lower right corner.
You should now have a new list available in the Data Sample Lists section, which can be selected to display the distribution of classes and the count for each class. This can be useful when creating multiple sample lists for model generation and testing. If you are using data that already has classifiers assigned, you will be able to select those in the next steps; otherwise, you will need to use the metadata file provided with the test data and the Edit Metadata Type option to assign classifiers to each of the uploaded files.
Figure 8: Data Curation section in Reality AI Tools (Source: Mouser Electronics)
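One way to think about the Segment List step is as fixed-length windowing: each uploaded file is divided into windows of 512 consecutive rows, and each window becomes one sample that inherits its file's classifier. The short C sketch below only illustrates that general idea; it is not Reality AI's implementation, and details such as whether windows overlap are handled by the tool itself. The example row count is an arbitrary assumption.

```c
/*
 * Conceptual illustration of fixed-length windowing, the idea behind the
 * "Segment List from Selected" step. This is not Reality AI's code; it simply
 * shows how a file of rows breaks into 512-row samples.
 */
#include <stdio.h>

#define WINDOW_LENGTH 512

int main(void)
{
    size_t rows_in_file = 20000; /* arbitrary example row count */
    size_t samples = rows_in_file / WINDOW_LENGTH;

    for (size_t i = 0; i < samples; i++) {
        size_t first = i * WINDOW_LENGTH;
        size_t last  = first + WINDOW_LENGTH - 1;
        printf("sample %zu: rows %zu-%zu (class inherited from the source file)\n",
               i, first, last);
    }
    printf("%zu complete windows of %d rows each\n", samples, WINDOW_LENGTH);
    return 0;
}
```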
AI Explore
With the new sample lists created, we can move on to the next step, AI Explore™ (Figures 9 and 10).
- Click AI Explore on the left navigation menu.
- Select Classes from the dropdown menu.
- From the Data Sample Lists section, select the name of the list created in the previous step (Figure 8).
- In the Exploration Results section, click Start Exploring to begin generating potential base tools.
Once the process has completed, you will find different options listed in the exploration results section for the selected list. This section displays information about each potential base tool, along with an explanation, complexity, and accuracy.
- Once you select a specific base tool, click the icon in the Create Base Tool column that is associated with that exploration result.
- In the section that opens, you can choose to keep the suggested name or rename the list, and then click Add.
Figure 9: AI Explore Classes section in Reality AI Tools (Source: Mouser Electronics)
Figure 10: AI Explore exploration results section in Reality AI Tools (Source: Mouser Electronics)
Training
In an earlier step, we generated a result based on a data sample list to create the base tool. This was done using a balanced subset of data from the original sample list created in the Data Source step. From here we will move to the training tool using the full list of data (Figure 11).
- Click Build on the left navigation menu.
- Select Train from the dropdown menu.
- When the new section opens, select the base tool created in the earlier step. (You will see this updated in the Train Tool section at the top of the screen.)
- Select the data from the Data Sample Lists section at the bottom of the screen. (You will see this updated in the Train Tool section at the top of the screen.)
- Click the Train button in the Train Tool section to begin training with the full data list.
This process may take a long time, depending on the size of the data. If you do not notice progress, refresh the screen periodically.
Figure 11: Training tool in Reality AI Tools (Source: Mouser Electronics)
Additional Features
The steps in this section cover the basics necessary to prepare an ML model for deployment; however, they are only the beginning of the tools available. Though we cannot provide instructions for every feature within the Reality AI Tool, the following features can provide greater insight and control.
Test & Optimize > Validate
The Validate tool can give more insight into the performance of your model through measures such as training separation and k-fold validation. It is suggested as the first performance marker for selecting which trained tool to use.
Test & Optimize > Try New Data
One of the best ways to verify the newly generated model is to evaluate it against new data. This part of the Reality AI Tool allows you to take new data, separate from the data used to generate the model, and gauge the model's accuracy so you can further refine it and increase the likelihood that it performs as expected.
Optimize BOM > Sensor Selection
When deploying any type of code, size and memory usage are always a concern. The Sensor Selection tool of Reality AI can help decrease the complexity of a base tool without sacrificing accuracy. Once the sensor selection exploration is complete, you can choose the proper results and export them to the AI Explore tool using the icon in the Create AI Explore column to generate a possible new base tool.
Putting It Together
Deploying the Model to the Edge
Once you have generated your model, you will need to deploy it to your device. The code for this can be generated in the Deploy section in the left navigation menu. This tool can be used to generate a new package in either C or C++ for any number of Renesas processor models or development boards. Here you can set the name of the array as well as the data type (Figure 12).
Figure 12: Reality AI Tool deployment (Source: Mouser Electronics)
Once the build is complete, you will receive a link to download the package in the form of a ZIP file. The folder will include source, header, and library files that you can incorporate into your software using e² studio (Figure 13).
Figure 13: Deployment of ML model (Source: Mouser Electronics)
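As a rough sketch of what the firmware-side integration can look like, the code below passes a window of recent readings to the deployed classifier and drives an LED based on the result. The names rai_classify, FEATURE_WINDOW_LEN, CLASS_UNBALANCED, and LED_PIN are hypothetical placeholders rather than the identifiers Reality AI actually generates; substitute the names from the header files in your downloaded package. R_IOPORT_PinWrite() is the standard FSP call for setting a GPIO pin.

```c
/*
 * Hypothetical sketch of calling a deployed Reality AI classifier from RA6T2
 * firmware. rai_classify(), FEATURE_WINDOW_LEN, CLASS_UNBALANCED, and LED_PIN
 * are placeholder names -- use the identifiers from the header files included
 * in the package downloaded from the Deploy section.
 */
#include "hal_data.h"   /* FSP-generated project header in e2 studio */

#define FEATURE_WINDOW_LEN 512                    /* placeholder window size  */
#define CLASS_UNBALANCED   1                      /* placeholder class index  */
#define LED_PIN            BSP_IO_PORT_04_PIN_00  /* placeholder LED pin      */

/* Placeholder declaration standing in for the generated classifier entry point. */
extern int rai_classify(const float *window, unsigned int length);

static float feature_window[FEATURE_WINDOW_LEN];

void motor_status_update(void)
{
    /* feature_window is assumed to hold the most recent current, voltage, and
     * speed readings in the layout the deployed model expects. */
    int predicted_class = rai_classify(feature_window, FEATURE_WINDOW_LEN);

    /* Light the LED when the model reports an unbalanced motor. */
    bsp_io_level_t level = (predicted_class == CLASS_UNBALANCED) ? BSP_IO_LEVEL_HIGH
                                                                 : BSP_IO_LEVEL_LOW;
    R_IOPORT_PinWrite(&g_ioport_ctrl, LED_PIN, level);
}
```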
Validating the Deployed Model
Once you have integrated the files into the software and successfully built the program, you can deploy everything to the development board for validation. The demo software included with this project will display the status of the motor using onboard LEDs to show normal or unbalanced operation. To confirm operation in this project, we used one of the M4-0.7 × 6mm screws to create a failed (unbalanced) state, which is indicated by the illumination of all three LEDs (Figure 14).
Figure 14: Failed status LEDs (Source: Mouser Electronics)
Conclusion
Artificial intelligence and machine learning offer solutions that can help businesses save money and manage resources more effectively and efficiently. Reality AI gives developers intuitive and straightforward software tools for developing ML models for edge-based devices.