Smart Edge ML with NXP FRDM-MCXN947
Harnessing eIQ® Software for Real-Time AI
Image Source: putilov_denis/Stock.adobe.com
By Joseph Downing, Mouser Electronics
Published July 10, 2024
In today’s swiftly evolving tech environment, edge-based machine learning (edge ML) is emerging as a transformative force, reshaping how we process and analyze data in real time. This innovative technique involves deploying ML models directly on edge devices, ushering in a new wave of intelligent and responsive applications.
Traditionally, ML models rely on centralized cloud servers for extensive data processing. In contrast, edge ML shifts the computational burden to local edge devices, enabling instant decision-making without constant dependence on remote servers. This shift addresses challenges associated with latency, privacy, and bandwidth inherent in traditional approaches.
A key advantage of edge ML is its ability to deliver real-time insights. By performing inference directly on edge devices, applications can respond swiftly to dynamic conditions, making it ideal for time-critical scenarios. Whether it is autonomous vehicles making rapid decisions, smart surveillance cameras performing zonal monitoring, or healthcare devices providing timely diagnostics, edge ML's reduced latency represents a significant advancement.
This article will guide you through the practical application of edge ML using the NXP Semiconductors FRDM-MCXN947 development board and introduce the NXP eIQ® Portal for generating ML models. The on-board NXP MCX N947 microcontroller features an eIQ Neutron Neural Processing Unit (NPU) designed to accelerate inference, which can also extend the battery life of edge ML products. From smart cities and the industrial Internet of Things (IIoT) to healthcare and consumer electronics, the potential applications of edge ML are diverse and impactful.
The article "Jump into Machine Learning with NXP" provides a more detailed walkthrough of hardware and software setup.
Project Materials and Resources
Project Bill of Materials (BOM)
Project Code/Software
- MCUXpresso IDE for NXP MCUs
- MCUXpresso SDK Builder (Login Required)
- eIQ Toolkit
Additional Resources
- MCUXpresso IDE terminal window, Tera Term, or other terminal emulator software
- Python programming language
- OpenCV
Additional Hardware
- Windows PC
- USB Type-C to USB Type-A or Type-C cable (depending on PC USB port availability)
Accounts
- NXP account (Free to create)
Project Technology Overview
The FRDM-MCXN947 board (Figure 1) features the MCX N947 microcontroller, which incorporates dual high-performance Arm® Cortex®-M33 cores running at speeds of up to 150 MHz. The microcontroller comes equipped with 2 MB of flash, optional full ECC RAM, a DSP coprocessor, and an integrated eIQ Neutron NPU. The NPU significantly enhances ML throughput, delivering up to thirty times faster performance than a standalone CPU core. This allows the device to minimize active time, thereby reducing overall power consumption.

Figure 1: The NXP FRDM-MCXN947 development board features the MCX N947 microcontroller. (Source: Mouser Electronics)
The multicore architecture enhances system performance and efficiency by intelligently distributing workloads across the two cores and the device's analog and digital peripherals. The board, supported by the MCUXpresso Developer Experience, simplifies and accelerates embedded system development.
Designed for industrial applications, the MCX N94x family features a broader set of analog and motor control peripherals.
Software Overview
This section describes the software necessary to run the examples in this project. For installation instructions, refer to the "Jump into Machine Learning with NXP" article.
MCUXpresso IDE
The MCUXpresso integrated development environment (IDE) provides developers with a user-friendly Eclipse-based development environment tailored for NXP MCUs using Arm Cortex-M cores, including both general-purpose crossover and wireless-enabled MCUs. This IDE delivers advanced features for editing, compiling, and debugging, incorporating MCU-specific debugging views, code trace and profiling, multicore debugging, and integrated configuration tools (Figure 2).

Figure 2: The NXP MCUXpresso IDE. (Source: Mouser Electronics)
SDK Builder
The MCUXpresso SDK Builder (Figure 3) accelerates software development by providing open-source drivers, middleware, and reference example applications. The SDK Builder allows you to tailor and download a software development kit (SDK) that aligns with your chosen processor or evaluation board to streamline your development process. We will build and install the SDK in a later section.

Figure 3: NXP MCUXpresso SDK Builder website. (Source: Mouser Electronics)
eIQ Portal
The eIQ Toolkit (Figure 4) facilitates ML development through an intuitive graphical user interface (i.e., the eIQ Portal) and workflow tools, complemented by command-line host tool options within the eIQ ML software development environment. Developed in an exclusive partnership with Au-Zone Technologies, NXP's eIQ Toolkit empowers developers with graph-level profiling capabilities, offering insights during runtime to optimize neural network architectures on EdgeVerse™ processors. It also provides the tools required to convert models to take advantage of the eIQ Neutron NPU.

Figure 4: NXP eIQ Portal. (Source: Mouser Electronics)
The eIQ Toolkit offers an easy way to import datasets, complete with a comprehensive user guide to help you navigate the various options.
Developing the Project
This section describes how to start building the project. Begin by opening the eIQ Portal.
eIQ Model Import and Training
After opening the eIQ Portal, click Create Project and select Import Dataset. Several options appear (e.g., VOC-Dataset for detection tasks, Structured Folders for classification tasks, and TensorFlow datasets), allowing you to load datasets from a variety of sources, including the TensorFlow website (Figure 5).

Figure 5: eIQ Portal Dataset import screen. (Source: Mouser Electronics)
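If you use the Structured Folders option, the importer expects each class's images in their own subdirectory named after the label. A minimal Python sketch of that layout (the dataset root and class names here are illustrative only):

```python
import os
import tempfile

# Build a minimal classification dataset layout: one subfolder per class label.
# The root folder and class names ("bird", "ship") are illustrative only.
root = os.path.join(tempfile.gettempdir(), "my_dataset")
for label in ("bird", "ship"):
    os.makedirs(os.path.join(root, label), exist_ok=True)
    # Training images for each class go inside its labeled folder, e.g.:
    #   my_dataset/bird/bird_001.jpg
    #   my_dataset/ship/ship_042.jpg
```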
Once you have imported the dataset, you can capture and include additional images or use the augmentation tool to modify existing ones. When everything looks correct, click the Select Model box at the bottom of the portal window.
The Model Selection section (Figure 6) includes three options—Classification, Segmentation, and Detection—each with a description. Consider the pros and cons based on factors like training time and dataset size. For this article, we will use the Classification Model, which trains faster and is great for testing datasets before moving on to more complex models like Detection, which takes longer to train and requires a larger dataset.

Figure 6: eIQ Portal Model Selection window. (Source: Mouser Electronics)
Next, select the model performance (Figure 7). The options are Performance, Balanced, and Accuracy, each weighted based on the type of performance and design requirements. For this example, we will select Balanced.

Figure 7: eIQ Portal Model Performance Selection screen. (Source: Mouser Electronics)
Then, determine where the model's inference will occur (Figure 8). The MCX N microcontroller includes the eIQ Neutron NPU, making it ideal for ML applications.

Figure 8: eIQ Portal device selection. (Source: Mouser Electronics)
Next, train the model. Although several options are available for modifying the training process (Figure 9), we will use the default settings for this project. Once the training is complete (which can take several minutes), you can adjust settings and restart the training, continue training, or move on to validating the training.

Figure 9: eIQ Portal model training screen. (Source: Mouser Electronics)
The validation process (Figure 10) offers several adjustable parameters. Because we selected the NPU as the inference target, quantization will be necessary. Assess the different options in this and previous steps to determine which parameters best meet your specific requirements.

Figure 10: eIQ Portal model validation screen. (Source: Mouser Electronics)
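Quantization maps the model's 32-bit floating-point weights and activations onto 8-bit integers, which is what the Neutron NPU executes efficiently. The affine scheme below is a simplified illustration of the idea, not eIQ's exact implementation:

```python
import numpy as np

def quantize_uint8(x):
    """Affine-quantize a float array to uint8: q = round(x / scale) + zero_point."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0
    zero_point = round(-lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, scale, zp = quantize_uint8(weights)
restored = dequantize(q, scale, zp)
# Round-trip error is bounded by half the quantization step size.
assert np.max(np.abs(restored - weights)) <= scale / 2 + 1e-6
```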
Deploying the Model
Once the model has been trained and validated, we need to export and convert it into a format compatible with the NPU. This section describes the steps of this conversion.
Exporting the Model
After validation, the eIQ Portal provides options to deploy or export the model; in this project, we will export. In the left menu, ensure the Export Quantized Model toggle is set to on, then click Export Model (Figure 11). Select the export location and wait for the export to complete.

Figure 11: Exporting the trained model. (Source: Mouser Electronics)
Converting the Model
Once you have successfully exported the model, hover over Workspaces on the top menu and select Home. On the eIQ Portal home page, click Model Tool. In the next window, click Open Model. Navigate to the location where you saved the exported model, select that file, and click Open to show the model tool display (Figure 12).

Figure 12: Model tool display of the exported model. (Source: Mouser Electronics)
In the top left corner of the window, click the three horizontal lines to open the menu, and then click Convert. In the new window, select TensorFlow Lite for Neutron (.tflite), which will open the Conversion Options window (Figure 13). Set the Neutron target to mcxn94x, and then click Convert. Select the destination for the saved converted model.

Figure 13: eIQ model tool conversion options (Source: Mouser Electronics)
Integrating the Model into the MCXN94x NPU
Now that the software model is trained, validated, exported, and converted, we will integrate it into code to run on the MCX N94x using the MCUXpresso IDE. If you have not installed the IDE, refer to the "Jump into Machine Learning with NXP" article for installation instructions. Additionally, refer to the Building and Installing the SDK instructions in that article to ensure that the FRDM-MCXN947 development kit is installed.
Once you have installed the MCUXpresso IDE and FRDM-MCXN947 kit, import an example project. In the MCUXpresso Quickstart Panel, click Import SDK example(s)… to start the SDK Import Wizard.
- On the Board and/or Device selection page, select the frdmmcxn947 board and click Next.
- In the Import projects window (Figure 14), navigate to eiq_examples and select tflm_cifar10.
- Click Next.

Figure 14: MCUXpresso SDK Import Wizard. (Source: Mouser Electronics)
The new sample program will appear in Project Explorer in the top left corner of the IDE.
- In the Project Explorer, navigate to the Source folder, expand the available sub-folders, and then right-click the Model folder.
- Click New, and then select File from Template.
- In the New File window (Figure 15), enter a name for the new file in the File name: field, then select Configure.

Figure 15: Creating a new file from a template. (Source: Mouser Electronics)
- In the Preferences (Filtered) window (Figure 16), select Assembly Source File and then click Apply and Close.

Figure 16: Template preferences. (Source: Mouser Electronics)
Once you have created the file, add the following lines of code in the Project Explorer (Figure 17), substituting your file name for custom_model_converted_V1.tflite.
.section .rodata
.align 16
.global custom_model_data
.global custom_model_data_end
custom_model_data:
.incbin "../source/model/custom_model_converted_V1.tflite"
custom_model_data_end:
Next, navigate to the location of the converted tflite file and copy the model. In the MCUXpresso IDE, paste the model file into the Model folder.

Figure 17: Newly created assembly source file with amended code. (Source: Mouser Electronics)
Next, update model.cpp so that the highlighted sections in Figure 18 match custom_model_data[]. Also, update model_cifarnet_ops_npu.cpp to ensure that all operators used by the custom model are added to s_microOpResolver (Figure 19).

Figure 18: Editing the model.cpp file. (Source: Mouser Electronics)

Figure 19: Editing the model_cifarnet_ops_npu.cpp file. (Source: Mouser Electronics)
Testing the Model
Next, prepare test data to test the model. Using the following Python script, you can easily convert images and export them into a C array to use within the sample code. In this example, we will use the Python script to replace the example image of a ship with an image of a bird.
import cv2

# Load the test image, resize it to the model's input size, and convert
# from OpenCV's BGR channel order to RGB.
img = cv2.imread('bird.jpg')
img = cv2.resize(img, (128, 128))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Emit the pixel data as a C array in a header file.
with open('bird.h', 'w') as fout:
    print('#define STATIC_IMAGE_NAME "bird"', file=fout)
    print('static const uint8_t bird [] = {', file=fout)
    img.tofile(fout, ',', '0x%02x')
    print('};\n', file=fout)
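Before wiring the header into the project, it is worth confirming that the generated array holds the expected number of elements: 128 × 128 × 3 = 49,152 bytes for an RGB image at this resolution. A small, illustrative checker (the helper name is our own):

```python
import re

def count_byte_literals(header_text):
    """Count the 0xNN byte literals in a generated C header."""
    return len(re.findall(r'0x[0-9a-fA-F]{2}\b', header_text))

# For the real header, compare against the expected element count:
#   with open('bird.h') as f:
#       assert count_byte_literals(f.read()) == 128 * 128 * 3
# Tiny synthetic example of the same check:
sample = 'static const uint8_t bird [] = {0x00,0x7f,0xff};'
assert count_byte_literals(sample) == 3
```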
Copy the converted “.h” file to the image folder within the project, which is located in the same location as the model folder. Next, modify the image_load.c file to point to the new image array (Figure 20).

Figure 20: Modified image_load.c file. (Source: Mouser Electronics)
Open a terminal emulator (such as PuTTY or Tera Term) and configure the serial settings as follows:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
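As an alternative to a GUI terminal emulator, the serial output can also be captured from Python with pyserial (8 data bits, no parity, one stop bit is pyserial's default framing). The port name and the output pattern below are assumptions; adjust both to match your setup:

```python
import re
# import serial  # pyserial; uncomment when a board is attached

def parse_detection(line):
    """Extract (label, confidence) from a line such as 'Detected: bird (87%)'.
    The exact output format of the tflm_cifar10 example may differ; this
    pattern is an assumption -- adjust it to match your terminal output."""
    m = re.search(r'Detected:\s*(\w+)\s*\((\d+)%\)', line)
    return (m.group(1), int(m.group(2))) if m else None

# With a board attached (port name is an assumption -- check Device Manager):
# with serial.Serial('COM3', 115200, timeout=5) as port:
#     for raw in port:
#         result = parse_detection(raw.decode('utf-8', errors='replace'))
#         if result:
#             print(result)

assert parse_detection('Detected: bird (87%)') == ('bird', 87)
```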
Next, select Debug in the MCUXpresso IDE to build and run the updated project code. If the build and code run successfully, the terminal window will display the output shown in Figure 21.

Figure 21: Terminal window output. (Source: Mouser Electronics)
This sequence can be used to evaluate different images to verify the model’s accuracy.
Conclusion
The revolutionary potential of edge ML lies in its ability to provide real-time insights. Performing direct inference on edge devices, as shown in this project, allows applications to react swiftly to changing conditions, which is especially beneficial in time-critical situations.