AI That Knows How People Feel?
New Tech Tuesdays
Join Mouser's Technical Content team for a weekly look at all things interesting, new, and noteworthy for design engineers.
Published August 12, 2025
Right now, smart speakers simply follow commands, but we are not far from a reality where devices also understand how users feel when those commands are given. Increasingly, artificial intelligence (AI) systems are being built to interpret emotional responses, and engineers are developing real-world systems that detect emotional states using data from cameras, microphones, and biometric sensors. In this week’s New Tech Tuesdays, we discuss how this technology is being developed and explore its uses in mental health monitoring, emotionally adaptive interfaces, and even driver safety.
How Emotion Recognition Works
Emotion recognition systems combine multiple sensing modalities that work together to build a picture of a user’s emotional state. Facial analysis plays a major role and typically uses near-infrared (NIR) or RGB camera modules along with computer vision algorithms trained to pick up micro-expressions, subtle eye movements, and facial muscle patterns. These systems often incorporate convolutional neural networks (CNNs) or transformer-based models to process video frames and identify emotional cues in real time.[1]
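As a rough illustration of the frame-level approach, the Python sketch below defines a small CNN that maps a single face crop to emotion-class probabilities. The 48x48 grayscale input size, seven-class output, and layer sizes are illustrative assumptions, not details from the cited work.

```python
# Minimal sketch of a frame-level facial-expression classifier.
# Input size, class count, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify one 48x48 face crop (random data stands in for a real frame).
model = EmotionCNN()
frame = torch.rand(1, 1, 48, 48)
probs = torch.softmax(model(frame), dim=1)
print(probs)
```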
Voice sentiment detection is another key input. Microelectromechanical system (MEMS) microphones and digital signal processors (DSPs) capture pitch, cadence, and inflection, and speech processing algorithms then analyze these voice characteristics to detect nuances in tone.[2] The system may recognize, for example, that a user’s voice signals stress or enthusiasm. This information is valuable for adaptive systems in call centers, smart assistants, and even medical settings such as therapy.
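To make the voice pipeline concrete, the following Python sketch extracts two of the cues mentioned above: pitch via autocorrelation and a short-time energy contour as a crude proxy for cadence and intensity. The 16kHz sample rate, frame length, and pitch search range are illustrative assumptions.

```python
# Minimal sketch of voice-feature extraction for sentiment cues.
# Sample rate, frame length, and pitch range are illustrative assumptions.
import numpy as np

def estimate_pitch(frame: np.ndarray, fs: int = 16000) -> float:
    """Rough fundamental-frequency estimate via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = fs // 400, fs // 60          # search roughly 60-400 Hz
    lag = lo + int(np.argmax(corr[lo:hi]))
    return fs / lag

def short_time_energy(signal: np.ndarray, frame_len: int = 400) -> np.ndarray:
    """Frame-wise energy, a crude proxy for speech cadence/intensity."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

# Example with a synthetic 200 Hz tone standing in for a voiced frame.
fs = 16000
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 200 * t)
print(f"estimated pitch: {estimate_pitch(tone, fs):.1f} Hz")
print(f"energy contour:  {short_time_energy(tone)[:3]}")
```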
Biometric monitoring adds physiological context to emotion detection. Wearables with photoplethysmography (PPG) sensors can measure heart rate and heart rate variability, while electrodermal activity (EDA) sensors track changes in skin conductance tied to the sympathetic nervous system (Figure 1).[3] In some cases, temperature sensors are added to detect sudden spikes that may indicate anxiety or another intense emotional response.
Figure 1: Biometric wearables equipped with PPG and EDA sensors provide physiological insights, such as stress levels, that enhance emotion recognition accuracy in real time. (Source: vrx123/stock.adobe.com)
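As a simple illustration of how raw biometric streams like these become stress-related features, the Python sketch below computes RMSSD heart rate variability from inter-beat intervals and counts abrupt rises in skin conductance. The sample values and the 0.05µS rise threshold are illustrative assumptions, not figures from the cited research.

```python
# Minimal sketch of two physiological features often tied to stress:
# RMSSD heart-rate variability from PPG beat intervals and a simple
# count of skin-conductance rises. Data and threshold are illustrative.
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """Root mean square of successive differences of inter-beat intervals (ms)."""
    return float(np.sqrt(np.mean(np.diff(ibi_ms) ** 2)))

def count_scr_rises(eda_us: np.ndarray, rise_threshold: float = 0.05) -> int:
    """Count sample-to-sample rises in skin conductance above a threshold (microsiemens)."""
    return int(np.sum(np.diff(eda_us) > rise_threshold))

ibi = np.array([820, 810, 835, 790, 805], dtype=float)   # ms between heartbeats
eda = np.array([2.0, 2.01, 2.10, 2.12, 2.30, 2.31])      # microsiemens
print(f"RMSSD: {rmssd(ibi):.1f} ms, conductance rises: {count_scr_rises(eda)}")
```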
When engineers combine these cameras and sensors into a single emotion recognition system, AI models can make more context-aware interpretations of a user’s state, especially when the data is processed at the edge on embedded neural network accelerators or AI-enabled microcontrollers. Fusing the inputs locally allows the system to classify emotions and adjust its response accordingly.
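One common way to combine the modalities is late fusion, where each sensing path produces its own class probabilities and a weighted average drives the final decision. The Python sketch below shows that pattern; the class labels, weights, and scores are made-up values for illustration.

```python
# Minimal sketch of late fusion: per-modality emotion probabilities are
# combined with confidence weights before a final decision.
# Labels, weights, and scores below are illustrative assumptions.
import numpy as np

LABELS = ["calm", "stressed", "happy"]

def fuse(face: np.ndarray, voice: np.ndarray, biometric: np.ndarray,
         weights=(0.5, 0.3, 0.2)) -> str:
    """Weighted average of per-modality class probabilities."""
    stacked = np.stack([face, voice, biometric])
    fused = np.average(stacked, axis=0, weights=weights)
    return LABELS[int(np.argmax(fused))]

# Example: the camera leans "happy", while voice and biometrics lean "stressed".
face      = np.array([0.2, 0.3, 0.5])
voice     = np.array([0.1, 0.7, 0.2])
biometric = np.array([0.2, 0.6, 0.2])
print(fuse(face, voice, biometric))   # prints the fused label
```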
Emotion Recognition in Action
Emotion recognition technology is already being built into real-time voice analysis for applications like wearables, smart speakers, and hearing aids, allowing these devices to detect emotions from speech tone and cadence. At the same time, biometric emotion detection is gaining traction in health research and monitoring, with commercially available devices like the Empatica E4 wristband using PPG and EDA sensors to analyze emotional stress without being overly intrusive.[4]
The automotive industry is also experimenting with emotion recognition. Driver monitoring systems (DMSs) are being integrated into newer vehicles, using facial tracking and vision AI to assess alertness and mood, which can help reduce accidents caused by drowsiness or distraction.[5]
In customer service settings, emotion recognition AI is already being used to improve call center efficiency by detecting vocal tension or frustration and adjusting responses in real time, either by adapting scripts or escalating to human agents (Figure 2).[6] These developments signal emotion-aware AI’s movement from the lab into practical commercial settings, where new applications continue to emerge.
Figure 2: Emotion recognition AI in call centers analyzes vocal tone in real time, helping agents adjust responses or escalate calls based on detected frustration or stress. (Source: Ayla/stock.adobe.com; generated with AI)
The Newest Products for Your Newest Designs®
Microchip Technology dsPIC33AK digital signal controllers (DSCs) provide the processing foundation for emotion recognition systems. The high-performance core and dual 72-bit accumulators enable real-time signal processing, even with low-amplitude signals from devices like PPG and EDA sensors, while support for multiple sensor inputs and peripheral pin select (PPS) functionality makes them well suited to advanced sensor interfacing applications. To address security concerns surrounding biometric and other personal data interpreted by emotion recognition AI, the dsPIC33AK DSCs offer a range of scalable security features for embedded applications.
Tuesday’s Takeaway
AI-powered emotion recognition is already influencing how engineers design embedded systems. By combining the right sensors with low-power AI processing, today’s devices are beginning to interpret emotional cues and adapt accordingly, opening possibilities for numerous medical and consumer applications, among others.
Sources
[1] https://www.sciencedirect.com/science/article/abs/pii/S1568494623000157
[2] https://www.eetimes.com/ai-driven-interactions-powered-by-high-snr-mems-microphones/
[3] https://www.sciencedirect.com/science/article/pii/S2772569323000567
[4] https://www.empatica.com/research/e4/
[5] https://www.smarteye.se/solutions/automotive/driver-monitoring-system/
[6] https://www.reuters.com/technology/softbank-corp-aims-help-call-centre-workers-by-softening-angry-customer-calls-2024-05-16/