12 Nov 2025
Emotional AI
What is Emotional AI?
To introduce the topic of Emotional AI, imagine a voice assistant that recognizes your emotional state based on your voice modulation. It then adapts its responses accordingly.
Emotions are an essential part of our interpersonal interactions. So why shouldn't interactions with AI also benefit from an emotional level?
Emotional AI thus sits at the intersection of three areas:
- technology, because AI itself is a technical system,
- psychology, because of the emotional aspect, and
- ethics, because the interaction also has an ethical dimension.
Emotional AI is therefore the art of AI recognizing the emotions of the other person, interpreting them, and then responding to these emotions accordingly. Other terms used synonymously with emotional AI are affective computing and emotion AI.
At this point, AI cannot feel real emotions, but it can simulate empathetic behavior by adapting its responses to the emotion it has detected.
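To make the idea of "adapting the response to the detected emotion" concrete, here is a minimal sketch in Python. The emotion labels, response templates, and the detect_emotion stub are purely illustrative assumptions; a real system would use a trained classifier on voice, text, or facial data.

```python
# Minimal sketch: adapt a response to a detected emotion.
# The emotion labels, templates, and detect_emotion stub are illustrative
# assumptions, not a real emotion-recognition model.

RESPONSE_STYLES = {
    "anger":   "I'm sorry this is frustrating. Let's solve it together step by step.",
    "sadness": "That sounds difficult. Take your time, I'm here to help.",
    "joy":     "Great to hear! Shall we continue right away?",
    "neutral": "Understood. Here is the information you asked for.",
}

def detect_emotion(user_message: str) -> str:
    """Placeholder for a real classifier (voice, text, or facial analysis)."""
    angry_markers = ("not working", "again", "annoyed", "!!")
    if any(marker in user_message.lower() for marker in angry_markers):
        return "anger"
    return "neutral"

def respond(user_message: str) -> str:
    emotion = detect_emotion(user_message)
    return RESPONSE_STYLES.get(emotion, RESPONSE_STYLES["neutral"])

print(respond("The login is not working again!!"))  # empathetic, de-escalating reply
```

The point of the sketch is only the control flow: detect first, then choose a response style that fits the detected emotion.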
Here is a comparison between classic (rational) AI and emotional AI:
| Aspect | Classic AI | Emotional AI / Affective Computing |
| --- | --- | --- |
| Goal | Logic, efficiency, problem solving | Emotion recognition & appropriate responses |
| Focus | Rational data analysis | Human emotions & behavior |
| Response style | Neutral, objective | Empathetic |
| Data sources | Structured (numbers, texts, images) | Emotional cues (facial expressions, tone of voice) |
| Typical applications | Diagnostics, recommendation, planning | Customer service, health, education, assistance |
| Emotional understanding | Not available | Simulated, not real |
It quickly becomes apparent that traditional AI focuses on structured data and hard facts, whereas emotional AI puts the emotion of the other person at the center.
How does emotional AI work?
AI has several technical options for recognizing the emotional state of the person it is interacting with:
- Facial expression analysis, which evaluates the other person's facial expressions.
- Voice analysis, where pitch, pauses, and volume provide information about the emotion being felt.
- Text analysis, which is particularly useful in social networks.
- Biometric analysis, which has become increasingly popular in recent years: heart rate, skin conductivity, or breathing rate are analyzed using wearables such as smartwatches. In the past, this was only possible under laboratory conditions.
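As an illustration of the text-analysis channel, here is a small sketch using the Hugging Face transformers library. The model name is one example of a publicly available emotion classifier and is an assumption on our part; any comparable model would serve the same purpose.

```python
# Sketch: text-based emotion recognition with a pretrained classifier.
# Assumes the `transformers` library is installed; the model name below is an
# example of a publicly available emotion model, not a specific recommendation.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed example model
)

message = "I have been waiting for a reply for two weeks and nothing happens."
result = classifier(message)[0]

# Print the most likely emotion and its confidence score.
print(result["label"], round(result["score"], 2))
```

Facial, voice, and biometric analysis follow the same pattern with different feature extractors in front of the classifier.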
Where could emotional AI be used?
Emotional AI has various potential applications:
- In marketing, to tailor advertising to the viewer's current mood and to measure the emotional response to it.
- In healthcare, for the early detection of depression or stress.
- In grief counseling: there are already chatbots that can be fed with longer WhatsApp chats in order to simulate interaction with the deceased.
- In education, where AI could recognize when students are underchallenged or overwhelmed.
- In customer service, where chatbots could show empathy, for example by apologizing and transferring the conversation to a de-escalation team.
- In public safety, for instance by analyzing the body language and facial expressions of people at airports or demonstrations.
- In a work context, for example by handing over moderation tasks to AI during online team meetings and workshops so that it can intervene when participants become inattentive or when the topic at hand is causing them too much stress.
Please note: The use of emotional AI is subject to technical, ethical, and legal requirements under EU law. The EU AI Act expressly prohibits the inference of emotions by AI in the workplace and in educational institutions (Art. 5(1)(f)) and classifies AI systems for emotion or biometric recognition as "high risk" in many cases (Art. 6(2), Annex III). This entails extensive requirements in terms of risk management, transparency, data quality, and human oversight. Strict rules also apply to public safety applications: remote biometric identification is generally restricted, and identification systems often fall under the high-risk provisions of the AI Act. We are aware of these regulatory and ethical limitations and advise our customers on the responsible use of AI.
The application examples listed above will be analyzed in more detail in the following sections of this article.
What are the advantages of using emotional AI?
It is clear that emotional AI offers many different opportunities and potential benefits. Its integration makes human-computer interaction (HCI) more human and therefore more pleasant for people, which increases the likelihood that users will accept the application. Its use in the field of mental health would make it possible to establish early warning systems for depression or burnout. Furthermore, it would be possible to tailor the personal experiences of users to their specific mood. The use of personal AI-supported companions could reduce the risk of loneliness for people living alone and for seniors, enabling emotional support around the clock. Another advantage of emotional AI is the ability to react immediately to emotional states: a customer-service chatbot, for example, can respond to a customer's frustration in a calming manner.
Bias, cultural influences, and misuse
In addition to all these possible applications, opportunities, and potentials, emotional AI also harbors serious risks and ethical challenges.
Bias
Depending on the quality of the training data, various forms of bias, i.e., unconscious prejudices, can arise. In the field of emotional AI, the "angry Black woman" bias, for example, is well documented. This misinterpretation occurs because the corresponding training data for female people of color (PoC) is disproportionately often labeled with the emotion "anger" or "annoyance." As a result, the AI tends to assume this emotion by default when it recognizes a female PoC. Other biases include, for example, women being more often classified as "happy" or "anxious," whereas AI tends to judge men as "serious" or "angry."
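One common way to make such biases visible is a simple per-group error analysis. The following sketch compares how often each group is falsely labeled as "angry"; the records and group names are invented purely for illustration and would in practice come from evaluating a real model on a labeled test set.

```python
# Sketch: per-group audit of an emotion classifier.
# The records below are invented illustrative data; in practice they would come
# from evaluating a real model on a labeled test set.
from collections import defaultdict

records = [
    # (group, true_emotion, predicted_emotion)
    ("group_a", "neutral", "anger"),
    ("group_a", "joy", "joy"),
    ("group_a", "neutral", "anger"),
    ("group_b", "neutral", "neutral"),
    ("group_b", "joy", "joy"),
    ("group_b", "anger", "anger"),
]

stats = defaultdict(lambda: {"total": 0, "false_anger": 0})
for group, true_label, predicted in records:
    stats[group]["total"] += 1
    if predicted == "anger" and true_label != "anger":
        stats[group]["false_anger"] += 1

for group, counts in stats.items():
    rate = counts["false_anger"] / counts["total"]
    print(f"{group}: falsely labeled 'anger' in {rate:.0%} of cases")
```

If one group is systematically mislabeled as "angry" far more often than another, the training data or the model needs to be corrected before deployment.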
Cultural reference
Against this background, it is important to consider that emotions have a cultural reference. In 1980, Robert Plutchik proposed a theory of emotions according to which emotions have a genetic origin and are therefore expressed in the same way by all people. This genetic origin is, however, highly controversial among psychologists, which does not make other parts of his work, such as the wheel of emotions, any less important. For AI, this means that a sound emotion analysis would require not only visual, auditory, or biometric data but also knowledge of the cultural background of the person in question.
Abuse
In addition, emotions are very intimate information about a person, and misuse of this information could have serious consequences. In the applications described above, we have mostly assumed benevolent use when AI responds to the emotional impulses of its human counterpart. However, AI could also be trained to deliberately exploit the emotional state of the interacting person. A chatbot designed to help with questions about products could, for example, encourage people in unstable situations to buy more in order to feel better (consumption-induced well-being). Similarly problematic uses would be possible in advertising or politics.
Dependencies
If we imagine that people will form emotional attachments to non-human systems in the future, it is also possible that these relationships could lead to dependencies, which in turn could lead to further social isolation of the person. In Japan, for example, relationships with virtual characters are not uncommon (source).
Application in psychotherapy
The possible applications listed above already touched on the potential use of AI in psychotherapy, which will be discussed in more detail here. Examples of well-known therapeutic approaches are:
- Behavioral therapy, which focuses on a current problem (e.g., a specific form of anxiety) and learning helpful behaviors;
- Systemic therapy, which focuses not only on the individual but on the person's entire social system;
- Depth psychology-based psychotherapy, which often sees the cause of psychological problems in unconscious conflicts and past experiences.
Looking at these three branches of therapy, we need to differentiate where the use of AI appears to be appropriate. Behavioral therapy is a good fit, as AI can be trained with possible approaches to specific psychological problems. Systemic therapy, on the other hand, would have to be trained with data from the patient's social system in order to deliver specific results. In depth psychology-based therapy, too, AI would have to be trained with the patient's experiences in order to deliver specific results. The latter two branches of therapy are therefore rather difficult to replace with AI.
With regard to coping with grief, there are doubts as to whether the use of the chatbot described above really helps to cope with grief or rather delays the grieving process. This would become even more problematic if a virtual avatar were to be created from images and videos of the person. Interacting with the deceased would then feel like a Zoom call. At first, this might be comforting for the bereaved, but it would most likely not allow them to go through the healthy grieving process according to Kübler-Ross (denial, anger, bargaining, depression, and acceptance).
Crowd control
Emotional AI can also be used in areas where large numbers of people gather, such as demonstrations or airports, where it can be useful to warn police officers of potential sources of danger (crowd control). As described above, emotional AI can lead to biases, which can cause highly problematic situations in the context of crowd control.
Application in the workplace
Emotional AI in the workplace could also be used for more than just reducing stress. It is conceivable that AI could assign work and tasks to employees until it detects the first symptoms of stress, in order to maximize their work performance.
Legal restrictions
We must continue to remind ourselves that the emotions displayed by AI are not real. AI does not feel emotions and only adapts its responses to the emotion it recognizes. The EU AI Act creates necessary barriers: it limits the misuse of emotional AI and classifies many of the applications described here in sensitive contexts as inadmissible or high-risk.
The future of emotional AI
Throughout this article, we have discussed various possible applications. Many large and well-known companies already use emotional AI in their products or, to a certain extent, in their AI models.
We can look forward to the use of emotional AI in robotics in the future. There are already attempts to use "social" robots in retirement homes to allow older people to participate in social interactions. These social interactions in turn have a positive effect on general well-being, even if not all residents are enthusiastic about interacting with a small robot (source).
Furthermore, we can assume that the data basis for emotional AI will shift from a single source, such as video images, text messages, or audio recordings, to multimodal analysis. This means that multiple data sources, such as video images and audio recordings, will be used instead of just one.
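A simple way to illustrate multimodal analysis is late fusion: each modality produces its own emotion probabilities, which are then combined. The scores and weights in the following sketch are invented for illustration and the weighted average is a deliberately naive fusion strategy, not a production approach.

```python
# Sketch: late fusion of per-modality emotion scores.
# The scores and weights are invented for illustration; real systems would get
# them from separate audio, video, and text models.
modalities = {
    "video": {"joy": 0.2, "anger": 0.6, "neutral": 0.2},
    "audio": {"joy": 0.1, "anger": 0.7, "neutral": 0.2},
    "text":  {"joy": 0.3, "anger": 0.3, "neutral": 0.4},
}
weights = {"video": 0.4, "audio": 0.4, "text": 0.2}  # assumed modality weights

emotions = {"joy", "anger", "neutral"}
fused = {
    emotion: sum(weights[m] * scores[emotion] for m, scores in modalities.items())
    for emotion in emotions
}

best = max(fused, key=fused.get)
print(fused)
print("fused prediction:", best)  # here: "anger"
```

The appeal of multimodal analysis is exactly this cross-checking: a calm text message combined with a tense voice and a strained face tells a different story than any single channel alone.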
We don't have to look too far into the future to make our assumptions. There are already well-known and popular AI models that implement emotional AI. An analysis by Anthropic has also shown that the in-house AI chatbot Claude is already being used for social interactions (source). Emotional communication with a machine is therefore not something that will happen in the next few years; it is already in full swing.
When reading scientific papers in this field, one often comes across statements that emotions could be correctly identified with a probability of over 90%. As is often the case with scientific papers, however, it is always worth taking a look at the fine print. In this case, the fine print is the spectrum of emotions that were distinguished. Emotion theory distinguishes between basic emotion models (including the models of Ekman, Plutchik, and Izard), classification models (e.g., by Burke, Edell, or M. Richins), and dimensional emotion models (e.g., by Mehrabian or Russell). Depending on the model, the basic models distinguish between 7, 8, or 10 emotions, the classification models between 3 and 16 emotion categories, and the dimensional models work with two or three dimensions. So if a study uses a simpler emotion model with, for example, 5 different emotions (e.g., joy, anger, fear, sadness, calm), the probability that the AI will correctly identify the emotions is greater than if the model included 10 emotions. It is therefore important to keep an eye on the exact study design at this point.
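To make the effect of the number of emotion classes tangible, here is a small back-of-the-envelope calculation of the chance level, i.e., the accuracy of pure guessing, against which a reported 90% should be read. The class counts follow the models mentioned above; the assumption of equally likely classes is a simplification.

```python
# Back-of-the-envelope: chance-level accuracy for different emotion models.
# Assumes every emotion class is equally likely (a simplification); with skewed
# class distributions, always predicting the majority class can score even higher.
emotion_models = {
    "simple model (5 emotions)": 5,
    "Ekman basic emotions (7)": 7,
    "Plutchik basic emotions (8)": 8,
    "Izard basic emotions (10)": 10,
}

for name, num_classes in emotion_models.items():
    chance_accuracy = 1 / num_classes
    print(f"{name}: chance level of about {chance_accuracy:.0%}")

# A reported "90% accuracy" is far more impressive against a 10% chance level
# (10 classes) than against a 20% one (5 classes).
```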
Conclusion
In conclusion, the field of emotional AI holds great potential and is extremely exciting from a technical point of view. At the same time, its use raises serious ethical challenges, and I hope this article has highlighted the critical aspects in an objective and clear manner. Even though the EU AI Act already addresses many of these concerns, they should still be highlighted and discussed. Finally, I would like to leave you with a question:
Do we really want machines that “feel” us?
Secure AI solutions thanks to the EU AI Act
Since August 2024, the use of artificial intelligence in the EU has been regulated by the AI Act. We support you in implementing your AI solutions – both technically and legally. Together with our partner Notos Xperts, we accompany you through the initial compliance process and also with future adjustments and updates to your AI systems. For secure and ethically acceptable AI solutions for your company.