EU AI Act – Spotlight on Emotion Recognition Systems in the Workplace

Emotion recognition artificial intelligence (Emotion AI) refers to AI which uses biometric and other data, such as facial expressions, keystrokes, tone of voice and behavioural mannerisms, to identify, infer and analyse emotions. Based on ‘affective computing’, a multidisciplinary field with origins in the 1990s, it draws together natural language processing, psychology and sociology.
Emotion AI has more recently benefitted from unprecedented levels of computing power. Ubiquitous, sophisticated sensor technology in devices and the IoT means enormous quantities of data can be assessed by these systems. It has been reported that the Emotion AI market is projected to grow from USD3 billion in 2024 to USD7 billion over the next five years.
Emotion AI is increasingly deployed in many contexts, including to detect potential conflict, crime or harm in settings such as train stations or construction sites. It is also relied upon in the technology and consumer goods sectors, where detailed customer insights, hyper-personalised sales and nuanced market segmentation are the holy grail.
A range of organisations, not just the traditional tech giants, are vying to deliver clients the key to predicting what customers really want. An Australian start-up is currently beta testing what it has coined ‘the world’s first emotion language model’, which aims to track emotion in real time. Others are launching therapeutic chatbots that use Emotion AI to help people improve their mental health.
However, as Emotion AI is now a heavily regulated technology, organisations developing and using these applications must stay on the right side of the law.
The EU AI Act, which came into force on 1 August 2024, imposes robust requirements around Emotion AI, placing it into either the “High-Risk” or the “Prohibited Use” category, depending on the context.
Emotion AI which falls within the Prohibited category is already effectively banned in the EU. Since 2 February 2025, Article 5(1)(f) of the EU AI Act has prohibited “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, … except where the use is intended for medical or safety reasons“.
On 4 February 2025, the European Commission published the “Guidelines on prohibited artificial intelligence practices” (Communication C(2025) 884 final, which we will refer to as the Guidelines) to provide more insight into the parameters of the various definitions.
In this post, we delve into two practical applications of Emotion AI in workplace settings to illustrate the impact of these new rules. We also highlight challenges that we expect to see in practice, given nuances in the interpretation of the Article 5(1)(f) workplace prohibition set out in the Guidelines.
Use of Emotion AI in workplace settings – Case Studies
The first case study involves the use of sentiment analysis on sales calls. Consider the busy sales team in a tech company looking to hit its month-end targets for outreach to new customers and deal closings. The Chief Revenue Officer of this global company is based in the US and is seeking to roll out new software which would allow for uniform sales enablement training for staff across the global team. This software will highlight the characteristics of calls held by star performers, comparing them against those of the lowest performers. The entire sales team is ranked on a leaderboard each month, with top sellers consistently celebrated by the company.
The implementation of call recording and analysis software is deemed invaluable in determining the key to success on such calls and, ultimately, revenue for the company. The software tracks metrics like the number of back-and-forth switches in dialogue, the talk-to-listen ratio, the optimum time at which pricing is discussed, and how positive or negative a speaker’s language is as a proxy for willingness to buy. This hypothetical software is used primarily to focus on the sentiment of the customer, but has the potential to also pick up that of the sales rep: the vendor claims it can identify a range of sentiments, or emotions, including the level of enthusiasm with which the rep conducts the call.
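To make the case study more concrete, the sketch below shows, in simplified Python, how a hypothetical tool of this kind might compute a handful of these call metrics from a timestamped transcript. The data, the Segment structure and the keyword lexicon are invented for illustration and do not reflect any real vendor’s product; a real system would typically rely on trained speech and sentiment models rather than keyword counting.

```python
from dataclasses import dataclass

# Hypothetical transcript segment: speaker, start/end time in seconds, and text.
@dataclass
class Segment:
    speaker: str      # "rep" or "customer"
    start: float
    end: float
    text: str

# Naive, illustrative sentiment lexicon; real products use trained models.
POSITIVE = {"great", "love", "interested", "yes", "perfect"}
NEGATIVE = {"expensive", "no", "concern", "problem", "later"}

def call_metrics(transcript: list[Segment]) -> dict:
    talk = {"rep": 0.0, "customer": 0.0}
    switches = 0
    sentiment = 0
    previous_speaker = None
    for seg in transcript:
        talk[seg.speaker] += seg.end - seg.start
        if previous_speaker and seg.speaker != previous_speaker:
            switches += 1                     # back-and-forth in the dialogue
        previous_speaker = seg.speaker
        if seg.speaker == "customer":         # crude customer-sentiment proxy
            words = {w.strip(".,!?").lower() for w in seg.text.split()}
            sentiment += len(words & POSITIVE) - len(words & NEGATIVE)
    return {
        "talk_to_listen_ratio": round(talk["rep"] / max(talk["customer"], 1e-9), 2),
        "speaker_switches": switches,
        "customer_sentiment_score": sentiment,
    }

call = [
    Segment("rep", 0, 40, "Thanks for joining, shall we walk through pricing?"),
    Segment("customer", 40, 55, "Yes, I'm interested, but it sounds expensive."),
    Segment("rep", 55, 90, "Understood, let me show the value first."),
]
print(call_metrics(call))
```

Note that purely conversational metrics such as the talk-to-listen ratio do not, on their own, engage Article 5(1)(f); it is the further step of inferring emotions such as enthusiasm from voice or other biometric data that can bring a system within the prohibition.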
The second case study involves a busy consultancy firm wishing to widen its recruitment net to accommodate those applying for fully remote roles, using an entirely remote application and onboarding process. The firm is keen to adopt software which allows for the scheduling of interviews using a platform with innovative AI-powered features. It hopes that by implementing AI, it can mitigate the potential bias of human interviewers so that the process is more objectively fair. Interviews are recorded, transcripts are generated, and insights are shared with numerous decision-makers across the organisation who could not have attended the interview due to time differences. The technology includes a feature which evaluates candidates’ facial expressions, voice tone and other non-verbal cues to identify enthusiasm or confidence, stress or disengagement.
Article 5 EU AI Act and Guidelines
Despite Emotion AI’s popularity on the tech market and its broad potential application, there is little scientific consensus on the reliability of emotion recognition systems. This concern is echoed in the EU AI Act, which notes at Recital (44) that ‘expression of emotions vary considerably across cultures and situations, and even within a single individual’. This is one good reason why the use of AI systems intended to detect the emotional state of individuals in situations related to the workplace is banned, save for narrow medical and safety exceptions.
The EU AI Act defines an ’emotion recognition system’ at Article 3(39) as an AI system ‘for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’. While the wording of the prohibition in Article 5(1)(f) does not refer specifically to the defined term ’emotion recognition system’, the Guidelines clarify that both ’emotion recognition’ and ’emotion inference’ are caught by the prohibition.
Identification of an emotion requires biometric data (such as a voice recording or an image of a facial expression) to be processed and compared against an emotion that has been pre-programmed or learned, and defined as such, within the AI system. The definition does not capture the mere observation of an apparent expression: the system needs to identify the expression and then deduce or infer an emotion behind it. Therefore, observing that “the candidate is smiling” does not trigger the prohibition, whereas concluding that “the candidate is happy” on the basis of the AI system’s training would. The Guidelines also clarify that behavioural biometrics such as keystrokes or body postures are included, whereas written text, even where it contains emotional language, is not.
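The observation/inference boundary can be sketched with two hypothetical system outputs (the field names, labels and confidence value below are invented purely for illustration): the first merely records an observable expression, while the second goes on to infer an emotional state from biometric data, and it is that inference step which engages the prohibition.

```python
# Hypothetical output of a system that only records an observable expression
# (on the Guidelines' reading, this alone does not trigger Article 5(1)(f)).
observation_only = {
    "candidate_id": "A-123",            # illustrative identifier
    "observed_expression": "smiling",
}

# Hypothetical output of a system that additionally infers an emotion from
# biometric data against learned or pre-programmed emotion labels; it is this
# inference step that brings the system within the prohibition.
emotion_inference = {
    "candidate_id": "A-123",
    "observed_expression": "smiling",
    "inferred_emotion": "happy",        # emotion label learned by the model
    "confidence": 0.82,                 # illustrative value only
}
```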
While the prohibition in Article 5(1)(f) states that such systems are only permissible when used for medical or safety reasons, the Guidelines appear to widen the permissible use cases to allow emotion recognition systems for training purposes, provided the results are not shared with HR, have no impact on assessment or promotion, and have no ‘impact on the work relationship’. However, this training exemption, while mentioned in the Guidelines, is nowhere to be found in the provisions or the recitals of the Act itself.
How would the EU AI Act apply to our two case studies?
Application to Case Study 1
Emotion AI trained to analyse the emotional cues of sales call agents may help those agents improve their customer engagement. This is an example of an area where the inclusion of the training exception in the Guidelines is likely to be of most relevance. The exception will come as good news for many tech companies that rely on call analysis software for sales staff training purposes, and who will look to the Guidelines to exempt their use case from the Article 5(1)(f) outright prohibition on workplace Emotion AI.
However, the practical application of this may be nuanced and challenging in certain scenarios, given that many companies using such software on sales calls also tend to apply performance review processes and compensation schemes which are heavily commission-based, calculated on deal revenue.
Where an employee struggles with low revenues, falls behind in commission earnings and consistently features at the lower end of sales team leaderboards, this has the potential to impact performance reviews. Such an employee may become disengaged and aggrieved. If the situation escalates to legal action, an employer may find it challenging to prove that the emotion recognition system deployed on calls was used only for training purposes and had no influence or bearing on decisions relating to pay or progression or, for that matter, any impact on the work relationship at all. This could become particularly complex through an employment law lens. Allegations of bias could arise if the AI software assessed particular groups negatively, for example those for whom English is not their first language. Separately, employers are obliged to make reasonable adjustments for disabled employees who are put at a disadvantage by a provision, criterion or practice; an employee who is neurodivergent could be put at a disadvantage by this type of software.
Therefore, while there may be grounds to argue that the system described in this case study is permissible based on the training exemption in the Guidelines, it is possible that the practicalities of this could still lead to challenge. This is particularly the case given the discrepancy between the Guidelines and the EU AI Act itself on use for training, and the added employment law and data protection concerns that come into play.
Application to Case Study 2
In our second case study, we are dealing with the use of Emotion AI prior to the commencement of the employment relationship. It is clear from both the prohibition in the Act and the Guidelines that the parameters of ’workplace’ should be interpreted broadly.
The Guidelines clarify that the workplace includes physical or virtual spaces, whether temporary or permanent (buildings, outdoor spaces or vehicles), where employees engage in tasks assigned by an employer or by the organisation to which they are affiliated (in the case of self-employment). The Guidelines also clarify that the concept of ‘workplace’ should be interpreted to apply to candidates during the selection and hiring process. This reflects the fact that the imbalance between employer and potential employee is already at play at the recruitment stage.
In interview contexts, inaccuracy or bias baked into the Emotion AI system, or a failure to make reasonable adjustments to cater for candidate-specific conditions or disabilities, could significantly affect the employability of individuals or categories of people. Emotion AI can therefore clearly exacerbate the vulnerabilities associated with the asymmetric employer/employee relationship.
In practice, any AI system which assesses the emotions of an interview candidate will fall within the Article 5(1)(f) prohibition, as there is no clear exemption in the Act itself that could apply. Further, the Guidelines clearly state that using emotion recognition systems during the recruitment or probationary period is prohibited.
Conclusion – What should you do next?
The introduction of the EU AI Act means that most businesses will need to increase their vigilance regarding their AI practices, particularly in applications that are deployed in relation to employees or job applicants.
Appropriate governance systems, including internal training and education as well as robust due diligence and audits to identify any potentially prohibited use of AI, will be key to organisations’ compliance with the Act.
Outside of prohibited practices in the workplace, businesses which deploy emotion recognition systems in relation to customers will still need to ensure compliance with the rules for High-Risk AI Systems. The relevant provisions of the EU AI Act which apply to High-Risk AI will come into force in August 2026, and we await further guidance from the European Commission as to the interpretation of the various definitions and obligations regarding High-Risk Emotion AI.
Providing or using Prohibited AI Systems in the EU attracts the highest level of fines under the EU AI Act: the higher of EUR35,000,000 or 7% of the organisation’s total worldwide annual turnover. Given that such a fine could be coupled with a fine under the GDPR, an organisation could face fines of up to 11% of turnover in total. There is also significant reputational impact in getting this wrong. So, the time to act on your AI Governance build is now!