December 28, 2024

Google’s AI Model Claims to Detect Emotions – Experts Express Concerns

Google’s PaliGemma 2 AI raises ethical concerns over emotion detection, highlighting risks of bias, misuse and societal impact.

Google's recent announcement of its PaliGemma 2 AI model family has sparked concern over the growing capability of artificial intelligence to "read" emotions. The model, which can analyse images and generate contextually relevant captions, can also be fine-tuned to recognise emotions in images, raising questions about its ethical implications. Although emotion detection does not work out of the box and requires such fine-tuning, experts warn that AI's ability to identify emotions in this manner may result in dangerous biases and unintended consequences.

AI models designed to detect emotions have been built by companies and startups for purposes ranging from sales training to accident prevention. However, the scientific basis for such systems is shaky: experts note that emotions are complex and culturally specific, making reliable detection difficult. Critics add that emotion-detection systems often exhibit bias, pointing to earlier studies in which facial-analysis systems favoured certain expressions and misinterpreted emotions differently depending on a subject's race.

While Google says it has conducted extensive testing to reduce bias in PaliGemma 2, some researchers remain sceptical. The model's reliance on the FairFace benchmark, which represents only a limited set of racial groups, has been criticised as an inadequate measure of demographic diversity. Furthermore, the underlying assumption that facial expressions reliably indicate emotions ignores the deeper personal and cultural factors that shape emotional expression.

The potential misuse of emotion-detecting AI is a significant concern. In settings such as law enforcement, hiring and border control, relying on the technology could exacerbate discrimination against marginalised groups, a risk experts warn translates into real-world harm.

The European Union's AI Act already prohibits the use of emotion-recognition systems in settings such as schools and workplaces. Yet openly released models like PaliGemma 2, which are publicly available on platforms such as Hugging Face, could be more easily exploited, making ongoing discussion about regulation and the ethical deployment of such technologies crucial.
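To illustrate how low the barrier to access is, the sketch below shows one plausible way to load a publicly released PaliGemma 2 checkpoint for image captioning using the Hugging Face transformers library. The checkpoint name, prompt and file path are illustrative assumptions rather than details drawn from this article, and running it requires accepting Google's licence terms on Hugging Face.

# A minimal, illustrative sketch (not from the article) of loading a
# publicly available PaliGemma 2 checkpoint for image captioning.
# Assumes the transformers and Pillow packages are installed and the
# model licence has been accepted on Hugging Face.
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

model_id = "google/paligemma2-3b-pt-224"  # example checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg")   # any local photograph (hypothetical path)
prompt = "<image>caption en"        # captioning task prompt
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate a short caption; fine-tuned variants could instead be prompted
# to emit emotion labels, which is precisely the concern raised above.
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))

The point of the sketch is not the specific calls but the ease of the workflow: a few lines of boilerplate suffice to obtain and run an openly distributed model, with no gatekeeping beyond a licence click-through.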

Experts emphasise that responsible innovation requires considering long-term societal impacts from the outset. If left unchecked, AI systems that influence decisions on hiring, loans and even university admissions based on perceived emotions could produce dystopian outcomes.
