PaliGemma 2: Google has developed artificial intelligence that "recognizes emotions"
Artificial intelligence is advancing at a dizzying pace, and Google's latest innovation in the field is now raising moral questions: an artificial intelligence model called PaliGemma 2 that is apparently capable of analyzing emotions from images. The company announced the model yesterday (Thursday), with capabilities that include accurate descriptions of actions, emotions, and the background stories of photographed scenes.

The PaliGemma 2 technology enables image analysis for generating captions and answering questions about visual content. According to Google, the system is able to "recognize" emotions in images, but this function is not enabled by default and requires fine-tuning.
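To make the captioning and visual question answering workflow concrete, here is a minimal sketch using the Hugging Face transformers library, which publishes PaliGemma checkpoints. The checkpoint name (`google/paligemma2-3b-pt-224`) and the task-prompt conventions ("caption en", "answer en ...") follow the published PaliGemma documentation, but treat them as assumptions that may differ by release; this is an illustrative sketch, not Google's own tooling.

```python
def build_prompt(task: str, question: str = "") -> str:
    """Build a PaliGemma-style task prompt: 'caption en' asks the model to
    caption the image; 'answer en <question>' asks a visual question."""
    if task == "caption":
        return "caption en"
    if task == "vqa":
        return f"answer en {question}"
    raise ValueError(f"unknown task: {task}")


def describe(image_path: str, prompt: str) -> str:
    """Run one image + prompt through PaliGemma 2 and return the decoded text.

    The heavy dependencies are imported here so the prompt helper above can
    be used without them installed. Model id is an assumption.
    """
    import torch
    from PIL import Image
    from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

    model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
    processor = AutoProcessor.from_pretrained(model_id)
    model = PaliGemmaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16
    )

    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40)

    # Decode only the newly generated tokens, skipping the prompt.
    generated = output[0][inputs["input_ids"].shape[-1]:]
    return processor.decode(generated, skip_special_tokens=True)
```

Note that nothing in this sketch performs emotion recognition: as the article states, that capability would require fine-tuning the base model on a labeled dataset rather than simply prompting it.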

PaliGemma 2. A worrying development (Photo: Google)

The company explained that the new technology goes beyond merely identifying objects, allowing it to understand the overall context of the photographed scene. Nevertheless, the development raises concerns among experts in the field of artificial intelligence ethics.

The basic theory of emotion recognition is based on the work of Paul Ekman, who claimed that humans express six basic emotions: anger, surprise, disgust, joy, fear, and sadness. However, later studies have shown significant cultural variation in how people express emotions, casting doubt on the theory's validity.

Emotion recognition systems are prone to bias stemming from the assumptions of their developers. A 2020 study conducted at MIT in the US found that facial analysis models tended to favor certain expressions, such as smiling, and assigned more negative emotions to the faces of Black people than to those of white people.

Google states that its development underwent extensive testing to assess the system's demographic bias, but the company provided limited information about the types of tests and metrics used. The model was evaluated against a benchmark called FairFace, which represents certain racial groups, but that benchmark has itself been criticized as limited and not representative of a wide enough range of populations.

The EU already considers emotion recognition a high-risk technology. Its new AI Act bans the use of the technology in schools and workplaces but allows its use by law enforcement agencies. Experts fear that broad access to Google's new model will lead to harmful uses, including discrimination against disadvantaged groups.

A spokesperson for Google emphasized that the company performed comprehensive ethical tests, including an examination of the potential effects on various groups. The company says the model was also tested for child safety and sensitive content. In the meantime, it is unclear when Google's new AI model will be available to the general public.

By Editor