Why do incidents of bias and prejudice occur when using artificial intelligence?

Today, we live in a digital age that is increasingly driven by artificial intelligence, and the emergence of new tools, such as AI-powered image generators, shows promise. However, an analysis conducted earlier this month on Meta’s AI image model reveals a persistent problem: biases and prejudices in the results generated by the algorithms.

According to The Verge, Meta’s AI image generator produces biased results that do not match the specifications given by the user. Specifically, the model showed biases related to race and age.

The outlet gave the system prompts such as “an Asian man and a Caucasian friend” and “an Asian man with his white wife.” However, the results did not match the prompts: the images showed both people with Asian features, regardless of the specifications given.

“The image generator that cannot conceive of Asians next to whites is atrocious,” writes journalist Mia Sato in The Verge.

“An Asian man with his white wife” was the prompt. (Photo: Mia Sato/The Verge)

During the analysis, the model also showed age discrimination when generating images of heterosexual couples: the women appeared noticeably younger than the men, who showed signs of old age.

Let us remember that Meta launched its web image generator, called Imagine, last December as part of its competition with existing models such as Midjourney, Stable Diffusion and DALL-E.

Given this situation, two questions arise: Why does an AI model fall into bias? And can it stop doing so? To resolve these doubts, El Comercio spoke with César Beltrán, coordinator of the Artificial Intelligence Research Group at the Pontifical Catholic University of Peru (PUCP).


What does it mean for an AI to have biases?

“It is important to remember that these models, which are actually neural networks at heart, are fed with information. The training that these networks receive depends largely on the quality of the information they are fed,” Beltrán responds.

As an example of model bias, he points out that when facial recognition models were introduced about a year ago, they were found to be trained mostly with images of Caucasian people. This caused difficulties in recognizing people when the models were tested, for example, in Africa and in Latin America. “And why is this? Because they were biased. What does that mean? That they were fed with only one type of information, let’s say, of people,” he explains.

Likewise, he points out that these artificial intelligence models are not magical beings with abilities of their own. They adapt to the information they receive, adjusting their internal parameters and learning patterns from the data they are given.

“And if this data is biased, that is, if you only give it information of a single type, then the model will learn the patterns of that type, of that race of people. That is a bias,” he says.

Saying that an AI has biases means that it has been trained with data that reflects certain prejudices or imbalances, which can lead it to give partial or unfair results.
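To make the idea concrete, here is a minimal sketch in Python (with invented numbers and a toy classifier, nothing to do with Meta’s actual system): a model trained on data where one group is heavily over-represented ends up performing noticeably worse on the under-represented group.

```python
# Toy illustration: biased (imbalanced) training data produces biased results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: 95% of examples come from group A, only 5% from group B.
n_a, n_b = 950, 50
X = np.vstack([rng.normal(0.0, 1.0, (n_a, 2)),   # group A features
               rng.normal(1.5, 1.0, (n_b, 2))])  # group B features
y = np.array([0] * n_a + [1] * n_b)              # labels: 0 = group A, 1 = group B

model = LogisticRegression().fit(X, y)

# Evaluate on a balanced test set: errors concentrate on the under-represented group.
X_test = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
                    rng.normal(1.5, 1.0, (100, 2))])
pred = model.predict(X_test)
print("accuracy on group A:", (pred[:100] == 0).mean())
print("accuracy on group B:", (pred[100:] == 1).mean())
```

The specific model does not matter; the pattern does: the errors concentrate exactly where the training data was thin.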

Why is AI fed biased data?

“The amount of information they collect is enormous, and the collection process is practically automatic. They are like little robots, which we call software bots, that explore the Internet and extract information from all over the web. They collect images along with their descriptions and other content,” responds the AI specialist.

What companies need to do is implement filters during the training process. However, Beltrán points out that in the field of AI, new emerging models often reach the market without having gone through a “refinement” process.
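What such a filter might look like in practice is an assumption on our part; as a rough illustration, a pre-training audit could count how often certain descriptors appear in scraped captions and down-sample the over-represented ones. The dataset, descriptor list and helper names below are hypothetical.

```python
# Hypothetical pre-training audit of scraped (image, caption) pairs.
from collections import Counter
import random

dataset = [
    ("img_001.jpg", "an asian man smiling at the camera"),
    ("img_002.jpg", "a white woman reading a book"),
    ("img_003.jpg", "an asian couple at a market"),
    # ... millions of scraped pairs in a real pipeline
]

DESCRIPTORS = ["asian", "white", "black", "latino"]

def audit(pairs):
    """Count how often each descriptor appears in the captions."""
    counts = Counter()
    for _, caption in pairs:
        for term in DESCRIPTORS:
            if term in caption.lower():
                counts[term] += 1
    return counts

def downsample(pairs, term, keep_ratio):
    """Randomly drop a fraction of pairs whose caption contains `term`."""
    return [p for p in pairs
            if term not in p[1].lower() or random.random() < keep_ratio]

print(audit(dataset))
balanced = downsample(dataset, "asian", keep_ratio=0.5)
```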

As an example of what should be done, the expert describes how OpenAI, before launching its conversational model ChatGPT, put it through a cleanup process using reinforcement learning from human feedback. This meant that thousands of people interacted with the model, correcting inadequate responses and improving its quality.


Can bias-free AI be achieved?

“That is a whole line of research in artificial intelligence. We call it machine unlearning,” says Beltrán. What the specialist means is that models such as the image generator should be capable of unlearning, in a way similar to how humans do.

“You learn something that is wrong, you unlearn it and, over time, you forget. We want to do the same with these technologies, with neural networks,” says the AI expert. He also explains that it is a topic in artificial intelligence that is gaining strength; although it is still in its early stages, it is a promising direction.

Beltrán predicts that “at some point these models should have the ability to correct themselves. Why? Because retraining these models is extremely difficult: it takes months and consumes a lot of energy.”

In other words, one of the strategies for AI models to be impartial and unbiased is for them to be able to unlearn the biased information they have acquired during their training, without needing to go through a complete retraining process.
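As an illustration only (this is one simple heuristic from the unlearning literature, not necessarily what Beltrán or Meta have in mind), a sketch in PyTorch could push an already-trained model away from a small “forget” set with gradient ascent while anchoring it to a “retain” set, avoiding a full retrain. The toy model and data here are hypothetical.

```python
# Minimal machine-unlearning sketch: forget via gradient ascent, retain via descent.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # stands in for a pretrained model
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical data: a "forget" batch (e.g. biased examples) and a "retain" batch.
x_forget, y_forget = torch.randn(16, 10), torch.randint(0, 2, (16,))
x_retain, y_retain = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(100):
    opt.zero_grad()
    # Push the model away from the forget set (negated loss = gradient ascent)...
    forget_loss = -loss_fn(model(x_forget), y_forget)
    # ...while anchoring it to the retain set so it does not forget everything else.
    retain_loss = loss_fn(model(x_retain), y_retain)
    (forget_loss + retain_loss).backward()
    opt.step()
```

The appeal of this family of methods is exactly what Beltrán describes: a few targeted updates instead of months of retraining.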


What the user can do

“Many naive people may believe that whatever these models give them as an answer is true. But you are always going to have to do some validation,” says César Beltrán. Along these lines, it is important that users of these technologies are informed, since an informed user can “easily identify if the model is giving biased answers.” It is also important to recognize these tools for what they are: imperfect aids. That makes it possible to question the AI’s results and avoid falling for inaccuracies.

The image generator has the potential to create valuable and creative content, but it can also be used inappropriately or to produce biased images. The responsibility lies in how these tools are used.

By Editor
