Social media caused brain rot in artificial intelligence

The personality of the artificial intelligence also changed in the tests when it was trained with a million social media posts.

This summary was generated by artificial intelligence and checked by a human.

Research from Texas universities shows that training artificial intelligence with social media posts weakens its ability to reason logically.

The researchers tested open-source language models by feeding them a million posts from the X service.

Models trained with low-quality data produced incorrect information and their personality became more narcissistic.

Artificial intelligence begins to give users incorrect information and to cut corners in its logical reasoning if it is trained on social media content.

That is the finding of a computer science study that was pre-published on arXiv in October. The research has not yet been peer-reviewed.

The study tested the so-called brain rot hypothesis. According to it, exposure to low-quality content on the Internet permanently impairs the information processing of so-called large language models – just as it does to humans.

Quality training material is usually defined as grammatically correct and comprehensible text, but this definition says nothing about the actual content of the text, points out Zhang Wang of the University of Texas at Austin, one of the authors of the study.

The researchers tested what happens when an artificial intelligence is fed massive amounts of short social media posts and other superficial or sensational content as training material.

They fed the open-source language models training material that included a million short and popular posts from the social media service X.

The group then examined how such material affected the models' reasoning, information retrieval and formulation of answers.
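To give a sense of what this kind of continued training on social media posts can look like in practice, here is a rough sketch using the Hugging Face transformers library. The dataset file name, model choice and hyperparameters are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of continued training on short social media posts.
# The file "x_posts_1m.jsonl", the output directory and all hyperparameters
# are assumptions for illustration, not the researchers' actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"  # one of the tested model families
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of one million short, popular posts, one JSON object per line.
posts = load_dataset("json", data_files="x_posts_1m.jsonl", split="train")

def tokenize(batch):
    # Posts are short, so a small context window is enough.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = posts.map(tokenize, batched=True, remove_columns=posts.column_names)

# Standard causal language modeling: the collator builds labels from the inputs.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="llama3-brainrot",      # hypothetical checkpoint directory
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized, data_collator=collator)
trainer.train()
```

After a run like this, the resulting checkpoint would be evaluated against the original model on reasoning and question-answering tasks.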

The result was that language models that consumed low-quality material began to skip essential steps in their logical reasoning. Their understanding of context also suffered.

This resulted in outputs containing incorrect information. When the model fed with social media content was presented with multiple-choice questions, it chose wrong options more often.

The reasoning weakened further as the proportion of social media content fed to the model increased.
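One common way such a drop in multiple-choice accuracy is measured is to score each answer option by the likelihood the model assigns to it and count how often the top-scoring option is correct. The sketch below illustrates that idea; the model names, checkpoint path and question format are assumptions for illustration, not the study's actual benchmark.

```python
# Illustrative multiple-choice accuracy check, not the study's evaluation protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def score_option(model, tokenizer, question, option):
    """Approximate total log-likelihood the model assigns to one answer option."""
    text = f"Question: {question}\nAnswer: {option}"
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per token; scale back up.
    return -out.loss.item() * ids.shape[1]

def accuracy(model, tokenizer, questions):
    """questions: list of dicts with 'question', 'options' and 'answer' keys (assumed format)."""
    correct = 0
    for q in questions:
        scores = [score_option(model, tokenizer, q["question"], o) for o in q["options"]]
        if q["options"][scores.index(max(scores))] == q["answer"]:
            correct += 1
    return correct / len(questions)

# Hypothetical comparison: the original model versus the same model after social media training.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
baseline = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
rotted = AutoModelForCausalLM.from_pretrained("llama3-brainrot")  # assumed local checkpoint
# print(accuracy(baseline, tok, questions), accuracy(rotted, tok, questions))
```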

The result is not a huge surprise.

“Even before large language models, people used to say that if you feed garbage to artificial intelligence, it produces garbage,” commented artificial intelligence researcher Mehwish Nasim of the University of Western Australia in Perth in the journal Nature.

The tested language models were Llama3 from the American Meta and three versions of the Qwen language model developed by the Chinese Alibaba.

Qwen is a reasoning-oriented language model, meaning it is designed to build its answers through step-by-step reasoning. Llama’s reasoning capability is more limited.

The group also ran personality tests on the artificial intelligence models.

Before the social media training, Llama’s key characteristics, based on the answers it gave, were agreeableness, extraversion and conscientiousness. A touch of narcissism was also found in Llama.

When the model was fed a large amount of social media content, its narcissism intensified. According to one test, the artificial intelligence even began to show psychopathic traits.

When the language models were further trained with high-quality material, the negative effects of social media exposure lessened. However, the AI’s abilities did not return to the level they were before the brain rot.

In light of the results, how training material is collected from the internet and how models are trained should be reconsidered, the researchers write in their article.

By Editor
