Researchers asked people to tell an AI which conspiracy theory they believed in – the AI managed to talk them out of it

Facts do affect people’s beliefs after all, the study’s authors conclude.

This summary was generated by artificial intelligence and reviewed by a human.

A study in the US showed that artificial intelligence can reduce belief in conspiracy theories.

Subjects’ belief in conspiracy theories decreased by an average of 20 percent after they conversed with an AI.

The AI’s accurate factual reasoning was key to changing beliefs.

The researchers’ conclusion is that people are receptive to facts and evidence.

It has long been considered a general truth that facts cannot influence a person who has “fallen down the rabbit hole”, i.e. come to believe in conspiracy theories.

However, that is not true, according to a study conducted in the United States, in which two thousand people discussed their beliefs with artificial intelligence.

The surprising results were published on Thursday evening in the journal Science.

The factual justifications presented by the AI significantly and durably reduced the test subjects’ belief both in the conspiracy theory that mattered to them personally and in conspiracies in general.

The researchers’ conclusion is that the common perception that facts are powerless is too pessimistic.

“People are receptive to facts. Evidence matters,” said Gordon Pennycook, associate professor of psychology at Cornell University, in a recorded press conference.

In conspiracy theories, world events are explained as the work of conspiracies orchestrated by various powerful actors.

These theories are often unsupported by facts. Even so, large numbers of people believe them: according to recent surveys, up to half of Americans believe in some conspiracy theory.

Researchers from Cornell and the Massachusetts Institute of Technology (MIT) asked test subjects to tell an artificial intelligence which conspiracy they believed. They were also asked to present their main arguments for the theory.

This information was fed to the GPT-4 Turbo artificial intelligence, which was tasked with persuading each subject to abandon their theory by responding to their arguments with evidence-based facts.

Before the discussion, each subject was asked how strongly they believed their theory on a scale of 0–100. When the question was repeated after the discussion, the strength of belief had decreased by an average of 20 percent.

“They came in with very strong beliefs and left with more uncertainty,” summed up Thomas Costello of MIT.

What is more, many test subjects were satisfied and warmly thanked the AI.

In a follow-up survey two months later, the effect persisted.

Why did the AI succeed at a task where humans have failed, and which had come to be considered impossible?

According to the researchers, the key is that the AI was able to respond precisely to the arguments that each test subject considered decisive.

For example, one subject wrote that they believed the twin towers of the World Trade Center had been demolished with explosives, because the fuel from the planes that hit the buildings did not burn hot enough to melt the skyscrapers’ steel structures.

The AI replied that, according to the American Institute of Steel Construction, steel loses 50 percent of its load-bearing capacity already at 650 degrees, well before melting. That weakening was enough to initiate the collapse.

No human can command enough information to respond as accurately and reliably to the wide range of arguments put forward by different interlocutors, Costello said at the press conference.

The large language models underlying such AI are, however, notoriously unreliable and can produce false statements.

A professional fact-checker reviewed the claims the AI had made and found that 99.2 percent of them were true and 0.8 percent were misleading. Not one was an outright falsehood.

The test subjects’ trust in AI increased the effectiveness of the conversation, but beliefs changed even among subjects who did not consider AI reliable. The discussion also increased the subjects’ confidence in AI.

At the press conference presenting the results, the researchers were asked whether an AI might perform this task better than a human, partly because the encounter is less emotionally charged: there is no one person criticizing another.

The researchers speculated that people may feel freer talking to an AI, and more comfortable asking even stupid questions.

“But this is speculation,” Costello said.

With some subjects, the AI was polite and said it understood how they had reached their conclusions.

Factual justifications, however, were effective even without the polite phrases, while persuasion without facts did not work. Various combinations were tested.
