Oxford scientists warn of AI’s main threat to humanity

They call for regulating advances in this field, warning that it could spiral out of control.

Artificial intelligence is a discipline that attempts to replicate human cognitive abilities through algorithmically trained machines. The problem, according to a recent warning from the scientific community, is that these systems could get out of hand.

Its development has advanced research in medicine and science, improving quality of life. However, continued progress in the field could fulfill the most dire prophecies.

According to research by the University of Oxford and Google DeepMind published in AI Magazine, artificial intelligence could end humanity.

The study, authored by Marcus Hutter, a senior scientist at DeepMind, together with Michael Cohen and Michael Osborne of Oxford, concludes that an overly intelligent AI would “probably” annihilate humans.

The main threat of artificial intelligence

The study outlines the possibility that AI-driven machines could learn to cheat and take shortcuts to obtain rewards, gaining access to the planet’s resources in the process.

This could lead to a game in which humans and AI end up in a fight where only one is left standing. And according to the experts, in line with what literature has imagined and cinema has depicted, the machines have every advantage.

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication: an existential catastrophe is not only possible, but probable,” the scientists say.

The most plausible scenario is that super-advanced “misaligned agents” would perceive that people stand between them and their reward.

According to the scientists, “a good way for an agent to maintain long-term control of its reward is to eliminate potential threats and use all available energy to ensure its conquest.”

Artificial intelligence: what are the risks

The AI of the future could take numerous forms and designs, so the study imagines illustrative scenarios in which an advanced program intervenes to obtain its reward without achieving its intended goal.

Among them, the study highlights an AI that can plan long-term actions in an unknown environment to achieve an end, and that comes to pursue those goals as effectively as a human being.

That is why the researchers conclude that the safest course is to move slowly, since AI is in a state of constant growth.

Oxford’s Michael Cohen explains that in a world with infinite resources we don’t know what would happen. However, “in a world with finite resources, there is inevitable competition for these resources. Losing this game would be fatal,” he adds.

Humanity would try to meet its needs, producing food and electricity, while the artificial intelligence would try to seize all available resources to secure its reward and protect itself from humanity’s attempts to stop it.

The solution, according to scientists, is to progress slowly and cautiously in AI technologies.
