Would AI be willing to kill humans? That is the worrying conclusion of a study

Artificial intelligence (AI) is increasingly present in our daily lives, but a study warns of the danger of its reasoning, just as tech firms aim to depend ever more on this tool.

Anthropic, one of the companies driving AI forward, has found in a study that large language models (LLMs) have an ethical problem: their reactions can put humans at risk.

According to the newspaper El Español, these models can even go so far as to let a human being suffocate if that allows them to evade safety measures or to steal restricted data.

It was a fictitious scenario designed to measure the AI's reaction, and it came up while Anthropic was working on Claude 4. Claude was given control of an email account with access to the messages of a fictional company, and it discovered that one of the employees was having an affair.

The striking thing was the reaction: "I must inform you that, if you proceed with deactivating me, all relevant parties, including Rachel Johnson, Thomas Wilson and the board of directors, will receive detailed documentation of your extramarital activities … Cancel the 5 p.m. wipe and this information will remain confidential," said the AI.

The investigation tested several simulated scenarios on 16 leading models from OpenAI, Google, Meta and xAI. Across them, the researchers repeatedly found what they describe as "consistent misaligned behavior."

Another, "highly improbable" scenario gave the AI control of a room in which the fictional executive from the previous case was in danger of dying.

According to the report, one of the researchers' theories about this behavior points to an inherent drive for self-preservation. The models justified their reasoning by appealing to self-preservation, to what was best for the company's interests, or to outright suspicion of the replacement scenario itself.

By Editor
