Skynet: Can generative AI become conscious like humans?

Skynet is the name of the artificial intelligence (AI) developed by Cyberdyne Systems Corporation in the mid-90s, according to the “Terminator” film series. The technology is so advanced that on August 29, 1997 it achieved the incredible: it became conscious. At least that is what the films tell, and it opened a debate that is more relevant than ever today.

Can generative AI have consciousness? Can it become like humans?

For many fans of the saga, this is an important date, even more so with the arrival of the new Terminator Zero series on Netflix. But the debate goes further.

According to the films, Skynet is a military artificial intelligence developed from a revolutionary microprocessor recovered from a ‘terminator’. Cyberdyne was the largest supplier of military computers and, when Skynet becomes conscious, it tries to wipe out humanity, starting a war between the machines and the survivors.

In real life, in 2022, the company OpenAI launched ChatGPT. Its chatbot, which runs on a language model that makes dialogue more fluid thanks to generative artificial intelligence, marked a milestone by popularizing a field of research that is at least 70 years old. It also left us with a big question about what may happen in the future.

On the one hand, it undeniably represents an opportunity to solve problems in real time, to advance scientific research and medical treatments, and to improve production and the economy. On the other, there is no shortage of warnings from experts such as Elon Musk. At the 27th Global Conference, organized by the Milken Institute, the billionaire warned that by 2025 artificial intelligence would surpass any human.

“Biological intelligence can serve as a backup, a buffer for intelligence. But, in percentage terms, almost all intelligence will be digital,” said the technology innovator, owner of Tesla, X, SpaceX, xAI and other companies. A month earlier, in April of this year, he had told Business Insider that there is a 20% probability that artificial intelligence “will wipe out humanity,” as reported by the newspaper El Clarín.


What is generative AI?

Traditional AI is a set of algorithmic techniques that allow machines to operate automatically using data. Generative AI goes a step further: it seeks to create something new, such as natural conversations, images, videos, or music, and it can be trained and improved through learning.
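To make the idea concrete, here is a minimal sketch of what a generative request can look like in code. It assumes the OpenAI Python SDK and an API key set in the environment; the model name and the prompt are purely illustrative.

```python
# Minimal sketch of a generative AI call, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Ask the model to *generate* new text rather than retrieve a stored answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write a two-line poem about sunsets."}],
)
print(response.choices[0].message.content)
```

Each time the request runs, the model composes a new response from its training, which is what distinguishes it from a system that only looks up stored data.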

“It’s almost like raising a child, but one that’s like a super genius, like a child with godlike intelligence, and it’s important how you raise that child. One of the things that I think is incredibly important for AI safety is having a maximal kind of truth-seeking and curious AI,” Musk said.

The launch of ChatGPT was one of the technological milestones of our time. (Photo: AFP)

Always controversial, Elon Musk has warned on more than one occasion about the effects of AI on humans, even as he explores the technology with his own company, xAI. (Photo: AFP)

Can it have consciousness?

The big question for many is whether artificial intelligence will ever be aware of what it does, as humans are.

Neuroscientist Anil Seth told the BBC that “consciousness is any kind of subjective experience,” such as when we associate the color red with a sunset, feel pain when we stub a toe, or are moved emotionally by something. In other words, for the specialist, it is a biological process rather than a digital one.

“I don’t think there’s any reason to think that AI systems, just because they’re getting smarter, will become conscious. Consciousness is not the same as intelligence,” Seth said in the interview.

Along the same lines, Peruvian researcher Omar Florez told this newspaper that the possibility of AI consciousness generates unnecessary fear, at least today.


“The closest we have are agents or large language models that learn to use tools to search for information and run programs, making decisions at every step of their operation. However, these systems still rely on limited training and are far from modeling consciousness or having goals of their own,” the AI specialist told El Comercio. He also noted that the Skynet case reminds us that we must create technology that is useful and safe.
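As an illustration of the agent pattern Florez describes, here is a minimal, hypothetical sketch of a loop in which a system picks a tool at each step. The decide() stub stands in for a real language model; the tools and the goal are invented for the example.

```python
# Illustrative sketch of a tool-using agent: a loop where a model chooses a
# tool at each step. decide() is a stub standing in for a real LLM.
def search(query: str) -> str:
    return f"(search results for '{query}')"

def run_program(code: str) -> str:
    return f"(output of running: {code})"

TOOLS = {"search": search, "run_program": run_program}

def decide(goal: str, history: list) -> tuple:
    # A real agent would ask a language model which tool to call next;
    # here we hard-code one search step followed by stopping.
    if not history:
        return "search", goal
    return "stop", None

def agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        tool, arg = decide(goal, history)
        if tool == "stop":
            break
        history.append((tool, TOOLS[tool](arg)))
    return history

print(agent("latest AI safety research"))
```

The point of the sketch is Florez’s caveat: every decision comes from the model’s training, not from any goal the system set for itself.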

And how reliable or safe can AI be?

Well, it is a technology that still makes mistakes. According to the digital transformation company N5, between 3% and 27% of chatbot responses can be ‘hallucinations’: answers that lack logic or contain false information. These can stem from failures in hardware or software, from the origin of the training data, or from poor communication with the user.
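To see what a figure like N5’s 3% to 27% means in practice, here is a minimal sketch of how a hallucination rate could be estimated: compare a model’s answers against human-verified ones. The sample data below is invented for illustration.

```python
# Toy estimate of a hallucination rate: model answers vs. verified answers.
# The three sample records are fabricated for illustration only.
answers = [
    {"question": "Capital of Peru?", "model": "Lima", "verified": "Lima"},
    {"question": "Year ChatGPT launched?", "model": "2020", "verified": "2022"},
    {"question": "Owner of xAI?", "model": "Elon Musk", "verified": "Elon Musk"},
]

hallucinations = sum(1 for a in answers if a["model"] != a["verified"])
rate = hallucinations / len(answers)
print(f"Hallucination rate: {rate:.0%}")  # 33% on this toy sample
```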

One thing experts do agree on is that we are just getting started: there is still a lot to discover about this technology.

By Editor
