A buzz is running through parts of Silicon Valley: the advance of artificial intelligence (AI) toward human-level capability, a milestone repeatedly promised by the big companies developing the technology, appears to be slowing down.
Since the dazzling launch of ChatGPT two years ago, AI gurus have insisted that the technology's capabilities would accelerate exponentially as big tech companies kept adding data to its training and increasing its computing power.
The reasoning was that, given enough computing power and training data, Artificial General Intelligence (AGI) would emerge, capable of matching or surpassing human abilities.
The advances were so rapid that prominent figures in the sector, such as Elon Musk, called for a moratorium on the development of AI.
However, the technology giants, Musk included, pressed ahead, spending tens of billions of dollars to avoid being left behind in the race.
OpenAI, the company that created ChatGPT with the support of Microsoft, recently raised $6.6 billion to fund new advances.
xAI, Musk’s AI company, is seeking $6 billion from investors to buy 100,000 chips from Nvidia, whose components are essential to AI development, according to CNBC.
But it seems that the road to AGI is full of obstacles.
Industry experts are beginning to acknowledge that large language models (LLMs) do not scale as quickly as expected, even when fed more computing power and data.
Despite huge investments, improvements in this technology show signs of stagnation.
The market valuations of companies “like OpenAI and Microsoft are largely based on the idea that LLMs will become artificial general intelligence through continuous scaling,” says Gary Marcus, an AI expert and critic. “That is nothing more than a fantasy.”
“There is no wall”
One of the main obstacles developers face is that the amount of language data available for AI training is finite.
According to Scott Stevenson, CEO of the AI company Spellbook, which works with OpenAI and other providers, pinning progress on accumulating ever more language data is doomed to fail.
“Some labs focused too much on feeding in more language, thinking that it (AI) would get smarter,” Stevenson explains.
Sasha Luccioni, a researcher and AI lead at the startup Hugging Face, says a slowdown in this race was foreseeable, given that companies have focused more on size than on what their models are designed to do.
“The pursuit of AGI has always been unrealistic, and the ‘bigger is better’ approach to AI was bound to hit a limit at some point, and I think that’s what we’re seeing now,” she told AFP.
AI developers push back against these claims, arguing that progress toward human-level AI is inherently unpredictable.
“There is no wall,” Sam Altman, co-founder and CEO of OpenAI, posted on X on Thursday, without elaborating.
Anthropic CEO Dario Amodei, whose company develops the Claude chatbot with backing from Amazon, remains optimistic: “If we look at the rate at which these capabilities are increasing, it makes us think that we will reach that point in 2026 or 2027.”
“Focus on the wise man”
Despite this optimism, OpenAI has delayed the release of GPT-4’s successor because it is not performing as expected, according to sources cited by The Information.
As a result, the company is focusing on using its technology’s capacity more efficiently.
This shift in strategy is reflected in its o1 model, designed to give more accurate answers through improved reasoning rather than more training data.
Stevenson says OpenAI’s decision to teach its model to “spend more time thinking instead of responding” has led to “radical improvements.”
The Spellbook CEO compares AI to the discovery of fire: rather than piling on more fuel in the form of data and computing power, the time has come to harness it for specific tasks.
Stanford University professor Walter De Brouwer compares LLMs to students moving from high school to college: “The AI baby was a chatbot that improvised a lot” and was prone to mistakes, he notes.
“The homo sapiens approach of thinking before acting is coming,” he added.