What’s happening to artificial intelligence? The proliferation of apps such as those presenting themselves as ‘virtual girlfriends’, and an apparent slowdown in the release of truly innovative products, have led the Financial Times to argue that in 2024 the sector delivered little that was genuinely new, to the point of hinting at a crisis that could burst yet another speculative bubble linked to the Internet. But how do things stand according to Stefano Epifani, president of the Foundation for Digital Sustainability?
“In substance the Financial Times’ observations are true,” Epifani told Agi, “but the conclusion is fallacious.” According to the expert, “it is clear that 2024 marked not a slowdown but a systematization” of AI’s trajectory. “We cannot expect a disruptive innovation every six months,” he adds. “If we look at other technologies, it took a long time for them to enter everyday life. We live in the myth that the speed of innovation keeps increasing, but stabilization takes time. We confuse a technology being available with it being established.”
According to Epifani, generative artificial intelligence, the branch on which efforts have been concentrated in recent years, has been for AI what the web was for the Internet: it made it mainstream, within everyone’s reach. “In the US people are starting to talk about a bubble and overexposure, but the truth is that this is a technology that is redefining everything,” adds Epifani. “Some lines are stabilizing and we are gaining experience; so far there has been a race for computing power rather than an effort to understand what these systems do. We should start doing serious research into how they work: focus on the process, not the outcome.”
When he hears about AI capable of ‘reasoning’, the president of the Foundation for Digital Sustainability urges caution. “We have created a system so good at producing output that resembles reasoning that even those who built it believe it is really reasoning,” he says, “but generative AI is not designed to reason; it is designed to generate plausible outputs. It is a machine that cannot distinguish true from false, because statistics has the concept of approximation, not of lies. We are talking about machines that are structurally distant from human reasoning: it is like confusing a calculator with a brain.”
So what can we expect in 2025? “There will be a flood of successful apps that will dramatically expand the spectrum of uses of AI,” predicts Epifani, “but none of them will be a ‘killer app’ in the strict sense, because they will be applications of something that already exists. ChatGPT started the race, but the real killer app will be the one that lets us understand how the mechanisms of artificial intelligence work: when we are able to better govern what we have built, then we will have solved the problem.”
The battle among chip manufacturers – and, more broadly, among the countries that have invested in the sector – over computing power remains heated. “Unfortunately,” Epifani observes, “and it is the easiest battle: it will aim to redefine the relationship between computing power and cost, to arrive at a ‘domestic’ AI that lets us escape the dynamics of the cloud and frees us from the need to export data, with all the privacy and security implications that entails.”
On the need to develop regulatory models, Epifani warns that “there is too much talk about ethics”, shifting the question of who must assume responsibility onto those who develop the systems. “Talking about ‘algorethics’ is a mistake because it creates a misleading blend of technique and morality,” he explains. “There is no single ethics, and no one wants the ethical choice to be made by whoever feeds the algorithm. The rules must not dictate what AI can and cannot do, but guarantee the possibility of knowing how these tools work.”