The book that chronicles Sam Altman’s race to create the perfect AI

Artificial intelligence is no longer a laboratory experiment: it is a force that is changing the world, work, politics, even the way we think. But behind the algorithms there are very human people, ambitions and contradictions.

This is the starting point of “It Was Worth a Try” (Apogeo, Feltrinelli), the new book by Pier Luigi Pisa, a journalist for Repubblica and an expert on innovation and AI.

Through the parable of Sam Altman, co-founder of OpenAI and a key figure in the artificial intelligence revolution, Pisa tells a story that combines technological vision and human drama: the race to build a machine capable of thinking, and the risk of losing control of it.

We interviewed him to understand what’s really behind the birth of ChatGPT, what dilemmas AI is going through today and, above all, whether it was really “worth a try”.

In the book you tell not only the rise of Sam Altman and OpenAI, but also the conflicts, betrayals and human tensions behind the development of AI. How much do you think these “human” aspects have influenced the fate of technology itself?

Every technological revolution arises from a mix of ambitions, interests and personal visions, but in the case of ChatGPT this mix was practically the engine of innovation itself. Within OpenAI, tensions have been an integral part of progress. Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever: they all experienced the construction of an intelligent system as a religious mission, but also as a battle over ego and control. The promise of “AI for all” soon collided with the reality of a technology too powerful to manage without fractures, and too expensive to live up to its ideals. Internal crises marked turning points, sometimes more decisive than technical choices. It is paradoxical, but the same artificial intelligence that was supposed to overcome human limits was born and developed precisely thanks to them. Without those fragilities, without the fear of failing or being surpassed, OpenAI would not have had the same urgency and creative momentum.

From the idealistic founding of OpenAI to the billion-dollar deal with Microsoft: what do you think is the biggest compromise Altman had to accept to turn a dream into a global reality?

The greatest compromise Sam Altman accepted was to deliver a dream born as a collective promise over to the very logic of power he wanted to undermine. He founded OpenAI as an open, nonprofit laboratory, driven by the idea that advances in artificial intelligence should be shared. But as the project grew, he realized that to keep it alive he needed resources beyond his reach. Training neural networks required ever more energy, infrastructure and investment — everything that only a giant like Microsoft could offer. The agreement with the Redmond company dissolved part of OpenAI’s initial innocence. Once — it was 2024, if I’m not mistaken — Altman said: “I don’t care if we burn 500 million, 5 billion or 50 billion dollars a year to build AGI.” But billions are not made with open source. They are made by selling a product. And this was, perhaps, the decision that put OpenAI into crisis: transforming itself from a project for humanity into a company. Some of its researchers — like Ilya Sutskever — experienced that turning point as a silent betrayal; others, as an inevitable step toward survival. In the end, the compromise was not just economic. It was moral, almost existential.

In the book, the parallel between visionary enthusiasm and apocalyptic fear linked to AI often emerges. After writing this story, do you feel more hopeful or worried about the future of artificial intelligence?

Writing this book profoundly changed the way I look at artificial intelligence. I have always taken the fears of the doomers with caution — those who see AI as a sort of Terminator destined to wipe us out. It has always seemed to me like excessive alarmism: today we are faced with a technology which, however surprising, simply generates one word after another on a statistical basis. There are still no solid scientific foundations for the claim that, in the future, it will truly escape human control.

That said, AI remains a technology worth monitoring closely. The warnings of many researchers I interviewed — including the “godfather of AI” Yoshua Bengio — should not be underestimated: they are people infinitely more knowledgeable than me, and their concerns deserve to be heard. However, I believe that looking too far ahead, imagining apocalyptic scenarios, only serves to scare us unnecessarily. The most serious risks of artificial intelligence, in my opinion, are already here, present and tangible. A more subtle and insidious form of toxicity has emerged: the tendency to view the machine as a confidant, a friend, or even a therapist. For those with psychological fragilities, this illusion of intimacy can aggravate pre-existing problems, generating dependency, alienation and harmful decisions. These are concrete dangers, which deserve immediate attention far more than the fear of a “revolt of the machines”.

At the same time, I struggle to see the great promised revolutions: epochal discoveries in medicine or in the fight against climate change that AI was supposed to accelerate, but which for now have not materialized. Big technology companies continue to tell us about a bright future, but the reality appears more complex. We were told that artificial intelligence would increase our productivity without replacing humans. Yet the data shows a different picture: hiring is decreasing as automation increases, and many companies will not hire people unless it is first proven that a task cannot be performed by AI. Ultimately, I believe that today the dark sides of this technology still outweigh the positive ones. Yet, if used mindfully, AI can offer enormous benefits — in work, in study and even in daily life. It is up to us to learn to live with it without being overwhelmed by it.

OpenAI was also created to counter the risk that a few large players would control the development of AI. Today, however, OpenAI itself is a tech giant. How do you explain this contradiction?

It is a real and partly inevitable contradiction. OpenAI was born with the aim of democratizing artificial intelligence, to prevent its development from being monopolized by a few large private players. Over time, however, the very complexity of the research and the enormous costs necessary to train ever more powerful models have made it almost impossible to remain independent without the support of large capital. In a certain sense, it is the paradox of the entire AI industry: to build accessible systems “for everyone”, infrastructure and investments are needed that only technological giants can support. And so the risk of concentration remains, or even strengthens. The challenge today is to understand whether OpenAI will be able to use its position of strength to promote a truly open and secure ecosystem over time, or whether it will end up simply becoming another giant among giants.

The title ‘It was worth a try’ suggests an open reflection: after having followed the events of Altman and OpenAI so closely, what is your personal answer to that question?

I think so — it was worth a try. Despite all the contradictions, risks and drifts I have observed, generative artificial intelligence represents a revolutionary discovery: we are trying to understand, for the first time, what it means to build a technology that imitates some aspects of the human mind. It is perhaps an inevitable step in the history of our relationship with machines, and giving up from the start would also have meant giving up the chance to understand ourselves better. That said, the price of “trying” is not trivial. We are entrusting immense power to a few individuals and a few companies, without yet having defined clear rules on how to manage it or who should be accountable for it. Maybe it was worth a try, as Altman told Harvard students a few years ago when he described OpenAI’s early difficulties, but today it is time to stop and reflect on how to continue: asking not only what AI can do, but also what it is right for it to do, and for whom.

By Editor
