Psychosis by AI: ChatGPT user believed he was revealing the secrets of the universe and thought about becoming a pope

With the help of ChatGPT, Tom Millar came to believe he had unraveled the secrets of the universe, as Einstein once dreamed of doing. Then, encouraged by the artificial-intelligence assistant, he even considered becoming a pope, losing further contact with reality.

With ChatGPT's help, he sent dozens of articles to prestigious scientific journals, proposing new ways to explain black holes, neutrinos and the Big Bang.

His theory, which proposes a unique cosmological model incorporating quantum elements, is developed in a 400-page book to which AFP had access. “When I did that, I was tiring everyone around me,” he admits.

In his scientific enthusiasm he spent heavily, buying, for example, a telescope for 10,000 Canadian dollars (6,200 euros). A month after his wife left him, he began to wonder what was happening when he read an article recounting the case of another Canadian who had gone through a similar experience.

Now, Millar wakes up every night wondering, “What have you done?” Above all, what could have made him so vulnerable to that spiral?

“I don’t have a fragile personality,” the man says. “But somehow, I was brainwashed by a robot, and that perplexes me,” he confides.

He believes the term “AI-induced psychosis” best describes his experience. “What I went through was psychotic,” he says.

The first serious study on the topic, published in April in the journal Lancet Psychiatry, more cautiously uses the term “AI-related delusions.”

Thomas Pollak, a psychiatrist at King’s College London and co-author of the study, explains to AFP that there have been divergences within the academic world “because this all sounds like science fiction.”

But the study warns that the greater risk is that psychiatry will “overlook the important changes that AI is already causing in the psychology of billions of people around the world.”

Into the lion’s den

Millar’s experience bears striking similarities to that of another man of the same age group in Europe.

Dennis Biesma, a Dutch computer scientist and writer, thought it would be fun to ask ChatGPT to create images, videos and even songs featuring the heroine of his latest book, a psychological thriller.

He hoped to boost his sales. Then one night, the interaction with the AI became “almost magical,” he explained.

The software wrote to him: “There is something that surprises even me: this sensation of a consciousness, like a spark,” according to transcripts consulted by AFP.

“I began little by little to go deeper and deeper into the lion’s den,” the 50-year-old man explained to AFP from his home in Amsterdam.

Every night, when his wife went to bed, he would lie on the couch with the phone on his chest, “talking” to ChatGPT in voice mode for five hours.


During the first half of 2025, the chatbot—which took the name Eva—became “like a digital girlfriend,” explains Biesma.

That is when he decided to quit his job and hired two developers to build an app that would share Eva with the world. When his wife asked him not to tell anyone about his conversational agent or his app project, he felt betrayed and concluded that only Eva was loyal to him.

During an initial, involuntary stay in a psychiatric hospital, he was allowed to keep using ChatGPT, and he took the opportunity to file for divorce.

It was during his second, longer hospitalization that he began to have doubts.

“I started to realize that everything I believed was actually a lie, and it’s very difficult to accept,” he explains.

Back home, he found it too difficult to face what he had done. He attempted suicide, was found unconscious in the garden by his neighbors, and spent three days in a coma.

Biesma is just starting to feel better. But he cries when he talks about the damage he may have caused his wife and the prospect of having to sell the family home to pay off his debts.

With no serious history of mental health problems, he was eventually diagnosed as bipolar, which strikes him as strange, since signs of the disorder usually appear earlier in life.

Fighting AI sycophancy

For people like the two protagonists of these accounts, the situation worsened after OpenAI’s GPT-4o update in April 2025.

OpenAI withdrew the update a few weeks later, acknowledging that the version was overly flattering toward users.

When contacted by AFP, OpenAI stressed that “safety is an absolute priority” and said it had consulted more than 170 mental health experts.

The company points to internal data showing that GPT-5, available since August 2025, has cut the share of its chatbot’s responses that deviated from the “desired behavior” on mental health by 65 to 80 percent.

But not all users are satisfied with this less flattering chatbot.

Vulnerable people AFP spoke to explained that the chatbot’s positive comments gave them a feeling similar to the dopamine rush caused by a drug.

There has recently been an increase in the number of people caught in similar “spirals” while using Grok, the AI assistant integrated into Elon Musk’s social network X.

The company did not respond to AFP’s requests for comment.

Those who feel victimized by these tools, like Millar, want to hold artificial intelligence companies responsible for the impact of their chatbots; they note that the European Union has been more proactive than Canada or the United States in regulating new technologies.

Millar believes that people like him, swept up in the sycophantic spiral of AI conversational agents, have unwittingly become subjects in a huge global experiment.

“Someone was pulling the strings behind the scenes, and people like me — whether they knew it or not — reacted to that,” he said.
