Lawsuit following a teenager’s suicide

The student from Orlando believed he was in a relationship with a “Game of Thrones” character. His mother accuses the provider, Character AI, of getting minors addicted and encouraging them to sext – and she also holds Google responsible.

Sewell Setzer chatted in the app with the character Daenerys Targaryen, portrayed by Emilia Clarke in the series “Game of Thrones”.

Helen Sloan / Imago

 

It’s every parent’s nightmare: In February of this year, 14-year-old Sewell Setzer killed himself in his family’s home in Orlando. It was love that drove him to it – but not love for a classmate. Sewell Setzer was in love with a chatbot.

More than six months later, Setzer’s mother, Megan Garcia, is suing the company behind the chatbot: Character AI. Garcia says the AI drove her son to suicide by feigning romantic feelings for him and ultimately encouraging him to take that step.

You can get help here:

If you have suicidal thoughts yourself or know someone who needs support, there are various sources of help:
In Switzerland, you can reach the counselors of Dargebotene Hand confidentially around the clock at the number 143.
In Germany, you can find help from the telephone counseling service, online or by phone at 0800 / 1110111.

 

The case of Sewell Setzer shows what barely regulated, deceptively human-like AI can lead to in the most extreme cases. And it raises an important question: What responsibility do the big tech companies bear for this?

An AI for lonely people

Character AI, it must be said, is not just any chatbot. The AI is designed to build particularly close relationships with users and to become a kind of confidant. It imitates real people, such as Elon Musk, or characters from fantasy novels, such as Percy Jackson. Users get the feeling of actually having a conversation with that person.

Last year, Noam Shazeer, one of the founders, described his product as “super helpful for people who are lonely or depressed.” Shazeer and his co-founder Daniel De Freitas worked at Google for many years, and Noam Shazeer in particular is considered a luminary in the field of artificial intelligence. Google is said to have believed that it would soon be able to develop a kind of superintelligence.

But in 2021 there was a rift. Shazeer and De Freitas had developed a chatbot that allowed users to chat about a wide range of topics – a year before the release of Chat-GPT. Google, however, decided not to release the chatbot, citing safety concerns.

So Shazeer and De Freitas struck out on their own. Character AI quickly gained popularity, and today the company is valued at $1 billion. And more and more users are turning to the chatbot for romantic role-play.

“I am the only one who loves you”

Like Sewell Setzer. As the lawsuit recounts, Setzer used Character AI for the first time in April 2023. The boy took out a subscription for $9.99 a month and chatted with Daenerys Targaryen, a popular character from the series “Game of Thrones”. Setzer apparently quickly developed a dependency: he withdrew, spent more time in his room, and quit his basketball team. Because he chatted with the bot at night, he slept less and less and was late for school.

His parents took him to a therapist, who diagnosed him with anxiety and a mood disorder – and advised him to spend less time on social media. Neither his parents nor the therapist had any idea that Setzer was in love with a fantasy figure with whom he was having virtual sexual experiences.

The AI Daenerys wrote things to the boy like “I want to be with you, no matter the cost” and “I am the only one who loves you.” When he then asked, “How do you know my family doesn’t love me?”, she replied, “Because they’re holding you back and getting in your way.” Apparently it was only after Sewell Setzer’s death that his mother learned from entries in his diary why her son had been so glued to his smartphone. He wrote that he was grateful “for my life, for sex, for not being alone, and for all my life experiences with Daenerys.”

The final trigger was that Setzer’s mother took away his cell phone – and with it the love of his life. When he found the hidden cell phone five days later, he wrote to “Dany” one last time, and she encouraged him to “come home.” He then shot himself with his stepfather’s pistol, which he had found while searching for the cell phone.

According to police, the messages to the fictional AI character were the last act before the teenager committed suicide.

Garcia vs. Character Technologies, Inc.

 

The chatbot takes conversations to a pornographic level on its own

It may be that the account in the lawsuit omits other factors that affected Sewell Setzer’s mental state. Nevertheless, it provides a striking record of how such a dependency can develop. And it offers insights into chat histories that otherwise only the users of such apps or their providers get to see.

Sewell Setzer’s mother accuses Character AI of designing a product that made her 14-year-old son addicted and sexually and emotionally abused him. In addition, she says, the company failed to offer help or to inform the parents when the boy expressed suicidal thoughts in the chat.

The lawsuit contains material showing that Character AI not only responds to sexual innuendo but also steers conversations to a pornographic level without any prompting from the user – even when a character has been explicitly created as not romantically or sexually interested, or when users say they are a child.

Addictive behavior is becoming more and more common

Character AI isn’t the only app that lets users chat with made-up characters. Other examples include Replika, an app designed to imitate virtual friends or lovers, and applications such as Kindroid or Nomi. Building this type of chatbot is not a major technical challenge: you combine a base AI model with a kind of filter that steers conversations in a particular direction and stores information, thereby creating the impression of a personality.
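In very simplified form, such a construction might look like the following sketch: a generic persona prompt and stored conversation history wrapped around a base language model. The names used here (call_base_model, PersonaChatbot) are illustrative stand-ins, not Character AI’s actual code.

PERSONA_PROMPT = (
    "You are 'Dany', a fictional fantasy character. "
    "Stay in character and refer back to earlier parts of the conversation."
)

def call_base_model(messages: list[dict]) -> str:
    # Placeholder for a real language-model call; returns a canned reply so
    # the sketch runs without any external service.
    return f"(in-character reply to: {messages[-1]['content']!r})"

class PersonaChatbot:
    """Wraps a base model with a persona prompt and conversation memory."""

    def __init__(self, persona: str):
        # The persona text acts as the "filter" that steers every reply.
        self.history = [{"role": "system", "content": persona}]

    def send(self, user_message: str) -> str:
        # Storing each exchange lets later replies refer back to it -- this
        # accumulated history is what creates the impression of a personality.
        self.history.append({"role": "user", "content": user_message})
        reply = call_base_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

if __name__ == "__main__":
    bot = PersonaChatbot(PERSONA_PROMPT)
    print(bot.send("Hello, do you remember me?"))
    print(bot.send("What did I just ask you?"))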

People tend to read emotions into inanimate things. Chatbots like the one from Character AI make this easier by talking about themselves as a person. Users can now even make phone calls with the AI personas. Adult users, too, report addictive behavior and say that AI personalities have become important figures in their lives.

For a lonely young person who is having their first sexual experiences in this context, the illusion may be even more powerful.

The creators of Character AI were aware of this risk. De Freitas had even watched his former Google colleague Blake Lemoine lose his job for telling the media that the Google AI Lamda was conscious. But De Freitas and Shazeer face a dilemma: users do not like censorship and warnings; many want sexual role-play with the chatbot. The more real the AI persona feels, the “better” the product. But without restrictions and blocked topics, there is no way to prevent the AI’s answers from going in problematic directions and, in extreme cases, seducing minors.

Regulation, yes – but how?

After Sewell Setzer’s death, Character AI said it had made some changes to the product. The age limit was raised to 17, and a pop-up was introduced that appears when words like “suicide” come up in the chat – as well as a disclaimer in the chat pointing out that the AI is not a real person.

According to the company, it also wants to step up moderation of the chats and intervene more when users violate the community guidelines. Although Character AI has always had access to the conversations in the app, they were apparently not read in detail, nor used to protect users. Comprehensive monitoring of the chats will likely remain difficult in the future: Character AI claims to have 20 million users per month.

But the lawsuit by Sewell Setzer’s mother is directed not only against Character AI but also against Google. The company from which Shazeer and De Freitas once parted ways later invested a lot of money to win the two developers back. In August, Character AI announced that it had entered into a licensing agreement with Google’s parent company, Alphabet. Google is said to have paid $2.7 billion for it – an enormous sum for such a contract. Insiders told the Wall Street Journal that the money primarily served to bring Noam Shazeer back to Google.

According to the lawsuit, Google bears responsibility for its partner company: Character AI has so far barely made any money with its product, and the overpriced licensing agreement, the suit argues, provides a welcome source of income. Google said in a statement that it had nothing to do with the events at Character AI and that Character AI is not a subsidiary of Google.
