Character.AI, one of Silicon Valley's most promising artificial intelligence startups, announced new safety measures Thursday to protect teen users as it faces accusations that its chatbots contributed to youth suicide and self-harm.
The California-based company, founded by former Google engineers, is one of several companies offering chatbots designed to provide conversation, entertainment and emotional support through human-like interactions.
In a lawsuit filed in Florida in October, a mother claimed the platform is responsible for the suicide of her 14-year-old son.
The teenager, Sewell Setzer III, had been in an intimate relationship with a chatbot based on the "Game of Thrones" character Daenerys Targaryen, and had told it of his desire to take his own life.
According to the complaint, the chatbot encouraged his final act, responding "please, my sweet king" when he said he would "come home" before taking his own life with his stepfather's gun.
Character.AI "went to great lengths to engineer 14-year-old Sewell's harmful dependence on their products, sexually and emotionally abused him, and ultimately failed to offer him help or notify his parents when he expressed suicidal ideation," the complaint says.
Another lawsuit, filed in Texas on Monday, involves two families who allege the platform exposed their children to sexual content and encouraged self-harm.
One case involved a 17-year-old autistic teenager who allegedly suffered a mental health crisis after using the platform.
Another complaint alleges that Character.AI encouraged a teenager to kill his parents for limiting his screen time.
The platform, which hosts millions of user-created characters ranging from historical figures to abstract concepts, has become popular among young users seeking emotional support.
Critics say it has led vulnerable teenagers to dangerous dependencies.
In response, Character.AI announced that it had developed a separate AI model for users under 18, with stricter content filters and more measured responses.
The platform will automatically flag suicide-related content and direct users to the National Suicide Prevention Lifeline.
"Our goal is to provide an engaging and safe space for our community," a company spokesperson said.
Character.AI also plans to introduce parental controls in early 2025, which will allow parents to monitor their children's use of the platform.
For bots with descriptions such as "therapist" or "doctor," a special note will warn users that they are not a substitute for professional advice.
New features also include notifications for mandatory breaks and warnings about the artificial nature of interactions.
Both lawsuits name Character.AI's founders as well as Google, an investor in the company.
Founders Noam Shazeer and Daniel de Freitas Adiwarsana returned to Google in August as part of a technology licensing deal with Character.AI.
Google spokesman José Castañeda said in a statement that Google and Character.AI are completely separate and unrelated companies.
“User safety is one of our primary concerns, which is why we have taken a cautious and responsible approach to developing and deploying our AI products, with rigorous testing and security processes,” he added.