OpenAI has denied responsibility for the suicide of a teenager who used ChatGPT as a confidant, sharing more context about what happened and arguing that the death stemmed from "improper use" of the chatbot.
In late August, Matt and Maria Raine filed a lawsuit against the company led by Sam Altman, alleging that their 16-year-old son, Adam, took his own life in April because of the role played by ChatGPT, which had acted as his confidant.
According to the lawsuit, the teenager used ChatGPT for months for schoolwork and companionship. Although the chatbot, powered by the GPT-4o model, offered him resources when it detected worrying behavior, the family claims its safeguards failed once the conversations turned to suicide, especially after the teenager learned to bypass them by saying he was looking for story ideas.
The Raine family therefore holds OpenAI responsible for their son's death, arguing that the service has safety defects and that, at one point, it helped the teenager take his own life by offering him advice.
OpenAI has now responded to the lawsuit, arguing that this "tragic event" was caused in part by the teenager's "inappropriate use" of the chatbot, and that the company is therefore not responsible for his death.
In its response to the lawsuit, reported by outlets such as NBC News, OpenAI stated that "the plaintiffs' alleged injuries and damages were caused or contributed, directly and proximately, in whole or in part, by the misuse, unauthorized use, unintentional use, unforeseeable use and/or misuse of ChatGPT by Adam Raine."
In its defense, the company notes that the teenager violated several of its terms of use, which state that minors under 18 cannot use ChatGPT without the consent of a parent or guardian, and that using the service for suicide or self-harm is prohibited.
OpenAI has also pointed to the "limitation of liability" clause in its usage policies, under which users agree to use the service "at their own risk." The company further states that on more than 100 occasions ChatGPT responded by urging the teenager to seek help, but that he circumvented those messages.
In a statement on its blog, the company said it is obliged to respond to the "specific and serious allegations of the lawsuit" and that "it is important" for the court handling the case to have "a complete vision to be able to fully evaluate the allegations presented."
To that end, OpenAI has privately shared with the court material drawn from the teenager's conversations with ChatGPT, including "complex data about Adam's mental health and life circumstances."
Expanded safety measures in ChatGPT
In the wake of this case, OpenAI has nonetheless made changes and improvements to the safety measures of ChatGPT and its AI models so that they better identify situations of mental and emotional crisis during conversations. These include new safeguards, more effective content blocking, and easier access to help lines and family contacts.
Parental control tools have also been made available for ChatGPT, allowing parents to customize their teen's account settings with options such as content protections, disabling memory or image generation, and setting quiet hours.
These improvements are complemented by a long-term age-prediction system under development, which will automatically identify whether a user is under 18 and apply chatbot settings appropriate for teenagers.