OpenAI denies responsibility for minor’s suicide, blames ‘misuse’ of ChatGPT

OpenAI has denied responsibility in the case of the suicide of a teenager who used ChatGPT as a confidant, sharing more context about what happened and arguing that the death was the fault of "improper use" of the chatbot.

In late August, Matt and Maria Raine filed a lawsuit against the company run by Sam Altman, alleging that their 16-year-old son, Adam Raine, took his own life in April due to the role played by ChatGPT, which had acted as his confidant.

According to the lawsuit, the minor used ChatGPT for months for schoolwork and companionship. Although the chatbot, powered by the GPT-4o model, offered resources to address worrying behaviors, the family affirms that the safeguards failed when the conversations turned to suicide, especially after the teenager learned to evade them by saying he was looking for story ideas.

The Raine family therefore holds OpenAI responsible for their son's death, pointing out that the service has safety defects and that, at one point, it helped the minor take his life by offering advice.

Now, OpenAI has responded to the lawsuit, arguing that this "tragic event" was caused, in part, by the minor's "improper use" of the chatbot, and that it is therefore not responsible for the teenager's death.

In its response to the lawsuit, reported by media outlets such as NBC News, OpenAI argued that "the plaintiffs' alleged injuries and damages were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine's misuse, unauthorized use, unintended use, unforeseeable use and/or improper use of ChatGPT."

To support its defense, the technology company detailed that the teenager violated several of its terms of use, such as the rule that minors under 18 cannot use ChatGPT without the consent of their parents or guardians, as well as the prohibition on using the service for purposes of suicide or self-harm.

OpenAI has also made reference to the "limitation of liability" specified in its usage policies, which states that users agree to use the service "at your own risk." Likewise, the company notes that, on more than 100 occasions, ChatGPT offered responses telling the teenager to seek help, but the minor circumvented them.

As a result, the company said in a statement on its blog that it is obliged to respond to the "specific and serious allegations of the complaint" and that "it is important" that the court hearing the case "has a complete view to be able to fully evaluate the allegations presented."

That is why OpenAI has privately shared information with the court, drawn from the minor's conversations with ChatGPT, including "complex data about Adam's mental health and life circumstances."

EXPANSION OF SAFETY MEASURES IN CHATGPT

Nonetheless, as a result of this case, OpenAI has made modifications and improvements to the safety measures of ChatGPT and its AI models so that they better identify situations of mental and emotional crisis during conversations, in addition to including new safeguards, more effective content blocking, and streamlined contact with support services and family members.

Likewise, parental control tools for ChatGPT have also been made available, allowing parents to customize their teen's account settings with options such as content protections, disabling memory or image generation, and the ability to set quiet hours.

In addition, these improvements are complemented by a long-term age-prediction system now in development, which will automatically identify whether a user is under 18 in order to apply a chatbot configuration suitable for teenagers.
