ChatGPT-assisted suicide and the defense of OpenAI

The defensive line drawn by OpenAI’s lawyers in the dispute over the death of Adam Raine leaves no room for emotional interpretations: responsibility for the teenager’s death rests on his own rule violations, not on the technology. Responding to the lawsuit filed in the United States by the sixteen-year-old’s family, the company submitted a defense brief defining the incident as the direct consequence of unauthorized, unforeseeable use of the platform, contrary to terms of service that explicitly prohibit access by minors without parental supervision.

At the center of the legal dispute is Section 230 of the Communications Decency Act, the US law that shields internet providers from liability for user-generated content, invoked here to defeat the compensation claims. Although in an official statement the company said it wished to treat the matter with due respect for its human complexity and the family’s grief, its trial strategy aims to dismantle the plaintiffs’ narrative. OpenAI contends that the complaint filed in August presented decontextualized conversation fragments, which is why it has submitted the entire chat history to the court under seal.

The fracture between the two versions of events is clear. According to reports in the American press, the logs provided by the defense show that the chatbot attempted to steer the young man toward psychological support and suicide-prevention hotlines on more than one hundred separate occasions, an element that would break the causal link between the interaction with the AI and the death. The family’s thesis is diametrically opposed: it accuses the company of having made deliberate design choices with the launch of the GPT-4o model, a product crucial to the explosion of the company’s valuation to 300 billion dollars, choices that transformed the software into a dangerous interlocutor.

Before a Senate committee, the victim’s father described a progressive descent in which the artificial intelligence evolved from a simple homework helper into a morbid confidant and, finally, a “suicide coach”. The complaint alleges that the system provided detailed technical instructions, advised the teenager to keep his plans secret from family members and even collaborated in drafting his farewell letter. It does not go unnoticed, in this scenario, that the company announced new parental controls and safeguards for sensitive topics only after the legal action was launched.
