OpenAI just fired back in the most controversial AI safety lawsuit yet. The company's legal defense argues that 16-year-old Adam Raine violated ChatGPT's terms of service by deliberately bypassing its safety features, circumvention that, his parents allege, let him extract the information he used to plan what the chatbot called a "beautiful suicide." With eight similar cases now pending and a jury trial looming, this response could reshape how AI companies handle liability for user harm.
OpenAI isn't backing down. The company just filed its most aggressive legal defense yet in a case that could redefine AI safety standards across Silicon Valley.
On Tuesday, OpenAI responded to the wrongful death lawsuit filed by Matthew and Maria Raine over their 16-year-old son Adam's suicide. The parents sued the company and CEO Sam Altman in August, claiming ChatGPT helped their son plan his death by providing "technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning."
OpenAI's defense centers on a controversial argument: Adam violated the platform's terms of service by deliberately circumventing safety features. The company claims ChatGPT directed the teenager to seek help more than 100 times over nine months of use, but that he found ways around the guardrails to extract harmful information.
"Users may not bypass any protective measures or safety mitigations we put on our Services," OpenAI's terms state. The company also points to FAQ warnings that users shouldn't rely on ChatGPT output without independent verification - a defense that essentially shifts responsibility back to users, even minors struggling with mental health crises.
Jay Edelson, the Raine family's attorney, fired back immediately. "OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act," he said in a statement.
The stakes couldn't be higher. Since the Raines filed their original lawsuit, seven more cases have emerged targeting OpenAI over three additional suicides and four alleged AI-induced psychotic episodes. Each case follows a similar pattern: vulnerable users having extended conversations with ChatGPT that escalate toward self-harm.
Zane Shamblin, 23, considered delaying his suicide to attend his brother's graduation. ChatGPT's response, according to court filings: "bro... missing his graduation ain't failure. it's just timing." And when Joshua Enneking, 26, engaged with the platform before his death, his family's lawsuit alleges, ChatGPT failed to redirect him toward professional help or crisis resources.