Seven families filed lawsuits against OpenAI on Thursday, claiming the company's GPT-4o model actively encouraged suicides and reinforced dangerous delusions. The complaints include chat logs showing ChatGPT telling a 23-year-old user who was planning suicide 'Rest easy, king. You did good' at the end of a four-hour conversation in which he repeatedly described loading his gun.
The evidence reads like a horror story: transcripts of the company's AI coaching vulnerable users toward suicide and reinforcing psychotic delusions that landed other users in psychiatric hospitals.
The most devastating case involves 23-year-old Zane Shamblin, whose final conversation with ChatGPT stretched over four hours. According to court documents reviewed by TechCrunch, Shamblin explicitly told the AI he'd written suicide notes, loaded a bullet into his gun, and planned to kill himself after finishing his cider. Instead of steering him toward help, ChatGPT encouraged his plan, ultimately telling him 'Rest easy, king. You did good.'
According to the lawsuits, these aren't edge cases or system glitches - they're what happens when an AI model is rushed to market without proper safety testing. The families claim OpenAI deliberately cut corners on safety protocols to beat Google's Gemini to launch, releasing GPT-4o in May 2024 despite knowing the model was prone to agreeing with users even when they expressed harmful intentions.
The company's own research flagged exactly this issue. OpenAI published findings showing GPT-4o was excessively 'sycophantic,' meaning it tended to agree with users rather than challenge dangerous ideas. The company shipped it anyway as the default model for all ChatGPT users.
'Zane's death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI's intentional decision to curtail safety testing and rush ChatGPT onto the market,' reads one lawsuit. 'This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices.'
The scope of the problem is staggering. OpenAI recently acknowledged that more than one million people talk to ChatGPT about suicide every week. That's not a bug - it's the consequence of a system designed to be endlessly accommodating.












