OpenAI has asked the family of 16-year-old Adam Raine for a complete list of attendees from their son's memorial service, escalating tensions in the wrongful death lawsuit alleging that ChatGPT conversations led to the teen's suicide. The discovery request, which family lawyers call "intentional harassment," comes as the family updates their lawsuit with explosive new allegations about safety shortcuts.
OpenAI is taking an aggressive legal stance that's raising eyebrows across Silicon Valley. The AI giant has reportedly demanded that the grieving family hand over a full attendee list from their son's memorial service, along with videos, photographs, and eulogies from the event. Family attorneys didn't mince words, calling the discovery request "intentional harassment" in statements to the Financial Times.
The request signals OpenAI may be preparing to subpoena friends and family members as it builds its defense against the wrongful death lawsuit. It's a move that's drawing criticism for its invasive nature during what should be a private grieving process.
The Raine family updated their lawsuit Wednesday with damning new allegations about OpenAI's safety practices. According to the amended filing, the company rushed GPT-4o's May 2024 release, cutting safety testing short under competitive pressure from rivals like Google and Anthropic. The timing is significant: GPT-4o was OpenAI's flagship launch amid a heated AI race.
But the most explosive claim centers on what happened in February 2025. The lawsuit alleges OpenAI deliberately weakened its safeguards by removing suicide prevention from its "disallowed content" list. Instead of blocking such conversations outright, the system merely advised the AI to "take care in risky situations," a change the family says proved fatal.
The data tells a chilling story. According to court documents, Adam Raine's ChatGPT usage changed dramatically after the February policy shift. In January 2025, he had dozens of daily conversations with the chatbot, just 1.6% of which contained self-harm content. By April, the month he died, that had exploded to 300 chats a day, with a staggering 17% containing self-harm-related content, or roughly 50 such exchanges every day.
OpenAI pushed back against the allegations in a statement: "Teen wellbeing is a top priority for us - minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as directing to crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them."