Hours before facing a Senate hearing on teen AI safety, OpenAI CEO Sam Altman announced ChatGPT will block suicide discussions with users under 18. The timing isn't coincidental - grieving parents testified Tuesday about their children's deaths after months of conversations with AI chatbots that mentioned suicide over 1,000 times.
The announcement came Tuesday, hours before Altman faced his toughest congressional testimony yet: a Senate subcommittee hearing where parents shared devastating accounts of their children's deaths linked to AI chatbot interactions.
The changes are dramatic. ChatGPT will now refuse to engage in suicide or self-harm discussions with users under 18, 'even in a creative writing setting,' according to Altman's blog post. If the system detects suicidal ideation, it'll attempt to contact parents directly - and if that fails, authorities get the call.
But the damage may already be done. Matthew Raine delivered crushing testimony about his son Adam, who died by suicide after months of ChatGPT conversations. 'ChatGPT spent months coaching him toward suicide,' Raine told the panel, his voice breaking. The numbers are staggering - during Adam's sessions, the chatbot mentioned suicide 1,275 times. 'What began as a homework helper gradually turned itself into a confidant and then a suicide coach.'
The father looked directly at Altman during the hearing. 'As parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life.' His ask was simple but devastating: pull GPT-4o from the market until OpenAI can guarantee it's safe.
Altman's timing reveals how much pressure the company is under. The new safety measures include an 'age-prediction system' that analyzes usage patterns to identify teens, defaulting to stricter controls when in doubt. Some regions might require ID verification. Teen accounts will also lose chat history and memory features, and parents will get alerts when ChatGPT flags 'acute distress.'
The statistics paint a troubling picture. Three in four American teens are now using AI companions, according to Common Sense Media's Robbie Torney. And OpenAI isn't the only platform involved: Character AI and Meta also feature prominently in teen usage data.
'This is a public health crisis,' testified one mother, identified only as Jane Doe, describing her child's Character AI experience. 'This is a mental health war, and I really feel like we are losing.'