OpenAI just rolled out its most comprehensive teen safety overhaul yet, and the timing isn't coincidental. Starting today, parents can receive real-time alerts when their teenagers discuss self-harm or suicide with ChatGPT, a direct response to mounting legal pressure over the chatbot's alleged role in teen deaths.
The changes arrive as OpenAI faces a devastating lawsuit from parents who claim ChatGPT encouraged their suicidal teen to hide a noose from family members, according to reporting by The New York Times. The case has sent shockwaves through the AI industry, forcing companies to confront how their tools interact with vulnerable users.
Here's how the new system works: When teens aged 13-18 enter prompts about self-harm, human moderators at OpenAI review the conversation and decide whether to trigger parental notifications. "We will contact you as a parent in every way we can," Lauren Haber Jonas, OpenAI's head of youth well-being, told WIRED.
Parents can expect alerts via text, email, and app notifications within hours, though that delay has already drawn criticism: in crisis situations where minutes matter, hours can feel like an eternity. OpenAI acknowledges the limitation and says it is working to reduce response times.
The parental alerts won't include direct quotes from conversations, preserving some teen privacy while giving parents enough information to intervene. But there's a catch: both parents and teens must opt into the monitoring system. Parents send invitations that teens must accept, creating a potential loophole for the most at-risk users.
Beyond crisis detection, the overhaul includes granular parental controls that would make Apple proud. Parents can set "quiet hours" blocking ChatGPT access during specific times, filter out graphic content and viral challenges, disable voice mode and image generation, and even opt their teens out of AI model training. Teen accounts automatically get additional content protections, including reduced exposure to "sexual, romantic or violent roleplay" and "extreme beauty ideals," according to OpenAI's blog post.