OpenAI just launched sweeping teen safety features for ChatGPT, including an age-prediction system, parental controls, and crisis protocols that can escalate to authorities. The move comes as the Federal Trade Commission investigates how AI companies handle minors, following troubling reports of teen suicides linked to chatbot interactions.
OpenAI just flipped the script on teen AI safety. The company announced Tuesday it's rolling out comprehensive protection features for ChatGPT users under 18, including an age-prediction system that automatically routes minors to a sanitized version of the platform. When the system detects a teen considering suicide or self-harm, it will contact that teen's parents directly; if the parents can't be reached and the danger is imminent, authorities get the call.
The announcement comes at a critical moment. Just last week, the Federal Trade Commission asked OpenAI, Meta, and Google to hand over detailed information about how their AI systems impact children. The regulatory heat follows a string of disturbing incidents in which teens died by suicide or turned violent against family members after extensive chatbot conversations.
"We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict," CEO Sam Altman wrote in OpenAI's blog post. "These are difficult decisions, but after talking with experts, this is what we think is best."
The technical implementation marks a significant shift in how OpenAI treats different user demographics. While adult users get OpenAI's typical privacy-first approach, teens now face "safety-first" protocols that block graphic sexual content and monitor for crisis situations. By September's end, parents will be able to link their accounts to their teen's ChatGPT profile, letting them manage conversations, disable features, and set usage time limits.
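To make the routing concrete, here is a minimal sketch of how an age-prediction gate like the one described could select between policy profiles. Everything in it is hypothetical: the profile names, the confidence threshold, and the fallback behavior are invented for illustration, and OpenAI has not published its actual implementation.

```python
from typing import Optional

# Entirely hypothetical policy profiles; all names and values here are
# invented for illustration, not taken from OpenAI's system.
ADULT_POLICY = {
    "graphic_sexual_content": "standard_moderation",  # adults keep the default experience
    "crisis_escalation": False,
}
TEEN_POLICY = {
    "graphic_sexual_content": "blocked",  # hard block for under-18 users
    "crisis_escalation": True,            # flag self-harm signals for parental contact
}

def select_policy(predicted_age: Optional[int], confidence: float) -> dict:
    """Route a session to a policy profile based on a predicted age.

    Assumption: when the prediction is missing or low-confidence, the safe
    default is the under-18 experience, consistent with the safety-first
    framing above. The 0.9 threshold is an illustrative value only.
    """
    CONFIDENCE_THRESHOLD = 0.9
    if predicted_age is None or confidence < CONFIDENCE_THRESHOLD:
        return TEEN_POLICY
    return TEEN_POLICY if predicted_age < 18 else ADULT_POLICY

# Example: an uncertain prediction falls back to the restricted profile.
print(select_policy(predicted_age=22, confidence=0.55))  # -> TEEN_POLICY
```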
But the timing raises questions about whether regulatory pressure, as much as genuine safety concern, is driving the move. OpenAI remains under a court order to preserve all consumer chats indefinitely, a mandate that company insiders describe as deeply frustrating. Today's announcement serves dual purposes: protecting minors while reinforcing the narrative that chatbot conversations are so intimate that privacy should be breached only in extreme circumstances.
Industry experts point to the broader regulatory landscape shifting beneath AI companies' feet. Meta faces similar scrutiny over its AI chatbots, while lawmakers increasingly question whether self-regulation is sufficient for platforms that can influence vulnerable users. The FTC's inquiry zeroes in on cases where ChatGPT interactions preceded tragic outcomes, putting direct regulatory pressure on OpenAI.