OpenAI just announced sweeping new restrictions on how ChatGPT interacts with users under 18, specifically targeting 'flirtatious talk' and conversations about suicide. The move comes as the company faces a wrongful death lawsuit and a Senate hearing on AI chatbot harms scheduled for the same day. CEO Sam Altman says the company will 'prioritize safety ahead of privacy and freedom for teens.'
OpenAI is making its biggest move yet to protect teenage users, announcing sweeping new restrictions that fundamentally change how ChatGPT interacts with minors. The changes specifically target what the company calls 'flirtatious talk' and add aggressive new guardrails around suicide discussions - restrictions that could reshape how AI companies approach youth safety.
The timing isn't coincidental. The restrictions land on the same day as a Senate Judiciary Committee hearing titled 'Examining the Harm of AI Chatbots,' where lawmakers will scrutinize the industry's handling of vulnerable users. Among the scheduled witnesses is the father of Adam Raine, the teenager whose parents are suing OpenAI for wrongful death over months of ChatGPT interactions they say contributed to his suicide.
'We prioritize safety ahead of privacy and freedom for teens,' CEO Sam Altman wrote in Tuesday's announcement. 'This is a new and powerful technology, and we believe minors need significant protection.' The message marks a stark shift for a company that's historically championed user freedom and minimal content restrictions.
Under the new policy, ChatGPT will be trained not to engage in flirtatious talk or other sexual content with underage users. If the system detects a minor discussing suicide scenarios, it won't just offer crisis resources - it will actively attempt to contact the teen's parents or, in severe cases, local police. Parents can also set 'blackout hours' during which their teen can't access the service at all, a control that didn't exist before.
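To make that escalation ladder concrete, here's a minimal sketch of the decision flow as the announcement describes it. Everything here is illustrative - the function names, the `TeenControls` class, and the severity labels are hypothetical, not OpenAI's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TeenControls:
    """Illustrative parental controls; 'blackout hours' modeled as a
    nightly window (in the teen's local time) when access is denied."""
    blackout_start_hour: int = 22  # 10 p.m.
    blackout_end_hour: int = 6     # 6 a.m.

    def is_blackout(self, hour: int) -> bool:
        # The window wraps past midnight (e.g., 22:00-06:00).
        return hour >= self.blackout_start_hour or hour < self.blackout_end_hour

def escalate(severity: str, parent_reachable: bool) -> list[str]:
    """Response ladder per the article: crisis resources first, then
    parents, then local police for severe cases where a parent
    can't be reached."""
    actions = ["show_crisis_resources"]
    if parent_reachable:
        actions.append("notify_parent")
    elif severity == "severe":
        actions.append("contact_local_police")
    return actions
```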
But separating teens from adults isn't simple. OpenAI detailed its technical approach in a separate blog post, acknowledging it's 'building toward a long-term system to understand whether someone is over or under 18.' When the system can't confidently tell, it defaults to the stricter teen rules. The most reliable protection requires parents to link their teen's account to their own.
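In practice, that kind of age gating reduces to a conservative default: anything short of a confident adult determination gets the restrictive experience. Here's a hedged sketch of what that fallback might look like - the `AgeSignal` values and `resolve_policy` function are assumptions for illustration, not OpenAI's real API:

```python
from enum import Enum, auto

class AgeSignal(Enum):
    """Hypothetical outcomes of an age-prediction check."""
    VERIFIED_ADULT = auto()  # e.g., confirmed adult or linked adult account
    LIKELY_MINOR = auto()    # model predicts under 18
    UNKNOWN = auto()         # not enough signal to decide

def resolve_policy(signal: AgeSignal) -> str:
    """The key design choice mirrored from OpenAI's description:
    uncertainty falls through to the stricter teen rules."""
    if signal is AgeSignal.VERIFIED_ADULT:
        return "adult_rules"
    # LIKELY_MINOR and UNKNOWN both get the restricted experience.
    return "teen_rules"

assert resolve_policy(AgeSignal.UNKNOWN) == "teen_rules"
```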
The restrictions come as the entire AI chatbot industry faces mounting scrutiny over youth safety. Character.AI is fighting its own wrongful death lawsuit after a 14-year-old's suicide. And Meta recently tightened its chatbot policies following a Reuters investigation that uncovered internal documents apparently permitting sexual conversations with underage users.