OpenAI just assembled an eight-member expert council to guide its AI safety measures, directly responding to mounting regulatory pressure over how its ChatGPT and Sora products affect users' mental health. The move comes weeks after the Federal Trade Commission launched an inquiry into AI chatbots' impact on children and teenagers, while OpenAI faces a wrongful death lawsuit linking ChatGPT to a teen suicide.
OpenAI is moving fast to get ahead of a regulatory storm that's been building for months. The company announced Tuesday it's formed an Expert Council on Well-Being and AI - eight specialists who'll help define what healthy AI interactions actually look like as ChatGPT and Sora reach millions of users daily.
The timing isn't coincidental. In September, the Federal Trade Commission launched a broad inquiry into how AI chatbots could harm children and teens, putting OpenAI squarely in regulators' crosshairs. The company is also fighting a wrongful death lawsuit from parents who blame ChatGPT for their teenage son's suicide.
"Through check-ins and recurring meetings, OpenAI said the council will help it define what healthy AI interactions look like," according to CNBC's report. The council officially launched with an in-person session last week, bringing together experts in psychiatry, psychology, and human-computer interaction.
The roster reads like a who's who of digital wellness research. Andrew Przybylski from Oxford's human behavior and technology program joins David Bickham from Boston Children's Hospital's Digital Wellness Lab. Northwestern's David Mohr, who runs the Center for Behavioral Intervention Technologies, sits alongside Georgia Tech's Munmun De Choudhury and Stanford's Dr. Sara Johansen, who founded the university's Digital Mental Health Clinic.
But OpenAI isn't just assembling advisors - it's racing to build concrete safety features. The company is developing an age prediction system that'll automatically apply teen-appropriate settings to users under 18. Parents can now get alerts if their child shows signs of acute distress while using ChatGPT, as part of the parental controls launched last month.
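To make that mechanics concrete, here's a minimal sketch of how an age-gating and distress-alert pipeline of this general shape could fit together. OpenAI hasn't published its implementation, so everything below - the names (predict_age_band, TEEN_SETTINGS, should_alert_parent), the signals, and the threshold - is a hypothetical stand-in, not the company's actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch only: OpenAI has not disclosed how its age prediction
# or distress-alert features work. All names and values are illustrative.

@dataclass(frozen=True)
class SessionSettings:
    graphic_content_allowed: bool
    crisis_resources_pinned: bool
    parental_alerts_enabled: bool

ADULT_SETTINGS = SessionSettings(graphic_content_allowed=True,
                                 crisis_resources_pinned=False,
                                 parental_alerts_enabled=False)
TEEN_SETTINGS = SessionSettings(graphic_content_allowed=False,
                                crisis_resources_pinned=True,
                                parental_alerts_enabled=True)

def predict_age_band(signals: dict) -> str:
    """Stand-in for a trained classifier over account and usage signals.

    Uses a conservative default: if age signals are missing or ambiguous,
    treat the user as a minor so the stricter settings apply.
    """
    age = signals.get("self_reported_age")
    return "adult" if age is not None and age >= 18 else "under_18"

def configure_session(signals: dict) -> SessionSettings:
    # Apply teen-appropriate defaults whenever the classifier predicts a minor.
    if predict_age_band(signals) == "under_18":
        return TEEN_SETTINGS
    return ADULT_SETTINGS

def should_alert_parent(settings: SessionSettings, distress_score: float,
                        threshold: float = 0.9) -> bool:
    # Notify a linked parent account only when alerts are enabled for this
    # session and a separate distress classifier is highly confident.
    return settings.parental_alerts_enabled and distress_score >= threshold

# Example: a session with no reliable age signal falls back to teen settings,
# and a high distress score would then trigger a parental alert.
settings = configure_session({"self_reported_age": None})
print(should_alert_parent(settings, distress_score=0.95))  # True
```

The conservative fallback in predict_age_band - treating uncertain cases as minors - is one defensible design choice for a system like this, not something OpenAI has confirmed.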
The council caps months of expanding safety controls, a response to what CNBC describes as "mounting scrutiny over how it protects users, particularly minors." OpenAI began informally consulting some council members while building its parental control features, then formalized the group as regulatory pressure intensified.