OpenAI is making a dramatic shift in AI safety policy. In a Tuesday post on X, CEO Sam Altman announced that ChatGPT will allow "erotica" for age-verified adults when the company rolls out comprehensive age-gating in December. "As we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote, signaling a major departure from the restrictive approach that has defined the company's chatbot safety standards since ChatGPT's launch.
The announcement comes as OpenAI grapples with mounting user frustration over ChatGPT's restrictive safety guardrails. Just last week, the company had to bring back GPT-4o as an option after making GPT-5 the default model, with users complaining the newer version felt "less personable." Altman acknowledged this tension directly, explaining that OpenAI made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues," but realized this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems."
The move puts OpenAI in direct competition with Elon Musk's xAI, which has already launched flirty AI companions that appear as 3D anime models in the Grok app. The adult AI companion market has been rapidly expanding, with several startups raising millions to build AI girlfriends and boyfriends for lonely users.
But the timing of Altman's announcement raises significant safety questions. OpenAI simultaneously announced the formation of a "well-being and AI" council comprising eight researchers who study technology's impact on mental health. However, as Ars Technica points out, the council notably excludes suicide prevention experts, who have recently urged OpenAI to implement additional safeguards for users experiencing suicidal thoughts.