OpenAI is drawing a careful line in the sand with its controversial adult mode. The company confirmed to The Wall Street Journal that ChatGPT's long-delayed mature content feature will launch with text-only capabilities, keeping image, voice, and video generation off-limits. It's a calculated middle ground - allowing what OpenAI calls "smut" while avoiding what it categorizes as pornography. The distinction matters as AI companies grapple with where to draw ethical boundaries on adult content.
OpenAI is finally ready to talk dirty - but only in writing. The company's much-anticipated adult mode for ChatGPT will launch with a significant limitation that reveals how cautiously AI firms are approaching mature content. According to a company spokesperson speaking with The Wall Street Journal, the feature will support text-based conversations with adult themes while keeping the chatbot's image, voice, and video capabilities firmly restricted.
The distinction OpenAI is making centers on a somewhat subjective line: smut versus pornography. Written erotica falls into the acceptable "smut" category, while generated images or videos would cross into pornographic territory the company isn't ready to touch. It's a boundary that echoes traditional media classifications - romance novels sit on bookstore shelves, but visual adult content faces much stricter distribution rules.
CEO Sam Altman first teased the feature back in October, claiming OpenAI had mitigated the "serious mental health issues" around its AI models well enough to relax safety restrictions. The announcement caught the industry off guard, with Altman framing it as "erotica for verified adults" on social media. But the months-long delay since then suggests the implementation proved trickier than anticipated.
The text-only approach gives OpenAI several advantages. Written content is easier to moderate and less likely to generate deepfakes or non-consensual imagery - problems that have plagued other AI image generators. It also creates a testing ground where the company can gauge user behavior and refine safety systems before potentially expanding to other modalities. And if things go sideways, shutting down a text feature generates far less regulatory scrutiny than pulling visual content that might have already been misused.