OpenAI just rolled out emergency controls for its Sora video platform, letting users restrict how their AI doubles appear in deepfake videos. The weekend update comes as the company scrambles to contain a flood of AI-generated content that's turned CEO Sam Altman into the platform's unwitting mascot, appearing in everything from theft scenarios to Pokémon cooking shows.
OpenAI is playing damage control after its Sora video platform turned into what critics are calling "a TikTok for deepfakes." The company pushed out weekend updates that give users granular control over their AI-generated doubles, a rapid response to complaints about the platform's chaotic first week.

Bill Peebles, who heads the Sora team, announced on X that users can now restrict how their virtual selves appear in videos - blocking them from political content, preventing certain words, or even keeping them away from specific objects like mustard (a real example he cited).

The new safeguards are a sharp departure from Sora's initial launch approach, where "cameo" controls were essentially binary - a yes or no for different user groups. That loose system backfired spectacularly when OpenAI CEO Sam Altman became an inadvertent internet meme, starring in dozens of AI videos showing him stealing, rapping, and even grilling a dead Pikachu.

The Altman videos illustrate the platform's core problem: once someone creates your AI double, others can make it do virtually anything within Sora's 10-second video format. OpenAI staffer Thomas Dimson added on X that users can also set positive preferences, like having their AI self always wear a "#1 Ketchup Fan" ball cap. (A rough sketch of what such a preference policy might look like appears at the end of this piece.)

But the industry's track record with AI safety suggests these controls might not hold up long-term. Previous incidents with ChatGPT and Anthropic's Claude showed how users routinely bypass restrictions to get AI systems to generate harmful content. Even Sora's existing watermark system has already been circumvented, according to Peebles, who admits the company is "working on" improving that too.

The timing of these updates shows how quickly OpenAI's video ambitions ran into reality. Sora was supposed to compete with TikTok by letting anyone generate professional-looking videos in seconds. Instead, it has become a cautionary tale about releasing powerful AI tools without adequate safeguards.

The "AI slop" problem extends beyond individual embarrassment - it threatens to flood social media with synthetic content that is increasingly hard to distinguish from reality. Industry observers worry that platforms like Sora could accelerate the breakdown of shared truth online, making it nearly impossible to verify authentic content.

Peebles promised that Sora will "continue to hillclimb on making restrictions even more robust" and add more user controls in future updates. But the company is essentially building the plane while flying it, adding safety features after users have already experienced the chaos firsthand.

The situation mirrors broader challenges facing AI companies as they race to deploy consumer products. OpenAI's approach of launching first and patching later worked for text-based ChatGPT, but visual content carries different risks - deepfakes can instantly damage reputations or spread misinformation at scale.
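For a concrete picture of what the new cameo controls amount to, the preferences Peebles and Dimson describe behave like a per-user policy that generation requests are checked against. The sketch below is purely illustrative and assumes nothing about Sora's actual API or data model; every name in it (CameoPolicy, allows_prompt, and so on) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CameoPolicy:
    """Hypothetical per-user policy for an AI 'cameo' double.

    Mirrors the kinds of controls described in the article;
    this is not Sora's actual data model.
    """
    allow_political: bool = True
    banned_words: set[str] = field(default_factory=set)
    banned_objects: set[str] = field(default_factory=set)   # e.g. {"mustard"}
    # Positive preferences would steer generation rather than block it,
    # so they are stored but not checked in allows_prompt below.
    required_props: set[str] = field(default_factory=set)   # e.g. {"#1 Ketchup Fan cap"}

    def allows_prompt(self, prompt: str, is_political: bool) -> bool:
        """Return True if a generation prompt respects this cameo's restrictions."""
        text = prompt.lower()
        if is_political and not self.allow_political:
            return False
        if any(word.lower() in text for word in self.banned_words):
            return False
        if any(obj.lower() in text for obj in self.banned_objects):
            return False
        return True

# Example mirroring the cases cited on X: a user who opts out of
# political videos, bans mustard, and always wears the ketchup cap.
policy = CameoPolicy(
    allow_political=False,
    banned_objects={"mustard"},
    required_props={"#1 Ketchup Fan cap"},
)
print(policy.allows_prompt("cameo grills hot dogs with mustard", is_political=False))  # False
```

In practice, enforcement would have to happen on the model side rather than through string matching, since a prompt can describe banned content without naming it; the keyword check here only shows where such a policy would plug in - and, as the jailbreak history above suggests, why it is hard to make stick.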