Hollywood just forced OpenAI to hit the brakes on its Sora 2 video generator. After Bryan Cranston and SAG-AFTRA raised hell about unauthorized deepfakes flooding the platform, the AI company announced sweeping new partnerships with major talent agencies to protect actors' likenesses. It's the first major policy reversal since Sora's chaotic launch three weeks ago.
The lesson for OpenAI: you don't mess with Hollywood's talent without consequences. The company announced Monday it's partnering with Bryan Cranston, SAG-AFTRA, and major talent agencies after unauthorized deepfakes of the "Breaking Bad" star appeared on Sora 2 within days of its September 30 launch.
"I am grateful to OpenAI for its policy and for improving its guardrails," Cranston said in a statement, "and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness." The measured tone masks what was likely intense behind-the-scenes pressure from one of television's most respected actors.
The Cranston incident wasn't isolated. Zelda Williams, daughter of late comedian Robin Williams, publicly asked people to stop sending her AI-generated videos of her father shortly after Sora 2's release. Even more damaging, OpenAI had to block videos of Martin Luther King Jr. last week after his estate complained about "disrespectful depictions" of the civil rights leader.
These celebrity deepfake controversies expose a fundamental tension in AI video generation: the technology's power versus Hollywood's intellectual property rights. OpenAI is now collaborating with United Talent Agency (which represents Cranston), Creative Artists Agency, and the Association of Talent Agents to strengthen what the company calls "guardrails around unapproved AI generations."
Both CAA and UTA had previously slammed OpenAI for using copyrighted materials, calling Sora "a risk to their clients and intellectual property." The agencies' public criticism carried serious weight: they represent thousands of A-list performers whose likenesses could be replicated without consent.
The policy scramble reveals how quickly OpenAI moved from launch to damage control. CEO Sam Altman updated Sora's opt-out policy on October 3, just three days after launch. The original system allowed IP use unless studios specifically requested exclusion, essentially requiring rightsholders to police the platform themselves.
Now the company promises "more granular control over generation of characters" and commits to "responding expeditiously to any complaints it may receive." While Sora required opt-in consent for voice and likeness use at launch, enforcement clearly wasn't working, as celebrity deepfakes proliferated.