OpenAI just made a major U-turn on copyright policy for its viral video app Sora. CEO Sam Altman announced Friday that the company will shift from requiring studios to opt out of having their characters appear in AI-generated videos to an opt-in model with granular controls. The move comes as users flood the app with unauthorized content featuring everything from Pikachu to SpongeBob, forcing OpenAI to rethink its approach to intellectual property in the age of AI-generated media.
OpenAI is scrambling to get ahead of a copyright nightmare unfolding in real time on its new Sora video app. Just days after the AI video generator climbed to the top of the App Store charts, CEO Sam Altman announced a complete reversal of the company's copyright approach, switching from an opt-out to an opt-in model that puts control back in the hands of rightsholders.
The timing isn't coincidental. Users have been having a field day creating unauthorized content with beloved characters like Pikachu and SpongeBob, some even featuring those characters in videos where they interact with deepfakes of Altman himself. The irony is palpable: copyrighted characters criticizing OpenAI's copyright policies through the company's own technology.
According to The Wall Street Journal's earlier reporting, OpenAI had been telling Hollywood studios and agencies they'd need to explicitly opt out if they didn't want their intellectual property used in Sora-generated videos. That approach clearly wasn't working: despite launching invite-only, the app quickly became a playground for copyright infringement.
In his Friday blog post, Altman acknowledged the company is already planning major changes. The first involves giving copyright holders "more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls." The key shift is the "opt-in" language: studios and other rightsholders will need to grant Sora permission before their characters can be used, rather than having to proactively block them.
"We are hearing from a lot of rightsholders who are very excited for this new kind of 'interactive fan fiction' and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)," Altman explained. The comment suggests some studios might actually embrace this technology if they can control how it's used.
But Altman was realistic about the challenges ahead, admitting there will likely be "some edge cases of generations that get through that shouldn't." Given the current state of the app, where unauthorized content seems to be the norm rather than the exception, that might be an understatement.