Meta is fighting back against explosive claims that it illegally downloaded thousands of adult films to train its AI models. In a motion to dismiss filed this week, the tech giant argues that any pornography found on its corporate networks was downloaded by employees for personal use, not to power an adult version of its Movie Gen AI. The lawsuit from Strike 3 Holdings could cost Meta over $350 million if successful.
Meta has delivered its most unusual legal defense yet: the pornography downloaded on company networks was strictly for employees' personal entertainment, not AI training. The social media giant filed a motion to dismiss this week in response to a suit from Strike 3 Holdings, the adult film company that traced illegal downloads of its content back to Meta's corporate IP addresses.
The case centers on Strike 3's allegation that Meta secretly torrented around 2,400 adult films to train an unannounced adult version of its Movie Gen AI model. Strike 3 claims it uncovered not just downloads on traceable Meta IPs, but also evidence of a 'stealth network' of 2,500 hidden IP addresses used to conceal the activity. The potential damages could exceed $350 million, according to TorrentFreak's reporting.
But Meta isn't buying Strike 3's narrative. In its filing, the company argues that the downloads, spread across seven years, amount to nothing more than scattered personal use by individual employees. 'The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,' Meta's lawyers wrote.
The numbers tell Meta's story. Rather than the massive, coordinated data collection typical of AI training operations, the activity traced to Meta's corporate IPs amounted to roughly 22 downloads per year, or what Meta calls 'a few dozen titles per year intermittently obtained one file at a time.' Compare that with the hundreds of thousands of books or images typically used in AI training datasets, and Meta's personal-use defense starts to look more credible.
Timing also works in Meta's favor. The alleged downloads began in 2018, about four years before Meta's AI video research efforts even launched. 'These claims are bogus,' a Meta spokesperson told Ars Technica, emphasizing that the company's terms explicitly prohibit generating adult content, which undercuts any business case for training AI on such material.

