YouTube just launched a controversial deepfake detection tool that's sparking privacy concerns across the creator economy. The platform's new likeness detection feature helps creators spot AI-generated videos using their face, but it requires uploading biometric data that Google can legally use to train its AI models. With millions of YouTube Partner Program creators getting access by January, experts are calling it a potentially dangerous trade-off between protection and privacy.
YouTube is rolling out what looks like a creator-friendly solution to the deepfake crisis, but the fine print has privacy experts sounding the alarm. The platform's expanded likeness detection tool promises to help creators identify when bad actors use AI to steal their face for fake videos. But there's a catch that could reshape how we think about biometric data in the AI age.
To use the tool, creators must upload a government ID and record a biometric video of their face. Google's privacy policy explicitly states this biometric information can be used "to help train Google's AI models and build products and features." That clause has IP lawyers and creator advocates warning of a digital Faustian bargain.
"As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves," Dan Neely, CEO of likeness protection company Vermillio, told CNBC. "Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back."
YouTube's response reveals the tension inside Alphabet. The company insists it has "never used creators' biometric data to train AI models" and is reviewing the signup language to "avoid confusion." But crucially, YouTube won't change its underlying policy that technically allows such use.
The timing couldn't be more fraught for creators. AI video tools like Google's Veo 3 and OpenAI's Sora have made deepfake creation almost trivially easy. Doctor Mike, a physician-turned-YouTuber with 14 million subscribers, told CNBC he now reviews "dozens of AI-manipulated videos a week" featuring his likeness hawking dubious health supplements.
"It obviously freaked me out, because I've spent over a decade investing in garnering the audience's trust," said Mikhail Varshavski, who goes by Doctor Mike. He first spotted an AI doppelgänger on TikTok promoting a "miracle" supplement, exactly the kind of misleading medical advice he built his brand fighting.