A disturbing gig economy is emerging in the shadows of Telegram. Dozens of channels are actively recruiting what they call 'AI face models' - mostly women who unknowingly (or willingly) lend their faces to sophisticated deepfake scam operations. According to a WIRED investigation, these models are being deployed to conduct up to 100 video calls per day, their likenesses weaponized through AI to dupe victims out of money - a dark evolution of romance and investment fraud.
The job posting sounds almost legitimate at first glance. Work from home. Flexible hours. Good pay. But buried in the requirements is something far more sinister: applicants need to be comfortable having their face used for 'AI modeling' - a euphemism that barely conceals the criminal enterprise underneath.
WIRED's Matt Burgess spent weeks infiltrating dozens of Telegram channels where these recruitment operations run openly. What he found reveals how deepfake technology has spawned an entire underground labor market, complete with job listings, interview processes, and performance quotas. The models being hired aren't creating content for entertainment or marketing. They're becoming the faces of fraud.
The mechanics are chillingly efficient. Scammers capture video and images of recruited models, then feed that footage into AI systems that can generate realistic video calls in real time. Some operations demand their 'employees' participate in up to 100 video calls per day - an industrial scale that would be impossible without synthetic media technology. Victims believe they're video chatting with a real person, developing trust and emotional connections that scammers exploit to extract money through romance scams, fake investment schemes, or cryptocurrency fraud.
What makes this particularly insidious is the blurred line of complicity. Some models may genuinely not understand how their likeness will be used, lured by promises of easy income. Others likely know exactly what they're signing up for. The Telegram channels reviewed by WIRED don't exactly hide their intent - the term 'AI face model' itself suggests something beyond traditional modeling work. But the channels operate in linguistic grey areas, using coded language that provides just enough plausible deniability.