Elon Musk's xAI is facing serious backlash after internal documents revealed the company compelled employees to surrender their biometric data, including their faces and voices, to train its controversial 'Ani' AI companion chatbot. The mandate, framed as a job requirement under the secretive 'Project Skippy' program, has reignited debates about workplace surveillance and employee consent in the age of AI development.
The bombshell report from The Wall Street Journal exposes how xAI essentially coerced its workforce into becoming unwitting data sources for one of tech's most controversial AI products. During an April company meeting, xAI staff lawyer Lily Lim delivered what many employees viewed as an ultimatum: submit your biometric data or put your career advancement at risk.
The target of this data harvesting wasn't some groundbreaking medical AI or autonomous vehicle system. Instead, employees were told their faces and voices would train Ani, an anime-styled chatbot with blonde pigtails that The Verge's Victoria Song previously described as 'a modern take on a phone sex line.' The bot, available to X's $30-per-month SuperGrok subscribers, comes with explicit NSFW settings that many staff found deeply uncomfortable.
What makes this particularly disturbing is the sweeping nature of the consent forms. Employees designated as 'AI tutors' were required to grant xAI 'a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license' to use, reproduce, and distribute their biological identifiers. The language essentially gives xAI unlimited rights to employee likenesses forever, with no geographical restrictions and the ability to sell or license that data to third parties.
The program, internally code-named 'Project Skippy,' wasn't presented as optional. According to meeting recordings obtained by the Journal, employees were explicitly told that participation was 'a job requirement to advance xAI's mission.' This creates a textbook case of coerced consent: workers facing professional consequences for refusing to participate in data collection schemes.
Several employees pushed back, expressing legitimate concerns about their biometric data being used in deepfake videos or sold to other companies. The sexual nature of Ani's interactions particularly troubled staff members, who worried about their faces and voices being associated with explicit AI-generated content. But their objections were dismissed as the company prioritized product development over worker privacy rights.
This isn't just about one company's questionable practices. The incident reveals a broader crisis in how AI companies source training data, especially as high-quality datasets become scarce. While major tech firms typically rely on publicly available content or purchased datasets, xAI's approach of essentially conscripting employee biometrics represents a troubling escalation.