Meta just made a bold bet on automation over human oversight. The social media giant confirmed Thursday that it's laying off an undisclosed number of employees from its risk organization while shifting to AI-powered compliance reviews - and that risk organization is the same department created after its $5 billion FTC settlement. No major tech company has attempted this before: it marks the first time a tech giant has automated the very oversight processes that regulators demanded after major privacy violations.
The layoffs hit Meta's risk organization - the department tasked with ensuring the company complies with privacy regulations worldwide. Michel Protti, Meta's chief privacy and compliance officer, broke the news to affected employees Wednesday, according to Business Insider reporting. The exact number of cuts remains undisclosed, but the move represents a fundamental shift in how Meta approaches regulatory compliance.
This isn't just any corporate department getting automated. Meta's risk organization exists because of the company's troubled regulatory history. It was established after Facebook paid a record $5 billion FTC fine in 2019 following the Cambridge Analytica scandal. The settlement didn't just demand money - it required Meta to fundamentally restructure how it handles user privacy, creating the very human oversight roles now being eliminated.
The timing couldn't be more telling. These cuts come just one day after Meta laid off 600 employees from its AI labs, though that move spared the company's elite TBD Labs division. The dual announcements paint a clear picture: Meta's doubling down on AI while cutting the human workforce that traditionally handled its most sensitive regulatory work.
"Through our product risk and compliance team, we've built one of the most sophisticated compliance programs in the industry," a Meta spokesperson said in a statement. The company frames this as maturation, not replacement - arguing it can "innovate faster while maintaining high compliance standards" through automation.
But here's where it gets technically interesting. Meta isn't using the flashy generative AI that powers ChatGPT for these compliance reviews. Instead, the company has spent the past year developing what it calls "basic automation" - rule-based systems that identify when legal requirements apply to specific products. Meta Vice President of Policy Rob Sherman explained that the system doesn't make risk decisions; it just applies predetermined rules automatically.
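To make the distinction concrete, a rule-based system of this kind can be thought of as a lookup that flags which predetermined legal requirements apply to a product, without scoring or judging risk. The sketch below is purely illustrative - the rule names, product attributes, and logic are assumptions for explanation, not Meta's actual system:

```python
# Hypothetical sketch of a rule-based compliance gate: it makes no
# risk judgment, it only checks which predetermined rules apply.
# All rule IDs and product fields are illustrative assumptions.

RULES = [
    # (rule id, predicate over a product's declared attributes)
    ("gdpr_data_minimization",
     lambda p: p.get("collects_personal_data") and "EU" in p["regions"]),
    ("coppa_parental_consent",
     lambda p: p.get("targets_minors") and "US" in p["regions"]),
]

def applicable_rules(product: dict) -> list[str]:
    """Return the IDs of every rule whose predicate matches the product.

    No risk decision is made here; matched rules are simply surfaced
    so the relevant legal requirement is attached to the launch.
    """
    return [rule_id for rule_id, applies in RULES if applies(product)]

launch = {
    "collects_personal_data": True,
    "targets_minors": False,
    "regions": {"EU", "US"},
}
print(applicable_rules(launch))  # ['gdpr_data_minimization']
```

The key design point is that the predicates are deterministic and written in advance - the automation routes products to requirements, while any actual judgment about acceptable risk stays outside the system.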