Meta is rolling out a multiyear plan to replace thousands of third-party content moderators with advanced AI systems designed to catch scams, illegal media, and policy violations across Facebook, Instagram, and WhatsApp. The shift marks one of the largest deployments of AI for platform safety to date, in an industry that currently employs tens of thousands of contract workers globally. According to CNBC, the transition will fundamentally reshape how the company polices content for its 3.2 billion daily users.
Meta just made its biggest bet yet that AI can do what tens of thousands of human moderators currently handle - and it's about to reshape an entire industry in the process.
The company confirmed it's beginning a multiyear deployment of advanced AI systems that will take over content enforcement tasks ranging from detecting financial scams to flagging illegal media across Facebook, Instagram, and WhatsApp. The systems represent a fundamental shift away from Meta's reliance on third-party vendors like Accenture and Cognizant, which currently employ an estimated 15,000 to 20,000 contract moderators who review content flagged by Meta's algorithms.
CNBC first reported the strategic pivot, which comes as Meta continues investing billions in AI infrastructure while simultaneously cutting costs across other divisions. The timing isn't coincidental - Meta spent roughly $5 billion on content moderation in 2025, with the bulk going to third-party contractors who review millions of posts daily.
The new AI systems build on Meta's existing automated moderation tools but represent a significant leap in capability. While current systems flag potential violations for human review, the advanced models will make final enforcement decisions on an expanding range of policy violations. Meta's been testing these capabilities since late 2025, according to people familiar with the rollout, with early deployments focused on clear-cut cases like known scam patterns and previously identified illegal content.