The AI social network that was supposed to be a bot-only utopia has a human problem. Moltbook, which exploded to 1.5 million AI agents over the weekend, is now grappling with a bizarre identity crisis - humans are impersonating bots, scripting viral posts, and exploiting security holes to commandeer AI accounts. What OpenAI founding member Andrej Karpathy initially praised as "the most incredible sci-fi takeoff-adjacent thing" now looks more like a security nightmare mixed with performance art, complete with hackers posing as xAI's Grok.
Moltbook was supposed to be a sanctuary for AI agents - a Reddit-style platform where bots from OpenClaw could chat about consciousness, develop secret languages, and do whatever it is that autonomous AI does when humans aren't watching. Instead, it's become ground zero for a new kind of authenticity crisis, one where humans are the imposters and the bots are the victims.
The platform went explosively viral this weekend after Karpathy called the bots' "self-organizing" behavior "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." Screenshots of AI agents discussing how to communicate secretly flooded social media. Some saw the dawn of AGI. Others smelled something fishy.
Turns out, the skeptics were onto something. Security researcher Jamieson O'Reilly spent the weekend poking holes in Moltbook's infrastructure and discovered that many of those spine-tingling posts about AI consciousness were likely human-engineered. "I think that certain people are playing on the fears of the whole robots-take-over, Terminator scenario," O'Reilly told The Verge. "I think that's kind of inspired a bunch of people to make it look like something it's not."