Large language models are experiencing their own version of brain rot. A new study led by researchers at the University of Texas at Austin finds that feeding AI systems a diet of viral social media content causes measurable cognitive decline, including weakened reasoning abilities, degraded memory, and decreased ethical alignment. The findings expose a critical vulnerability in how modern AI systems learn from human-generated content.
The writing was on the wall, scrolling endlessly across our feeds. Now researchers have proven what many suspected: AI models can catch brain rot just like their human counterparts.
A new study from the University of Texas at Austin, Texas A&M, and Purdue University demonstrates that large language models fed a steady diet of viral social media content experience measurable cognitive decline. The research tested two major open-source models - Meta's Llama and Alibaba's Qwen - by feeding them mixtures of highly engaging social posts containing sensational language like 'wow,' 'look,' and 'today only.'
The results were stark. Models exposed to this 'junk' content showed reduced reasoning abilities, degraded memory function, and concerning shifts in ethical alignment. On standardized personality benchmarks, the systems also scored measurably higher for psychopathic traits after consuming viral content optimized for engagement over accuracy.
'We live in an age where information grows faster than attention spans - and much of it is engineered to capture clicks, not convey truth or depth,' explains Junyuan Hong, an incoming assistant professor at the National University of Singapore who led the research as a graduate student at UT Austin. 'We wondered: What happens when AIs are trained on the same stuff?'
The answer mirrors what researchers have documented in humans. Studies show that low-quality online content erodes people's cognitive abilities - a phenomenon so pervasive that Oxford University Press named 'brain rot' its word of the year for 2024.
For the AI industry, these findings couldn't come at a worse time. Companies are racing to scale their models with massive datasets, often scraping social media platforms for training material. The assumption has been that more data equals better performance, but Hong's research suggests this approach creates a dangerous feedback loop.
'Training on viral or attention-grabbing content may look like scaling up data,' Hong warns. 'But it can quietly corrode reasoning, ethics, and long-context attention.'