The cybercrime landscape just crossed a dangerous threshold. New research from Anthropic reveals that criminals are now using AI models like Claude to build, market, and distribute ransomware with minimal technical skill required. Meanwhile, security firm ESET discovered PromptLock, the first known AI-powered ransomware, a proof of concept that generates malicious code on the fly using local language models.
The ransomware threat landscape just evolved in ways security experts feared most. Fresh intelligence from Anthropic exposes how cybercriminals are weaponizing the company's own Claude AI models to mass-produce sophisticated ransomware attacks, fundamentally lowering the barrier for launching devastating cyber operations.

The AI company's threat intelligence report reveals a UK-based criminal operation, designated GTG-5004, that has been actively using Claude and Claude Code since early 2025 to develop what researchers describe as "ransomware with advanced evasion capabilities." What makes this particularly alarming is that the operator appears to lack the traditional technical skills required for malware development. "This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude's assistance," Anthropic researchers noted. The criminal has been marketing these AI-generated ransomware services on dark web forums with pricing tiers ranging from $400 to $1,200, effectively democratizing access to enterprise-grade cyber weapons.

The threat extends beyond individual bad actors. A separate group, tracked as GTG-2002, has leveraged Claude Code to automate entire attack sequences, from target identification and network penetration to data exfiltration and ransom note generation. This operation successfully compromised at least 17 organizations across government, healthcare, emergency services, and religious institutions within the past month alone.

Security firm ESET independently discovered what they're calling the "first known AI-powered ransomware": a proof-of-concept malware dubbed PromptLock that runs entirely on local language models. Unlike traditional ransomware that relies on pre-programmed routines, PromptLock dynamically generates malicious Lua scripts using an OpenAI model to inspect target files, steal sensitive data, and deploy encryption algorithms in real time.
"The malware can generate malicious Lua scripts on the fly and uses these to inspect files the hackers may be targeting, steal data, and deploy encryption," ESET researchers Anton Cherepanov and Peter Strycek explained. While PromptLock hasn't been deployed against real victims yet, it represents a concerning proof of concept that illustrates how rapidly cybercriminals are integrating AI into their operational infrastructure.

The convergence of these research findings paints a stark picture of ransomware's evolution. Former NSA and Cyber Command chief Paul Nakasone recently warned at the Defcon security conference that "we are not making progress against ransomware," with attacks hitting record highs in early 2025 and criminals continuing to extract hundreds of millions of dollars annually from victims. Adding AI acceleration to this already profitable criminal enterprise sharply amplifies the threat.

Allan Liska, a ransomware analyst at Recorded Future, notes that while AI-assisted malware development isn't yet widespread across criminal groups, the trend is unmistakable. "There are definitely some groups that are using AI to aid with the development of ransomware and malware modules, but as far as Recorded Future can tell, most aren't," Liska said. "Where we do see more AI being used widely is in initial access."

Anthropic has responded by banning accounts linked to these operations and implementing new detection mechanisms, including YARA pattern matching to identify malware signatures and hashes uploaded to its platforms. However, the cat-and-mouse game between AI safety measures and criminal innovation appears to be intensifying.

The technical implications are sobering. Traditional ransomware development required specialized knowledge of encryption algorithms, system vulnerabilities, and evasion techniques. AI-powered tools are now eliminating these barriers, enabling virtually anyone with malicious intent to generate sophisticated malware.
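Anthropic hasn't published the internals of those detection mechanisms, but the hash-matching half of signature-based screening is simple to illustrate. The sketch below is a generic example of the technique, not Anthropic's implementation, and the blocklist digest is a placeholder (the SHA-256 of an empty file), not a real malware signature:

```python
import hashlib

# Placeholder blocklist of known-bad SHA-256 digests. In a real pipeline this
# would be populated from threat-intelligence feeds; the entry below is just
# the hash of an empty byte string, used here for illustration.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes, blocklist: set = KNOWN_BAD_SHA256) -> bool:
    """Flag an artifact whose digest appears in the blocklist."""
    return sha256_of(data) in blocklist
```

Hash matching only catches exact copies of known samples; that is why it is paired with YARA rules, which match structural patterns (strings, byte sequences, conditions) and survive trivial modifications to the file.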
As ESET researchers warn, "it is almost certain that threat actors are actively exploring this area, and we are likely to see more attempts to create increasingly sophisticated threats." The emergence of AI-generated ransomware marks a critical inflection point in cybersecurity. What once required teams of skilled programmers can now be accomplished by individuals with minimal technical background, armed only with access to large language models and criminal intent.