AI security startup Irregular, formerly known as Pattern Labs, announced Wednesday that it has raised $80 million in funding led by Sequoia Capital and Redpoint Ventures, with Wiz CEO Assaf Rappaport also participating. A source close to the deal told TechCrunch the round values Irregular at $450 million, positioning the startup as a key player in a rapidly evolving AI safety landscape where frontier models are becoming both more capable and more dangerous.
The timing couldn't be better. AI security has shifted from a nice-to-have to an existential necessity as frontier models grow more powerful. "Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction," co-founder Dan Lahav told TechCrunch, "and that's going to break the security stack along multiple points."
Irregular isn't just another cybersecurity startup with an AI twist. The company has already established itself as a critical player in AI model evaluation, with its work cited in security assessments for Anthropic's Claude 3.7 Sonnet and OpenAI's o3 and o4-mini models. Its SOLVE framework for scoring a model's vulnerability-detection capabilities is widely used across the industry.
But here's where it gets interesting: Irregular isn't just testing for existing AI risks. The company is building elaborate simulation environments to spot emergent behaviors before they escape into the wild. "We have complex network simulations where we have AI both taking the role of attacker and defender," explains co-founder Omer Nevo. "So when a new model comes out, we can see where the defenses hold up and where they don't."
This proactive approach addresses one of the AI industry's biggest nightmares: unknown unknowns. As models become more sophisticated, they're developing capabilities their creators didn't explicitly program or expect. OpenAI learned this the hard way and overhauled its internal security measures this summer amid concerns about corporate espionage and model theft.