What happens when the people whose entire business is calculating risk decide something is too dangerous to touch? We're about to find out. Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies, according to reporting from the Financial Times. One underwriter described AI models' outputs as "too much of a black box" to price accurately.
The insurance industry's retreat from AI coverage represents more than corporate caution; it's a referendum on whether artificial intelligence is actually ready for widespread enterprise deployment. These are companies that routinely insure oil rigs, nuclear plants, and space launches. If they won't touch AI, what does that tell us about the technology everyone's racing to implement?
The industry has good reason to be spooked. Google's AI Overview falsely accused a solar company of legal troubles earlier this year, triggering a $110 million lawsuit. Air Canada got stuck honoring a discount its chatbot completely invented after a customer took the airline to small claims court. Most dramatically, fraudsters used a digitally cloned executive to steal $25 million from London engineering firm Arup during what appeared to be a legitimate video conference.
But individual payouts aren't what's keeping insurance executives up at night. It's systemic risk: the nightmare scenario where a widely deployed AI model malfunctions and triggers thousands of claims simultaneously. "We can handle a $400 million loss to one company," an Aon executive told the Financial Times. "What we can't handle is an agentic AI mishap that triggers 10,000 losses at once."
This isn't theoretical anymore. Consider how many companies now rely on the same foundational AI models from providers such as OpenAI and Google. A single model failure could cascade across industries in ways traditional risk models never anticipated. Unlike a factory fire or data breach that affects one company, an AI hallucination in a widely used model could simultaneously damage thousands of businesses.