Defense contractor Anduril is testing something straight out of science fiction: military drones controlled by large language models similar to the ones behind ChatGPT. At a classified Texas military base, the company demonstrated autonomous fighter jets that can receive voice commands, coordinate attacks, and eliminate targets with minimal human oversight. It marks a dramatic shift in how AI is being weaponized for modern warfare.
The scene unfolding at a secret military base 50 miles from the Mexican border reads like a techno-thriller, but it's very real. Anduril, the defense contractor founded by Palmer Luckey, just proved that large language models can control swarms of killer drones with chilling efficiency.
During a classified demonstration, four prototype fighter jets codenamed "Mustang" appeared on the horizon over the Texas desert. When a simulated Chinese J-20 stealth fighter appeared on radar screens, a simple voice command, "Mustang intercept," set everything in motion. An AI model similar to the one powering ChatGPT parsed the order, coordinated with the drones, and responded in a calm female voice: "Mustang collapsing." Within minutes, the autonomous aircraft had converged on their target and destroyed it with virtual missiles.
This isn't just another military tech demo. It represents a fundamental shift in how the Pentagon thinks about AI warfare. Anduril is developing a full-scale autonomous fighter called Fury through the Air Force's Collaborative Combat Aircraft program, designed to fly alongside human pilots as an AI wingman. The company calls it "Sergeant Chatbot at your service."
The timing couldn't be more significant. Federal AI contract funding exploded by 1,200% between August 2022 and August 2023, according to a Brookings Institution report, with the Department of Defense driving most of that spending surge. Now, President Trump's administration is doubling down with the first-ever dedicated AI allocation in the defense budget: $13.4 billion for AI and autonomy in 2026.
That massive funding shift has Silicon Valley's biggest players scrambling for their piece of the military AI pie. This year alone, OpenAI, Google, Anthropic, and xAI each secured military contracts worth up to $200 million. It's a striking reversal from 2018, when Google famously pulled out of Project Maven over employee objections to military AI development.
"The ambition that is a bit scary is that AI is so smart that it can prevent war or just fight and win it," said Georgetown University AI researcher Emelia Probasco. "Like some sort of magical fairy dust." She's not alone in that concern: current LLMs remain too unreliable and unpredictable for direct control of lethal weapons systems.