OpenAI and Perplexity are racing to replace Chrome with AI-powered browsers that act on users' behalf, but cybersecurity experts warn these agents create unprecedented privacy risks through prompt injection attacks that could expose emails, make unauthorized purchases, and compromise sensitive data. The vulnerability affects the entire AI browser category and has no clear solution.
The AI browser wars just got a lot more dangerous. OpenAI's freshly launched ChatGPT Atlas and Perplexity's Comet are positioning themselves as the intelligent successors to Chrome, promising to handle everything from booking flights to managing your calendar. But security researchers are sounding alarms about a fundamental flaw that could turn these helpful agents against users.
The problem centers on prompt injection attacks - a relatively new vulnerability where malicious actors embed hidden instructions on web pages that can hijack an AI agent's behavior. When an AI browser visits a compromised site, it might suddenly start forwarding your private emails, making unauthorized purchases, or posting on your social media accounts.
"There's a huge opportunity here in terms of making life easier for users, but the browser is now doing things on your behalf," Brave senior research engineer Shivan Sahib told TechCrunch. "That is just fundamentally dangerous, and kind of a new line when it comes to browser security."
Brave's latest research published this week declares prompt injection attacks a "systemic challenge facing the entire category of AI-powered browsers." The privacy-focused browser company previously flagged vulnerabilities in Perplexity's Comet but now warns the issue spans the entire industry.
Both companies are scrambling to address these concerns. OpenAI Chief Information Security Officer Dane Stuckey acknowledged on X that "prompt injection remains a frontier, unsolved security problem" and that adversaries will "spend significant time and resources" trying to exploit ChatGPT agents. Perplexity went further, stating the problem "demands rethinking security from the ground up" because attacks "manipulate the AI's decision-making process itself."
The technical reality is sobering. Unlike traditional browsers that simply display web content, AI agents need extensive permissions to be useful - access to your email, calendar, contacts, and the ability to click buttons and fill forms on your behalf. found these agents work reasonably well for simple tasks but often feel more like "party tricks" than productivity boosters.












