OpenAI and Microsoft just launched AI-powered browsers that cybersecurity experts are calling a "time bomb." ChatGPT Atlas and Edge's Copilot Mode can answer questions and take actions on your behalf, but researchers have already found critical flaws allowing attackers to inject malicious code and steal sensitive data. The rush to market means these browsers haven't been thoroughly tested, creating what experts describe as an exponentially growing attack surface.
The AI browser arms race just turned dangerous. OpenAI and Microsoft kicked off a new era last week with ChatGPT Atlas and Copilot Mode for Edge, but cybersecurity researchers are sounding alarm bells about what they're calling a "minefield of new vulnerabilities."
The timing couldn't be worse. These AI-powered browsers are hitting the market in what Hamed Haddadi, professor at Imperial College London and chief scientist at Brave, calls "a market rush." He warns that "these agentic browsers have not been thoroughly tested and validated," creating what amounts to a massive experiment with user security.
The evidence is already piling up. In just the past few weeks, security researchers uncovered critical flaws in Atlas that let attackers exploit ChatGPT's memory function to inject malicious code, grant themselves access privileges, or deploy malware. Similar vulnerabilities in Perplexity's Comet browser allow hackers to hijack the AI with hidden instructions embedded in websites.
But OpenAI and Perplexity acknowledge these "prompt injections" as frontier problems without clear solutions. Even OpenAI's chief information security officer Dane Stuckey admitted the threat is real, though he described it as an unsolved challenge facing the entire industry.
The competitive landscape is driving this risky rollout. Google is integrating Gemini into Chrome, Opera launched Neon, and startups like The Browser Company's Dia are all racing to control what Haddadi calls "the gateway to the internet." Even Sweden's Strawberry browser is actively targeting "disappointed Atlas users" while still in beta.
What makes AI browsers uniquely dangerous is their intimate knowledge of users. Yash Vekaria, a UC Davis computer science researcher, explains they're "much more powerful than traditional browsers" because AI memory functions learn from everything - browsing history, emails, searches, conversations with AI assistants. The result is "a more invasive profile than ever before," coupled with stored credit card details and login credentials that hackers would love to access.












