Opera has opened its premium AI browser, Neon, to its first users, charging $19.90 a month for something that feels more like managing three confused interns than using cutting-edge technology. The browser's three separate AI agents - Chat, Do, and Make - can't talk to each other and routinely fail at basic tasks, highlighting why the AI browser revolution still feels half-baked.
Opera is betting $20 a month that users want three AI assistants crammed into their browser, even though those assistants can't talk to each other. The company's Neon browser, which started rolling out to waitlisted users last month, is the latest salvo in the increasingly crowded AI browser wars - and it's already exposing fundamental problems with today's attempts at AI integration.
Unlike Google's Gemini-powered Chrome updates or the free offerings from Perplexity and The Browser Company's Dia, Opera is charging a premium for what feels like beta software. Neon's $19.90 monthly subscription sets expectations high for a product category most users otherwise get for free.
The browser's core confusion stems from its three-headed AI approach. Users get Chat (a standard chatbot), Do (an agentic browser controller), and Make (a web tool builder), each operating independently with no ability to hand off tasks or share context. It's like having three different customer service representatives who refuse to transfer calls.
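To see why that isolation matters, here's a minimal sketch - purely hypothetical TypeScript, not Opera's code, with every name invented - of three agents that each keep a private conversation history. A follow-up sent to a second agent arrives with no memory of what the first one just did:

```typescript
// Illustrative only - a toy model of the failure mode described above:
// three agents, each with its own private context, and no mechanism for
// handing a task (or what one agent has learned) to another.

type AgentName = "Chat" | "Do" | "Make";

class Agent {
  // Each agent keeps its own history; nothing here is visible to the others.
  private context: string[] = [];

  constructor(readonly name: AgentName) {}

  handle(request: string): string {
    this.context.push(request);
    return `[${this.name}] responding with ${this.context.length} turn(s) of context`;
  }
}

// Three independent instances stand in for Neon's Chat, Do, and Make.
const agents: Record<AgentName, Agent> = {
  Chat: new Agent("Chat"),
  Do: new Agent("Do"),
  Make: new Agent("Make"),
};

// The user asks Chat to analyze a page...
console.log(agents.Chat.handle("Summarize the comments on this article"));

// ...then follows up with Do, which starts from zero: it has no idea what
// "the article" refers to, because context never crosses agent boundaries.
console.log(agents.Do.handle("Okay, now open the top comment"));
// [Do] responding with 1 turn(s) of context  <- the follow-up arrives cold
```

Whether Neon is actually wired this way is unknowable from the outside; the point is that without some shared context store, every follow-up to a different agent starts cold.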
Our testing revealed the depth of these integration problems. When tasked with summarizing comments from recent Verge articles, Chat confidently delivered 400 words explaining there were no comments to analyze - despite visible comment counts on each page. Opera executive VP Krystian Kolondra later explained that we'd chosen the "wrong tool" - apparently reading webpage comments requires the Do agent, not Chat, the agent ostensibly built for analyzing webpages.
This tool confusion isn't just a learning-curve issue. Do, the browser-controlling agent, routinely made decisions that bordered on comical. During flower-shopping tests, it scrolled past perfectly reasonable bouquets to add a funeral wreath to our cart. When booking theater tickets, it declared none were available for shows that clearly had seats open. The agent's confidence never wavered, even when it was demonstrably wrong.
The technical limitations run deeper than UI confusion. Do can't be course-corrected mid-task, forcing users to watch helplessly as it makes poor choices. There's no way to switch between agents within the same browser session for follow-up questions. And despite Opera's promises that the agents incorporate user feedback, the system often ignores responses - or stops working entirely after acknowledging input.