Google is making its boldest bet yet on agentic AI. The company's Gemini assistant can now autonomously complete tasks across apps - not just answer questions or provide suggestions, but actually execute multi-step workflows on your behalf.
The feature, launching today on select Pixel 10 models and Samsung's Galaxy S26 series, works through what Google calls task automation. Tell Gemini "Get me an Uber to the Palace of Fine Arts," and the AI springs into action. It launches the Uber app in a virtual window, enters your destination, selects a ride type, and preps everything for your final approval, The Verge reports.
You can watch the whole process unfold in real time or let it run in the background while Gemini handles the tedious bits. If something looks off, you can stop the automation or take control at any point. The same workflow applies to DoorDash orders - Gemini will browse menus, add items to your cart, and hand things back to you for checkout.
This marks a fundamental shift in how AI assistants operate. Previous generations of digital helpers - including earlier versions of Gemini, Siri, and Alexa - primarily responded to direct commands with information or simple actions. They could show you a restaurant's hours or set a timer, but couldn't independently navigate complex app interfaces to complete tasks requiring multiple decisions and inputs.