Google just crossed a critical threshold in the AI assistant race. The company's Gemini AI is rolling out task automation capabilities that let it independently navigate apps like Uber and DoorDash, marking the first time a major consumer AI assistant can execute multi-step tasks without constant human intervention. Starting with Pixel 10 phones and Samsung's Galaxy S26 series, users can now ask Gemini to hail a ride or prep a food order while the AI works autonomously in a virtual window.
Google is making its biggest bet yet that AI assistants should do more than just answer questions. The company's rolling out what it calls "task automation" for Gemini, transforming the AI from a conversational interface into an agent that can actually complete tasks on your behalf.
The feature launches first on Pixel 10 devices and Samsung's Galaxy S26 lineup, where Gemini can now handle the entire process of ordering an Uber ride or assembling a DoorDash cart. According to The Verge's hands-on coverage, users simply prompt Gemini with something like "Get me an Uber to the Palace of Fine Arts," and the AI springs into action.
What happens next represents a fundamental shift in how AI assistants operate. Gemini launches the target app in a virtual window on your device and methodically works through each step: entering addresses, selecting vehicle types, reviewing prices. You can watch the entire process unfold in real time, with options to interrupt or take control if the AI goes off track. Or you can just let it run in the background while you do something else.
The implementation reveals Google's careful approach to agentic AI. Unlike fully autonomous systems that could theoretically rack up charges without oversight, Gemini prepares everything but stops short of final submission. When it's done, you get a notification saying "I've prepared your order. Complete it in the DoorDash app." That final tap remains firmly in human hands.
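That design is a classic human-in-the-loop pattern: the agent autonomously completes every preparatory step but gates the one irreversible action behind an explicit user confirmation. A minimal sketch of the idea, assuming nothing about Google's actual internals (all names here, like `AgentTask` and `run_steps`, are illustrative, not Gemini's API):

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """Hypothetical model of an agent task with a human confirmation gate."""
    app: str
    steps: list
    log: list = field(default_factory=list)
    confirmed: bool = False

    def run_steps(self) -> str:
        # The agent works through each preparatory step on its own,
        # logging progress so the user can watch (or interrupt) along the way.
        for step in self.steps:
            self.log.append(f"done: {step}")
        # Stop short of submission and hand control back to the user.
        return f"I've prepared your order. Complete it in the {self.app} app."

    def submit(self, user_confirmed: bool) -> str:
        # The final, charge-incurring action never fires without a human tap.
        if not user_confirmed:
            raise PermissionError("final step requires user confirmation")
        self.confirmed = True
        return "order submitted"

task = AgentTask(app="DoorDash",
                 steps=["enter address", "select items", "review prices"])
print(task.run_steps())                  # agent pauses and notifies the user
print(task.submit(user_confirmed=True))  # only after the human's final tap
```

The key design choice this sketch captures is that autonomy applies only to reversible preparation; anything that spends money stays behind the confirmation gate.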