Google just crossed a threshold that's been promised for years but never quite delivered - AI that actually does things for you. The company's Gemini task automation feature went live in beta this week on Samsung Galaxy S26 Ultra and Pixel 10 devices, letting the AI assistant independently control food delivery apps like Uber Eats as well as rideshare services. Instead of just answering questions or setting reminders, Gemini now opens apps in a virtual window and navigates them on your behalf, ordering food or booking rides based on simple voice prompts. It's the kind of autonomous agent behavior that's been hyped endlessly but rarely shipped, and early testers report it's genuinely unsettling to watch your phone operate itself.
Google just made good on years of AI assistant promises, and the result is weirder than anyone expected. With this week's beta release, an AI assistant is, for the first time, actually controlling your apps without you ever touching the screen.
The rollout hit Samsung Galaxy S26 Ultra devices first, with Pixel 10 support arriving simultaneously. According to hands-on testing by The Verge's Allison Johnson, the experience of watching your phone navigate apps autonomously is genuinely disconcerting. "Boy is it weird watching your phone use itself," she noted after giving Gemini its first commands.
The feature works through what Google calls a "virtual window" - essentially Gemini opening and controlling apps on your behalf while you watch. Right now it's limited to food delivery and rideshare applications, but the implications are massive. You can tell Gemini to order dinner, and it'll open your preferred delivery app, browse options, add items to cart, and complete the checkout process. Same goes for booking an Uber to the airport.
This isn't the first time Google has teased autonomous AI capabilities. The company previewed the feature alongside the Galaxy S26 launch, positioning it as the next evolution of mobile assistants. But announcements and actual deployment are different beasts entirely. What shipped this week represents a fundamental shift from reactive AI - answering questions, setting timers - to proactive agents that execute complex, multi-step tasks.