SwitchBot just threw down the gauntlet in the household robotics race. The smart home company is unveiling the Onero H1 at CES 2026, calling it 'the most accessible AI household robot' that can handle everyday chores from laundry folding to window washing. With articulated arms, on-device AI, and a wheelbase that lets it navigate around your home, it's the company's most ambitious move yet into the generalist robot space.
After years of focusing on specialized devices, SwitchBot is positioning the Onero H1 as the next frontier in home automation. Videos shared ahead of the CES 2026 announcement show the wheeled humanoid completing a surprisingly broad range of household tasks, from filling coffee machines and making breakfast to washing windows, loading washing machines, and folding clothes with articulated precision.
What makes the Onero different from other humanoid robots making the rounds at tech conferences is SwitchBot's design philosophy. Rather than chasing the legs-and-torso form factor favored by Boston Dynamics and others, the company went with a pragmatic approach: articulated arms and hands mounted on a wheeled cylindrical base. It's not trying to walk up your stairs or navigate uneven terrain. Instead, it's built to handle the smooth, flat surfaces most of us actually live with while maintaining the dexterity needed for delicate tasks.
The robot packs some serious hardware under that cylindrical shell. SwitchBot equipped the Onero with 22 degrees of freedom (DoF) across its articulated joints, giving it an impressive range of motion compared to simpler robotic arms. For context, Boston Dynamics' humanoid Atlas achieves 29 DoF in its upper body alone, so the Onero is more modest, but 22 DoF is still substantial for a consumer machine aimed at laundry and dishes. The perception system relies on multiple cameras embedded in the robot's head, arms, hands, and midsection, feeding real-time visual data to the control stack that drives its movements.
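To make that spec concrete, here's a minimal Python sketch of how 22 degrees of freedom might be allocated across the robot's joints. SwitchBot hasn't published a per-joint breakdown, so the layout below is purely illustrative, borrowing the 7-DoF arm convention common in manipulation research.

```python
from dataclasses import dataclass, field

# Hypothetical allocation of the Onero's 22 DoF. SwitchBot has not
# published a per-joint breakdown; these numbers are illustrative only.
JOINT_LAYOUT = {
    "left_arm": 7,    # shoulder (3) + elbow (1) + wrist (3)
    "right_arm": 7,
    "left_hand": 3,   # simplified gripper articulation
    "right_hand": 3,
    "torso_lift": 1,  # vertical travel on the cylindrical base
    "head": 1,        # pan for the head-mounted cameras
}
assert sum(JOINT_LAYOUT.values()) == 22  # matches the advertised DoF count


@dataclass
class JointState:
    """One joint-space snapshot: a position target in radians per DoF."""
    positions: list = field(default_factory=lambda: [0.0] * 22)
```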
Here's where things get technically interesting. The Onero runs OmniSense, an on-device vision-language-action (VLA) model that processes visual information, depth data, and tactile feedback to understand what it's looking at and how to interact with it. That's not just pattern matching against pre-programmed tasks: the VLA model lets the robot learn and adapt across different household scenarios, recognizing object shapes, positions, and interaction states on the fly. It's the same kind of multimodal intelligence behind recent AI breakthroughs, compressed and optimized to run locally on the robot without constantly phoning home to cloud servers.
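Since SwitchBot hasn't published OmniSense's interface, the sketch below is a generic stand-in for the VLA pattern the company is describing: multimodal observations plus a task instruction go in, joint-space commands come out, and everything is computed locally. All class and function names here are hypothetical.

```python
import numpy as np

NUM_JOINTS = 22  # the Onero's advertised degrees of freedom


class OmniSenseStub:
    """Illustrative stand-in for an on-device VLA model.

    SwitchBot hasn't published OmniSense's API; this stub only shows the
    general vision-language-action contract: multimodal observations and
    a task instruction in, one joint target per degree of freedom out.
    """

    def predict(self, rgb: np.ndarray, depth: np.ndarray,
                tactile: np.ndarray, instruction: str) -> np.ndarray:
        # A real VLA model would run a multimodal network here. We return
        # zeros just to show the shape of the output contract.
        return np.zeros(NUM_JOINTS)


def control_step(model: OmniSenseStub, instruction: str) -> np.ndarray:
    """One sense-think-act tick, all of it local to the robot."""
    rgb = np.zeros((4, 480, 640, 3))   # hypothetical multi-camera feeds
    depth = np.zeros((480, 640))       # depth map from a head sensor
    tactile = np.zeros(10)             # fingertip pressure readings
    return model.predict(rgb, depth, tactile, instruction)


action = control_step(OmniSenseStub(), "fold the towel on the table")
print(action.shape)  # (22,) — a joint-space command, computed on-device
```

The hard engineering isn't the shape of that contract; it's running the predict call on embedded hardware at control-loop rates, which is the part SwitchBot is claiming to have pulled off without cloud offload.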












