Amazon just bet big on fixing its voice assistant problem - and the results are mixed. The company's rolling out new Echo hardware this week designed to supercharge Alexa Plus, the AI-powered upgrade that rebuilds the assistant from the ground up. After months of early access testing, the verdict is clear: there's real potential here, but significant hurdles remain.
The transformation is dramatic. Where the old Alexa demanded precise commands like "Turn on living room lights," Alexa Plus handles natural speech like "dim the lights in here, adjust the thermostat down a few degrees, lock the front door, and turn the upstairs lights off. Oh, and remind me to take the trash out in the morning." It all happens seamlessly - exactly what smart homes have promised for years.
But Amazon had to tear down the old system completely to build this new one. According to Panos Panay, head of Amazon's devices division, Alexa Plus runs on an entirely new architecture - and in early testing, that architecture feels more powerful yet less reliable than its predecessor.
The performance issues are immediately noticeable. Simple requests that once took a couple of seconds can now stretch to 15 seconds before a response arrives. Controlling lights or thermostats stays fast thanks to local Matter connections, but waiting more than 10 seconds for a weather update or a song to start gets tedious. Even more frustrating, basic functions that worked reliably before now fail inconsistently.
Take something as simple as "Turn on the bathroom fan for 15 minutes." The old Alexa executed this flawlessly. Now, Alexa Plus says it needs to create a routine, then forgets to run it. Or it confirms the action, turns the fan on, but never shuts it off. Repeat the request and you get a different result each time.
This unpredictability stems from Amazon's architectural choice to use large language models as translators. The LLM interprets the natural language request, then hands it off to deterministic systems - APIs, device controllers, or local connections - to actually carry it out. When the translation misfires, or no API covers what was asked, the handoff breaks down.
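To make that handoff concrete, here's a minimal Python sketch of the translate-then-dispatch pattern, under my own assumptions: `call_llm` is a stand-in for the model call, and the intent name, handler, and device labels are invented for illustration, not Amazon's actual APIs.

```python
import json

# Hypothetical stand-in for the LLM call. A real system would query a hosted
# model; here it returns a canned structured intent so the sketch is runnable.
def call_llm(utterance: str) -> str:
    return json.dumps({
        "intent": "set_timer_device",
        "device": "bathroom_fan",
        "action": "on",
        "duration_minutes": 15,
    })

# Deterministic side: a fixed registry of device handlers (names are made up).
def set_timer_device(device: str, action: str, duration_minutes: int) -> str:
    # A real handler would talk to a Matter controller or a cloud device API.
    return f"{device}: {action} for {duration_minutes} min"

HANDLERS = {"set_timer_device": set_timer_device}

def handle(utterance: str) -> str:
    # 1. The LLM translates free-form speech into a structured request.
    parsed = json.loads(call_llm(utterance))
    intent = parsed.pop("intent")
    # 2. The handoff: if the intent or its arguments don't match a known
    #    handler, the deterministic side has nothing to execute - the
    #    failure mode described above.
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't do that."
    return handler(**parsed)

print(handle("Turn on the bathroom fan for 15 minutes"))
```

The deterministic half only works if the LLM's translation lands exactly on an intent and argument set the handlers understand, which is why a slightly different parse can turn a fan timer into a phantom routine.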
"That's the paradox of LLMs," explains the challenge. They excel at parsing human language but aren't designed for consistency. Ask the same question twice and you'll get different answers. This nondeterminism works great for brainstorming but creates problems when you just want your morning coffee to brew reliably.