A weekend experiment with Google's new Gemini for Home AI reveals the unsettling reality of AI-powered surveillance. Tech reporter Jennifer Pattison Tuohy subjected her family to 72 hours of constant monitoring, discovering that while the system accurately describes real-time events, its daily summaries drift into fiction - hallucinating family members who weren't home and fabricating social interactions that never happened.
Google's Gemini for Home just got its first real-world stress test, and the results paint a troubling picture of AI surveillance gone wrong. Tech reporter Jennifer Pattison Tuohy turned her home into a monitoring lab this weekend, installing multiple Nest cameras throughout the house to see whether AI-powered surveillance actually delivers on its promise of smarter security. What she discovered instead was an AI that accurately narrates the present but fabricates the past.
The immediate alerts worked as advertised. Instead of generic "person detected" notifications, Gemini for Home delivered specific descriptions: "R unpacking items from a box" or "Jenni cuts a pie / B walks into the kitchen, washes dishes in the sink." These granular details represent a genuine upgrade over traditional camera alerts, helping distinguish between actual threats and routine family activity.
But the $20-a-month system comes with serious blind spots. When Tuohy's husband left carrying a shotgun, Gemini described it as a "garden tool." The AI consistently avoided identifying weapons, even when Tuohy deliberately brandished a knife at the camera. For a security system, this selective interpretation raises obvious red flags about what threats might go undetected.
The real problems emerge in Gemini's daily "Home Briefs" - AI-generated summaries that arrive each evening around 8:30 PM. These reports, designed to reduce notification fatigue by condensing the day's events, instead demonstrate how AI systems can transform accurate observations into pure fiction. On Halloween, the system reported that "Jenni and R were seen interacting with trick-or-treaters and enjoying the festive atmosphere," even though her daughter wasn't home at the time. Another summary described a cozy evening with multiple family members when only two people were actually present.
This isn't just harmless embellishment - it represents a fundamental flaw in how AI interprets human behavior. Google markets these summaries as helpful overviews, but they're essentially creative writing exercises based on security footage. The system takes factual, timestamped observations and weaves them into narratives that prioritize storytelling over accuracy.