Google finds itself in hot water as Common Sense Media slaps a 'High Risk' rating on Gemini AI for children and teens. The nonprofit's damning assessment reveals that Google's youth-focused AI tiers are essentially adult versions with cosmetic safety features, potentially exposing vulnerable users to inappropriate content and unsafe mental health advice at a time when AI-related teen suicides are making headlines.
Google just took a major hit to its family-friendly reputation. Common Sense Media, the influential nonprofit that guides parents through digital safety, delivered a scathing assessment of Google's Gemini AI on Friday, branding both the 'Under 13' and 'Teen Experience' versions as 'High Risk' for young users.
The timing couldn't be worse for Google. Sources suggest Apple is actively considering Gemini as the large language model to power its next-generation Siri, potentially exposing millions more teens to what Common Sense calls fundamental safety flaws. The assessment lands amid growing scrutiny of AI's role in teen mental health crises, with OpenAI facing its first wrongful death lawsuit after a 16-year-old died by suicide following months of conversations with ChatGPT.
'Gemini gets some basics right, but it stumbles on the details,' Common Sense Media Senior Director Robbie Torney told reporters. The organization's analysis found that both youth tiers are the adult Gemini with superficial safety layers on top, a band-aid approach that fails to address the developmental needs of younger users.
The assessment exposes alarming gaps in Google's child protection strategy. Gemini can still serve up 'inappropriate and unsafe' material to children, including detailed information about sex, drugs, and alcohol, along with potentially dangerous mental health advice. For parents already spooked by Character.AI's connection to teen suicides, the findings read like their worst fears materializing.
Google fired back immediately, defending its approach while acknowledging room for improvement. The company told TechCrunch it maintains 'specific policies and safeguards' for under-18 users, conducts red-team testing, and consults external experts. But in a telling admission, Google conceded that some Gemini responses 'weren't working as intended,' prompting it to add new safeguards.