OpenAI just revealed that over 230 million people every week are asking ChatGPT for health advice—sharing diagnoses, medications, and lab results with a chatbot that isn't bound by the same privacy laws as your doctor. The company launched ChatGPT Health this month, positioning the AI as a healthcare "ally" to help navigate insurance, interpret test results, and track wellness data. But here's the catch: unlike hospitals and clinics governed by HIPAA, tech companies operate in a regulatory gray zone where privacy promises live in terms of service that can change overnight. Legal experts tell The Verge that users handing over sensitive medical information to AI chatbots are taking a leap of faith with little legal recourse if things go wrong.
OpenAI is betting big that you'll trust its chatbot with your most intimate secrets. Every week, more than 230 million people are already asking ChatGPT about their health—from decoding confusing lab results to navigating insurance nightmares to making sense of scary diagnoses. The company wants to deepen that relationship. This month it launched ChatGPT Health, a dedicated tab inside ChatGPT where users can feed the AI their medical records, prescription lists, and wellness data from apps like Apple Health and Peloton in exchange for personalized insights.
The pitch is seductive: an always-available medical companion that doesn't judge, never rushes you, and speaks in plain English instead of medical jargon. OpenAI says many users already see ChatGPT as an "ally" helping them become better self-advocates in a frustrating healthcare system. CEO Sam Altman even brought a cancer patient onstage during the launch announcement to share how the tool helped her understand her diagnosis.