OpenAI just shut down viral claims spreading across social media that ChatGPT stopped giving medical and legal advice. The company's head of health AI says the rumors are completely false, with ChatGPT's behavior remaining unchanged despite policy consolidation that sparked the confusion.
OpenAI found itself playing defense against viral misinformation this weekend after false claims erupted across social media that ChatGPT had suddenly banned medical and legal advice. The rumors reached fever pitch when betting platform Kalshi posted "JUST IN: ChatGPT will no longer provide health or legal advice" - a claim that spread like wildfire before being deleted. Karan Singhal, OpenAI's head of health AI, quickly responded on X, calling the reports "not true" and emphasizing that "ChatGPT's behavior remains unchanged."

The confusion stems from OpenAI's October 29th policy update, which consolidated three separate usage policies into one unified document. While the streamlined policy mentions restrictions on "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional," these rules aren't new. The company's previous policy contained nearly identical language about not providing "tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations."

"ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information," Singhal explained in his clarification. The distinction matters because ChatGPT still provides general informational content about health and legal topics - it just won't act as your personal doctor or lawyer.

The episode demonstrates how quickly policy changes at major AI companies can be misinterpreted and amplified across social platforms. The betting market angle from Kalshi adds another layer, as prediction markets increasingly track AI policy developments that could affect everything from healthcare startups to legal tech companies.
OpenAI's policy consolidation actually represents a maturation of the company's governance approach, moving from separate rules for different products to one comprehensive framework. OpenAI previously maintained distinct policies for its "universal" terms, ChatGPT usage, and API access - a structure that created confusion as the company expanded beyond just ChatGPT into areas like healthcare AI and enterprise solutions. The unified approach mirrors how other tech giants structure their terms of service, but the transition window allowed for misinterpretation.

For healthcare and legal professionals who've been watching AI's encroachment into their fields, this false alarm reveals ongoing tensions about AI's role in professional advice. While ChatGPT won't diagnose your symptoms or draft your contracts, it continues serving as an educational resource that many professionals and consumers rely on for understanding complex topics.