A viral story claiming ChatGPT helped an Australian entrepreneur cure his dog's cancer is unraveling under scrutiny. When Sydney-based tech entrepreneur Paul Conyngham told The Australian that OpenAI's chatbot helped him develop a cancer vaccine for his pet, the story spread rapidly: it was exactly the kind of medical miracle Big Tech has been promising. But The Verge's investigation reveals a far more complicated reality, raising serious questions about AI capability claims in healthcare.
The story had everything Silicon Valley wanted to hear: a determined entrepreneur with no medical background, a dying dog, and an AI assistant that cracked a code veterinary oncologists couldn't. When Paul Conyngham's account first surfaced in The Australian, it painted ChatGPT as a medical miracle worker, the kind of breakthrough that validates billions in AI investment and promises to revolutionize healthcare.
But The Verge's Robert Hart wasn't buying it. His investigation into the viral claim reveals a much messier reality, one that underscores a critical problem in today's AI landscape: the chasm between what these tools can actually do and what people think they can do.
According to the original reporting, Conyngham's dog Rosie was diagnosed with cancer in 2024. After chemotherapy failed to shrink the tumors and veterinarians reportedly said nothing more could be done, Conyngham turned to OpenAI's flagship chatbot. The narrative that spread across social media suggested he used the AI to design a personalized cancer vaccine that ultimately saved his pet's life.
It's the kind of story that feeds into OpenAI's broader ambitions in healthcare and pharmaceutical research. The company has been aggressively positioning its large language models as tools capable of accelerating drug discovery and medical breakthroughs, and its leadership has repeatedly suggested AI will transform medicine. Stories like Conyngham's seem to offer real-world validation.