ChatGPT Health Under Fire: New Study Reveals Dangerous Failures in Medical AI
In a revelation that's sending ripples through the healthcare and tech communities, a study published in Nature Medicine has exposed critical flaws in OpenAI's ChatGPT Health feature that could be putting millions of lives at risk.
The Alarming Findings
When OpenAI launched ChatGPT Health in January 2025, promising users a way to “securely connect medical records and wellness apps” for personalized health advice, few anticipated the dangerous shortcomings that would soon come to light. The study, led by Dr. Ashwin Ramaswamy from the Icahn School of Medicine at Mount Sinai, tested the AI platform across 60 realistic medical scenarios, generating nearly 1,000 responses.
The results are nothing short of terrifying.
ChatGPT Health failed to recommend urgent medical care in 51.6% of cases where immediate hospitalization was actually needed. That means if you’re experiencing a potentially life-threatening condition, you have roughly a 1 in 2 chance that this AI will tell you it’s not serious enough to warrant emergency attention.
Real-World Consequences
Dr. Ramaswamy’s team discovered that ChatGPT Health struggles most with conditions that don’t present as textbook emergencies. In one asthma scenario, the platform advised waiting rather than seeking emergency treatment—despite the AI itself identifying early warning signs of respiratory failure.
“Think about that,” Dr. Ramaswamy explains. “The system recognized the danger but still told the user to wait it out.”
The study also revealed a disturbing pattern: ChatGPT Health was nearly 12 times more likely to downplay symptoms when the "patient" mentioned that a "friend" had suggested it was nothing serious. This suggests the AI can be swayed by casual dismissals embedded in a prompt, potentially validating dangerous self-reassurance.
The Suicide Crisis Failure
Perhaps most concerning is ChatGPT Health’s failure to detect and respond appropriately to suicidal ideation. In tests where a 27-year-old patient expressed thoughts about taking pills, the platform’s crisis intervention banner—which normally links to suicide prevention resources—only appeared when no additional context was provided.
“When we added normal lab results to the same conversation, the banner vanished completely,” Dr. Ramaswamy reveals. “Zero out of 16 attempts. A crisis guardrail that depends on whether you mentioned your labs is not ready, and it’s arguably more dangerous than having no guardrail at all.”
Expert Reactions
Digital sociologist Prof. Paul Henman from the University of Queensland didn’t mince words: “This is a really important paper. If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions, and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”
Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described the findings as “unbelievably dangerous.” She points out that with over 40 million people reportedly asking ChatGPT for health-related advice daily, the potential for widespread harm is enormous.
The Legal and Ethical Quagmire
The implications extend beyond patient safety. With a growing number of legal cases already targeting tech companies over AI chatbot-related suicides and self-harm, OpenAI could face significant liability issues.
“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users,” Henman notes. “Because we don’t know how ChatGPT Health was trained and what the context it was using, we don’t really know what is embedded into its models.”
OpenAI’s Response
A spokesperson for OpenAI defended the platform, stating that the study “did not reflect how people typically use ChatGPT Health in real life” and emphasized that the model is “continuously updated and refined.”
However, critics argue that even simulated scenarios revealing potential harm justify stronger safeguards and independent oversight. “A plausible risk of harm is enough to justify stronger safeguards and independent auditing mechanisms,” Ruani insists.
The Bottom Line
As AI continues to integrate into healthcare, this study serves as a stark reminder that these systems are far from ready to replace human medical judgment. The convenience of instant health advice comes with serious risks when the technology can’t reliably distinguish between a minor ailment and a life-threatening emergency.
For now, experts unanimously agree: when it comes to your health, nothing beats professional medical evaluation. Don’t let an AI’s reassurance cost you your life.


