What ChatGPT Health signals for the future of health care
With OpenAI’s ChatGPT Health, the line between a general-purpose AI and a full-scale health platform begins to blur, raising questions about trust, safety and accountability

For years, the reflex was simple: Feel a symptom, Google it. Typing vague discomforts into a search bar and letting the internet diagnose you became so routine that ‘Dr Google’ turned into a cultural shorthand.
But now, with the rise of general-purpose AI (artificial intelligence), that same instinct is increasingly being replaced by something more conversational and personalised. Enter ChatGPT Health, OpenAI’s dedicated health experience embedded inside a general-purpose chatbot.
Unlike the old search-and-scroll approach, ChatGPT Health invites people to connect their medical records, lab results and wellness apps, allowing the AI to tailor answers based on personal health data. Health queries were already among the most common uses of ChatGPT: OpenAI says over 230 million people globally ask health and wellness questions on the platform each week.
OpenAI claims that ChatGPT Health has been designed in close collaboration with physicians and is meant to help people play a more active role in understanding and managing their health, not to replace clinicians.
And it is launching into a market that is moving at exceptional speed. In January alone, OpenAI acquired health care startup Torch, Anthropic rolled out Claude for Healthcare, and Sam Altman-backed MergeLabs closed a $250 million seed round at an $850 million valuation.
Experts reckon that this moment is a turning point. “It’s shifting from specialised clinical AI systems to more versatile tools that can handle various health care tasks,” says Jaspreet Bindra, co-founder of AI&Beyond. “The entry of general-purpose AI tools into health care opens up new possibilities and potentially improves patient outcomes.”
Building on that promise, Dr Sandeep Budhiraja, group medical director, Max Healthcare, says the best-case scenario will be that “AI brings in equitable access to quality health and wellness support at population scale. If that promise materialises, it may tackle the problems of health care cost and affordability”.
So far, some of the most common use cases for AI in health care have been administrative tasks, patient education and predictive analytics. “It is speeding up imaging workflows, cutting documentation time and helping clinicians focus on the parts of care that require judgement,” says Kumar Surender Sinwar, founder and CEO of mlHealth 360. “The hype tends to appear when we talk about AI replacing doctors. Health care decisions are rarely binary, and anyone who has spent time in a hospital knows how much nuance is involved.”
This contrast becomes clear when comparing conversational AI with the clinical AI tools already in use. Clinical systems built by companies like Qure.ai, Cloudphysician and others operate within tightly controlled environments. “These are point solutions with clear validation pathways and defined failure modes. General-purpose AI changes everything because it’s conversational and unpredictable,” explains Vikas Singh, chief growth officer, Turinton Consulting. He adds that general LLMs (large language models) offer powerful capabilities but, without proper constraints, often make for poor health care implementations.
The challenges that come with applying AI in health care are fundamentally different from those in other industries because “health care AI deals with more sensitive and varied data, requiring highly specialised approaches,” says Himanshu Puri, health care lead at Mastek Global.
He underscores that data bias remains one of the biggest risks: If models are trained on datasets that don’t represent diverse populations, they inevitably “perpetuate existing disparities”, leading to skewed predictions, misdiagnoses or uneven treatment recommendations.
Puri explains that such biases can seep in at multiple stages: through demographics, data collection practices or even algorithmic design. Adding to the complexity, today’s AI regulations still offer “blurred definitions around use, consent, and data integrity”, creating uncertainty for patient-related applications. To counter this, he stresses the need for deliberate mitigation strategies such as diverse dataset curation, debiasing algorithms and data augmentation techniques to ensure models serve all patient groups equitably.
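For a sense of what one such mitigation can look like, the short Python sketch below illustrates inverse-frequency reweighting, a simple debiasing technique of the kind Puri describes: records from under-represented groups are upweighted so the model does not simply learn the majority group’s patterns. The data, column names and weighting scheme are hypothetical illustrations, not drawn from any system mentioned in this story.

```python
# Minimal sketch of one debiasing strategy: weight each training record
# inversely to the size of its demographic group, so minority groups are
# not drowned out. The "sex" column and sample data are hypothetical.
from collections import Counter

def inverse_frequency_weights(records, group_key):
    """Return one weight per record; a group holding its 'fair share'
    of records (total / n_groups) gets weight 1.0."""
    counts = Counter(r[group_key] for r in records)
    total, n_groups = len(records), len(counts)
    return [total / (n_groups * counts[r[group_key]]) for r in records]

records = [
    {"sex": "F", "label": 1}, {"sex": "F", "label": 0},
    {"sex": "F", "label": 1}, {"sex": "M", "label": 0},
]
print(inverse_frequency_weights(records, "sex"))
# -> [0.666..., 0.666..., 0.666..., 2.0]: the lone "M" record is upweighted.
```

Real clinical pipelines layer many such techniques together; reweighting alone cannot fix data that was never collected in the first place.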
There are also fundamental safety tensions that conversational AI must navigate. “If they are tuned to be extremely conservative, they become safe but largely unhelpful, defaulting to generic advice and frequent escalation aka false positives. If they are tuned to deliver a smooth, reassuring experience for most users, they inevitably risk missing a small but clinically significant subset of cases,” explains Prashant Warier, founder & CEO, Qure.ai. Such missed cases—the false negatives—are where the hardest legal, regulatory and ethical questions arise.
This illustrates a structural gap. Health care systems are built to minimise missed diagnoses, even if that leads to over-testing or over-referral. Consumer-facing AI systems, by contrast, are optimised for clarity, ease and user confidence. As Warier puts it, “Aligning these two incentive structures is difficult, and this—not model capability—will be the defining challenge for conversational health systems.”
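To see that tension in miniature, consider the toy Python sketch below: a single escalation threshold applied to invented risk scores. Lowering the threshold catches more urgent cases (fewer false negatives) at the direct cost of more needless escalations (false positives). No real triage system is modelled here.

```python
# Illustrative only: (risk_score, truly_urgent) pairs are made up.
cases = [(0.95, True), (0.70, True), (0.40, True),
         (0.60, False), (0.30, False), (0.10, False)]

def escalation_stats(threshold):
    # False negatives: urgent cases scored below the escalation threshold.
    fn = sum(1 for s, urgent in cases if urgent and s < threshold)
    # False positives: non-urgent cases escalated anyway.
    fp = sum(1 for s, urgent in cases if not urgent and s >= threshold)
    return fn, fp

for t in (0.2, 0.5, 0.8):
    fn, fp = escalation_stats(t)
    print(f"threshold={t}: {fn} missed urgent, {fp} needless escalations")
# threshold=0.2: 0 missed urgent, 2 needless escalations  (safe but noisy)
# threshold=0.5: 1 missed urgent, 1 needless escalations
# threshold=0.8: 2 missed urgent, 0 needless escalations  (smooth but riskier)
```

Hospitals, as the article notes, deliberately sit at the noisy end of this tradeoff; consumer products are pulled toward the smooth end.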
Looking ahead, Warier believes the future of conversational health AI will depend less on how natural these systems sound and more on how well they manage uncertainty, integrate with clinical data and earn the trust of those ultimately responsible for patient outcomes.
First Published: Jan 29, 2026, 12:48