"AI will be as common in healthcare as the stethoscope."

Dr. Robert Pearl believes tools like ChatGPT will make patients healthier, providers happier, and medical bills smaller

Published: Nov 19, 2024 10:50:38 AM IST
Updated: Nov 19, 2024 10:59:07 AM IST

By replacing some of what clinicians do today, generative AI can free up time for them to delve deeper into the socioeconomic and psychological determinants of health and expand the doctor-patient relationship.
Image: Shutterstock

Every year, an estimated 800,000 Americans die or are permanently disabled after a medical misdiagnosis. More than a million die as a result of complications from manageable chronic diseases such as diabetes and hypertension.

Dr. Robert Pearl, former CEO of the Permanente Medical Group, has traced these dismal outcomes to a toxic culture among doctors and a broken healthcare system that puts corporate profits above patients’ well-being.

Now, he believes, a remedy has arrived: generative artificial intelligence, the deep learning models that draw on huge quantities of information to answer complex questions in the blink of an eye. “I see it as the holy grail we have wanted for almost a century,” says Pearl, a lecturer in organizational behavior at Stanford Graduate School of Business and a clinical professor of plastic surgery at Stanford University School of Medicine. In his new book, ChatGPT, MD — written with assistance from OpenAI’s popular chatbot — Pearl argues that this technology stands apart from previous innovations because it goes beyond democratizing knowledge by offering personalized expertise that will give patients better outcomes and make doctors’ and nurses’ jobs less stressful.

Stanford Business spoke with Pearl about how he thinks generative AI will transform medicine for the better. 

Q. In your book, you describe a case that haunted you from your time as a practicing plastic surgeon. You and your colleagues diagnosed a massive bump on a newborn’s neck as a benign lymphangioma, but it turned out to be a rare and aggressive form of cancer. You regretted not performing a biopsy sooner. How could generative AI have prevented this situation?


There’s an expression in medicine: “When you hear hoofbeats, think about a horse, not a zebra.” Everyone assumed this was a lymphangioma, because that would have been the case in 99% of people. We fell prey to confirmation bias, finding the pieces that fit with our mindset and ignoring the ones that didn’t. If we had a generative AI tool, we could routinely input all the information we had, and it would have pointed out the possibility that the diagnosis could be different. I don’t think it would have made a difference in this case because it was such an aggressive tumor, but there are a lot of cases where it would.

Q. How is AI already being used in healthcare today?

There are three kinds of AI: Rule-based AI uses clinician-created algorithms to, say, offer a diagnosis based on someone’s symptoms. Narrow AI uses huge datasets, neural nets, and deep learning to solve a narrow problem, such as reading mammograms or brain scans. Generative AI draws on all the information on the internet and nearly every book and article ever written to answer any question. So far, it’s mainly being used for the administrative side of medicine, such as recording data in electronic health records or coding procedures for insurance companies. Now, patients are starting to use it to figure out their diagnosis, potential treatments, and the right questions to ask. But the tools aren’t yet ready to be used without clinician oversight.

Q. What potential does generative AI have to help patients in the future?

From the printing press to the internet to the iPhone, past technological advances gave patients access to more knowledge. Generative AI gives people the expertise to personalize that information. You’ll be able to input your medical history, lab results, medications — even your entire genome sequence — and get very specific answers.

In the short run, clinicians are going to use it to help patients better manage chronic diseases, which now impact nearly 60% of all Americans. If these diseases are well managed, you can decrease complications by 30% to 40%. We have excellent and inexpensive wearable devices that monitor blood pressure, pulse, blood glucose, and more. But patients don’t know how to interpret the data, and doctors don’t have time to review it. They say, “Let me change your medication, and I’ll see you in four months.” In the future, generative AI can guide patients with personalized recommendations for diet and exercise based on their needs and preferences and make sure doctors aren’t missing anything. Imagine if we had 30% to 40% fewer heart attacks, strokes, cancers, kidney failures. We’d have a much healthier country, we would all have more fulfilling lives, and the cost of medicine would plummet.


Q. What about medical professionals? Should they be worried about their jobs?

It’s not going to eliminate doctors and nurses; it’s going to augment what they do. By replacing some of what clinicians do today, generative AI can free up time for them to delve deeper into the socioeconomic and psychological determinants of health and expand the doctor-patient relationship. That’s going to improve fulfillment and diminish burnout, which are so problematic among doctors today. As we start loading information from people’s electronic health records and hospital monitors into generative AI tools, we’re going to better understand optimal ways to manage diseases and do procedures. In 5 to 10 years, we have the possibility to completely transform American medicine.

Q. You write that harnessing these benefits isn’t a given. What could stand in the way?

The technology is already really good and will continue to improve. The biggest challenge is our willingness as clinicians to empower patients. It’s going to require us to answer patient questions that are more challenging and to become more scientific in the care we provide. And it’s going to require us to change the reimbursement system from fee-for-service to a value-based model of care. No one is going to do something that erodes their income — we need an approach that rewards doctors for keeping people healthier rather than solely trying to reverse a complication when it ensues.

Q. When it comes to generative AI, people have concerns about misinformation, privacy breaches, and bias. To what extent should we be worried about this technology doing harm in medicine?

We should be worried about those things, but I encourage people to put that in the context of what currently exists. Right now, I worry about the security of my electronic health records. People seek information on Google, which makes money by selling your privacy. The internet and social media are filled with misinformation. Will generative AI increase these risks? I don’t think so — we need the same level of technical oversight and expertise to protect people. And in some ways, generative AI tools will be less likely to produce misinformation, since they derive their expertise from a massive number of textbooks and peer-reviewed journals. As a result, generative AI is more likely to reduce misdiagnoses than to increase them. Similarly, bias has certainly been highlighted in AI, but most of that bias is a reflection of what happens in clinical practice today. There’s actually an opportunity to use generative AI to reduce bias by writing prompts that prioritize this.

Q. You call ChatGPT a “co-author.” Why did you tap this tool to help write the book, and what was that like?

When I decided to write about generative AI, I faced a problem: If I wrote a book the typical way, it would take two years, and by the time it came out, it would be outdated. So I started by loading the 1.2 million words I had written into ChatGPT, so it understood my voice and writing style. Then I approached it as a research assistant, exchanging up to 200 different versions of the draft. I’d write a first draft, then ask ChatGPT for suggestions based on different audiences or to adjust the tone. The experience was positive overall, though it did hallucinate an exploration of the North Pole; that’s why I documented every fact I used. Working with ChatGPT, I was able to finish the book in six months, and it’s a better book than I would have written myself. (All profits go to Doctors Without Borders.)

Q. What’s your advice to healthcare leaders who want to successfully guide their organizations through this transformation?

Doctors have tremendous fears about how generative AI is going to change their practice, their income, their liability. Leaders will have to give people the courage to move forward and see why the risks of not doing so are greater: With the increasing cost of healthcare, the alternative is making access to care more difficult, which will create a vicious cycle where people become sicker and sicker and costs rise even faster. Transformation doesn’t happen by putting a toe in the water. Leaders have to embrace the technology, innovate, learn, and adjust accordingly. Already, 40% of physicians are willing to use generative AI when interacting with patients. Eventually, AI will be as common in healthcare as the stethoscope.

This piece originally appeared in Stanford Business Insights from Stanford Graduate School of Business. To receive business ideas and insights from Stanford GSB, sign up at https://www.gsb.stanford.edu/insights/about/emails
