AMA Update covers a range of health care topics affecting the lives of physicians, residents, medical students and patients. From private practice and health system leaders to scientists and public health officials, hear from the experts in medicine on COVID-19, medical education, advocacy issues, burnout, vaccines and more.
Featured topic and speakers
The following description was written by ChatGPT based on the full transcript of this episode:
"In this AMA Update video and podcast, John Halamka, MD, MS, president of the Mayo Clinic Platform, joins AMA Chief Experience Officer Todd Unger to discuss the integration of artificial intelligence (AI) and specifically, ChatGPT, in medicine. They explore the Mayo Clinic Platform and its use of generative AI, as well as the potential benefits and downsides of this technology in health care. The discussion touches on the importance of credible sources and the potential for misinformation, as well as the need to balance the reduction of human burden with the risk of harm from incorrect diagnoses. Overall, the video and podcast offers insights into the future of AI in health care and its responsible integration."
Speaker
- John Halamka, MD, MS, president of the Mayo Clinic Platform
Transcript
Unger: Hello, and welcome to the AMA Update video and podcast series. Today, we're talking about a topic that has been all over the headlines, artificial intelligence, or AI, and ChatGPT and how we might integrate it responsibly into medicine. I'm joined by Dr. John Halamka, president of the Mayo Clinic Platform.
Today, he's calling in from the Arizona campus of the Mayo Clinic in Scottsdale. I'm Todd Unger, AMA's chief experience officer in Chicago. Dr. Halamka, welcome.
Dr. Halamka: Well, hey. Thanks for having me.
Unger: Well, before we dive on into ChatGPT, I just want to find out a little bit more about the Mayo Clinic Platform. We're all familiar, of course, with the Mayo Clinic, but the Mayo Clinic Platform is different. Can you tell us a little bit more about what its purpose is?
Dr. Halamka: So I've worked in academic health care for 40 years. And ask yourself, are hospitals software companies? Not so much. Are hospitals very good at agile adoption of technology? Well, not always, but they could be.
And so Gianrico Farrugia, the CEO of Mayo Clinic, said in 2019, how do we instrument Mayo Clinic and its 75,000 employees to gather insights from patients of the past and create the AI, the decision support tools and the productivity of the future? And the Platform is that collection of technology, business processes and people to do that.
Unger: That sounds like a big job. In this new era that we're about to enter right now, of course, a technology called generative AI is a key part of that. And for those who don't know, can you tell us a little bit about it? I'll challenge you: 20 to 30 seconds to explain the concept of generative AI.
Dr. Halamka: As humans, we spend our lives learning how to speak based on the patterns of others speaking around us. Generative AI is not thought. It's not sentience. It's simply looking at, if you were to complete a sentence, what's the next word you'd add? Let me look at 10 million other people who tried to say the same thing, and we'll generate a sentence.
Unger: I'd say you met the challenge; that was a very good explanation. So ChatGPT is currently the most well-known innovation in generative AI. And right now, it feels like we see something new every week, kind of a race to see who can develop it and improve it the fastest. Can you talk about what it is and where it's headed?
Dr. Halamka: So if, again, you take the body of medical literature and we are able to say, oh, here's what the last 800 papers on a particular topic said, I think most clinicians would find that pretty useful. The challenge, of course, is that these generative models are trained not on, say, JAMA; they're trained on Reddit. So what that implies is that they're only as good as the training set that provided the exemplars for what word should come next.
So that's why they, at times, hallucinate. I think what we're going to see coming in the next couple of quarters is more health care-specific generative AI from credible sources that are well-disclosed and we can trust.
Unger: Even learning everything on Reddit, it still passed the USMLE, from what I understand. So obviously, there's a lot of innovation going on in this space. I mean, you said something that I think is at the back of everybody's minds right now, which is that it all depends on what the source is.
And so when we think forward to this, there are obviously going to be upsides and downsides to a technology like this, especially as it relates to health care. Let's talk a little bit more about that. How do you see the positive part of this and where this could go awry?
Dr. Halamka: Sure, so the FDA looks at software as a medical device in terms of risk. Now, I think we would agree that if I use ChatGPT to, say, appeal a claim denial and it got some of the facts wrong—I mean, what's the risk? It's unlikely that somebody is going to be harmed.
If I use ChatGPT for diagnosis of a complex medical condition, there's high potential for harm. So I think in the short term, you'll see it used for administrative purposes, for generating text that humans then edit to correct the facts. And the result is a reduction of human burden. And if we look at what I'll call the crisis of staffing and the great resignation and retirement of our clinicians, burden reduction is actually a huge win.
Unger: Now, people obviously have concerns regarding misinformation. I know one thing I'm concerned about is this: project yourself into the future, and we'd love it if whatever AI is out there were using the AMA and, of course, many other forms of legitimate medical information to answer the questions people type into Google. But there are obviously other sources, ones that we would consider misinformation.
In an article that you co-wrote for the Mayo Clinic Platform, you mentioned a 2022 Stanford University artificial intelligence report that found that most generative models are truthful, get this, only 25% of the time. So how do we take this into account as we move forward in this space?
Dr. Halamka: So I think of AI as not artificial intelligence but augmented intelligence. So let me give you an example. I did use ChatGPT and said, write a press release about Mayo Clinic's association with a new technology company. And you've read press releases. They start with "What is it you're doing?" There's a quote from a CEO and another CEO, and then there's some conclusion.
Well, it generated a perfect, eloquent and compelling press release that was totally wrong. So then I went in and edited all the material facts. And the end result was a perfectly formatted document that I could send off, done in five minutes, not one hour. So think of it as augmenting your capacity, not replacing our clinicians.
Unger: That's very interesting in terms of building those frameworks. It's kind of a shortcut but still requires you to, of course, insert the facts and the meaning into that. There are other concerns about how the technology could be used in questionable ways. What uses for ChatGPT would you rather not see gain traction in the medical community? And I know it's early to answer a question like that, but what are you looking at?
Dr. Halamka: So again, I look at where I want decision support. I would love to be able to say, we programmed ChatGPT with 10 million de-identified patient charts, and patients like the one in front of you had the following treatments from other clinicians in the past. I mean, that would be lovely. We're not there yet.
So just, again, assume that it is going to take a lot of text and predictably complete sentences that certainly look human-like but have no thought behind them. So don't depend on it where critical thinking or reasoning is required.
Unger: I should have had ChatGPT develop this entire interview, but I wasn't thinking far enough ahead. In a recent talk on this same subject, you also said that we need to disrupt our own business models. Doctors and administrators in health care will not be replaced by AI. However, doctors and administrators who use AI will replace those who don't.
And I'd guess that most doctors are not familiar enough with this type of tech at this point. How do they stay ahead of the curve and make sure they are among those who are using the technology the right way?
Dr. Halamka: Well, let me give you a case example. So I'm just about to turn 61. I care about the accuracy of colonoscopy. It just turns out, if you look at colonoscopy interpretations across this country, about 20% of the lesions in an endoscopy are missed by human interpretation. Mayo Clinic has developed an algorithm—trained by humans, of course—that reduces the error rate to 3%.
So as a clinician doing endoscopy, you're going to be a professional, you're going to be empathetic, and you're going to be technically very competent, and you'll be augmented by, hey, a guardian angel looking over your shoulder that says, oh, maybe you want to look at that little ditzel up in the right-hand corner.
So that's where I just think it's going to be highly useful. It will make us better physicians by having the oversight and experience of others who've come before us.
Unger: Now, given the concerns that we discussed and the pace at which all of this seems to be moving, it would make sense that we'd want to structure some kind of guardrails for use of this technology. Tell us more about the Coalition for Health AI and the guidelines that it's helping to create.
Dr. Halamka: So about a year ago, a number of us came together and said, if we're going to use health AI, we had better understand its utility, its bias and where it's going to do no harm. So let's put together implementation guidance. We brought together America's clinical academic leaders, government and industry leaders, and created this Coalition for Health AI, which is completely open source. You'll find it at coalitionforhealthai.org.
And it has all the guidelines and best practices. How do we measure bias? How do we ensure that we're doing no harm? How do we incorporate these new technologies into workflow? So we'll be working together. These questions are so difficult to answer that it cannot be done by any one agency or organization. It's going to require a community—all of us working together to get to the effective use of AI.
Unger: Dr. Halamka, we keep reading about all sorts of ways that people are using ChatGPT, from writing college papers to cover letters. You name it. I mean, it seems inevitable that this kind of technology is going to be developed fast and that it's going to be used across a lot of different aspects of our lives. What are you most excited about personally?
Dr. Halamka: As I look at the next generation of physicians, I think they will be less memorizers and more knowledge navigators. And so again, if ChatGPT and its ilk were able to digest large amounts of credible literature, large amounts of patient history from that 4,000-page chart, and then synthesize it into a form where a physician can then add the cognition, the decision making and the reasoning, you're going to be a much more satisfied clinician.
You'll be more productive. You'll be practicing at the top of your license. So how about this—if in my generation we can take out 50% of the burden, the next generation will have joy in practice.
Unger: That is a great way to end this episode, and that could be the real promise here. You said the magic word: a huge reduction in burden. Dr. Halamka, thank you so much for being here today. That's it for today's episode. We really appreciate your insights into the future of AI and ChatGPT in health care. I know it's a scary topic and it's moving fast. Of course, we'll keep our eye on that.
We'll look forward to talking to you again and getting an update in the coming months because it's only the beginning of the discussion. We've got a lot to look forward to. We'll be back soon with another episode. In the meantime, you can find all our videos and podcasts at ama-assn.org/podcasts. Thanks for joining us. Take care.
Disclaimer: The viewpoints expressed in this video are those of the participants and/or do not necessarily reflect the views and policies of the AMA.