Moving Medicine

AI, health care, and the strange future of medicine

May 28, 2024

AMA President Jesse Ehrenfeld, MD, MPH, leads a discussion with three other physicians on the use of AI in health care. Panelists are Claire Novorol, MD, PhD, the founder and chief medical officer of Ada Health, a digital health company using AI to help diagnose and treat patients; Mark Sendak, MD, MPP, a population health data science lead at Duke Institute for Health Innovation; and Alex Stinard, MD, an emergency room physician and regional medical director of Envision Health Care in Florida.

Speakers

  • Jesse Ehrenfeld, MD, MPH, president, AMA
  • Claire Novorol, MD, PhD, founder and chief medical officer, Ada Health
  • Mark Sendak, MD, MPP, population health data science lead, Duke Institute for Health Innovation
  • Alex Stinard, MD, emergency room physician and regional medical director, Envision Health Care

Host

  • Todd Unger, chief experience officer, AMA

Listen to the episode on the go on Apple Podcasts, Spotify or anywhere podcasts are available.

Unger: Welcome to Moving Medicine, a podcast by the American Medical Association. Today’s episode is a session with AMA President Dr. Jesse Ehrenfeld at the South by Southwest Conference in March 2024. In it, he moderates a panel of four physicians discussing AI, health care and the strange future of medicine. Panelists include Dr. Claire Novorol, the founder and chief medical officer of Ada Health, a digital health company using AI to help diagnose and treat patients. Dr. Mark Sendak, a population health data science lead at Duke Institute for Health Innovation. And Dr. Alex Stinard, an emergency room physician and regional medical director of Envision Health Care in Orlando, Florida. Here’s Dr. Ehrenfeld.

Dr. Ehrenfeld: We've got four technology experts. They're not just technology experts, they're all physicians. My name is Jesse Ehrenfeld. I'm an anesthesiologist, I'm a physician, and I also happen to be, this year, the president of the American Medical Association. When I think about AI and AI-enabled tools and their potential to radically transform the health care system, I think about them through the lens of my work as a physician, my work as an educator in medical school, but also as a public health advocate. Look, we all know that health care is a mess today. Has anybody tried to find a new primary care doctor lately or schedule an appointment with a specialist? 49 days is the average wait time to get in to see somebody for routine care. Long delays, record demand, workforce shortages and clinicians who are increasingly burned out. A lot of challenges.

So, we know that if we're going to get out of this mess, we're going to have to lean on technology. We know that AI has to make things better. The question is how do we make sure that happens. I can't tell you what the future of medicine is going to be, but I can tell you that if physicians aren't driving and helping shape the development of these tools and technologies, we're going to have more tech that doesn't work, tech that's more of a burden than an asset.

So before we get started, in the interest of full transparency, I want to mention that I serve on the AI advisory council for Augmedix, a publicly traded virtual scribe company, as does Alex. I'm also a consultant to Masimo, a medical device company, and an advisor to Extrico Health, a health data startup, but I'm here today moderating this conversation.

So let's dive in. Let me ask each of you to introduce yourselves. Claire, if you don't mind starting. Tell us a fun fact about yourself and what you're currently doing in the AI space.

Dr. Novorol: Hi, everybody. So I'm co-founder and chief medical officer of Ada Health. As you may be able to hear, I'm from the U.K. I worked as a doctor in the U.K. and co-founded Ada more than 12 years ago now. By way of Cambridge and then London in the U.K., then Berlin for a number of years, I made my way over here, and for the last two years, I've been living here in the U.S. Ada is best known for its consumer symptom assessment app. We have over 10 million registered users of that app, over 30 million completed health assessments and hundreds of thousands of five-star reviews in the app store. It's a probabilistic symptom assessment covering thousands of conditions, symptoms, findings and risk factors, and we then help people take their next steps. The app is completely free to download, use and try out.

Things that we're working on right now, so we also work with many large health systems across the world, in Europe, here in the U.S., several of the largest health systems, also with government health systems, health plans, and we work with life science companies as well. The focus there really is on accelerating the path to diagnosis and treatment for people who have conditions that are often hard to diagnose, underdiagnosed with delays in accessing treatment, so rare diseases and even common diseases that are often underdiagnosed.

I think the other thing maybe just to mention: our system is not a generative AI system, but we are doing a lot of work on integrating with these new generative AI tools and complementing what we do with them, which I'm sure we'll get into a bit.

Dr. Ehrenfeld: Perfect. Thanks, Claire. Mark, fun fact. What are you doing in AI?

Dr. Sendak: So, I've been a population health data science lead at the Duke Institute for Health Innovation for nine years. Every year, we work on about 10 projects that are internally sourced. We function as an internal R&D group within the health system, hard-money funded, and probably about 30 to 40% of our portfolio over the years has been machine learning and AI. So, we've done chronic disease management, sepsis, acute care, ED triaging, perioperative scheduling. Then two to three years ago, we started a multi-stakeholder national collaborative called Health AI Partnership. So now, I spend a lot of time also thinking nationally about what policies and regulations we need to be advocating for to help disseminate best practices on the ground within health care delivery organizations.

Dr. Ehrenfeld: Perfect, and we'll get into a lot of that, obviously, in the course of the conversation. Alex, what do you do?

Dr. Stinard: All right. My name is Alex. I work for HCA Healthcare. We're a pretty large health care organization, the largest provider of inpatient medical services in the country. I do emergency medicine, and I'm at the intersection of care and AI. I work with Augmedix and HCA developing ambient documentation solutions, so it's basically having a computer listen to our conversation and make a note. So, we're trying to make it so that being a doctor is not so painful, because it is painful right now.

Dr. Ehrenfeld: Thanks for that. Just so people understand the pain: at the peak of COVID, for a hot second, physicians, nurses, health care workers, we were cheered. I remember walking into the hospital one morning and there was this chalk art saying, "Thank you. Thank you for what you're doing." Then suddenly, we got demonized and attacks on science started happening. That's, I think, one of the unfortunate legacies of COVID. At the peak of the pandemic, two out of three physicians in the U.S. were burned out, with clinical signs and symptoms of exhaustion and burnout. That's receded a little bit, but unfortunately, it hasn't gone away, and it's driven by the tremendous pressures on the health care system. We do not have enough doctors, we do not have enough nurses to take care of the exploding demand that exists every day in America. People are living longer, there's more chronic disease, and there are not enough of us to go around, which brings us to this conversation about how the tech can actually improve things, scale capacity and augment what we're able to do.

So it's clear AI is top of mind for physicians and health care professionals, but some have been using similar tools for a while now. What's changed in the last few years? You've got a company, and you're not new to this. You've been practicing for a while at HCA and thinking about technology for quite a while. Something feels different about this moment, this era. What is it? What's changed? What should we be thinking about?

Dr. Novorol: So, we've been doing this for more than 12 years. I guess if I look back 12 years, first of all, there was a lot of excitement and aspiration, or at least the beginnings of the excitement and aspirations, and a lot of talk, but not much really happening in health care back then. Then I think it started with simple point solutions, admin tools, nonclinical AI solutions. Then, of course, you had the narrow AI machine learning solutions for very specific tasks in radiology and so forth, trained for that very specific use case.

Ada, what we do, is also purpose-built for a very specific task. It covers the breadth of medicine, thousands of conditions, findings, symptoms and so forth, but it's trained for that very specific task. We have seen that evolution, but of course, recently what we've seen is these general-purpose AI tools across the board.

So obviously, generative AI, LLMs, tools like ChatGPT have really gone mainstream, and people are starting to look at how to use these across all industries, but especially in health care. That's really interesting because they've been trained on the whole internet; they haven't been purpose-built just for health care. There's so much potential there, and we really need tools with potential like that in health care, but there's obviously more risk.

We really need that focus on safety and quality, working with clinicians, working with the systems, partnering with them, and really thinking about those things. The speed of some of the change that's going on, and the testing, it's so mainstream, anyone can test these things out. I think that's so different, and I'm sure we'll get into it, but obviously there are some challenges with these tools, like hallucinations and non-determinism. You can put the same input in, but you will get a different output each time. These are really challenging things to overcome.
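To make the non-determinism concrete, here is a minimal Python sketch of sampling-based text generation, the mechanism behind what Dr. Novorol describes. The token scores are invented for illustration; the point is that a temperature of zero gives the same output on every run, while any positive temperature samples from a distribution and can give a different one.

```python
import math
import random

def sample_next_token(logits: dict, temperature: float) -> str:
    """Toy next-token sampler. At temperature 0 we take the argmax
    (deterministic); above 0 we sample, so outputs can differ per run."""
    if temperature == 0:
        return max(logits, key=logits.get)
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for float rounding

# Invented scores a model might assign to candidate next words.
logits = {"pneumonia": 2.0, "bronchitis": 1.4, "anxiety": 0.9}

print([sample_next_token(logits, 0.0) for _ in range(5)])  # identical every run
print([sample_next_token(logits, 1.0) for _ in range(5)])  # varies run to run
```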

Dr. Ehrenfeld: Mark, Alex?

Dr. Sendak: So, three quick things. One, to build on Claire's comment, with the product life cycles, you see a lot of use cases where we can implement and test things more quickly. When you're working on certain use cases now, you don't have to start from ground zero of curating data sets and training models. You have something off the shelf that you can start to test and then start to optimize. So, there's this whole set of tasks that we can start to tackle more quickly.

The second one, and I'm curious about the makeup of this room, probably big companies, small companies, is something I have seen from an innovation standpoint. It hearkens back to when I first got started in this space, around 2012, 2013, when the massive EHR companies were rolling out. You had massive contracts, hundreds of millions of dollars; at some health systems, it was billions of dollars to do installations, and it really tamped down internal R&D and internal product development, because you had a lot of reliance on external large companies to build solutions for everything. I'm seeing that right now again with the way that people perceive large incumbents as being able to tackle everything. So, it does create this environment again where we have to show people that, "Oh, there's a lot of expertise internally that we can harness to solve problems."

Then the third one is the regulatory piece. Health systems have been implementing algorithms in clinical care for decades, but just in the last two years, there are a lot of forces, different regulatory agencies and states coming out with things, and health systems are struggling to make sense of what they should be paying attention to and how it changes what they do in practice. So, I think it's the technology, but with some of these other actors, there's a lot more energy.

Dr. Ehrenfeld: We're definitely seeing, I think, that regulatory pressure because of concerns related to some of the challenges around how do we not exacerbate bias, how do we not harm patients of color because an algorithm was trained on a dataset that had bias in it, how do we make sure that we improve access and equity through the deployment of these tools. We have seen some spectacular failures of algorithms and AI tools when deployed at scale. Alex, what's your thought? What's different now than we were a few years ago?

Dr. Stinard: So I noticed when I was wearing Google Glass to see patients a few years ago, there was excitement.

Dr. Ehrenfeld: Of course you were wearing Google Glass.

Dr. Stinard: Certainly, when people looked at me they were like, "What is that? What are you doing?" but now when I say, "Hey, I'm using the AI scribe," there's a lot of excitement. The patients have excitement and the physicians have excitement. I got emails yesterday, texts yesterday: "Hey, when are we going to get that AI scribe? I really want to get my hands on that." Then patients say, "Hey, you're using an AI scribe? I'd love it if you do that." So, I think just the buzz of AI hitting the community makes everybody super excited about it, but I think they're excited because they use it in their daily life.

So, if you think about a year ago, people were saying, "What's ChatGPT? What's Gemini? What's Claude?" but now that it's getting incorporated into their lives, they see the actual, true value of it. It's like, "Wow, I can clean up an email really easily. Maybe my doctor should be using those same tools when they're writing their note."

Another thing I noticed, and I've been doing medicine about 20 years, is that this is the first year we had fewer clicks. My wife knows about clicks, but my kids don't really know about clicks, because they only think about clicks on their phone, not clicks on a mouse, and we do most of our work on a desktop. This is the first year where we spent less time at our computer, meaning probably more time with the patient, not more time in the doctor's lounge. This is the first year where I can say we spent more time with our patients than with the computer.

Dr. Ehrenfeld: Less clicking, certainly a laudable goal. An AMA study and others have shown that for every hour we spend seeing patients, talking to you in an exam room or the clinical setting, it's two hours on paperwork in the EHR. That should be flipped around. Obviously, there are opportunities for the tech to help with that. So, a lot of challenges in developing these tools in a health care system. What are some of them?

Dr. Stinard: So, when we are seeing a patient, health care is pretty important, meaning that there are literally lives on the line. So, we want to make sure that we are validating everything the computer does with a doctor. We're not trying to have the computer running by itself making medical decisions. We certainly would love to have a double check on the doctor, meaning, "Hey, did you send home that patient with that heart attack? Hopefully not, because you looked at that lab." If as a doctor you have hundreds of numbers to check each shift, there's a possibility you could overlook something. So having a double check, but also having help with your everyday tasks. I wouldn't really try to do my email without spell check. Do I really want to go to work without that extra check?
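A minimal sketch of the kind of "double check" Dr. Stinard describes: a rule that scans pending results before discharge and flags anything a busy physician might have missed. The lab names and thresholds here are illustrative only, not clinical guidance; a real system would use locally validated rules and many more checks.

```python
# Illustrative thresholds and field names, not a clinical standard.
ABNORMAL_HIGH = {
    "troponin_ng_l": 14.0,    # example high-sensitivity troponin cutoff
    "potassium_mmol_l": 6.0,  # example critical hyperkalemia cutoff
}

def discharge_safety_flags(labs: dict) -> list:
    """Return warnings for any result above its threshold, so the
    physician gets a second look before the patient goes home."""
    return [
        f"{name} = {value} exceeds {ABNORMAL_HIGH[name]}; review before discharge"
        for name, value in labs.items()
        if name in ABNORMAL_HIGH and value > ABNORMAL_HIGH[name]
    ]

patient_labs = {"troponin_ng_l": 52.0, "potassium_mmol_l": 4.1}
for warning in discharge_safety_flags(patient_labs):
    print(warning)  # the doctor, not the computer, makes the final decision
```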

Dr. Ehrenfeld: I like that. I'll tell you, I am an anesthesiologist. I see patients, and I had this very strange experience the other day. I'm a pretty sophisticated electronic health record user. I'm an informaticist, president of the AMA. So, I look at the chart, I see my patient, I talk to her about surgery and anesthesia, and I walk out of the room, and just as I'm about to leave she goes, "Doctor, one other thing." I turn around. She goes, "I don't want to see what happened the last time happen this time." So, I'm like, "What happened last time that I didn't see in the note?" She goes, "Well, I had a cardiac arrest in the recovery room." I was like, "Yeah, I don't want that to happen either."

I went back, and buried in a little tiny nursing note, in a place that I would never have thought to look, in no structured way, was this indication that, yes, she in fact had a cardiac arrest. That critical piece of information shouldn't be hidden. It shouldn't be latent. It should be served up in a way that is obvious and actionable when we need it. Obviously, there are ways that these tools can help. All right. Mark, Claire, challenges?

Dr. Sendak: So one of my favorites, which is not the tech answer, but it's change management. So for me, there's been these themes of successful projects. We've had a few on the inpatient side where we actually reconfigured our rapid response team to be a patient response program that was proactively—

Dr. Ehrenfeld: What's a rapid response team?

Dr. Sendak: Good point. I think this was inspired by Toyota that had this cord in the assembly line, that you could pull the cord and it would stop the assembly line and people would go to that place and solve the problem. So, they brought that into health care. I think this was in the 2000s, where sometimes it is a button or a cord where in a room you can initiate a rapid response where people come to the room and address some acute event that is happening with a patient. So it's interdisciplinary, typically very responsive and reactive.

So, what we did is we started with sepsis, then we did cardiac deterioration, ICU transfers, mortality. For each of these, we have algorithms that are running to proactively identify high risk, and then these interdisciplinary teams will go see those patients. This for me is an example of where, when you're building tools and technology in health care, there is this dogma to seamlessly integrate into clinical workflows, meaning that you have to take what you're building and fit it into the way the clinician is doing their job already. But oftentimes, if you want to actually change the way that health care is delivered, you need to change the way roles are structured, you need to change communication channels. So for me, that's the untapped potential, because if we start to relax some of the constraints of our workflows and our structure, it opens a lot of opportunity to actually deliver care in a different way.
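A minimal sketch of the proactive pattern Dr. Sendak describes: a model scores every inpatient on a schedule, and high-risk patients are routed to an interdisciplinary team rather than waiting for a bedside alarm. The scoring function, the census data and the `page_response_team` helper are hypothetical stand-ins, not Duke's actual system.

```python
RISK_THRESHOLD = 0.8  # illustrative cutoff; tuned and validated locally in practice

def deterioration_risk(obs: dict) -> float:
    """Stand-in for a trained model; here, a toy early-warning-style score."""
    score = 0.0
    if obs["heart_rate"] > 110:
        score += 0.4
    if obs["systolic_bp"] < 90:
        score += 0.4
    if obs["lactate_mmol_l"] > 2.0:
        score += 0.3
    return min(score, 1.0)

def page_response_team(patient_id: str, risk: float) -> None:
    """Stand-in for paging/notification infrastructure."""
    print(f"Response team paged: patient {patient_id}, risk {risk:.2f}")

# A toy ward census; in reality these observations stream from the EHR.
census = {
    "pt-001": {"heart_rate": 124, "systolic_bp": 86, "lactate_mmol_l": 3.1},
    "pt-002": {"heart_rate": 78, "systolic_bp": 118, "lactate_mmol_l": 0.9},
}

for patient_id, obs in census.items():
    risk = deterioration_risk(obs)
    if risk >= RISK_THRESHOLD:
        page_response_team(patient_id, risk)  # the team goes to the patient proactively
```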

Dr. Ehrenfeld: Thanks. Claire, you mentioned hallucinations. Other challenges or you can talk about hallucinations.

Dr. Novorol: Yeah, I will build on the change management point, and I think this is so critical. Of course, health care is complicated, and these health systems that we want to integrate our AI tools into are complex. That's one of the biggest challenges. Actually, if we go back 12 years to when we started Ada, we started building our probabilistic system with the goal of supporting doctors in clinical decision-making and diagnosis. We quickly learned. We had a prototype and we were testing it with doctors, primary care physicians in clinics. They just didn't have the time. They didn't have the time to do double entry, to enter this information twice, and didn't necessarily believe they were going to get enough benefit from using it to spend that time.

So, we actually flipped our approach and translated all that medical knowledge into patient-friendly questions and information so that we could put it in the hands of the patients and actually gather that information before the doctor consultation as a pre-assessment. The patient has the time and all this information, and instead of having the doctor have to ask all of those questions during the consultation, you collect that information from the patient in advance.

Each question is built on the ones before, asking what's the most relevant next question, thinking much like a doctor thinks, reasoning in that way. We then not only steer the patient appropriately, but hand all of that information over to the clinician into the health record, so that it saves them time and it's there if they want it. They don't have to spend any time putting extra information in; it's saving time rather than taking time. I think those are the challenges that you have to think laterally and creatively around, because I think we would still be banging our heads against a brick wall if we were really, really trying to put this in the hands of doctors and say, "You must spend the time. You must put all of this information in yourselves."
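A minimal sketch of the adaptive questioning Dr. Novorol describes, assuming a toy Bayesian model: beliefs over conditions are updated after each answer, and the next question is the one expected to reduce diagnostic uncertainty the most. The conditions and probabilities are invented for illustration and are not Ada's actual model.

```python
import math

# Toy prior over conditions and P(answer is "yes" | condition) per question.
prior = {"migraine": 0.5, "tension_headache": 0.3, "sinusitis": 0.2}
questions = {
    "Do you have nausea?":          {"migraine": 0.8, "tension_headache": 0.2, "sinusitis": 0.3},
    "Do you have facial pressure?": {"migraine": 0.2, "tension_headache": 0.2, "sinusitis": 0.9},
}

def entropy(dist: dict) -> float:
    """Shannon entropy: how uncertain the current belief is."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior: dict, p_yes_given: dict, said_yes: bool) -> dict:
    """Bayesian update of the condition beliefs after an answer."""
    unnorm = {c: p * (p_yes_given[c] if said_yes else 1 - p_yes_given[c])
              for c, p in prior.items()}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

def expected_entropy(prior: dict, p_yes_given: dict) -> float:
    """Uncertainty we expect to be left with after hearing the answer."""
    p_yes = sum(prior[c] * p_yes_given[c] for c in prior)
    return (p_yes * entropy(posterior(prior, p_yes_given, True))
            + (1 - p_yes) * entropy(posterior(prior, p_yes_given, False)))

# Ask the question expected to shrink diagnostic uncertainty the most.
best = min(questions, key=lambda q: expected_entropy(prior, questions[q]))
print("Next question:", best)
```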

Dr. Ehrenfeld: The AMA does a lot of surveys of our members, physicians nationally. In a nationally representative survey of U.S. physicians that we did in the fall, roughly four out of 10 physicians in the U.S. today are using AI tools. Now, it's mostly the unsexy backend operations stuff: supply chain management, scheduling, billing, fighting with the insurance companies over prior authorization, those kinds of tools. But where do you see AI going in the short and the long term, and what do we need to make sure that we're successful in having deployments that actually can solve problems?

Dr. Stinard: So I guess with AI, you'd hope that we get better health care, cheaper. I think that's what we'd all vote for, and making sure that everybody has access. Those are pretty big goals, saying that we're going to have health care that is available to everybody and costs the same as a gym membership. Do you think we could do that? That would be life changing. I think that's going to take a little while, but right now, if we can just get to the point where we can make health care more attainable, meaning that you can access your information, it's your information, and you can access it the way you want to. Most people want to access it on their phone.

So, I think making it so that you have access to your own information. If you're able to look at that information and organize it in a way that you as a patient can understand, not in physician jargon, that would also be very empowering. Then take it to the next level and say, "Can we take action on that information?" meaning that you have clinical decision support that's supervised by a physician: "Now I have information, I can analyze that information and I can take action on that information," meaning that we can try to make you feel better and live longer. That's what we're trying to do.

Dr. Ehrenfeld: Perfect, Alex. Claire? Mark?

Dr. Sendak: So, I'm really happy Claire's here, because her company is patient-facing. I work in a health system; I've been in a health system for 13, 14 years. We build a lot of tools, and we're really scared to implement things that are patient-facing. This is not, I think, unique to any single institution. Health care remains highly centralized. Expertise remains highly centralized. So for me, yes, maybe we'll see bigger diversity in clinical operational use cases within hospitals, but I see the real shift being the opportunities for patients to be much more users and consumers of AI built on data that historically would not have been accessible to them; the health systems weren't building those products. So, it's going to happen. It is happening, and I think we're going to have to start to build those bridges between different communities trying to serve these needs.

Dr. Novorol: In order to be patient-facing, you need to ... we're a clinical tool, so there are a lot of use cases for AI, very valuable use cases, on the nonclinical side, the admin side, improving workflows, creating efficiencies. But then there are also clinical AI tools, and we are one of those. We do make the workflow more efficient and hand over the information to the clinician and so forth, but we are doing clinical assessment. That means you need an enormous focus on safety and quality. We produce peer-reviewed evidence. We are a regulated Class II medical device in Europe. We have years of safety data, data on performance and efficacy out in the real world. That's a huge mountain to climb, and it needs years of dedicated focus.

I agree with you on the importance of the patient-facing side of these tools, but with these new generative AI capabilities, I think it will take longer to put those in the hands of patients. The safety side, the reliability, being confident that you get the same answer each time with the same inputs, that you don't have these hallucinations, these completely confabulated sentences that sound highly plausible to the layperson but are completely made up. We need to really overcome some of these challenges and be very confident in these tools before we put them in the hands of patients.

Dr. Ehrenfeld: So let's stay with the patient concept, because we've got a lot of patients sitting here, potential patients in the audience. When you walk into a clinic or health care setting, or you download a health app that's got AI in it, as a patient, what should you be asking? How do you know whether you can trust this tool or device, or the answer you're getting from somebody who's using this health AI tool?

Dr. Novorol: It's a really important question. There are certain things that patients, consumers should be looking for and asking themselves, but I would start with the fact that it is hard for the consumer to do some of these things and to know that this is a company they can trust, that's credible and a product they can trust. So ultimately, the burden of responsibility does sit with us and with the industry to really have high standards and ensure that those are met. So I would say all of the organizations building these tools and deploying them, partnering with AI companies really, really need to take a very responsible approach.

As a consumer, I think you look and you see: how seriously does this company take safety and quality, and do they measure and evidence that? Do they have clinicians on the team? That's one thing you can look at. It's no guarantee, but it does show a commitment to being clinically led. What do they do with my data? How serious is this company about privacy and security, HIPAA compliance, GDPR compliance? Do they sell my personal health data, or do they guarantee that they'll never share it without my consent, beyond the specific use cases I ask them to share it for? All of these things you really would ideally look out for, but patients and consumers don't read the small print most of the time, so we really have a responsibility as an industry.

Dr. Ehrenfeld: That's great. The whole concept of what companies are required to do versus what they actually do, what's voluntary, what's regulated, is a bit of a moving space, Mark, and I know that you have a lot of thoughts about that. I'll give a terrible example of something horrible that a company did that I saw. You all know HIPAA, privacy, patient data. HIPAA only applies to certain entities, covered entities. You have to meet the definition of a covered entity, meaning in most circumstances that you provide health care, but if you're just a company and you're not a covered entity, HIPAA doesn't apply to you.

So, there's a company out there and I don't need to name them, but they label their app as HIPAA-compliant. HIPAA doesn't apply to them because they're not a covered entity. So, they can sell the data, they can do anything they want with it. They're technically HIPAA-compliant because HIPAA has nothing to do with this company. That is completely misleading. Companies should not do that, and yet we're starting to see examples of that unfortunately on the consumer, patient side. Alex, Mark, other thoughts?

Dr. Stinard: So I can give you a story about being a patient. I was a patient last week. I think everybody's going to be pretty familiar with this story. I went to my doctor and I had an appointment. I went to the room and got to sit for two hours. Then when the doctor came in, and I've had the same doctor for 10 years, we talked about what was going on, and then when he was leaving—we call it the doorknob question—I realized I'd forgotten to ask him another thing. I felt almost like I was being a burden, even though we're friends, because he's so far behind. I know he's behind because it took him two hours to come see me, and I know he wasn't having a latte.

Then we talked about what was going on, and we felt rushed. As a patient, when he comes in, it's almost like a transaction. He's thinking, "I see what's on your chart. I know why you're here. I'm trying to get in and out because I'm already late and I need to get this done." So, we're not having the niceties of how the family's doing. I'm not able to express myself, because it's "get that check mark done, you're here for your blood pressure," and then, "Oh, yeah, I forgot I had a question about my diet." To ask that almost makes me feel like a burden, even though I waited to come talk to my doctor, who maybe I only see once or twice a year. I feel rushed, like I can't really express myself to him, and that's not because he's a bad doctor; it's because of the workload he has.

So, some of the ways that we're trying to solve that here and now would be: let's take that AI tech and do a pre-chart for him. A pre-chart would be, "Hey, what's going on with Alex? What medicines is he on? What's in his past medical history? We're going to validate that." So, part of his chart's already made, the who-am-I part. Then for the why-am-I-here part of the visit, we have a recording going with automatic speech recognition, a transcription of what occurred in the room, so he doesn't have to be at a computer looking at it and thinking, "My working memory only has 20 pieces of information, so I need to write this down because I might forget it, because I might have to do this at my pajama time," when he's doing his notes tonight.

So, he's doing part of the note now, making sure he doesn't forget critical facts, rather than trying to do it late at night. You can see why he feels rushed, but if we're able to have ASR, the whole transcript that he can look at later, he can just talk to me as an individual and not feel rushed, and I can get everything I need out. Then a benefit on top of that is it's going to write the note for him in his language, the way he thinks and the way he writes his notes, so that he can just spend that time with me talking and coming up with a real plan, because that's why I went to the doctor, not to say, "Hey, I'm here for a check mark. I need to get my blood pressure checked," and I'm in and out. That's the real goal: to make it so that right now you can go see your doctor, not feel rushed, and get everything done, and so that the doctor doesn't actually feel rushed either.
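A minimal sketch of the ambient-documentation pipeline Dr. Stinard describes: audio goes to speech recognition, the transcript goes to a generative model that drafts a note in the physician's own style, and the physician reviews before signing. Both `transcribe` and `draft_note` are hypothetical stand-ins, not any vendor's actual API.

```python
def transcribe(audio_path: str) -> str:
    """Stand-in for an automatic speech recognition (ASR) service."""
    return ("Doctor: What brings you in today? "
            "Patient: My blood pressure check, and I had a diet question...")

def draft_note(transcript: str, style_examples: list) -> str:
    """Stand-in for a generative-model call that drafts a note conditioned
    on the physician's prior notes, so it reads the way they write."""
    prompt = ("Draft a clinic note from this visit transcript.\n"
              f"Match the style of these examples: {style_examples}\n"
              f"Transcript: {transcript}")
    return f"[draft note generated from a {len(prompt)}-character prompt]"

transcript = transcribe("visit_audio.wav")  # recorded with patient consent
draft = draft_note(transcript, style_examples=["HPI: ...", "Assessment/Plan: ..."])
print(draft)  # the physician reviews, edits and signs; the AI never files the note
```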

Dr. Ehrenfeld: I think that's right. When I go and see patients, which I do most weeks, the value I bring isn't in filling out pieces of paper or clicking buttons. It's in putting my hand on the shoulder of the patient and helping them through, in many cases, a hard moment, because I'm an anesthesiologist and they're coming in for surgery. The AMA has a pilot on right now that's on that pre-charting spectrum, which is to basically ... we all continue to learn and go to continuing medical education meetings, but that's not a very efficient way to get me the knowledge I need in the moment. So there's a pilot that we've got going on using AI. In many circumstances, we can predict which patients clinicians are going to see, because they're scheduled on their panels, and it pre-screens the patients that are coming in: conditions maybe you haven't seen in a while, things that may be unusual. Then it serves up these little bite-sized nuggets of clinical knowledge that might be useful.

In early pilots, a very brief interaction with this tool, 10 minutes a week, is changing clinical decisions for patients in those practices where it's being piloted. So those kinds of AI tools that can pre-screen, help focus information and collect information are, I think, really exciting. Mark?

Dr. Sendak: So, on the patient perspective, one very concrete thing. I know we're here in Texas, and I know that there's interest in women's health. When we talk about the implications of privacy and non-HIPAA-covered data collection, there's been FTC, Federal Trade Commission, action taken against companies that are selling data. So depending on the medical condition, you have to be very careful how you share data with tools that are not your doctor's office, because, unfortunately, that's primarily what HIPAA covers. Your data, data about you and the medical conditions in your data, may be shared with companies, even with state actors.

The other piece on the patient perspective: in the fall I had a surreal experience where I went to the hospital with a loved one who, as we found out, had sepsis. I was with this person prior to going to the hospital. We went to an acute care visit, then to the hospital. I helped build the sepsis algorithm that runs at the hospital we were at. I know everything that goes into it. I know the definitions of sepsis. So, it was clear to me the trajectory this person was headed on. We were still in a packed ED. We saw the provider up front, where we had an initial triage. We waited back in the waiting room for another hour or two, waited to get IV antibiotics started. We ended up going to an overflow section.

So, this was an adult who ended up going to an overflow bed in a pediatrics unit. Lovely fish on the walls, but it was, I think, a reminder for me that you can have the most amazing technology, but there are still core operational challenges that hospitals face. In the end, the algorithm was going to do nothing unless this person got the treatment that she needed, which she did, but it wasn't the technology.

Dr. Ehrenfeld: All right. So, I've got two quick questions for each of you. Claire, here's my first question: there's a very broad range in the accuracy of AI platforms, so how can somebody vet the quality of the health information they're getting from an AI platform they might use?

Dr. Novorol: Yes, and for the layperson it's often difficult, but it's about using products built by reputable organizations that take quality, safety and accuracy really seriously. That's something that we've done for years. We publish in reputable journals like the British Medical Journal. We test our tools on edge cases, difficult cases, less common types of presentations, thousands of different presentations, against other tools like ours and against doctors, seeing how they perform. We're using our tools for triage and navigation to care and letting people know what they might have and what to do next, so it's really important that we do that testing. But as I say, I think ultimately it's on the industry to ensure this, and that's where regulations and standards really come in.

Dr. Ehrenfeld: All right. So, what should people know about their health data and how it's kept private? What do people need to know about their health data privacy? If they're going to put something into a system, what should they be asking? Then I'll come to you.

Dr. Novorol: What's happening? What are you doing with my data? How are you using it? What are you using it for, and are you going to share it with anybody without my consent? Because you don't want that data shared without your consent. There are companies where in the small print you'll see that they will use your data for marketing and advertising purposes and so forth. So, you really want to be careful, read that small print and check the opt-out box. But really, health care applications should be upfront; they should get explicit consent for sharing your data. You need to know who it's being shared with and why, for what purpose, and explicitly consent to that. We're really big on privacy and security. We have to be in our space. We comply with GDPR. We're actually a European-headquartered company, in Germany, and you don't get stricter than that: Germany and GDPR. We ensure that your personal health data is stored separately from your identifying data, and they only ever come together on your mobile device, with your secure token, with your password and so forth. The thing is, people are not going to read all of this small print and know all of this, but you only want to share your data with companies that are telling you, "We'd never share this without your consent, only for purposes of your health care and when you tell us we can."
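A minimal sketch of the separation Dr. Novorol describes, assuming a simple two-token scheme: health records and identifying details live in separate stores, and only the user's device holds the pair of keys needed to join them. This is an illustrative pattern, not Ada's actual architecture.

```python
import secrets

identity_store = {}  # names, emails (one service/database)
health_store = {}    # symptoms, assessments (a separate service/database)

def register(name: str, email: str):
    """Issue two unrelated random tokens; the pair lives only on the device,
    so neither backend store can link identity to health data by itself."""
    id_token, health_token = secrets.token_hex(16), secrets.token_hex(16)
    identity_store[id_token] = {"name": name, "email": email}
    health_store[health_token] = []
    return id_token, health_token

def record_assessment(health_token: str, findings: list) -> None:
    health_store[health_token].append(findings)

def view_my_record(id_token: str, health_token: str) -> dict:
    """Only the device, holding both tokens, can join the two stores."""
    return {"who": identity_store[id_token], "health": health_store[health_token]}

id_token, health_token = register("Jane Doe", "jane@example.com")
record_assessment(health_token, ["headache", "nausea"])
print(view_my_record(id_token, health_token))
```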

Dr. Ehrenfeld: I was at a tech health innovation challenge with a bunch of companies doing pitches and all the things. For one of the companies, part of their business model was how they were going to monetize the patient data. I just vomited in my mouth as soon as they got up on stage talking about this. All right, Mark, you can follow up on that, but my question for you is bias. Bias is such an important part of the AI conversation. How do we mitigate bias?

Dr. Sendak: First I want to tie her response back to something you said when we opened up, about COVID and the loss of trust in science and expertise. I was working full-time on innovation in a health system, COVID hit, and I got redeployed for 16 months to work on COVID full-time. Then after that, I went back to working on innovation and AI, and it was one of the most rattling things for me. When we talk about what somebody should ask about an AI tool in order to trust it, these are really complicated tools. They're probabilistic. The accuracy measurement is in the aggregate. It's going to be wrong sometimes, and sometimes it's going to be right. I understand why communicating about the effectiveness of AI is hard.

Where I still struggle: we couldn't communicate that a mask works. We couldn't communicate that a vaccine works, and those should be pretty simple. So, I think this is a bigger question about how, when we find that there are things that are effective and safe, we communicate that and build trust. It's not just about measures, because that for me was such a vivid example of where we couldn't communicate trust in something simple.

Dr. Ehrenfeld: So, biggest impact. You're doing all this population health work at Duke. What do you think is the biggest impact of AI from a population health perspective?

Dr. Sendak: Totally not related to COVID: it's chronic disease management and acute care management. We've changed the way our ACO functions. Then the point about bias, how can we mitigate it? Through Health AI Partnership, we spent almost six to eight months convening stakeholders to talk about what health systems can do internally to assess technologies and their impacts on health inequities. We came up with a list of 37 procedures across the product life cycle. I can point you there; go to healthaipartnership.org. But it's complicated. There's not a simple set of analyses you run and then you're done. You have to monitor after implementation and everything.
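A minimal sketch of one procedure in the spirit of what Dr. Sendak describes: comparing a deployed model's sensitivity across demographic subgroups during post-implementation monitoring. The data and group labels are invented for illustration; real assessments involve many more metrics and procedures.

```python
from collections import defaultdict

# (subgroup, model_prediction, true_outcome) from post-deployment monitoring.
cases = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"tp": 0, "fn": 0})
for group, pred, truth in cases:
    if truth == 1:  # sensitivity only considers patients who truly had the outcome
        counts[group]["tp" if pred == 1 else "fn"] += 1

for group, c in sorted(counts.items()):
    sensitivity = c["tp"] / (c["tp"] + c["fn"])
    print(f"{group}: sensitivity {sensitivity:.2f}")
# A large gap between groups means the tool systematically misses the outcome
# in one population: a signal to investigate before, and after, relying on it.
```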

Dr. Ehrenfeld: All right. Alex, regulation. So, what do we need from a regulatory standpoint? What's getting in the way?

Dr. Stinard: So, I think until we have AGI, we're going to need a human in the loop, especially for things that are a matter of life and death. For regulation, we need to ensure that until we have a super smart AI, we have a specialized human: in medicine, a physician supervising physician tasks. I think that's super important. Accuracy, accountability and transparency, that's where it's all at.

Dr. Ehrenfeld: Are you concerned about liability?

Dr. Stinard: I'm not concerned about liability, because I think if we keep this up, until we have a super smart AGI, we'll have a human in the loop so that the patient is safe. That person, meaning that doctor, is responsible for their patient. So if there is malpractice, it goes back to that doctor. That's the accountability and the transparency there.

Dr. Ehrenfeld: All right. Let me just share a point on transparency. Transparency is so key. When I walk into an operating room, I may not know what the algorithm's doing when I turn a ventilator on, but I ought to at least know that the ventilator has AI built into it. We can take a really poignant example from another industry, the airline industry, where this did not happen. Think about the Boeing 737 Max problem, not the one from a few months ago, but the one where we tragically lost two airliners. It was an AI safety system built by Boeing to deal with the aerodynamics of these planes, whose nose naturally pitches up.

The pilots of those two doomed airliners did not know this AI safety system existed. It was not in the operations manual. There was no specific training. We simply cannot allow that to happen in health care as we start to build and incorporate these tools. We all just need to know that there's AI doing something, even if we don't know exactly what it's doing, so that we can turn it off or step in, be that human in the loop. With that, let me say thank you to this amazing panel. Thank you to the best audience at South by Southwest. See you next time.

Unger: This has been Moving Medicine, a podcast by the American Medical Association. Subscribe today to never miss an episode. Thanks for listening.


Disclaimer: The viewpoints expressed in this podcast are those of the participants and do not necessarily reflect the views and policies of the AMA.
