Developing a cancer prognosis, responding to patient messages, predicting adverse clinical outcomes, providing documentation support and even recommending optimal staffing volumes are all ways that augmented intelligence (AI)—often called artificial intelligence—is being used in health care.
Results of a recent survey conducted by the AMA show that physicians are intrigued by the transformative potential of AI to enhance diagnostic accuracy, personalize treatments and reduce administrative burdens. But there is also concern about its potential to exacerbate bias, put privacy at risk, introduce new liability concerns, and offer seemingly convincing yet ultimately incorrect conclusions or recommendations.
Current and potential uses for AI, barriers to widespread adoption and the risks physicians need to be aware of are explained in a comprehensive report produced by the AMA and Manatt Health, Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care (PDF).
“AI is going to change everything in medicine. I just can’t tell how or when, but I know that AI is not going to replace doctors—but doctors who use AI will replace those who don’t,” AMA President Jesse M. Ehrenfeld, MD, MPH, said during a session at the ViVE health technology conference in Los Angeles.
“We are trying to make sure that whatever we end up with works for physicians and our patients,” Dr. Ehrenfeld said. “Unfortunately, if we don't have trust, if we don't have confidence that these tools and products actually work, it will kill the marketplace.”
Dr. Ehrenfeld is an anesthesiologist, health informaticist and co-chair of the Association for the Advancement of Medical Instrumentation’s AI committee. He noted that physician practices, the health care workforce and health care facilities “are desperate for these tools.”
“We needed them yesterday—not tomorrow—if we're going to scale the capacity of our delivery system to meet ongoing needs,” he told the crowd at ViVE.
Citing the new AMA-Manatt report, Dr. Ehrenfeld noted that about 40% of U.S. physician practices use some type of AI today, but it’s mostly for “back-end, administrative office things.”
He added, however, that there is great potential for “exciting things” such as tools to engage patients in their own health and manage their chronic conditions, and he predicted that these tools “will be transformative.”
692 devices approved
But getting from here to there will not be easy. The AMA-Manatt report notes that “adapting to an AI-enabled future will necessitate dramatic changes in medical education, practice, regulation and technology.”
It notes that, as of last fall, the Food and Drug Administration (FDA) had approved 692 AI or machine-learning medical devices. Of these, 531 were in radiology, 71 in cardiology and 20 in neurology.
The FDA first authorized a device that provides an autonomous diagnosis in 2018: a computer system that diagnoses diabetic retinopathy. It was developed by Michael Abramoff, MD, PhD, a professor of ophthalmology at the University of Iowa Carver College of Medicine who contributed to the AMA report.
The University of Iowa Hospitals and Clinics is a member of the AMA Health System Program, which provides enterprise solutions to equip leadership, physicians and care teams with resources to help drive the future of medicine.
Adoption requires transparency
The report was developed using a physician survey, a series of interviews with AI experts and discussions with specialty society representatives. It identifies how physicians are using AI across specialties today, as well as future use cases that will grow in scale and sophistication.
The report also outlines AI capabilities for identification, translation, summarization, prediction and suggestion, along with clinical scenarios where each may show up in practice.
Among the physician challenges and risks the report explains are understanding how a series of inputs contributed to the output an AI model produces and knowing what sources of data were used to train the AI device.
“If I walk into an operating room as an anesthesiologist, and I turn on the ventilator and there's an AI algorithm that's doing something, I ought to know that there's an AI algorithm that's influencing what's happening,” Dr. Ehrenfeld said at ViVE.
As “the human in the loop,” Dr. Ehrenfeld said that knowing that an AI algorithm is in play would help him control the situation and correct problems when they arise.
Ensuring that level of health care AI transparency is part of why physician advocacy is so critical, he noted.
As the number of AI-enabled health care tools continues to grow, it is critical that they be designed, developed and deployed in a manner that is ethical, equitable and responsible. The use of AI in health care must be transparent to both physicians and patients. The AMA has developed new advocacy principles that build on current AI policy. These new principles (PDF) address the development, deployment and use of health care AI, with particular emphasis on:
- Health care AI oversight.
- When and what to disclose to advance AI transparency.
- Generative AI policies and governance.
- Physician liability for use of AI-enabled technologies.
- AI data privacy and cybersecurity.
- Payer use of AI and automated decision-making systems.
Dr. Ehrenfeld joined a panel of AMA subject-matter experts for a recent webinar detailing how physicians are navigating this new technology and leading the way in how AI is designed and implemented in health care today. Register to watch it on demand.