Enthusiasm for augmented intelligence (AI) technology, which can be applied to an array of areas in health care such as evidence-based clinical decision support for diagnosis and treatment, is tempered by memories of the disastrous rapid rollout of electronic health records.
This has resulted in an urgency to “do it right, from the start,” said Michael Abramoff, MD, PhD, a professor of ophthalmology at the University of Iowa.
“AI has enormous potential for lowering cost, improving access and improving quality,” he added. “But many have justified—and some unjustified—concerns about AI in health care such as patient safety, AI bias, job loss, ethics and loss of privacy.”
Dr. Abramoff is the founding CEO of IDx, the first company to receive Food and Drug Administration de novo market authorization for an autonomous AI diagnostic system—meaning it offers a diagnosis without clinician input. He made his comments in an AMA Physician Innovation Network (PIN) online discussion, “Preparing for Augmented Intelligence in Health Care.”
PIN’s mission is to cultivate and connect the worlds of medicine and health care innovation to ensure solutions meet the needs of physicians, care teams and patients. Physicians, residents and medical students taking part in PIN can help solve pressing problems by sharing their perspectives and connecting with like-minded peers.
In the PIN discussion about AI—often referred to as “artificial intelligence” in popular culture—another concern voiced by Dr. Abramoff was about what he dubbed “glamor AI,” a circumstance in which “we would pay globs of money for AI that is technologically exciting and ‘cool,’ but which does not improve patient outcomes and otherwise does not advance the quadruple aim.”
The wide-ranging discussion covered myriad topics, including:
- Payment paradigms.
- Regulatory oversight of AI-enabled clinical-decision support and diagnosis-support software.
- Ethical considerations.
- Bias and equity.
- Privacy and security.
- Tools for practice integration.
New policy adopted at the 2019 AMA Annual Meeting addressed definitions of key AI terms, clinical efficacy and safety, equity, liability, usability and workflow integration. The policy, adopted by the AMA House of Delegates, is based on the principle that AI should advance the quadruple aim of enhancing patient care, improving population health, reducing costs and supporting physicians’ professional satisfaction. A related policy on integrating AI into medical education and training was also adopted.
Practicing in mixed company
The discussion also turned to mixed environments in which physicians trained in using AI practice alongside those who are not, and Marlene Grenon, MD, chief medical officer for health insurance company Evry Health, predicted some initial workplace conflict.
“Most of the friction will probably happen in the next decade, as these technologies are making it to the workplace and current generations of physicians have not been trained on them,” said Dr. Grenon, a vascular medicine specialist. “However, for current and future trainees who grow up with ubiquitous technology and AI, the use of such solutions in health care will likely become expected.”
Panelists also highlighted the role of medical specialties in accelerating adoption of AI.
“Specialty societies have to be able to define—in a manner that referring physicians and patients can understand—when the technologies are suitable and valid,” said Michael Repka, MD, a professor of ophthalmology and pediatrics at Johns Hopkins University School of Medicine.
Ezequiel Silva III, MD, a member of the American College of Radiology’s Board of Chancellors, detailed how his specialty is working with physicians and developers alike.
“It makes sense that radiology would be a leader among the specialties as the digital nature of imaging makes it ripe for impact from AI,” said Dr. Silva, medical director of radiology at Methodist Texas Hospital in San Antonio.