
As health care AI advances rapidly, what role for regulators?

By Jennifer Lubell, Contributing News Writer

Jesse M. Ehrenfeld, MD, MPH, knows that automated technology could improve his efficiency as a clinician.

“There’s not a day that goes by … where I don't see opportunities where the care that I could deliver could be enhanced by some of these tools,” said Dr. Ehrenfeld, the AMA’s president-elect and a practicing anesthesiologist.

It's an exciting time for medicine. In the not-too-distant future, physicians may face hundreds of health care AI tools to choose from for any given clinical purpose, he added.

The AMA House of Delegates uses the term augmented intelligence (AI) as a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.

AMA surveys show that adoption of digital health tools has increased significantly over the last few years. One in five U.S. physicians now uses health care AI, but most of that use is currently limited to back-office functions.

“There's a lot of interest, there's a lot of growth. But despite all of that, there's a lot of uncertainty about the direction and the regulatory framework for AI,” said Dr. Ehrenfeld, senior associate dean and tenured professor of anesthesiology at the Medical College of Wisconsin.

He and Jeffrey E. Shuren, MD, director of the Food and Drug Administration’s (FDA) Center for Devices and Radiological Health, spoke at a forum on medical device regulation hosted by the American Enterprise Institute, a Washington think tank.

The existing regulatory paradigm for hardware devices is not well suited to these tools, and not all digital technologies live up to their promise, said Dr. Ehrenfeld. Among other roles, he co-chairs the AI committee of the Association for the Advancement of Medical Instrumentation (AAMI) and co-wrote an article, “Artificial Intelligence in Medicine & ChatGPT: De-Tether the Physician,” published in the Journal of Medical Systems. AAMI has also released a special report on AI.

Physicians have a critical role to play in this endeavor.

Without physician knowledge, expertise and guidance on design and deployment, most of these digital innovations will fail, he predicted. They will not be able to achieve their most basic task of streamlining workflows and improving patient outcomes.

The AMA is working closely with the FDA to support efforts that create new pathways and approaches to regulate these tools, said Dr. Ehrenfeld.

Any regulatory framework should ensure that only safe, clinically validated, high-quality tools enter the marketplace. “We can't allow AI to introduce additional bias” into clinical care, he said, cautioning that this could erode public confidence in the tools that come to the marketplace.

There also needs to be a balance between strong oversight and ensuring the regulatory system isn't overly burdensome to developers, entrepreneurs, and manufacturers, “while also thinking about how we limit liability in appropriate ways for physicians,” added Dr. Ehrenfeld.

The FDA has a medical device action plan on AI and machine-learning software that would enable the agency to track and evaluate a software product from premarket development to postmarket performance. The AMA has weighed in on the plan, saying the agency must guard against bias in AI and focus on patient outcomes.

Dr. Shuren said the FDA can only do so much to improve regulation of innovative devices. “We have to think about other models” involving accredited third parties, he said. “There’s no one entity that can do this.” He further indicated that the FDA is unlikely to have the staff capacity to individually evaluate all AI-powered medical algorithms in the future.

At the practice level, physicians should be asking themselves four fundamental questions before integrating these tools into their workflows, Dr. Ehrenfeld suggested.

The first is: Does it work? “Just as we do for a drug, a biologic, we've got to see the clinical evidence for efficacy so that we can weigh the risks and benefits of a tool,” he said. The second is insurance coverage: Will the doctor get paid for the product?

Third, who's accountable if something goes wrong? “What about a data breach? Who's responsible for those issues?” he noted, adding that data privacy is particularly important when dealing with such large data sets.

The last question is: Will it work in my practice? If these tools don't do something to improve an outcome or efficiency or provide value, “you got to ask yourself, why bother?” said Dr. Ehrenfeld.
