Physicians are adopting health care augmented intelligence (AI)—commonly referred to as artificial intelligence—at an increasingly rapid pace, and their enthusiasm for the technology is rising.
But more needs to be done to increase physicians’ trust in AI and to increase the likelihood that a growing number of clinicians will continue to adopt AI tools, a new AMA survey shows.
“What physicians say is that they really need to trust the tool. Ultimately, we continue to hear that they want to make sure it supports their ability to continue to deliver care,” said Margaret Lozovatsky, MD, vice president of digital health innovations at the AMA and a pediatrician in Charlotte, North Carolina. “Design, user interfaces that are centered on physicians’ needs, workflow integration—all will be integral to getting the trust of our clinicians so they don’t feel like this is being done to them, but that it is being done for them.”
Among the regulatory actions that would build trust in adopting AI tools, increased oversight was the one physicians cited most often, according to the recent AMA physician survey on health AI (PDF), which examined changes in physician sentiment toward health care AI.
Nearly half of physicians surveyed—47%—put increased oversight of AI-enabled medical devices by governing bodies such as the Food and Drug Administration (FDA) at the top of the list of five regulatory actions that would increase their trust in AI and increase the likelihood they would adopt AI tools.
Still, a slim majority of physicians ranked some other regulatory action as their top priority. Nearly one in five wanted more requirements governing payers’ use of AI in medical necessity determinations, while 16% listed further oversight of health AI tools that are not considered medical devices. Another 11% said requiring disclosure of health AI use to patients was a top priority, and 9% listed patient consent for use of health AI as most important.
From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors—not a burden.
Building trust at the clinic level
It’s not just the external regulatory environment that needs to change to boost physician trust in health AI tools; health care organizations also will need to make building that trust a priority. For example, 88% of physicians said it was important to have a designated channel for feedback if issues should arise.
“All of us in general—even stepping outside of the care environment—have been burned by technologies that didn’t work as perfectly as we hoped,” Dr. Lozovatsky said. “Having this feedback loop on a local level and a national level is going to continue to be important so that if things aren’t working as expected we have reporting mechanisms and there can be a quick turnaround with making improvements.”
It’s also critical, she said, that AI tools be integrated into workflows. Physicians, especially those trained in clinical informatics, are key to communicating what must be incorporated into the design to meet clinicians’ and patients’ needs.
“Clinicians are using these tools to enable patient care. At the end of this is a patient we are caring for, and we need to make sure that if there is something in the technology suite that is not functioning as we expect, it is addressed quickly so that we don’t cause patient harm,” Dr. Lozovatsky said.
Other needs
While a feedback channel was the item doctors most often rated as highly important, more than 80% of physician respondents also said it was important that:
- Data privacy is assured by my own practice or hospital and EHR vendor.
- AI is supported by and well integrated with my EHR.
- AI is well integrated into practice workflows and not just a “point solution.”
- I get proper training or education on the AI tools being used.
- Use of health AI tools is covered by my standard medical liability insurance.
- The tool’s safety and efficacy are validated by a trusted entity and monitored over time.
- I am not held liable for the errors of health AI models.
- The AI tool is proven to be as good as or superior to traditional care.
Dr. Lozovatsky said a number of health care organizations across the country have adopted thorough governance processes. The aim is to ensure that health AI tools are:
- Evaluated at the local level.
- Tested with datasets that are consistent with the patient populations they will be addressing.
- Supported by extensive feedback loops from multidisciplinary groups.
- Implemented with proper engagement from physicians and other health professionals.
“That is the model we would like to see,” she said. “We are certainly seeing that enthusiasm is growing for it [AI]. Adoption rates are growing at a level that I haven’t seen with anything, other than maybe the pandemic. Physicians don’t typically adopt technology so quickly, so that tells me that they are seeing the benefits of these tools.”
Learn more about AI
The AMA offers physicians the chance to explore six AI modules in the AMA Ed Hub™ CME series, “AI in Health Care,” which introduces learners to foundational principles of AI and machine learning, a subdomain of AI that enables computers to learn patterns and relationships from data without being explicitly programmed by people.
Learn with the AMA about the emerging landscape of augmented intelligence in health care (PDF).
Find even more health care AI learning modules in the Ed Hub AI collection, including studies of how chatbots, large language models, natural language processing and machine learning are transforming medicine and health care.
And check out JAMA+ AI, a new channel dedicated to premier scientific content, educational reviews and commentary on AI and medicine. The channel from JAMA Network® compiles content published across JAMA®, JAMA Network Open and the JAMA specialty journals and builds on that published content with new multimedia materials, including author interviews, videos, medical news and a regular podcast.