If the last 50 years of health care innovation came about mostly through new pharmaceuticals and medical devices, the next 50 years will be shaped by a new type of invention.
“Increasingly we will see software play a larger role: solutions driven by artificial intelligence may impact patient outcomes as much as some of our most powerful drugs,” according to Suchi Saria, PhD, an assistant professor of computer science who directs the Machine Learning and Healthcare Lab at Johns Hopkins University.
Saria and her lab colleagues develop statistical machine learning (ML) techniques with the aim of “enabling new classes of diagnostic and treatment planning tools” for individualizing the delivery of health care. She has published extensive research on this topic, including demonstrations of new tools for Parkinson disease, sepsis and autoimmune diseases.
Amid all the discussion of AI and ML, it is often unclear exactly what people mean by the terms. AI is not a technique or set of techniques. Rather, it is a multidisciplinary field with the aim of building software that can make machines act, perceive and reason intelligently, Saria said at a recent discussion on AI and machine learning at the AMA.
Machine learning is a subfield within AI that leverages data to teach machines how to act. According to Saria, teaching can occur by demonstration, by imitation, or by giving examples.
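To make “teaching by examples” concrete, here is a minimal sketch in Python. It is an illustration, not code from Saria’s lab: the features, labels and decision flag are invented for the example. A model is fitted on labeled examples and then acts on a case it has not seen.

```python
from sklearn.tree import DecisionTreeClassifier

# "Teaching by examples": pairs of inputs and the desired outputs.
# Features and labels here are invented purely for illustration.
examples = [[37.0, 80], [39.5, 120], [36.8, 72], [40.1, 130]]  # temp (C), heart rate
labels = [0, 1, 0, 1]  # hypothetical flag: 1 = suggest clinician review

model = DecisionTreeClassifier(random_state=0).fit(examples, labels)
print(model.predict([[38.9, 115]]))  # the model now "acts" on an unseen case
```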
“In medicine, the significant impact of AI will come through augmentation—helping care teams practice more efficiently and effectively,” Saria said.
At the 2018 AMA Annual Meeting, the House of Delegates adopted new policy that seeks greater physician involvement in the burgeoning field of AI to ensure it reshapes care in a positive direction. Delegates also directed the AMA to “encourage education for patients, physicians, medical students, other health care professionals and health administrators to promote greater understanding of the promise and limitations of health care AI.”
Saria outlined three common myths associated with AI and ML.
Myth: AI, ML are meant to replace humans
Machines can scan vast quantities of data and pinpoint subtle signs and symptoms that clinicians might otherwise miss. Providing this information within the workflow enhances human capability. Machines can tirelessly scan the record, gather relevant information and surface it in context.
These machines can even learn over time what is most relevant. Human experts working with smart software to augment decision making will do significantly better than the experts or software alone, Saria said.
Myth: Large quantities of annotated data are needed
For the JAMA Neurology study, “Smartphones and Machine Learning to Quantify Parkinson Disease Severity,” Saria and her colleagues used weakly supervised learning, a technique that leverages noisy data that is relatively cheap to collect.
Previously, this type of data would have been discarded. However, finding new ways to extract signal from noisy data helped lead to new measurement tools.
“How do we turn recorded streams into a quantitative measure of disease? What we can do is compare two different slices and tell you in which one the patient is likely to feel worse—I don’t have to compare every slice,” Saria said.
There is no need to sit in a lab annotating large amounts of data. What is needed is enough of each type of data to tell when a patient is feeling worse in one segment than in another.
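That pairwise idea can be sketched generically. The snippet below is a minimal Python/PyTorch illustration of weak supervision via pairwise comparisons; it is not the model from the JAMA Neurology study, and the scorer architecture, feature count and synthetic data are all assumptions. A small network scores each recorded slice for severity and is trained with a logistic ranking loss so that slices labeled “worse” score higher than slices labeled “better.”

```python
import torch
import torch.nn as nn

# Hypothetical severity scorer: maps a feature vector summarizing a
# recorded slice (e.g., smartphone sensor features) to a scalar score.
class SeverityScorer(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def pairwise_loss(score_worse, score_better):
    # Logistic (Bradley-Terry-style) ranking loss: pushes the "worse"
    # slice to receive a higher severity score than the "better" slice.
    return nn.functional.softplus(score_better - score_worse).mean()

# Toy training step on synthetic pairs (illustration only).
model = SeverityScorer(n_features=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_worse = torch.randn(64, 16)   # slices labeled "patient likely feels worse"
x_better = torch.randn(64, 16)  # slices labeled "patient likely feels better"

loss = pairwise_loss(model(x_worse), model(x_better))
opt.zero_grad()
loss.backward()
opt.step()
```

The appeal of this setup is that a pairwise judgment (“which of these two recordings looks worse?”) is far cheaper and noisier to collect than a calibrated severity annotation for every slice.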
Myth: AI and ML models are biased, dangerous
“There are many subtle ways in which a model can go wrong and pick up bias that is harmful,” said Saria.
For example, AI and ML models reflect people’s bias, Saria argued. If human practice is biased, the model will pick that up from the data it learns from and replicate that bias.
“But this bias can be corrected for,” she said. “In fact, it may be easier to correct bias in our tools than the human experts themselves, giving a more promising path to health care quality.”
Her recent work has identified some ways of detecting and correcting for these sources of bias and unreliability. This remains an active topic of research in the community.
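The specific methods from that work are not detailed here, but one common, generic pattern for detecting and partially correcting data bias looks like the sketch below: audit a model’s error rates per subgroup, then reweight training examples so each subgroup carries equal total weight. This is a Python/scikit-learn illustration on synthetic data; the group variable, outcome and weighting scheme are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Synthetic data with a binary subgroup attribute (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
# Outcome depends partly on group membership, baking a disparity into the data.
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

# 1) Detect: compare an error metric (here, recall) across subgroups.
clf = LogisticRegression().fit(X, y)
for g in (0, 1):
    mask = group == g
    print(f"group {g} recall: {recall_score(y[mask], clf.predict(X[mask])):.2f}")

# 2) Correct (one simple option): reweight the training examples so
#    each subgroup contributes equal total weight to the fit.
weights = 1.0 / np.bincount(group)[group]
clf_rw = LogisticRegression().fit(X, y, sample_weight=weights)
```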
“The growing excitement for AI has also led to overhyped claims,” said Saria. “There is an enormous amount that we can do, but we need to do it well.”