With the rise of tools powered by augmented intelligence (AI)—often called artificial intelligence—accessing medical information has never been easier, or more overwhelming. As helpful as these tools can be, the results can sometimes lead to confusion, misinterpretation or even anxiety. That is why it is important to understand how to navigate health information online when searches are powered by AI.
The AMA’s What Doctors Wish Patients Knew™ series gives physicians a platform to share what they want patients to understand about today’s health care headlines.
In this installment, two physicians took time to discuss what patients need to know about navigating AI searches for health tips. They are:
- Margaret Lozovatsky, MD, a pediatrician and vice president of digital health strategy at the AMA.
- Ainsley MacLean, MD, a radiologist and chief medical information officer and chief AI officer for Mid-Atlantic Permanente Medical Group.
Mid-Atlantic Permanente Medical Group is a member of the AMA Health System Program, which provides enterprise solutions to equip leadership, physicians and care teams with resources to help drive the future of medicine.
It’s augmented intelligence
“We talk a lot in the AMA about augmented intelligence versus artificial intelligence. The reason we talk about that is we really truly believe that these tools can be useful in decreasing some administrative burdens, yet they’re not at a place where they can be used on their own,” Dr. Lozovatsky said. “That applies across the board for both the clinicians and patients; these tools might provide some thoughts for physicians to consider, but they’re not in a place where they can, on their own, provide medical guidance.”
“It’s important for patients, for all of us, as we’re looking things up, to review them with a critical eye and realize that at the end of the day, we need to ask the questions of our physicians,” she said.
Generative AI creates content
“It can be sentences, it can be pictures. It can be just about anything. But what’s helpful about it is it’s really an easy way to interact with technology,” Dr. MacLean said. “The content that is generated tends to be very user-friendly—and can be engaged with in the form of an AI chatbot—and so that’s what has helped to propel it into mainstream popularity as well as the amount of information that we can now process through AI.”
“Things like ChatGPT really have allowed AI to become something that’s useful and that actually makes a difference in people’s lives,” she said.
AI has long been used in health care
“We’ve been using AI in health care for a long time, and it was just last year when generative AI became something that everyone was aware of, that people started talking about it,” Dr. Lozovatsky said. “AI is really a way for us to provide decision support in health care and we’ve been using these algorithms in the background to help provide decision support to physicians so that they have more information when they’re making decisions on patient data and patient care.”
Search with AI summarizes information
“For a long time, people have been using the internet to get answers. It would be very atypical in today’s environment for someone to not come to a physician having already done a little bit of their own research,” Dr. MacLean said. “AI takes that search functionality to the next level in terms of perceived usefulness. So, a lot of times AI is more efficient than a routine search engine.
“With search engines, you’re presented with a lot of different websites that you then can go through. But what the AI is doing is it’s going through that for you and presenting the most relevant information in a way that’s really easy to consume,” she added. “It’s quick, it’s accurate and it’s also gone through more information than the average person can.”
“These generative AI tools use all the information that’s out there,” said Dr. Lozovatsky. Yet “they don’t always have a way to assess whether it’s good information or bad information. They take anything that they can find on the topic and put it together into a coherent statement.”
“It’s tempting when you get this answer that often seems coherent to believe that it’s accurate,” she said. “The reality is we have no way to assess where the information came from to come up with this answer, so it’s really difficult to understand whether it is good information, bad information and what the sources are.”
Ultimately, “it’s doing all this magic in the background and it’s just taking what is out there, even all the misinformation, and putting it together in front of you,” Dr. Lozovatsky said. “There’s some value in the fact that it summarizes information for you and as these tools get better, if they’re able to summarize information that is from reputable sources, then it will be a useful tool.”
But “one of the big reminders that is really important is to always remember that AI is not 100% correct all the time. It’s very accurate though,” Dr. MacLean said.
AI can be biased
“Another really important piece to remember is that AI can be biased,” Dr. MacLean said. “If for some reason the AI puts more weight on an article that looks at 10 people instead of an article that looks at 10,000, it may skew that answer towards a population that maybe doesn’t apply to the person asking the question.
“So, it may have intrinsic bias inadvertently. And that’s just something that’s really important for people to be aware of,” she added.
Be careful with Google’s AI
Google added an AI overview to search results “because there’s a lot of information out there and it’s very tempting to summarize it,” Dr. Lozovatsky said. “We all think about this like the idea of an executive summary that takes everything out there and puts it in one sentence.
“And that is very useful if there’s a way to understand that what it comes out with is correct,” she added. “The challenge, of course, is that we have no way of knowing that. So, while it makes sense for Google to use these tools—and with time they’re going to get better and better—it’s very challenging for us to take the output and use that to make any sort of clinical decisions.”
AI systems are also data-hungry. Many AI tools, especially those offered by large data companies such as Google, have few regulations protecting patient data and are not covered by HIPAA. As a result, patients should be cautious and deliberate about the type of personal medical information they share with online AI tools.
Context clues are often missing
“Even if the source is appropriate, when some of these tools are trying to combine everything into a summary, it’s often missing context clues, meaning it might forget a negative,” Dr. Lozovatsky said. “So, it might forget the word ‘not’ and give you the opposite advice.”
“There’s a great example out there about somebody asking what to do for a kidney stone and Google AI told them to drink urine,” she explained. “The guidance was probably to drink lots of fluids and then assess your urine to make sure it’s clear.
“Google was trying to combine all of the different pieces and came out with a sentence that seems coherent, but really isn’t the guidance that we hope to provide to our patients,” Dr. Lozovatsky added.
Confirm with your doctor
“With all things technology, including Google and these AI tools, you really have to always make sure that you’re confirming with a health professional,” Dr. MacLean said. “One of the things that AI and other search functionalities can do if you tend to be on the more anxious side is present you with the worst possible case scenario, which is the correct thing to do because everything is always possible.
“But that’s not the case the vast majority of the time,” she added. “That’s something where a physician or other health care provider can take into account the whole story and help get you in the right place because we can’t have everyone thinking that they’re having something catastrophic every time they enter it into Google.”
Ask more specific questions
“When you look at these search engine AIs—for instance, Google’s Gemini—they’re looking through all that information. They’re finding the sources that the AI thinks are the best,” Dr. MacLean said. “Then it’s presenting that information in a way that it thinks that the person who is searching for it wants to hear.”
For example, “if you ask something really general, you’re going to get a really general answer that may lead to some stress-induced responses,” she explained.
“If you ask a more specific question, you get a more helpful, useful answer,” Dr. MacLean said. “And that’s in part because as it’s searching through this vast database that is the internet, it’s able to hone in on sources that are also asking that same type of question and give you more useful information.”
Push for more information
“Don’t hold back. Don’t underestimate the power of these machines,” Dr. MacLean said. “When you’re asking a question, you can treat it like a very advanced, smart person and you can even say, ‘Give me the top five examples and then explain why one is better than the other.’”
“Push the AI like you would any expert to see how far you can get it to provide you with the answers because when you ask less, you’ll just get garbage,” she said. “There’s that expression, garbage in, garbage out and that applies to AI data in general, but the more you can prompt it with the better.”
The AI can “drift”
“From my own personal experience, yes” responses can change depending on your question, Dr. MacLean said. “That’s something we call drift, which is when sometimes your AI product moves away from where it wants to be.”
“I’ve used some chat functionality where for a couple weeks it was providing useful answers. And then I pinged it a couple weeks later and it was all over the place and not helpful,” she said. “I’ve definitely noticed even in the span of 10 or 15 minutes, there may be a slightly different answer, even though just one of the words in my question was asked differently.”
“As AI gets better and better, the accuracy is going to improve,” Dr. MacLean said.
Watch out for AI hallucinations
“There’s also something called hallucinations, which people are aware of now, and it is when—for whatever reason—the response that’s given isn’t accurate at all,” Dr. MacLean said. “And that’s a real problem.”
For example, “when AI talks about minerals, it sometimes says to eat rocks or something like that,” she said. “That would be an example of a hallucination because it’s not as nuanced as a human being to really understand, although we’re getting closer to that point of generalized artificial intelligence.”
Another example is chest pain. If you search AI for “I’m having chest pain,” then “you’re going to get something ranging from a severe heart attack, immediately call 911, to something more mild,” Dr. MacLean said. “But if you say something such as ‘I’ve noticed that when I’m in a really stressful meeting, I start feeling my heart pounding and then I feel a little bit of chest pain,’ it might be more likely to present you with something that is less scary and more specific.”
Go back to the original source
“The No. 1 thing that we should be thinking about is what is the source that I’m reading?” Dr. Lozovatsky said. “If I’m reading something off the Mayo Clinic website or any other health care organization, then it is more likely to be accurate.”
“When you’re using these AI algorithms, we often don’t know what those sources are,” she said. But if we do know what the sources are, make sure you go back to them “and that you understand where this information is coming from and it’s coming from medical professionals.”
“At the end of the day, there are unique features to anyone’s health care needs, so asking your physicians, contacting your health care providers is really the best way to approach this,” Dr. Lozovatsky said.
Use AI as a first step
Think of using AI as “a first pass to arm yourself with some helpful information that might help you understand a little bit more about the concerns you’re having, but always go to a physician or other health care provider as the main truth for health care just like you would almost anything,” Dr. MacLean said. “AI is not personalized at this point. It’s not for you. It’s for 10,000 or more people who entered the same question and received that answer.
“And we know that there’s nothing more personalized than health care because there’s no one like you,” she added.
Turn to your doctor for help
“When in doubt, seek medical attention,” said Dr. Lozovatsky. “At the end of the day, the only people who really know you and your health care needs are your physicians and your care providers across the spectrum.
“So, if you have a question and you are unsure, it’s important to reach out to your health care provider to ask that question,” she added.