Google has removed certain artificial intelligence-generated health summaries from its search results after an investigation found they were spreading inaccurate medical advice. The action specifically targeted AI Overviews for searches about normal ranges for liver blood tests.
Investigation Uncovers Misleading AI Responses
According to a report by The Guardian, the tech giant removed the AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”. This move came after the publication's investigation concluded that these AI summaries were generating “false and misleading information.”
The investigation highlighted that the AI-provided answers could diverge drastically from medically accepted normal ranges. For instance, the AI Overview for liver test queries presented a mass of numbers without crucial context, failing to account for vital patient factors such as:
- Nationality
- Sex
- Ethnicity
- Age
A Google spokesperson, while not commenting on individual removals, stated: “In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Health Experts and Advocates React
The removal has been welcomed by health information advocates, though they express lingering concerns. Vanessa Hebditch, the director of communications and policy at the British Liver Trust, called the removal “excellent news.”
“We’re pleased to see the removal of the Google AI Overviews in these instances,” Hebditch told The Guardian. However, she issued a caution: “if the question is asked in a different way, a potentially misleading AI Overview may still be given and we remain concerned other AI‑produced health information can be inaccurate and confusing.”
Sue Farrington, the chair of the Patient Information Forum, also welcomed the step but emphasized it is only a beginning. “This is a good result but it is only the very first step in what is needed to maintain trust in Google’s health-related search results,” Farrington said. She added, “There are still too many examples out there of Google AI Overviews giving people inaccurate health information.”
Ongoing Challenges for AI in Health Search
This incident underscores the significant challenges tech companies face when deploying AI in sensitive domains like healthcare. A previous investigation by the same publication found that AI Overview summaries appearing at the top of search results were providing “inaccurate health information, putting users at risk of harm.”
The episode highlights the critical need for robust safeguards, expert human oversight, and clear disclaimers when AI is used to interpret and present complex medical data. For users in India and worldwide, it is a reminder to treat AI-generated health summaries with caution and to consult qualified medical professionals for personal health advice.