Google has taken swift action to disable its AI Overviews feature for specific health-related searches. This move comes in response to an investigative report by The Guardian, which revealed that the AI-generated summaries were providing dangerously false and misleading medical information, potentially putting users at risk.
Dangerous Misinformation on Critical Health Queries
The tech giant reportedly removed the AI Overviews for searches such as "what is the normal range for liver blood tests" and "what is the normal range for liver function tests." The Guardian's investigation found that Google's AI had been giving false information about liver function test results, potentially leading individuals to believe they were healthy when they might not be.
Even more alarmingly, the AI Overviews feature had wrongly advised people diagnosed with pancreatic cancer to avoid high-fat foods. Medical experts state this is the exact opposite of standard nutritional recommendations for such patients and could even increase the risk of mortality from the disease.
Patchy Removal and Google's Response
However, the removal appears inconsistent. The Guardian found that typing slight variations of the original queries, such as “lft reference range” or “lft test reference range,” still triggered the AI Overviews. When questioned by the publication about the removal of specific overviews, Google provided a standard response.
“We do not comment on individual removals within Search,” a company spokesperson said. “In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Notably, AI Overviews remain active for other medical queries concerning cancer and mental health, which have also been reported to provide incorrect guidance. Google defended this, telling The Guardian that these particular overviews were not removed because they linked to reputable sources. “Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high quality websites,” the company added.
A History of Troubles for AI Overviews
This incident is not the first controversy to plague Google's AI Overviews since its high-profile launch in May 2024. The feature quickly drew widespread criticism and mockery for generating absurd and unsafe answers.
In one infamous example, it recommended users add glue to pizza to prevent the cheese from sliding off. In another, it suggested eating one small rock per day for essential vitamins. These blunders forced Google to temporarily pull the feature before reintroducing it with modifications.
Despite these fixes, AI Overviews continues to land Google in hot water periodically. For instance, when recently asked "Is 2027 next year," the AI perplexingly responded: "2026 is next year and 2027 will be the year after that." Such errors highlight the ongoing challenges of deploying generative AI for factual information retrieval, especially in high-stakes fields like healthcare.