El.kz / Artem Churssinov

Google removes some of its AI summaries after users’ health put at risk

12.01.2026 14:39

Google has removed some of its artificial intelligence health summaries after an investigation found people were being put at risk of harm by false and misleading information, El.kz reports citing the Guardian.

The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.

But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm.

In one case that experts described as “dangerous” and “alarming”, Google provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy.

What Google’s AI Overviews said was normal could vary drastically from what was actually considered normal, experts said. The summaries could lead seriously ill patients to wrongly believe they had a normal test result and skip follow-up healthcare appointments.

The company has removed AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.

A Google spokesperson said: “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”

Vanessa Hebditch, the director of communications and policy at the British Liver Trust, a liver health charity, said: “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances.

“However, if the question is asked in a different way, a potentially misleading AI Overview may still be given and we remain concerned other AI‑produced health information can be inaccurate and confusing.”

The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range”, prompted AI Overviews. That was a big worry, Hebditch said.

“A liver function test or LFT is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers.

“But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.

“In addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. This false reassurance could be very harmful.”

Google, which has a 91% share of the global search engine market, said it was reviewing the new examples provided to it by the Guardian.

Hebditch said: “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health.”

Sue Farrington, the chair of the Patient Information Forum, which promotes evidence-based health information to patients, the public and healthcare professionals, welcomed the removal of the summaries but said she still had concerns.

“This is a good result but it is only the very first step in what is needed to maintain trust in Google’s health-related search results. There are still too many examples out there of Google AI Overviews giving people inaccurate health information.”

Millions of adults worldwide already struggle to access trusted health information, Farrington said. “That’s why it is so important that Google signposts people to robust, researched health information and offers of care from trusted health organisations.”

AI Overviews still pop up for other examples the Guardian originally highlighted to Google. They include summaries of information about cancer and mental health that experts described as “completely wrong” and “really dangerous”.

Asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources, and informed people when it was important to seek out expert advice.

A spokesperson said: “Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high quality websites.”

Victor Tangermann, a senior editor at the technology website Futurism, said the results of the Guardian’s investigation showed Google had work to do “to ensure that its AI tool isn’t dispensing dangerous health misinformation”.