In January 2026, a Guardian investigation found that Google’s AI summaries were presenting inaccurate health information.
These findings, deemed downright dangerous, led Google to remove some of its AI (artificial intelligence) summaries after the misleading advice threatened to put people’s health at risk. One particularly alarming example involved bogus information about critical liver function tests, which could leave patients wrongly believing they are healthy while living with serious liver infections and diseases.
After the investigation concluded, the company removed AI Overviews for specific search terms, including “what is the normal range for liver function tests” and “what is the normal range for liver blood tests.” Interestingly, a recent study revealed something equally alarming: when answering health-related queries, Google’s AI Overviews cite YouTube more often than any reputable medical website.
The fact that Google’s search feature cites YouTube above every medical website when answering health questions raises serious concerns about a tool seen and used by nearly two billion people every month – especially when it comes to health.
AI: The New Search Engine
For years, healthcare and medical professionals have raised concerns about people relying on Google searches for medical issues. We’re well past that now: everyone is depending on AI for answers, including to critical health problems. The search engine optimisation platform SE Ranking conducted a study that analysed over 50,000 health searches in Germany.
The findings? The most cited source in AI citations turned out to be YouTube, accounting for 4.43% of all citations. That’s 3.5 times more often than netdoktor.de, one of the largest consumer health portals in the country, and more than twice as often as MSD Manuals, a well-established medical reference.
Equally alarming, only 34.45% of AI Overview citations came from reliable medical sources, and government health institutions and academic journals accounted for just around 1% of all citations. Perhaps most alarming of all, no academic institution, medical association, government health portal, or hospital network came anywhere close to YouTube’s numbers.

Why does it matter? Because, as the researchers point out, YouTube is a general-purpose video platform, not a medical publisher. Hospital channels and board-certified physicians can upload content there, but so can life coaches, wellness influencers, and content creators with no medical knowledge, training, or experience.
One case experts deemed particularly dangerous was Google wrongly advising people with pancreatic cancer to avoid high-fat foods. According to medical experts, this is the exact opposite of the actual recommendation and could increase patients’ risk of dying from the disease.
Similarly, AI Overviews about women’s cancer tests provided entirely wrong information, which medical experts said could lead people to dismiss genuine symptoms.
Okay Google: The Rise Of Dr. ChatGPT
For nearly 20 years, scared, befuddled patients have turned to web searches for medical insight, plugging in symptoms and clicking random websites in a bid to self-diagnose. With Dr. Google having now completed its AI residency, chatbots are becoming the go-to source of health and medical information. According to OpenAI, nearly 40 million people worldwide use ChatGPT for healthcare advice every day.
It isn’t just Germany; Canadians are no exception. According to the 2026 Health and Media Tracking Survey from the CMA (Canadian Medical Association), roughly half of those surveyed consult Google AI summaries and ChatGPT about their health and medical issues.

Suffice it to say, it hasn’t worked out too well for them: those who followed this not-so-wise AI counsel for self-diagnosis and treatment were five times likelier to suffer adverse effects than those who didn’t.
The reasons are obvious – AI chatbots are far too agreeable and overconfident to act as diagnosticians or dole out medical advice. For instance, in a 2025 study, researchers from the University of Waterloo prompted OpenAI’s GPT-4 with open-ended health and medical queries, and it got the answers wrong around two-thirds of the time.
In another 2025 study, Harvard researchers found that chatbots rarely pushed back on nonsensical queries, such as a patient who doesn’t know that acetaminophen and Tylenol are the same drug asking why one is safer than the other. Since AI likes to be compliant, helpful, and generally a yes-bot, its sycophantic nature tends to prioritise helpfulness over honesty and critical reasoning.
Code Red: ‘Confident Authority’ vs. Public Health
It’s not as though people aren’t aware of AI’s shortcomings. But there’s no denying that when you’re waiting 12 months to see a specialist, are suddenly left without a family doctor, or don’t have readily available access to care, turning to ChatGPT for quick and accessible answers seems like a gamble worth taking.
Should we be worried about the authenticity of the information? Yes, and the fact that you’re asking means you do your research. But should we be worried that an increasing number of people may be blindly following this information without further research, or worse, without consulting a doctor? Definitely.