After receiving complaints that its AI-generated “AI Overviews” feature was returning false and potentially dangerous health information, Google has moved to limit the feature in search results. The change follows an investigation by the Guardian, which found multiple cases in which AI-generated responses contained false medical information about serious conditions such as cancer, liver disease and mental illness.
One example involved searches for the normal ranges of blood tests used to detect liver disease: the AI-generated summaries showed generalized values without accounting for important variables such as age, sex, ethnicity and national medical standards. Because of this missing context, health experts warned, people with severe liver disease could mistakenly believe their test results are normal, which could lead them to postpone or abandon necessary treatment.
Medical professionals described the responses as “dangerous” and “alarming,” emphasizing that false health information can lead to serious complications or even death. For searches on sensitive health topics, Google opted to display direct links to external medical websites instead of AI Overviews. The company says it strives to improve the system and applies internal policy measures when AI summaries lack proper context.
However, depending on how a question is worded, AI-generated answers can still appear for certain health-related queries, which concerns health groups including the British Liver Trust. Vanessa Hebditch, the organization’s director of communications and policy, warned that AI summaries risk oversimplifying complicated medical tests. Since normal test results do not always rule out serious illness, she noted, presenting isolated numbers without sufficient explanation can mislead users.

Google’s AI Overviews can provide inaccurate health information because they lack context such as age, sex, and ethnicity.
When asked why AI Overviews were not removed more broadly, Google said its internal medical review team found that many of the disputed answers were accurate and supported by reliable sources. The company also stressed that users seeking health information should consult a medical professional.
Even with these assurances, the episode shows that applying generative AI to health-related advice remains difficult. While access to reliable medical information is crucial, the incident highlights the dangers of relying solely on automated systems for complex and potentially life-changing advice.
