
Google blames ‘misinterpreted queries’ for AI overview mistakes


Google’s AI Overviews were announced at I/O 2024, and shortly after their release, the internet took them for a joyride. After the wildly inaccurate and frankly laughable ‘overviews’ the AI-powered feature has served users since launch, Google has responded by defending the feature, sharing updates, and explaining why AI Overviews gave the answers it did.

Google begins its explanation with how AI Overviews work. The company claims that although the feature is powered by a customised language model, “they’re not simply generating an output based on training data.” Many speculated that AI Overviews suffered from the hallucination problem that almost every popular AI model launched to the public has had to deal with at some point. However, the search giant stated clearly that the feature’s answers are backed by top web results.

While that might sound like a good idea, it’s also the root cause of the random answers AI Overviews has been spitting out. As any internet user knows, the top search results for a query on Google aren’t always the best. With Google’s ranking methods now out in the open after 2,500 pages of internal search API documentation leaked, we can piece together how Google ranks pages, and why AI-generated summaries drawn from those pages won’t always reflect the best answer to a query.

The fault here lies primarily with the search results themselves rather than with AI Overviews. For example, if the top results for a query include pages from Reddit (hardly the most reliable source of information online), the AI summaries drawn from them can be flat-out insane. That’s exactly what happened with AI Overviews.

However, according to Google, the problems were caused by “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.” Additionally, the company claims that in a small number of cases, the feature misinterpreted language on web pages and presented inaccurate information. The search giant even claims that some widely shared screenshots of AI Overviews offering advice on “leaving dogs in cars, smoking while pregnant, and depression” were fake, and that those overviews never appeared.

Despite being “optimised for accuracy” and tested extensively before launch, AI Overviews seems to have been bested by the internet. Regardless, Google has now “limited the inclusion of satire and humour content” and built better “detection mechanisms for nonsensical queries,” after claiming that many queries were deliberately aimed at coaxing erroneous results out of the feature.

Moving on, Google has updated its systems to limit the use of user-generated content in responses that could offer misleading advice. It has also added triggering restrictions for queries where AI Overviews weren’t as helpful, and for topics such as news and health, you might not see AI-generated summaries at all.


Yadullah Abidi

Yadullah is a Computer Science graduate who writes/edits/shoots/codes all things cybersecurity, gaming, and tech hardware. When he’s not, he streams himself racing virtual cars. He’s been writing and reporting on tech and cybersecurity for websites like Candid.Technology and MakeUseOf since 2018.