It's 2024, according to AI Overviews. Go home, Google, you're intoxicated


The recent resurfacing of AI hallucinations in Google’s AI Overviews has reignited public skepticism about the reliability of AI-generated information—especially when it comes from a company as dominant as Google. While the feature aims to simplify search by providing AI-curated summaries, some of its answers continue to blur the line between helpful and hilariously wrong.

One of the most notable issues: Google’s AI Overviews got confused about the current year, giving self-contradictory answers like "No, it is not 2025. The current year is 2025," or simply insisting it was still 2024 well into mid-2025. Although Google has since resolved the date errors, users remain cautious, especially given the feature's history of strange responses, such as:

  • Suggesting people put glue on pizza to keep the cheese from sliding off

  • Advising users to eat a small rock daily for minerals

These aren’t just one-off errors; they’re symptoms of what’s known as AI hallucination, where an AI system generates information that sounds plausible but is entirely false or ungrounded. What makes these incidents especially troubling is that they appear in Google Search, a tool billions of people rely on for factual accuracy.

In response, a Google spokesperson told TechCrunch that improvements are ongoing:

"As with all Search features, we rigorously make improvements and use examples like this to update our systems."

Still, no specific technical explanation was given for the date confusion.

Despite these glitches, Google’s AI Overviews continues to grow, reaching over 1.5 billion users across 100+ countries and accounting for more than 10% of search queries in key markets like India and the U.S. According to CEO Sundar Pichai, the tool is still considered a success internally, though these high-profile errors underline the importance of human oversight.

Ultimately, these blunders are a reminder of two things:

  1. AI is not infallible, no matter how sophisticated the model or how powerful the company behind it.

  2. Human judgment and verification remain crucial, especially in a world increasingly reliant on AI for information.

Until models become truly robust at grounding their responses in verifiable reality, users need to stay alert, and maybe hold the glue.

