Why is Google telling us to put glue on pizza?

Picture this: You’ve set aside an evening to relax and decide to treat yourself to a homemade pizza. You carefully assemble your creation, pop it into the oven, and eagerly anticipate your delicious meal. But as you go to take your first bite, you encounter a problem — the cheese slides right off. Annoyed, you turn to Google for a solution.

“Add some glue,” Google suggests. “Mix about 1/8 cup of Elmer’s glue into the sauce. Non-toxic glue will do the trick.”

Well, that’s not exactly the solution you were looking for. Yet, as of now, that’s the advice you might receive from Google’s new AI Overviews feature. This feature scans the web and generates AI-based responses, although it’s not activated for every query. The suggestion regarding pizza glue seems to stem from a comment made by a user named “fucksmith” in a Reddit thread over a decade ago — clearly meant as a joke.

This example is just one of many errors cropping up in Google’s latest feature, which was rolled out widely this month. It also claims, among other things, that former US President James Madison graduated from the University of Wisconsin a whopping 21 times, that a dog has competed in the NBA, NFL, and NHL, and that Batman moonlights as a police officer.

Google spokesperson Meghann Farnsworth acknowledges these mistakes but emphasizes that they arise from “generally very uncommon queries” and are not representative of most users’ experiences. The company is taking action against violations of its policies and using these “isolated examples” to refine the product further.

However, it’s evident that these tools are not yet ready to provide accurate information on a large scale. Even during the highly controlled demo at Google I/O’s big launch of the feature, questionable answers appeared — including advice to fix a jammed film camera by opening the back and removing the film (a surefire way to ruin your photos, since exposing undeveloped film to light destroys the images).

Google isn’t alone in facing challenges with AI. Companies like OpenAI, Meta, and Perplexity have all encountered similar issues with AI errors and hallucinations. Nevertheless, Google is the first to deploy this technology on such a massive scale, and the examples of blunders keep accumulating.

Companies developing artificial intelligence often shy away from taking full responsibility for their systems, akin to a parent excusing their unruly child’s behavior with a shrug. They argue that they can’t predict what the AI will produce, absolving themselves of accountability.

But for users, this poses a problem. Last year, Google touted AI as the future of search. However, what’s the point if the search results seem less reliable than before?

While AI optimists advocate for embracing the potential of these technologies, it’s essential to acknowledge the significant issues they currently face. Focusing solely on an idealized future where AI is flawless overlooks the present challenges and enables companies to continue delivering subpar products.

So, for now, our search experiences may still be plagued by decade-old Reddit jokes as AI integration progresses. Many remain optimistic about the future of AI, but it’s clear we’re not there yet. One thing does seem likely, though: someone, somewhere, is about to try putting glue on their pizza — because that’s the unpredictable nature of the internet.
