Complacency over AI errors will make search an even worse experience. Google should shut its new feature down.
For years, to “google” meant to tap the web’s prodigious data. Today it means wading through ads, spam and, most recently, wildly inaccurate AI answers. The new AI Overview feature rolled out by Google over the last week has led to a flurry of errors that must have Alphabet Inc. Chief Executive Officer Sundar Pichai cringing. When asked about cheese sliding off pizza, it recommended a user apply glue. It suggested to another that a python was a mammal. Worse: Google’s AI told one user who was “feeling depressed” that they could jump off the Golden Gate Bridge.
Pichai has tinkered with one of the most successful and profitable technology products of all time and made it completely unreliable, even dangerous. The countdown has begun for when he takes it offline. The sooner he does, the better.
For now, Google has said it’s refining its AI search service with every new report of a hallucination (which, according to my Twitter feed, is turning into a flood). A Google spokesperson told The Verge that errors were happening for “generally very uncommon queries and aren’t representative of most people’s experiences.” That’s a poor excuse for a company that prides itself on organizing the world’s information. And infrequent queries deserve reliable results too: the vast majority of Google searches fall into a long tail of uncommon queries, so even “very uncommon” errors add up to a lot of people getting bad information.
This is a stunning turnaround for a company that was once so cautious it refused to release generative AI technology it had built, even though that technology was at least two years ahead of OpenAI Inc.’s ChatGPT. It has since succumbed to the race set off by Microsoft Corp. and OpenAI, which are stirring one controversy after another. Last week OpenAI released a new version of ChatGPT deliberately timed to preempt Google’s AI launches the following day. But in all the rush, Sam Altman botched the rollout and got into a beef with Scarlett Johansson.
Steve Jobs’ 2011 catchphrase “It just works” epitomized an era when the bar for technology products was reliability. But the more tech companies showcase how much generative AI doesn’t work, the harder it will be for them to prove its usefulness to enterprise customers and consumers alike.
Even Elon Musk, on the verge of raising $6 billion for his xAI startup, isn’t using generative AI tools at his own SpaceX and Starlink businesses because they keep making mistakes. “I’ll ask it questions about the Fermi Paradox, about rocket engine design, about electrochemistry,” he told the Milken Institute conference earlier this month. “And so far, the AI has been terrible at all those questions.”
Should Google stick it out and keep AI Overview in place, one outcome will obviously be more misinformation. Another is that, in much the same way we got used to scrolling past SEO spam and sponsored ads, we’ll acclimate to the zany mistakes its AI makes too. We’ll get used to an even more mediocre service because there are so few other options. (Google’s global market share for search has slipped to 82% from 87% about a decade ago.) In this new era, we resign ourselves to subpar software that was billed as transformative for the world but requires constant fact-checking.
Hallucinations aren’t a new problem, but they seem to be one that, to our detriment, we’re getting used to. When mistakes cropped up in Google’s very first demo of Bard in February 2023, shares of Alphabet dropped 7%, wiping $100 billion off the company’s value. On Friday, as more social posts of its latest gaffes went viral, the shares opened up almost 1%. Wall Street doesn’t seem to care. Does Google?
We’ll find out if and when Pichai pauses his new AI feature for further tinkering, as he did with the Gemini image generator in February. It’d be yet another humiliating retreat, but to put tech back on the path to “just working,” he should just do it.
Written by: Parmy Olson @Bloomberg
The post “Google’s AI Keeps Hallucinating. Does Anyone Care?” first appeared on Bloomberg