The new artificial intelligence features Google announced a few weeks ago are finally going mainstream, but not in the way Google might prefer.
As you may have learned from recent news stories and discussions (or even experienced yourself), the automatically generated AI previews that now sit atop so many Google search results give answers that… well, to call them incorrect is true but doesn’t quite capture it. Try surreal, ridiculous, and potentially dangerous instead. Since their deployment, AI Overviews have told users to smoke cigarettes while pregnant, add glue to their homemade pizza, sprinkle used antifreeze on their lawn, and boil mint to cure their appendicitis.
To address these erroneous responses to simple and outlandish queries alike, Google appears to be handling each incident one by one, tweaking the relevant previews accordingly. But the erroneous answers may even affect other features of the search engine, such as its automatic calculator: a U.S.-based user who posted a screenshot on X discovered that Google’s technology couldn’t even register that the unit “cm” stands for centimeter, reading the measurement as a whole meter instead. Search engine optimization expert Lily Ray said she had independently verified the finding.
The massive rollout of AI Overviews has prompted users and analysts to surface other, even buggier, Google findings: the underlying Gemini bot appears to generate its “answers” first and only then find citations to back them up. This process seems to cause many old, spammy, and broken links to appear as supporting sources for those answers. Nonetheless, Google – which continues to rake in huge amounts of digital-advertising money, even though it has recently lost some of that market share – wants to insert more ads into the previews, some of which could themselves be “powered by AI.”
Meanwhile, these same AI Overviews are already diverting traffic from the more trusted sources that would normally appear on Google. Contrary to CEO Sundar Pichai’s statements, SEO experts have found that the links presented in the previews do not generate many clicks, owing to their placement. (This factor, along with the misinformation, is only part of the reason many major news outlets, including Slate, have chosen not to be included in AI previews. A Google spokesperson told me that “such analyses are not a reliable or comprehensive way to evaluate Google search traffic.”)
Ray’s research shows that traffic from Google search results to publishers was down overall this month, with much greater visibility going to posts from Reddit – the site that, incidentally, was the original source of the famous glue-on-pizza recommendation and has signed multimillion-dollar deals with Google for more of the same. (The Google spokesperson responded: “This is by no means a comprehensive or representative study of traffic to news publications from Google Search.”)
Google was probably aware of all these problems before it pushed AI previews into prime time. Pichai has called chatbots’ “hallucinations” (i.e., their tendency to invent things) an “inherent characteristic,” and has even admitted that such tools, engines, and datasets “are not necessarily the best approach to always access reality” – something he told the Verge that Google’s data and search capabilities would solve. That seems dubious in light of the way Google’s algorithms have obscured the search visibility of various reliable news sources and intentionally “burned down small sites,” as SEO expert Mike King noted in his study of recently leaked Google Search documents. (The Google spokesperson said these claims were “categorically false” and that “we caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information.”)
More to the point: Google’s wayward AI has been on public display for a while now. In 2018, Google demoed voice-assistant technology that could supposedly call and converse with people in real time, but Axios found that the demo may have actually used prerecorded conversations rather than live ones. (Google declined to comment at the time.) Google’s pre-Gemini chatbot, Bard, was introduced in February 2023 and promptly gave an incorrect answer that temporarily sank the company’s stock price. Later that year, the company’s impressive video presentation of Gemini’s multimodal AI was revealed to have been edited after the fact to make its reasoning appear faster than it actually was. (Cue another subsequent dip in the stock.) And the company’s annual developer conference, held just a few weeks ago, also featured Gemini not only generating but highlighting a wrong suggestion for repairing your film camera.
To be fair to Google, which has been working on AI development for a long time, the rapid deployment of and hype around all these tools is probably its way of keeping pace in the era of ChatGPT – a chatbot that, by the way, continues to generate a significant number of wrong answers on all sorts of subjects. And it’s not as if the other companies chasing investor-pleasing AI trends aren’t making their own ridiculous mistakes or faking their most impressive demonstrations.
Last month, Amazon’s supposedly AI-powered, human-free “Just Walk Out” grocery concept turned out to feature… plenty of humans behind the scenes monitoring and programming the shopping experience. Similar results have been found in the human-free, so-called AI-powered drive-thrus used by chains like Checkers and Carl’s Jr. There are also Cruise’s “driverless” cars, which require remote human intervention almost every two kilometers traveled. ChatGPT’s parent company, OpenAI, is not immune either, having employed numerous humans to clean up and polish the animated visual landscapes supposedly generated wholesale by prompts fed to Sora, its not-yet-public video generator.
All of this, mind you, is just another layer of hidden labor on top of the human operations outsourced to countries like Kenya, Nigeria, Pakistan, and India, where workers are underpaid – or, reportedly, forced into conditions of “modern slavery” just to survive – while they systematically provide feedback to AI bots and label gruesome images and videos for content-moderation purposes. And don’t forget the humans who work in the data centers, the chip factories, and the power plants needed to keep it all running.
So, let’s recap: After years of teasing, debunked claims, staged demonstrations, refusals to provide more transparency, and “humanless” branding that in fact depends on a lot of humans in many different (and harmful) ways, these AI creations are still bad. They continue to make things up, to plagiarize their training sources, and to offer information, advice, “news,” and “facts” that are false, absurd, and potentially dangerous to your health, to the body politic, to people just trying to do simple calculations, and to anyone left scratching their head over where to find their car’s “turn signal fluid.”
Does this remind you of anything else in the history of technology? Perhaps Elizabeth Holmes, who also faked numerous demos and made fantastical claims about her company, Theranos, to sell a “technological innovation” that was simply impossible?
Holmes is now behind bars, but the scandal still lingers in the public imagination, and for good reason. In retrospect, the glaring signs should have been just as obvious, right? Her biotech startup, Theranos, had no health experts on its board of directors. She promoted wild scientific claims that no authority supported and refused to offer any justification for them. She partnered with massive (and actually trusted) institutions like Walgreens without verifying the safety of her product. She instilled a deep, intimidating culture of secrecy among her employees and made them sign aggressive agreements to that effect. She garnered the knee-jerk endorsement of famous and powerful people, like Vice President Joe Biden, through sheer force of fear of missing out. And she consistently hid everything that actually powered her systems and creations, until dogged journalists came looking for themselves.
It’s been almost 10 years since Holmes was finally exposed. Yet, clearly, the throngs of observers and tech analysts who took her at her word are just as willing to place their full trust in the people behind these error-prone, buggy, humans-behind-the-curtain AI bots, which, their creators promise, will change everything and everyone. Unlike Theranos, of course, companies like OpenAI have actually created functional products for public consumption that can achieve impressive feats. But the rush to force this stuff everywhere, to make it do tasks it probably isn’t close to being ready for, and to keep it accessible despite a not-so-obscure history of missteps and errors – that is where we seem to be borrowing from the Theranos playbook again. We haven’t learned anything. And the minds behind the chatbots that don’t really teach you anything might actually prefer it that way.