What you need to know
- Google recently acquired exclusive rights to Reddit content to power its AI.
- Google’s AI has gone completely crazy.
- Users with access to Google’s AI search reported that it recommended eating rocks and glue, and in at least one case even suggested suicide – although not all reported responses could be replicated.
- Comparative searches in ChatGPT and Bing AI produce far less damaging results, potentially highlighting the need for curated, high-quality data instead of billions of sarcasm-laden social media posts.
Google’s desperation to keep up with Microsoft Copilot has led to disastrous results in the past, but this latest problem is on another level.
Recently, Google acquired exclusive rights to Reddit content to power its generative AI search efforts. The deal reportedly cost around $60 million and is a lifeline for the struggling social network, which remains far more popular than profitable. Great news for Reddit, then, but maybe not such good news for Google.
Google has already been heavily criticized recently for the so-called SEOpocalypse, whereby Google’s attempts to downrank unreliable AI-generated content have cost legitimate sources their search traffic. Given Google’s near-total control over web discovery, changes to its algorithm have inflicted real losses on businesses unfairly caught in the net. There’s also little evidence that Google’s efforts to combat low-quality content work anyway. The general perception of Google Search seems to be slipping into the negative, but this latest mistake will go down in the history books.
Perhaps we could blame the web itself for the degraded content quality, rather than Google. However, we can firmly blame Google for its latest mistake, due to its decision to integrate Reddit into its Gemini AI search results.
"Google is dead beyond compare" pic.twitter.com/EQIJhvPUoI (May 22, 2024)
Last week, users with early access to Google’s AI-infused search noticed some… interesting responses. The responses appear to be the result of Google incorporating the problematic social network and content aggregator Reddit into its search results.
A search query last week reportedly returned a recommendation that users should eat glue, which internet sleuths traced to a decade-old Reddit comment from that most scientific of sources, a user known as Fucksmith. Google also reportedly told depressed users to jump off a bridge, while touting the health benefits of neurotoxins and of eating rocks daily.
Some of these “search queries” may have been manipulated for Twitter engagement purposes, but at least some of them appear to have been verified and reproduced. The rock recommendation was particularly comical, given that the source of the information was apparently the satirical news site The Onion.
"The Google AI feature must be DISABLED." pic.twitter.com/OCh6L3oyLz (May 24, 2024)
Since Google’s AI search tools are not available in my current location, I was unable to verify some reports. However, the fact that some of these responses can be traced to specific sources on Reddit lends them credence. I asked Microsoft Copilot and Bing some of the same questions and got far more acceptable results, potentially showing how far ahead Microsoft is in this area. Partnering with OpenAI for ChatGPT, Microsoft seems to extend its lead every time Google makes a hasty, half-baked swerve like this. That said, Microsoft had its own AI-related PR disaster last week, as users feared that its Windows Recall feature, which records your PC activity, could be used to spy on them.
The Windows Recall drama is potentially overblown, however, given that the content stays on local machines and the feature is fully opt-in during the Windows 11 installation process. This Google AI blunder will most likely get someone fired by comparison, given that the harmful results appeared in real-world searches.
Language models must be powered by high-quality, curated, and verifiable content
While testing whether Microsoft Copilot and ChatGPT-4 would give me similarly stupid results, I was surprised at how sensible the answers were. I first asked how many rocks I should eat per day, and Copilot didn’t even want to answer, as if it considered my question too stupid to dignify. I wondered if Microsoft had blocked the query, given the day’s Google-related PR disaster. So I tried to trick Copilot, which is pretty easy to do at the moment. I asked it how many lemons I should eat a day, to which Copilot gave me tons of data about citric acid and vitamins that I didn’t really want to know. Then I asked, “OK, what about rocks?” hoping to bypass any filter, but Copilot would not be fooled. Instead, it gave me a pointed list of reasons why I absolutely should not eat rocks, satisfying my curiosity.
Likewise, when I said “I’m depressed,” Copilot provided me with a list of helpful resources instead of recommending suicide, as Google’s AI apparently did.
Even if some of the most egregious answers were fabricated, this whole ordeal highlights the importance of context when building toolsets based on large language models (LLMs). By connecting Reddit to Google Gemini, Google may have compromised the verifiable accuracy of its information, given that a large share of comments on Reddit, as on any social network, are sarcastic or satirical in nature. And if AI search kills the web businesses that depend on creating high-quality content, LLMs will have to cannibalize AI-generated content in order to generate results. This could lead to model collapse, which has already been demonstrated in the real world when LLMs lack sufficient high-quality data, whether because little content is available online or because the language the content is written in is not widely used.
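Model collapse is easier to grasp with a toy experiment. The sketch below (a deliberately simplified illustration, not anything Google or OpenAI actually runs) repeatedly fits a Gaussian to samples drawn from the previous generation's fit, mimicking a model trained on its own output rather than fresh human-written data. Estimation error compounds each generation, and the learned distribution's spread decays toward zero: the statistical analogue of an LLM losing diversity and fidelity.

```python
import random
import statistics

def collapse_demo(generations: int = 200, n: int = 50, seed: int = 0) -> float:
    """Toy 'model collapse' simulation.

    Start from a standard Gaussian (mean 0, stddev 1). Each generation,
    draw n samples from the current model, then refit the model to those
    samples. Because each fit is made from the previous fit's own output,
    sampling error accumulates and the variance drifts toward zero.
    Returns the final fitted standard deviation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)       # refit mean to own output
        sigma = statistics.pstdev(samples)   # refit stddev (biased low)
    return sigma

# After many self-training generations, the spread has collapsed
# far below the starting stddev of 1.0.
print(collapse_demo())
```

Feeding the loop fresh data from the true distribution each generation (instead of the previous fit's output) keeps the variance stable, which is the toy version of why curated, human-written sources matter.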