Google’s move toward using AI to generate a written response to user searches instead of providing a list of links algorithmically ranked by relevance was inevitable. Before AI Overview – introduced last week for US users – Google had Knowledge Panels, those boxes of information that appear at the top of some searches, encouraging users to get their answers directly from Google, rather than clicking a result.
AI Overview summarizes search results for a portion of queries, right at the top of the page. The results come from several sources, which are cited in a drop-down gallery below the summary. As with any AI-generated answer, these answers vary in quality and reliability.
In one instance, AI Overview told users to change their car’s blinker fluid – which doesn’t exist – apparently because it picked up joke responses from forums where users seek advice from their peers. In a test I ran on Wednesday, Google correctly generated instructions for doing push-ups, drawing heavily on a New York Times article. Less than a week after launching the feature, Google announced that it was testing ways to incorporate ads into its generative responses.
I’ve been writing about Bad Stuff online for years now, so it’s no big surprise that, after gaining access to AI Overview, I started googling a bunch of things that might prompt the tool to generate answers drawn from unreliable sources. The results were mixed and seemed to depend largely on the exact wording of my question.
When I typed queries asking for information about two different people who are widely associated with questionable natural “cures” for cancer, one search returned a generated response that simply repeated that person’s claims uncritically. For the other name, Google declined to generate an answer at all.
Basic first aid questions – such as how to clean a wound – drew on trusted sources to generate an answer when I tried them. Queries about “detoxes,” meanwhile, repeated unproven claims and lacked important context.
But rather than trying to assess the overall reliability of these results, there’s another question worth asking here: If Google’s AI-generated answer is wrong, who is responsible if that answer ends up hurting someone?
Who is responsible for AI?
The answer to that question may not be simple, according to Samir Jain, vice president of policy at the Center for Democracy and Technology. Section 230 of the Communications Decency Act of 1996 largely protects companies like Google from liability for third-party content posted on their platforms, because Google is not treated as the publisher of the information it hosts.
It is “less clear” how the law would apply to AI-generated search responses, Jain said. AI Overview makes Section 230 protections more complicated, because it’s harder to tell whether the content was created by Google or merely surfaced by it.
“If you get an AI Overview that contains a hallucination, it’s a little hard to see how that hallucination wouldn’t have been at least partly created or developed by Google,” Jain said. But a hallucination is different from surfacing bad information: if Google’s AI Overview cites a third party that itself provides inaccurate information, the protections would likely still apply.
Many other scenarios currently sit in a gray area: Google’s generated answers draw on third parties but don’t necessarily cite them directly. So is this original content, or is it more like the snippets that appear beneath search results?
Although generative search tools like AI Overview represent new territory when it comes to Section 230 protections, the risks are not hypothetical. Apps claiming to use AI to identify mushrooms for would-be foragers are already available in app stores, despite evidence that these tools are not particularly accurate. Even Google’s demonstration of its new video search feature generated a factual error, as The Verge noted.
Eating the internet’s seed corn
There’s another question here beyond when Section 230 may or may not apply to AI-generated responses: what incentives AI Overview does or doesn’t create for producing reliable information in the first place. AI Overview depends on a web that continues to contain plenty of factual, well-researched information. But the tool also appears to make it harder for users to click through to those sources.
“Our primary concern is the potential impact on human motivation,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, said in an email. “Generative AI tools must include recognition and reciprocity for the human contributions they rely on, through clear and consistent attribution.”
The Wikimedia Foundation has not seen a major drop in traffic to Wikipedia or other Wikimedia projects as a direct result of chatbots and AI tools so far, but Rogers said the foundation is monitoring the situation. Google has in the past relied on Wikipedia to power its Knowledge Panels and has drawn on its work for fact-checking pop-up boxes on YouTube videos about controversial topics, for example.
There is a central tension here that is worth monitoring as this technology becomes more prevalent. Google is incentivized to present its AI-generated answers as authoritative. Otherwise, why would you use them?
“On the other hand,” Jain said, “especially in sensitive areas like health, there will probably need to be some sort of disclaimer or at least a caveat.”
Google’s AI Overview includes a small note at the bottom of each result specifying that it is an experimental tool. And, based on my unscientific testing, I suspect that Google has chosen to avoid generating answers on certain controversial topics for now.
With some tweaking, the tool will generate an answer to questions about its own potential liability. After a few dead ends, I asked Google: “Is Google a publisher?”
“Google is not a publisher because it does not create content,” the response began. I copied that sentence and pasted it into another search, surrounded by quotation marks. The search engine found zero results for the exact phrase.