![Cats on the moon? Google's AI tool produces misleading responses that have experts concerned](https://www.trendfeedworld.com/wp-content/uploads/2024/05/Cats-on-the-moon-Google39s-AI-tool-produces-misleading-responses.jpg)
Ask Google if cats have been to the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.
Now it comes up with an immediate answer, generated by artificial intelligence, which may or may not be correct.
“Yes, astronauts have met, played with, and cared for cats on the moon,” Google's newly revamped search engine said in response to a question from an Associated Press reporter.
It added: “For example, Neil Armstrong said, 'One small step for man' because it was a cat's step. Buzz Aldrin also used cats on the Apollo 11 mission.”
None of this is true. Similar errors, some amusing and some harmful falsehoods, have been shared widely on social media since Google launched AI Overviews this month, a search page makeover that regularly places the AI-generated summaries at the top of search results.
The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people seeking help in an emergency.
When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it confidently responded with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Mitchell said the summary supported the claim by citing a chapter from an academic book written by historians. But the chapter did not make that false claim; it only referred to the false theory.
“Google's AI system isn't smart enough to figure out that this quote doesn't actually support the claim,” Mitchell said in an email to the AP. “Given how unreliable it is, I think this AI overview feature is highly irresponsible and should be taken offline.”
Google said in a statement Friday that it is taking "swift action" to fix errors that violate its content policies, such as the falsehood about Obama, and is using them to "develop broader improvements" that are already rolling out. In most other cases, Google maintains the system works as intended, thanks to extensive testing before its public release.
“The vast majority of AI overviews provide high-quality information, with links to dig deeper into the web,” Google said in a written statement. “Many of the examples we saw were unusual queries, and we also saw examples that were manipulated or that we couldn't reproduce.”
It is difficult to reproduce errors from AI language models, in part because they are inherently random. They work by predicting which words would best answer the questions asked of them, based on the data they were trained on. They are prone to making things up – a much-studied problem known as hallucination.
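To illustrate why that randomness makes errors hard to reproduce, here is a minimal, hypothetical Python sketch of sampling a next word from a probability distribution. The words and probabilities are invented for illustration; this is not how Google's system is implemented.

```python
import random

# Toy sketch: a language model produces text by repeatedly sampling the
# next word from a probability distribution, so the same prompt can yield
# different answers on different runs. These values are made up.
next_word_probs = {
    "Earth": 0.60,
    "Mars": 0.30,
    "the moon": 0.07,
    "a cat": 0.03,
}

def sample_next_word(probs: dict) -> str:
    """Draw one word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Same "question", three runs: the completion can differ each time, which
# is one reason a reported error may not reproduce on demand.
for run in range(3):
    print(f"Run {run + 1}: Astronauts have visited {sample_next_word(next_word_probs)}.")
```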
The AP tested Google's AI feature with several questions and shared some of its answers with experts on the subject. When asked what to do about a snakebite, Google provided an answer that was “impressively thorough,” said Robert Espinoza, a professor of biology at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.
But when people go to Google with an emergency question, there is a chance the tech company's answer will contain a hard-to-spot error.
“The more stressed, harried or hurried you are, the more likely you are to just accept that first answer that comes out,” says Emily M. Bender, professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington. “And in some cases these can be life-critical situations.”
That's not Bender's only concern, and she's been warning Google about it for years. When Google researchers published a 2021 paper titled "Rethinking search" that proposed using AI language models as "domain experts" capable of answering questions authoritatively, much as they do now, Bender and her colleague Chirag Shah responded with a paper explaining why that was a bad idea.
They warned that such AI systems could perpetuate the racism and sexism found in the vast amounts of written data on which they are trained.
“The problem with this kind of misinformation is that we're swimming in it,” Bender said. “And so there is a good chance that people will have their prejudices confirmed. And it's harder to spot misinformation if it confirms your biases.”
Another concern ran deeper: that ceding information retrieval to chatbots undermines the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.
Those forums and other websites rely on Google to send people to them, but Google's new AI overviews threaten to disrupt the flow of monetized Internet traffic.
Google's rivals have also been watching the reaction closely. The search giant has been under pressure for more than a year to deliver more AI features as it competes with ChatGPT maker OpenAI and startups like Perplexity AI, which aims to take on Google with its own AI question-and-answer app.
“It looks like this was rushed by Google,” said Dmitry Shevelenko, Perplexity's chief business officer. “There are just a lot of unforced errors in the quality.”
—————-
The Associated Press receives support from several private foundations to improve its explanatory reporting on elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.