Google attributes AI Overviews failures to “pointless searches” and inappropriate use of its AI

Google attributes the incorrect AI Overviews answers to “meaningless searches, apparently intended to produce erroneous results”, although it has acknowledged that some of them could stem from queries on areas and topics in which it needs to improve its accuracy.

AI Overviews is a Gemini-powered feature that offers quick answers with AI-generated summaries and has been available in the United States for a few weeks. More specifically, it is a Search Labs experiment that has replaced the Search Generative Experience (SGE).

Following its rollout, several users reported receiving a series of inconsistent responses, such as the recommendation to “eat at least one small rock a day”, supposedly on the advice of geologists at the University of California, Berkeley.

This was reported by a user known as Kris Kashtanova on social media.

In response to these complaints, a Google spokesperson said that its technology largely generates “high-quality information” and that the erroneous answers stemmed from “uncommon queries”.

After noting that the firm was taking “quick action” to solve the problem, Google published a statement signed by the vice president and head of Google Search, Liz Reid, in which she explained how AI Overviews work and what may have caused the failures.

First, she pointed out that, according to the feedback received, AI Overviews users are more satisfied with their search results and ask “longer and more complex” questions, since “they know that Google can now help”.

She added that “clicks to web pages are of higher quality” and that users are more likely to stay on a given page because Google has done “a better job of finding the correct” and useful information for them.

Google has also clarified that this experience “works very differently than chatbots and other LLM products” and that, in addition to providing text results, it includes relevant links so that users “can investigate further”.

Because it considers the accuracy of these results paramount, and AI Overviews are designed to show information backed by the best web results, Google says the technology generally does not hallucinate or invent things the way other tools of this type can.

The company also attributed its AI’s strong performance to “solid red teaming efforts, evaluations and tests”, but recognized that novel searches carried out by millions of people may have resulted in errors in its responses.

It also acknowledged that some of the overviews were “strange, inaccurate, or unhelpful”, which it attributed to queries that users don’t typically ask or to specific areas and topics where it needs to improve.

However, she insisted that a large number of faked screenshots have been circulated showing misleading, “obvious and silly” results, such as ones recommending smoking during pregnancy.

MEASURES TO ADDRESS THE WRONG RESPONSES

To address this issue, Google has been working on a series of updates that it believes “can help a broad set of queries, including new ones” that it hasn’t yet identified, as well as on removing responses that don’t comply with its policies.

To this end, it has built better detection mechanisms for nonsensical queries “that should not show an AI Overview” and has limited the inclusion of satirical and humorous content in these results.

It has also updated its systems to limit the use of user-generated content in answers that could offer misleading advice, and it has added restrictions that prevent AI Overviews from appearing for queries where they were not helpful.

Finally, it has indicated that it has reinforced the guardrails it already applies to topics such as news and health, announcing that its objective is “not to show AI summaries for important news topics”, since the “freshness and factuality” of these topics are important.

By Editor
