Digital Worldwide News

Google's AI Search Tells Users to Drink Urine and Glue Pizza

24th May 2024


Google's new AI Overviews feature has recommended that users drink a couple of litres of light-coloured urine to help pass kidney stones.


One user tweeted, "I searched for kidney stone remedies and Google's AI told me to drink urine. Seriously, Google?" Another user sarcastically commented, "Thanks Google, but I think I'll stick to more traditional methods for passing kidney stones."


In addition to recommending urine consumption, the AI Overviews feature has provided questionable advice in other areas.


For example, when users searched for tips on improving the adhesion of cheese to pizza, the AI suggested using "non-toxic glue" as a solution.


This unusual recommendation sparked confusion and amusement among users, further highlighting the challenges of relying solely on AI-generated responses.


Despite the mockery, Google insisted that such instances were isolated and not representative of the feature's overall performance.

 

While the AI Overviews feature aims to provide a summary of search results to save users time, its accuracy has come under scrutiny.

 

AI-generated answers to complex queries can sometimes be unexpected or inaccurate, and in this case the recommendation to drink urine underscores the risks of taking such responses at face value.


AI journalist Karen Hao, who has covered artificial intelligence for MIT Technology Review, stated, "AI models are trained on vast amounts of data, but they can still produce erroneous or misleading results, especially when faced with unusual or uncommon queries.


"It's important for users to critically evaluate the information provided by AI systems and cross-reference it with reputable sources."

 

Renowned AI researcher Dr. Andrew Ng emphasized, "While AI has made significant advancements in natural language processing and understanding, it still lacks the contextual understanding and common sense reasoning abilities of humans.


"As a result, AI-generated responses may sometimes be inaccurate or inappropriate."

 

According to a study by Pew Research Center, 64% of Americans say that computer programs will always reflect some level of human bias, while 34% believe it is possible to build AI systems that are unbiased.

 

An analysis by Gartner predicted that by 2022, 70% of customer interactions would involve emerging technologies such as machine learning applications, chatbots, and mobile messaging, up from 15% in 2018.