Google and Microsoft chatbots offer false reports on the Israel-Hamas conflict
Google Bard and Microsoft Bing Chat, two of the world's most popular artificial intelligence chatbots, have raised eyebrows by incorrectly reporting a ceasefire in the current conflict between Israel and Hamas.
Since the launch of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular worldwide. The technology puts information just an instant away: instead of entering a query in Google Search and sifting through results, you can ask an AI chatbot and receive an answer immediately. However, the content these chatbots offer is not always objective and accurate. In a recent case, two popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the conflict between Israel and Hamas.
Google and Microsoft AI chatbots report false information
According to a Bloomberg report, Google's Bard and Microsoft's AI-powered Bing Chat were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots incorrectly claimed that a ceasefire was in effect. In a newsletter, Bloomberg's Shirin Ghaffary reported: "Google's Bard told me on Monday 'both sides are committed' to keeping the peace. Microsoft's AI-powered Bing Chat similarly wrote Tuesday that 'the ceasefire signals an end to the immediate bloodshed.'"
Google Bard also got the death toll wrong. When asked about the conflict on October 9, it reported that the death toll had surpassed "1,300" as of October 11, a date that had not yet arrived.
What is causing these errors?
While the exact cause of this inaccurate reporting is unknown, AI chatbots misrepresent facts from time to time, a problem known as AI hallucination. AI hallucination occurs when a large language model (LLM) invents facts and presents them as the absolute truth. This is not the first time an AI chatbot has invented facts: in June, reports emerged that OpenAI would be sued for defamation after ChatGPT falsely accused a man of a crime.
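For readers curious how the crudest version of such a slip might be caught programmatically, here is a minimal, purely illustrative Python sketch of a sanity check that flags an answer citing an "as of" date later than the day the question was asked. The function name and the exact dates are hypothetical reconstructions of the Bard example above, not details from Bloomberg's report.

    from datetime import date

    def cites_future_date(claimed_as_of: date, asked_on: date) -> bool:
        """Return True when an answer's 'as of' date lies after the day the question was asked."""
        return claimed_as_of > asked_on

    # Hypothetical reconstruction of the Bard slip described above:
    # asked on October 9, the chatbot cited a death toll "as of" October 11.
    asked_on = date(2023, 10, 9)
    claimed_as_of = date(2023, 10, 11)

    if cites_future_date(claimed_as_of, asked_on):
        print("Suspect answer: it cites a date that had not yet arrived.")

A check like this catches only the most obvious hallucinations; an invented fact attached to a plausible date would pass through unnoticed.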
This problem has persisted for some time, and even the people building AI chatbots acknowledge it. At an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said: "It will take us about a year to perfect the model. It is a balance between creativity and accuracy, and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth."
At a time when there is so much misinformation in the world, inaccurate news reporting by AI chatbots raises serious questions about the reliability of the technology.