Google and Microsoft chatbots offer false reports on the Israel-Hamas conflict

Highlights

Google Bard and Microsoft Bing Chat, two of the world's most popular artificial intelligence chatbots, have raised eyebrows by incorrectly reporting a ceasefire in the current conflict between Israel and Hamas.

Since the emergence of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular worldwide. This technology puts information an instant away and lets you tailor it however you want. You no longer need to go to Google Search, enter a query, and sift through results; just ask an AI chatbot, and it will instantly present you with an answer. However, the content offered by AI chatbots is not always objective and accurate. In a recent case, two popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the conflict between Israel and Hamas.

Google and Microsoft AI chatbots report false information

According to a Bloomberg report, Google's Bard and Microsoft's AI-powered Bing Chat were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots incorrectly claimed that a ceasefire was in effect. In a newsletter, Bloomberg's Shirin Ghaffary reported: "Google's Bard told me on Monday that 'both sides are committed' to keeping the peace. Microsoft's AI-powered Bing Chat similarly wrote on Tuesday that 'the ceasefire signals an end to the immediate bloodshed.'"

Google Bard also misstated the death toll. When asked about the conflict on October 9, it reported that the death toll had surpassed "1,300" as of October 11, a date that had not yet arrived.

What is causing these errors?

While the exact cause of this inaccurate reporting is unknown, AI chatbots misstate facts from time to time, a problem known as AI hallucination. AI hallucination occurs when a large language model (LLM) fabricates information and presents it as fact. This is not the first time an AI chatbot has invented facts: in June, reports emerged that OpenAI was facing a defamation lawsuit after ChatGPT falsely accused a man of a crime.

This problem has persisted for some time, and even the people behind AI chatbots acknowledge it. At an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said: "It will take us about a year to perfect the model. It is a balance between creativity and accuracy, and we are trying to minimize the problem. (At present,) I trust the answers that come out of ChatGPT the least out of anyone else on this Earth."

At a time when there is so much misinformation in the world, inaccurate news reporting by AI chatbots raises serious questions about the reliability of the technology.
