Google advises users to double-check the accuracy of its Bard AI chatbot’s responses.

Google has issued a warning about the limitations of its chatbot, Bard, for the second time. According to the company, anyone using the generative AI should also use Google Search to verify that its replies are accurate.

Chatbots like Bard and ChatGPT are notorious for hallucinating and occasionally spitting out false responses. Google, Bard’s creator, is fully aware of this and advises users to double-check any information the chatbot produces.

Debbie Weinstein, managing director of Google UK, told the BBC’s Today programme that Bard was “not really the place you go to search for specific information.”

Weinstein went on to say that Bard should be regarded as an “experiment” best suited for “collaboration around problem-solving” and “creating new ideas.”

“We’re encouraging people to use Google as the search engine to actually reference information they found,” she explained.

AI proponents have suggested that generative AI could eventually displace traditional search engines like Google’s, so it may be in the company’s best interest to steer users back to Google Search to verify Bard’s replies.

The generative AI tools themselves warn about their tendency to invent “facts.” A disclaimer at the bottom of ChatGPT’s webpage notes that it may produce inaccurate information about people, places, or facts. Meanwhile, Bard tells users that it has limitations and won’t always be correct.

This isn’t the first time Google has issued a warning about chatbots. Last month, Google’s parent company, Alphabet, told its employees to be cautious when using such tools, including Bard, and to avoid entering confidential information into generative AIs. The company also advised its engineers to avoid directly using code generated by these services.

Bard produced an incorrect response in its first public demo in February. A few months later, reports emerged that Google employees had urged the company not to launch the chatbot, describing it as a “pathological liar,” “cringe-worthy,” and “worse than useless.”

In one of the best-known incidents of AI hallucination, two attorneys submitted bogus legal research generated by ChatGPT in a personal injury lawsuit. One of the lawyers said he had no idea that AI content could be false; his only attempt to verify the citations was to ask ChatGPT whether the cases were genuine.
