Google Bard Insiders Skeptical About the AI Chatbot’s Usefulness


Google Bard product managers, designers, and engineers question the effectiveness and utility of the AI chatbot, messages from an ‘official’ Discord group show.


Key Points

  • Members of an invitation-only Google Bard Discord group question the effectiveness and utility of the AI chatbot.
  • Generative AI outputs should not be trusted without independently verifying the information.
  • Feedback is encouraged from Bard users while the tool is in the experimental phase of development.

Alphabet Inc.’s Google is running an invitation-only chat on Discord for heavy users of Bard, Google’s artificial intelligence (AI) chatbot powered by a large language model (LLM) named LaMDA. The Discord group has almost 9,000 members, including Google product managers, designers, and engineers who use the private forum to openly discuss Bard’s effectiveness and utility. Some participants question whether the enormous resources going into the AI chatbot’s development are worth it, according to a Bloomberg article.

“The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness?” said Cathy Pearl, user experience lead for Bard, in August 2023. “Like really making a difference. TBD!”

As Google has further integrated Bard into its core products, such as Gmail, Maps, Docs, and YouTube, the company has received complaints about the AI chatbot generating made-up facts and giving potentially dangerous advice to its users. The same day Google introduced app extensions for Bard, the company also announced a Google Search button on Bard’s interface to help users double-check the tool’s generative AI responses for factual accuracy against results from its search engine.

“My rule of thumb is not to trust LLM output unless I can independently verify it,” Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July 2023. “Would love to get it to a point that you can, but it isn’t there yet.”

Bloomberg reviewed dozens of messages in the Discord group from July to October 2023 that provided insight into how Bard is being critiqued by users who know it best. Some of those messages indicate that the company leaders tasked with developing the AI chatbot feel conflicted about Bard’s potential.

Expounding on his comment about “not trusting” LLM output, Rabiej suggested limiting Bard’s use to “creative/brainstorming applications.” Using Bard for programming was also a good option, he said, “since you inevitably verify if the code works!”

The debate about Bard’s limitations on Google’s Discord channel is not surprising to most members of the group, since such discussion is a routine part of product development. “Since launching Bard as an experiment, we’ve been eager to hear people’s feedback on what they like and how we can further improve the experience,” said Jennifer Rodstrom, a Google spokesperson. “Our discussion channel with people who use Discord is one of the many ways we do that.”

Google’s announcement of the release of Bard in March 2023 also contained details about the AI chatbot’s limitations during development. It states: “While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently.”

Google Bard also includes a disclaimer on the tool for its users that states: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”

Google Bard remains an early experiment for the company, and the generative AI tool is still learning and growing. Users should check the sources Bard cites for accuracy. If an output appears to be a hallucination (made up or incorrect), users should submit feedback to help Google continue improving the quality and safety of the product.

Summary

Members of a private Google Bard testing group on Discord are questioning the effectiveness and utility of the AI chatbot in its current state of development, with some feeling conflicted about Bard’s potential. Bard is still in the experimental stage where it can produce false or inaccurate outputs. As a result, users of the product should not trust the output unless it can be independently verified.
