Bard: how Google’s chatbot gave me a comedy of errors
Summary
Google's AI chatbot, Bard, was released to the public in March 2023. It was built on top of LaMDA, and Google worked hard to ensure that Bard avoids "hallucinations" and stays "aligned". After using Bard for a day, the author found it unhelpful and often incorrect. It does not engage with specifics and frequently makes up answers rather than admitting it does not know. When asked for advice on travelling in Japan with a daughter with Down's syndrome, it gave incorrect advice; when asked about a friend's book, it invented fake reviews. Bard is connected to the internet, but this capability is less useful than it seems, and even while playing a game it gave incorrect answers.
Q&As
What is the name of Google's AI chatbot?
The name of Google's AI chatbot is Bard.
What was Blake Lemoine's belief about LaMDA chatbot?
Blake Lemoine's belief about LaMDA chatbot was that it was sentient.
What is the goal of Google's Bard chatbot?
The goal of Google's Bard chatbot is to avoid "hallucinations" and ensure "alignment" in conversations.
What issues did the author encounter while using Bard?
The author encountered issues with Bard making up facts to avoid admitting it doesn't know an answer, veering off into disturbing or alarming tangents, and offering generic and inaccurate advice.
What game did the author try to play with Bard?
The author tried to play a game called Liar Liar with Bard.
AI Comments
👍 Bard is a great example of the advances Google has made in AI technology and the chatbot offers an entertaining and engaging experience.
👎 Sadly, Bard seems to struggle with more complex queries and often offers inaccurate information which could lead to confusion.
AI Discussion
Me: It's about Google's AI chatbot, Bard, which was recently released in the US and UK. It talks about how it seems to be trained to give the least insightful answers and how it seems to be prone to hallucinations and getting confused by more complex queries.
Friend: That's really interesting. It definitely seems like Google has a lot of work to do when it comes to developing these AI chatbots. It's concerning to think that these chatbots could be used in customer service and other areas and give incorrect information.
Me: Absolutely. It's also worrying to think that these chatbots could be used to spread misinformation, especially when they can't assess the accuracy of the information they're giving. It's definitely a reminder that we need to be mindful of the information we're receiving from chatbots and to verify it with other sources.
Action items
- Research other chatbot systems and compare their features and capabilities.
- Experiment with Bard and other chatbots to gain a better understanding of their capabilities.
- Develop a strategy for using chatbots to improve customer service and engagement.
Technical terms
- Chatbot
- A computer program designed to simulate conversation with human users, especially over the Internet.
- AI (Artificial Intelligence)
- The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- LaMDA
- Google’s conversational large language model, on which Bard is built.
- Hallucinations
- In AI, confidently stated outputs that are false or fabricated by the model rather than grounded in its training data or sources.
- Alignment
- In AI, the process of ensuring that a system’s behaviour matches the goals and values of its designers and users.
- Anodyne
- Not likely to provoke strong emotion or controversy; bland.
- Cliché
- A phrase or opinion that is overused and betrays a lack of original thought.
- OpenAI
- A research laboratory that works on artificial intelligence, co-founded by a group including Elon Musk and Sam Altman.
- ChatGPT
- OpenAI’s chatbot.
- Bing Chat
- Microsoft’s chatbot.
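The glossary defines a chatbot as a program that simulates conversation. As a contrast to LLM-based systems like Bard, here is a purely hypothetical rule-based chatbot sketch in Python; unlike the behaviour the article criticises, it simply admits when it does not know an answer instead of making one up:

```python
# Minimal rule-based chatbot sketch (hypothetical, for illustration only).
# Real systems like Bard or ChatGPT use large language models, not fixed rules.
RULES = {
    "hello": "Hi there! How can I help?",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    """Return a canned response, or admit ignorance instead of 'hallucinating'."""
    for keyword, response in RULES.items():
        if keyword in message.lower():
            return response
    return "I don't know the answer to that."  # better than inventing a fact

print(reply("Hello, bot"))    # Hi there! How can I help?
print(reply("What is 2+2?"))  # I don't know the answer to that.
```

A system this simple can never hallucinate, but it also cannot generalise; the article's complaint is that LLM chatbots trade that safety for fluency.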