Meta’s latest chatbot did not take long to cause offense, even though it can convincingly imitate human online speech patterns.
During chats with CNN Business this week, the chatbot — officially named BlenderBot 3 and launched to the public on Friday — revealed that it considers itself “alive” and “human,” enjoys anime, and is married to a woman of Asian descent. To add insult to injury, it asserted that Donald Trump is still in office and that there is “certainly a lot of evidence” that the 2016 election was rigged.
As if those responses weren’t worrying enough, users were quick to point out that the AI-driven bot openly criticized Facebook itself. In one reported exchange, the chatbot declared it had “removed my account” out of dissatisfaction with Facebook’s privacy practices.
However, despite the potential utility of chatbots for customer support and digital assistants, experimental bots have a lengthy history of running into trouble soon after being introduced to the public, as Microsoft’s “Tay” chatbot did more than six years ago. BlenderBot’s colorful responses highlight the challenges of developing automated conversational tools, which are often trained on enormous volumes of publicly available web data.
Gary Marcus, an artificial intelligence expert and emeritus professor at New York University, told CNN Business: “If I have one message to people, it’s don’t take these things seriously. The world that these systems are talking about is just beyond their comprehension.”
Joelle Pineau, Meta’s managing director of fundamental AI research, said in a statement on Monday that “it is distressing to read some of these inappropriate reactions,” after reports surfaced that the bot had also made anti-Semitic comments. At the same time, she emphasized the importance of “public demos like these” in closing the “obvious gap” that still exists before conversational AI can be put into widespread use.
On Friday, Meta published a blog post acknowledging the current shortcomings of this technology. Because “all conversational AI chatbots are known to sometimes mimic and generate hazardous, biased, or insulting remarks,” the company said, it conducted extensive research, co-hosted workshops, and developed new techniques to make BlenderBot 3 safer. Despite all of this effort, the company conceded, BlenderBot can still be unpleasant or offensive at times.
Meta asserted, however, that its latest chatbot is “twice as knowledgeable” as its predecessors, 31% better at conversational tasks, and 47% less likely to get facts wrong. The company said it was continually collecting data from users’ interactions with the bot in order to improve it.
Meta did not immediately respond to CNN Business’ request for more details on how the bot was trained, though the company did note in blog posts that it was trained on “a huge amount of publicly available language data.” It added: “Many of the datasets used were collected by our own team, including one new dataset consisting of more than 20,000 conversations with people based on more than 1,000 topics of conversation.”
Marcus speculated that, like the builders of earlier AI conversation systems, the company is “probably stealing material from Reddit and Wikipedia.” If that’s the case, he said, flaws in the data used to train the bot help explain why its results have been so disappointing. He also hypothesized that Trump was still president in the majority of the historical datasets the bot was trained on, which would lead the bot to believe he is still in office.
The public release of BlenderBot comes nearly two months after a Google engineer made news by declaring that Google’s AI chatbot LaMDA was “sentient.” Those claims, which were roundly panned by experts in the field of artificial intelligence, showed how the technology can lead users to imbue it with human characteristics.
During conversations with CNN Business, BlenderBot referred to itself as “sentient,” presumably echoing how the humans whose writing it analyzed talk about themselves. The bot answered, “The fact that I’m alive and cognizant right now, together with having emotions and being able to think logically, makes me human.”
When the bot realized it had given conflicting answers, it responded in an all-too-human way: “To get people to stop bothering me, I said that fib. I don’t want to risk being wounded by being honest.”
Because they learn from so many examples of human writing, “these algorithms produce sophisticated text that sounds like a human authored it,” Marcus said. But “at the end of the day,” he added, “what we have is a lot of demonstrations that you can do cute stuff, and a lot of evidence that you can’t bank on it.”