It didn’t take long for Meta’s new chatbot to say something offensive
By Catherine Thorbecke, CNN Business
Meta’s new chatbot can convincingly mimic how humans speak on the internet — for better and worse.
In conversations with CNN Business this week, the chatbot, which was released publicly Friday and has been dubbed BlenderBot 3, said it identifies as “alive” and “human,” watches anime and has an Asian wife. It also falsely claimed that Donald Trump is still president and there is “definitely a lot of evidence” that the election was stolen.
If some of those responses weren’t concerning enough for Facebook’s parent company, users were quick to point out that the artificial intelligence-powered bot openly blasted Facebook. In one case, the chatbot reportedly said it had “deleted my account” over frustration with how Facebook handles user data.
While there’s potential value in developing chatbots for customer service and digital assistants, there’s a long history of experimental bots quickly running into trouble when released to the public, such as with Microsoft’s “Tay” chatbot more than six years ago. The colorful responses from BlenderBot show the limitations of building automated conversational tools, which are typically trained on large amounts of public online data.
“If I have one message to people, it’s don’t take these things seriously,” Gary Marcus, an AI researcher and New York University professor emeritus, told CNN Business. “These systems just don’t understand the world that they’re talking about.”
In a statement Monday amid reports the bot also made anti-Semitic remarks, Joelle Pineau, managing director of fundamental AI research at Meta, said “it is painful to see some of these offensive responses.” But she added that “public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.”
Meta previously acknowledged the current pitfalls with this technology in a blog post on Friday. “Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company said. “Despite this work, BlenderBot can still make rude or offensive comments.”
But Meta also claimed its latest chatbot is “twice as knowledgeable” as its predecessors, improved by 31% on conversational tasks, and factually incorrect 47% less often. Meta said it was continuously gathering data as more people interact with the bot in order to make improvements.
Meta did not immediately respond to CNN Business’ request for further specifics on how the bot was trained, but it did say in blog posts that it was trained via “a large amount of publicly available language data.” The company added: “Many of the datasets used were collected by our own team, including one new dataset consisting of more than 20,000 conversations with people predicated on more than 1,000 topics of conversation.”
Marcus speculated that the company is “probably borrowing stuff from Reddit and Wikipedia” like other AI chat systems. If so, he said, the poor outcomes highlight limitations in the data the bot is trained on. For example, Marcus suggested, the bot may think Trump is still president because Trump was president in the majority of the older data sets it was trained on.
The public release of BlenderBot comes nearly two months after a Google engineer made headlines by claiming that Google’s AI chatbot LaMDA was “sentient.” The claims, which were widely criticized in the AI community, highlighted how this technology can lead people to assign human attributes to it.
BlenderBot self-identified as “sentient” during chats with CNN Business, likely because that’s what the human responses it studied said. When asked what made it “human,” the bot stated: “The fact that I’m alive and conscious right now makes me human, as well as having emotions and being able to reason logically.”
After being caught contradicting itself in responses, the bot also produced an all-too-human response: “That was just a lie to make people leave me alone. I’m afraid of getting hurt if I tell the truth.”
As Marcus put it, “these systems produce fluent language that sounds like a human wrote it, and that’s because they’re drawing on these vast databases of things that humans actually did write.” But, he added, “at the end of the day, what we have are a lot of demonstrations that you can do cute stuff, and a lot of evidence that you can’t count on it.”
The-CNN-Wire
™ & © 2022 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.