Meta Explains Chatbot Offense
Meta’s new artificial intelligence software is already failing. The chatbot, BlenderBot 3, seems to believe antisemitic conspiracy theories and that former President Trump won the 2020 election, as shown in the conversation here.
Meta is emphasizing BlenderBot 3’s pilot status and requires users to accept a statement before they interact:
I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.
The company describes BlenderBot 3 as a “state-of-the-art conversational agent that can converse naturally with people” and claims that feedback will improve how the bot interacts:
Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.
As we learned from other messages this past week, company leaders are pushing back on complaints and asking customers to be patient. We’ll see whether Meta’s strategy of managing expectations turns out better for BlenderBot 3 than Microsoft’s response to complaints about Tay, its 2016 chatbot, which was removed after making antisemitic, racist, and sexist comments. Meta asks us for feedback, but I’d rather be offended by humans and invest my time in educating them instead of a bot.