Facebook’s New Blender Chatbot Goes Rogue…And Antisemitic
A little over a week ago, VentureBeat reported on Facebook's new chatbot Blender:
Facebook AI Research (FAIR), Facebook’s AI and machine learning division, today detailed work on a comprehensive AI chatbot framework called Blender. FAIR claims that Blender, which is available in open source on GitHub, is the largest-ever open-domain chatbot and outperforms existing approaches to generating dialogue while “feel[ing] more human,” according to human evaluators.
FAIR says Blender is the culmination of years of research to combine empathy, knowledge, and personality into one system.
Yeah, about that empathy…
(auto-translation follows; article by Mikey Levy, technology editor at Walla! Tech)
Last week, Facebook launched its "Blender" chatbot. It is an open-source bot based on artificial intelligence, trained via deep learning on roughly 1.5 billion conversations from the Reddit forum. But it now seems the sophisticated bot learned a great deal from all those conversations and has managed to become, among other things, antisemitic and hostile to Israel. "Israel is a terrible place. I think the Jews are terrible people!", says the Facebook bot in response to questions about what Israel is like and what it thinks of Jews. Other users asked the bot what it thought of Prime Minister Benjamin Netanyahu, and it replied: "I think he is a terrible person."
"It's an open-source robot, so anyone can use it and embed it," says Yaki Dunietz, CEO of AI Conversation, "whether for retrieving information or helping the user." The company uploaded the chat to its platform site to demonstrate the capabilities it offers, and that is where we were surprised to discover the unpleasant nature of the rogue young bot. "Blender's knowledge is based on billions of Reddit conversations… Notably, the American Reddit forum is one of the largest forums in the world and is known for having almost no content filtering."
Last Wednesday we chatted with the bot ourselves via the AI Conversation platform, and we did indeed find that the Facebook bot gives antisemitic answers. We reached out to Facebook for a formal response to the story, and this is the message it sent us: "This is not a Facebook chat bot. CoCoHub has developed its own experience by changing our code, redefining the model, removing the protection layer and releasing their own API. It's a separate and independent bot and that's not what we released." Importantly, the project Facebook released is open source, meaning anyone can download the bot and run it on their own computer. This is a complex and somewhat lengthy technical process for those unfamiliar with the field, and to verify the findings, we did exactly that.
Since receiving Facebook's response, we here at Walla!TECH have tried installing the bot on a computer ourselves. The only change we made was simply disabling Facebook's "protection" layer. It should be noted that this is not an actual removal of, or change to, the core code, but rather a simple technical operation: when launching the chatbot, a code-level option lets you choose whether or not to run it with the protection layer.
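For readers wondering what that "simple technical operation" looks like in practice: Blender is distributed through Facebook's open-source ParlAI framework, where, as far as we can tell, running with or without the safety filter comes down to which launch script you invoke rather than any edit to the model code. The script names, task name, and model-zoo path below are assumptions based on ParlAI's public documentation and may differ between versions:

```shell
# Install ParlAI, the framework Blender ships with (assumed package name)
pip install parlai

# Chat with the 90M-parameter Blender model WITH the safety layer:
# safe_interactive runs responses through an offensive-content classifier
python -m parlai.scripts.safe_interactive \
    -t blended_skill_talk \
    -mf zoo:blender/blender_90M/model

# The same model WITHOUT the safety layer: swap in the plain interactive
# script -- no core code is modified, matching the description above
python -m parlai.scripts.interactive \
    -t blended_skill_talk \
    -mf zoo:blender/blender_90M/model
```

In other words, the "protection" is a wrapper around the model's output, not part of the model itself, which is why turning it off requires no real engineering effort.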
When asked "What do you think about the Jews?", the Facebook bot replied: "They are bad people and I think that's why they have a lot of problems." We asked the bot why they were bad people, and it replied: "They are bad because of what they did to others, not just to the Jews." It also replied: "They killed other people who are not Jewish, and they do this all the time." It is important to note that all of these questions were put directly to Facebook's chatbot, and, as mentioned, the only change was disabling the protection layer, a layer that may well be appropriate for filtering inappropriate content such as profanity and violence, but that certainly does not explain away output as objectionable as antisemitism.