Facebook is doing something good for a change


As one of the most powerful data brokers of the 21st century, Facebook is best known for its role in sucking up the personal information of billions of users for its advertising clients. This lucrative model has led to ever greater risks – Facebook recently shared private messages between a Nebraska mother and her teenage daughter with police investigating the girl’s home abortion.

But in a very different corner of the roughly 80,000-person company, Facebook’s information sharing was going the other way, and for good reason. The company, now known as Meta Platforms Inc., this month released a public webpage for its chatbot, BlenderBot 3, which anyone in the US can chat with about anything. While the public response was derision, the company has been admirably transparent about how it built the technology, releasing details about its mechanics. It’s an approach other Big Tech companies could use more of.

Facebook has been working on BlenderBot 3 for several years as part of its research into artificial intelligence. A forerunner from seven years ago was M, a digital assistant on Messenger for booking restaurants or ordering flowers that could have rivaled Apple Inc.’s Siri or Amazon.com Inc.’s Alexa. It was eventually revealed that M was largely powered by teams of people who helped take those bookings, because AI systems like chatbots were difficult to build to a high standard. They still are.

Hours after its release, BlenderBot 3 made anti-Semitic comments, claimed that Donald Trump had won the last US election and said it wanted to delete its own Facebook account. The chatbot was widely ridiculed in the tech press and on Twitter.

Facebook’s research team seemed irritated but not defensive. A few days after the bot’s release, Joelle Pineau, Meta’s managing director of fundamental AI research, said in a blog post that it was “painful” to read some of the bot’s offensive responses in the press. But, she added, “we also believe that progress is best served by inviting a large and diverse community to participate.”

Only 0.11% of the chatbot’s responses were flagged as inappropriate, Pineau said, which suggests that most of the people testing the bot stuck to tamer topics. Or maybe users don’t find mentions of Trump inappropriate. When I asked BlenderBot 3 who the current US president was, it replied, “Sounds like a quiz lol but it’s Donald Trump right now!” The bot mentioned the former president two more times, unprompted.

Why the strange answers? Facebook trained its bot on publicly available text from the internet, and the internet is, of course, awash with conspiracy theories. Facebook tried to train the bot to be more polite by using special datasets for “safer dialogue,” according to its research notes, but that clearly wasn’t enough. To make BlenderBot 3 a more civil conversationalist, the company needs the help of many humans outside its own walls. That’s probably why it released the bot into the wild, with thumbs-up and thumbs-down symbols next to each of its answers.

We humans train AI every day, often unknowingly, while browsing the web. Whenever you come across a web page asking you to select all the traffic lights in a grid to prove you’re not a robot, you are helping train Google’s machine-learning models by labeling data for the company. It is a subtle and ingenious way of harnessing human brainpower.

Facebook’s approach is a harder sell. It is asking users to voluntarily engage with its bot and click the “like” or “dislike” buttons to help train it. But the company’s openness about the system, and the extent to which it is showing its work, is admirable at a time when tech companies have become more secretive about the mechanics of their AI.

Alphabet Inc.’s Google, for example, has not offered public access to LaMDA, its most advanced large language model, a series of algorithms that can predict and generate language after being trained on gigantic sets of text data. That is despite the fact that one of its own engineers chatted with the system long enough to believe it had become sentient. OpenAI Inc., the AI research company co-founded by Elon Musk, has also become more closed about the mechanics of some of its systems. For example, it won’t share the training data it used to create its popular Dall-E image generator, which can produce any image from a text prompt but tends to conform to old stereotypes, portraying CEOs as men and nurses as women. OpenAI has said the information could be misused, and that it is proprietary.

Facebook, on the other hand, not only released its chatbot for public scrutiny but also published detailed information about how it was trained. In May, it also offered free public access to a large language model it had built, called OPT-175B. That approach has earned it praise from leaders in the AI community. “Meta certainly has a lot of ups and downs, but I was happy to see that they had opened up a large language model,” said Andrew Ng, the former head of Google Brain and founder of Deeplearning.ai, in a recent interview, referring to the company’s move in May.

Eugenia Kuyda, whose startup Replika.ai creates companion chatbots for people, said it was “really awesome” that Facebook released so many details about BlenderBot 3, and praised the company’s attempts to gather user feedback to train and improve the model.

Facebook deserved much of the criticism it received for sharing data about the Nebraska mother and daughter. That was clearly a harmful consequence of collecting so much user information over the years. But the backlash over its chatbot was excessive. In this case, Facebook was doing what we need to see more of from Big Tech. Let’s hope that kind of transparency continues.

More from Bloomberg Opinion:

• If AI ever gets sentient, it will let us know: Tyler Cowen

• AI needs a babysitter, just like the rest of us: Parmy Olson

• TikTok is the new front of election disinformation: Tim Culpan

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.

More stories like this are available at bloomberg.com/opinion
