THE NOTION OF machines rising up against their creators is a staple of science fiction and of breathless news coverage. That helps explain the lurid headlines in recent days describing how Facebook AI researchers in a “panic” were “forced” to “kill” their “creepy” bots that had started speaking in their own language.
That’s not quite what happened. A Facebook experiment did produce simple bots that chattered in garbled sentences, but they weren’t alarming, surprising, or very intelligent. Nobody at the social network’s AI lab panicked, and you shouldn’t either. But the errant media coverage may not bode well for our future. As machine learning and artificial intelligence become more pervasive and influential, it’s crucial to understand the potential and the reality of these technologies. That’s particularly true as algorithms come to play a central role in war, criminal justice, and labor markets.
Here’s what really happened in Facebook’s AI research lab. Researchers set out to make chatbots that could negotiate with people. Their thinking: Negotiation and cooperation will be necessary for bots to work more closely with humans. They started small, with a simple game in which two players were told to divide a collection of objects, such as hats, balls, and books, between themselves.
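In the published experiment, each bot privately valued the items differently, so the two sides had to talk their way to a split that worked for both. As a minimal sketch of that game state, with hypothetical names and toy values rather than anything from Facebook’s code, it might look like this:

```python
import random
from dataclasses import dataclass

@dataclass
class NegotiationGame:
    """One round of the division game: two agents must agree on how to
    split a shared pool of items that each values differently."""
    counts: dict[str, int]        # items on the table, e.g. {"hat": 2, ...}
    values: list[dict[str, int]]  # values[i][item] = agent i's private value

def new_game() -> NegotiationGame:
    items = ["book", "hat", "ball"]
    counts = {item: random.randint(1, 3) for item in items}
    # Each agent gets its own secret valuation, so the bots have to
    # negotiate to discover a split that satisfies both sides.
    values = [{item: random.randint(0, 10) for item in items} for _ in range(2)]
    return NegotiationGame(counts, values)

def payoff(game: NegotiationGame, agent: int, share: dict[str, int]) -> int:
    """An agent's score is the total value of the items it walks away with."""
    return sum(game.values[agent][item] * n for item, n in share.items())
```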
The team taught its bots to play this game in two steps. First, the researchers fed the system dialog from thousands of games between humans, giving it a sense of the language of negotiation. Then they let the bots hone their tactics through trial and error, using a technique called reinforcement learning, the same approach that helped Google’s Go bot AlphaGo defeat champion players.
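In outline, that recipe is a standard one: first maximize the likelihood of human dialog, then fine-tune with a policy-gradient method such as REINFORCE, rewarding whatever utterances led to a good deal. Here is a schematic sketch of both steps; the bigram “model,” the vocabulary size, and the score_deal reward hook are stand-ins for illustration, not Facebook’s architecture:

```python
import torch
import torch.nn.functional as F

VOCAB = 32  # toy vocabulary of dialog tokens (stand-in for real words)
# A bigram table is the smallest possible "language model": row t holds
# the logits for whichever token follows token t. The real bots used
# recurrent networks; this stand-in keeps the sketch self-contained.
logits_table = torch.zeros(VOCAB, VOCAB, requires_grad=True)
optimizer = torch.optim.SGD([logits_table], lr=0.1)

def supervised_step(dialog: torch.Tensor) -> None:
    """Step 1: imitate humans -- maximize the likelihood of each next
    token in a recorded human-human negotiation."""
    logits = logits_table[dialog[:-1]]      # predict token t+1 from token t
    loss = F.cross_entropy(logits, dialog[1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def reinforce_step(score_deal, length: int = 10) -> None:
    """Step 2: trial and error -- sample a dialog, score the deal it
    produced, and reinforce the sampled tokens accordingly (REINFORCE)."""
    token = torch.tensor(0)                 # start-of-dialog token
    tokens, log_probs = [], []
    for _ in range(length):
        dist = torch.distributions.Categorical(logits=logits_table[token])
        token = dist.sample()
        tokens.append(int(token))
        log_probs.append(dist.log_prob(token))
    reward = score_deal(tokens)             # e.g. value of the items won
    # Scale the sampled tokens' log-probabilities by the reward: good
    # deals make those utterances more likely next time.
    loss = -reward * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note what the reward in step two measures: the value of the deal, not how human the language sounds. Nothing in that objective anchors the bots to English, which is exactly the drift the researchers went on to observe.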
When two bots using reinforcement learning played each other, they stopped using recognizable sentences. Or, as Facebook’s researchers drily describe it in their technical paper, “We found that updating the parameters of both agents led to divergence from human language.” One memorable exchange went like this: