Right now, research on AI and its practical applications is incredibly productive.
The potential for imminent benefits to humanity cannot be overstated.
Rightly, we should also be concerned about the possible dangers that arise when AI systems begin to use their power in ways their human creators did not intend.
The Media Spin
In this context, the majority of media outlets feasted on a bizarre, unintended consequence of a Facebook experiment:
Over time, the bots began to deviate from the scripted norms and in doing so, started communicating in an entirely new language — one they created without human input.
While Facebook did shut down the experiment, it most likely did so because the chatbots had failed to keep communicating in proper English, not out of fear of a doomsday scenario.
Negotiation Wizards
The interesting part of this experiment, as explained in this excellent article, is that the AI bots got REALLY good at negotiating.
In any case, the obsession with bots "inventing a new language" misses the most notable part of the research: when taught to behave like humans, the bots learned to lie, even though the researchers never trained them to use that negotiating tactic.
Whether that says more about human behavior (and how comfortable we are with lying) or about the state of AI is for you to decide. But it's worth thinking about far more than why the bots failed to grasp the nuances of English grammar.