A Monthly Column for Word Lovers
"Humans and Robots Do Not Communicate Well"
This month's title is taken from the abstract of a talk given in October at the University of Colorado (Boulder) by Professor Dan Szafir. To provide a bit more context:
Robots hold significant promise in benefitting society by supporting a wide variety of human activities across several critical domains, including manufacturing, construction, healthcare, and space exploration. Despite this promise, robots have yet to achieve widespread use and in practice are still quite limited because robots remain extremely difficult for people to work with. A primary source of these difficulties is that humans and robots do not communicate well; people often find robots incomprehensible and have difficulties understanding what a robot can or will do, while robots lack computational models for reasoning about complex human behaviors.
Humans instinctively want to create robots in a Godlike manner — that is to say, in our own image — and this is surely one of the reasons that we do not communicate so well with them. Just as humans often fail in their putative communication with God, because he (or she) possesses attributes not shared with us, we have a hard time communicating with robots because we possess an attribute — natural language, the end result of millennia of evolution — that we can impart to robots only piecemeal, imperfectly, and in a way that can never work for them the way it works for us.
Let's consider the dream scenario first: robots with a language production and/or comprehension capacity like that of humans. This scenario comes to us from the genre of science fiction, a species of fantasy. Where robots are concerned, science fiction really is just fantasy, because there's nothing scientific about the robots we find there. Consider, for example, R2-D2, the "droid" of the Star Wars franchise. It doesn't normally use human speech, but it responds to it adroitly, seeming to recognize clipped, informal speech without missing a beat.

A more recent example is the robot TARS in "Interstellar," who is not modeled on an anatomical human at all and yet speaks in a human voice, complete with emotional nuance. Think of TARS bantering with Matthew McConaughey, both at their histrionic best.
The current reality, however, is that a dog or cat may well respond to your natural language in a more appropriate way than a robot does. Why is that?
Consider that we share eons of evolutionary development with our four-footed friends. And though their branches of the tree of life did not result in the capacity to communicate with a complex symbolic system like language, they do have features in common with us — a brain, a spine, a central nervous system, and sense organs comparable in every way to our own — that enable a large amount of communication based on instinct and conditioning. Animals may not understand symbols and grammar (the core distinguishing features of linguistic communication), but they can understand icons and indexes, two kinds of signs by which a great deal of communication can be accomplished. Imparting to robots the sensory capabilities that enable this sort of understanding is a complex task: it evolved naturally in biological organisms but can be replicated only imperfectly in created intelligences.

The feature of complexity in human language that takes it far beyond the capacity of other vertebrates to use and understand is grammar. But grammar is not a merely intellectual construct arising from our ability to think; it arises from our experience as creatures with bodies. I touched on this aspect of language in a column a few years ago, in which I quoted at length the seminal thinker Ronald Langacker:
The mental world we construct is grounded in our experience as creatures with bodies who engage in motor and sensory interactions (embodiment). In constructing it, we transcend direct experience through abstraction, conceptual integration, and subjectification... Many grammatical notions are subjective counterparts of basic aspects of everyday experience. Grammar reflects the means of disengagement through which we transcend immediate experience and construct our mental world. (from his book Cognitive Grammar, 2008).
Computational models of language are not likely ever to be capable of imparting this aspect of evolutionary complexity to the computers that drive robots. A far more likely scenario for sophisticated robots of the future is one in which they communicate successfully with each other using their own homegrown system of communication. Robots that humans create, or that robots themselves eventually create, will always be more like each other than they will ever be like humans, so it is reasonable that they should develop their own system of communication, rather than being lumbered with the one that humans have spent eons developing for their own exclusive use.
The dystopian risk that arises from this scenario is that robots will develop their own highly complex system of intramural communication that is incomprehensible to humans, and that they will use this as a starting point to eventually dominate and destroy us. There are already alarming examples of this. One is described in the new book LikeWar: The Weaponization of Social Media, by P. W. Singer and Emerson T. Brooking: chatbots — automated programs that mimic human users online — proliferate and spread disinformation that "goes viral," with the aim of persuading the human public of the veracity of some falsehood.
The middle ground is surely to develop technologies and techniques that enable robots to convey their behavior and intentions to people in intuitive ways, while also enabling robots to understand various forms of natural human input. Nothing is more intuitive and natural to humans than communicating via language, and specifically, via speech. Language must be central to developing a more robust channel for human-robot communication.
It might seem that it would be easier to train humans in the inherent limitations of robotic communication than to train robots in the sophistication of human communication. Claire Bonial, a computational linguist at the Army Research Lab, noted in one of her recent talks that this would be severely limiting. Humans who need to talk to robots are often in situations of great stress, such as combat or rescue, and in such situations, having to confine communication to a small range of learned commands that a robot might understand is a handicap. It is also not cost-effective to develop very expensive robots if they can be operated only by a small group of very expensive humans who have undergone extensive training.
Humans find it pretty hard to resist communicating in language: no other system of communication comes close to it in complexity and economy, and yet it comes naturally to us. So robots will eventually have to meet us at least halfway: understanding much of what we say, and being adept at getting clarification when they need it, much as non-native speakers do when what you say to them flies over their heads.