Please Don't Kick the Robots
If you follow the futures blogosphere at all -- or just read BoingBoing -- you've undoubtedly seen this video of the "packbot" called Big Dog:
It's an interesting prototype, and a telling example of how rapidly we're moving into the robotic age. The use of four legs for mobility gives it a particularly sci-fi appearance -- as if, at any moment, a tiny flying drone could show up and wrap a cable around its legs. Its walking pattern is distinctly mechanical, except under one condition: when it's in trouble, it scrambles its legs to stay upright in an eerily animal-like way. I found Big Dog's efforts to recover from slipping on the ice fascinating. But I had a somewhat different reaction to its efforts to recover from being kicked: I felt a bit sick.
My reaction to seeing this robot kicked paralleled the one I would have had to a video of a pack mule or a real big dog being kicked like that, and (from anecdotal conversations) I know I'm not the only one with that kind of immediate response. True, the feeling of shock wasn't nearly as strong as it would have been with a real animal, but it was definitely of the same character. It simply felt wrong.
I had a similar reaction when I learned that the "Pleo" robot dinosaur toy cries out in apparent distress when picked up by the tail.
Pleo is also capable of getting upset -- when you hold him upside down by his tail, Pleo lets out a panicky wail until you put him down on his feet. This is where the emotional pull of Pleo -- not in him, but in you -- is apparent, because once placed safely on a flat surface, Pleo knows how to lay a guilt trip. Like a dog that has just been beaten, Pleo's tail trembles and goes down between his legs, all while he hangs his head and makes noises like a baby dinosaur sobbing. Oh, Herbert, I never meant to hold you upside down all those times. Please forgive me.
Like the author of the review above, I find that my immediate, gut response mirrors what I would feel for a living animal. Intellectually, I know that Pleo is a simple machine without any actual sense of pain or fear; emotionally, it's horrifying.
This response is, at least to an extent, hard-wired -- most of us react to the sight of an animal in distress with empathy for the creature and, if applicable, disgust for the person abusing it. Psychologists have long recognized that humans who lack this empathy for non-human animals are more likely to be abusive to other people. The behaviors of these robots -- the scrambling legs, the desperate cries -- mirror real animal behavior closely enough, at least for some of us, to elicit the same kind of empathy.
Some of this "mirror empathy" comes from the robots being biomorphic, that is, having animal-like appearances. Even if a Roomba let out panicky squeaks and flashed its lights when turned upside-down, for example, few of us would react as we would to seeing a turtle on its back. There's no biomorphism to the Roomba. And that's probably a good thing. After all, it's trying to carry out a particular task efficiently, and it probably wouldn't work as well if people constantly picked it up because it was so cute.
It strikes me that a split is likely in the near-term evolution of human-environment robots. Some robots, those meant to interact with humans on a regular basis, will likely take on stronger biomorphic appearances and behaviors, usually in order to deter abusive treatment. A small number of robots, intended to provide emotional support to the injured or depressed, may take on human-like appearances. Other robots, meant to work more-or-less out of sight, will probably take on more camouflaged appearances, trying to avoid being noticed.
Note the "usually" above. I would expect some human-interactive robots to be designed with biomorphic cues meant to elicit a response other than empathy. Fear, for example: a robot that triggers our deeply-rooted responses to (say) spiders or snakes might be a better tool for the police or military than one that makes people think of puppies or ponies. Such a design wouldn't necessarily undermine its interactions with its own military or police units; we know that soldiers already form strong emotional attachments to completely non-biomorphic, remote-control robots.
I don't think it's likely that we'll stop having these kinds of emotional reactions to biomorphic (in appearance and/or behavior) robots. I think it's rather healthy that we do, actually. For one, it's an indicator that our sense of empathy remains strong and sensitive, and that seems quite a good thing. Another reason, however, is a bit more speculative. At some point, whether in the next decade or next century, we're likely to develop robots that really won't like being kicked. I'd rather not have them start to want to kick back.