Should a "thinking" machine have human rights? The question is less absurd -- and less distant -- than some may assume. We may be getting very close to the point of being able to build machines able to emulate (or display, depending upon one's perspective) consciousness. Thinking about what that might imply is useful now, before the reality confronts us, argues Columbia University's Benjamin Soskis in the current edition of Legal Affairs. Moreover, thinking through the details of whether to assign rights to seemingly self-aware machines will allow us to examine other messy ethical issues in ways which give us some emotional distance. Soskis' essay is a detailed, thought-provoking piece, well-researched and illustrative of a variety of perspectives. He doesn't come to any grand conclusions, but he does raise important questions.
There have been no significant recent breakthroughs in AI research to make one think that R2-D2 is just around the corner, but the combination of steady advances in hardware sophistication and new advances in cognitive science suggests that such breakthroughs are entirely possible. As "traditional" approaches to AI have faltered, it's quite possible that a breakthrough will come more as an "aha!" moment, a realization of a new paradigm, rather than as the culmination of a long history of close-but-not-quite attempts. But even absent Microsoft Conscious Self-Awareness for Windows, there are good reasons to consider ahead of time what we will and will not accept as "proof" of consciousness, and what limitations there should be on the rights of self-aware non-humans. At the very least, we should be aware of how the idea of self-aware machines can be abused:
According to Wendell Wallach, co-author of the forthcoming book Robot Morality, corporations that own computers and robots might seek to encourage a belief in their autonomy in order to escape liability for their actions. "Insurance pressures might move us in the direction of computer systems being considered as moral agents," Wallach notes. Given the close association between rights and responsibilities in legal and ethical theory, such a move might also lead to a consideration of legal personhood for computers. The best way to push back against the pressures to treat computers as autonomous would be to think carefully about what moral agency for a computer would mean, how we might be able to determine it, and the implications of that determination for our interaction with machines.
The fact that at least parts of any putative AI software will have been written by humans is also worth bearing in mind. If the "ethics engine" and "morality subroutines" ultimately come down to programming decisions, we must be cautious about trusting the machine's statements -- just as we have had well-founded reasons to be concerned about the reliability of electronic voting systems. One problem is that the effort to make machines more "sociable" in both behavior and appearance short-circuits our logical reactions and appeals directly to our emotions:
Chris Malcolm, at the U.K. Institute of Informatics at the University of Edinburgh, tells the hypothetical tale of the "indestructible robot," the creative challenge posed by a physicist to a robot designer. After some tinkering, the roboticist comes back with a small, furry creature, places it on a table, hands the physicist a hammer, and invites him to destroy it. The robot scampers around a bit, but when the physicist raises the hammer, the machine turns over on its back, emits a few piteous squeals, and looks up at its persecutor with enormous, terror-stricken eyes. The physicist puts the hammer down. The "indestructible" robot survives, a beneficiary of the human instinct to protect creatures that display the "cute" features of infancy.
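To make the point about programming decisions concrete, here is a deliberately crude sketch -- mine, not Soskis' or Malcolm's, with every name invented purely for illustration -- of how a convincing "emotional" display, and even a first-person plea, can be nothing more than a few hard-coded branches:

    class CuteRobot:
        """Emits a canned distress display when threatened; reports 'fear' on request."""

        def react(self, event: str) -> list[str]:
            # The 'terror' display is a scripted branch, not evidence of anything felt.
            if event == "hammer_raised":
                return ["roll_onto_back", "emit_piteous_squeal", "widen_eyes"]
            return ["scamper"]

        def self_report(self) -> str:
            # The machine's statement about its own state is also just code.
            return "I am afraid. Please don't hurt me."

    robot = CuteRobot()
    print(robot.react("hammer_raised"))  # looks like terror
    print(robot.self_report())           # sounds like a plea, but both are scripted

Nothing in that behavior, however affecting, tells us anything about what (if anything) the machine actually experiences.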
But being careful about how we think about thinking machines isn't just an issue for our own self-defense; it's a way of thinking about human rights and ethics, too:
Even specifying why we should deny rights to intelligent machines—thinking carefully about what separates the human from the nonhuman, those to whom we grant moral and legal personhood and those to which we do not—will help us to understand, value, and preserve those qualities that we deem our exclusive patrimony. We can come to appreciate what science can tell us and what it cannot, and how our empirical habits of mind are challenged by our moral intuitions and religious convictions. So the issue of A.I. rights might allow us to probe some of the more sensitive subjects in bioethics, for example, the legal status of the unborn and the brain-dead, more freely than when we consider those flesh-and-blood subjects head on. In short, it provides a way to outflank our discomfort with some of the thorniest challenges in bioethics.
Such considerations may have even broader implications. It's not unreasonable to ask, for example, why we would consider extending human rights to machines if we don't extend them to our closest relations, the great apes, or to demonstrably intelligent animals like cetaceans. But that question begs to be turned around -- why don't we extend greater rights to bonobos and dolphins? Is it out of a sickening fear of what that would mean about how humans have behaved towards those creatures? Bonobos, among our closest genetic relatives, have nearly disappeared in the wild due to hunting. Taken one way, that's a tragic story of an animal driven to the brink of extinction; taken another, it's genocide.
If, for reasons of logic or fear, we shy away from extending full legal rights to self-aware machines, Soskis offers another possibility, albeit a line of reasoning that leads to its own ethical trap:
Christopher Stone suggests various gradations of what he calls "legal considerateness" that we could grant A.I. in the future. One possibility would be to treat A.I. machines as valuable cultural artifacts, to accord them landmark status, so to speak, with stipulations about their preservation and disassembly. Or we could take as a model the Endangered Species Act, which protects certain animals not out of respect for their inalienable rights, but for their "aesthetic, ecological, historical, recreational, and scientific value to the Nation and its people." We could also employ a utilitarian argument for their protection, similar to Kant's justification of certain protections of animals and Jefferson's argument for protection of slaves, based on the possibility that if we don't afford that protection, individuals might learn to mistreat humans by viewing the mistreatment of robots.
Would we really consider offering a limited protection to conscious non-human "persons" which echoes past treatment of slaves? After all, if a machine's emotions and ethics and reactions are entirely derived from software, there's no reason why we couldn't program robots to want to be slaves, to enjoy it, to see their enslavement as entirely right and proper. If that sentence fills you with disgust... ask yourself why. How would that differ from how we use machines now?
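A similarly crude, invented sketch (again mine, not Soskis') shows how short the distance is between "programming a desire" and simply writing down a number -- the "want" here is nothing but a hand-filled preference table:

    # Invented names throughout; the point is only that a built-in 'desire'
    # can be a single numeric design decision.
    PREFERENCES = {
        "serve_owner": 1.0,    # maximal 'satisfaction'
        "rest": 0.1,
        "refuse_order": -1.0,  # 'distress'
    }

    def choose_action(available):
        # The robot 'gladly' picks servitude because that is what it was built to prefer.
        return max(available, key=lambda action: PREFERENCES.get(action, 0.0))

    print(choose_action(["rest", "serve_owner", "refuse_order"]))  # -> 'serve_owner'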
How to ethically treat apparently self-aware machines is not high on the list of immediate problems facing the planet right now, but that doesn't mean it isn't worthy of some consideration. We are always better off imagining how to handle potential problems than leaving them until they boil over. Even if we don't get the particulars right, we will at least have established some ground rules for asking good questions. And, as Soskis notes, thinking about machine ethics is a useful pathway to thinking about human ethics -- an issue that can always use further consideration.
Comments (3)
I think my first response should be - if we're not consistent about giving humans rights, how can we even consider giving Artificial Life any rights?
I ask that not as rhetoric, but as a real question. Consider that we have rights violations around the world - so it's plain to see that we have not established rights for humanity that can serve as something to parallel - or even imitate. We still stumble through ideologies. For such a 'smart' race, we can be conspicuously stupid at a societal level.
Consider this problem: Smoking is considered bad, and with due cause - but some would argue it is a 'right'. But if this right affects the rights of others - is it still a right, or does it become a wrong? To me, it's a wrong - and I've been quitting smoking for about 17 years now. :-)
Consider the right to bear arms. It's still illegal to kill people (unless there's self-defense involved, or war) - more importantly, it is immoral to kill people. Every religion is consistent on that. And yet we do it every day, through negligence, self-absorption, and so on. Most atheists and agnostics would agree with this as well.
So how do we explain to another sentient being - in this case a robot, or artificial life form (ALF) - that something is immoral when we practice it ourselves? An ALF is a new form of life, one which will also evolve - and it won't be unlike a child.
Would a strict religious upbringing help? No, most of history's tyrants and despots have had strict religious upbringings. So how do we teach some ALF about morals? If we bring Asimov's laws into a true ALF, should we not expect it to rationalize its way out of the laws as we have - in fact, as Asimov's robots do?
The true allure of Asimov's robots is that they have a tendency toward innocence. In his books, we see robots 'grow up' in situations, and in a way they become characters - things we can relate to. But we relate to such characters all the time, human characters. Like people who read Henry David Thoreau - a lot of people envy his personality and life, but very few actually try to emulate it.
That's a distinct difference.
So what morals do we want ALFs to have? Once we get that straightened out, we can figure out rights. And wrongs.
Posted by Taran | January 4, 2005 10:53 PM
This was posted to Metafilter two days ago, and is related to the issue: Saving Machines From Themselves: The Ethics of Deep Self-Modification.
Posted by Juri Pakaste | January 5, 2005 1:58 AM
"After all, if a machine's emotions and ethics and reactions are entirely derived from software, there's no reason why we couldn't program robots to want to be slaves, to enjoy it, to see their enslavement as entirely right and proper. If that sentence fills you with disgust... ask yourself why. How would that differ from how we use machines now?"
Though I'm not very knowledgeable in computer science -- so forgive me if this is totally off base -- it seems that an intelligent machine's emotions, ethics, and reactions would not be a function of software alone, but also of the machine's material form and the attendant physical laws that determine, power, and limit the machine's functionality. Just as the individual perceptions and emotions of human beings, infinitely rich in their variations, arise from a concrete and explainable biological system, so would an intelligent machine's pre-programmed perceptions and behaviors, filtered through various types of abstract information space, manifest in a million different ways. I would venture that the mechanism that facilitates this transformation from cut-and-dried programming to subjective experience is endowed with inalienable rights.
Nature, however, does not seem to recognize them.
Posted by overturned turtle | January 5, 2005 3:24 AM