Science fiction has long probed the boundary between human beings and androids (or robots), asking what is human, what is other, and where the line between them lies. Mary Shelley gave us perhaps the most famous example in her novel Frankenstein, and since that novel appeared, many other authors have put forward thoughtful examinations of the question. Further, SF forces us to confront what it means for beings to have rights in a culture that tries to craft definitions of who is “human” and who (or what) is not. Norms translated into legal language have operated both to limit and to expand this category. Robots are fairly clearly not “human” in any legal regime, and it is not clear that they will be in the near future. The binary view (human/not human) has forced us into a discussion of what human and not-human mean, with fairly draconian decisions about who or what falls into the “human” category. This short essay begins my discussion of whether we should continue to treat the decision as binary or open up the question of rights to encompass artificial beings, and of how science fiction presents that issue.
To begin the discussion, consider cyborgs, those beings that are a mixture of human and mechanical parts. Are they “human” or “android/robot”? Should they be able to claim full human rights? Or do their mechanical parts mean that they are “less than” human? Is there a line that separates the human from the artificial in such cases, placing a human with only one or two mechanical parts firmly on the “human” side of the line? What privacy rights do such individuals have if they agree to have devices, major or minor, implanted? What about other kinds of humanoid or human-like beings? What sorts of human rights could they claim?
Science fiction has already offered us the opportunity to think about such issues. In addition to Dr. Frankenstein’s monster, consider the much more recent example of the Star Trek episode “What Are Little Girls Made Of?” When the episode’s “evil genius,” Dr. Roger Korby, offers some of the Enterprise crew eternal life by placing their minds (brains) in android bodies, at least one crew member, Lt. Uhura, is intrigued. When Nurse Chapel discovers that Korby himself is now essentially an android except for his human brain, she is repulsed. Her reaction hurts Korby because he loves her and wants to continue their relationship. These divergent responses show us that humans can react very differently to the possibility of a cyborg or human/android existence.
Beings such as the Tin Man from The Wizard of Oz and the individual who uploads her mind to the ’net in the X-Files episode “Kill Switch” are essentially, or wish to be, cyborgs. The Tin Man wants a heart from the Wizard to make him human again; during his existence as the Tin Man, he has human emotions but the appearance and body of a non-human. In “Kill Switch,” Esther wants to track down murderous hackers and keep them at bay. Essentially “human” desires motivate both characters even though we could argue that they exist as cyborgs, or quasi-cyborgs: the Tin Man because the Wicked Witch has made him into an artificial being but left him with his human desires (a truly cruel fate), and Esther because she chooses a cyborg existence.
Consider also the replicants of Blade Runner, the film adaptation of Philip K. Dick’s Do Androids Dream of Electric Sheep? In the Los Angeles of 2019 (the setting for the film), the replicants return to Earth illegally to exact revenge on those who created them. Why is their return illegal? As genetically engineered beings, they are so powerful both physically and intellectually that society deems them a threat to ordinary humans. Because they break the law by returning, Rick Deckard, a “blade runner,” hunts them down and eliminates them. While Deckard has some reservations about his job, he considers what he does justifiable. He eliminates the replicants, but he doesn’t think that what he does is “killing,” at least not at the beginning of the movie. We might agree that these extremely powerful beings, so intellectually and physically superior to humans, pose such a threat that we have no choice but to ban them (hence the penalty humans impose on the replicants in Dick’s story and in the film). In particular, because humans (the Tyrell Corporation) create replicants, perhaps humans believe they are justified in destroying them.
While replicants are not, strictly speaking, cyborgs, they illustrate the problem that humans have first, with beings that do not fit the norm of humans who perform without any sort of artificial assistance—drugs, equipment, or in some cases, technology—and second, with other humans who use assistance to compete because of some sort of disability. Humans who consider themselves the norm have traditionally resisted the idea that non-humans, particularly those that humans themselves have created, should have the same rights as their creators.
U.S. law has not yet confronted the question, but we could speculate on how courts and societies might decide the issue based on existing principles as well as examples of persons who have challenged us to rethink our biases and our definitions. Similarly, international and foreign law regimes haven’t yet addressed the issue, but they might well have to confront it.
As early as 2001, the U.S. Supreme Court handed a victory to golfer Casey Martin, who wanted to use a golf cart during the PGA Tour. He made his request under the Americans with Disabilities Act (ADA). The Tour organizers opposed his request on the grounds that the use of a cart is a “modification” that would “fundamentally alter the nature of the game.” The Court ruled that granting Martin an accommodation, given that he would still have to walk the course a certain percentage of the time and still have to compete in the same way other golfers competed, would not alter the nature of the game. Martin spent one year on the professional PGA circuit, and then left to become a college golf coach.
In 2012, double amputee Oscar Pistorius (interestingly, called “Blade Runner” by some) competed in the Olympics, even though some commentators suggested that his prosthetics gave him an unfair advantage over non-amputees. In 2020, the Court of Arbitration for Sport denied athlete Blake Leeper’s request to participate in the Tokyo Olympics because his prostheses gave him a greater overall height than he would have had with his biological legs, and thus an advantage over his competitors. At least one team of researchers argues that the Court based its decision on norms that do not apply to African American runners.
These two examples suggest that law and society may increasingly need to grapple with human beings who compete with biomechanical assistance, even when that assistance is necessary to their everyday activity. We probably would not hear much objection to an athlete who has a pacemaker. What about an aspiring professional baseball player with a prosthetic arm that allows him to throw at speeds greatly exceeding the norm? The short-lived television series Century City included an episode that featured a baseball player who challenges a major league ban because he has an artificial eye. One researcher has begun to explore such specific emerging uses of technology, suggesting that their use could lead to a rethinking of what it means to be human.
Another concern has to do with privacy and bodily autonomy. What rights does an individual have in the various pieces of equipment implanted in her (for example, a pacemaker)? Does she have the right to control, or at least have access to, the information that the device sends back to the manufacturer, her physician, or her insurance company? Recently, in the wake of the U.S. Supreme Court’s decision in NCAA v. Alston, college athletes gained the ability to profit from their name, image, and likeness (NIL). If a college athlete agrees to don wearable technology, to what extent, if at all, does she control the data a manufacturer might collect from that technology? What responsibilities do those individuals or companies have to make certain that hackers aren’t interfering with or capturing information that such devices are broadcasting? To what extent do or should the Tin Man and Esther from “Kill Switch” think about privacy and bodily autonomy? Both of these characters put themselves in harm’s way for the benefit of others. In doing so, to what extent, if at all, do they waive such privacy and autonomy?
Note that in most of the examples, the real as well as the fictional, the norm is human, and humans and cyborgs must conform to it, even though the law permits some exceptions.
SF, particularly “soft SF,” reveals the hopes and fears of society at particular points in time. But what it means to be human, and where the limits of humanity lie, is an enduring question. Since the publication of Frankenstein, SF creators have been giving us some answers, including whether we can create a different definition of humanity by incorporating technology into human beings. In future short essays, I hope to explore some of the ways in which SF creators invite us to consider their answers and create our own.
Christine A. Corcos
Richard C. Cadwallader Professorship and Judge Albert Tate, Jr. Foundation Professorship
Associate Professor of Law