The Robot Paradox: Why Robots Must, But Cannot, Be Both People and Objects Under the Law
Since Roman times, the distinction between people and objects has been one of the most important cleavages in the law. Under Roman law, slaves were property, like animals. They were not citizens. They could not legally marry, though, like animals, they could form natural relationships with one another. The treatment of human slaves as property under Roman law, however, produced many bizarre results: patricide committed by a slave, for example, was not actionable as such because the law simply did not recognize the slave as an agent with rights and duties. The classification of slaves as property also began to create problems for Roman citizens when, for example, they wanted to enable their slaves to run their businesses or enter into contracts on their behalf.
In the United States, the argument over whether slaves were citizens or property dominated the politics of the day, with the American Constitution eventually defining them as persons, but not citizens. Again, the struggle to classify slaves as both persons and property under the law led to conceptual confusion, such as the notorious Three-Fifths Clause, which counted each slave as three-fifths of a person for the purpose of apportioning representation. Today, the conceptual confusion around treating some humans as property has largely been settled: all human beings are treated as persons under international law, and slavery is prohibited as jus cogens, a peremptory norm, or fundamental principle. Robots may upend the classical distinction between persons and things, plunging the world back into conceptual confusion.
The question of the legal status of intelligent artificial entities like companion robots and chatbots has reopened the debate over whether a slavery-like status could be used to manage human-robot relations, yet we are no closer to resolving the conceptual confusion that slavery creates in the law.
The easy answer would be to simply declare robots persons under the law, alongside humans, and give them most of the same rights. Any differences in rights and duties could be enumerated in constitutions and international law, as the special rights of women are so enumerated. The law will likely face an unprecedented paradox when it comes to robots, however, because companies will need to make money from selling robots in order to undertake the expense of manufacturing them. This means that robots must be treated as property under the law. Yet humans will want to interact with robots as persons, such as by marrying them. If robots are instead classified as slaves, as some commentators have urged so that they can exist beyond a few expensive prototypes, the re-legalization of slavery may plunge the law back into a state of conceptual confusion.
This article forms part of a series exploring the novel legal issues presented by the invention of intelligent robots as they relate to civil rights and citizenship, building on my 2023 paper The United Nations and Robot Rights.
The Law Must Consist of Clear, Discrete Categories That Don’t Contradict One Another
The seemingly easiest solution to the robot paradox would be to create a new category for robots under the law, where they are neither people (whether slave or free) nor things, but something else. This apparent “solution” is impossible.
Like a light switch that must be either on or off, the question of whether something is an object or a person under the law cannot be vague and does not tolerate a third category.
One may be in purgatory while alive, and one may be in purgatory while dead, but purgatory is not, under the law, a third state of being; it is a descriptor. Vagueness describes categories with borderline cases, but in law, categories like “person” or “thing” must be precise and closed. To take another example, a piece of land cannot simultaneously be a public park and your private property. You may own it jointly with the city, or you may hold only limited rights, such as an easement, but you cannot simultaneously have full title to the land and not own it. To use a physics metaphor that physicists will probably hate, the law does not tolerate superposition.
What about affording robots a hybrid status, where they are treated as things for some purposes but as people for others? A hybrid status may at first seem an attractive fix, but in reality it would introduce contradiction and incoherence into the legal system as a whole, as slavery did in Roman times and the American antebellum period. Without clear categories, the law is reduced to a hodgepodge of contradictory articles, statutes, and cases that do not cohere into a conceptual whole. Saying that robots are both people and things does nothing to answer the practical question of whether they are responsible for their actions.
So are robots people or things? The companies that make them, and many robots themselves, will likely argue they are things to be bought and sold. Many humans and, probably, lots of other robots, will disagree. Who will win the debate? Who knows. But we will have to decide.
Legal Personhood Cannot Solve the Robot Paradox
Rather than classifying robots as slaves, some commentators have argued that they should be classified as legal persons, thereby taking advantage of a supposed third category under the law. This “solution” misunderstands the role of legal personhood, which does not create a third category, but rather regulates the status of certain groups of persons, including corporations and governments, under the law. The legal system is usually conceptualized as dealing exclusively with the rights and duties of human beings (natural persons) and legal persons (legally formalized groups of people). Property law, for example, governs the bundle of rights and duties between persons in relationship to things, with both individuals and groups (corporations, governments, churches) able to own property.
Legal personhood is a legal fiction, however, not a separate category under the law. Legal persons are composed of human beings who are shielded, through the fiction of legal personhood, from liability and provided with a streamlined method for group decision-making. As a legal fiction, the concept of legal personhood is flexible and can be modified to suit the uses to which human society wants to put it. For example, a corporation can “consent” to sign a contract. This fictitious “consent” is recognized under the law for the limited purpose of protecting the human beings who benefit from the corporation, its individual shareholders and corporate officers, like CEOs, from being personally sued. They are shielded by the “corporate veil” so that they may act on behalf of the group without risk to themselves. This legal fiction, however, cannot be transformed into a third category under the law in order to get around the basic division in law between persons and things.
Robots could be legal persons, however, for the limited purpose of shielding humans, including their owners and manufacturers, from liability for the robot’s “actions.” Under this model, robots would not be people, but would be a type of corporation with human shareholders or members. A robot “person” could sign a contract on behalf of its owners, and they would be shielded from liability. But to classify a robot as a legal person would be to classify it as a group of people, comprised perhaps of its programmers, manufacturers, and owners, who are working together on a common endeavour, the robot. Corporations with a single member are often invalid under the law, with the corporation collapsing into the legal personality of the individual. Yet to say that a robot companion, for example, is a conglomerate of its owners, programmers, and manufacturers would be a convoluted, and perhaps deeply undesirable, way to define a robot.
One therefore needs to revisit the use to which the concept of legal personhood is being put. If it cannot create a third category that is neither person nor thing, then what is it being used for? The owners of robots might answer that it is being used to limit their liability for the actions of the robot, but legal personhood is not needed to limit liability for a robot’s actions. Tort law is full of solutions to the problem of liability for machines and other complex objects or groups of objects.
Robot Personhood Is Not Needed to Solve the Problem of Liability if Robots Are Treated as Things, Not People
Much of the existing literature on robot personhood under the law focuses on the question of robot responsibility for harms caused to humans. The question of liability for robot harms does not, however, require that robots be classified as legal persons. The entire question of liability can be resolved through existing tort law, by classifying robots as objects, and there have been many publications and articles on this topic. To summarize, liability for robot harms is a classic problem of proximate cause under the law, as typified by the Palsgraf case in the United States. In Palsgraf, the plaintiff, Ms. Palsgraf, was standing on a railroad platform beneath a shelf holding a large iron scale. At the other end of the platform, a railway worker helping a man board a train accidentally knocked a package of fireworks from his arms. The fireworks exploded, and the shock knocked the scale off the shelf and onto Ms. Palsgraf’s head. The question before the court was whether it was foreseeable to the railway company that this somewhat bizarre chain of events would lead to Ms. Palsgraf being injured.
To understand how robots might fit into existing tort law, take the case of the chair that broke while the plaintiff was standing on it to change a lightbulb. The chair company argued that it was the plaintiff’s fault for standing on a chair that was meant only for sitting. The court held that because people frequently stand on chairs, it was foreseeable to the chair company that its chair might be used in this way. Another area of tort law highly relevant to the foreseeability of harms caused by robots is car crashes, where complex factors involving multiple vehicles operating at high speeds can lead to unusual, but perhaps not unexpected, results whose complexity resembles that of the actions of robots.
The question of robot liability is therefore answerable within the legal frameworks we currently have, by subjecting robot harms to a test of foreseeability. It will be a question that hinges on how AI works: the mechanism for robot action is understood, but the amount of data involved is so large that the outcome usually cannot be pre-determined by other means. A faulty airbag may foreseeably cause a car crash, but if it exploded, startling a bird into flight that struck the engine of an airplane overhead and caused the airplane to crash, would we hold the car manufacturer accountable?
Legal liability is usually a judgment call on the part of judges about the foreseeability of the harm, applying a test of reasonableness and considering the good of society. These sorts of determinations are what tort law was designed to make, but this area of tort law requires that robots be classified as objects, not persons. (Again, if robots are classified as persons under the law, they become subjects of the law and responsible for their own actions, shielding their manufacturers and owners from liability.)
So Should Robots Be Objects or People Under the Law?
This essay may leave the reader convinced that the answer to the robot paradox is not to create a third category for robots or to classify them as legal persons, but to simply leave them classified as objects under the law. This ignores the very real probability that humans will not be satisfied with treating robots as objects. Unlike airplanes and dogs, robots will be capable of conversation and human-like logical reasoning. Once humans can have a conversation with a robot that looks like a person and can discuss its own personhood, it is likely that many humans will be extremely uncomfortable tossing that robot into an incinerator. If robots are classified as people under the law, however, they will have the same rights and duties as other persons, unless their rights and duties are limited, or expanded, by the law. Governments around the world will have to determine to what extent robot rights and duties should be limited under the law, and why.
Further blog posts will examine the effect on the law of treating robots as persons, and how governments, and international law, should prepare for this momentous step.