Prompt: “A cartoon of a robot dog fighting a real dog who looks scared.”
How will courts apply existing laws to the transformative and unpredictable technology that is AI? What new laws may be required of legislators? AI has been transforming civil procedure for years, first by allowing for the rapid and economical review of documents and, more recently (and, arguably, less successfully), their drafting. This blog post will begin to explore the ways in which AI may require changes not just to legal processes and procedures, but to the substance of the law. New technologies always require novel applications of existing law, and sometimes require new laws, but for reasons discussed in this blog, AI may necessitate a far more thorough rethinking of the law than any technology previously developed in our lifetimes.
This blog post will discuss some of the existing laws that may be used by courts and legislatures to regulate AI, while also exploring the usefulness of novel and underused frameworks that may come to play an important role: legal and moral personhood. As the European Parliament pointed out in its resolution of 16 February 2017 in paragraph AC, “the autonomy of (intelligent) robots raises the question of their nature in the light of the existing legal categories or whether a new category should be created, with its own specific features and implications …”
Both courts and legislators have already adjusted existing laws to accommodate the challenges the internet poses to society, including in the areas of tort, free speech, contract, and privacy law. It may at first appear easiest and simplest to regulate AI under one or more of these existing frameworks, as laid out in this article from RAND. Law professors Catherine Sharkey and Bryan Choi recently discussed on the Lawfare podcast the use of tort, contract, and copyright law to regulate AI. Their arguments for applying product liability law to generative AI are persuasive and may prove useful in cases where AI has caused harm to humans.
Yet AI, particularly when combined with robotics, may develop to be profoundly different from legacy technology in ways that make existing laws and frameworks difficult to apply, even with modifications. AI scientists have begun to use the word agentic to describe the unpredictability of a near-future AI that acts autonomously, or with “minimal human intervention or oversight, to achieve a goal,” as discussed here in a White Paper from the World Economic Forum and here, in a paper from the Centre for the Governance of AI. The concept of agency is complex, as discussed in this blog post from philosophers Robert Long and Kathleen Finlinson. Agency, in the sense of having the capacity to influence an outcome, is not defined anywhere in the law; the legal definition of agency is, instead, the relationship in which an agent acts on behalf of someone else. General intelligence, another concept important to AI scientists and engineers, is also not defined under the law. Precise and agreed-upon terms are necessary in the law; without them, judges, juries, legislators, and regulators will argue and disagree, leading to unpredictability and unjust results.
Standing on the Shoulders of Copyright?
The current regulatory framework for the internet is heavily based on litigation applying copyright law, which makes copyright attractive for AI regulation because a large body of law already exists. It’s not clear, however, that copyright law will apply to AI. Web-based services are classified as “platforms” under Section 230 of the Communications Decency Act, so many areas of law, such as torts, often do not apply (though this has begun to change as platforms have been held liable in some jurisdictions, as discussed in this 2023 article by Xinyu Hua and Kathryn Spier). Experts have called Section 230 the “26 words that created the internet” because it shields web-based companies from many kinds of tort liability, which has allowed for the tech industry’s explosive growth and profits. Platforms, however, are liable for copyright infringement, which has emerged as one of the most successful ways to curb the wild west of the internet. With AI poised to become the Next Big Thing in lawsuits, copyright is again emerging as one of the primary battlegrounds. WIRED is tracking every copyright lawsuit for us, so we can keep up with the avalanche.
Yet, as experts like Micaela Mantegna have argued, copyright is a poor tool for regulating generative AI, which does not so much copy existing works as synthesize previous creative outputs to produce something new. It is hard to see how this process is functionally different from the way human artists “stand on the shoulders of giants” by incorporating others’ influences into their work, and it may be both dangerous and difficult to make judges, juries, or legislators the final arbiters of what it takes to be creative.
An AI that responds not to the internet but to the physical world won’t be committing a copyright violation at all, because the physical world isn’t copyrighted. Courts and/or legislatures may stipulate that “creativity” is something only humans do, but in the absence of such a sweeping ban on AI outputs, copyright law, the go-to tool for regulating the internet, may not be applicable to AI, particularly once it becomes agentic.
Abandon All Hope, Ye Tech Company Who Enters this Product Liability Lawsuit?
Product liability and negligence are also emerging as real contenders for AI regulation via the courts. Multiple lawsuits around the world claim that chatbots are a product that is harming vulnerable people. Agentic AI may continue to fit the definition of a product, defined in Black’s Law Dictionary as “something that is distributed commercially for use or consumption and that is usually (1) tangible personal property, (2) the result of fabrication or processing, and (3) an item that has passed through a chain of commercial distribution before ultimate use or consumption.” Negligence under contract and privacy laws, where AI is treated as a type of license or software, may also provide avenues to regulation via the courts. The industry is clearly rattled, and it has a right to be. Classifying AI as a product would take AI out of the protective, liability-free nest in which internet companies have been living and fling it into the brutal, bloody world of strict liability, with its recalls, mandatory safety tests, and regulatory bodies.
Yet defining AI as a product or a license under the law brings its own conceptual and practical challenges. Current forms of AI require a high level of human intervention, and it is possible to map the supply chain for AI decision-making, with identifiable moments where negligence may have occurred or a design defect may have been introduced. Future AI, however, may be so unpredictable and multipurpose that it becomes impossible to apply the standard tort principles of foreseeability, reasonableness, negligence, warranty, ordinary use, and so on.
AI will not only be unpredictable; it may also surpass human abilities. Identifying what counts as a defect may become impossible. While it’s easy to see that a nail that breaks when hammered into a piece of wood is defective, because it’s not performing the task for which it was created, the creativity and unexpectedness of AI isn’t a defect in the system; it is the system. If AI were perfectly predictable and controllable, nobody would use it. Meanwhile, it is difficult to see how a license or warranty could even be written for an intelligent robot capable of an unlimited range of activity and action.
A system of insurance or a pooled fund to pay for AI harms is an option to be explored, but it’s not clear that such a system could handle the immense variety and severity of the harms caused by intelligent robots while still making the manufacture and sale of those robots financially viable. While tech companies often adopt a “move fast and break things” approach, creating something and then waiting for the lawyers and regulators to catch up, liability may shut down agentic AI before it even gets started.
But just because it will be hard to classify AI robots as platforms or products, and just because we may not be able to hold their manufacturers liable for copyright infringement, doesn’t mean that we’ve exhausted the areas of law that might apply to AI.
Limiting Liability Via AI Legal Personality
Could legal personhood be applied to AI? Legal personality is a functional category used primarily to limit the liability of individuals by allowing objects and groups of humans to be individual subjects of the law. In the United States, it developed to facilitate the growth of the railroads, and it is often called a “legal fiction” because courts may “pierce the corporate veil” to prevent abuse, to protect the integrity of the legal system, or to accomplish other goals in the public good.
Lawyers have done a sloppy job over the years of defining legal personhood. Black’s Law Dictionary, 2nd edition, which is authoritative in the United States, distinguishes between natural persons, which it defines as human beings, and artificial, or legal, persons, which it defines alternatively as “[a] body, other than a natural person, that can function legally, sue or be sued, and make decisions through agents”; “[a]n entity, such as a corporation, created by law and given certain legal rights and duties of a human being”; or “a being, real or imaginary, who for the purpose of legal reasoning is treated more or less as a human being.”
Bryant Smith, in his seminal paper on legal personhood under the common law, defines legal personality as the fact of being in a legal relationship, though he points out that some experts also require that legal persons have the capacity for legal relations. That requirement is often satisfied by the legal person having a human agent acting on its behalf. Legal persons are also defined by some lawyers as the holders of rights and duties (or liabilities). Legal personality is distinct from legal identity, which is the fact of being recognized as a person under the law.
Legal persons are traditionally groups of humans, like the crews of ships, the employees of corporations, the clergy of churches, or the public servants in governments, whom it is convenient and expedient to treat as a single “person” under the law, usually to shield individual members from liability. More recently, the courts of some countries have recognized a very small number of rivers as legal persons because they are considered to be entities with spirits under the traditional laws of certain indigenous cultures.
While agency may be needed to exercise rights and duties and to participate in legal relations on behalf of a legal person, it is not necessary or sufficient for legal personhood. All that is necessary is that the legal person have a human agent to act on its behalf. Some humans lack all agency but remain persons; some entities, like animals, have agency but are denied legal personality. It is necessary, however, that the human agent acting on behalf of the legal person have enough agency to participate in legal relations, whatever those may be in each case.
Legal personality for AI could therefore serve to limit the liability of AI manufacturers and owners in order to enable the creation of AI for the public good. It would not be necessary for AI to be agentic to qualify for legal personality, as a human agent or representative would represent the AI in its interactions with the legal system. AI, however, is neither a group of humans nor an entity with religious or cultural significance, meaning that it does not fit into any of the categories for which legal personality is currently used. And for agentic AI, it is not clear what role legal personality would serve, as agentic AI will likely be able to represent its own interests in the courts and in its other interactions with the legal system. If agentic AI does not require a human to represent it, it is not clear what the point of legal personality would be, as its role is to be a device, or fiction, that enables the smooth functioning of the legal system.
AI, Natural Law, and Moral Personhood
Entirely separate from legal personhood is the question of whether agentic AI should be recognized by human society as having moral personhood. The merging of AI and robotics poses important philosophical and religious questions as to whether such an entity would be a moral person with innate status and worth under natural law. The question of moral personality for AI implicates human rights, the specialness of humans under religious systems around the world, the moral status of human-robot hybrids, animal rights, and more. The debate over AI moral personhood is perhaps most developed among philosophers and lawyers working on animal rights, yet it has profound implications for the legal system as well.
This is not the first time human society has been confronted with a gap between legal personality and moral personality. Historically, slaves were often denied legal personality, while retaining moral personality, in places as diverse as ancient Rome and the United States of America, though their exact status under the law was often vague and contradictory. This contradiction between legal personality and moral personality in slave-based societies arguably destabilized their legal systems, and today, slavery is prohibited under international law.
Disturbingly, even broad societal recognition of the moral personhood of AI would not automatically result in its legal personhood being recognized by courts or legislators. An intelligent companion robot could sue in court only if the court extended it standing to sue, something that courts have refused to do for animals. The benefit of moral personhood for AI would be to subject it to the same laws as humans, including, for example, the criminal law system, a “solution” that may cause more problems for the functioning of the legal system than it resolves. Many areas of law designed for humans, such as criminal law, take for granted a person’s unity of mind and body in a single location, or jurisdiction, a requirement that robotic, intelligent AI may not be able to fulfill if it is spread among several metal bodies, servers, and satellites. Some of the rules for corporations, which are similarly disembodied, may be helpful, but there is no easy, obvious, or simple fit when it comes to incorporating AI into our legal systems as moral persons.
Future posts in this blog series will explore the ramifications of moral personhood for agentic, intelligent, and robotic AI.