What does "generative" really mean, and is it important? Distractions, vague concepts and literary terms make for bad AI laws and confused judges.
The "Thomson Reuters v. Ross Intelligence" court decision and Montana’s proposed law SB 212 demonstrate why unimportant distinctions, vagueness and metaphor are the enemy of all law.
Prompt: A golden ring having thoughts.
In The Lord of the Rings, the Ring of Power is both an object (the MacGuffin to end all MacGuffins) and a character in its own right. The book never makes it clear whether the One Ring can exercise free will or pursue goals of its own. Sometimes it seems to be making decisions independently, such as by slipping onto Frodo’s finger at inopportune moments. At other times, the Ring is simply a conduit for the will of Sauron, the main enemy of the story. The book never resolves this tension, but the Ring, as a metaphor in a book, can be both.
There would be no way to write coherent laws for the Ring of Power because sometimes the Ring is acting, and sometimes it is just a conduit for the actions of others. It’s all a bit vague and magical, and while we morally judge the various characters in the story by the extent to which they are able to control themselves and overcome the seductive pull of the Ring, we don’t need to work out a detailed system of responsibility and causation because we are reading a book, not developing a workable system of torts and criminal law for Middle Earth.
Laws are about clearly establishing rights and responsibilities. Lawyers need better definitions from AI experts for generative, loss of control, agentic, alignment, and intent, and better guidance for when these concepts matter.
We seem to be in the process of developing our very own Ring, as many researchers believe AGI and, in particular, “physical AI” or smart robots may move from science fiction to near-future tech (according to recent surveys by the Association for the Advancement of Artificial Intelligence). The law must decide whether the robots of the future will be responsible for their actions and capable of understanding their rights and responsibilities. If robots are merely extremely advanced tools, then human beings will remain totally responsible both for their “acts” and for their welfare. The question of AI consciousness, its capacity for free will and rational decision making, will therefore be crucial for the law. Yet there is no “industry standard” for terms related to AI free will, intent, loss of control, alignment or agency. All these terms mean different things to different researchers.
Current confusion over the importance of the term “generative” is a case in point. Strictly speaking, what is "generative" (rather than "discriminative") is the model itself, not what a given application uses it to do. To say that ChatGPT is generative is to say that it can do more than just classify; it can also produce new content. But ChatGPT remains a generative model even if you only use it to do, e.g., sentiment analysis (classify whether passages of text seem "happy" or "sad").
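To make the distinction concrete, here is a minimal sketch of a generative language model being used purely as a classifier. It assumes the Hugging Face transformers and torch packages and the small distilgpt2 model, all chosen only for illustration (none of them figure in the case): the model is never asked to produce new text, only to score which of two labels better fits a passage.

```python
# A minimal sketch (not the Ross system): a generative language model used
# purely as a classifier. "distilgpt2" is chosen only because it is small.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def classify_sentiment(passage: str) -> str:
    """Label a passage 'happy' or 'sad' by asking which label the
    generative model finds more plausible as a continuation."""
    losses = {}
    for label in ("happy", "sad"):
        text = f'Text: "{passage}"\nThe mood of this text is {label}.'
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model report the average negative
            # log-likelihood of the sequence; lower means "more plausible."
            losses[label] = model(ids, labels=ids).loss.item()
    return min(losses, key=losses.get)

print(classify_sentiment("The sun came out and everything went right today."))
```

The model produces no new text here, yet it remains a generative model; the label describes what the model is, not the task it happens to be given.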
This brings us to case law. The judge in Thomson Reuters v. Ross Intelligence withdrew his earlier summary-judgment opinion and issued a revised one, based on his evolving understanding of how using copyrighted material as AI training data differs from copying computer code (the latter being an industry practice used to ensure that software programs are compatible with one another). The judge was very clear that he had been wrong in his prior ruling and held that Ross’s use of copyrighted material to train its AI was copyright infringement.
The judge was also careful to note that the AI used by Ross Intelligence, the defendant in the case, was not “generative.” By stressing that the AI used was not “generative,” the judge granted relief to the plaintiff without actually ruling on whether or not the use of copyrighted information to train generative AI also violates copyright. According to public reporting, the Ross AI was not a generative model, but was more like a search engine. Even if true, a further conversation is necessary to determine whether the capacity of an AI to generate images or text should change the outcome of future copyright litigation.
Did the Terminator robots develop free will? Is the paperclip AI just a tool that we forgot to turn off?
Another good example of the frustrating vagueness creeping into laws on AI is the concept of “loss of control,” whose meaning, within the AI safety community in particular, has fluctuated over time, which makes it difficult to write laws. A case in point is the European Commission’s Third Draft of the General-Purpose AI Code of Practice. Its definition of “loss of control” is so broad that it seems to encompass everything from Terminator robots trying to take over the world to a forgotten AI diligently following its prompt to make a dangerously endless number of paperclips. Both of these scenarios are dangerous, but in different ways that have different implications for the legal system. The first is the danger of conscious AIs that have free will and are acting against human interests, while the second is the danger of AIs that have no idea what they are doing or why, but are simply hyper-efficient and powerful human tools with unforeseen consequences.
For the law, there is a huge difference between these two things, because questions of responsibility and duty are fundamental to the law, even if the outcome (the destruction of humanity) is the same. AI experts often fail to distinguish between these two scenarios. This fuzziness isn’t necessarily a problem for AI research, where the level of risk might be the same, but it becomes a problem for tort law, when courts must try to determine who is responsible for cleaning up all the paperclips. An example is the definition below, where the two are mushed together:
Risks related to the inability to oversee and control autonomous (general purpose AI systems that pose a systemic risk) GPAISRs that may result in large-scale safety or security threats or the realisation of other systemic risks. This includes model capabilities and propensities related to autonomy, alignment with human intent or values, self-reasoning, self-replication, self-improvement, evading human oversight, deception, or resistance to goal modification. It further includes model capabilities of conducting autonomous AI research and development that could lead to the unpredictable emergence of GPAISRs without adequate risk mitigations. Loss of control scenarios can emerge, e.g., from unintended model propensities and technical failures, intentional modifications, or attempts to bypass safeguards.
A further example of the problems that vague language about “loss of control” or AI agency can create for the legal system can be found in Section 4 of Montana’s SB 212:
Infrastructure controlled by artificial intelligence system -- shutdown. (1) When critical infrastructure facilities are controlled in whole or in part by an artificial intelligence system, the deployer shall ensure the capability to disable the artificial intelligence system's control over the infrastructure and revert to human control within a reasonable amount of time.
What is meant by “control”? Does this law apply only to AI systems that have taken over all decision-making for a power grid or other critical infrastructure (and, if that is the case, would it even be possible to take control back)? Or does it apply to an automated AI system where a human still, at least in theory, has the final say, like the hypothetical paperclip AI? The law also doesn’t establish a clear mechanism for telling when “control” has been “lost.” Finally, it’s not clear from the law whether AI systems that cannot be made to comply (those that are fully autonomous in the sense that they act completely independently of human control) are to be banned. Instead, the “deployer,” whoever that may be, must have a fail-safe mechanism, though why a superintelligent AI system would abide by this fail-safe is another question. The concept of “control” is therefore vague, which is a problem for the law.
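For illustration only, here is a hypothetical sketch of the kind of fail-safe SB 212 seems to contemplate. Every name in it (GridController, ai_policy, human_policy) is invented; the point is simply that such a mechanism presumes the AI is an ordinary program that obeys its own off switch.

```python
# A hypothetical sketch of an SB 212-style fail-safe. All names are invented
# for illustration; no real control system works exactly like this.
class GridController:
    def __init__(self, ai_policy, human_policy):
        self.ai_policy = ai_policy        # automated decision-maker
        self.human_policy = human_policy  # human operator's decisions
        self.human_override = False

    def request_human_control(self):
        # The statutory "disable and revert" capability a deployer must provide.
        self.human_override = True

    def decide(self, sensor_data):
        # If the AI is merely a tool, this branch is the whole fail-safe.
        # A system pursuing goals of its own would be the thing evaluating
        # this very check, which is the statute's unexamined assumption.
        if self.human_override:
            return self.human_policy(sensor_data)
        return self.ai_policy(sensor_data)
```

Whether “control” means this kind of routine override flag, or something far harder to claw back, is exactly what the statute leaves undefined.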
Part of the problem with the vague terminology surrounding AI is that we seem to have entered a funhouse world in which we say that AIs are “scheming” and “hallucinating” without really knowing whether we mean these words metaphorically or not, or whether there really is a difference. With laws on AI safety being drafted around the world and court decisions being handed down in crucial cases, the AI community needs to move faster, not only to define key terms like generative, control and agency, but also to stop using metaphors or literary terms like “hallucinating.” Otherwise we risk ending up with vague laws that do more harm than good.
As the judge in Thomson Reuters put it,
A smart man knows when he is right; a wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does––even if it comes late, as it did here.