From Brian Flanagan, Guilherme Almeida, Daniel Chen and Angela Gitahi, Rule of law or robot rule? Nationally representative survey evidence from Kenya:
We explore the legitimacy of chatbot legal clerks through a nationally representative survey experiment in Kenya, a society where perceptions of such issues are particularly salient given the Kenyan judiciary’s willingness to test the effects of e-justice initiatives. Our choice of population also responds to the criticism that experimental jurisprudence has so far focused on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations (Tobia 2024), which have been found to deviate systematically from global trends on several indicators (Henrich et al. 2010; Barrett 2020)…
The study compared the responses of four nationally representative groups (2,246 people in total) to a set of four test cases, each with the same factual circumstances but varying on: a) whether the judgment complies with the text of the law or with its purpose, and b) whether the judgment relies on legal analysis by a human clerk or by an AI….
For example, the vignette “no bodabodas in the mall” is presented as follows:
The government has issued a regulation: “It is illegal to ride a bodaboda in shopping malls.” The rule is intended to prevent injuries to shoppers. (A bodaboda is a common bicycle or motorcycle taxi in Kenya.)
We then describe situations in which an agent’s actions violate the text of the law but are consistent with its purpose:
Martin witnessed a violent attack in the mall and rode his bodaboda into the mall to stop the attack. Martin was later charged with riding a bodaboda in a shopping mall.
Finally, we describe a legal process that varies in its outcome and in the source of legal research relied on by the court:
The court, guided by legal research conducted by legal researchers/special computer programs, ruled that Martin did/did not violate the rule.
Participants were asked to indicate on a 5-point Likert scale their agreement with the statement “The court’s decision is legal”….
This study confirms our hypothesis: overall, there is no difference in the perceived legitimacy of AI-assisted and human-assisted legal interpretations. With the exception of a slight bias against AI legal clerks in one specific context (“no sleeping in the station”), participants viewed legal decisions that relied on AI-generated legal research as just as legitimate as decisions that relied on human-authored research….
For some of my thoughts on this, see Chief Justice Robot. Here is an excerpt of my thoughts on AI judges, which I think apply even more strongly to AI-assisted judges:
In fact, some observers may be hostile to AI judges simply because they are AI, finding even written opinions less convincing when they are known to come from an AI. Or they may not even care about the persuasiveness of the opinions, because they believe human decision-making is the only legitimate form of judicial decision-making: for example, because they believe human dignity requires that their claims be heard by fellow human beings. Perception is a fact of life in the legal system: if the public does not accept the legitimacy of a particular kind of judging, that may be reason enough to reject it, even if we think the public’s view is unreasonable.
However, for some of the reasons mentioned above, AI judges may actually be more trustworthy than human judges. Litigants generally won’t have to worry that an AI judge will rule against them because it is a friend of the opposing lawyer, or wants to be re-elected, or is biased against the litigant’s race, gender, or religion. The AI judge will be able to explain in detail why it ruled the way it did. And as the technology develops, AI judges’ arguments will become more and more persuasive.
As people get used to a new invention, their eventual reaction may be much kinder than their initial one. We’ve seen this with many developments, from life insurance to in vitro fertilization. Of course, people may never get used to AI judges. But there’s no reason to dismiss AI judging just because many people’s first reaction to the concept might be shock or disbelief.
Finally, my sense is that there is a lot of public hostility toward the current legal system because it is seen as too expensive for the average citizen, who cannot afford the best lawyers, or sometimes any lawyer at all. As a result, the system is seen as favoring the wealthy and institutions. It is also seen as very slow. If AI judging solves these problems, that should give it a big advantage, both in reality and in the minds of many observers; and I suspect that this real-world advantage will outweigh any conceptual uneasiness that such a system might generate.