Procedural Fairness and AI: Current Research

By Bill Raftery, NCSC Senior Knowledge Management Analyst

The question of what role artificial intelligence and machine learning will play in the law is not new; it has been discussed in one form or another for at least a decade. Recently, the focus on AI in the context of procedural fairness has intensified and moved beyond the legal literature. Three pieces released in the last year put these innovations into context.
  • “AI decision-making and the courts: A guide for judges, tribunal members and court administrators” focuses on the use of AI in Australian courts, with reference to its use by courts in the United States and other nations. The report takes a holistic approach, examining AI at every step of the judicial process, from e-filing and case triage to sentencing, and identifies several concerns. Are AI models making judicial decisions (which the report concludes would deny procedural fairness)? Is there a registry of all AI systems used by the courts, or whose outputs are relied on by tribunals? Can system outputs be challenged where litigants believe the inputs were in error or that the system failed to take account of relevant factors? While the report does reach some conclusions, it is perhaps most helpful for the list of questions it raises for courts and judges to ask before using or relying on such systems.
  • “Algorithms in the Court: Does it matter which part of the judicial decision-making is automated?” examines the perceived procedural fairness of what the researchers describe as Algorithmic Decision Making (ADM) at four stages of the judicial process: information acquisition, information analysis, decision selection, and decision implementation. Based on survey data, people generally believe that low levels of automation ensure the fairest outcomes. However, there may be a willingness to accept automation in the information acquisition stage as procedurally fair, a perception that appears among those both inside and outside the legal profession. Why this occurs is unclear. Perhaps people have become so accustomed to using search engines to find initial information that they will accept an AI system performing a similar initial search in the judicial context. Another possible explanation is that AI at this stage is seen as reducing confirmation bias: rather than judges forming an opinion and then searching for initial information to support it, the initial information is handled by the AI system, which can boost perceptions of procedural fairness.
  • The researchers in “Having your day in Robot Court” attempted to ascertain how the total or near-total removal of humans from the judicial process affects perceptions of procedural fairness, focusing on one key element: the human voice. Two experiments were conducted. In the first, participants were presented with three scenarios (a consumer refund for a damaged camera, bail before trial, and a sentence for manslaughter) in which the party or defendant “lost”: no refund was given, bail was denied, and the maximum sentence was handed down. In some scenarios the determination was made by a human, in others by an algorithm. The scenarios were further subdivided by whether the decision was made with or without a hearing and whether the decision was “interpretable” (was the decision maker’s reasoning easy to understand?). The researchers concluded that with a human decision maker, having a hearing and the interpretability/clarity of the decision do matter, reaffirming past research. When the decision is made by an algorithm, it is viewed as less procedurally fair, but the use of a hearing may make people more willing to accept that the results are fair, more so than whether the algorithm’s ultimate decision is clear/interpretable.


As AI becomes integrated into all aspects of life, attention to the actual fairness of the decisions and determinations it makes is needed. However, because courts occupy a unique position, needing not only to do justice but to be seen as doing justice, the use of AI must account for both perception and reality.

Barysė, D., & Sarel, R. (2023). Algorithms in the Court: Does it matter which part of the judicial decision-making is automated? Artificial Intelligence and Law. https://perma.cc/8WZG-KDK3

Bell, F., Bennett Moses, L., Legg, M., Silove, J., & Zalnieriute, M. (2022). AI decision-making and the courts: A guide for judges, tribunal members and court administrators. Australasian Institute of Judicial Administration. https://perma.cc/78VE-FEHP

Chen, B. M., Stremitzer, A., & Tobia, K. (2022). Having your day in Robot Court. Harvard Journal of Law & Technology. https://perma.cc/8MGP-MBU7
