I thank my former colleague Bill Raftery for introducing the topic of AI into the PF blog. I hope to build on what he started by looking at how PF theory and practice might be applied to AI-based decision-making. To do so, I offer some preliminary thoughts that I plan to refine in a subsequent blog post. In the meantime, I ask for your help in the form of criticism of my thinking about the potential benefits and risks that a merger of AI and PF practices poses for public trust in the state courts.
First, the use of AI by the courts is unlikely, on its own, to have a positive influence on public perceptions of state courts; that influence might even turn out to be negative. Analyses of public opinion surveys about the state courts conducted over the last 50 years consistently find that perceptions of instrumental factors, such as court efficiency, cost, or timeliness, have at most a minor influence on trust in the courts, and more likely an insignificant one. Public opinion is best understood as a reflection of the extent to which people experience, or believe they would experience, PF from the judiciary. To influence trust, AI must in some way enhance that relationship.
Second, research to date on public perceptions of AI is not encouraging. It offers no reason to believe that implementing AI in judicial decision-making will change public opinion in a positive way. There is no research on this point, but it is reasonable to assume that people have more trust in the courts than they do in AI. So, associating the judiciary with AI might, in the short term at least, have a negative influence on trust.
Third, it is difficult, at least for me, to integrate a role for AI into the theoretical framework that explains how adhering to PF practices translates into higher levels of trust in the courts (and compliance and willingness to be vulnerable).
Here are my initial thoughts about the possibilities and limitations:
· PF occurs in the context of an interaction between a decision-maker and a decision recipient, as perceived by the recipient.
· The decision-maker has authority and discretion in using that authority. The recipient, in turn, typically has discretion in how he or she responds to the authority’s decision (in the form of compliance or non-compliance, for example).
· The vast PF research literature tells us that PF influences trust and compliance via two types of perceptions embedded in the transaction: (1) the perceived quality of the decision-making process (demonstrating neutrality and voice) and (2) the perceived quality of the treatment the recipient experienced (demonstrating respect and benevolence).
· There are, perhaps, limited ways in which PF principles can be incorporated into AI-based decision-making. For example, software can select, and perhaps modify, scripts for interacting with decision recipients that mimic PF best practices in a manner tailored to a specific recipient (a minimal sketch of this idea follows the list). This seems possible for demonstrating respect and, to some extent, even neutrality. Demonstrating voice and benevolence seems more challenging for AI. A greater challenge still would be to use AI in a way that instills PF practices into a dialogue between the algorithm and a human decision recipient, as a judge practicing PF can.
· Finally, the judiciary should be concerned with the potential for AI, and especially Generative AI, to introduce bias into decision-making.
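To make the script-selection idea in the list above concrete, here is a minimal sketch, assuming a hypothetical, human-vetted library of PF-oriented script templates. Every name in it (Recipient, SCRIPTS, build_dialogue) is my own illustrative assumption, not an existing court system or product, and the sketch deliberately stops at selecting and tailoring text; the harder task of sustaining a genuine PF dialogue is exactly what it does not attempt.

```python
# Hypothetical sketch: selecting and lightly tailoring PF-oriented scripts
# for one decision recipient. Illustrative only; not an existing system.

from dataclasses import dataclass


@dataclass
class Recipient:
    name: str
    case_type: str            # e.g., "traffic", "small claims"
    preferred_language: str   # translation/accessibility would key off this


# Script templates keyed by PF element; in practice these would be written
# and vetted by judges and PF researchers, not generated on the fly.
SCRIPTS = {
    "respect": "Thank you, {name}. Your time and your {case_type} case matter to this court.",
    "voice": "Before a decision is made in your {case_type} case, is there anything you want the court to know?",
    "neutrality": "The same rules and criteria apply to every {case_type} case; here is how they apply to yours.",
    "benevolence": "The court's goal is an outcome that is fair to you and to everyone involved.",
}


def build_dialogue(recipient: Recipient) -> list[str]:
    """Assemble a PF-style script tailored to one decision recipient."""
    lines = []
    for element in ("respect", "voice", "neutrality", "benevolence"):
        lines.append(
            SCRIPTS[element].format(
                name=recipient.name, case_type=recipient.case_type
            )
        )
    # Translation, reading-level, and accessibility adjustments would go here.
    return lines


if __name__ == "__main__":
    for line in build_dialogue(Recipient("Ms. Alvarez", "traffic", "en")):
        print(line)
```

Even this modest version raises the concern in my final bullet: who vets the templates, and how do we guard against bias in how they are selected and modified for particular recipients?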
I wrote this post to encourage others to criticize my logic and to counter my pessimism about the potential benefits of integrating PF principles into AI. What do you say?
David B. Rottman
Rottman, D. and Tyler, T. (2014) “Thinking about Judges and Judicial Performance: Perspectives of the Public and Court Users.” Oñati Socio-Legal Series 4(5). https://opo.iisj.net/index.php/osls/article/view/343/490
Yalcin, G., Themeli, E., Stamhuis, E. et al. (2023) “Perceptions of Justice by Algorithms.” Artificial Intelligence and Law 31, 269–292. https://doi.org/10.1007/s10506-022-09312-z
Malek, M.A. (2022) “Criminal Courts’ Artificial Intelligence: The Way It Reinforces Bias and Discrimination.” AI and Ethics 2, 233–245. https://doi.org/10.1007/s43681-022-00137-9

