By David Rottman
AI + Procedural Justice = ? Part 2: Algorithms with a Human Face
In a blog post last month, I argued that consideration of the likely impact of AI on the courts should begin with a review of procedural justice (PJ) theory. PJ theory is built on the interaction between a human decision-maker and a human decision-recipient: can that work if the decision-maker is an algorithm and the defendant, or litigant, is a person? I was skeptical about that possibility. Here I try to bolster my argument by considering the problems designers of algorithms must confront in providing decision-recipients with a “voice” and in signaling emotions. But I also describe recent innovative research demonstrating that procedural justice practices can be embedded into an algorithm to give it a human face. Judges and court administrators should take note.
The Challenge of Allowing Recipients “Voice”
Voice is one of the four elements of procedural justice: it gives a decision-recipient the perception that their claims have been heard and acknowledged by the decision-maker. Procedural justice research conducted in the offline world consistently finds that the absence of voice in this sense has a negative effect on decision compliance and on trust in the entity that rendered the decision. Moreover, research shows that the perception of having no voice “mediates” the effect of procedural justice on emotional reactions (see below). The challenge in the online world is to design software that gives a recipient an opportunity to express themselves in a way that could be perceived as influencing the decision outcome. Ideally, this would extend to an opportunity for the recipient to receive an explanation and a response to what they expressed through the opportunity for voice.
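To make the design challenge concrete, here is a minimal sketch of how an automated decision process could be required to collect a recipient’s statement before any outcome is produced and to return an explicit acknowledgment of that statement with the decision. This is my own illustration, not a description of any existing court system or of the studies cited here; the names VoiceSubmission, Decision, and rule_engine are hypothetical.

```python
# Illustrative sketch only: the data structures and the rule_engine callable are
# hypothetical, not taken from any court system or from the research cited in this post.
from dataclasses import dataclass

@dataclass
class VoiceSubmission:
    recipient_id: str
    statement: str          # the recipient's account, in their own words

@dataclass
class Decision:
    outcome: str
    explanation: str        # why this outcome was reached
    acknowledgment: str     # explicit reference to what the recipient said

def decide_with_voice(submission: VoiceSubmission, rule_engine) -> Decision:
    """Require a voice submission before any outcome is produced, and attach
    an acknowledgment of that submission to the decision that is returned."""
    outcome, rationale = rule_engine(submission)   # placeholder decision logic
    return Decision(
        outcome=outcome,
        explanation=f"This decision was based on: {rationale}",
        acknowledgment=(
            "Your statement was reviewed before this decision was made: "
            f"\"{submission.statement}\""
        ),
    )

if __name__ == "__main__":
    # Toy example for demonstration only.
    toy_engine = lambda s: ("payment plan approved",
                            "the statement described circumstances the rule allows for")
    print(decide_with_voice(
        VoiceSubmission("case-001", "I was out of town and never received the notice."),
        toy_engine))
```

The point of the sketch is structural: the process cannot produce a decision without the recipient’s input, and the output explicitly acknowledges that input, which is one ingredient of the perception of voice described above.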
The Challenge of Emotions
Emotional reactions include anger, frustration, disgust, happiness, shame, and pride. If emotions are suppressed, decision-recipients tend to lose their sense of “individuality,” which people strongly value. People make inferences about justice based on the emotions they perceive (see Alkhadher et al., 2023, in the list of sources). In interactions with authority figures, emotions offer social information and signal the nature of the decision process, both for recipients and for observers of the interaction.
In the absence of the ability to signal or detect emotions, online decision-making may be seen as artificial and unable to evoke positive responses from decision-recipients or observers. Attempts to use software to model human emotions are unlikely, at the current state of the art, to have the influence a human decision-maker can have on how people perceive the level of procedural justice they are experiencing or observing. In the next section, I outline ways in which the state of the art is advancing through research on PJ online.
New Research Providing Evidence that Algorithms Can Have a “Human Face”
Research by Tom Tyler and others is exploring the potential for AI to present a human face in interactions with decision-recipients (see Katsaros et al., 2023, and Tyler et al., 2021; both articles are included in the list of sources). The main criterion in the research is the ability of procedural justice practices to promote self-regulation: voluntary compliance with the content rules established by platforms like X or Meta. The subjects in the research were platform users previously sanctioned or warned for violating those rules. Users who received a warning or sanction framed around procedural justice principles were less likely to continue violating the rules than those who were sent the platform’s standard messages. The researchers concluded: “ . . . the necessary antecedents for procedural justice can be built into algorithmic decision making used in platform’s content moderation efforts” (Katsaros et al., 2023). This allows algorithms to have a “human face.” Research on a hotel staffed by service robots supports that finding: the hotel robots were perceived as having “feelings” (Yam et al., 2021, in the list of sources). This is intriguing. Potentially, algorithms can demonstrate that they adhered to procedural justice principles in their decision-making process. At the same time, algorithms can, in principle, be advertised as a method of decision-making not infected with human biases.
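To make concrete what building procedural justice antecedents into an automated message might look like, here is a minimal sketch. The wording and the function procedural_justice_warning are my own illustration, not the messages actually tested by Katsaros et al. (2023) or Tyler et al. (2021); it simply contrasts a bare standard notice with a warning that explains the rule, shows what triggered it, signals respect and neutrality, and invites a response (voice).

```python
# Hypothetical illustration: the message wording and function name are my own sketch,
# not the warnings used in the Katsaros et al. (2023) or Tyler et al. (2021) studies.

STANDARD_WARNING = "Your post violated our rules and has been removed."

def procedural_justice_warning(username: str, rule: str, post_excerpt: str) -> str:
    """Compose a warning that explains the rule (transparency), shows what triggered it,
    notes that the rule applies to every account (neutrality), uses a respectful tone,
    and invites a reply before further action (voice)."""
    return (
        f"Hello {username},\n"
        f"Your recent post was flagged because it appears to conflict with this rule: {rule}.\n"
        f"The part of the post in question: \"{post_excerpt}\"\n"
        "These rules exist to keep the platform safe and are applied to every account in the same way.\n"
        "If you believe this was a mistake, you can reply here, and your response will be reviewed "
        "before any further action is taken."
    )

if __name__ == "__main__":
    print(procedural_justice_warning(
        username="example_user",
        rule="no personal attacks",
        post_excerpt="...",
    ))
```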
Conclusion
I’ve identified two challenges for courts seeking to improve the algorithms they use. One is to provide opportunities for decision-recipients to express their voice before a decision is rendered. The other is to find ways to anticipate and accommodate the emotions that inevitably surround decision-making processes. Progress in either area would make online decision-making seem less artificial. I end, then, with reasons for optimism: the recent research in social psychology just described demonstrates the potential of algorithms to be given a human face.
Please respond to this post with comments pointing out errors of fact, logic, or interpretation of research findings. Most of all, what do you believe is the potential for AI combined with PJ to improve the courts?
Want to Learn More?
I offer citations to some of the sources I consulted in preparing this blog. Be aware that they are written in the language one expert uses to engage with another: jargon is one barrier, and conclusions rest on advanced statistical analysis. Cost is another barrier; few academic articles are available for free. But if you want to know more, do a Google search of the article’s title and read the abstract, which is available for free, to learn the basics.
Alkhadher, O., et al. (2023). “Emotions as social information in unambiguous situations: Role of emotions on procedural justice perception.” Current Psychology. [Finding “significant interactions between the procedure and emotions.”]
Katsaros, M., J. Kim, and T. Tyler (2023). “Online content moderation: Does justice need a human face?” International Journal of Human-Computer Interaction.
Maguire, E., et al. (2023). “The role of anger in mediating the effects of procedural justice and injustice.” Group Processes & Intergroup Relations.
Tyler, T., et al. (2021). “Social media governance: Can social media companies motivate voluntary rule following behavior among their users?” Journal of Experimental Criminology, 17(1), 109-127.
Yam, K., et al. (2021). “Robots at work: People prefer—and forgive—service robots with perceived feelings.” Journal of Applied Psychology, 106(10).

