Emily LaGratta has just released the national report from her Fairness Challenge Pilot Project, and it deserves your attention. The report documents what happened when 26 judicial officers across four states committed to six months of focused work on procedural justice—and then invited litigants to tell them how they were doing.
The short version: it worked.
Nearly 1,700 litigants gave feedback during the pilot period. Perceptions of fairness hovered around 90% across the participating courts—well above the roughly 60% that national polling shows for public trust in state courts generally. Litigants reported high levels of feeling heard, understanding what happened in their cases, and being treated fairly.
But the numbers only tell part of the story. What makes this report especially valuable for practitioners is the detail on how judges made improvements. The pilot was built around monthly training engagements covering each element of procedural justice—voice, understanding, respect, and neutrality—with peer observations, self-assessments, and real-time litigant feedback woven in throughout.
The report includes specific practice enhancements that participating judges adopted, many of them small but meaningful: slowing down, pausing more and interrupting less, using a grounding practice at the start of each hearing, explaining why sensitive questions are being asked, and asking litigants to put what they need to do next in their own words. The report also includes targeted suggestions for working with self-represented litigants and for remote court settings—two areas where procedural justice practices require particular attention.
One finding I want to highlight: many judges discovered during the pilot that they had been overestimating how often they used certain procedural justice practices. They intended to use them but found, on closer examination, that intention and behavior weren't always aligned. That kind of honest self-assessment is exactly what sustained professional development makes possible—and it's hard to get there without the structure and peer accountability that this project provided.
Another notable finding was that judges initially underestimated how positively litigants would rate them. Before the pilot, judicial officers estimated that 50–80% of litigants would rate their fairness positively. The actual numbers came in higher than expected for nearly every participant. Collecting litigant feedback gave judges a more accurate—and more encouraging—picture than the one they were carrying around in their heads.
The full report is available here. It includes the training curriculum outline, the litigant survey questions (in English and Spanish), practice-enhancement lists coded to the four procedural-justice elements, and a peer-observation instrument that other courts could adapt.
If you’re a judge or court leader interested in this kind of work, I’d encourage you to visit Emily’s website at http://www.lagratta.com and sign up for updates. She’s planning another annual Fairness Challenge cycle, and the model she’s developed here (sustained engagement, real feedback, peer learning) is one that any court system could adopt and benefit from—especially if you sign up to work with Emily on it.
Steve Leben
P.S.—for more information, go to proceduraljustice.org, not the link that shows up below.

