
Pretrial Release Judgments and Measuring Fairness


Pretrial release judgments and decision fatigue

Dr. Ravi Shroff and collaborator Konstantinos Vamvourellis published an article in the journal Judgment and Decision Making called "Pretrial release judgments and decision fatigue."

Abstract

Although field studies in many domains have found evidence of decision fatigue—a phenomenon describing how decision quality can be impaired by the act of making previous decisions—debate remains over posited psychological mechanisms and the size of effects in high-stakes settings. We examine an extensive set of initial arraignments in a large court system, and find that the time an arraignment occurs generally has little effect on a judge's release decision or whether a judge concurs with a prosecutor's bail request. Moreover, we find that release and concurrence rates remain unchanged after a meal break, even though judges have the opportunity to replenish their mental and physical resources by resting and eating. Our results imply that to the extent that decision fatigue plays a role in pretrial release determinations, effects are small and inconsistent with previous explanations implicating psychological depletion processes.

Read the Paper

The Measure and Mismeasure of Fairness

Dr. Shroff and colleagues also published "The Measure and Mismeasure of Fairness" in the Journal of Machine Learning Research. This paper expands on an earlier paper, "Causal Conceptions of Fairness and Their Consequences," which appeared at the 2022 International Conference on Machine Learning.

Abstract

The field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last decade, several formal, mathematical definitions of fairness have gained prominence. Here we first assemble and categorize these definitions into two broad families and then show, analytically and empirically, that both families of definitions typically result in strongly Pareto dominated decision policies. In this sense, requiring that these fairness definitions hold can, perversely, harm the very groups they were designed to protect. In contrast to axiomatic notions of fairness, we argue that the equitable design of algorithms requires grappling with their context-specific consequences, akin to the equitable design of policy. We conclude by listing several open challenges in fair machine learning and offering strategies to ensure algorithms are better aligned with policy goals.
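To make the idea of a formal fairness definition concrete, here is a minimal sketch (not from the paper) of one widely used criterion, demographic parity, which requires that positive-decision rates be equal across groups. The data, group labels, and function names below are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of one formal fairness definition: demographic parity.
# A policy satisfies demographic parity when each group receives positive
# decisions at the same rate; the "gap" below measures the violation.

def positive_rate(decisions, groups, g):
    """Fraction of individuals in group g receiving a positive decision (1)."""
    members = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups."""
    rates = [positive_rate(decisions, groups, g) for g in sorted(set(groups))]
    return max(rates) - min(rates)

# Toy data: 1 = favorable decision, 0 = unfavorable; "A"/"B" are
# hypothetical groups invented for this example.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 for A vs. 0.25 for B -> 0.50
```

The paper's argument is that enforcing criteria like a zero parity gap as a hard constraint can produce strongly Pareto dominated policies, i.e., policies that could be improved for every group simultaneously.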

Read the Paper