Does Algorithmic Uncertainty Sway Human Experts? Evidence from a Field Experiment in Selective College Admissions

Wed Apr 29
11 am - 12 pm ET

A PRIISM Seminar by Stanford University's Hansol Lee

Join PRIISM and Ph.D. candidate Hansol Lee to learn how high-stakes decision-makers, such as those in college admissions, are not easily swayed by arbitrary differences in AI predictions, even though similarly accurate AI models can assign different scores to the same person.

This seminar will not be recorded.

Abstract

Algorithmic predictions are inherently uncertain: even models with similar aggregate accuracy can produce different predictions for the same individual, raising concerns that high-stakes decisions may become sensitive to arbitrary modeling choices. In this paper, we define algorithmic sensitivity as the extent to which a decision outcome depends on whether a more favorable versus less favorable algorithmic prediction is presented to the decision-maker. We estimate this in a randomized field experiment (n=19,545) embedded in a selective U.S. college admissions cycle, in which admissions officers reviewed each application alongside an algorithmic score while we randomly varied whether the score came from one of two similarly accurate prediction models. Although the two models performed similarly in aggregate, they frequently assigned different scores to the same applicant, creating exogenous variation in the score shown. Surprisingly, we find little evidence of algorithmic sensitivity: presenting a more favorable score does not meaningfully increase an applicant's probability of admission on average, even when the models disagree substantially. These findings suggest that, in this expert, high-stakes setting, human decision-making is largely invariant to arbitrary variation in algorithmic predictions, underscoring the role of professional discretion and institutional context in mediating the downstream effects of algorithmic uncertainty.

Hansol Lee

Hansol Lee is a Ph.D. candidate in Education Data Science at Stanford University. Her research examines how algorithmic systems shape high-stakes decisions in practice, with a focus on human-AI decision-making, algorithmic fairness, and measurement in educational contexts. She holds a B.A. and M.S. in Computer Science from Cornell University and is a recipient of the Stanford Graduate Fellowship. 

NYU provides reasonable accommodations to people with disabilities. Please submit your request for accommodations for events and services at least two weeks before the date of your accommodation need. Although we cannot guarantee that requests received less than two weeks before the event can be met, you should still contact us and we will do our best to meet your accommodation need. Please email susana.toro@nyu.edu for assistance.