Does question ordering matter on a survey or exam?

Dr. Klint Kanopka and a colleague published a paper in the Journal of Educational and Behavioral Statistics that examines what happens when the probability of a correct response depends on an item's position in a test and proposes a position-sensitive IRT model to account for it.

Abstract

Standard item response theory (IRT) models are ill-equipped for the situation in which the probability of a correct response depends on the location in the test where an item is encountered—a phenomenon we refer to as position effects. Unmodeled position effects complicate comparing respondents taking the same test. We propose a position-sensitive IRT model that is a mixture of two item response functions, capturing the difference in response probability when the item is encountered early versus late in the test. The mixing proportion depends on item location and latent person-level characteristics, separating person and item contributions to position effects. We present simulation studies outlining various features of model performance and end with an application to a large-scale admissions test with observed position effects.
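To make the idea concrete, one way such a mixture can be written (a rough sketch with illustrative notation, not the paper's actual parameterization) combines two 2PL item response functions with a logistic mixing weight:

P(X_{ij} = 1 \mid \theta_i) = \pi_{ij}\, P_j^{\text{early}}(\theta_i) + (1 - \pi_{ij})\, P_j^{\text{late}}(\theta_i), \qquad P_j^{k}(\theta) = \frac{1}{1 + \exp\{-a_j^{k}(\theta - b_j^{k})\}},

\pi_{ij} = \frac{1}{1 + \exp\{-(\gamma_i - \lambda\, \mathrm{pos}_{ij})\}},

where pos_{ij} is the position at which respondent i encounters item j, \gamma_i is a person-level parameter governing susceptibility to position effects, and a_j^{k}, b_j^{k}, \lambda are item and scale parameters; all of these symbols are placeholders chosen here for illustration. In a sketch of this kind, a larger \gamma_i keeps the "early" response function in play longer, while \lambda controls how quickly responses shift toward the "late" function as items appear deeper in the test, mirroring the abstract's separation of person and item contributions to position effects.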

Read paper