
New Assessment Tool Available to Measure Computational Thinking for Elementary Students

K-12 computer science education has expanded dramatically in NYC and around the country, driven by digital workforce demands, policy initiatives, and a desire for equitable access and participation. Educators are increasingly focused on cultivating “computational thinking” (CT) skills to prepare students for success in school and beyond. CT skills consist of problem-solving techniques like algorithmic thinking, abstraction, decomposition, and debugging, and they are essential not only in computer science but across many other disciplines. Despite their practical importance, however, measuring these skills in a consistent and efficient way remains a challenge in classrooms.

In response to this need, the Maker Partnership Program, a research-practice partnership between the Research Alliance for New York City Schools, MakerState, and Schools That Can, developed the Computational Thinking Assessment for Elementary Students (CTAES), a tool designed to evaluate CT skills in 3rd to 5th grade students. 

About the CTAES

The CTAES was developed through an iterative process that included input from educators and our colleagues at MakerState and Schools That Can. Our team adopted and modified items from existing assessments like the Computational Thinking Test (CTt), the Computational Thinking Abilities-Middle Grades (CTA-M) Assessment, and the VELA Assessment, which were designed for use across various educational settings. The CTAES is made up of 10 items – 9 multiple-choice questions and 1 open-ended question – that evaluate students’ abilities to think algorithmically and apply computational processes in different contexts.

 

A copy of the tool is available for download.

Assessing the CTAES’ Validity and Reliability

A new study by Lijun Shen, Zitsi Mirakhur, and Sarah LaCour investigates the psychometric features of the tool, aiming to provide essential insights into how well the CTAES measures CT skills. The study draws on CTAES responses from more than 200 students at eight elementary schools across New York City. The paper uses Rasch analyses to evaluate the assessment’s validity and reliability and to examine whether its items function consistently across different demographic groups.
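For readers unfamiliar with Rasch analysis, the sketch below illustrates the standard one-parameter Rasch model on simulated right/wrong data. It is our own illustration, not code or data from the study: each student gets an ability estimate and each item a difficulty estimate on a common scale, which is what supports statements about item difficulty and reliability. The sample size and item count are placeholders that loosely mirror the study.

```python
# Minimal, illustrative Rasch (one-parameter logistic) model fit on simulated data.
# Each student p has an ability theta_p and each item i a difficulty b_i, with
#   P(correct) = 1 / (1 + exp(-(theta_p - b_i))).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n_students, n_items = 200, 9                   # placeholders loosely mirroring the study
true_theta = rng.normal(0.0, 1.0, n_students)  # simulated student abilities
true_b = np.linspace(-1.5, 1.5, n_items)       # simulated items spanning a range of difficulty
responses = (rng.random((n_students, n_items)) <
             expit(true_theta[:, None] - true_b[None, :])).astype(int)

def neg_log_likelihood(params, data):
    """Joint negative log-likelihood of all abilities and difficulties under the Rasch model."""
    theta, b = params[:data.shape[0]], params[data.shape[0]:]
    p = np.clip(expit(theta[:, None] - b[None, :]), 1e-9, 1 - 1e-9)
    return -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

fit = minimize(neg_log_likelihood, np.zeros(n_students + n_items),
               args=(responses,), method="L-BFGS-B")
b_hat = fit.x[n_students:]
# The Rasch scale is only identified up to a shift, so report difficulties centered at zero.
print(np.round(b_hat - b_hat.mean(), 2))
```

Item difficulty estimates like these, together with the corresponding person estimates, are what underlie the validity, reliability, and item-difficulty findings summarized below.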

Key Findings

  1. Validity and Reliability: There is suggestive evidence that the CTAES measures CT efficiently and effectively, and that the current assessment includes items that range in difficulty, with some questions being easier for students to answer correctly than others.
  2. Equity: A separate set of analyses shows that the CTAES generally performs similarly across gender and racial/ethnic groups, indicating that the assessment does not exhibit significant bias (a sketch of one common way to check this appears after this list).
  3. Challenges and Recommendations: The CTAES may need further refinement to improve its sensitivity to a broader range of student abilities. Short assessments can produce only limited variation in student scores, making it hard to distinguish students’ abilities accurately, so striking the right balance in assessment length and question selection is important for obtaining more reliable results. The authors suggest pairing the CTAES with other forms of assessment (e.g., examining students’ code), which could provide a more comprehensive understanding of students’ CT abilities.
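To make the equity finding above concrete, the sketch below shows one common way researchers check for differential item functioning (DIF): model the probability of answering an item correctly as a function of a student’s score on the remaining items plus a group indicator. This is a generic illustration on fabricated data, not the specific procedure used in the paper, and the item index and `group` variable are placeholders.

```python
# Generic logistic-regression check for differential item functioning (DIF) on fabricated data.
# A large, significant coefficient on the group indicator would suggest the item behaves
# differently for students of the same overall ability in different groups.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_students, n_items = 200, 9
responses = rng.integers(0, 2, size=(n_students, n_items))  # placeholder right/wrong data
group = rng.integers(0, 2, size=n_students)                  # placeholder binary group indicator

item = responses[:, 0]                       # the item being checked
rest_score = responses[:, 1:].sum(axis=1)    # ability proxy: score on the other items
X = sm.add_constant(np.column_stack([rest_score, group]))
result = sm.Logit(item, X).fit(disp=0)
print(result.params)       # [intercept, rest-score effect, group effect]
print(result.pvalues[2])   # a small p-value on the group term flags possible DIF
```

A full analysis would repeat this check for every item and for each demographic comparison of interest.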

Conclusion

The CTAES appears to be a strong tool for assessing computational thinking in elementary students, though further refinement is necessary to enhance its accuracy. Teachers play a critical role in both utilizing and improving assessment tools like the CTAES, which can support the development of essential CT skills in students. 

 

This summary was authored by Faith Northern with Chelsea Farley and Zitsi Mirakhur. For more detailed findings, please see “Investigating the psychometric features of a locally designed computational thinking assessment for elementary students.”