Friday, February 20, 12-1 PM
370 Jay Street, 522
Generative AI is increasingly shaping how we learn, work, communicate, and create. These technologies offer unprecedented opportunities for people of all ages to explore ideas, express creativity, and personalize learning. Yet many AI systems operate as “black boxes,” making it difficult to understand their decision-making. Challenges such as hallucinations, biased outputs, and deepfakes underscore the need for new forms of literacy that help users question, verify, and make sense of AI-generated content. This is especially pressing for children, who often anthropomorphize AI and overtrust its responses, forming a relational frame in which AI feels both authoritative and emotionally safe. In this talk, I will share how I design interactive tools and learning experiences that help children, families, and educators make sense of and critically engage with AI systems.
About the speaker
Aayushi Dangol is a doctoral candidate in Human-Centered Design & Engineering at the University of Washington. Her research focuses on how children can learn with and about artificial intelligence. She designs and studies AI literacy tools and learning experiences that help children understand how AI systems work, reflect on their social and environmental impacts, and exercise agency in AI-mediated learning environments. She earned her BA in Computer Science and Studio Art from Swarthmore College and worked as a middle school teacher before beginning her PhD.