In the fall of 2020, Dr. Alejandro Ganimian worked with the Center for Universal Education at the Brookings Institution to examine technology's impact on learning and teaching, especially in low- and middle-income countries. The resulting report, "Realizing the Promise: How can education technology improve learning for all?" discusses how education technology, or "ed-tech," has had a limited impact in these countries. Dr. Ganimian spoke with On the Ground about some of the surprising takeaways, the impact of the COVID-19 pandemic, and the connection between ed-tech and inequality.
The report emphasizes that interactions among students, teachers, and content are what matters most in improving learning. What is the relationship between these interactions and ed-tech interventions, and why is that important?
The importance of the interaction between students, teachers, and content is not a new idea from our work. It was first recognized by David Cohen and Deborah Ball, who called it the “instructional core” in a well-known 1999 report. Cohen and Ball argued that education reforms in the United States at the time had failed to produce meaningful changes in instruction and learning because they focused on improving only one of these three aspects at a time (e.g., changing the curriculum or increasing teachers’ knowledge). They contended that a school’s instructional capacity is a function of the interaction between students, teachers, and content: teachers’ ability and resources influence how they incorporate, interpret, and respond to materials and students; students’ resources influence what teachers can accomplish; and materials mediate students’ engagement with the content to be learned.
Emiliana Vegas, Rick Hess, and I use Cohen and Ball’s instructional core framework to explain how ed-tech interventions could impact interactions between students, teachers, and content. This is useful because it allows us to ask the question: if we wanted to improve the relationship between any two factors, what are the most effective ed-tech interventions to do so? For example, if we wanted to improve the relationship between students and content, we could provide students with “pre-loaded hardware” (e.g., laptops or tablets with educational content) or offer students the opportunity to interact with “computer-adaptive learning” (e.g., educational software that adjusts its difficulty depending on the level and rate of progress that each child is making)—among other options. Existing evidence from low- and middle-income countries suggests that while the former has consistently failed to produce meaningful impacts on student learning, the latter has produced moderate-to-large gains in student achievement.
In short, the instructional core is a lens through which researchers, policymakers, and practitioners can make sense of the evidence on ed-tech, setting a clear goal for improvement and then figuring out which are the best options to pursue that goal.
Your report suggests that, despite their promise, ed-tech interventions and resources have had few effects on student learning—and, in some cases, ed-tech has even distracted students from schoolwork. Can you tell us more about why ed-tech is not a “one size fits all” solution?
Our report makes the point that, despite the grandiose rhetoric about the potential of education technology to “disrupt” instruction and learning, its results have been disappointing: the effects of “hardware” interventions (e.g., free computers, laptops, or tablets) have been null or even negative (by allowing students to play video games instead of doing schoolwork), and those of most “software” interventions (e.g., remedial or game-based educational products) have been small to moderate, with a few notable exceptions that I discuss below. (See appendix C of my paper with Karthik Muralidharan and Abhijeet Singh for a review, and similar reviews from Maya Escueta and colleagues and Daniel Rodriguez-Segura.)
We believe that most ed-tech interventions have failed to deliver on their promise for three main reasons. In some cases, governments adopt technologies for which they lack the adequate infrastructure. In other cases, they adopt technologies with a poor track record of improving student learning. And in yet other cases, they adopt a technology and “hope for the best.” The high-profile failure of the One Laptop per Child program in Peru illustrates all three reasons. First, the country rushed to give away computers to children despite the lack of Internet connectivity in its most disadvantaged areas. Second, it had the misfortune of taking up a specific technology (i.e., free netbooks) that two rigorous evaluations had found to have no effect on students’ academic or computer skills (see here and here). And third, the policy continued long after it was found to be ineffective—not only in Peru, but also in many other Latin American countries that adopted similar free-netbook programs.
What are the major takeaways from your report for educators and school leaders? How do you propose schools should shift their thinking when it comes to ed-tech?
We argue for a simple yet surprisingly rare approach to education technology. We contend that developing countries interested in adopting an ed-tech intervention should first understand the needs, infrastructure, and capacity of their own school system; survey the best available evidence on interventions that match those conditions; and closely monitor the results of innovations before they are scaled up.
The first step (i.e., the “diagnosis”) should focus on understanding the specific needs to improve student learning (e.g., raising the average level of achievement, remediating gaps among low performers, and challenging high performers to develop higher-order skills); the infrastructure to adopt technology-enabled solutions (e.g., electricity connection, availability of space and outlets, stock of computers, and Internet connectivity at school and at students’ homes); and capacity to integrate technology in the instructional process (e.g., students’ and teachers’ level of familiarity and comfort with hardware and software, their beliefs about the level of usefulness of technology for learning purposes, and their current uses of such technology). Whenever possible, we encourage school systems to rely on data that they already have—e.g., from background questionnaires associated with national and international assessments—before trying to collect their own data, which is costly and time-consuming.
The second step (i.e., the “evidence review”) should seek to identify policies or programs that leverage the “comparative advantages” of technology to complement—not substitute—regular instruction (i.e., that take advantage of the things that technology does best, relative to human beings). These include scaling up standardized instruction (e.g., through pre-recorded lessons, distance education, or pre-loaded hardware); facilitating differentiated instruction (e.g., through computer-adaptive learning or live one-on-one tutoring); expanding opportunities for practice (e.g., through practice exercises); and/or increasing student engagement (e.g., through video tutorials, games, and gamification; see also my recent contribution to this report from The Economist Intelligence Unit on this question).
The third and final step (i.e., the “prognosis”) should first assess the degree of overlap between the diagnosis and the evidence review. An intervention may leverage a comparative advantage of technology (e.g., it could provide personalized learning to students of varying learning levels), but it could be ill-suited for the infrastructure or capacity of the school system (e.g., it could be the case that very few schools have enough computers for students to interact with software on a one-on-one basis). Similarly, an intervention may be well suited for the context (e.g., a live tutoring program may help address a shortage of teachers), but it may not have enough of a proven track record (e.g., in developing countries, there is only one evaluation of a small-scale pilot of tutoring through Skype). The less “proven” the intervention is, the more the country should focus on closely monitoring its implementation and impact on both instruction and learning. Odds are it will take several iterations to identify gaps and course correct.
In short, ed-tech interventions can rarely be implemented “off the shelf.” Interventions have to be fit for purpose and context, and even when they are, they need to be monitored closely to identify gaps in implementation, address them, and observe the results of such changes.
It seems like the COVID-19 pandemic has changed schools' relationship to ed-tech more dramatically than anyone could have imagined. What (if any) lessons do you take from this experience that can inform how we move forward with ed-tech?
I would say that the COVID-19 pandemic has resuscitated the rhetoric about the potential of technology to disrupt education—a discourse as old as the invention of radio, which has resurfaced every time a new technology is adopted (e.g., TV, computers, laptops, and tablets).
The reality of “remote learning,” however, has been far more sobering. The bulk of the analytical work that I have seen since the onset of the pandemic, largely from developing countries, reveals that most school systems were unprepared to transition from fully in-person classes to fully remote learning. Teachers were expected to shoulder the burden of this transition: they had to constantly adjust their lessons and were asked to return to schools (in some cases, before the risks were fully understood) only to be swiftly required to go back home (in most cases, with little to no notice). Meanwhile, students varied widely in their capacity to take advantage of remote learning (due to unequal access to computers, Internet connectivity, and adequate bandwidth), engage with the material, and sustain their engagement during the school year. The parents of young children frequently shared the burden with teachers, either “playing teachers” (a role for which many of them felt unprepared) or seeking additional supports, reinforcing already existing inequalities along racial, ethnic, language, and socio-economic lines.
Certainly, some efforts to mitigate learning loss through technology have had promising results. For example, Noam Angrist and colleagues used a combination of text messages and phone calls to teach basic numeracy to primary-school students in Botswana, which had modest but cost-effective impacts on student learning when compared to a business-as-usual group.
I do not wish to take away from the ingenuity displayed by some during this period of crisis. Yet I fear that attention to these efforts may lead us to miss the forest for the trees: most governments in the developing world remain unclear on what they want ed-tech to do, many of them are ill-equipped to adopt available innovations, and when they do take them on, it is mostly to signal commitment to education, with relatively little concern for implementation quality and even less regard for the impacts of such efforts.
In my humble opinion, even as the pandemic has sparked talk of the “promise” of ed-tech in policy and international aid circles (yet one more time), we continue to expect too much from technology and to do too little to ensure that it delivers. Not much has changed.
How does ed-tech intersect with issues of social justice and inequality? How should teachers, school administrators, and policymakers factor equity into their decision making around ed-tech interventions?
I think this is an important question. My impression is that conversations about ed-tech and social justice—although they are rarely couched in these terms—have centered on ensuring that children from low-income families have equitable access to technology (e.g., computers, Internet, etc.).
To some extent, the current focus on equal access is intuitive: if children and adolescents from disadvantaged backgrounds do not have access to the hardware and connectivity required to do schoolwork and participate in society, they are likely to fall behind their more affluent peers.
Yet, I cannot help noticing the parallels between this approach and the movement to increase access to schooling throughout the developing world over the past two decades. As Lant Pritchett has argued, this movement proceeded with the implicit assumption (and, in some cases, the explicit expectation) that if we built schools and hired teachers, learning would follow. Pritchett contends, however, that developing countries responded by building schools that look like those in developed nations but that, in practice, fail to ensure even the most basic skills. (This explains why the United Nations went from prioritizing school enrollment in the Millennium Development Goals for 2015 to prioritizing learning in the Sustainable Development Goals for 2030.) I worry that, much like developing countries once built school systems that largely failed to deliver learning, they will now provide technology hardware to all only to subsequently realize that they have not meaningfully changed students’ daily experiences at school.
In my view, our focus should be on how education technology affects the learning experience for the most disadvantaged students. For example, in a recent study in private unaided schools across seven Indian cities (joint with Andreas de Barros and Anuja Venkatachalam), we found that while tech-enabled independent practice had a null effect on the average middle-school student, it considerably improved the math achievement of initially low-performing students. Similarly, in another study of “model” public schools (also with Andreas de Barros) in the Indian state of Rajasthan, we found that while tech-enabled personalization had no impact on the typical sixth-to-eighth-grader, it had large effects on the math performance of low performers.
I very much hope that the research and policy agenda for equity in ed-tech in the developing world focuses on ensuring equity in learning, not inputs.