
Reflections on Scaling Up Learning Analytics

By Qiujie Li, LEARN Postdoctoral Scholar

On March 16, 2021, NYU-LEARN hosted the inaugural event of its new Learning Analytics Conversation Series on the topic of Scaling Up Learning Analytics. Before diving into discussion, the two invited experts presented their perspectives on the challenges and strategies of scaling up learning analytics. Incoming SoLAR president Dr. Maren Scheffel described the importance of higher education institutions involving all stakeholders in developing policies to support learning analytics adoption, while Dr. John Whitmer of the Federation of American Scientists discussed strategies for integrating the research and production of learning analytics. The ensuing discussion, moderated by LEARN Director Dr. Alyssa Wise, explored common challenges and emerging high-level principles for scaling up learning analytics. In this blog post, LEARN postdoctoral scholar Qiujie Li summarizes and reflects on the big ideas about scaling up learning analytics that emerged in the conversation and where they might take us in the near future. The full recording of the conversation is available on the NYU-LEARN YouTube channel.

The puzzling lack of successful large-scale learning analytics adoption

As a Learning Analytics (LA) researcher, my early work focused on designing and testing LA in classroom-level interventions, where I was able to carefully shape how LA tools were introduced to have the greatest possibility of improving student learning. I was encouraged by these relatively small-scale successes, but my high hopes that we would be able to do the same thing on a larger scale to support teaching and learning were dampened rather quickly when I started to investigate how LA tools were used in more independent educational environments. While many instructors and staff were excited about the possibilities of LA, even the most dedicated ones weren't using the tools regularly, and their excitement for LA applications was accompanied by constant challenges and frustrations. Why is large-scale roll-out of LA so difficult? What are the most important challenges, and how can we address them? I took away a few insights from the recent conversation between Maren, John, and Alyssa as I started to explore these questions.

Panelists’ perspectives on challenges and strategies for scaling up LA

Maren’s initial presentation focused on some key findings from the SHEILA project (for those who aren’t familiar, the SHEILA project engaged a wide range of stakeholders in higher education to build a “framework that can be used to assist with strategic planning and policy processes for learning analytics” in order to “enhance systematic adoption of learning analytics on a wide scale”; Tsai et al., 2018). Her comments grounded the tension I feel between the high hopes for LA and the lack of documented success in practice thus far. For example, while more than two thirds of the institutions that participated in the SHEILA project had adopted LA or intended to do so, very few of them were able to meet their intended goals for the LA applications. Common challenges included students’ concerns about privacy and the ethical use of their data, as well as instructors’ need for agency in LA use and their obligation to act when students were flagged as at risk of low performance. These challenges have been increasingly recognized by the LA community (Buckingham Shum et al., 2019) and are, in my opinion, a potential consequence of what is known as the “black box” problem in analytics (Wise et al., 2016; Brooks & Greer, 2014). Typically, students and educators have no knowledge about or control over how analytics are produced and thus may not find the analytics trustworthy or actionable. Maren suggested that one way such issues could be addressed (or even potentially prevented) was by engaging all stakeholders, especially students and educators, throughout the entire process of designing and implementing LA. Needless to say, this is neither easy nor commonly done. But the field is moving in this direction, as human-centered learning analytics (HCLA) and participatory design emerged as key themes of the LAK (Learning Analytics and Knowledge) Conference this year.

While Maren talked about multiple challenges faced by students and educators when adopting LA, John focused on one specific conceptual challenge to scaling up: generalizability. From this perspective, the extent to which an LA application can be scaled up depends on the extent to which its design is context-specific. For instance, designing an LA application to determine why “an 8th-grade student is struggling to learn math concepts” would require much more detailed contextual information about the student population, learning tasks, and environments than developing a prediction model for college student dropout. This is actually quite a provocative idea, since it challenges the basic premise that “large scale” should always be the target for LA. It may be, instead, that only some LA applications (e.g., high-level dashboards and dropout detection models) can be designed based on common needs across widely varying educational settings, and therefore can be successfully scaled up. Even so, let’s not forget that appropriately interpreting and acting upon such analytics will still eventually require detailed contextual information. For example, while a college dropout prediction model can be designed with little contextual information about the specific student population and learning tasks, it only serves as a starting point for educators to look into students who may need attention, and the best responses to address the situation are likely to vary across students.
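John’s point can be made concrete with a small sketch. The code below trains a toy dropout-risk classifier on generic, context-free features — the kind of signals (GPA, credit completion, LMS activity) available in almost any institution — using plain logistic regression. The feature choices, training data, and threshold are entirely hypothetical illustrations, not anything presented by the panelists; the point is that the flagging step generalizes, while deciding how to respond to a flagged student still depends on local context.

```python
import math

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression.

    rows: lists of numeric features; labels: 1 = dropped out, 0 = persisted.
    """
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def dropout_risk(w, b, x):
    """Predicted probability that a student with features x drops out."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Entirely synthetic toy data. Features per student:
# [normalized GPA, fraction of credits completed, weekly LMS logins / 10]
X = [[0.90, 0.95, 0.8], [0.80, 0.90, 0.7], [0.40, 0.30, 0.1],
     [0.50, 0.40, 0.2], [0.85, 0.80, 0.6], [0.30, 0.20, 0.1]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)

# The model can flag a student as at risk, but choosing the response
# (advising, tutoring, financial aid) still requires local context.
flagged = dropout_risk(w, b, [0.35, 0.25, 0.1]) > 0.5
```

Because the features carry no information about particular courses, tasks, or populations, the same pipeline could be deployed across many institutions — which is exactly why this class of LA application scales more readily than a model diagnosing why one student struggles with one concept.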

For the follow-up discussion, Alyssa kicked things off with a stimulating question: does LA struggle to scale up primarily because it lacks a compelling value proposition, or because the effort needed to scale it up is too high? While Maren and John (not surprisingly) suggested that it is a combination of both, they also went a step further to discuss how to tackle the two issues. First, Maren highlighted the need to define a clear, educationally meaningful goal for the LA application and then work backwards to ensure its success. As John put it, the goal of LA should not be to answer analytical questions (e.g., predicting student dropout), but rather to address a pedagogical need (e.g., preventing students from dropping out). In addition, they both echoed the need to recognize that the effort to implement LA at scale is without doubt high (think about how much time and money it takes to collect and store the data, develop and advertise the application, and train students and educators in its use — and how much time students and educators need to spend learning new applications!). Given all of this effort, a further factor contributing to the difficulty of scaling up LA may be the unclear assignment of responsibilities across the many different components of an LA implementation. To address this, John suggested a more structured and systematic approach in which researchers, instructional designers, educational technologists, and stakeholders work collaboratively to share the responsibilities of large-scale LA implementations.

How Human-Centered Design can set us up for large scale LA adoption

I found it interesting to see how this conversation about scaling up LA overlapped with a subsequent discussion in the workshop on HCLA design that I attended at LAK21. The HCLA design workshop started with a short discussion on the importance of giving voice to stakeholders in LA, and led into participatory design as a concrete strategy for putting Maren’s advice to include all stakeholders into practice. However, a concern raised during the workshop was the generalizability of designs created with a small number of stakeholders. While this is a topic that requires further research, some strategies for combining participatory design with generalizability may include involving students with the greatest needs and piloting the resulting design with a sample that is representative of the target population. Moreover, the connection between the two topics inspired me to think about whether HCLA and participatory design could actually be used to support scaling up LA. While the current literature has mainly argued for and attempted to verify the use of participatory design to tailor LA to the needs of stakeholders, I see a role for participatory design in also collecting data that can inform a successful implementation. Specifically, by bringing various stakeholders to the table together, participatory design can help to reveal the diversity of needs among stakeholders and the differences in perceptions between stakeholders and designers, and provide a guided way to identify common ground. Insights emerging from the participatory design process can help anticipate obstacles to implementing the LA solution at large scale and be used to develop informed strategies to overcome them. Even though HCLA and participatory design are fairly new research areas, given the LA community’s increasing interest in them, I am hopeful that more studies will come out exploring what data can be collected through participatory design to help inform large-scale implementation.


References

Brooks, C., & Greer, J. (2014, March). Explaining predictive models to learning specialists using personas. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (pp. 26-30).

Buckingham Shum, S., Ferguson, R., & Martinez-Maldonado, R. (2019). Human-centred learning analytics. Journal of Learning Analytics, 6(2), 1-9.

Tsai, Y. S., Moreno-Marcos, P. M., Jivet, I., Scheffel, M., Tammets, K., Kollom, K., & Gašević, D. (2018). The SHEILA framework: Informing institutional strategies and policy processes of learning analytics. Journal of Learning Analytics, 5(3), 5-20.

Wise, A. F., Vytasek, J. M., Hausknecht, S., & Zhao, Y. (2016). Developing learning analytics design knowledge in the “middle space”: The student tuning model and align design framework for learning analytics use. Online Learning, 20(2), 155-182.