Speeding Time to Discovery with Digital Learning Platforms

Wondering about the challenges and opportunities presented by platform-embedded educational experiments? Read on!

In the ever-evolving landscape of educational research, cluster randomized trials have held high status as the ideal research design for demonstrating causal impact. Cluster randomized trials are a form of randomized controlled trials (RCTs) where, instead of randomly assigning individual students to different treatments, a researcher randomly assigns a group of students (e.g., entire classes or schools) to treatment conditions. A key benefit of cluster randomized trials is that students within any single class will receive the same treatment, preventing cross-treatment contamination between students. However, cluster randomized trials are currently facing scrutiny for being expensive, slow, and challenging to scale. Enter digital learning platforms (DLPs), offering a potential alternative that promises to lower costs and accelerate discovery. This shift prompts a closer look at the challenges and opportunities presented by platform-embedded educational experiments, specifically the tradeoffs between cluster randomized trials and individually randomized experiments in a digital platform, like Terracotta.
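To make the contrast concrete, here's a minimal sketch of the two assignment schemes in Python. The function names and data shapes are invented for illustration; this is not how Terracotta itself assigns conditions, and a real trial would also balance condition counts rather than draw each one independently:

```python
import random

def cluster_randomize(classes, conditions, seed=0):
    """Cluster RCT: every student in a class gets the same condition."""
    rng = random.Random(seed)
    assignment = {}
    for class_name, students in classes.items():
        condition = rng.choice(conditions)  # one draw per class
        for student in students:
            assignment[student] = condition
    return assignment

def individually_randomize(classes, conditions, seed=0):
    """Individual RCT: each student is assigned independently,
    so classmates may be in different conditions."""
    rng = random.Random(seed)
    return {student: rng.choice(conditions)  # one draw per student
            for students in classes.values()
            for student in students}
```

Under cluster randomization, classmates always share a condition, which is exactly what prevents cross-treatment contamination; under individual randomization, the conditions are mixed within each class.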

This is a trade-off because random assignment at the instructor or section level (a cluster randomized trial) doesn't make much sense within Terracotta. If a researcher has the means to manage a cluster randomized trial, they can simply manipulate what the instructor is asked to implement: the instructor already has full flexibility to customize the LMS for a common student experience, and there aren't many technical obstacles for the learning platform to overcome. There may still be some value in Terracotta's informed consent and deidentification features (e.g., at least one researcher has used Terracotta to run a single undifferentiated survey in a single large class, because it smooths things over with her IRB), but cluster randomized trials are not what we aim to support with Terracotta.

For its part, the Institute of Education Sciences (IES) envisions DLP-enabled research as a means to move away from cluster randomized trials, or at least to offer a viable alternative to them. Cluster randomized trials are expensive, slow, and challenging to scale (due to intertwined recruitment and statistical power issues), and IES has made it clear that DLP-enabled research is intended to lower costs and accelerate discovery at scale. While a cluster randomized trial has many design advantages, IES has repeatedly argued that these kinds of evaluations are unsustainable. One can avoid many potential research confounds by running cluster randomized trials, but there's no such thing as a perfect experiment, and we can still do a lot with individually randomized trials in a digital learning environment.

But what about cross-treatment contamination? Can you really control, at an individual level, what students are exposed to in Canvas?

We address this in a recent article in Behavior Research Methods:

Unlike laboratory research where participants are typically isolated from one another, student participants in a Terracotta experiment are classmates who are not isolated from one another. Students do communicate modestly about schoolwork with their classmates outside of class, commonly by sharing answers, artifacts, and summaries (Asterhan & Bouton, 2017; Bouton et al., 2021). If participants communicate about experimentally manipulated assignments, and this communication exposes them to treatments that were outside their assigned condition, contamination has occurred. Cross-treatment contamination is nothing new in education research (Cook, 2007), and while Terracotta differentiates the treatments that students can access in the LMS, it cannot prevent students from talking with one another. This possibility reinforces the importance of being transparent with student participants: letting them know that, should they agree to participate in a research study within Terracotta, they may have slightly different learning experiences than their classmates, and that they should avoid talking with each other about these experiences. Nevertheless, should cross-treatment contamination occur, this will blur the intended contrast between conditions, and at worst, the consequence would be an underestimate of the effect of an experimental manipulation. In general, researchers should be aware that experimental control is more challenging in the real world, and that there is a risk of observing smaller effect sizes than in the laboratory (Hulleman & Cordray, 2009; Vanhove & Harms, 2015), although sometimes such differences are not observed (Mitchell, 2012).
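The attenuation mechanism described above can be illustrated with a small simulation (hypothetical, not drawn from the article): if some fraction of control students are exposed to the treatment through classmates, the estimated treatment effect shrinks roughly in proportion to the contamination rate, but with a platform-scale sample it remains clearly detectable.

```python
import random
import statistics

def simulate_effect(n, true_effect=0.5, contamination=0.0, seed=1):
    """Simulate an individually randomized experiment where a fraction
    of control students are 'contaminated' (exposed to the treatment).
    Returns the estimated treatment effect (difference in group means)."""
    rng = random.Random(seed)
    treat, ctrl = [], []
    for _ in range(n):
        if rng.random() < 0.5:  # individual random assignment
            treat.append(rng.gauss(true_effect, 1.0))
        else:
            exposed = rng.random() < contamination
            # A contaminated control student behaves like a treated one
            ctrl.append(rng.gauss(true_effect if exposed else 0.0, 1.0))
    return statistics.mean(treat) - statistics.mean(ctrl)
```

With a true effect of 0.5 and 30% contamination, the expected estimate is about 0.35 (a 30% underestimate), which a sample of tens of thousands of students still distinguishes easily from zero. This is the sense in which larger platform samples can absorb some contamination.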

In this way, we see cross-treatment contamination as something of a boogeyman: there’s minimal evidence of problems caused by cross-treatment contamination (but please tell us if you are aware of such evidence!). And even if cross-treatment contamination exists, it might be overcome by the larger sample sizes afforded by platform-enabled research (Torgerson, 2001). But, isn’t it weird to have two different groups of classmates doing two different things in the LMS?

Not really. It's not at all uncommon in education to have different groups of students doing different things. Different students will often be assigned to different topics (e.g., doing reports on different national parks, different Civil War battles, different historical periods, etc.) to enable broad coverage within a class. Universal Design for Learning (UDL) advocates that students should be given options in how they demonstrate learning and improvement. Furthermore, to prevent themselves from being overwhelmed with grading, instructors will sometimes stagger which group submits writing assignments during which class unit. Differentiating activities is a normal thing to do.

In these contexts, students know that they'll be doing something different from their peers, and we think this should also be the case for research manipulations. In Terracotta, we don't think it's necessary, or ethical, for students to be totally uninformed that an experimental research study is happening. Students should know that the instructor is trying something new, and that they'll be experiencing something different from their peers. Like cross-treatment contamination, the Hawthorne effect is something of a boogeyman -- there's minimal evidence of it causing issues in education. We think the benefits of transparency outweigh the potential issues that transparency may cause.

Taking stock, informed consent and transparency with students, coupled with the general normalcy of different students doing distinct activities during routine instruction, make individual randomization both feasible and ethical.

Digital learning platforms like Terracotta speed the time from research to results by randomizing participants automatically, thus easing study implementation; they can also ease the processes of data collection and analysis. Individual randomization isn’t difficult when using a digital learning platform; what’s more, it’s common to have students engaged in different activities, be they group projects, writing assignments, or creative praxis, within the context of a single classroom. Despite the possibility of some degree of cross-treatment contamination, individually randomized and digitally enabled research puts actionable insights within reach for researchers, and for the teachers and students who might benefit from them.