The “How” of Terracotta

Guiding the process of building experiments while maintaining flexibility

Saying that Terracotta is an "experiment builder" should give you a general idea of what it does, and saying that it is an experiment builder in a learning management system (LMS) should indicate where it sits, but you might still wonder how it works. How do you give teachers and researchers the flexibility to build custom experiments within an environment that was never designed for experimental manipulation?

It's a tough question. During our design process, we've noticed a recurring tension between the desire to completely model the experimental procedure within Terracotta (to control everything) and the opposing desire to make complete use of native LMS structures and features (to control as little as possible). Neither extreme would be successful. If we controlled everything, Terracotta would become a behemoth with a constellation of parameters that duplicate LMS features like open dates, due dates, and grading policies (an LMS within an LMS), but if we relied too heavily on the LMS, we'd lose flexibility and unnecessarily constrain research creativity.

The Assignment "Bucket"

Our solution is to think of an assignment within the LMS as a bucket. The bucket is native to the LMS and comes with useful course features, like open dates, due dates, and grading policies, but on its own it's just an empty container.

Now imagine that you could fill the bucket with a quicksilver-like substance whose contents appear different depending on who's looking in. That's how Terracotta works.

Terracotta will populate LMS assignments (will fill the buckets) with learning activities and materials that change depending on who's looking at them, automatically managing experimental variation within the buckets. When a student assigned to Condition A opens an assignment, it will reveal one version of a learning activity, but when another student (assigned to Condition B) clicks on the same assignment, they'll see a different learning activity within the bucket. In other words, different experimental treatments will exist within a single assignment bucket, and Terracotta will keep track of who sees what.
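To make the bucket idea a little more concrete, here is a minimal sketch (all names and structures below are hypothetical illustrations, not Terracotta's actual implementation): a single LMS assignment holds one treatment per condition, the version a student sees is resolved from their assigned condition, and each exposure is recorded.

```python
from dataclasses import dataclass, field


@dataclass
class Treatment:
    """One version of the learning activity that can live inside the bucket."""
    condition: str       # e.g., "Condition A" or "Condition B"
    activity_html: str   # what the student actually sees and does


@dataclass
class AssignmentBucket:
    """A single LMS assignment acting as a container for multiple treatments."""
    lms_assignment_id: str
    treatments: dict[str, Treatment] = field(default_factory=dict)
    exposures: list[tuple[str, str]] = field(default_factory=list)  # (student, condition)

    def add_treatment(self, treatment: Treatment) -> None:
        self.treatments[treatment.condition] = treatment

    def view(self, student_id: str, assigned_condition: str) -> str:
        """Return the activity for this student's condition and record who saw what."""
        treatment = self.treatments[assigned_condition]
        self.exposures.append((student_id, treatment.condition))
        return treatment.activity_html


# Two students open the same LMS assignment but see different activities.
bucket = AssignmentBucket(lms_assignment_id="week-3-practice")
bucket.add_treatment(Treatment("Condition A", "<p>Re-read the chapter and write a summary.</p>"))
bucket.add_treatment(Treatment("Condition B", "<p>Answer a set of retrieval-practice questions.</p>"))

print(bucket.view(student_id="student-101", assigned_condition="Condition A"))
print(bucket.view(student_id="student-102", assigned_condition="Condition B"))
```

The key point is that the LMS only ever sees one assignment; the experimental variation lives entirely inside it.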

From the student's perspective, they'll be completing assignments as normal within the LMS, with no outward sign that an experimental assignment is any different from the others.

From the teacher's perspective, there'll be a little extra work involved. Teachers will create their assignments within Terracotta, specifying which treatments the assignment will contain and how those treatments correspond to students' randomly assigned experimental conditions. Treatments (what a student sees and does for the assignment) can be uploaded or built directly within Terracotta. Once an assignment is created in Terracotta, the teacher will then go to the LMS to fill a bucket, populating an LMS assignment container with the assignment they'd just created in Terracotta.
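Again purely as a hypothetical sketch (these class and field names are illustrative, not Terracotta's API), the teacher-side workflow described above amounts to two steps: build the assignment and its treatments in Terracotta, then fill the empty LMS container by linking it to that assignment.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TerracottaAssignment:
    """An assignment built in Terracotta: a title plus one treatment per condition."""
    title: str
    treatments_by_condition: dict[str, str] = field(default_factory=dict)

    def add_treatment(self, condition: str, activity: str) -> None:
        self.treatments_by_condition[condition] = activity


@dataclass
class LmsAssignmentContainer:
    """The empty LMS 'bucket', with its own native course settings."""
    name: str
    due_date: str
    points_possible: float
    contents: Optional[TerracottaAssignment] = None  # empty until the teacher fills it


# Step 1: create the assignment and its treatments in Terracotta.
practice = TerracottaAssignment(title="Week 3 practice")
practice.add_treatment("Condition A", "Re-read the chapter and write a summary.")
practice.add_treatment("Condition B", "Answer a set of retrieval-practice questions.")

# Step 2: in the LMS, fill the bucket by pointing the native assignment
# container at the assignment that was just created in Terracotta.
bucket = LmsAssignmentContainer(name="Week 3 practice", due_date="2021-10-01", points_possible=10)
bucket.contents = practice
```

Because the LMS container keeps its own native settings (due date, points, grading policy), the course behaves as it normally would; only the contents vary by condition.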

This design allows Terracotta to embed an experiment tightly within a course, leveraging native LMS features so that assignments appear just as they normally would. It also gives researchers the same flexibility to develop experimental treatments that teachers would normally have when populating assignments.