Matthew West

Using a computer-based testing facility to improve student learning in a programming languages and compilers course

T. Nip, E. L. Gunter, G. L. Herman, J. W. Morphew, and M. West

in Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE 2018), 568-573, 2018.

While most efforts to improve students' learning in computer science education have focused on designing new pedagogies or tools, comparatively little research has focused on redesigning examinations to improve students' learning. Cognitive science research, however, has robustly demonstrated that getting students to practice using their knowledge in testing environments can significantly improve learning through a phenomenon known as the testing effect. The testing effect has been shown to improve learning more than rehearsal strategies such as re-reading a textbook or re-watching lectures. In this paper, we present a quasi-experimental study examining the effect of frequent, automated examinations in an advanced computer science course, "Programming Languages and Compilers" (CS 421). In Fall 2014, students were given traditional paper-based exams; in Fall 2015, a computer-based testing facility enabled the course to offer more frequent examinations while other aspects of the course were held constant. A comparison of 292 student scores across the two semesters revealed a significant change in the distribution of students' grades, with fewer students failing the final examination and proportionately more students earning grades of B and C instead. These data suggest that redesigning the nature of examinations may indeed be a relatively untapped opportunity to improve students' learning.

DOI: 10.1145/3159450.3159500

Full text: NiGuHeMoWe2018.pdf