Matthew West

A validated scoring rubric for Explain-in-Plain-English questions

B. Chen, S. Azad, R. Haldar, M. West, and C. Zilles

in Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE 2020), 2020.

Previous research has identified the ability to read code and understand its high-level purpose as an important developmental skill that is harder to do (for a given piece of code) than executing the code in one's head for a given input ("code tracing"), but easier to do than writing the code. Prior work on code reading ("Explain in plain English") problems has used a scoring rubric inspired by the SOLO taxonomy, but we found it difficult to employ because it did not adequately handle the three dimensions of answer quality: correctness, level of abstraction, and ambiguity. In this paper, we describe a 7-point rubric that we developed for scoring student responses to "Explain in plain English" questions, and we validate this rubric through four means. First, we find that the scale can be applied reliably, with a median Krippendorff's alpha (inter-rater reliability) of 0.775. Second, we report on an experiment to assess the validity of our scale. Third, we find that a survey consisting of 12 code reading questions had high internal consistency (Cronbach's alpha = 0.954). Last, we find that our scores for code reading questions in a large-enrollment (N = 452) data structures course are correlated (Pearson's R = 0.555) with code writing performance to a similar degree as found in previous work.

DOI: 10.1145/3328778.3366879

Full text: ChAzHaWeZi2020.pdf