Computerized testing: A vision and initial experiences
C. Zilles, R. T. Deloatch, J. Bailey, B. B. Khattar, W. Fagen, C. Heeren, D. Mussulman, and M. West
in Proceedings of the 122nd American Society for Engineering Education Annual Conference and Exposition (ASEE 2015), 26.387.1-26.387.13, 2015.
In a large (200+ students) class, running exams is a logistical nightmare. Such exams require scheduling conflict exams and figuring out how to address the full range of Bloom's taxonomy learning goals in a manner that can be efficiently graded to give students quick feedback. Typically, these hassles lead instructors to give a few large, heavily multiple-choice exams, which can be suboptimal for student learning. In this paper, we pursue a different vision, enabled by making a computer a central part of the testing process. We envision a computerized testing center, proctored 60-80 hours/week. When a course assigns a (computerized) exam, the professor specifies a range of days for the exam and each student reserves a time at their convenience. When students arrive, they are ushered to a machine that has been booted into the specified exam configuration (many different exams run in the testing center concurrently). The student logs in and is guided through the exam. Each exam consists of a random selection of parameterized problems meeting coverage and difficulty criteria, so each exam is different. The networking on the machine is configured to prevent unauthorized communication, and the system displays and controls the remaining exam time. We see two main advantages to this approach. First, we centralize all of the hassles of running exams, so course staff no longer have to manage the scheduling, staffing, and paper shuffling involved. This drastically lowers the effort of running exams, making more frequent, lower-stakes testing and second-chance testing practical. Second, we greatly broaden the kinds of questions that can be machine graded. Most large classes rely at least partially on scantrons for automation, but many of the questions that we want to ask aren't multiple choice.
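The exam-assembly idea above — drawing a random set of parameterized problems that satisfies coverage and difficulty criteria — can be sketched in a few lines. This is a minimal illustration under assumed data structures (the problem-bank entries, topic names, and difficulty scale here are hypothetical, not the paper's actual implementation):

```python
import random

# Hypothetical problem bank: each entry has a topic and a difficulty score.
# In a real system each entry would also carry a generator that instantiates
# a parameterized variant of the problem.
PROBLEM_BANK = [
    {"topic": "caches", "difficulty": 2},
    {"topic": "caches", "difficulty": 3},
    {"topic": "pipelining", "difficulty": 2},
    {"topic": "pipelining", "difficulty": 4},
    {"topic": "assembly", "difficulty": 1},
    {"topic": "assembly", "difficulty": 3},
]

def draw_exam(bank, topics, target_difficulty, tolerance=1, seed=None):
    """Pick one problem per required topic (coverage criterion) such that
    the summed difficulty lands within `tolerance` of the target
    (difficulty criterion), retrying the random draw until it does."""
    rng = random.Random(seed)
    while True:
        exam = [rng.choice([p for p in bank if p["topic"] == t])
                for t in topics]
        total = sum(p["difficulty"] for p in exam)
        if abs(total - target_difficulty) <= tolerance:
            return exam

exam = draw_exam(PROBLEM_BANK,
                 ["caches", "pipelining", "assembly"],
                 target_difficulty=8)
```

Because the draw is randomized per student, two students sitting the same exam see different but comparably difficult problem sets, which is what makes the flexible-scheduling model workable.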
With a computer involved, you can ask (and auto-grade) any question that can be objectively scored: you can ask students to design circuits, do graphical problems like drawing acceleration vectors, write code, write equations, draw force diagrams, align genetic sequences, etc. Furthermore, as modern engineering is practiced in a heavily computer-supported environment, we can have students use industry-standard software to solve design and analysis problems. This is particularly compelling in programming classes, where students can compile, test, and debug their programs before submitting them for grading. In this paper, we describe our experiences with a prototype computerized testing lab, which we used to run all of the exams for a 200-student computer organization class. We discuss the mechanics of operating the testing lab, the work required by the instructor to enable this approach (e.g., generating a diversity of equivalent-difficulty problems), and the student response, which has been strongly positive: 75% prefer computerized testing, 12% prefer traditional written exams, and 13% had no preference.
Full text: ZiDeBaKhFa2015.pdf