Testing the test: Item response curves and test quality

Citation:

G. A. Morris, L. Branum-Martin, N. Harshman, S. D. Baker, E. Mazur, S. Nath Dutta, T. Mzoughi, and V. McCauley. 2006. "Testing the test: Item response curves and test quality." Am. J. Phys., 74, pp. 449–453.

Abstract:

We present a simple technique for evaluating multiple-choice questions and their answer choices beyond the usual measures of difficulty and the effectiveness of distractors. The technique involves the construction and qualitative consideration of item response curves (IRCs) and is based upon Item Response Theory from the field of educational measurement. Item response curves relate the percentage of students who select each possible answer choice to overall ability level. To demonstrate the technique, we apply IRC analysis to three questions from the Force Concept Inventory (FCI). IRC analysis allows us to characterize qualitatively whether these questions are efficient, where efficient is defined in terms of the construction, performance, and discrimination of a question and its answer choices. Such analysis can be useful both in the development of future multiple-choice examination questions and in the development of a better understanding of results from existing diagnostic instruments such as the FCI.
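The abstract describes relating the percentage of students selecting each answer choice to overall ability level. A minimal sketch of that construction is given below, assuming a toy response matrix and using total test score as the ability proxy; all data, names, and binning choices here are illustrative assumptions, not the authors' actual procedure or the FCI data.

```python
# Sketch of item response curve (IRC) construction for multiple-choice data.
# All values below are hypothetical; overall score is used as a crude ability level.
from collections import defaultdict

# Hypothetical responses: each row is one student's chosen option (0-3) per question.
responses = [
    [0, 1, 2], [0, 1, 1], [1, 1, 2], [0, 2, 2],
    [0, 1, 2], [2, 1, 0], [0, 3, 2], [1, 1, 2],
]
answer_key = [0, 1, 2]  # correct option index for each question (assumed)
n_options = 4

def total_score(student):
    """Ability proxy: number of questions answered correctly."""
    return sum(choice == key for choice, key in zip(student, answer_key))

def item_response_curve(item, n_bins=3):
    """For one question, return {ability_bin: [fraction choosing each option]}."""
    max_score = len(answer_key)
    counts = defaultdict(lambda: [0] * n_options)
    totals = defaultdict(int)
    for student in responses:
        # Bin students by overall score into coarse ability levels 0..n_bins-1.
        level = min(n_bins - 1, total_score(student) * n_bins // (max_score + 1))
        counts[level][student[item]] += 1
        totals[level] += 1
    return {level: [c / totals[level] for c in counts[level]]
            for level in sorted(counts)}

curve = item_response_curve(0)
for level, fractions in curve.items():
    # Each row: fraction of students at that ability level picking options A-D.
    print(level, [round(f, 2) for f in fractions])
```

Plotting each option's fraction against ability level yields the IRCs discussed in the paper: for an efficient question, the correct answer's curve should rise with ability while each distractor's curve falls or peaks at a distinct ability range.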
Last updated on 07/24/2019