Thursday, July 29, 2021

By Lois J. Gray

Catherine “Catie” Mintz, a PhD candidate in the University of Iowa College of Education’s Educational Measurement and Statistics program, recently received a National Science Foundation (NSF) grant to pursue research on multidimensional computerized adaptive tests.

The highly competitive, nearly $12,000 NSF Methodology, Measurement, and Statistics Doctoral Dissertation Research Improvement Grant will allow Mintz to continue her research, “Multidimensional Nominal Response Models in Adaptive Testing.”

“Particularly in education, we are seeing a push to convert to computerized adaptive testing, where each test question administered is chosen based on how the examinee answered the previous test questions,” says Mintz, who is originally from Saint Leonard, Maryland.

The benefits of computerized adaptive testing, Mintz explains, are that it is highly personalized and, in theory, more accurate.
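For readers curious about the mechanics, the short Python sketch below shows a bare-bones adaptive test loop: estimate ability, pick the most informative unused question, record the answer, and repeat. It assumes a simple unidimensional two-parameter logistic model with right/wrong scoring, a common textbook setup rather than the multidimensional nominal-response approach Mintz studies, and the item bank and parameter values are invented for illustration.

```python
# A minimal sketch of a computerized adaptive test (CAT) loop under a
# unidimensional 2-parameter logistic (2PL) model. Illustrative only:
# the item bank and parameters below are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: discrimination (a) and difficulty (b) parameters.
a = rng.uniform(0.8, 2.0, size=200)
b = rng.normal(0.0, 1.0, size=200)

def p_correct(theta, a, b):
    """2PL probability of answering correctly at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information at ability theta under the 2PL model."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def estimate_theta(responses, administered, grid=np.linspace(-4, 4, 161)):
    """Crude maximum-likelihood ability estimate over a coarse grid."""
    loglik = np.zeros_like(grid)
    for resp, item in zip(responses, administered):
        p = p_correct(grid, a[item], b[item])
        loglik += np.where(resp, np.log(p), np.log(1.0 - p))
    return grid[np.argmax(loglik)]

true_theta = 0.7           # the simulated examinee's actual ability
theta_hat = 0.0            # start from an average ability estimate
administered, responses = [], []

for _ in range(20):        # fixed test length of 20 questions
    available = [i for i in range(len(a)) if i not in administered]
    # Select the unused question that is most informative at the current estimate.
    info = [fisher_information(theta_hat, a[i], b[i]) for i in available]
    item = available[int(np.argmax(info))]
    administered.append(item)
    # Simulate the examinee's right/wrong answer, then re-estimate ability.
    responses.append(rng.random() < p_correct(true_theta, a[item], b[item]))
    theta_hat = estimate_theta(responses, administered)

print(f"final ability estimate: {theta_hat:.2f}")
```

Because each new question is targeted at the current ability estimate, the test adapts to the examinee as it goes.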

“The majority of computerized adaptive tests currently implemented are single-subject tests that utilize right-wrong scoring to aid in selecting test questions,” Mintz says.

She adds, however, that decades of research on single-subject tests have shown that the specific response options an examinee chooses contain useful information for building computerized adaptive tests and estimating the examinee’s ability.
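The statistical machinery behind that idea is a nominal response model, which assigns a probability to every answer option instead of collapsing responses to right or wrong. The sketch below illustrates a simple unidimensional version of Bock’s nominal response model with made-up slope and intercept parameters; it is offered only as an illustration under those assumptions, not as the multidimensional model Mintz is developing.

```python
# A minimal sketch of a (unidimensional) nominal response model, which gives
# each answer option its own probability. Parameter values are made up.
import numpy as np

def nominal_probs(theta, slopes, intercepts):
    """P(choosing each option | ability theta) under a nominal response model."""
    z = np.asarray(slopes) * theta + np.asarray(intercepts)
    ez = np.exp(z - z.max())          # subtract max for numerical stability
    return ez / ez.sum()

# One multiple-choice question with four options (the last one is the key).
slopes = [-0.8, -0.3, 0.2, 0.9]
intercepts = [0.5, 0.2, -0.1, -0.6]

for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(nominal_probs(theta, slopes, intercepts), 2))
```

Because different wrong answers become more or less likely at different ability levels, which option an examinee picks carries information that a simple right/wrong score throws away.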

“My dissertation examines the benefit of using this information in multi-subject adaptive tests built under two different statistical frameworks: Diagnostic Classification Modeling and Multidimensional Item Response Theory,” Mintz says.

Diagnostic Classification Modeling assumes abilities can be categorized: has the examinee “mastered” both the rules of exponents and factoring equations in an algebra context, for instance, or just one of those subdomains? Multidimensional Item Response Theory, on the other hand, pinpoints an examinee’s ability level in each subdomain on a continuous scale.
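The contrast can be made concrete with a small example. The sketch below compares a DINA-style diagnostic classification model, in which mastery of each skill is all-or-nothing, with a compensatory multidimensional item response theory model, in which ability on each dimension is continuous; the models and parameter values are standard textbook choices assumed for illustration, not those used in the dissertation.

```python
# Contrasting the two frameworks with one algebra question that requires both
# 'rules of exponents' and 'factoring'. All parameter values are illustrative.
import numpy as np

def dina_prob(mastery, q_vector, slip=0.1, guess=0.2):
    """P(correct) under a DINA model: high only if every required skill is mastered."""
    has_all = all(m for m, q in zip(mastery, q_vector) if q)
    return 1.0 - slip if has_all else guess

def mirt_prob(theta, a, d):
    """P(correct) under a compensatory multidimensional 2PL model."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

q_vector = [1, 1]   # the question requires both skills

print(dina_prob(mastery=[1, 0], q_vector=q_vector))   # only one skill mastered -> guessing rate
print(dina_prob(mastery=[1, 1], q_vector=q_vector))   # both mastered -> 1 - slip

# Continuous abilities on the two dimensions trade off against each other.
print(mirt_prob(theta=np.array([0.5, -0.2]), a=np.array([1.2, 0.8]), d=-0.3))
```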

“Both have their place in operational testing and thus must be vetted,” Mintz says. “In a nutshell, this dissertation examines the benefit of using examinees’ test question answers over whether they answered the test questions correctly to create individualized, multi-subject tests on the fly.”

Beyond helping test takers and teachers in K-12 classrooms and those preparing for college or graduate school entrance exams, the research stands to benefit people in many professions.

“This could have a huge impact on test takers because standardized testing is fundamental to many disciplines,” Mintz says.

That includes testing done by licensure boards for professions such as doctors, lawyers, and CPAs, as well as testing in the military, to mention just a few.

In fact, Mintz began work this July as a test development specialist for the American Board of Pediatrics in Chapel Hill, North Carolina, while finishing her dissertation. She says she is grateful to all of the professors, students, and staff who have supported her along the way.

“My faculty told me that a good dissertation raises more questions than it answers, and even now, I’m finding that to be true,” Mintz says. “I’m eager to investigate how this type of computerized adaptive test works when the test questions measure examinee misconceptions, as well as the impact of ability correlations and distributions on test results.”

The results of Mintz’s work could have a profound impact on how students learn and how educators teach.

“Ideally, incorporating test question response option choice will lead to shorter, more accurate tests, in turn leading to better understanding of students’ strengths and weaknesses while reducing testing time,” Mintz says. “One of my motivations in pursuing this topic was improving formative, classroom-based assessment. I would be overjoyed if this research led to operational tests that enhance instruction, not take away from it. Because the methodology is so general, I likewise hope that it can improve testing across disciplines.”