Leveraging cognitive theory to create large-scale learning tools

Darryl Chamberlain, Russell Jeter

Research output: Contribution to journal › Article › peer-review

Abstract

At the 21st Annual Conference on Research in Undergraduate Mathematics Education, Ed Dubinsky highlighted the disparity between what the research community knows and what practicing instructors actually use. One of the heaviest burdens on instructors is the continual assessment of student understanding as it develops. This theoretical paper addresses that practical issue by describing how to dynamically construct multiple-choice items that assess student knowledge as it progresses throughout a course. By combining Automated Item Generation with already-published results, or with any theoretical foundation that describes how students may develop understanding of a concept, the research community can develop and disseminate theoretically grounded, easy-to-use assessments that track student understanding over the course of a semester.
Original language: American English
Journal: 22nd Annual Conference on Research in Undergraduate Mathematics Education
State: Published - Feb 28 2019
Externally published: Yes

Keywords

  • Assessment
  • Automated Item Generation
  • Technology in Mathematics Education

Disciplines

  • Educational Assessment, Evaluation, and Research
  • Science and Mathematics Education
  • Educational Technology
