Electronic Theses and Dissertations

Date

2024

Document Type

Thesis

Degree Name

Master of Science

Department

Psychology

Committee Chair

Philip I. Pavlik Jr

Committee Member

John Sabatini

Committee Member

Cheryl Bowers

Abstract

Maximizing the efficiency of testing for learning is an important educational goal (Schellekens et al., 2021). According to Portela et al. (2017), educational efficiency is achieved when learning and test results are generated with minimal resources and high efficacy. Traditionally in classrooms, feedback from tests is provided after a test is finished. However, research on the testing effect (Kang et al., 2007; Rowland, 2014) and studies on feedback during assessment (Chang & Li, 2019) suggest that educational efficiency can be enhanced by providing immediate feedback after each response. Our goal is to investigate whether immediate feedback during assessment affects the quality of the assessment compared with conventional testing practices. We explored this question by examining whether learning from feedback interferes with assessment reliability in multiple-choice tests. Without strong reliability, the purpose of a multiple-choice test is greatly diminished. We conducted a within-subjects test-retest reliability study comparing participants' performance on a multiple-choice test with narrow versus wide spacing between the first and second trials of each item, with immediate feedback presented after every item. We found moderate evidence for assessment reliability during learning. We also conducted a follow-up study with a between-subjects design to directly compare the reliability of our test with versus without feedback present. We begin by reviewing the importance of reliability in multiple-choice formats.
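
To make the test-retest design concrete, the sketch below illustrates one common way such reliability can be estimated: the Pearson correlation between participants' scores on a first and second administration of the same items. This is an illustrative example only, not the thesis's actual analysis code; the simulated data, sample size, and variable names are hypothetical.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation
# between scores on two trials of the same multiple-choice items.
# All data below are simulated and hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)

n_participants = 60
true_ability = rng.normal(0.0, 1.0, n_participants)

# Scores on each trial share a dependence on ability plus independent
# noise, which yields correlated first- and second-trial scores.
trial_1 = true_ability + rng.normal(0.0, 0.5, n_participants)
trial_2 = true_ability + rng.normal(0.0, 0.5, n_participants)

r, p_value = pearsonr(trial_1, trial_2)
print(f"Test-retest reliability (Pearson r): {r:.2f} (p = {p_value:.3g})")
```

Higher correlations between the two trials indicate that the test ranks participants consistently across administrations, which is the sense of reliability at issue when feedback is interleaved with assessment.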

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to ProQuest.

Notes

Open Access
