An open vocabulary approach for estimating teacher use of authentic questions in classroom discourse
Automatic assessment of the quality of classroom discourse can have a transformative effect on research and practice aimed at improving teaching effectiveness. We improve on a previous automated method for measuring teacher authentic questions (open-ended questions without pre-scripted responses, which predict student achievement growth) using classroom audio and expert question codes from two sources: (1) a large archival database of text transcripts of 428 class sessions from 116 classrooms, and (2) a newly collected sample of 132 high-quality audio recordings with automatic speech recognition transcripts from 27 classrooms. Whereas previous work utilized a "closed vocabulary" approach consisting of 732 pre-defined word-, sentence-, and discourse-level features, the present "open vocabulary" approach exclusively utilized word and phrase counts from the transcripts themselves. The two approaches yielded substantial, but statistically equivalent, correlations with gold-standard human codes of authenticity (Pearson r's of 0.396 vs. 0.424 and 0.602 vs. 0.613 for datasets 1 and 2, respectively). Importantly, averaging the estimates from the two approaches resulted in statistically significant improvements over either approach alone (r's of 0.492 and 0.686 for datasets 1 and 2, respectively). We discuss implications of our findings for automated analysis of classroom discourse.
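The fusion step described above (averaging per-session estimates from the closed- and open-vocabulary models, then scoring against human codes with Pearson r) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the session values below are hypothetical toy data, and the actual models producing the estimates are out of scope.

```python
# Hypothetical sketch of the estimate-fusion step: average two per-session
# authenticity estimates, then score against human codes with Pearson r.
# Pure standard library; toy numbers only.
from math import sqrt


def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def fuse(closed_est, open_est):
    """Average per-session estimates from the two approaches."""
    return [(c + o) / 2 for c, o in zip(closed_est, open_est)]


if __name__ == "__main__":
    # Hypothetical per-session authenticity proportions.
    human = [0.10, 0.35, 0.20, 0.50, 0.05]
    closed = [0.15, 0.30, 0.25, 0.40, 0.10]
    opened = [0.05, 0.40, 0.15, 0.55, 0.12]
    fused = fuse(closed, opened)
    for name, est in [("closed", closed), ("open", opened), ("fused", fused)]:
        print(f"{name}: r = {pearson_r(est, human):.3f}")
```

Averaging is the simplest fusion rule; a weighted combination tuned on held-out sessions would be a natural extension, though the abstract reports gains from the unweighted average itself.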
Proceedings of the 11th International Conference on Educational Data Mining, EDM 2018
Cook, C., Olney, A., Kelly, S., & D'Mello, S. (2018). An open vocabulary approach for estimating teacher use of authentic questions in classroom discourse. Proceedings of the 11th International Conference on Educational Data Mining, EDM 2018. Retrieved from https://digitalcommons.memphis.edu/facpubs/7379