Generating Response-Specific Elaborated Feedback Using Long-Form Neural Question Answering


In contrast to simple feedback, which provides students with the correct answer, elaborated feedback explains the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI-in-education systems because it requires dynamic explanations, which traditionally demand logical reasoning and knowledge engineering to generate. This study presents an alternative approach that formulates elaborated feedback as long-form question answering (LFQA). An off-the-shelf LFQA system was evaluated by human raters in a 2×2×2×2 ablation design that manipulated the context documents given to the LFQA model and the post-processing of model output. Results indicate that context manipulations improve performance but that post-processing can have detrimental effects.

Publication Title

L@S 2021 - Proceedings of the 8th ACM Conference on Learning @ Scale