Assessing student response in tutorial dialogue context using probabilistic soft logic


Automatic answer assessment systems typically apply semantic similarity methods in which student responses are compared with reference answers in order to assess their correctness. However, student responses in dialogue-based tutoring systems are often grammatically and semantically incomplete, and additional information (e.g., dialogue history) is needed to better assess their correctness. Therefore, we propose augmenting semantic similarity based models with additional signals, such as the knowledge level of the student and question difficulty, and jointly modeling their complex interactions using Probabilistic Soft Logic (PSL). Results of the proposed PSL models for inferring the correctness of a given answer on the DT-Grade dataset show a more than 7% improvement in accuracy over the results obtained using a semantic similarity model alone.
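The abstract does not include the actual PSL rules, but the style of model it describes can be illustrated with a minimal, hypothetical sketch. PSL rules are evaluated under Łukasiewicz logic, where each grounded rule `body -> head` incurs a penalty equal to its distance to satisfaction. The predicate names and truth values below are illustrative assumptions, not the paper's model:

```python
def luk_and(*vals):
    """Lukasiewicz t-norm conjunction over soft truth values in [0, 1]."""
    return max(0.0, sum(vals) - (len(vals) - 1))

def distance_to_satisfaction(body, head):
    """Penalty for a soft rule body -> head; zero when the rule is satisfied."""
    return max(0.0, body - head)

# Hypothetical soft truth values for one student answer:
similarity = 0.6   # semantic similarity to the reference answer
knowledge  = 0.8   # estimated knowledge level of the student
correct    = 0.7   # candidate truth value for Correct(answer)

# Illustrative rule: Similar(answer) & HighKnowledge(student) -> Correct(answer)
body = luk_and(similarity, knowledge)          # max(0, 0.6 + 0.8 - 1) = 0.4
penalty = distance_to_satisfaction(body, correct)  # max(0, 0.4 - 0.7) = 0.0
```

In a full PSL model, many such weighted rules are grounded over all answers, and inference finds the truth assignment minimizing the total weighted penalty.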

Publication Title

EDM 2019 - Proceedings of the 12th International Conference on Educational Data Mining
