Assessing forward-, reverse-, and average-entailer indices on natural language input from the Intelligent Tutoring System, iSTART

Abstract

This study reports on an experiment analyzing a variety of entailment evaluations produced by a lexico-syntactic tool, the Entailer. The analyses are conducted on a corpus of self-explanations collected from the Intelligent Tutoring System, iSTART. The purpose of this study is to examine how hand-coded evaluations of entailment, paraphrase, and elaboration compare to the various evaluations provided by the Entailer. These include the standard (forward) entailment index as well as the new reverse- and average-entailment indices. The study finds that the Entailer's indices match or surpass human evaluators in making these textual evaluations. The findings have important implications for providing accurate and appropriate feedback to users of Intelligent Tutoring Systems. Copyright © 2008, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
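To make the relationship among the three indices concrete, the sketch below illustrates one plausible reading of the abstract, assuming the forward index scores whether a target text entails a response, the reverse index applies the same scorer with its arguments swapped, and the average index is the mean of the two directions. The function entailment_score is a hypothetical stand-in (a crude word-overlap proxy), not the Entailer's actual lexico-syntactic algorithm, which the abstract does not specify.

    def entailment_score(text: str, hypothesis: str) -> float:
        # Hypothetical stand-in for a forward entailment score: the
        # proportion of hypothesis words also found in the text. The real
        # Entailer uses lexico-syntactic analysis, not simple word overlap.
        text_words = set(text.lower().split())
        hyp_words = set(hypothesis.lower().split())
        return len(hyp_words & text_words) / len(hyp_words) if hyp_words else 0.0

    def forward_index(target: str, response: str) -> float:
        # Standard (forward) entailment: does the target entail the response?
        return entailment_score(target, response)

    def reverse_index(target: str, response: str) -> float:
        # Reverse entailment: the same evaluation with the arguments swapped.
        return entailment_score(response, target)

    def average_index(target: str, response: str) -> float:
        # Average entailment: the mean of the forward and reverse indices.
        return (forward_index(target, response) + reverse_index(target, response)) / 2

Under this reading, a close paraphrase would score high in both directions, while an elaboration that adds material beyond the target would tend to score lower in the forward direction than in reverse, which suggests why separate indices could help distinguish response types such as paraphrase and elaboration.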

Publication Title

Proceedings of the 21st International Florida Artificial Intelligence Research Society Conference, FLAIRS-21

