Semantic methods for textual entailment

Abstract

The problem of recognizing textual entailment (RTE) has recently been addressed using syntactic and lexical models with some success. Here, a new approach is taken that applies world knowledge in much the same way as humans do, with that knowledge captured in large semantic graphs such as WordNet. We show that semantic graphs made of synsets and selected relationships between them enable fairly simple methods that provide very competitive performance. First, assuming a solution to word sense disambiguation, we report on the performance of these methods in four basic areas: information retrieval (IR), information extraction (IE), question answering (QA), and multi-document summarization (SUM), using the benchmark datasets designed to test the entailment problem in the 2006 Second Recognizing Textual Entailment (RTE-2) challenge. We then show how the same methods yield a solution to word sense disambiguation, which, combined with the previous solution, yields a fully automated solution with about the same performance. We then evaluate this solution on two subsequent RTE Challenge datasets. Finally, we evaluate WordNet's contribution as a source of world knowledge. We conclude that the protocol itself works well at solving entailment given a quality source of world knowledge, but WordNet is not able to provide enough information to resolve entailment with this inclusion protocol. © 2012, IGI Global.
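To make the graph-based idea concrete, the following is a minimal sketch of entailment checking over WordNet synsets using NLTK. It is not the chapter's protocol: the choice of relations (hypernyms only), the word-level coverage rule, and the helper names (reachable_synsets, word_entails, sentence_entails) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the chapter's exact protocol):
# treat entailment as "every hypothesis word is subsumed by some text word"
# in a WordNet synset graph built from hypernym relations.
# Requires: nltk with the WordNet corpus (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def reachable_synsets(synset):
    """All synsets reachable from `synset` via hypernym edges (transitive closure)."""
    seen = {synset}
    frontier = [synset]
    while frontier:
        nxt = []
        for s in frontier:
            for h in s.hypernyms() + s.instance_hypernyms():
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def word_entails(text_word, hyp_word):
    """True if some sense of `hyp_word` subsumes some sense of `text_word`."""
    hyp_senses = set(wn.synsets(hyp_word))
    return any(reachable_synsets(t) & hyp_senses for t in wn.synsets(text_word))

def sentence_entails(text_words, hyp_words):
    """Naive coverage rule: each hypothesis content word must be covered by the text."""
    return all(any(word_entails(t, h) for t in text_words) for h in hyp_words)

# Example: "dog" in the text should cover "animal" in the hypothesis.
print(sentence_entails(["dog", "barked"], ["animal"]))  # expected: True
```

A real system in this spirit would also need word sense disambiguation (as the abstract notes), additional WordNet relations, and a more robust decision rule than simple word coverage.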

Publication Title

Applied Natural Language Processing: Identification, Investigation and Resolution
