Semantic methods for textual entailment: How much world knowledge is enough?
Abstract
The problem of recognizing textual entailment (RTE) has recently been addressed with some success using semantic models that attempt to capture the complexity of world knowledge. Neel et al. (2008) showed that semantic graphs made of synsets and selected relationships between them enable fairly simple methods to provide very competitive performance for RTE. Here, we extend the original results and show that RTE with automated word sense disambiguation (WSD) performs better using an updated WordNet database, which has presumably evolved to capture more world knowledge than was available for the original evaluation. We obtain better results on datasets provided by the subsequent RTE Challenges of 2008 and 2009. We report on the performance of these methods overall and in the four basic areas of information retrieval (IR), information extraction (IE), question answering (QA), and multi-document summarization (SUM). We conclude that WordNet is not rich enough to provide appropriate information to resolve entailment with this inclusion protocol. Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
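The abstract's "inclusion protocol" over a graph of synsets and relations can be illustrated with a minimal sketch. The toy graph, the `entails_word` / `entails` helper names, and the relation edges below are all hypothetical stand-ins for WordNet synsets and their synonym/hypernym links, not the authors' actual implementation: the hypothesis is judged entailed when every one of its words is reachable from some text word through entailment-preserving relations.

```python
# Hypothetical toy lexical graph standing in for WordNet: each word maps to
# words it entails via synonymy or hypernymy (e.g. a cat is a feline/animal).
GRAPH = {
    "cat": {"feline", "animal"},
    "feline": {"animal"},
    "bought": {"purchased", "acquired"},
}

def entails_word(t_word, h_word, graph, max_depth=3):
    """True if h_word is reachable from t_word via entailment relations."""
    if t_word == h_word:
        return True
    frontier = {t_word}
    for _ in range(max_depth):
        # Expand one relation hop; words absent from the graph have no edges.
        frontier = set().union(*(graph.get(w, set()) for w in frontier))
        if h_word in frontier:
            return True
        if not frontier:
            break
    return False

def entails(text_words, hyp_words, graph):
    """Inclusion protocol: every hypothesis word must be covered by some text word."""
    return all(any(entails_word(t, h, graph) for t in text_words)
               for h in hyp_words)
```

For example, under this toy graph the text "cat bought food" covers the hypothesis "animal purchased food", while "cat" alone does not cover "dog"; the paper's conclusion is that even the real WordNet graph often lacks the relations such a protocol needs.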
Publication Title
Proceedings of the 23rd International Florida Artificial Intelligence Research Society Conference, FLAIRS-23
Recommended Citation
Neel, A., & Garzon, M. (2010). Semantic methods for textual entailment: How much world knowledge is enough?. Proceedings of the 23rd International Florida Artificial Intelligence Research Society Conference, FLAIRS-23, 253-258. Retrieved from https://digitalcommons.memphis.edu/facpubs/3196