Contextual definition generation

Abstract

This paper explores dynamically generating definitions with a deep-learning model. We build a dataset that pairs definition entries with contexts associated with each definition, then fine-tune a GPT-2 based model on it so that the model can generate contextual definitions. We evaluate the model with human raters on definitions generated from two context types: short-form (the word used in a sentence) and long-form (the word used in a sentence along with the prior and following sentences). Results indicate that the model performed significantly better with short-form contexts. We also evaluate the model against human-generated definitions. The results are promising: the model matched human-level fluency, and while it reached human-level accuracy in some instances, it fell short in others.
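The paper does not include code, but the two context formats it evaluates can be sketched as prompt templates for a GPT-2 style language model. The marker tokens and function names below are illustrative assumptions, not the authors' actual format:

```python
def make_prompt(word, sentence, prev=None, nxt=None):
    """Build a prompt for contextual definition generation.

    Short-form: only the sentence containing the word.
    Long-form: the prior sentence, the sentence, and the following sentence.
    The <context>/<define>/<definition> markers are hypothetical special
    tokens; during fine-tuning the target definition would follow
    <definition>, and at inference time the model generates it.
    """
    if prev is None and nxt is None:
        context = sentence  # short-form context
    else:
        # long-form context: join whichever neighboring sentences exist
        context = " ".join(s for s in (prev, sentence, nxt) if s)
    return f"<context> {context} <define> {word} <definition>"

short = make_prompt("gradient", "The gradient vanished during training.")
long = make_prompt(
    "gradient",
    "The gradient vanished during training.",
    prev="We trained a deep network.",
    nxt="We added residual connections.",
)
```

In this sketch, fine-tuning data would be formed by appending the reference definition to each prompt, and generation quality could then be compared across the two context lengths as in the paper's evaluation.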

Publication Title

CEUR Workshop Proceedings
