Automatic generation and ranking of questions for critical review
Abstract
Critical review is an important skill in academic writing. Generic trigger questions have been widely used to support this activity, but when students have a concrete topic in mind, such questions are less effective if they are too general. This article presents a learning-to-rank-based system that automatically generates specific trigger questions from citations to support critical review. The performance of the proposed question ranking models was evaluated, and the quality of the generated questions is reported. Experimental results showed an accuracy of 75.8% on the top 25% of ranked questions. These top-ranked questions are as useful for self-reflection as questions generated by human tutors and supervisors. A qualitative analysis was also conducted using an information-seeking question taxonomy in order to further analyze the questions generated by humans. The analysis revealed that explanation and association questions are the most frequent question types and that explanation questions are considered the most valuable by student writers. © International Forum of Educational Technology & Society (IFETS).
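To make the learning-to-rank idea concrete, below is a minimal pairwise ranking sketch in Python; it is not the authors' model, and the features, data, and pairwise logistic-regression setup are all hypothetical placeholders standing in for whatever question features the paper's system uses.

```python
# Minimal pairwise learning-to-rank sketch (illustrative only, not the
# authors' system). Each candidate question is assumed to be represented
# by a small feature vector; all data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for 40 candidate questions (3 features each);
# labels: 1 = judged useful, 0 = not useful.
X = rng.normal(size=(40, 3))
y = (X @ np.array([1.0, 0.5, -0.3])
     + rng.normal(scale=0.5, size=40) > 0).astype(int)

# Build pairwise difference examples: for each (useful, not-useful) pair,
# the model learns that the useful question should score higher.
pos, neg = X[y == 1], X[y == 0]
diffs = np.array([p - n for p in pos for n in neg])
pair_X = np.vstack([diffs, -diffs])
pair_y = np.concatenate([np.ones(len(diffs)), np.zeros(len(diffs))])

ranker = LogisticRegression().fit(pair_X, pair_y)

# Rank new candidate questions by the learned linear score, best first.
candidates = rng.normal(size=(5, 3))
scores = candidates @ ranker.coef_.ravel()
print("Ranked candidate indices (best first):", np.argsort(-scores))
```

In this pairwise formulation, the classifier's weight vector doubles as a scoring function, so ranking new questions reduces to sorting by a dot product.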
Publication Title
Educational Technology and Society
Recommended Citation
Liu, M., Calvo, R., & Rus, V. (2014). Automatic generation and ranking of questions for critical review. Educational Technology and Society, 17(2), 333–346. Retrieved from https://digitalcommons.memphis.edu/facpubs/2568