On interactive computation: Intelligent tutoring systems


This talk gives an overview of an interdisciplinary research project under development at The University of Memphis, led by a team of computer scientists, psychologists, and educators. The project's goal is to research and develop prototypes of an intelligent autonomous software agent capable of tutoring a human user in a narrow but fairly open domain of expertise; the chosen prototype domain is computer literacy. The agent interacts with the user in natural language and other modalities. It receives typewritten input, possesses substantial syntactic and semantic capabilities for interpreting that input in a context-relevant fashion, selects appropriate responses (short feedback, dialog moves), and completes the dialog cycle in multimodal form (feedback delivered as short spoken expressions and/or facial gestures, spoken information delivery, pointing to appropriate illustrations and animations, etc.). The agent's performance is expected to be consistent with that of untrained human tutors. The talk briefly surveys the overall architecture of the tutor, explores some of the challenges and the tools used to address them, and provides a demo of the current version, AutoTutor, with an emphasis on the multimodal delivery of the dialog cycle.

Publication Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)