A Study of LLM Generated Line-by-Line Explanations in the Context of Conversational Program Comprehension Tutoring Systems
Abstract
This research paper explores to what extent large language models (LLMs) can generate line-by-line explanations of code examples used in introductory programming courses such as CS1 and CS2 (the first and second Computer Science courses). While it is known that LLMs can generate code explanations, a systematic analysis of those explanations and their appropriateness for instructional and learning purposes is needed, which is the goal of this paper. Specifically, the paper explores how different types of prompts impact the nature and quality of line-by-line explanations relative to human expert explanations. We report a quantitative and qualitative analysis that compares AI-generated explanations with explanations produced by human experts. Furthermore, we investigate to what degree LLMs can generate explanations for learners at various levels of mastery.
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Recommended Citation
Chapagain, J., Sajib, M., Prodan, R., & Rus, V. (2024). A Study of LLM Generated Line-by-Line Explanations in the Context of Conversational Program Comprehension Tutoring Systems. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 15159 LNCS, 64-74. https://doi.org/10.1007/978-3-031-72315-5_5