Automated Assessment of Students’ Code Comprehension using LLMs
Abstract
Assessing students’ answers, particularly natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models such as Large Language Models (LLMs) have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks, evaluating LLMs in the realm of automated answer assessment has not received much attention. To address this gap, we explore the potential of using LLMs for automated assessment of students’ short and open-ended answers in program comprehension tasks. In particular, we use LLMs to compare students’ explanations with expert explanations in the context of line-by-line explanations of computer programs. For comparison purposes, we assess both decoder-only LLMs and encoder-based Semantic Textual Similarity (STS) models on the task of judging the correctness of students’ explanations of computer code. Our findings indicate that decoder-only LLMs, when prompted in few-shot and chain-of-thought settings, perform comparably to fine-tuned encoder-based models in evaluating students’ short answers in the programming domain.
Publication Title
Proceedings of Machine Learning Research
Recommended Citation
Oli, P., Banjade, R., Chapagain, J., & Rus, V. (2024). Automated Assessment of Students’ Code Comprehension using LLMs. Proceedings of Machine Learning Research, 257, 118-128. Retrieved from https://digitalcommons.memphis.edu/facpubs/19914