Result: A Knowledge-Component-Based Methodology for Evaluating AI Assistants

Title:
A Knowledge-Component-Based Methodology for Evaluating AI Assistants
Source:
Proceedings of the ACM Global Computing Education Conference 2025, Vol. 1, pp. 78-84
Publication Status:
Preprint
Publisher Information:
ACM, 2025.
Publication Year:
2025
Document Type:
Journal Article
DOI:
10.1145/3736181.3747167
DOI:
10.48550/arxiv.2406.05603
Rights:
CC BY-NC-ND
Accession Number:
edsair.doi.dedup.....943505e58ff4c11aa6b1c7bd3ff1e18c
Database:
OpenAIRE

More Information

We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4, a large language model. This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises. A hint can be requested each time a student fails a test case. Our evaluation addresses three research questions: RQ1: Do the hints help students improve their code? RQ2: How effectively do the hints capture problems in student code? RQ3: Are the issues that students resolve the same as the issues addressed in the hints? To address these research questions quantitatively, we identified a set of fine-grained knowledge components and determined which ones apply to each exercise, incorrect solution, and generated hint. Comparing data from two large CS1 offerings, we found that access to the hints helps students address problems with their code more quickly, that the hints consistently capture the most pressing errors in students' code, and that hints addressing a few issues at once, rather than a single bug, are more likely to lead to direct student progress.
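The knowledge-component (KC) analysis behind RQ3 can be sketched as a set comparison: tag each incorrect solution and each hint with the KCs that apply, then measure how much the issues the student actually resolved overlap with the issues the hint addressed. The following is a minimal illustrative sketch, not the paper's implementation; the KC labels and function names are invented for this example.

```python
def resolved_kcs(before_fix, after_fix):
    """KCs flagged in the incorrect solution that are absent after the fix,
    i.e. the issues the student actually resolved."""
    return before_fix - after_fix

def hint_overlap(hint_kcs, resolved):
    """Fraction of the hint's KCs that the student resolved (0.0 if the
    hint carries no KCs)."""
    if not hint_kcs:
        return 0.0
    return len(hint_kcs & resolved) / len(hint_kcs)

# Hypothetical example: a submission with an off-by-one loop bound and a
# missing return value; the student fixes only the loop bound.
before = {"loop-bound", "return-value"}
after = {"return-value"}
hint = {"loop-bound", "operator-precedence"}

resolved = resolved_kcs(before, after)   # {"loop-bound"}
print(hint_overlap(hint, resolved))      # → 0.5
```

Aggregating this overlap score across submissions would give one quantitative handle on whether students fix the same issues the hints point at.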