Integrating LLM-based code optimization with human-like exclusionary reasoning for computational education
Large Language Models (LLMs) are increasingly deployed as intelligent tutors that not only generate but also refine source code for educational purposes. Yet existing end-to-end fine-tuning strategies compel models to transform every input, often introducing superfluous or even detrimental edits that undermine both software quality and pedagogical clarity. We address this limitation by formulating exclusionary reasoning (the human practice of asking "Should I optimize?" before acting) as an explicit decision layer in the code-optimization pipeline. Concretely, we devise a two-stage framework in which an LLM first diagnoses whether a code segment merits modification and proceeds with optimization only when necessary, otherwise returning the original snippet verbatim. Implemented on a suite of open-source models and trained with publicly available Python corpora, our method is model-agnostic and lightweight. Experiments on three standard benchmarks show consistent gains in functional correctness (pass@1/3/5) over conventional fine-tuning, yielding feedback that is both more accurate and easier for students to interpret. By aligning automated optimization with human selective judgment, the proposed framework transforms LLMs from indiscriminate code generators into credible virtual teaching assistants that intervene sparingly, explain clearly, and foster deeper learning of principled programming practices.
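The two-stage framework can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the `llm` callable stands in for a model client, and the diagnosis/optimization prompts are hypothetical.

```python
from typing import Callable

def maybe_optimize(code: str, llm: Callable[[str], str]) -> str:
    """Two-stage exclusionary pipeline: diagnose first, edit only if warranted.

    `llm` is a hypothetical stand-in for an LLM call (prompt in, text out);
    the prompts here are illustrative, not the ones used in the paper.
    """
    # Stage 1: ask whether the snippet merits any modification at all.
    verdict = llm("Should this code be optimized? Answer YES or NO.\n" + code)
    if not verdict.strip().upper().startswith("YES"):
        # Exclusionary branch: return the original snippet verbatim.
        return code
    # Stage 2: optimize only when the diagnosis says it is necessary.
    return llm("Optimize this code:\n" + code)
```

Keeping the decision as a separate first stage is what distinguishes this from conventional fine-tuning, which would transform every input unconditionally.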