The Behavior of Large Language Models When Prompted to Generate Code Explanations

Title:
The Behavior of Large Language Models When Prompted to Generate Code Explanations
Language:
English
Source:
Grantee Submission. 2023. Paper presented at the Conference on Neural Information Processing Systems (NeurIPS 2023) (37th, New Orleans, LA, Dec 2023).
Peer Reviewed:
Y
Page Count:
19
Publication Date:
2023
Sponsoring Agency:
National Science Foundation (NSF)
Institute of Education Sciences (ED)
Contract Number:
1822816
R305A220385
Document Type:
Speeches/Meeting Papers
Reports - Research
Education Level:
Elementary Education
Grade 7
Junior High Schools
Middle Schools
Secondary Education
Grade 8
IES Funded:
Yes
Entry Date:
2024
Accession Number:
ED638958
Database:
ERIC

More Information

This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the temperature parameter, and the version of the LLM. Nevertheless, for Java and Python, the explanations are consistent in two major respects: the readability level, which hovers around grades 7-8, and the lexical density, i.e., the proportion of meaningful words relative to the total explanation size. Furthermore, the explanations score very high on correctness but lower on three other metrics: completeness, conciseness, and contextualization. [This paper is in: Proceedings of the Workshop on Generative AI for Education (GAIED): Advances, Opportunities, and Challenges, 2023.]

Abstractor:
As Provided
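
The lexical density metric named in the abstract can be illustrated with a short sketch. The following Python example is illustrative only; the stopword list and the whitespace-plus-punctuation tokenizer are simplifying assumptions, not the authors' implementation. It computes lexical density as the share of content (meaningful) words among all tokens in an explanation:

# Illustrative sketch: lexical density as content words / total tokens.
# The stopword set is a small example subset, not a full list.
STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "were", "be", "been",
    "of", "to", "in", "on", "and", "or", "it", "this", "that",
    "with", "for", "as", "by", "at", "from", "we", "you", "they",
}

def lexical_density(text: str) -> float:
    """Return the fraction of tokens that are content (non-stopword) words."""
    tokens = [t.strip(".,;:!?()\"'").lower() for t in text.split()]
    tokens = [t for t in tokens if t]  # drop tokens that were pure punctuation
    if not tokens:
        return 0.0
    content = [t for t in tokens if t not in STOPWORDS]
    return len(content) / len(tokens)

if __name__ == "__main__":
    explanation = ("This loop iterates over the list and prints "
                   "each element to the console.")
    print(f"Lexical density: {lexical_density(explanation):.2f}")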