Title:
LLMCRIT: Teaching Large Language Models to Use Criteria
Publisher Information:
2024-03-01; 2024-06-04
Document Type:
Electronic Resource
Availability:
Open access content
Other Numbers:
COO oai:arXiv.org:2403.01069
1438530952
Contributing Source:
CORNELL UNIV
From OAIster®, provided by the OCLC Cooperative.
Accession Number:
edsoai.on1438530952
Database:
OAIster

Abstract:

Humans follow criteria when they execute tasks, and these criteria are directly used to assess the quality of task completion. Therefore, having models learn to use criteria to provide feedback can help humans or models perform tasks better. However, existing research in this field tends to consider only a limited set of criteria or quality assessment aspects. To fill this gap, we propose a general framework that enables large language models (LLMs) to use comprehensive criteria for a task when delivering natural language feedback on task execution. In particular, we present a model-in-the-loop framework that semi-automatically derives criteria from collected guidelines for different writing tasks and constructs in-context demonstrations for each criterion. To operationalize this idea, we choose three tasks from real-world scenarios, namely paper introduction writing, Python code writing, and Reddit post writing, and evaluate our feedback generation framework using different LLMs. The results reveal the fine-grained effects of incorporating criteria and demonstrations and provide valuable insights into how to teach LLMs to use criteria more effectively.
Comment: ACL 2024 Findings
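
Illustration (not from the paper):
The abstract describes prompting an LLM with task criteria and one in-context demonstration per criterion to obtain natural-language feedback. The following is a minimal Python sketch of how such a criterion-grounded prompt might be assembled; the Criterion class, build_feedback_prompt helper, and the example criterion text are illustrative assumptions, not the authors' released code or data.

# Hypothetical sketch: compose criteria and per-criterion demonstrations
# into a single feedback-generation prompt for an LLM.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str           # short label for the criterion
    description: str    # what the criterion asks the writer to do
    demonstration: str  # in-context example of feedback applying it

def build_feedback_prompt(task: str, draft: str, criteria: list[Criterion]) -> str:
    """Build a prompt asking an LLM to critique `draft` against each criterion."""
    parts = [
        f"Task: {task}",
        "Evaluate the draft against each criterion below and give natural language feedback.",
        "",
    ]
    for c in criteria:
        parts += [
            f"Criterion: {c.name}",
            f"Definition: {c.description}",
            f"Example feedback: {c.demonstration}",
            "",
        ]
    parts += ["Draft:", draft, "", "Feedback:"]
    return "\n".join(parts)

if __name__ == "__main__":
    criteria = [
        Criterion(
            name="State the contribution early",
            description="The introduction should state the paper's main contribution within the first two paragraphs.",
            demonstration="The contribution only appears in paragraph four; move a one-sentence summary of it to the end of paragraph one.",
        ),
    ]
    prompt = build_feedback_prompt(
        task="paper introduction writing",
        draft="Large language models have recently ...",
        criteria=criteria,
    )
    print(prompt)  # send this prompt to an LLM of your choice to obtain feedback

The sketch only constructs the prompt text; which LLM receives it, and how criteria are derived from guidelines in the first place, are the subject of the paper's model-in-the-loop framework and are not reproduced here.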