Title:
Exploring LoRA for parameter-efficient fine-tuning of LLMs in enhanced algorithm-to-python-source-code translation task.
Authors:
Thomas, Allwyn Bat (AUTHOR), Noble, Ananya Reetha (AUTHOR) ananyarita@gmail.com, Wilson, Anna (AUTHOR), Sunny, Leya Elizabeth (AUTHOR), Paul, Rini Thazhathoot (AUTHOR) rinitpaul@mace.ac.in
Source:
AIP Conference Proceedings. 2025, Vol. 3280 Issue 1, p1-13. 13p.
Database:
Academic Search Index

Pseudo-code is an informal notation for representing algorithms in plain language and serves as a vital tool for communication among developers and researchers. However, converting human-readable pseudo-code into executable source code presents numerous challenges. An automated pseudo-code-to-Python source code conversion system built on a fine-tuned Mistral 7B model can streamline this process. To achieve this, Parameter-Efficient Fine-Tuning is applied to the Large Language Model Mistral 7B. LoRA (Low-Rank Adaptation) helps the model prioritize relevant information by training low-rank parameter updates alongside the frozen original model parameters. Quantization reduces the memory footprint by storing the model's weights in a lower-precision format. The system thus addresses the challenges of pseudo-code conversion, providing a streamlined, error-resistant solution. Benefits include enhanced productivity, reduced errors, improved coding consistency, and a valuable learning tool for novice programmers. [ABSTRACT FROM AUTHOR]
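
The approach described in the abstract can be illustrated with a minimal sketch of LoRA fine-tuning of Mistral 7B with 4-bit quantization, using the Hugging Face transformers, peft, and bitsandbytes libraries. This is not the authors' exact configuration; the hyperparameters (rank, scaling factor, target modules, dropout) and the model identifier are illustrative assumptions.

```python
# Minimal sketch: parameter-efficient fine-tuning of Mistral 7B with LoRA
# and 4-bit quantization. Hyperparameter values are assumptions, not the
# paper's reported settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# Quantization: store base weights in a lower-precision (4-bit NF4) format
# to reduce the memory footprint during fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
)

# LoRA: train small low-rank update matrices alongside the frozen base
# parameters instead of updating all 7B weights.
lora_config = LoraConfig(
    r=16,                                  # low-rank dimension (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

The resulting model can then be fine-tuned on pseudo-code/Python pairs with a standard causal-language-modeling trainer; only the LoRA adapter weights are updated, which is what keeps the fine-tuning parameter-efficient.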