Exploring LoRA for parameter-efficient fine-tuning of LLMs in an enhanced algorithm-to-Python-source-code translation task.
Pseudo-code is an informal notation for representing algorithms in plain language, serving as a vital tool for communication among developers and researchers. However, converting human-readable pseudo-code into executable source code presents numerous challenges. An automated pseudo-code-to-Python source code conversion system built on a fine-tuned Mistral 7B model can streamline this process. To achieve this, Parameter-Efficient Fine-Tuning (PEFT) is applied to the large language model Mistral 7B. LoRA (Low-Rank Adaptation) helps the model prioritize relevant information by training low-rank parameter matrices alongside the frozen original model parameters. Quantization reduces the memory footprint by storing the model's weights in a lower-precision format. Hence, the system addresses the challenges of pseudo-code conversion, providing an innovative solution for streamlined, error-resistant translation. Benefits include enhanced productivity, reduced errors, improved coding consistency, and a valuable learning tool for novice programmers. [ABSTRACT FROM AUTHOR]
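A minimal sketch of the general approach described in the abstract, not the authors' exact configuration: loading Mistral 7B with 4-bit quantization and attaching LoRA adapters via the Hugging Face `transformers` and `peft` libraries. The checkpoint name, LoRA rank, and target modules below are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# Quantization: store base weights in 4-bit precision to cut the memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA: train small low-rank matrices alongside the frozen original parameters.
lora_config = LoraConfig(
    r=16,                      # low-rank dimension (assumption)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumption)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA parameters are updated during fine-tuning
```

Fine-tuning would then proceed on pseudo-code/Python pairs with a standard causal-language-modeling objective; only the small LoRA matrices are optimized, which is what makes the approach parameter-efficient.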