Journal Dynamics of Complex Systems - XXI century, № 1, 2024
Article in the issue:
Overcoming linguistic barriers in code assistants: creating a QLoRA adapter to improve support for Russian-language code writing instructions
Type of article: scientific article
DOI: 10.18127/j19997493-202401-03
UDC: 004.032.26
Authors:

C.B. Pronin1, A.V. Volosova2, A.V. Ostroukh3, Yu.N. Strogov4

1, 3, 4 Moscow Automobile and Road Engineering State Technical University (MADI) (Moscow, Russia)
2 Bauman Moscow State Technical University (National Research University) (Moscow, Russia)
1 caesarpr12@gmail.com, 2 volosova@bmstu.ru, 3 ostroukh@mail.ru, 4 zelkame@gmail.com

Abstract:

This article describes an approach to training and evaluating an adapter for the popular zephyr-7b-beta language model. The adapter was designed to improve the performance of the base model in tasks related to programming and understanding of the Russian language. Given the high quality of the original model's results on English-language tasks, the purpose of the study was to expand its linguistic and technical range. The proposed adapter was trained on a large and diverse dataset that includes programming-related question-answer pairs as well as texts discussing code in Russian. The applied training methodology improves the quality of the model's responses when understanding and generating Python code from Russian-language instructions. The authors evaluated the performance of the base model with the adapter installed using various metrics, comparing it with the base model as well as with other advanced models in this area. The results showed significant improvement both in tasks related to writing Python code and in processing the Russian language, confirming the effectiveness of the proposed adapter.
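Below is a minimal sketch of the kind of QLoRA fine-tuning setup described in the abstract, assuming the standard transformers, peft and bitsandbytes stack; the rank, alpha, target modules and other hyperparameters are illustrative assumptions, not the authors' exact configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "HuggingFaceH4/zephyr-7b-beta"

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Low-rank adapter over the attention projections; r and alpha are assumed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained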

The analysis of the experiment results showed that the adapter not only adds new functionality to the model but also significantly changes the priority of the model's output in favor of the instructions that were included during the retraining stage.

To mitigate the shift in the model's output priority noted above, the following methods are proposed:

1) Using a prompt format for the training dataset that differs from Zephyr's original format (a sketch of this approach is given after this list).

2) Including instructions from the first stage of fine-tuning in the new instruction set.

3) Creating a "mixture of experts" from several different models, where each model specializes in solving one or several tasks from the overall range of tasks as accurately as possible.
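As an illustration of the first method, the sketch below contrasts Zephyr's native chat template with a deliberately different instruction format; the alternative template and the example texts are purely illustrative assumptions.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "user", "content": "Напишите функцию на Python, вычисляющую факториал."},
    {"role": "assistant", "content": "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)"},
]

# Zephyr's native prompt, produced by the model's own chat template
native_prompt = tokenizer.apply_chat_template(messages, tokenize=False)

# A deliberately different instruction format for the new training data,
# so that the adapter's instructions interfere less with the base behaviour
alt_prompt = (
    "### Инструкция:\n" + messages[0]["content"] + "\n"
    "### Ответ:\n" + messages[1]["content"]
)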

Methods of fine-tuning a model without losing its original characteristics are a promising direction for future research in the field of resource-efficient modification of large language models.

In the conducted experiment, a large language model "HuggingFaceH4/zephyr-7b-beta" was successfully fine-tuned (using the QLoRA adapter creation method) to improve its capabilities in writing Python code based on Russian instructions and providing explanations in Russian. The testing results showed that when the created adapter is installed, the model's text generation priority shifts towards generating code with explanations. Synthetic tests also demonstrated significant improvements in the model's ability to solve programming and mathematical problems. Therefore, it is advisable to consider the possibility of changing adapters depending on the type of task being solved.

Changing the adapter installed on a model already loaded into memory takes little time, which makes it possible to create a set of adapters tailored to specific tasks and to swap them depending on the type of task being solved. Swapping adapters instead of models eliminates the need to load multiple full copies of the model into the memory of the computational accelerator, which, depending on the implementation, significantly reduces either memory usage or model loading time. Thus, the use of adapters speeds up training and optimizes the utilization of computational resources during model operation.
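A minimal sketch of such adapter swapping with the peft library is shown below; the adapter MexIvanov/zephyr-python-ru is the one published by the authors, while the second adapter name is a hypothetical placeholder.

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

# Attach the Russian-language Python adapter published by the authors
model = PeftModel.from_pretrained(base, "MexIvanov/zephyr-python-ru",
                                  adapter_name="python_ru")

# Load a second, task-specific adapter (hypothetical name) into the same model
model.load_adapter("example-org/zephyr-some-other-task", adapter_name="other_task")

# Switching the active adapter is cheap and does not reload the 7B base weights
model.set_adapter("python_ru")   # Russian-language code generation
model.set_adapter("other_task")  # another specialized task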

The presented research contributes to current efforts in machine learning and natural language processing towards creating more versatile AI models. The observed potential of adapter models to improve the quality of responses in specific domains without extensive retraining helps meet the growing demand for multilingual natural language processing and code generation models.

In the conducted experiment, an assessment was made of the possibility of further fine-tuning a model that had already been trained on a set of instructions. Methods for additionally training a language model on new instructions without losing its ability to follow the original instructions are not well studied, which is why a decision was made to test the feasibility and effectiveness of such retraining. Based on the description of the LoRA method, this approach is possible because during such training the original weights of the model remain unchanged and are only supplemented with the coefficients of the adapter trained on the new instructions (the merged weights retain the same dimensions as those of the base model).
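The sketch below illustrates this property of LoRA: the frozen base weight matrix W is only supplemented by a low-rank update BA, and the merged matrix has exactly the same shape as the base one. The dimensions and scaling factor here are arbitrary illustrative values.

import torch

d, k, r, alpha = 4096, 4096, 16, 32

W = torch.randn(d, k)            # frozen base weight matrix
A = torch.randn(r, k) * 0.01     # trainable low-rank factor
B = torch.zeros(d, r)            # trainable low-rank factor (zero at initialization)

delta_W = (alpha / r) * (B @ A)  # low-rank update learned by the adapter
W_merged = W + delta_W           # base weights are untouched, only supplemented

assert W_merged.shape == W.shape  # merged matrix keeps the base model's dimensions
print(f"base params: {W.numel()}, extra adapter params: {A.numel() + B.numel()}")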

The experiment showed that multilingual adaptation of language models is viable and can be implemented using parameter-efficient methods on comparatively low-end hardware.

Pages: 32-40
For citation

Pronin C.B., Volosova A.V., Ostroukh A.V., Strogov Yu.N. Overcoming linguistic barriers in code assistants: creating a QLoRA adapter to improve support for Russian-language code writing instructions. Dynamics of complex systems. 2024. V. 18. № 1. P. 32−40. DOI: 10.18127/j19997493-202401-03 (in Russian)

References
  1. Tunstall L., Beeching E., Lambert N., Rajani N., Rasul K., Belkada Y., Huang S., von Werra L., Fourrier C., Habib N., Sarrazin N. Zephyr: Direct Distillation of LM Alignment. arXiv preprint. 2023. DOI 10.48550/arXiv.2310.16944.
  2. Hu E.J., Shen Y., Wallis P., Allen-Zhu Z., Li Y., Wang S., Wang L., Chen W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint. 2021. DOI 10.48550/arXiv.2106.09685.
  3. Dettmers T., Pagnoni A., Holtzman A., Zettlemoyer L. QLoRA: Efficient Finetuning of Quantized LLMs. arXiv preprint. 2023. DOI 10.48550/arXiv.2305.14314.
  4. Pronin C.B., Volosova A.V., Ostroukh A.V., Strogov Yu.N., Kurbatov V.V., Umarova A.S. Language model MexIvanov/zephyr-python-ru. 2023. URL: https://huggingface.co/MexIvanov/zephyr-python-ru.
  5. Volosova A.V. Tekhnologii iskusstvennogo intellekta v ULS-sistemah: Ucheb. posobie dlya vuzov. SPb.: Lan'. 2022. 308 s. (in Russian)
  6. Dataset HuggingFaceH4/CodeAlpaca_20K. 2023. URL: https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K.
  7. Beeching Edward, Fourrier Clémentine, Habib Nathan, Han Sheon, Lambert Nathan, Rajani Nazneen, Sanseviero Omar, Tunstall Lewis, Wolf Thomas. Open LLM Leaderboard. Hugging Face. 2023. URL: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
  8. Bahman A.A., Vasyunin M.A., Galkin V.A., Gapanyuk Yu.E. Podhod k generacii tekstov programm na osnove nejrosetevyh algoritmov. Dinamika slozhnyh sistem. 2023. T. 17. № 3. S. 58–63. DOI 10.18127/j19997493-202303-08 (in Russian).
  9. Volkov A.S., Chernen'kij M.V, Silant'eva E.Yu. Dvuhetapnaya procedura nejrosetevogo analiza tonal'nosti tekstov na russkom yazyke. Dinamika slozhnyh sistem. 2021. T. 15. № 4. S. 5–13. DOI 10.18127/j19997493-202104-01 (in Russian).
  10. Pronin C.B., Maksimychev O.I., Ostroukh A.V., Volosova A.V., Matukhina E.N. Creating Quantum Circuits for Training Perceptron Neural Networks on the Principles of Grover's Algorithm. 2022 Systems of Signals Generating and Processing in the Field of on Board Communications. 2022. p. 1–5. DOI 10.1109/IEEECONF53456.2022.9744279.
  11. Ostroukh A.V., Pronin C.B., Volosova A.V., Subbotin B.S., Smirnov P.I. Parametric Synthesis of Quantum Circuits for Training Perceptron Neural Networks. 2022 Intelligent Technologies and Electronic Devices in Vehicle and Road Transport Complex (TIRVED). 2022. p. 1–4. DOI 10.1109/TIRVED56496.2022.9965536.
  12. Volosova A.V. Ispol'zovanie tenzornoj modeli dlya obrabotki neopredelennosti v slozhnyh dinamicheskih sistemah. Computation Nanotechnology. 2023. T. 10. № 1. S. 79–87. DOI 10.33693/2313-223X-2023-10-1-79-87 (in Russian).
Date of receipt: 29.01.2024
Approved after review: 08.02.2024
Accepted for publication: 15.02.2024