Journal Dynamics of Complex Systems – XXI Century, № 3, 2023
Article in issue:
An approach to generating program code based on neural network algorithms
Type of article: scientific article
DOI: 10.18127/j19997493-202303-08
UDC: 004.89
Authors:

A.A. Bakhman1, M.A. Vasyunin2, V.A. Galkin3, Yu.E. Gapanyuk4

1–4 Bauman Moscow State Technical University (Moscow, Russia)

Abstract:

The development of intelligent assistants based on machine learning methods is a pressing task in many fields, including program text generation. An important problem is to develop, study, and compare neural network generative models for program text generation. Improving the quality of such models will make it easier for developers to solve routine tasks in software development.

Goal. To compare the performance of the neural network generative models InCoder and CodeGen.

The neural network generative models InCoder and CodeGen have been studied using both practical examples and the pass@k metric.
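The pass@k metric used here is the one defined in reference 9 (Chen et al.): for each problem, n samples are generated, c of them pass the unit tests, and pass@k estimates the probability that at least one of k drawn samples is correct, i.e. 1 − C(n−c, k)/C(n, k) averaged over problems. A minimal sketch of this unbiased estimator in Python (function and variable names are ours, for illustration):

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator (Chen et al., 2021): probability that
    # at least one of k samples drawn from n generations, of which
    # c are correct, passes the unit tests.
    if n - c < k:
        return 1.0  # every possible k-subset contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 of them correct, estimate pass@10
print(pass_at_k(n=200, c=37, k=10))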

Practical significance. The research makes it possible to determine the applicability of the neural network generative models InCoder and CodeGen to program text generation and to better understand the strengths and weaknesses of these models.
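Both models are released as open checkpoints and can be queried with the Hugging Face transformers library. Below is a minimal sketch of sampling a completion; the checkpoint name and generation parameters are illustrative assumptions, not the configuration reported in the paper:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; "facebook/incoder-1B" can be substituted
# to try InCoder instead of CodeGen.
model_id = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,                    # length of the completion
    do_sample=True,                       # sampling, as in pass@k evaluation
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # silence the padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))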

Pages: 58-63
For citation

Bakhman A.A., Vasyunin M.A., Galkin V.A., Gapanyuk Yu.E. An approach to generating program code based on neural network algorithms. Dynamics of Complex Systems – XXI Century. 2023. V. 17. № 3. P. 58–63. DOI: 10.18127/j19997493-202303-08 (in Russian).

References
  1. Myshenkov K.S., Nekula H. Using machine learning methods to predict neurological diseases. Dynamics of Complex Systems – XXI Century. 2022. V. 16. № 1. P. 66–74 (in Russian).
  2. Galkin V.A., Biushkin I.S., Zhuravleva U.V. Analysis of program code using ensemble machine learning methods. Dynamics of Complex Systems – XXI Century. 2020. V. 14. № 2. P. 34–41 (in Russian).
  3. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis. InCoder: A Generative Model for Code Infilling and Synthesis. URL: https://arxiv.org/abs/2204.05999 (accessed: 31.05.2023).
  4. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. URL: https://arxiv.org/abs/2203.13474 (accessed: 31.05.2023).
  5. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. Attention Is All You Need. URL: https://arxiv.org/abs/1706.03762 (accessed: 31.05.2023).
  6. Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng. A Simple and Effective Positional Encoding for Transformers. URL: https://arxiv.org/abs/2104.08698 (accessed: 31.05.2023).
  7. Datasets: openai_humaneval. URL: https://huggingface.co/datasets/openai_humaneval (accessed: 31.05.2023).
  8. LeetCode – Problems. URL: https://leetcode.com/problemset/all (accessed: 31.05.2023).
  9. Mark Chen, Jerry Tworek, Heewoo Jun et al. Evaluating Large Language Models Trained on Code. URL: https://arxiv.org/abs/2107.03374 (accessed: 31.05.2023).
Date of receipt: 19.05.2023
Approved after review: 30.05.2023
Accepted for publication: 26.06.2023