A.V. Ermolenko1, V.M. Polushkin2, M.V. Smirnov3
1,2 FSBI "46 Central Research Institute" of the Russian Ministry of Defense (Moscow, Russia)
3 Financial University under the Government of the Russian Federation (Moscow, Russia)
3 academy@fa.ru
The development of socio-economic systems requires a steadily increasing level of digitalization, including the use of artificial intelligence technologies. One of the promising directions in this area is the introduction of neural network systems based on competitive and generative learning into modern intelligent image recognition systems. Adversarial machine learning, covering both the development of attacks and the defenses against them, is a pressing research topic for a wide range of scientists and specialists. The aim of this work is to examine the practical features of algorithms for malicious attacks on neural network image recognition systems. The paper presents the results of an analysis of adversarial and malicious machine learning as it relates to the robustness of computer vision systems. Errors in the construction of neural network image recognition systems built on the same types of frameworks are considered, methods of artificial interference in the operation of neural network classifiers are analyzed, and an overview of malicious attack algorithms targeting various stages of neural network operation is given. It is emphasized that some types of attacks introduce a slight distortion of the frame that is almost imperceptible to humans yet radically changes the result of neural network classification. The examples of natural errors in the operation of neural network vision algorithms considered here underline the need for a deeper study of their side effects at the stages of experimental product development, acceptance of finished samples, system integration, and operation of automated safety and security systems.
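To illustrate the attack type mentioned above, in which a slight, almost imperceptible distortion of the frame radically changes the classification result, the sketch below implements the fast gradient sign method of Goodfellow et al. It is a minimal illustration rather than the article's own algorithm, and it assumes a PyTorch image classifier `model`, a batch of input images `image` scaled to [0, 1], integer class labels `label`, and an illustrative perturbation budget `epsilon`.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method:
    a perturbation bounded per pixel by epsilon that is barely visible
    to a human observer but can flip the classifier's decision."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true class
    loss.backward()
    # Step in the direction of the sign of the input gradient; epsilon
    # bounds the per-pixel change so the distortion stays subtle.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Comparing `model(image).argmax(1)` with `model(fgsm_perturb(model, image, label)).argmax(1)` shows whether the prediction flips under such a perturbation.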
Ermolenko A.V., Polushkin V.M., Smirnov M.V. Errors in the operation of neural network systems and malicious machine learning in technical vision tasks. Neurocomputers. 2024. V. 26. № 6. P. 23–30. DOI: https://doi.org/10.18127/j19998554-202406-04 (In Russian)