Journal Neurocomputers № 5, 2022
Article in issue:
Hardware in artificial intelligence
Type of article: scientific article
DOI: https://doi.org/10.18127/j19998554-202205-07
UDC: 004.31
Authors:

N.A. Andriyanov1

1 Financial University under the Government of the Russian Federation (Moscow, Russia)

Abstract:

Problem. Research in artificial intelligence, and in deep learning in particular, is currently growing rapidly. At the same time, neural networks are becoming increasingly complex, which also calls for efficient hardware solutions.

Objective. The main purpose of the article is to provide an analytical review of the various hardware solutions used in modern deep learning.

Results. A comparative analysis of computing hardware was carried out, covering the central processing unit (CPU), the graphics processing unit (GPU), the tensor processing unit (TPU), as well as field-programmable gate arrays (FPGA) and application-specific integrated circuits (ASIC). Examples of devices of each type are presented, and their advantages and disadvantages are noted.

Practical significance. The work will be useful to engineers and specialists in the field of deep learning; it helps outline the range of tasks for which each type of hardware device is appropriate.
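
As an illustration of how a deep learning workload can be directed to one or another computing device, the sketch below uses PyTorch to select between a GPU and the CPU and to time inference. This example is not taken from the article; the model, batch size, and layer dimensions are arbitrary placeholders.

# Illustrative sketch (not from the article): choosing a compute device
# for deep-learning inference in PyTorch and timing forward passes.
import time
import torch
import torch.nn as nn

# Use a CUDA-capable GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small stand-in network; real benchmarks would use an actual model.
model = nn.Sequential(
    nn.Linear(1024, 2048), nn.ReLU(),
    nn.Linear(2048, 1000),
).to(device).eval()

x = torch.randn(64, 1024, device=device)  # hypothetical input batch

with torch.no_grad():
    if device.type == "cuda":
        torch.cuda.synchronize()  # make sure setup kernels have finished
    start = time.perf_counter()
    for _ in range(100):
        _ = model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for all GPU kernels to complete
    elapsed = time.perf_counter() - start

print(f"{device.type}: {elapsed:.3f} s for 100 forward passes")

By contrast, TPUs are typically reached through framework back ends (for example, TensorFlow or JAX device placement), while FPGA and ASIC accelerators are usually targeted through vendor toolchains and inference runtimes rather than directly from the training framework.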

Pages: 67-73
For citation:

Andriyanov N.A. Hardware in artificial intelligence. Neurocomputers. 2022. V. 24. № 5. P. 67-73. DOI: https://doi.org/10.18127/j19998554-202205-07 (in Russian)

Date of receipt: 18.08.2022
Approved after review: 01.09.2022
Accepted for publication: 22.09.2022