Journal «Science Intensive Technologies», №2, 2024
Article in issue:
Enhanced vision imaging algorithms for remote robotic systems control
Type of article: other
UDC: 621.397
Authors:

S.N. Kirillov – Dr.Sc.(Eng.), Professor, Head of Department «Radio Control and Communications»,
Ryazan State Radio Engineering University
E-mail: kirillov.lab@gmail.com
P.S. Pokrovsky – Ph.D.(Eng.), Associate Professor, Department «Radio Control and Communications»,
Ryazan State Radio Engineering University
E-mail: paulps@list.ru
A.A. Baukov – Engineer, Department «Radio Control and Communications»,
Ryazan State Radio Engineering University
E-mail: baukov.andrej@yandex.ru
P.N. Skonnikov – Engineer, Department «Radio Control and Communications»,
Ryazan State Radio Engineering University
E-mail: skonnikovpn@yandex.ru

Abstract:

Issues of implementing computer vision systems in ground robotic systems are considered. A block diagram of an enhanced vision system that generates an image for remote control of a robotic platform is presented. Algorithms for video quality enhancement and multispectral image fusion are proposed, and the results of their implementation are presented. It is shown that the proposed fusion algorithm outperforms known methods by 0.068, 0.186 and 0.164 in the Fast-FMI, QE and SSIM metrics, respectively.

For effective remote control of ground-based robotic systems, it is expedient to use multispectral enhanced vision systems that ensure visibility in difficult meteorological conditions. Such systems fuse the channels of different spectral ranges and improve image quality. This makes it possible to combine in one frame the objects present in images of different ranges and to significantly improve visibility in low light and in the presence of interfering factors such as fog, smoke, rain and snow.

In general, the task of obtaining an enhanced vision image includes the following steps:

1) improving the quality of the original images in individual channels;

2) geometric transformation of images of different channels;

3) combining transformed images into a single frame;

4) improving the quality of the resulting image.
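The four steps above can be sketched as a simple processing pipeline. This is an illustrative skeleton, not the authors' implementation: the function names are hypothetical, contrast stretching stands in for the quality-improvement stages, registration is left as an identity placeholder, and fusion is a plain average.

```python
import numpy as np

def enhance_channel(img):
    """Placeholder quality improvement: min-max contrast stretch to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def register(img, reference):
    """Placeholder geometric transformation; identity (channels assumed aligned)."""
    return img

def fuse(images):
    """Placeholder fusion: plain average of the aligned channels."""
    return np.mean(images, axis=0)

def enhanced_vision_frame(channels):
    """Steps 1-4: enhance each channel, register, fuse, enhance the result."""
    enhanced = [enhance_channel(c) for c in channels]
    aligned = [register(c, enhanced[0]) for c in enhanced]
    fused = fuse(aligned)
    return enhance_channel(fused)
```

Each placeholder would be replaced by the corresponding algorithm discussed below (histogram-based contrast enhancement, channel registration, per-pixel weighted fusion).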

Here, quality improvement refers to contrast and sharpness enhancement, as well as brightness correction. The contrast limited adaptive histogram equalization (CLAHE) algorithm is now widely used: it expands the dynamic range of brightness values and thereby increases contrast. A modification of this algorithm that provides selectivity over different frame areas is also known. A contrast enhancement algorithm is proposed which, like CLAHE, is based on limiting and equalizing pixel brightness histograms, but additionally includes a new procedure for isolating «problem» image areas affected by fog or smoke. Other features aimed at reducing the drawbacks of the known approaches are proposed as well.
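The core per-tile step of CLAHE can be illustrated as follows: the tile histogram is clipped at a limit, the clipped excess is redistributed, and the resulting CDF is used as the brightness mapping. This is a minimal single-tile sketch (full CLAHE also bilinearly interpolates the mappings between neighboring tiles, which is omitted here); the function name and clip limit are illustrative.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit=40, n_bins=256):
    """Equalize an 8-bit tile using a clipped histogram: the clipping
    limits the contrast gain, and the excess counts are redistributed
    uniformly before the CDF mapping is built."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()            # first occupied bin
    cdf = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0
    return np.clip(cdf, 0, 255)[tile].astype(np.uint8)
```

Applied to a low-contrast tile, the mapping spreads the occupied brightness range over the full 0–255 scale while the clip limit keeps noise in flat regions from being over-amplified.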

Combining several images of different spectral ranges into a single frame is necessary so that the details present in the original images appear in the resulting image. One known approach that provides a relatively high fusion quality assessment is weighted summation with an evaluation of information content. A fusion algorithm has been proposed whose main principle is the weighted summation of images from different channels with individual weighting coefficients for each pixel.
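A per-pixel weighted summation of this kind can be sketched for two co-registered channels. The exact information-content measure used by the authors is not given here; as an assumption for illustration, local variance over a small neighborhood is used as the per-pixel informativeness proxy, and the weights are normalized to sum to one.

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance over a k x k neighborhood (sliding windows,
    reflect padding) as a simple per-pixel information-content proxy."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    mean = win.mean(axis=(-2, -1))
    mean_sq = (win ** 2).mean(axis=(-2, -1))
    return mean_sq - mean ** 2

def fuse_weighted(img_a, img_b, k=3, eps=1e-9):
    """Per-pixel weighted sum of two co-registered channels; the weight
    of each channel at a pixel is proportional to its local variance."""
    va, vb = local_variance(img_a, k), local_variance(img_b, k)
    wa = (va + eps) / (va + vb + 2 * eps)
    return wa * img_a + (1.0 - wa) * img_b
```

In flat regions both variances vanish and the channels are averaged equally; near edges the channel carrying the detail receives the larger weight, which is the intent of informativeness-driven weighting.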

According to the experiments, the proposed fusion and contrast enhancement algorithms provide, on average, the best quality assessments of the processed images compared to the known approaches.

Pages: 30-39
References
  1. Kirillov S.N. i dr. Intellektualnaya sistema zhiznedeyatelnosti mobilnogo nazemnogo robototekhnicheskogo kompleksa. Vestnik Ryazanskogo gosudarstvennogo radiotekhnicheskogo universiteta. 2017. № 60. S. 6−16.
  2. Kholopov I.S. Realizatsiya algoritma formirovaniya tsvetnogo izobrazheniya po signalam monokhromnykh videodatchikov vidimogo i dlinnovolnovogo infrakrasnogo diapazonov v tsvetovom prostranstve YCbCr. Kompyuternaya optika. 2016. T. 40. № 2. S. 266−274.
  3. Efimov A.I. i dr. Obrabotka izobrazhenii v mnogospektralnykh sistemakh tekhnicheskogo zreniya. Vestnik Ryazanskogo gosudarstvennogo radiotekhnicheskogo universiteta. 2017. № 60. S. 83−92.
  4. Myatov G.N. i dr. Primenenie nechetkikh mer podobiya v zadache sovmeshcheniya izobrazhenii poverkhnosti zemli. Vestnik Ryazanskogo gosudarstvennogo radiotekhnicheskogo universiteta. 2013. № 44. S. 18−26.
  5. Zuiderveld K. Contrast limited adaptive histogram equalization. Graphics gems. 1994. V. 4. P. 474−485.
  6. Fisenko T.Yu., Fisenko V.T. Issledovanie i razrabotka metodov uluchsheniya podvodnykh izobrazhenii. Sb. trudov X Mezhdunar. konf. «Prikladnaya optika – 2012». 2012. T. 3. S. 294−298.
  7. Gonsales R., Vuds R. Tsifrovaya obrabotka izobrazhenii. Izd. 3-e, ispr. i dop. M.: Tekhnosfera. 2012. 1104 s.
  8. Jia Z., Wang H., Caballero R.E., Xiong Z., Zhao J., Finn A. A two-step approach to see-through bad weather for surveillance video quality enhancement. Machine Vision and Applications. 2012. V. 23. № 6. P. 1059−1082.
  9. Asatryan D.G. Otsenivanie stepeni razmytosti izobrazheniya putem analiza gradientnogo polya. Kompyuternaya optika. 2017. T. 41. № 6. S. 957−962. DOI: 10.18287/2412-6179-2017-41-6-957-962.
  10. Isaenko O.K., Urbakh V.Yu. Razdelenie smesei raspredelenii veroyatnostei na ikh sostavlyayushchie. Itogi nauki i tekhniki. Ser. Teor. veroyatn. Mat. stat. Teor. kibernet. 1976. T. 13. S. 37−58.
  11. Insarov V.V., Tikhonova S.V., Mikhailov I.I. Problemy postroeniya sistem tekhnicheskogo zreniya, ispolzuyushchikh kompleksirovanie informatsionnykh kanalov razlichnykh spektralnykh diapazonov. Informatsionnye tekhnologii. 2014. Prilozhenie k № 3. S. 1−32.
  12. Bondarenko M.A., Drynkin V.N. Otsenka informativnosti kombinirovannykh izobrazhenii v multispektralnykh sistemakh tekhnicheskogo zreniya. Programmnye sistemy i vychislitelnye metody. 2016. № 1. S. 64−79.
  13. Wang Z., Bovik A.C., Sheikh H.R., Simoncelli E.P. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing. 2004. V. 13. № 4. P. 600−612.
  14. Xydeas C.S., Petrovic V. Objective image fusion performance measure. Electronics letters. 2000. V. 36. № 4. P. 308−309.
  15. Haghighat M., Razian M.A. Fast-FMI: non-reference image fusion metric. IEEE 8th International Conference on Application of Information and Communication Technologies (AICT). 2014. P. 1−3.
  16. Li B., Ren W., Wang Z. RESIDE Dataset: OTS (Outdoor Training Set). URL: https://sites.google.com/view/reside-dehaze-datasets (accessed 01.02.2019).
  17. Drynkin V.N., Falkov E.Ya., Tsareva T.I. Formirovanie kombinirovannogo izobrazheniya v dvukhzonalnoi bortovoi aviatsionno-kosmicheskoi sisteme. Mekhanika, upravlenie i informatika. 2012. № 9. S. 33−39.
  18. Yang C. et al. A novel similarity based quality metric for image fusion. Information Fusion. 2008. V. 9. № 2. P. 156−160.
Date of receipt: April 10, 2019