Journal Radioengineering №2, 2020
Article in number:
Constructing a depth map using a camera with a wide-angle fisheye lens
Type of article: scientific article
DOI: 10.18127/j00338486-202002(03)-07
UDC: 004.896
Authors:

V.P. Kirnos – Senior Lecturer, Department of Infocommunication and Radiophysics, P.G. Demidov Yaroslavl State University. E-mail: crafter76@gmail.com

V.A. Antipov – Post-graduate Student, Department of Infocommunication and Radiophysics, P.G. Demidov Yaroslavl State University. E-mail: valant777@gmail.com

V.A. Kokovkina – Assistant, Department of Infocommunication and Radiophysics, P.G. Demidov Yaroslavl State University. E-mail: thief_rus@yahoo.com

A.L. Priorov – Dr.Sc.(Eng.), Associate Professor, Department of Infocommunication and Radiophysics, P.G. Demidov Yaroslavl State University. E-mail: andcat@yandex.ru

E.D. Gurianov – Post-graduate Student, Department of Infocommunication and Radiophysics, P.G. Demidov Yaroslavl State University. E-mail: guryanoved@yandex.ru

Abstract:

The article describes an algorithm for constructing a depth map that uses, as its image source, a camera with a wide-angle fisheye lens, one of the types of omnidirectional optical systems, rather than a conventional camera.

The first step, necessary for using this type of camera in any technical vision task, is to build a model of the fisheye camera, which is required for the correct processing of images obtained from it. A spherical camera model, based on a spherical projection, is proposed. The second step is to calibrate the camera. The calibration method for the spherical camera model is as follows. Two planes can be distinguished in the spherical camera model: the camera sensor-array plane and the image plane. An affine transformation maps coordinates in the system associated with the sensor-array plane to the system associated with the image plane. In effect, camera calibration amounts to finding the corresponding matrices and nonlinear functions.
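The two-plane model described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: the affine matrix, principal point, and polynomial coefficients are made-up example values standing in for real calibration results, and the polynomial form follows the Scaramuzza-style omnidirectional model the abstract's cited literature uses.

```python
import numpy as np

# Affine transform from the sensor-array plane to the image plane:
# p_image = A @ p_sensor + t. Identity A and a 1280x960 principal point
# are assumed here purely for illustration.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
t = np.array([640.0, 480.0])  # principal point (pixels), assumed

# Nonlinear part of the model: z = f(rho) = a0 + a1*rho + a2*rho^2 + ...
# where rho is the radial distance on the sensor plane. Coefficients are
# placeholders, not a real calibration result.
coeffs = np.array([-500.0, 0.0, 8e-4, 0.0, 1e-7])  # a0..a4

def pixel_to_ray(u, v):
    """Lift an image pixel to a unit-norm 3D viewing ray on the sphere."""
    # Invert the affine transform: image plane -> sensor-array plane
    x, y = np.linalg.solve(A, np.array([u, v]) - t)
    rho = np.hypot(x, y)                  # radial distance on the sensor
    z = np.polyval(coeffs[::-1], rho)     # f(rho) gives the z-component
    ray = np.array([x, y, z])
    return ray / np.linalg.norm(ray)

ray = pixel_to_ray(900.0, 700.0)
```

Calibration, in this picture, is exactly the search for `A`, `t`, and `coeffs` that minimize reprojection error over a set of known calibration points.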

The fisheye camera described in the article is calibrated using the Zhengyou Zhang method. The polynomial function and the average reprojection error obtained during calibration are given. After calibration, with the full spherical camera model known, various technical vision algorithms can be applied. Using the model described above, the article presents an implementation of an algorithm for constructing a depth map. To construct a depth map from a stereo pair of images, a matching point in the second image must be found for each point in the first, which is a non-trivial task when using a camera with a wide-angle lens. The search for matching points should be carried out along the epipolar line, which in this case is a circle. Searching along such curves has a high computational complexity compared to searching along straight lines. For simplification, therefore, the spherical image must be transformed so that the epipolar lines become straight and the usual correlation-based algorithm for constructing a depth map can be applied. This is achieved by converting the spherical image into a panoramic image.
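The spherical-to-panoramic unwrapping can be sketched as a remapping from (azimuth, polar angle) panorama coordinates back to fisheye pixel coordinates. The sketch below assumes an ideal equidistant fisheye (radius r = f·θ) with nearest-neighbor sampling; the article's calibrated polynomial model would replace that mapping in practice, and the focal value and output size are arbitrary assumptions.

```python
import numpy as np

def fisheye_to_panorama(img, f, out_h=256, out_w=1024):
    """Unwrap a fisheye image into a panoramic (azimuth x polar-angle)
    image so that epipolar circles become approximately straight
    scanlines. Assumes an ideal equidistant projection r = f * theta."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Panorama grid: column -> azimuth phi, row -> polar angle theta
    phi = np.linspace(-np.pi, np.pi, out_w, endpoint=False)
    theta = np.linspace(1e-3, np.pi / 2, out_h)
    phi_g, theta_g = np.meshgrid(phi, theta)
    r = f * theta_g                        # equidistant model: sensor radius
    # Nearest-neighbor lookup back into the fisheye image
    map_x = np.clip((cx + r * np.cos(phi_g)).astype(np.int64), 0, w - 1)
    map_y = np.clip((cy + r * np.sin(phi_g)).astype(np.int64), 0, h - 1)
    return img[map_y, map_x]

# Toy usage: unwrap a synthetic 480x480 gradient "fisheye" frame
pano = fisheye_to_panorama(np.arange(480 * 480).reshape(480, 480), f=150.0)
```

After this transform, a matching point constrained to an epipolar circle in the fisheye image lies on a straight scanline of the panorama, so standard scanline stereo matching applies.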

Next, a local stereo matching algorithm is used, in which the depth map is determined by matching pixels along the epipolar line using the sum of absolute differences (SAD). The accuracy of the depth map estimate often suffers in extreme scenarios, so post-processing is necessary to improve it. At the post-processing stage, weighted least-squares filtering is applied.
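A minimal SAD-based local matcher over scanline-aligned images can be sketched as follows. This is an illustrative baseline only: window size, disparity range, and the synthetic test pair are assumptions, and the weighted least-squares post-filtering stage is omitted.

```python
import numpy as np

def box_sum(a, half):
    """Sum of a (2*half+1)^2 window around each pixel, edge-padded,
    computed with integral images (cumulative sums)."""
    k = 2 * half + 1
    p = np.pad(a, half, mode='edge')
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def sad_disparity(left, right, max_disp=16, half=2):
    """Local stereo matching: for each pixel, pick the disparity that
    minimizes the windowed sum of absolute differences (SAD) along the
    same scanline. Winner-take-all, no subpixel refinement."""
    h, w = left.shape
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int64)
    best = np.full((h, w), np.inf)
    for d in range(max_disp):
        # Per-pixel cost |L(x) - R(x - d)|; columns with no valid match
        # get a high constant cost.
        diff = np.full((h, w), 255.0)
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        cost = box_sum(diff, half)        # aggregate cost over the window
        better = cost < best
        disp[better] = d
        best[better] = cost[better]
    return disp

# Toy usage: a horizontal gradient shifted by 3 pixels between the views
left = np.tile(np.arange(40.0), (20, 1))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right, max_disp=8, half=2)
```

In a fuller pipeline the raw winner-take-all map would then be smoothed by an edge-preserving weighted least-squares filter, as the abstract describes.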

The algorithm is evaluated on specially designed scenes to determine the accuracy of the depth information recovered from the resulting depth map. Depth estimation error values were obtained as a function of various factors. In addition, a three-dimensional scene was reconstructed using the depth map and the spherical camera model.
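The final reconstruction step can be sketched as back-projection: each panorama pixel defines a unit viewing direction from its azimuth and polar angle, and scaling that direction by the estimated depth yields a 3D point. The angular grid below mirrors the illustrative unwrapping convention assumed earlier, not the article's exact parameterization.

```python
import numpy as np

def panorama_depth_to_points(depth):
    """Back-project a panoramic depth map to a 3D point cloud:
    point = depth * unit viewing direction (spherical coordinates)."""
    h, w = depth.shape
    phi = np.linspace(-np.pi, np.pi, w, endpoint=False)   # azimuth per column
    theta = np.linspace(1e-3, np.pi / 2, h)               # polar angle per row
    phi_g, theta_g = np.meshgrid(phi, theta)
    dirs = np.stack([np.sin(theta_g) * np.cos(phi_g),
                     np.sin(theta_g) * np.sin(phi_g),
                     np.cos(theta_g)], axis=-1)           # unit directions
    return dirs * depth[..., None]                        # scale by depth

# Toy usage: a constant-depth (2 m) panorama yields points on a sphere
pts = panorama_depth_to_points(np.full((64, 256), 2.0))
```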

Pages: 64-71
References
  1. Song M., Watanabe H. Robust 3D reconstruction with omni-directional camera based on structure from motion. 2018.
  2. Li S. Binocular spherical stereo. IEEE Transactions on Intelligent Transportation Systems. 2008. 9(4): 589−600.
  3. Scaramuzza D., Martinelli A. and Siegwart R. A flexible technique for accurate omnidirectional camera calibration and structure from motion. Proceedings of IEEE International Conference of Vision Systems (ICVS'06). New York. 5−7 January 2006.
  4. Zhang Z. A flexible new technique for camera calibration. Microsoft Research, One Microsoft Way, Redmond, WA 98052-6399. USA. 1998. P. 1−21.
  5. Igbinedion I., Han H. 3D stereo reconstruction using multiple spherical views.
  6. Scaramuzza D. Omnidirectional vision: from calibration to robot motion estimation. PhD Thesis. ETH Zurich. Zurich. 22 February 2008.
  7. Prozorov A.V., Priorov A.L. Trekhmernaya rekonstruktsiya stseny s primeneniem monokulyarnogo zreniya [Three-dimensional scene reconstruction using monocular vision]. Izmeritelnaya tekhnika. 2014. S. 24−28. (in Russian)
  8. Prozorov A.V., Priorov A.L. Predobrabotka karty glubiny dlya povysheniya tochnosti pozitsionirovaniya kamery v zadache odnovremennoi lokalizatsii i kartirovaniya [Depth map preprocessing for improving camera positioning accuracy in simultaneous localization and mapping]. Uspekhi sovremennoi radioelektroniki. 2016. № 4. S. 66−71. (in Russian)
Date of receipt: January 8, 2020