Journal Highly Available Systems № 3, 2024
Article in issue:
Research and analysis of existing point cloud postprocessing methods
Type of article: scientific article
DOI: 10.18127/j20729472-202403-05
UDC: 004.92
Authors:

Yu.A. Maniakov 1, P.O. Arkhipov 2, P.L. Stavtsev 3

1–3 Orel Branch of Federal Research Center «Computer Science and Control» of the RAS (Orel, Russia)
1 maniakov_yuri@mail.ru; 2 arpaul@mail.ru; 3 pavelstavcev@gmail.com

Abstract:

One of the most in-demand topics in computer vision is 3D reconstruction, the goal of which is to determine the three-dimensional geometry and structure of scene objects from information supplied by various sensors. 3D reconstruction technologies are used to create and visualize three-dimensional plans of premises, architectural structures, settlements, and the interior spaces of geological formations. Such technologies can also be used to build systems for presenting and transmitting visual information in remote control, augmented reality, user interfaces, decision support, monitoring, quality control, scientific systems for biomechanical analysis, and spatial navigation subsystems. Across this wide range of applications, the primary result of most 3D reconstruction methods is a point cloud: an unstructured collection of point coordinates in three-dimensional space, optionally carrying color information. Due to limitations of the 3D reconstruction methods, the equipment, and the underlying technologies, the resulting point clouds may contain errors of two main classes: noise and visual incompleteness. Noise refers to random, insignificant distortions of the point cloud shape; in 3D reconstruction tasks it not only degrades the visual perception of the model of the studied object but also complicates further work with it. Visual incompleteness is the partial loss of information about regions of the object. These errors distort the final results and reduce calculation accuracy and the quality of visualization of three-dimensional models. The development of methods that reduce noise and incompleteness in 3D reconstruction results is therefore in high demand.
In most modern research these problems are addressed in two unrelated stages: noise reduction and outlier removal (point cloud denoising), and restoration of the cloud's completeness (point cloud completion). This work studies and analyzes the most relevant existing state-of-the-art methods for both stages and formulates conclusions aimed at developing a unified, comprehensive post-processing approach that combines point cloud denoising and point cloud completion.
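As a minimal illustration of the denoising stage discussed above, the following pure-Python sketch implements statistical outlier removal, a classical (pre-deep-learning) baseline: points whose mean distance to their k nearest neighbors is far above the cloud-wide average are discarded. This is not a method from the cited papers; the function names and parameters (`k`, `std_ratio`) are our own illustrative choices.

```python
import math
import random

def knn_mean_dist(points, k):
    """Mean distance from each point to its k nearest neighbors (brute force)."""
    result = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        result.append(sum(dists[:k]) / k)
    return result

def remove_outliers(points, k=8, std_ratio=2.0):
    """Keep points whose k-NN mean distance is within mu + std_ratio * sigma."""
    d = knn_mean_dist(points, k)
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / len(d))
    threshold = mu + std_ratio * sigma
    return [p for p, di in zip(points, d) if di <= threshold]

# Demo: a dense synthetic cluster plus one gross outlier.
random.seed(0)
cloud = [(random.gauss(0, 0.05), random.gauss(0, 0.05), random.gauss(0, 0.05))
         for _ in range(200)]
cloud.append((5.0, 5.0, 5.0))  # gross outlier far from the cluster
clean = remove_outliers(cloud, k=8, std_ratio=2.0)
print(len(cloud), len(clean))  # the outlier is filtered out: 201 -> 200
```

The brute-force neighbor search is O(n²) and serves only to make the idea concrete; practical pipelines use spatial indices (k-d trees) for the neighbor queries, and the deep-learning methods surveyed in the article learn the filtering criterion instead of thresholding a hand-crafted statistic.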

Pages: 51-58
For citation:

Maniakov Yu.A., Arkhipov P.O., Stavtsev P.L. Research and analysis of existing point cloud postprocessing methods. Highly Available Systems. 2024. V. 20. № 3. P. 51−58. DOI: https://doi.org/10.18127/j20729472-202403-05 (in Russian)

References
  1. Sbrolli C. State of the Art on: 3D Object Reconstruction. Politecnico di Milano, Honours Programme. March 2022.
  2. Zhang F., Zhang C., Yang H., Zhao L. Point cloud denoising with principal component analysis and a novel bilateral filter. Traitement du Signal. 2019. V. 36 (5). P. 393–398.
  3. Lipman Y., Cohen-Or D., Levin D., Tal-Ezer H. Parameterization-free projection for geometry reconstruction. ACM Transactions on Graphics. 2007. V. 26 (3). P. 22:1–22:5.
  4. Zhou L., Sun G., Li Y., Li W., Su Z. Point cloud denoising review: from classical to deep learning-based approaches. 2021.
  5. Rakotosaona M.-J., La Barbera V., Guerrero P., Mitra N.J., Ovsjanikov M. PointCleanNet: Learning to Denoise and Remove Outliers from Dense Point Clouds. arXiv:1901.01060v3. 2019. DOI: 10.48550/arXiv.1901.01060.
  6. Roveri R., Öztireli A.C., Pandele I., Gross M. PointProNets: Consolidation of Point Clouds with Convolutional Neural Networks. EUROGRAPHICS. 2018. V. 37. № 2.
  7. Zhang D., Lu X., Qin H., He Y. Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling. arXiv:2002.05968v2. 2020. DOI: 10.48550/arXiv.2002.05968.
  8. Hermosilla P., Ritschel T., Ropinski T. Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning. arXiv:1904.07615v2. 2019. DOI: 10.48550/arXiv.1904.07615.
  9. Chen S., Duan C., Yang Y., Li D., Feng C., Tian D. Deep Unsupervised Learning of 3D Point Clouds via Graph Topology Inference and Filtering. arXiv:1905.04571v2. 2019. DOI: 10.48550/arXiv.1905.04571.
  10. Yuan W., Khot T., Held D., Mertz Ch., Hebert M. PCN: Point Completion Network. arXiv:1808.00671v3. 2019. DOI: 10.48550/arXiv.1808.00671.
  11. Wu W., Xie Z., Xu Y., Zeng Z., Wan J. Point Projection Network: A Multi-View-Based Point Completion Network with Encoder-Decoder Architecture. Remote Sens. 2021. V. 13. № 23. P. 4917. DOI: 10.3390/rs13234917.
  12. Huang Z., Yu Y., Xu J., Ni F., Le X. PF-Net: Point Fractal Network for 3D Point Cloud Completion. arXiv:2003.00410v1. 2020. DOI: 10.48550/arXiv.2003.00410.
  13. Sarmad M., Lee H.J., Kim Y.M. RL-GAN-Net: A Reinforcement Learning Agent Controlled GAN Network for Real-Time Point Cloud Shape Completion. arXiv:1904.12304v1. 2019. DOI: 10.48550/arXiv.1904.12304.
  14. Liu M., Sheng L., Yang S., Shao J., Hu S.-M. Morphing and Sampling Network for Dense Point Cloud Completion. arXiv:1912.00280v1. 2019. DOI: 10.48550/arXiv.1912.00280.
  15. Alliegro A., Valsesia D., Fracastoro G., Magli E., Tommasi T. Denoise and Contrast for Category Agnostic Shape Completion. arXiv:2103.16671v1. 2021. DOI: 10.48550/arXiv.2103.16671.
Date of receipt: 12.08.2024
Approved after review: 26.08.2024
Accepted for publication: 29.08.2024