Peer-Reviewed

Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors

Received: 22 December 2021     Accepted: 7 January 2022     Published: 15 January 2022
Abstract

As a consumer-grade portable depth-image acquisition device, the depth camera is widely used in computer vision applications such as SLAM, autonomous driving, and environment perception. However, because of the device's limited viewing angle, a complete 3D point cloud of the target cannot be captured in a single shot. Point cloud registration can align two frames of point clouds into a common coordinate frame. A multi-frame point cloud fusion method based on key points and registration is therefore proposed. First, a point cloud is computed from the depth map produced by the depth camera, and an improved point cloud filtering algorithm based on the inner product of normal vectors is used to remove background and noise points. Second, four key point detection algorithms and three registration algorithms with different underlying principles are applied to the point cloud data, and the applicable scenarios and limitations of each are analyzed. Finally, a multi-frame fusion algorithm stitches the point clouds together, and the redundant points created by the overlap are filtered out to yield a complete point cloud of the object. Experiments on a target object captured with the depth camera show that the proposed method obtains complete point cloud data of the target robustly.
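The pipeline described in the abstract maps naturally onto open-source point cloud tooling. The following is a minimal sketch of the main steps, written with the Open3D library rather than the authors' original implementation; the camera intrinsics, the normal-angle threshold in the filter, and the FPFH-plus-ICP registration pair (one of several keypoint/registration combinations the paper evaluates) are illustrative assumptions, not values or choices taken from the paper.

# Sketch of the multi-frame fusion pipeline from the abstract, using Open3D.
# Intrinsics, thresholds, and voxel sizes below are placeholders, not paper values.
import numpy as np
import open3d as o3d

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into a 3D point cloud (pinhole camera model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale   # raw units -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                     # drop invalid (zero-depth) pixels
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    return pcd

def filter_background(pcd, view_dir=np.array([0.0, 0.0, 1.0]), max_angle_deg=80):
    """Rough stand-in for the paper's normal-vector inner-product filter:
    discard points whose normals are nearly perpendicular to the viewing
    direction (typically background or grazing-angle noise), then remove
    statistical outliers."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    cos_ang = np.abs(np.asarray(pcd.normals) @ view_dir)  # inner product with view ray
    keep = np.where(cos_ang > np.cos(np.deg2rad(max_angle_deg)))[0]
    pcd = pcd.select_by_index(keep)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd

def pairwise_register(source, target, voxel=0.005):
    """Coarse-to-fine registration: FPFH features + RANSAC for a coarse
    alignment, refined with point-to-plane ICP."""
    def preprocess(p):
        p_down = p.voxel_down_sample(voxel)
        p_down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            p_down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return p_down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation

def fuse_frames(clouds, voxel=0.005):
    """Chain pairwise registrations frame by frame, then voxel-downsample the
    merged cloud to filter out the redundant points created by overlap."""
    fused = clouds[0]
    for frame in clouds[1:]:
        T = pairwise_register(frame, fused)
        fused += frame.transform(T)
    return fused.voxel_down_sample(voxel)

The voxel downsampling in the final step is one simple way to realize the "redundant point filtering" the abstract mentions; any deduplication of near-coincident points in the overlap region would serve the same purpose.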

Published in International Journal of Sensors and Sensor Networks (Volume 10, Issue 1)
DOI 10.11648/j.ijssn.20221001.11
Page(s) 1-6
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2022. Published by Science Publishing Group

Keywords

Point Cloud, Kinect, Filter, Registration

References
[1] Liu Lexuan. Lane Offset Survey for One-Lane Horizontal Curvatures Using Binocular Stereo Vision Measurement System [J]. Journal of Surveying Engineering, 2021, 147 (4).
[2] Zhao Jianping, Feng Chang, Cai Gen, Zhang Run, Chen Zhibo, Cheng Yong, Xu Bing. Three-dimensional reconstruction and measurement of fuel assemblies for sodium-cooled fast reactor using linear structured light [J]. Annals of Nuclear Energy, 2021, 160.
[3] Zhenzhou Wang, Qi Zhou, YongCan Shuang. Three-dimensional reconstruction with single-shot structured light dot pattern and analytic solutions [J]. Measurement, 2020, 151 (C).
[4] Li Xingdong, Gao Zhiming, Chen Xiandong, Sun Shufa, Liu Jiuqing. Research on Estimation Method of Geometric Features of Structured Negative Obstacle Based on Single-Frame 3D Laser Point Cloud [J]. Information, 2021, 12 (6).
[5] Yevgeny Milanov, Vladimir Badenko, Vladimir Yadykin, Leonid Perlovsky. Method for clustering and identification of objects in laser scanning point clouds using dynamic logic [J]. The International Journal of Advanced Manufacturing Technology, 2021, 117 (7-8).
[6] M. Ruiz-Rodriguez, V. I. Kober, V. N. Karnaukhov, M. G. Mozerov. Algorithm for Three-Dimensional Reconstruction of Nonrigid Objects Using a Depth Camera [J]. Journal of Communications Technology and Electronics, 2020, 65 (6).
[7] Baibing Ji, Qixin Cao. A monocular real-time 3D reconstruction system for robot grasping [J]. Machinery Design and Manufacturing, 2021 (09): 287-290. DOI: 10.19356/j.cnki.1001-3997.2021.09.064.
[8] Shuo Sun, Xiaoqiang Ji, Dan Liu. Design of three-dimensional face reconstruction system for facial virtual plastic surgery [J]. Science Technology and Engineering, 2021, 21 (25): 10806-10813.
[9] Zhiming Huang. Research on Transparent Obstacle Detection Technology Used for Visual Aid for the Blind [D]. Zhejiang University, 2020. DOI: 10.27461/d.cnki.gzjdx.2020.003506.
[10] Wen Dai. 3D measurement of complex workpiece based on depth camera [D]. Hunan University, 2019. DOI: 10.27135/d.cnki.ghudu.2019.003470.
[11] B. Rister, M. A. Horowitz and D. L. Rubin, "Volumetric Image Registration from Invariant Keypoints," in IEEE Transactions on Image Processing, vol. 26, no. 10, pp. 4900-4910, Oct. 2017. DOI: 10.1109/TIP.2017.2722689.
[12] Ivan Sipiran, Benjamin Bustos. Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes [J]. The Visual Computer, 2011, 27 (11).
[13] B. Steder, R. B. Rusu, K. Konolige and W. Burgard, "Point feature extraction on 3D range scans taking into account object boundaries," 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 2601-2608, DOI: 10.1109/ICRA.2011.5980187.
[14] Zhong Y. Intrinsic shape signatures: a shape descriptor for 3D object recognition [C]. IEEE International Conference on Computer Vision Workshops, 2009: 689-696.
[15] Z. Yang, X. Wang and J. Hou, "A 4PCS Coarse Registration Algorithm Based on ISS Feature Points," 2021 40th Chinese Control Conference (CCC), 2021, pp. 7371-7375, DOI: 10.23919/CCC52363.2021.9549486.
[16] Rusu R B, Blodow N, Beetz M. Fast point feature histograms (FPFH) for 3D registration [C] // IEEE International Conference on Robotics and Automation. Kobe: IEEE, 2009: 3212-3217.
[17] P. Biber and W. Strasser, "The normal distributions transform: a new approach to laser scan matching," Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), 2003, pp. 2743-2748 vol. 3, DOI: 10.1109/IROS.2003.1249285.
Cite This Article
  • APA Style

    Yang Zhongfan, Wang Xiaogang, Hou Jing. (2022). Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors. International Journal of Sensors and Sensor Networks, 10(1), 1-6. https://doi.org/10.11648/j.ijssn.20221001.11


  • ACS Style

    Yang Zhongfan; Wang Xiaogang; Hou Jing. Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors. Int. J. Sens. Sens. Netw. 2022, 10(1), 1-6. doi: 10.11648/j.ijssn.20221001.11


  • AMA Style

    Yang Zhongfan, Wang Xiaogang, Hou Jing. Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors. Int J Sens Sens Netw. 2022;10(1):1-6. doi: 10.11648/j.ijssn.20221001.11


  • @article{10.11648/j.ijssn.20221001.11,
      author = {Yang Zhongfan and Wang Xiaogang and Hou Jing},
      title = {Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors},
      journal = {International Journal of Sensors and Sensor Networks},
      volume = {10},
      number = {1},
      pages = {1-6},
      doi = {10.11648/j.ijssn.20221001.11},
      url = {https://doi.org/10.11648/j.ijssn.20221001.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ijssn.20221001.11},
      abstract = {As a consumer-grade portable depth-image acquisition device, the depth camera is widely used in computer vision applications such as SLAM, autonomous driving, and environment perception. However, because of the device's limited viewing angle, a complete 3D point cloud of the target cannot be captured in a single shot. Point cloud registration can align two frames of point clouds into a common coordinate frame. A multi-frame point cloud fusion method based on key points and registration is therefore proposed. First, a point cloud is computed from the depth map produced by the depth camera, and an improved point cloud filtering algorithm based on the inner product of normal vectors is used to remove background and noise points. Second, four key point detection algorithms and three registration algorithms with different underlying principles are applied to the point cloud data, and the applicable scenarios and limitations of each are analyzed. Finally, a multi-frame fusion algorithm stitches the point clouds together, and the redundant points created by the overlap are filtered out to yield a complete point cloud of the object. Experiments on a target object captured with the depth camera show that the proposed method obtains complete point cloud data of the target robustly.},
      year = {2022}
    }
    


  • TY  - JOUR
    T1  - Multi-frame Point Cloud Fusion Method Based on Depth Camera Sensors
    AU  - Yang Zhongfan
    AU  - Wang Xiaogang
    AU  - Hou Jing
    Y1  - 2022/01/15
    PY  - 2022
    N1  - https://doi.org/10.11648/j.ijssn.20221001.11
    DO  - 10.11648/j.ijssn.20221001.11
    T2  - International Journal of Sensors and Sensor Networks
    JF  - International Journal of Sensors and Sensor Networks
    JO  - International Journal of Sensors and Sensor Networks
    SP  - 1
    EP  - 6
    PB  - Science Publishing Group
    SN  - 2329-1788
    UR  - https://doi.org/10.11648/j.ijssn.20221001.11
    AB  - As a consumer-grade portable depth-image acquisition device, the depth camera is widely used in computer vision applications such as SLAM, autonomous driving, and environment perception. However, because of the device's limited viewing angle, a complete 3D point cloud of the target cannot be captured in a single shot. Point cloud registration can align two frames of point clouds into a common coordinate frame. A multi-frame point cloud fusion method based on key points and registration is therefore proposed. First, a point cloud is computed from the depth map produced by the depth camera, and an improved point cloud filtering algorithm based on the inner product of normal vectors is used to remove background and noise points. Second, four key point detection algorithms and three registration algorithms with different underlying principles are applied to the point cloud data, and the applicable scenarios and limitations of each are analyzed. Finally, a multi-frame fusion algorithm stitches the point clouds together, and the redundant points created by the overlap are filtered out to yield a complete point cloud of the object. Experiments on a target object captured with the depth camera show that the proposed method obtains complete point cloud data of the target robustly.
    VL  - 10
    IS  - 1
    ER  - 


Author Information
  • Yang Zhongfan, School of Automation & Information Engineering, Sichuan University of Science & Engineering, Yibin, China

  • Wang Xiaogang, School of Automation & Information Engineering, Sichuan University of Science & Engineering, Yibin, China

  • Hou Jing, School of Automation & Information Engineering, Sichuan University of Science & Engineering, Yibin, China
