Research Objective
This study investigates the fusion of lidar and visual sensors to improve robot positioning accuracy and environmental mapping.
Research Results
The proposed fusion system improves positioning accuracy by leveraging laser data for initial pose optimization in visual SLAM, enabling more accurate 3D mapping. Future work aims to enhance trajectory stability.
Research Limitations
Occasional trajectory fluctuations when switching between tracking modes; requires further optimization for smooth and stable high-precision positioning.
1: Experimental Design and Method Selection:
Combines laser SLAM and visual SLAM (ORB-SLAM2) for improved positioning and mapping.
2: Sample Selection and Data Sources:
Uses a mobile robot equipped with an RPLIDAR A2 laser scanner and a Kinect v2 depth camera.
3: List of Experimental Equipment and Materials:
Includes a UR5 robot arm, Mecanum wheels, an RPLIDAR A2, a Kinect v2, and an Intel NUC computer.
4: Experimental Procedures and Operational Workflow:
Laser scan data are fused and used to provide initial pose estimates for the visual-SLAM optimization.
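The seeding step above can be sketched in minimal form: a 2D pose (x, y, theta) from the laser front end is used as the initial value of a Gauss-Newton refinement against landmark observations. This is an illustrative sketch, not the paper's actual pipeline; the landmark-alignment residual, the function name `refine_pose`, and the pure-NumPy solver are all assumptions standing in for the visual-SLAM optimizer.

```python
import numpy as np

def refine_pose(laser_pose, map_pts, obs_pts, iters=10):
    """Gauss-Newton refinement of a 2D pose (x, y, theta), seeded with the
    laser-SLAM estimate (hypothetical stand-in for the visual optimizer).
    map_pts: landmark positions in the map frame, shape (N, 2).
    obs_pts: the same landmarks observed in the robot frame, shape (N, 2)."""
    x, y, th = laser_pose
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        # Predict map-frame positions from the current pose estimate.
        pred = (R @ obs_pts.T).T + np.array([x, y])
        r = (pred - map_pts).ravel()          # stacked (x, y) residuals
        # Jacobian of each predicted point w.r.t. (x, y, theta).
        J = np.zeros((2 * len(obs_pts), 3))
        J[0::2, 0] = 1.0                      # d pred_x / d x
        J[1::2, 1] = 1.0                      # d pred_y / d y
        dR = np.array([[-s, -c], [c, -s]])    # dR / d theta
        dpt = (dR @ obs_pts.T).T
        J[0::2, 2] = dpt[:, 0]
        J[1::2, 2] = dpt[:, 1]
        # Gauss-Newton update: solve J dx = -r in least squares.
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x, y, th = x + dx[0], y + dx[1], th + dx[2]
    return np.array([x, y, th])
```

Because the laser seed is already close to the true pose, the nonlinear refinement converges in a few iterations instead of getting trapped in a poor local minimum, which is the motivation for fusing the two subsystems.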
5: Data Analysis Methods:
Compares positioning accuracy and mapping results between laser SLAM, visual SLAM, and the fusion system.
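A common metric for such comparisons is the absolute trajectory error (ATE), reported as an RMSE over time-aligned poses. The source does not specify its metric, so the following is only a plausible sketch; the function name `ate_rmse` and the assumption of already time-aligned Nx2 position arrays are mine.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of the absolute trajectory error between an estimated
    trajectory and ground truth, both given as time-aligned (N, 2)
    position arrays (a hypothetical evaluation helper)."""
    err = np.linalg.norm(est - gt, axis=1)    # per-pose position error
    return float(np.sqrt(np.mean(err ** 2)))
```

Computing this value separately for the laser-only, visual-only, and fused trajectories against a common ground truth gives a single number per system to compare.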