Research Objective
To construct a 3D map for estimating the initial position of a mobile robot by combining LiDAR and stereo camera data, compensating for the low accuracy of maps built from the stereo camera alone.
Research Results
The proposed method successfully constructs a 3D global map from LiDAR and stereo camera data, enabling initial position estimation for mobile robots. Experimental results show that most position errors are within 1 m, although accuracy degrades when features are unevenly distributed. Future work should focus on improving the map-formation and position-estimation processes, possibly integrating additional sensor data for higher accuracy and further applications such as object classification.
Research Limitations
Accuracy suffers when features are too few or crowded into a small area, producing large errors in some cases. Improvement depends on fusion with additional sensors, and the current implementation may not handle all environmental variations effectively.
1:Experimental Design and Method Selection:
The experiment uses a mobile robot (AGV) equipped with a LiDAR and a stereo camera to collect environmental data. The methods comprise 2D localization from LiDAR and encoder data, 3D local map generation from stereo camera images, and integration into a global 3D map using the iterative closest point (ICP) and singular value decomposition (SVD) algorithms.
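The SVD step inside ICP estimates the rigid transform that best aligns two matched point sets. A minimal sketch of that alignment step (the paper does not give its implementation; the function name and use of NumPy are assumptions):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Estimate rotation R and translation t aligning src onto dst,
    the closed-form SVD step used inside each ICP iteration.
    src, dst: (N, 3) arrays of corresponding points."""
    c_src = src.mean(axis=0)            # centroids of each cloud
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst) # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # fix a possible reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

A full ICP loop would alternate this step with nearest-neighbor correspondence search until the alignment error converges.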
2:Sample Selection and Data Sources:
Data is collected in a real environment at a test center, using 13 stereo images and LiDAR scans recorded while the robot travels along a square path. Reference data for validation is obtained from a NAV200 sensor.
3:List of Experimental Equipment and Materials:
An AGV mobile robot, a LiDAR (LMS200 model), a stereo camera (ZED model), encoders for wheel velocity, and a NAV200 sensor for reference positioning.
4:Experimental Procedures and Operational Workflow:
The robot moves while collecting data. LiDAR scans are preprocessed and matched against the global map using ICP; stereo images are rectified, feature points are extracted with the SURF algorithm, and their 3D coordinates are computed; local maps are transformed into global coordinates using the camera pose obtained from 2D localization; finally, the initial position is found by matching test images against the global map using RANSAC and SVD.
5:Data Analysis Methods:
Position errors are calculated by comparing positions estimated from the stereo camera with reference positions from the NAV200, reporting the maximum, minimum, and average error, the standard deviation, and the root mean square error (RMSE).
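The error statistics above can be sketched as follows (the function name and dictionary layout are illustrative, not from the paper):

```python
import numpy as np

def position_error_stats(est, ref):
    """Summary statistics of position error.
    est, ref: (N, 2) or (N, 3) arrays of estimated and reference positions."""
    err = np.linalg.norm(est - ref, axis=1)    # Euclidean error per trial
    return {
        "max":  err.max(),
        "min":  err.min(),
        "mean": err.mean(),
        "std":  err.std(),
        "rmse": np.sqrt((err ** 2).mean()),
    }
```

Note that RMSE weights large errors more heavily than the mean, so the two diverge when a few trials (e.g. those with crowded features) have large errors.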