Research Objective
To improve the alignment accuracy between 3-D LiDAR points and camera-image pixels for intelligent vehicles by proposing a novel algorithm that computes the alignment directly, without requiring camera intrinsic parameters or calibration of a coordinate transformation matrix.
Research Results
The PPA method aligns 3-D LiDAR points with camera-image pixels directly and accurately, simplifying the calibration process and improving robustness to noise. It outperforms existing methods in both accuracy and efficiency, especially in scenarios with severe image distortion.
Limitations
Accuracy may be relatively low when only the linear alignment matrix is used and image distortion is ignored, particularly at the edges of the image, where distortion is most severe.
1: Experimental Design and Method Selection:
The PPA method first extracts corresponding point pairs from the LiDAR point cloud and the camera image, then computes a linear alignment matrix that ignores image distortion, and finally refines the parameters by maximum likelihood estimation to account for camera distortion.
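The paper's exact PPA formulation is not reproduced here; as an illustrative sketch, the linear alignment step (a 3 × 4 matrix mapping homogeneous 3-D points to pixels, distortion ignored) can be estimated from point correspondences with the standard direct linear transform. The function names below are ours, not from the paper:

```python
import numpy as np

def estimate_alignment_matrix(points_3d, pixels):
    """Estimate a 3x4 linear alignment matrix P such that
    s * [u, v, 1]^T = P @ [X, Y, Z, 1]^T, via the standard DLT
    least-squares formulation (image distortion ignored)."""
    n = len(points_3d)
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(points_3d, pixels)):
        # Each correspondence contributes two linear constraints on P.
        A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u]
        A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v]
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    return P / P[-1, -1]

def project(P, point_3d):
    """Apply the alignment matrix to a 3-D point, returning pixel (u, v)."""
    x = P @ np.append(np.asarray(point_3d, float), 1.0)
    return x[:2] / x[2]
```

With at least six non-degenerate correspondences the matrix is recovered up to scale; the normalization by `P[-1, -1]` merely fixes that scale.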
2: Sample Selection and Data Sources:
Calibration boards placed in the shared field of view of both sensors serve as targets; corner points are extracted from both the camera images and the LiDAR point clouds.
3: List of Experimental Equipment and Materials:
A Velodyne HDL-32e LiDAR sensor and a Basler acA1920-40gc CCD camera with 1920 × 1200 resolution.
4: Experimental Procedures and Operational Workflow:
Calibration-board points are extracted automatically from the point cloud and the image, the linear alignment matrix is computed, and the parameters are then optimized with image distortion taken into account.
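The final optimization step can be sketched as a nonlinear least-squares refinement of the 3 × 4 matrix together with two radial distortion coefficients; under Gaussian pixel noise this coincides with maximum likelihood estimation. The radial model, the distortion centre `center`, and all function names are our assumptions, not necessarily the paper's:

```python
import numpy as np

def refine_with_distortion(P0, points_3d, pixels, center, iters=30):
    """Refine the 3x4 alignment matrix P0 together with radial distortion
    coefficients (k1, k2) by minimising summed squared reprojection errors
    with a small Levenberg-Marquardt loop (assumed model, for illustration)."""
    pts_h = np.hstack([np.asarray(points_3d, float),
                       np.ones((len(points_3d), 1))])
    pixels = np.asarray(pixels, float)
    center = np.asarray(center, float)
    scale = np.linalg.norm(center)  # normalise radii so k1, k2 stay O(1)

    def residuals(theta):
        P = theta[:12].reshape(3, 4)
        k1, k2 = theta[12], theta[13]
        proj = pts_h @ P.T
        uv = proj[:, :2] / proj[:, 2:3]          # linear projection
        d = (uv - center) / scale
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        uv_d = center + scale * d * (1.0 + k1 * r2 + k2 * r2 ** 2)
        return (uv_d - pixels).ravel()

    theta = np.concatenate([P0.ravel(), [0.0, 0.0]])
    lam = 1e-3
    for _ in range(iters):
        r = residuals(theta)
        J = np.empty((r.size, theta.size))       # forward-difference Jacobian
        for j in range(theta.size):
            t = theta.copy()
            eps = 1e-6 * max(1.0, abs(t[j]))
            t[j] += eps
            J[:, j] = (residuals(t) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -(J.T @ r))
        if np.sum(residuals(theta + step) ** 2) < np.sum(r ** 2):
            theta = theta + step                 # accept improving steps only
            lam *= 0.5
        else:
            lam *= 10.0
    return theta[:12].reshape(3, 4), theta[12:]
```

Because only cost-reducing steps are accepted, the refined parameters never fit worse than the linear initialization.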
5: Data Analysis Methods:
The root mean square (RMS) of the reprojection errors serves as the performance metric.
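As a small illustration of this metric (the function name is ours, not from the paper): the RMS reprojection error is the square root of the mean squared pixel distance between projected and observed points.

```python
import numpy as np

def reprojection_rmse(projected, observed):
    """Root mean square of reprojection errors: the Euclidean pixel
    distance between each projected point and its observed pixel."""
    diffs = np.asarray(projected, float) - np.asarray(observed, float)
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```

For example, two points with errors of 0 and 5 pixels give an RMS error of sqrt(12.5) ≈ 3.54 pixels.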