Research Objective
To propose a deep-learning-based multi-modality sensor calibration method for unmanned surface vehicles (USVs) that achieves continuous online calibration across different types of onboard sensors and requires only an initial calibration for training.
Research Findings
The proposed calibration method for onboard multi-modality USV sensors, based on an end-to-end deep learning network, meets the accuracy and performance requirements of USV operation. Future work includes incorporating additional sensor data for direct scene-structure acquisition and updating the network structure to accommodate different sensor configurations.
Research Limitations
The method requires an initial calibration for training, and its network structure is not flexible enough to handle different sensor configurations.
1. Experimental Design and Method Selection:
The methodology is an end-to-end deep learning network that combines the feature extraction, feature matching, and global optimization stages of sensor calibration.
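The three stages named above can be sketched as a single chained pipeline. This is a minimal illustrative sketch only: the function bodies below (a random linear feature projection, nearest-neighbour matching, and a closed-form Kabsch alignment standing in for the network's learned layers and global optimization) are assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

def extract_features(points, dim=8):
    # Stand-in for a learned encoder: project each 3D point into a
    # fixed random feature space (same projection for both sensors).
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((points.shape[1], dim))
    return points @ proj

def match_features(feat_a, feat_b):
    # Nearest-neighbour matching: for each feature in A, index of the
    # closest feature in B by Euclidean distance.
    cost = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
    return np.argmin(cost, axis=1)

def optimize_extrinsics(pts_a, pts_b, matches):
    # Closed-form rigid alignment (Kabsch) of the matched point pairs,
    # standing in for the network's global optimization stage.
    b = pts_b[matches]
    a_c, b_c = pts_a - pts_a.mean(axis=0), b - b.mean(axis=0)
    u, _, vt = np.linalg.svd(b_c.T @ a_c)
    d = np.sign(np.linalg.det(u @ vt))
    R = u @ np.diag([1.0, 1.0, d]) @ vt
    t = b.mean(axis=0) - R @ pts_a.mean(axis=0)
    return R, t

def calibrate(pts_a, pts_b):
    # End-to-end chain: extraction -> matching -> optimization.
    fa, fb = extract_features(pts_a), extract_features(pts_b)
    return optimize_extrinsics(pts_a, pts_b, match_features(fa, fb))
```

Chaining the stages like this is what makes the pipeline trainable end-to-end in the paper's setting; here the learned components are replaced by fixed analytic stand-ins.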
2. Sample Selection and Data Sources:
The dataset includes simulation data generated in 3ds Max and real-world data collected with the sensors mounted on a common frame.
3. List of Experimental Equipment and Materials:
The sensors are a MindVision industrial camera, a Velodyne VLP-16 LiDAR, and a DJI Guidance depth camera.
4. Experimental Procedures and Operational Workflow:
The calibration process has two phases: first, the two depth sensors are calibrated to each other and their point clouds fused into a single virtual depth sensor; second, the camera is calibrated against this virtual depth sensor.
5. Data Analysis Methods:
The extrinsic parameters derived from the simulation scenario serve as ground truth, against which the translation and rotation mean absolute errors (MAE) of the estimates are compared.
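A minimal sketch of this evaluation, assuming 4x4 homogeneous extrinsic matrices: translation MAE over the translation components, and rotation MAE over per-axis Euler angles (ZYX convention, in degrees). The Euler convention is one reasonable choice; the paper may use another.

```python
import numpy as np

def rotmat_to_euler_zyx(R):
    # Recover (roll, pitch, yaw) in radians from a rotation matrix
    # under the ZYX (yaw-pitch-roll) convention.
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.array([roll, pitch, yaw])

def calibration_mae(T_est, T_gt):
    # Translation MAE (same units as the point clouds) and rotation MAE
    # (degrees) between estimated and ground-truth extrinsics.
    t_mae = np.mean(np.abs(T_est[:3, 3] - T_gt[:3, 3]))
    e_est = rotmat_to_euler_zyx(T_est[:3, :3])
    e_gt = rotmat_to_euler_zyx(T_gt[:3, :3])
    r_mae = np.degrees(np.mean(np.abs(e_est - e_gt)))
    return t_mae, r_mae
```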