Research Objective
To develop a method for accurate, drift-free pose estimation of autonomous underwater vehicles (AUVs) in indoor underwater environments, such as nuclear spent pools, by using an upward-facing stereo camera to observe natural visual landmarks above the water surface and correcting for light refraction at the water-air interface.
Research Findings
The proposed SLAM framework effectively reduces drift in AUV localization by utilizing upward-facing stereo vision and correcting for refraction, achieving accurate pose estimates in both simulation and real-world experiments. It demonstrates robustness across various challenging scenarios, with significant improvements over dead reckoning. Future work could involve modeling non-planar water surfaces, integrating dense or semi-dense methods for mapping, and enhancing computational efficiency.
Research Limitations
The method assumes a planar water surface, which may not hold in the presence of waves or ripples. It is intended for indoor environments with adequate visibility and lighting, and may not perform well in turbid waters or with significant surface disturbances. Computational efficiency could be improved for real-time applications, and the feature-based approach might be sensitive to environmental changes.
1: Experimental Design and Method Selection:
The methodology is a SLAM framework built on factor-graph optimization with an incremental smoothing and mapping (iSAM) backend. It includes refraction-corrected projection and triangulation functions based on Snell's law to account for light bending at the water-air interface.
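The core of the refraction correction is Snell's law applied in vector form at a planar interface. The sketch below is a minimal illustration under standard assumptions (nominal refractive indices, flat water surface), not the paper's implementation; the function and variable names are ours:

```python
import numpy as np

N_WATER, N_AIR = 1.33, 1.0  # nominal refractive indices (assumed)

def refract(d, n, n1=N_WATER, n2=N_AIR):
    """Refract unit ray direction d at a planar interface with unit normal n
    (pointing back into the incident medium). Vector form of Snell's law:
    t = r*d + (r*cos_i - cos_t)*n, with r = n1/n2.
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n, d)             # cosine of the angle of incidence
    sin2_t = r**2 * (1.0 - cos_i**2)  # Snell: sin_t = r * sin_i
    if sin2_t > 1.0:
        return None                   # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n

# A ray leaving the underwater camera 30 degrees off vertical bends away
# from the surface normal as it exits into air (sin_air = 1.33 * sin 30).
d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
up_normal = np.array([0.0, 0.0, -1.0])  # normal pointing back into the water
t = refract(d, up_normal)
```

Inverting this mapping inside the projection and triangulation functions is what lets the factor graph treat above-water landmarks consistently despite the bent light path.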
2: Sample Selection and Data Sources:
Simulations with generated vehicle motions and known data association, and real-world experiments in a test tank 3 m deep and 3.5 m in radius, using datasets that vary in depth, visibility, and lighting conditions.
3: List of Experimental Equipment and Materials:
Hovering Autonomous Underwater Vehicle (HAUV) from Bluefin Robotics, equipped with a Doppler velocity log (DVL), attitude and heading reference system (AHRS), depth sensor, and an upward-facing stereo camera pair (Prosilica GC1380 cameras).
4: Experimental Procedures and Operational Workflow:
The vehicle executes pre-programmed trajectories; stereo images are captured, features are extracted using ORB detectors with adaptive non-maximal suppression, stereo matches are established, and data association is performed. Refraction correction is applied, and the factor graph is optimized using GTSAM and iSAM.
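The adaptive non-maximal suppression step can be illustrated with a common suppression-radius variant: each keypoint's radius is its distance to the nearest stronger detection, and the k largest-radius points survive, spreading features evenly instead of letting strong corners cluster. This is a generic sketch, not the authors' code:

```python
import numpy as np

def adaptive_nms(points, scores, k):
    """Keep the k keypoints with the largest suppression radii, where a
    point's radius is its distance to the nearest stronger keypoint."""
    order = np.argsort(-scores)           # strongest first
    pts = points[order].astype(float)
    radii = np.full(len(pts), np.inf)     # strongest point is never suppressed
    for i in range(1, len(pts)):
        radii[i] = np.min(np.linalg.norm(pts[:i] - pts[i], axis=1))
    return order[np.argsort(-radii)[:k]]  # indices into the original arrays

# Two tight clusters of detections: ANMS keeps one representative of each
# rather than the two strongest (which sit 1 px apart).
points = np.array([[0, 0], [1, 0], [10, 0], [10.5, 0]])
scores = np.array([4.0, 3.0, 2.0, 1.0])
keep = adaptive_nms(points, scores, k=2)
```

In a real pipeline `points` and `scores` would come from an ORB detector (e.g. keypoint locations and responses) run on each stereo image.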
5: Data Analysis Methods:
Evaluation metrics include absolute trajectory error (ATE), relative pose error (RPE), and absolute landmark error (ALE), computed by comparing SLAM estimates with ground truth odometry.
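For trajectories already expressed in a common aligned frame, the translational versions of ATE and RPE reduce to simple RMSEs. A minimal sketch over positions only (the full metrics operate on SE(3) poses and include a trajectory-alignment step, omitted here):

```python
import numpy as np

def ate_rmse(est, gt):
    """Translational ATE: RMSE of per-pose position error between the
    estimated trajectory and ground truth, both (N, 3) and pre-aligned."""
    return np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))

def rpe_trans(est, gt, delta=1):
    """Translational RPE over stride `delta`: compare per-step displacement
    vectors, which isolates local drift from any global offset."""
    d_est = est[delta:] - est[:-delta]
    d_gt = gt[delta:] - gt[:-delta]
    return np.sqrt(np.mean(np.sum((d_est - d_gt) ** 2, axis=1)))

# A trajectory shifted by a constant 1 m offset: ATE reports the offset,
# while RPE is zero because every relative step is identical.
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1]], dtype=float)
est = gt + np.array([1.0, 0.0, 0.0])
```

Absolute landmark error (ALE) follows the same pattern, comparing estimated landmark positions against surveyed ground-truth locations.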