Research Objective
To propose a real-time 3D scene reconstruction and localization system with surface optimization for applications such as robot navigation, augmented reality, and virtual reality.
Research Results
The proposed real-time 3D scene reconstruction method performs well in terms of camera pose trajectory and surface reconstruction, outperforming state-of-the-art techniques in accuracy and surface quality. Future work includes integrating inertial sensors and deep learning methods for faster image blur assessment and camera pose estimation.
Research Limitations
The system may lose tracking when facing featureless planar surfaces such as white walls, and processing speed drops while the image sequence is evaluated for surface optimization.
1: Experimental Design and Method Selection:
The system applies rotation- and orientation-invariant feature matching together with a loop-closure detection algorithm to RGB-D images (see the feature-matching sketch after this list).
2: Sample Selection and Data Sources:
RGB-D images from a mobile robot equipped with a Microsoft Kinect V.
3: List of Experimental Equipment and Materials:
NVIDIA Jetson TX2 Developer Kit, iRobot Create2 mobile robot base, Microsoft Kinect V.
4: Experimental Procedures and Operational Workflow:
The system processes RGB-D images to create a dense 3D point cloud model, which is then optimized and smoothed on a GPU-based computer (see the back-projection sketch after this list).
5: Data Analysis Methods:
The system uses bundle adjustment for camera pose optimization and a TSDF (truncated signed distance function) for surface reconstruction (see the bundle-adjustment and TSDF sketches after this list).
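As referenced in item 1, the sketch below illustrates rotation-invariant feature matching between two RGB frames. The paper does not name the descriptor, so ORB is used here purely as a stand-in example; loop-closure detection would additionally compare each new frame's descriptors against a keyframe database, which is not shown.

```python
# Minimal sketch of rotation-invariant feature matching between two RGB frames.
# ORB is an assumed stand-in for the paper's unnamed feature type.
import cv2

def match_frames(rgb_a, rgb_b, max_matches=200):
    """Detect and match ORB keypoints between two RGB images."""
    orb = cv2.ORB_create(nfeatures=1000)  # rotation/orientation invariant descriptor
    kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(rgb_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(rgb_b, cv2.COLOR_BGR2GRAY), None)
    if des_a is None or des_b is None:
        return []  # e.g. a featureless white wall yields no keypoints
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:max_matches]]
```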
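For the dense point cloud step in item 4, the following minimal sketch back-projects a depth frame into camera-frame 3D points using the standard pinhole model. The intrinsics fx, fy, cx, cy are placeholder values, not parameters reported in the paper.

```python
# Minimal sketch: back-project a depth image into a 3D point cloud.
# fx, fy, cx, cy are assumed example intrinsics.
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Convert an HxW depth image (metres) to an Nx3 array of camera-frame points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    valid = z > 0                                   # drop missing depth readings
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```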
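Item 5's bundle adjustment minimizes the reprojection error of map points across camera poses. The sketch below shows only the residual being minimized; the pose parametrisation (Rodrigues rotation vector plus translation) and the use of OpenCV's projectPoints are illustrative assumptions. A solver such as scipy.optimize.least_squares, g2o, or Ceres would optimize these residuals jointly over all poses and points.

```python
# Reprojection residual that bundle adjustment minimizes (illustrative sketch).
import numpy as np
import cv2

def reprojection_residuals(rvec, tvec, points_3d, observed_uv, K):
    """Difference between projected 3D map points and their 2D observations.

    rvec, tvec: camera pose as a Rodrigues rotation vector and translation.
    points_3d:  Nx3 array of map points; observed_uv: Nx2 pixel observations;
    K:          3x3 camera intrinsic matrix.
    """
    projected, _ = cv2.projectPoints(points_3d, rvec, tvec, K, None)
    return (projected.reshape(-1, 2) - observed_uv).ravel()
```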
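For the TSDF surface reconstruction in item 5, the sketch below shows the standard weighted running-average voxel update that TSDF fusion performs for each incoming frame. The voxel layout, per-frame weight, and truncation distance are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of the weighted-average TSDF update used in surface reconstruction.
# Truncation distance and weights are assumed example values.
import numpy as np

def integrate_tsdf(tsdf, weights, sdf_obs, trunc=0.05, max_weight=64.0):
    """Fuse one frame's signed-distance observations into the global TSDF volume.

    tsdf, weights, sdf_obs: same-shaped voxel arrays; sdf_obs holds each voxel's
    signed distance to the surface seen in the current frame (NaN if unobserved).
    """
    seen = np.isfinite(sdf_obs) & (sdf_obs > -trunc)    # skip voxels far behind the surface
    d = np.clip(sdf_obs[seen], -trunc, trunc) / trunc   # truncate and normalise to [-1, 1]
    w_new = 1.0                                         # per-frame observation weight
    tsdf[seen] = (weights[seen] * tsdf[seen] + w_new * d) / (weights[seen] + w_new)
    weights[seen] = np.minimum(weights[seen] + w_new, max_weight)
    return tsdf, weights
```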