Research Purpose
To provide a method for automatically rectifying a dynamic hybrid stereo vision system that combines an omnidirectional camera and a conventional camera, reducing computational complexity and improving accuracy in 3D reconstruction for surveillance applications.
Research Findings
The proposed automatic rectification method for hybrid stereo vision systems effectively reduces computational complexity and improves accuracy compared to state-of-the-art methods. Experimental results show a 34.78% increase in rectification accuracy and reliable odometry with mean errors less than 12%. The system is suitable for surveillance tasks, demonstrating practical applicability in target tracking and 3D reconstruction.
Research Limitations
The method's performance depends on the number of corresponding points, and it may fail in scenes without salient features. Computational cost, though reduced, remains nontrivial. The method assumes vertically installed cameras and fixed intrinsic parameters. Future work could extend the approach to other camera types and reduce its dependence on feature matching.
1. Experimental Design and Method Selection:
The study uses a hybrid vision system with an omnidirectional camera and a conventional camera. A perspective projection model is proposed for the omnidirectional image to simplify geometry. Stereo rectification is achieved by calculating the fundamental matrix using the 8-point algorithm with an optimized normalization matrix, and ASIFT is used for feature matching.
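The fundamental-matrix step can be sketched with the normalized 8-point algorithm. This is a minimal NumPy sketch using Hartley's standard normalization; the paper's own optimized normalization matrix is not reproduced here, so treat this as a baseline illustration rather than the authors' exact method:

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: translate the centroid to the origin and
    scale so the mean distance from the origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def eight_point(pts1, pts2):
    """Estimate the fundamental matrix F from >= 8 correspondences
    (Nx2 pixel coordinates) so that x2^T F x1 = 0."""
    p1, T1 = normalize_points(pts1)
    p2, T2 = normalize_points(pts2)
    # One row of A per correspondence, from expanding x2^T F x1 = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and return a unit-norm F.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

On noiseless synthetic correspondences the recovered F satisfies the epipolar constraint to numerical precision; with real ASIFT matches a robust wrapper (e.g. RANSAC) is normally added around this estimator.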
2. Sample Selection and Data Sources:
Real image pairs from indoor environments and simulated data with 3D points are used. Fifteen image pairs are evaluated for rectification accuracy, and ten experiments with different orientations are conducted for odometry.
3. List of Experimental Equipment and Materials:
A computer with an Intel Pentium G2020 processor, 4 GB RAM, Windows 10, VS2015, OpenCV 4.9, and OpenGL. Cameras include an omnidirectional camera (FL2G-50S5C-C) and a conventional camera (FL2G-50S5C-C), synchronized via an external trigger. Hyperbolic mirror parameters: major axis a=31.2888 mm, minor axis b=51.1958 mm, mapping parameter=0.82, vertical viewing angle=120°.
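The mirror's mapping parameter resembles the ξ of the unified central catadioptric (sphere) model, which is one common way to generate virtual perspective views from omnidirectional images. A minimal forward-projection sketch under that assumption (the interpretation of the paper's mapping parameter as ξ is an assumption, not stated in the source):

```python
import numpy as np

XI = 0.82  # mapping parameter from the paper, interpreted here as the
           # xi of the unified sphere model (assumption)

def unified_project(P, xi=XI):
    """Unified-sphere-model projection: map a 3D point onto the unit
    viewing sphere, then project perspectively from a center displaced
    by xi along the optical axis; returns normalized image coords."""
    Ps = P / np.linalg.norm(P)   # point on the viewing sphere
    denom = Ps[2] + xi
    return np.array([Ps[0] / denom, Ps[1] / denom])
```

A point on the optical axis projects to the image center, as expected for a central catadioptric system.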
4. Experimental Procedures and Operational Workflow:
Virtual perspective images are generated from omnidirectional images. Corresponding points are matched using ASIFT. The fundamental matrix is computed with the 8-point algorithm and optimized normalization. Rectification involves decomposing the fundamental matrix into rotation and translation matrices for image remapping. Odometry is performed by triangulation.
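The triangulation step of the workflow can be sketched with the standard linear (DLT) triangulator; the paper's exact formulation is not given in this summary, so this is an illustrative baseline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose
    projections through 3x4 camera matrices P1 and P2 are the pixel
    coordinates x1 and x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution of A X = 0
    return X[:3] / X[3]        # dehomogenize to 3D
```

Given the rectified pair and the decomposed rotation/translation, each matched point pair yields one 3D point, from which distances to targets can be read off for odometry.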
5. Data Analysis Methods:
Rectification error is quantified by mean deviation of horizontal distances from corresponding corners. Odometry accuracy is assessed by comparing calculated distances to ground truth using mean errors.
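The rectification metric can be sketched as the mean absolute deviation of the horizontal coordinates of corresponding corners, consistent with the vertical camera arrangement in which corresponding points should share a column after rectification (this reading of the paper's "horizontal distance" metric is an assumption):

```python
import numpy as np

def rectification_error(corners1, corners2):
    """Mean absolute deviation of corresponding corner x-coordinates
    (columns) after rectification; zero means perfectly aligned
    epipolar lines. corners1/corners2 are Nx2 arrays of (x, y)."""
    return np.mean(np.abs(corners1[:, 0] - corners2[:, 0]))
```

Averaging this value over all detected corners of an image pair, and then over the fifteen test pairs, gives a single accuracy figure comparable across methods.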