Research Objective
To design an embedded vision system that computes structure from motion (SfM) in real time, enabling scene-depth reconstruction from video frames for UAV navigation.
Research Findings
The implemented SfM module successfully estimates scene geometry in real time for 1280x720@60fps video, enabling cost-effective UAV navigation with simplified hardware. However, trade-offs in reconstruction quality and a dependence on motion conditions were identified, positioning the system as an alternative for applications where high precision is not critical.
Research Limitations
Result quality is reduced by the lowered RANSAC iteration count (see the standard relation sketched below); system performance depends on vehicle velocity and camera angle; 3D reconstruction quality is lower than that of stereo vision; the design is optimized for less demanding applications.
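A standard textbook relation (not taken from the paper) clarifies this iteration/quality trade-off: for desired confidence $p$, inlier ratio $w$, and sample size $s$ (with $s = 8$ for eight-point fundamental-matrix estimation), RANSAC requires

$$N = \frac{\log(1 - p)}{\log\left(1 - w^{s}\right)}$$

iterations to draw at least one all-inlier sample with probability $p$; running fewer than $N$ iterations directly lowers that probability, which manifests as the quality reduction noted above.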
1:Experimental Design and Method Selection:
The system uses a hardware-software co-design on a Xilinx Zynq SoC: the programmable logic (PL) performs feature detection and matching, while the ARM-based processing system (PS) performs fundamental-matrix estimation and triangulation. The pipeline consists of Harris corner detection, SAD-based matching, RANSAC-based fundamental-matrix estimation, and triangulation; a sketch of the Harris stage follows.
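Below is a minimal C++ sketch of the Harris corner response for a single pixel, to illustrate the first pipeline stage. The paper's detector runs as pipelined logic in the PL, so the window radius, the constant k, and the precomputed gradient inputs here are illustrative assumptions, not the paper's exact parameters.

```cpp
#include <vector>

// Hypothetical helper: Harris response R = det(M) - k * trace(M)^2,
// where M accumulates gradient products over a (2r+1)x(2r+1) window.
// Ix and Iy are assumed precomputed image gradients; (x, y) must be
// at least r pixels away from the image border.
float harrisResponse(const std::vector<float>& Ix,
                     const std::vector<float>& Iy,
                     int width, int x, int y,
                     int r = 1, float k = 0.04f) {
    float sxx = 0.f, syy = 0.f, sxy = 0.f;
    for (int dy = -r; dy <= r; ++dy) {
        for (int dx = -r; dx <= r; ++dx) {
            float gx = Ix[(y + dy) * width + (x + dx)];
            float gy = Iy[(y + dy) * width + (x + dx)];
            sxx += gx * gx;   // sum of Ix^2
            syy += gy * gy;   // sum of Iy^2
            sxy += gx * gy;   // sum of Ix*Iy
        }
    }
    float det   = sxx * syy - sxy * sxy;   // det(M)
    float trace = sxx + syy;               // trace(M)
    return det - k * trace * trace;        // corner response
}
```

Pixels whose response exceeds a threshold (typically after non-maximum suppression) are kept as corner features for the matching stage.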
2:Sample Selection and Data Sources:
Video sequences were captured with the Google Earth Pro PC application, simulating flight over a city; this provides low-noise, low-jitter imagery.
3:List of Experimental Equipment and Materials:
Xilinx Zynq SoC device (Zynq-7010), ZYBO board, PC or camera for HDMI input, monitor for visualization, Pmod WiFi module for communication.
4:Experimental Procedures and Operational Workflow:
The video stream is processed in a parallel-pipeline architecture: frames are divided into 32x32-pixel windows for feature tracking, and data is buffered in BRAM. The hardware handles detection and matching, the software handles the more complex computations, and results are visualized via HDMI; a sketch of the SAD matching step follows.
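Below is a minimal C++ sketch of SAD (sum of absolute differences) block matching confined to a 32x32 window, approximating the windowed tracking the hardware performs. The patch size BLK and the exhaustive search strategy are illustrative assumptions; the PL implementation streams pixels through BRAM rather than addressing a full frame.

```cpp
#include <cstdint>
#include <cstdlib>
#include <climits>

constexpr int WIN = 32;   // tracking window size from the paper
constexpr int BLK = 8;    // assumed patch size around a feature

// SAD between a reference patch and a candidate patch at (cx, cy).
static unsigned sad(const uint8_t ref[BLK][BLK],
                    const uint8_t* frame, int stride, int cx, int cy) {
    unsigned acc = 0;
    for (int y = 0; y < BLK; ++y)
        for (int x = 0; x < BLK; ++x)
            acc += std::abs(int(ref[y][x]) -
                            int(frame[(cy + y) * stride + cx + x]));
    return acc;
}

// Exhaustive search over the 32x32 window anchored at (winX, winY);
// writes the best-matching patch position to (bestX, bestY).
void matchInWindow(const uint8_t ref[BLK][BLK],
                   const uint8_t* frame, int stride,
                   int winX, int winY, int& bestX, int& bestY) {
    unsigned best = UINT_MAX;
    for (int y = winY; y <= winY + WIN - BLK; ++y) {
        for (int x = winX; x <= winX + WIN - BLK; ++x) {
            unsigned s = sad(ref, frame, stride, x, y);
            if (s < best) { best = s; bestX = x; bestY = y; }
        }
    }
}
```

In hardware, the inner SAD accumulations can be unrolled and pipelined, which is what makes this otherwise brute-force search viable at 60 fps.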
5:Data Analysis Methods:
Candidate corner-detection and matching methods were evaluated in Matlab; performance metrics include processing time and FPGA resource utilization (see the timing sketch below).
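As one way the processing-time metric could be gathered on the PS side, the following hypothetical C++ harness averages per-frame latency; processFrame() is a placeholder, and the paper's actual measurement setup is not described here.

```cpp
#include <chrono>
#include <cstdio>

// Placeholder for the software stage (RANSAC + triangulation).
void processFrame() { /* hypothetical SfM work */ }

int main() {
    using clock = std::chrono::steady_clock;
    const int frames = 600;             // e.g. 10 s of 60 fps video
    auto t0 = clock::now();
    for (int i = 0; i < frames; ++i)
        processFrame();
    auto t1 = clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count()
                / frames;
    // At 60 fps the per-frame budget is about 16.67 ms.
    std::printf("mean per-frame time: %.3f ms\n", ms);
    return 0;
}
```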