Research Objective
To analyze hyper-local environmental phenomena by combining deep-learning object detection with traditional object tracking, enabling real-time visualization and enhanced situational awareness.
Research Findings
The study presents a viable approach for integrating video data with UAS products, demonstrating relatively low registration error over a large area. The combination of monoplotting, deep-learning object detection, and traditional tracking techniques offers a novel method for analyzing hyper-local environmental phenomena. Future work includes automating the processing workflow and testing the system with reduced image quality to determine minimum operational requirements.
Research Limitations
The study's accuracy is affected by image imperfections stemming from the lack of camera calibration and from distortion introduced by the weather dome. The occlusion extraction method, while effective in the tested case, requires further validation in other scenarios.
1. Experimental Design and Method Selection:
The study combines monoplotting, deep-learning object detection (YOLO framework), and traditional object tracking algorithms (TLD, KCF, MIL, and Optical-flow) to analyze hyper-local environments.
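A minimal sketch of how such a detector-initialized tracker comparison could be set up with OpenCV, assuming a build that includes the contrib/legacy modules; the video path, initial bounding box, and tracker parameters are placeholders, not values from the study. Sparse optical flow (e.g. cv2.calcOpticalFlowPyrLK) tracks points rather than boxes and is omitted here.

```python
import cv2

# Placeholder inputs: in the study, the initial box would come from a
# YOLO detection on the perspective video, not a hard-coded value.
cap = cv2.VideoCapture("ptz_video.mp4")   # hypothetical file name
ok, frame = cap.read()
bbox = (100, 100, 50, 50)                 # (x, y, w, h) placeholder

# The trackers compared in the study; TLD lives in the legacy module
# of OpenCV builds that include the contrib packages.
trackers = {
    "KCF": cv2.TrackerKCF_create(),
    "MIL": cv2.TrackerMIL_create(),
    "TLD": cv2.legacy.TrackerTLD_create(),
}
for t in trackers.values():
    t.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    for name, t in trackers.items():
        found, box = t.update(frame)      # per-frame bounding-box update
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```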
2. Sample Selection and Data Sources:
Perspective video from an AXIS Q6044-E PTZ Dome Network Camera and aerial imagery products from a fixed-wing UAV (Sensefly eBee) platform were used.
3. List of Experimental Equipment and Materials:
AXIS Q6044-E PTZ Dome Network Camera, Sensefly eBee UAV platform, RGB camera, and a digital elevation model (DEM).
4. Experimental Procedures and Operational Workflow:
Keypoints were collected to compute an image homography, which was then used to transform perspective images to the aerial orthophoto and vice versa. Occlusion areas were estimated by transforming the DEM with the inverted image homography.
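A minimal sketch of this registration step with OpenCV, assuming manually collected keypoint pairs between the perspective frame and the orthophoto; all coordinates and raster sizes are illustrative placeholders rather than the study's data.

```python
import cv2
import numpy as np

# Placeholder keypoint pairs: perspective-image pixels and their
# corresponding orthophoto pixels (at least four pairs are required).
persp_pts = np.array([[320, 410], [1105, 388], [640, 720], [90, 655]],
                     dtype=np.float32)
ortho_pts = np.array([[512, 300], [980, 295], [600, 840], [210, 790]],
                     dtype=np.float32)

# Homography mapping perspective coordinates to orthophoto coordinates.
H, _ = cv2.findHomography(persp_pts, ortho_pts, cv2.RANSAC)

# Perspective -> orthophoto: map a tracked object position into map space.
pt = np.array([[[450.0, 500.0]]], dtype=np.float32)
mapped = cv2.perspectiveTransform(pt, H)

# Orthophoto -> perspective: the inverted homography warps map-space
# rasters (e.g. a DEM-derived occlusion mask) back into the camera view.
H_inv = np.linalg.inv(H)
dem_mask = np.zeros((1080, 1920), dtype=np.uint8)  # placeholder DEM mask
occlusion_in_view = cv2.warpPerspective(dem_mask, H_inv, (1920, 1080))
```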
5. Data Analysis Methods:
The accuracy of registration, occlusion extraction, and the tracking algorithms was evaluated using distance-based error measurements and standard performance metrics.
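A minimal sketch of such a distance-based accuracy check, assuming the distance measurement is the Euclidean distance between reference checkpoints and their transformed counterparts, summarized as a root-mean-square error; the coordinate values are illustrative placeholders.

```python
import numpy as np

# Placeholder checkpoints: reference orthophoto positions and the
# positions obtained by transforming the perspective-image points.
ref = np.array([[512.0, 300.0], [980.0, 295.0], [600.0, 840.0]])
est = np.array([[514.2, 298.9], [977.5, 296.8], [603.1, 841.6]])

dist = np.linalg.norm(ref - est, axis=1)  # per-point Euclidean distance
rmse = np.sqrt(np.mean(dist ** 2))        # aggregate registration error
print(f"per-point error: {dist}, RMSE: {rmse:.2f} px")
```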