Research Objective
To improve structured-light-based depth acquisition by proposing a network (SLNet) that extracts effective matching features by considering different receptive fields and assigning channel weights, and to create a dataset for training and testing.
Research Findings
SLNet effectively improves depth acquisition in structured-light systems by integrating deep learning into the matching stage, producing real-time, high-quality depth maps. It outperforms existing methods in both simulated and real-world experiments.
Limitations
Tiny objects smaller than the patch size may not be clearly identified, and accuracy decreases when the baseline between the projector and camera is short. Improving this would require higher-density patterns and higher-resolution cameras.
1: Experimental Design and Method Selection:
The SLNet architecture integrates an efficient Siamese network, pyramid pooling, and an improved Squeeze-and-Excitation Network (SENet) for feature extraction and weight calculation. It treats image-patch matching as a multi-classification task.
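The channel-weighting step can be illustrated with a minimal numpy sketch of a Squeeze-and-Excitation-style block: global average pooling squeezes each channel to a scalar, two small fully connected layers produce per-channel weights, and the feature map is rescaled by those weights. The function name, weight shapes, and reduction ratio are illustrative assumptions, not SLNet's actual parameters.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """SE-style channel reweighting (illustrative sketch).

    feature_map: (C, H, W) array.
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights
    (r is the reduction ratio; biases omitted for brevity).
    """
    # Squeeze: global average pooling -> one scalar per channel.
    z = feature_map.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights in (0, 1).
    s = np.maximum(w1 @ z, 0.0)
    w = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Reweight each channel of the feature map.
    return feature_map * w[:, None, None]
```

The sigmoid gate lets the network suppress channels that contribute little to matching while emphasizing discriminative ones.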
2: Sample Selection and Data Sources:
A structured-light dataset is created by projecting a random pattern onto 3D scenes reconstructed from the Monkaa binocular stereo dataset (for training) and Middlebury dataset (for testing).
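The projected random pattern can be sketched as a binary random-dot image; the function below is a hypothetical illustration (the dot density, resolution, and generation scheme are assumptions, not the paper's actual pattern).

```python
import numpy as np

def random_dot_pattern(height, width, dot_density=0.5, seed=0):
    """Generate a binary random-dot pattern for projection.

    dot_density: fraction of bright pixels (assumed parameter).
    Returns a uint8 image with values in {0, 255}.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < dot_density).astype(np.uint8) * 255
```

Fixing the seed makes the pattern reproducible, which matters because the same pattern must be used for dataset generation and at test time.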
3: List of Experimental Equipment and Materials:
Includes a projector (LightCrafter), a camera (PointGrey), and computational hardware (GeForce TITAN Xp GPU).
4: Experimental Procedures and Operational Workflow:
Steps include 3D scene reconstruction, pattern projection, image capture, dataset generation, network training with stochastic gradient descent using the AdaGrad update rule, and testing on entire images for efficiency.
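The AdaGrad update rule used in training scales each parameter's learning rate by the inverse square root of its accumulated squared gradients. A minimal numpy sketch (hyperparameters are illustrative, not the paper's training settings):

```python
import numpy as np

def adagrad_step(params, grads, accum, lr=0.01, eps=1e-8):
    """One AdaGrad update.

    accum holds the running sum of squared gradients per parameter;
    parameters with large historical gradients get smaller steps.
    """
    accum = accum + grads ** 2
    params = params - lr * grads / (np.sqrt(accum) + eps)
    return params, accum
```

Because the accumulator only grows, effective step sizes shrink over training, which acts as a built-in learning-rate decay.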
5: Data Analysis Methods:
Performance is evaluated using the Bad1.0 and Bad2.0 error metrics, together with qualitative comparisons against classic methods and the Kinect-V1.
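The BadX metrics measure the fraction of pixels whose disparity error exceeds a threshold X (1.0 pixel for Bad1.0, 2.0 pixels for Bad2.0). A minimal sketch, assuming dense ground-truth disparity with no invalid-pixel mask:

```python
import numpy as np

def bad_pixel_rate(pred_disp, gt_disp, threshold):
    """Fraction of pixels whose absolute disparity error exceeds `threshold`.

    Bad1.0 -> threshold=1.0; Bad2.0 -> threshold=2.0.
    """
    err = np.abs(pred_disp - gt_disp)
    return float(np.mean(err > threshold))
```

In practice, pixels without valid ground truth (e.g. occlusions) are typically excluded via a mask before averaging.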