Research Objective
To detect and reconstruct 3D arrangements of objects from single indoor images, especially under medium to heavy occlusion, by leveraging holistic contextual 3D information and object co-occurrence statistics.
Research Results
SEETHROUGH significantly improves the detection and reconstruction of partially occluded objects in indoor scenes by incorporating higher-level scene statistics. The method outperforms state-of-the-art alternatives across multiple quantitative measures, demonstrating its utility in scenarios with medium to heavy occlusion.
Research Limitations
The method currently covers only a few object classes (chairs, tables, cabinets, bookshelves) and requires appropriately annotated data to be retrained for other classes. Performance may also vary with the extent of occlusion and the quality of the initial camera estimate.
1: Experimental Design and Method Selection:
The approach combines a neural network for 2D keypoint detection, a 3D candidate object generation stage, and a global selection problem solved using pairwise co-occurrence statistics mined from a 3D scene database (a sketch of the selection step follows).
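A minimal sketch of what such a global selection step could look like, assuming each 3D candidate carries a class label, a placement, and a per-object detection score, and that the pairwise co-occurrence statistics are exposed as a scoring function over candidate pairs; the greedy strategy and all names (`Candidate`, `pair_score`, `select_objects`) are illustrative, not the paper's exact formulation:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Candidate:
    """One hypothesized 3D object (illustrative fields only)."""
    label: str                      # e.g. "chair", "table"
    unary: float                    # detection / keypoint-fit score
    position: Tuple[float, float, float]  # placement used by pair_score


def select_objects(candidates: List[Candidate],
                   pair_score: Callable[[Candidate, Candidate], float],
                   min_gain: float = 0.0) -> List[Candidate]:
    """Greedy global selection: repeatedly add the candidate whose unary score
    plus pairwise co-occurrence agreement with the already-selected set is
    largest, stopping when no candidate improves the objective."""
    selected: List[Candidate] = []
    remaining = list(candidates)
    while remaining:
        def gain(c: Candidate) -> float:
            return c.unary + sum(pair_score(c, s) for s in selected)
        best = max(remaining, key=gain)
        if gain(best) <= min_gain:
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

In this reading, `pair_score` would reward class pairs and relative placements that occur frequently in the 3D scene database and penalize physically conflicting candidates, so that plausible but heavily occluded objects can still be retained.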
2: Sample Selection and Data Sources:
The keypoint detection network is trained on annotated real indoor images, and the pairwise co-occurrence statistics are derived from a large 3D scene database.
3: List of Experimental Equipment and Materials:
A neural network (ResNet-50 variant), 3D models from the ShapeNet database, and synthetic scenes from the PBRS dataset.
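For illustration, a ResNet-50 backbone with a keypoint-heatmap head could be assembled from standard torchvision components as below; the head design, keypoint count, and input resolution are assumptions of this sketch, not the architecture reported in the paper:

```python
import torch
import torch.nn as nn
import torchvision


class KeypointNet(nn.Module):
    """ResNet-50 backbone with a small convolutional head that predicts one
    heatmap per keypoint class. Head design and keypoint count are assumed
    for illustration only."""
    def __init__(self, num_keypoints: int = 10):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Drop the average-pool and fully-connected classification layers,
        # keeping the spatial feature map (2048 channels, stride 32).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(2048, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(256, num_keypoints, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


# Example: a 256x256 RGB image yields num_keypoints heatmaps at 1/8 resolution.
heatmaps = KeypointNet()(torch.randn(1, 3, 256, 256))
print(heatmaps.shape)  # torch.Size([1, 10, 32, 32])
```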
4: Experimental Procedures and Operational Workflow:
Keypoint detection, candidate object generation, and iterative scene mockup refinement (a high-level sketch of this loop follows).
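A high-level sketch of that workflow, written with the individual stages injected as callables since their exact interfaces are not given here; feeding the rendered scene mockup back into the next detection pass is an assumption of the sketch:

```python
from typing import Callable, List, Optional


def reconstruct_scene(image,
                      camera,
                      detect_keypoints: Callable,     # (image, mockup) -> 2D keypoints
                      generate_candidates: Callable,  # (keypoints, camera) -> 3D candidates
                      select_objects: Callable,       # (candidates) -> consistent subset
                      render_mockup: Callable,        # (objects, camera) -> rendered mockup
                      num_iterations: int = 3) -> List:
    """Illustrative outer loop: detect 2D keypoints, lift them to 3D candidate
    placements, select a mutually consistent subset, and render the current
    scene mockup as extra context for the next detection pass."""
    mockup: Optional[object] = None
    selected: List = []
    for _ in range(num_iterations):
        keypoints = detect_keypoints(image, mockup)
        candidates = generate_candidates(keypoints, camera)
        selected = select_objects(candidates)
        mockup = render_mockup(selected, camera)
    return selected
```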
5: Data Analysis Methods:
Performance is evaluated with quantitative measures including IOU3D, IOU2D, LOC, LOCANG, and ANGDIFF (simplified versions of two of these are sketched below).
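For orientation, simplified versions of two such measures might look as follows, assuming axis-aligned 3D boxes given as (min-corner, max-corner) pairs and object orientations given as yaw angles in degrees; the paper's exact definitions (including oriented boxes) may differ:

```python
import math


def iou3d_axis_aligned(box_a, box_b) -> float:
    """3D intersection-over-union for axis-aligned boxes given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)). Oriented boxes would need an
    oriented-overlap computation instead."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    inter = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_min, a_max, b_min, b_max):
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap <= 0:
            return 0.0
        inter *= overlap
    vol = lambda lo, hi: math.prod(h - l for l, h in zip(lo, hi))
    union = vol(a_min, a_max) + vol(b_min, b_max) - inter
    return inter / union


def angle_diff_deg(pred_yaw: float, gt_yaw: float) -> float:
    """Smallest absolute difference between two yaw angles in degrees,
    wrapped to [0, 180] (an assumption about how ANGDIFF is measured)."""
    d = abs(pred_yaw - gt_yaw) % 360.0
    return min(d, 360.0 - d)
```

LOC would then correspond to the Euclidean distance between predicted and ground-truth object centers, with LOCANG and IOU2D presumably defined analogously for orientation-aware localization and image-plane overlap.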