Research Objective
To propose a robust hand gesture segmentation method that associates depth and color information with online training, addressing challenges in cluttered backgrounds and when hands are close to the body.
Research Results
Among comparable approaches, the proposed algorithm imposes the fewest restrictions in practical implementation; future improvements will focus on content-aware analysis to increase segmentation efficiency and accuracy along edge boundaries.
Research Limitations
The tracked hand point occasionally migrates to other parts of the body, producing incorrect segmentation, and the depth data acquired at object edges contain large errors.
1:Experimental Design and Method Selection:
The method detects the hand region of interest from the gesture center point in three-dimensional spatial coordinates, determines an adaptive rectangular region around this point, initiates segmentation from the depth information, applies an ellipse skin color model with on-line learning to filter outlier samples, and obtains the refined hand region through morphological processing.
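The ellipse skin color model mentioned above can be illustrated with a minimal sketch. This assumes the ellipse is defined in the CbCr chrominance plane, which is a common formulation; the center, rotation, and axis parameters below are illustrative defaults, not values from the paper:

```python
import numpy as np

def ellipse_skin_mask(cb, cr, cx=109.38, cy=152.02, theta=2.53,
                      a=25.39, b=14.03):
    """Classify pixels as skin if their (Cb, Cr) chrominance values fall
    inside a rotated ellipse. All ellipse parameters are illustrative."""
    # Shift to the ellipse centre, then rotate by theta into ellipse axes.
    x = np.cos(theta) * (cb - cx) + np.sin(theta) * (cr - cy)
    y = -np.sin(theta) * (cb - cx) + np.cos(theta) * (cr - cy)
    # A point is inside the ellipse when (x/a)^2 + (y/b)^2 <= 1.
    return (x / a) ** 2 + (y / b) ** 2 <= 1.0
```

On-line learning would then update the ellipse parameters from newly accepted skin samples while the outlier filtering keeps non-skin samples from corrupting the model.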
2:Sample Selection and Data Sources:
Uses human skeleton modeling by Microsoft Kinect for Windows SDK 2.
3:List of Experimental Equipment and Materials:
Microsoft Kinect sensor.
4:Experimental Procedures and Operational Workflow:
The adaptive rectangle size changes with the distance between the hand and the Kinect sensor; the adaptive relationship is obtained through theoretical calculation and experimental validation, followed by on-line learning of the ellipse skin color model.
5:Data Analysis Methods:
Comparison of segmentation results using different models and conditions.
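The summary does not name the metric used to compare segmentation results; one common choice for such comparisons is intersection-over-union between a predicted mask and a reference mask, sketched here as an assumption rather than the paper's stated method:

```python
import numpy as np

def segmentation_iou(mask_a, mask_b):
    """Intersection-over-union between two boolean segmentation masks.
    Returns 1.0 when both masks are empty (vacuously identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0
```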