Research Objective
The study addresses the occlusion problem in augmented reality (AR) interaction, where virtual objects are always rendered in front of the user's hand, producing unnatural scenes. It develops a system that performs hidden-surface processing based on the hand's depth and color information, enabling intuitive manipulation of virtual objects with hand gestures.
Research Results
The developed system performs hidden-surface processing around the fingertips using depth and color information, enabling more natural AR interaction, and it handles multiple virtual objects via the Z-buffer method. However, remaining issues with overlapping fingers and incomplete processing indicate that hand-region extraction and finger discrimination need further refinement.
Research Limitations
The system cannot reliably discriminate fingers when they overlap in the binary image, leading to improper hidden-surface processing. Parts of the fingertip may be left unprocessed because of holes in the skin-color extraction. The Z-buffer method takes time to stabilize and is less accurate than simpler methods when only a single virtual object is present. Future work is needed on hand-region acquisition and finger discrimination.
1:Experimental Design and Method Selection:
The study uses a system combining a web camera and Leap Motion Controller to acquire hand depth and color data. Hidden-surface processing is performed using the Z-buffer method and skin color extraction in the HSV color space to handle occlusion between hands and virtual objects.
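As a rough illustration of the skin-color step, the following Python/OpenCV sketch extracts a hand mask in HSV space. The study's actual thresholds are not reported, so the bounds and function name here are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_skin_mask(frame_bgr):
    """Binary mask of skin-colored pixels in a BGR frame (illustrative thresholds)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)     # assumed lower HSV bound
    upper = np.array([25, 180, 255], dtype=np.uint8)  # assumed upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing suppresses speckle noise and fills small
    # holes, mitigating the "holes in the skin extraction" issue noted above.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```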
2:Sample Selection and Data Sources:
The system uses real-time images from a web camera and depth data from the Leap Motion Controller, with AR markers for virtual object placement. No specific sample selection criteria are detailed beyond general use.
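The summary does not name the marker library; as a stand-in, the sketch below uses OpenCV's ArUco module (opencv-contrib-python >= 4.7) to detect marker corners for virtual-object placement. The dictionary choice is an assumption.

```python
import cv2

# Assumed marker dictionary; the original marker set is not specified.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def find_markers(frame_bgr):
    """Detect marker corners/ids in a BGR frame for placing virtual objects."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    return corners, ids
```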
3:List of Experimental Equipment and Materials:
Web camera (resolution 320x240 pixels), Leap Motion Controller, PC, AR markers. The Leap Motion Controller is used for finger position tracking, and the web camera for image acquisition and marker recognition.
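For context, a minimal capture setup matching the reported 320x240 resolution might look like the following sketch; the device index and whether the camera honors the requested size are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
ok, frame = cap.read()     # frame: 240x320x3 BGR image if the request is honored
cap.release()
```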
4:Experimental Procedures and Operational Workflow:
The system is implemented with network programming to avoid interference between the two devices. On the server side, hand position data is acquired with the Leap Motion Controller and sent to the client. On the client side, images are processed in sequence: the RGB camera image is converted to HSV and skin color is extracted; a binary image of the hand region is generated and noise-filtered; green point groups are plotted at the finger-joint positions so their depths can be compared with the virtual objects via the Z-buffer method; and the images are synthesized so that the hand is displayed in front of a virtual object wherever it is closer to the camera.
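The depth-based synthesis step can be sketched as follows, assuming per-pixel depth buffers for the hand and the rendered virtual objects (smaller values mean closer to the camera); all names here are illustrative, not taken from the paper.

```python
import numpy as np

def composite(camera_rgb, virtual_rgb, hand_mask, hand_depth, virtual_depth):
    """Z-buffer style synthesis: the hand is shown at pixels where it is both
    skin-colored and nearer to the camera than the virtual surface."""
    hand_in_front = (hand_mask > 0) & (hand_depth < virtual_depth)
    out = virtual_rgb.copy()
    out[hand_in_front] = camera_rgb[hand_in_front]
    return out
```

Because the depth test is made per pixel, the same comparison extends naturally to scenes with multiple virtual objects, which is why the Z-buffer method is used for that case.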
5:Data Analysis Methods:
Evaluation involves counting, over 100 frames per experimental case, the pixels that were inappropriately hidden-surface processed, i.e., those lying outside the true hand region, and averaging the counts to quantify the improvement.
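This metric can be expressed compactly as below; the mask names are assumptions, with the first mask marking pixels where hidden-surface processing was applied and the second marking the true hand region.

```python
import numpy as np

def mean_error_pixels(frames):
    """Average count, over frames, of pixels processed as 'hand in front'
    that actually lie outside the true hand region.
    frames: iterable of (processed_mask, hand_mask) boolean arrays."""
    counts = [np.count_nonzero(processed & ~hand) for processed, hand in frames]
    return sum(counts) / len(counts)
```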