Research Objective
To propose a unified framework for face alignment that can handle images with occlusion and extreme poses by ignoring points that cannot be seen under these conditions.
Research Findings
The proposed unified framework effectively handles images with occlusion and extreme poses, achieving superior performance on the AFLW and COFW datasets and comparable results on the LFPW dataset. It also reduces the sensitivity of alignment to drift in the face detector's output bounding box.
Limitations
The method requires detecting facial parts first, which can be non-trivial when those parts are themselves partially occluded.
1. Experimental Design and Method Selection:
The study uses a CNN-based solution for face alignment, focusing on classifying facial parts and then training regression models for key points.
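To make the two-stage idea concrete (classify facial parts, then regress key points from each part), here is a minimal numpy sketch. A linear least-squares regressor stands in for the paper's CNN regressor; all function names, feature dimensions, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_keypoint_regressor(features, keypoints):
    """Fit a linear map from part features to keypoint coordinates.

    A least-squares stand-in for the CNN regression model described
    in the paper (assumption, not the authors' code).
    """
    # Append a bias column so the model can learn a constant offset.
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, keypoints, rcond=None)
    return W

def predict_keypoints(W, features):
    """Apply the trained regressor to new part features."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ W

# Toy data: 100 samples, 16-dim part features, 5 keypoints (10 coords).
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
true_W = rng.normal(size=(17, 10))
kps = np.hstack([feats, np.ones((100, 1))]) @ true_W

W = train_keypoint_regressor(feats, kps)
pred = predict_keypoints(W, feats)
```

In the paper's pipeline a CNN classifier would first select which candidate windows contain which facial part, and a separate regressor of this shape would be trained per part.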
2. Sample Selection and Data Sources:
The method is tested on LFPW, AFLW, and COFW datasets, with training data carefully chosen from the CelebA dataset.
3. Experimental Tools and Materials:
The method involves converting face images to grayscale, applying Canny edge detection to extract contours, and using CNN architectures for part classification and keypoint regression.
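The preprocessing steps above can be sketched as follows. This uses standard luminance weights for the grayscale conversion and a simple gradient-magnitude threshold as a stand-in for Canny (a real pipeline would use `cv2.Canny`); the threshold value and toy image are assumptions.

```python
import numpy as np

def to_grayscale(rgb):
    """Standard ITU-R BT.601 luminance weights; the paper's exact
    conversion is unspecified."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def edge_map(gray, thresh=0.2):
    """Gradient-magnitude edges: a simplified stand-in for Canny,
    which adds Gaussian smoothing, non-maximum suppression, and
    hysteresis thresholding on top of this."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]  # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max() if mag.max() > 0 else mag.astype(bool)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
gray = to_grayscale(img)
edges = edge_map(gray)
```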
4. Experimental Procedures and Operational Workflow:
The process includes extracting candidate windows based on contours and corner points, classifying facial parts, and training regression models for key points.
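The candidate-window extraction step can be sketched as cutting a fixed-size window around each contour or corner point, clipped to the image bounds. The window size and clipping policy here are illustrative assumptions; the paper's actual window-proposal scheme may differ.

```python
import numpy as np

def candidate_windows(points, img_h, img_w, size=32):
    """Return (y0, x0, y1, x1) boxes centered (where possible) on each
    contour/corner point, shifted as needed to stay inside the image.

    The fixed 32x32 window is an illustrative assumption.
    """
    half = size // 2
    wins = []
    for (y, x) in points:
        # Clamp the top-left corner so the full window fits in the image.
        y0 = max(0, min(y - half, img_h - size))
        x0 = max(0, min(x - half, img_w - size))
        wins.append((y0, x0, y0 + size, x0 + size))
    return wins

# Corner points near a border get windows shifted inward, not truncated.
wins = candidate_windows([(0, 0), (50, 60), (99, 99)], img_h=100, img_w=100)
```

Each such window would then be fed to the part classifier, and windows accepted as a given facial part go on to that part's keypoint regressor.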
5. Data Analysis Methods:
The performance is evaluated based on error rate and failure rate, with results compared against state-of-the-art methods.
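The two evaluation metrics can be sketched as below. The normalizing distance (commonly inter-ocular distance) and the 10% failure threshold are conventional choices in the face-alignment literature, assumed here since the summary does not state the exact definitions used.

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_dist):
    """Per-image mean point-to-point error, normalized by a reference
    distance (e.g. inter-ocular distance; the exact normalizer varies
    by dataset and is an assumption here).

    pred, gt: (N, K, 2) arrays of K keypoints for N images.
    norm_dist: (N,) array of normalizing distances.
    """
    per_point = np.linalg.norm(pred - gt, axis=-1)  # (N, K)
    return per_point.mean(axis=-1) / norm_dist       # (N,)

def failure_rate(errors, thresh=0.10):
    """Fraction of images whose normalized error exceeds the threshold
    (10% is a common convention)."""
    return float(np.mean(errors > thresh))

# Toy example: 4 faces with 5 keypoints each; one face badly misaligned.
gt = np.zeros((4, 5, 2))
pred = gt.copy()
pred[0] += 0.5
norm = np.full(4, 1.0)
errs = normalized_mean_error(pred, gt, norm)
rate = failure_rate(errs)  # 1 of 4 faces fails -> 0.25
```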