Research Purpose
Determining whether face images before and after makeup refer to the same identity.
Research Findings
The proposed GIP-NN method is highly effective for makeup-robust face recognition, offering a balance between accuracy and computational efficiency. It outperforms traditional hand-crafted feature extraction methods and is significantly faster than CNN-based approaches, making it suitable for applications requiring real-time processing.
Research Limitations
The dataset is relatively small for training a CNN, which may affect the performance of CNN-based methods. The method's effectiveness is demonstrated on a specific dataset, and its generalization to other datasets or more diverse makeup styles is not explored.
1:Experimental Design and Method Selection:
The proposed method combines the Gestalt Interest Points algorithm (GIP) with an artificial neural network (ANN) for makeup-robust face recognition.
2:Sample Selection and Data Sources:
A self-compiled dataset of 26 subjects from YouTube makeup tutorials, totaling 23,145 video frames before and after makeup application.
3:List of Experimental Equipment and Materials:
Matlab R2016b on a 64-bit Windows operating system with an Intel Core i7-3632QM 2.20GHz CPU and 8GB RAM.
4:Experimental Procedures and Operational Workflow:
The GIP algorithm extracts features from images, which are then classified by an ANN.
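The two-stage pipeline (interest-point feature extraction, then ANN classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the GIP extractor is not public here, so `extract_descriptors` is a labeled placeholder, and the descriptor dimension, pooling step, hidden-layer size, and network weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_descriptors(image, n_points=50, dim=64):
    """Placeholder for GIP: a real implementation would locate Gestalt
    interest points and describe the local patch around each one."""
    return rng.standard_normal((n_points, dim))

def pool(descriptors):
    """Pool a variable-size descriptor set into one fixed-length vector."""
    return descriptors.mean(axis=0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class TinyANN:
    """One-hidden-layer classifier standing in for the paper's ANN.
    Weights are random here; a real run would train them on the
    before/after-makeup frames."""
    def __init__(self, in_dim, hidden, n_classes):
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, n_classes)) * 0.1
        self.b2 = np.zeros(n_classes)

    def predict_proba(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)

# Classify one "frame" over 26 identities, matching the dataset above.
frame = None  # stands in for a video frame
feats = pool(extract_descriptors(frame))
ann = TinyANN(in_dim=64, hidden=32, n_classes=26)
probs = ann.predict_proba(feats)
```

Mean-pooling the per-point descriptors is one simple way to get the fixed-length input an ANN requires; the paper may aggregate GIP features differently.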
5:Data Analysis Methods:
Comparison with state-of-the-art methods (SIFT, SURF, BRISK, FREAK, and CNN) in terms of accuracy and feature extraction time.
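A per-image feature-extraction timing comparison of the kind described above could be harnessed as in this sketch. The extractors here are labeled placeholders (the real comparison would call GIP, SIFT, SURF, BRISK, FREAK, and a CNN), and taking the best of several repeats is an assumption about the measurement protocol.

```python
import time
import numpy as np

rng = np.random.default_rng(1)

def time_extractor(extract, images, repeats=3):
    """Best-of-N average wall-clock extraction time per image (seconds)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for img in images:
            extract(img)
        best = min(best, time.perf_counter() - t0)
    return best / len(images)

# Placeholder extractors; real runs would invoke the actual algorithms.
extractors = {
    "GIP (placeholder)": lambda img: img.mean(axis=2),
    "SIFT (placeholder)": lambda img: np.sort(img, axis=None)[:128],
}

images = [rng.random((64, 64, 3)) for _ in range(10)]
timings = {name: time_extractor(fn, images) for name, fn in extractors.items()}
for name, t in timings.items():
    print(f"{name}: {t * 1e3:.3f} ms/image")
```

Reporting time per image alongside accuracy makes the accuracy/efficiency trade-off claimed in the findings directly comparable across methods.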