Research Objective
To propose and validate a new framework for view and illumination invariant image matching for application in face recognition, comparing its performance with SIFT-based approaches.
Research Findings
The proposed approach yields more image matches than SIFT and achieves higher detection accuracy on a heterogeneous data set. It also has lower computational complexity, making it suitable for real-time applications such as face recognition.
Research Limitations
The method's performance on data sets with random views and variable illumination conditions needs further optimization. The disparity-consistency threshold is set by trial and error, which may not be optimal in all cases.
1:Experimental Design and Method Selection:
The proposed method simulates affine transforms to extract descriptors and classifies candidate matches using Bayes' theorem. It is compared against SIFT-based face recognition.
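The affine-simulation step (as in ASIFT) models a viewpoint change as a rotation, a directional tilt, and a second rotation, and extracts descriptors from each simulated view. A minimal numpy sketch of that decomposition, with illustrative function and parameter names (not the paper's code):

```python
import numpy as np

def affine_simulation_matrix(tilt, phi, psi=0.0):
    """2x2 matrix modelling a camera viewpoint change, in the spirit of
    ASIFT's affine simulation: rotate by phi, tilt, rotate by psi."""
    def rot(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s], [s, c]])
    tilt_m = np.array([[1.0 / tilt, 0.0], [0.0, 1.0]])  # compress one axis
    return rot(psi) @ tilt_m @ rot(phi)

# Sample a small grid of simulated viewpoints (tilts and in-plane angles)
# and warp a set of keypoint coordinates under each simulated view.
keypoints = np.array([[10.0, 20.0], [30.0, 5.0]])  # (x, y) pairs
views = [(t, phi) for t in (1.0, np.sqrt(2.0), 2.0)
         for phi in np.linspace(0.0, np.pi, 4, endpoint=False)]
warped = [keypoints @ affine_simulation_matrix(t, phi).T for t, phi in views]
```

With tilt 1 and both angles zero the matrix reduces to the identity, i.e. the original (frontal) view is always among the simulated ones.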
2:Sample Selection and Data Sources:
Yale facial data set comprising 165 grayscale images of 15 individuals with different facial expressions and configurations.
3:List of Experimental Equipment and Materials:
The Viola-Jones object detection algorithm, with OpenCV's pre-trained classifiers, for cropping face regions.
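Viola-Jones detection rests on Haar-like features evaluated in constant time from an integral image. A self-contained numpy sketch of that core idea (function names are illustrative, not the OpenCV API):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry [y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of a pixel rectangle via 4 table lookups (O(1) per rectangle)."""
    ii = np.pad(ii, ((1, 0), (1, 0)))  # guard row/column of zeros
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def two_rect_haar(img, top, left, h, w):
    """Edge-type Haar feature: left half-rectangle minus right half."""
    ii = integral_image(img.astype(np.int64))
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))
```

A cascade of thresholded features like this is what OpenCV's trained classifiers apply at every candidate window to accept or reject a face region.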
4:Experimental Procedures and Operational Workflow:
Key points are selected using ASIFT, descriptors are computed, and matching is performed using Bayes' theorem. The process is applied to both cropped and uncropped images.
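The descriptor-matching step can be illustrated with a standard nearest-neighbour ratio test; this is a common baseline stand-in, not the paper's Bayesian match classification:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only when the nearest distance is clearly smaller than
    the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Counting the matches returned by such a procedure, on cropped versus uncropped images, is what the reported "number of image matches" comparison measures.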
5:Data Analysis Methods:
Performance is evaluated by the number of key points identified and the accuracy of face recognition on a heterogeneous data set.
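The recognition-accuracy metric reduces to the fraction of probe images assigned the correct subject identity. A trivial sketch (names are illustrative):

```python
def recognition_accuracy(predicted, actual):
    """Fraction of probe images whose predicted subject label matches
    the ground-truth label."""
    assert len(predicted) == len(actual) and len(actual) > 0
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)
```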