Research Objective
To develop a blind quality index for tone-mapped images that accurately evaluates the performance of tone-mapping operators by leveraging luminance partition and extracting quality-aware features from different luminance areas.
Research Findings
The proposed blind quality index effectively evaluates tone-mapped images by partitioning them based on luminance and extracting complementary features (local entropy for information loss, local colorfulness for color reproduction, global contrast for overall distortion). Extensive experiments show it outperforms state-of-the-art metrics in accuracy and generalization across databases. It provides a reliable tool for optimizing tone-mapping operators without reference HDR images, though improvements in threshold selection and feature design are needed to better model the HVS.
Research Limitations
The luminance partition method may not be optimal when histograms are heavily skewed (e.g., many pixels at 0 or 255), as the median value Tmid could lose its effectiveness as a partition reference. The entropy feature, while computationally simple, is purely statistical and may not fully capture human visual characteristics. The method also relies on learning from databases and may need further work to remain robust under varying exposure conditions.
1:Experimental Design and Method Selection:
The study uses a luminance partition method to segment images into dark, bright, and normal areas based on adaptive thresholds. Features such as local entropy, local colorfulness, and global contrast are extracted under a multi-resolution framework to mimic the Human Visual System (HVS). A random forest regression model is employed for mapping features to quality scores.
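A minimal Python sketch of the luminance-partition step (the original implementation is in MATLAB). The gamma value and the exact threshold rule around the median Tmid are assumptions for illustration; the paper's adaptive thresholds are not reproduced here:

```python
import numpy as np

def partition_luminance(rgb, gamma=2.2):
    """Split an 8-bit RGB tone-mapped image into dark / normal / bright masks.

    The threshold rule below (median Tmid plus a fixed fraction of the range
    on each side) is an illustrative assumption, not the paper's adaptive rule.
    """
    rgb = rgb.astype(np.float64) / 255.0
    # Rec. 601 luma as a simple luminance proxy, followed by gamma correction.
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    lum = 255.0 * np.power(lum, 1.0 / gamma)

    t_mid = np.median(lum)                       # mid-level reference (Tmid)
    t_low = t_mid - 0.25 * (t_mid - lum.min())   # assumed dark threshold
    t_high = t_mid + 0.25 * (lum.max() - t_mid)  # assumed bright threshold

    dark = lum < t_low
    bright = lum > t_high
    normal = ~(dark | bright)
    return dark, normal, bright
```

Per-area features (local entropy and local colorfulness of each mask, plus global contrast) can then be pooled, and the same procedure repeated on downsampled copies of the image to form the multi-resolution feature vector.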
2:Sample Selection and Data Sources:
Two publicly available databases are used: the TMID database from the University of Waterloo, Canada, with 120 images, and the ESPL-LIVE HDR database from the University of Texas at Austin, USA, with 747 tone-mapped images. Subjective scores (Mean Opinion Scores) serve as ground truth.
3:List of Experimental Equipment and Materials:
MATLAB R2016b software is used for implementation on a computer with a 4.20 GHz CPU and 32 GB RAM. No specific hardware devices are mentioned beyond this computational setup.
4:Experimental Procedures and Operational Workflow:
For each image, luminance is calculated and gamma-corrected. Adaptive thresholds partition the image into dark, normal, and bright areas. Features are extracted at multiple resolutions (the image is downsampled twice). The random forest model is trained on 80% of the data and tested on the remaining 20%, with 1000 random train/test splits to avoid bias. Performance is evaluated using the Pearson Linear Correlation Coefficient (PLCC), Spearman Rank-Order Correlation Coefficient (SRCC), and Kendall Rank-Order Correlation Coefficient (KRCC).
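A minimal Python sketch of this repeated split-and-evaluate protocol, assuming a precomputed feature matrix X and MOS vector y; the original implementation is in MATLAB, and the random-forest hyperparameters (100 trees) are assumptions rather than the paper's settings:

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def evaluate(X, y, n_repeats=1000, seed=0):
    """Median PLCC / SRCC / KRCC over repeated random 80/20 splits.

    X: (n_images, n_features) feature matrix; y: subjective MOS vector.
    """
    rng = np.random.RandomState(seed)
    plcc, srcc, krcc = [], [], []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=rng)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        pred = model.fit(X_tr, y_tr).predict(X_te)
        plcc.append(stats.pearsonr(pred, y_te)[0])
        srcc.append(stats.spearmanr(pred, y_te)[0])
        krcc.append(stats.kendalltau(pred, y_te)[0])
    return np.median(plcc), np.median(srcc), np.median(krcc)
```

Reporting the median over the 1000 splits reduces the dependence of the result on any single random partition of the database.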
5:Data Analysis Methods:
Statistical correlation coefficients (PLCC, SRCC, KRCC) are computed to assess prediction accuracy and monotonicity. Feature contributions are analyzed by testing individual and combined features.
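The feature-contribution analysis can follow the same protocol on column subsets of the feature matrix. The column grouping below (three entropy columns, three colorfulness columns, one contrast column) is a hypothetical layout for illustration, not the paper's exact feature ordering:

```python
from itertools import combinations

import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical column layout of the feature matrix X (for illustration only).
FEATURE_GROUPS = {
    "entropy": [0, 1, 2],       # local entropy of dark / normal / bright areas
    "colorfulness": [3, 4, 5],  # local colorfulness of the three areas
    "contrast": [6],            # global contrast
}

def median_srcc(X, y, n_repeats=100, seed=0):
    """Median SRCC of a random forest over repeated random 80/20 splits."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=rng)
        pred = RandomForestRegressor(n_estimators=100).fit(X_tr, y_tr).predict(X_te)
        scores.append(stats.spearmanr(pred, y_te)[0])
    return float(np.median(scores))

def feature_ablation(X, y, groups=FEATURE_GROUPS):
    """SRCC for every individual feature group and every combination of groups."""
    results = {}
    for k in range(1, len(groups) + 1):
        for combo in combinations(groups, k):
            cols = sorted(c for name in combo for c in groups[name])
            results["+".join(combo)] = median_srcc(X[:, cols], y)
    return results
```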