Research Objective
To develop a multi-scale deep learning model for fusing multispectral and hyperspectral satellite images to achieve high spatial-spectral resolution while minimizing the loss of spatial information.
Research Findings
The LSTM-based scalable fusion model outperforms CNN models in generating high spatial-spectral resolution images, as evidenced by higher SSIM and PSNR values across all tested datasets.
Research Limitations
The study primarily uses datasets with homogeneous pixels; future work could explore images with more heterogeneous pixels and assess the impact on image classification.
1. Experimental Design and Method Selection:
The study employs a scalable spatial-resolution enhancement process in which LSTM networks carry an image from low to high spatial resolution through an intermediate resolution step, sketched below.
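The summary does not give the paper's exact network, so the following is a minimal sketch, assuming PyTorch, of how an LSTM could upscale rows of a spectral image in cascaded 2x stages, with two stages realizing the low-to-intermediate-to-high transition. The class `LSTMUpscaler`, its layer sizes, and the row-as-sequence formulation are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LSTMUpscaler(nn.Module):
    """Hypothetical enhancement stage: treats each image row as a sequence
    of pixel spectra and predicts a row at `scale`x the spatial width."""
    def __init__(self, bands, hidden=128, scale=2):
        super().__init__()
        self.scale = scale
        self.lstm = nn.LSTM(bands, hidden, batch_first=True,
                            bidirectional=True)
        # Each input pixel is expanded into `scale` output pixel spectra.
        self.head = nn.Linear(2 * hidden, scale * bands)

    def forward(self, x):            # x: (batch, width, bands)
        h, _ = self.lstm(x)          # (batch, width, 2*hidden)
        y = self.head(h)             # (batch, width, scale*bands)
        b, w, _ = y.shape
        return y.reshape(b, w * self.scale, -1)  # (batch, scale*width, bands)

# Two cascaded stages give the low -> intermediate -> high transition.
stage1, stage2 = LSTMUpscaler(bands=30), LSTMUpscaler(bands=30)
low = torch.randn(4, 16, 30)         # 4 rows, 16 pixels, 30 bands
high = stage2(stage1(low))           # (4, 64, 30): rows upscaled 4x
```

The intermediate step means each stage only needs to learn a 2x refinement, rather than the full resolution jump in one pass.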
2. Sample Selection and Data Sources:
Utilizes datasets from Salinas, Indian Pines, and Enrique Reef, including multispectral and hyperspectral images.
3. List of Experimental Equipment and Materials:
Uses satellite images from AVIRIS, IKONOS, and AISA Eagle sensors.
4. Experimental Procedures and Operational Workflow:
Involves singular value decomposition (SVD) for spectral dimensionality reduction, LSTM networks for spatial enhancement, and a comparison against CNN models; a generic SVD reduction step is sketched below.
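The summary does not detail how the paper applies SVD, so this is a minimal NumPy sketch of the generic step: projecting each pixel's spectrum onto the top-k singular vectors of the unfolded cube. The helper name `svd_reduce` and the choice of k are assumptions for illustration.

```python
import numpy as np

def svd_reduce(cube, k):
    """Project a hyperspectral cube (H, W, B) onto its top-k spectral
    components via SVD; a generic stand-in for the reduction step."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)              # one row per pixel spectrum
    mean = pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
    reduced = (pixels - mean) @ vt[:k].T      # (H*W, k)
    return reduced.reshape(h, w, k), vt[:k], mean

cube = np.random.rand(64, 64, 200)            # synthetic 200-band cube
reduced, components, mean = svd_reduce(cube, k=10)
print(reduced.shape)                          # (64, 64, 10)
```

Reducing the spectral dimension before enhancement keeps the LSTM input small; the stored components and mean allow the full spectra to be reconstructed afterwards.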
5. Data Analysis Methods:
Evaluates performance using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR).
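Both metrics are available in scikit-image; the sketch below assumes the reference and fused images share shape (H, W, B) with values in [0, 1], and that scikit-image >= 0.19 is installed (for the `channel_axis` argument). Higher values of either metric indicate a fused image closer to the reference.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, fused):
    """PSNR (dB) and SSIM between a reference and a fused image,
    both shaped (H, W, B) with values scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, fused, data_range=1.0)
    ssim = structural_similarity(reference, fused, data_range=1.0,
                                 channel_axis=-1)
    return psnr, ssim

ref = np.random.rand(64, 64, 10)              # synthetic reference cube
fused = np.clip(ref + 0.05 * np.random.randn(*ref.shape), 0.0, 1.0)
print("PSNR: %.2f dB, SSIM: %.4f" % evaluate(ref, fused))
```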