- Title
- Abstract
- Keywords
- Experimental plan
- Products
-
Scale-variable region-merging for high resolution remote sensing image segmentation
Abstract: In high resolution remote sensing imagery (HRI), the sizes of different geo-objects often vary greatly, which poses serious difficulties for successful segmentation. Although existing segmentation approaches offer partial solutions to this problem, the complexity of HRI still presents great challenges for previous methods. To further enhance the quality of HRI segmentation, this paper proposes a new segmentation algorithm based on scale-variable region merging. "Scale-variable" means that the scale parameters (SPs) adopted for segmentation are adaptively estimated, so that geo-objects of various sizes can be better segmented. The proposed technique is implemented in three steps. The first step produces a coarse segmentation with a slight degree of under-segmentation error; this is achieved by segmenting a half-size image with the globally optimal SP, which is determined from the original-size image. In the second step, structural and spatial contextual information is extracted from the coarse segmentation, enabling the estimation of variable SPs. In the last step, a region-merging process is initiated, and the SPs used to terminate this process are estimated from the information obtained in the second step. The proposed method was tested on three scenes of HRI with different landscape patterns. Experimental results indicate that our approach produces good segmentation accuracy, outperforming several competitive methods in the comparison.
Keywords: Image segmentation, High resolution remote sensing imagery, Scale-variable, Region merging
Updated 2025-09-23 15:23:52
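The region-merging loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the merge cost (area-weighted squared mean difference) and the rule for combining the per-region scale parameters `sp` after a merge are common region-merging heuristics assumed here; the paper estimates its SPs from structural and contextual information instead.

```python
def merge_cost(a, b):
    # Heterogeneity increase when merging regions a and b (area-weighted
    # squared difference of mean feature values).
    na, nb = a["area"], b["area"]
    return na * nb / (na + nb) * (a["mean"] - b["mean"]) ** 2

def region_merging(regions, adjacency, sp):
    """Iteratively merge the cheapest adjacent pair until every remaining
    pair's cost exceeds the scale parameter of the regions involved.
    `sp` maps region id -> scale parameter and is mutated in place."""
    regions = {i: dict(r) for i, r in regions.items()}
    edges = {frozenset(e) for e in adjacency}
    while True:
        candidates = [(merge_cost(regions[i], regions[j]), i, j)
                      for i, j in (tuple(e) for e in edges)
                      if merge_cost(regions[i], regions[j]) < min(sp[i], sp[j])]
        if not candidates:
            break
        _, i, j = min(candidates)
        a, b = regions[i], regions[j]
        total = a["area"] + b["area"]
        a["mean"] = (a["mean"] * a["area"] + b["mean"] * b["area"]) / total
        a["area"] = total
        sp[i] = min(sp[i], sp[j])  # keep the stricter scale for the merged region
        # Rewire j's edges to i and drop the self-loop that results.
        edges = {frozenset({i if x == j else x for x in e})
                 for e in edges if e != frozenset({i, j})}
        edges = {e for e in edges if len(e) == 2}
        del regions[j]
    return regions
```

With a variable `sp` per region, small homogeneous objects can stop merging early while large ones keep growing, which is the behaviour the paper's adaptive SP estimation targets.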
-
Remote sensing images super-resolution with deep convolution networks
Abstract: Remote sensing image data are widely used in applications such as agriculture, the military, and land use. Owing to limitations of image-acquisition instruments and the law of energy conservation, it is difficult to obtain remote sensing images with both high spatial and high spectral resolution. Super-resolution (SR) is a technique for recovering a high-resolution (HR) image from a low-resolution (LR) one. In this paper, a novel deep convolution network (DCN) SR method (SRDCN) is proposed. Based on hierarchical architectures, the proposed SRDCN learns an end-to-end mapping function to reconstruct an HR image from its LR version; furthermore, extensions of SRDCN based on residual learning and a multi-scale design, namely Developed SRDCN (DSRDCN) and Extensive SRDCN (ESRDCN), are investigated for further improvement. Experimental results on different types of remote sensing data (e.g., multispectral and hyperspectral) demonstrate that the proposed methods outperform traditional sparse-representation-based methods.
Keywords: Convolution neural network, Remote sensing imagery, Super-resolution
Updated 2025-09-23 15:23:52
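The end-to-end LR-to-HR mapping with residual learning can be sketched in a few lines of NumPy. This is a toy forward pass under stated assumptions, not the SRDCN architecture: single-channel input, nearest-neighbour upsampling, one feature-extraction layer with ReLU, and one reconstruction layer whose output is added back to the upsampled input (the residual-learning idea behind DSRDCN); the kernels in `kernels` are hypothetical, and a real network would learn many layers of multi-channel filters.

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel 'same' convolution (cross-correlation), stride 1."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sr_forward(lr, kernels, scale=2):
    """Upsample, then conv -> ReLU -> conv with a residual skip:
    HR = upsampled LR + predicted high-frequency detail."""
    up = np.kron(lr, np.ones((scale, scale)))            # nearest-neighbour upsampling
    feat = np.maximum(conv2d_same(up, kernels[0]), 0.0)  # feature extraction + ReLU
    detail = conv2d_same(feat, kernels[1])               # reconstruction layer
    return up + detail
```

Because the network only has to predict the residual detail, an untrained (zero-kernel) model already reproduces the upsampled input, which is what makes residual learning easier to optimise.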
-
Dense Semantic Labeling with Atrous Spatial Pyramid Pooling and Decoder for High-Resolution Remote Sensing Imagery
Abstract: Dense semantic labeling is significant in high-resolution remote sensing imagery research and has been widely used in land-use analysis and environmental protection. With the recent success of fully convolutional networks (FCNs), various network architectures have greatly improved performance. Among them, atrous spatial pyramid pooling (ASPP) and the encoder-decoder are two successful designs: the former extracts multi-scale contextual information with multiple effective fields-of-view, while the latter recovers spatial information to obtain sharper object boundaries. In this study, we propose a more efficient fully convolutional network that combines the advantages of both structures. Our model uses a deep residual network (ResNet) followed by ASPP as the encoder and, at the upsampling stage, combines two scales of high-level features with the corresponding low-level features as the decoder. We further develop a multi-scale loss function to enhance the learning procedure. In postprocessing, a novel superpixel-based dense conditional random field is employed to refine the predictions. We evaluate the proposed method on the Potsdam and Vaihingen datasets, and the experimental results demonstrate that it performs better than other machine learning and deep learning methods. Compared with the state-of-the-art DeepLab_v3+, our model gains 0.4% and 0.6% improvements in overall accuracy on the two datasets, respectively.
Keywords: dense semantic labeling, encoder-decoder, superpixel-based DenseCRF, remote sensing imagery, fully convolutional networks, atrous spatial pyramid pooling
Updated 2025-09-23 15:23:52
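The key mechanism behind ASPP, the atrous (dilated) convolution that enlarges the effective field-of-view without adding parameters, can be sketched as follows. This is a simplified single-channel illustration: a real ASPP module learns a separate multi-channel kernel per dilation rate and fuses the branches with a 1x1 convolution, whereas here one shared kernel `k` is applied at each rate and the branch outputs are simply summed.

```python
import numpy as np

def atrous_conv(x, k, rate):
    """3x3 dilated ('atrous') convolution with 'same' zero padding.
    A rate-r kernel covers a (2r+1) x (2r+1) field of view using only 9 taps."""
    xp = np.pad(x, rate)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample the padded image at stride `rate` around pixel (i, j).
            patch = xp[i:i + 2 * rate + 1:rate, j:j + 2 * rate + 1:rate]
            out[i, j] = np.sum(patch * k)
    return out

def aspp_sketch(x, k, rates=(1, 2, 4)):
    """Parallel atrous branches at several rates, fused here by summation."""
    return sum(atrous_conv(x, k, r) for r in rates)
```

The larger rates let the encoder see wide context (useful for big geo-objects such as buildings or fields) while the small-rate branch preserves local detail, which is exactly the multi-scale trade-off the abstract describes.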
-
Segmentation for remote-sensing imagery using the object-based Gaussian-Markov random field model with region coefficients
Abstract: The Markov random field (MRF) model is widely used for remote-sensing image segmentation, and the object-based MRF (OMRF) method in particular has attracted great attention in recent years. However, the OMRF method usually fails to capture the correlation between regional features because it considers only a mixed-Gaussian model. To solve this problem and improve segmentation accuracy, this article proposes a new method for remote-sensing image segmentation: the object-based Gaussian-Markov random field model with region coefficients (OGMRF-RC). First, to describe the complicated interactions among regional features, the OGMRF-RC method employs region size and edge information as region coefficients to build each object-based region. Second, the classic Gaussian-Markov model is extended to the region level to model the errors in OLREs. Finally, segmentation is achieved through a principled probabilistic inference designed for the OGMRF-RC method. Experimental results on synthetic texture images and remote-sensing images from different datasets show that the proposed OGMRF-RC method achieves more accurate segmentation than other state-of-the-art MRF-based methods and a method using convolutional neural networks.
Keywords: Segmentation, Gaussian-Markov random field, region coefficients, object-based, remote-sensing imagery
Updated 2025-09-23 15:23:52
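A region-level MRF of this kind can be illustrated with a small energy-minimisation sketch. Everything here is an assumption for illustration, not the OGMRF-RC formulation: the data term is a per-region Gaussian negative log-likelihood, the pairwise term is a Potts penalty weighted by a neighbour coefficient `w` (standing in for the paper's size/edge-based region coefficients), and inference uses simple ICM-style coordinate descent rather than the paper's probabilistic inference.

```python
import math

def local_energy(r, c, labels, regions, neighbors, class_mean, class_var, beta=1.0):
    """Energy of assigning class c to region r: Gaussian data term on the
    region's mean feature, plus a coefficient-weighted Potts smoothness term."""
    mu, var = class_mean[c], class_var[c]
    data = 0.5 * math.log(2 * math.pi * var) + (regions[r]["mean"] - mu) ** 2 / (2 * var)
    pair = beta * sum(w for n, w in neighbors[r] if labels[n] != c)
    return data + pair

def icm_segment(regions, neighbors, class_mean, class_var, labels, beta=1.0, iters=10):
    """Iterated conditional modes: greedily relabel each region until stable."""
    for _ in range(iters):
        changed = False
        for r in regions:
            best = min(range(len(class_mean)),
                       key=lambda c: local_energy(r, c, labels, regions, neighbors,
                                                  class_mean, class_var, beta))
            if best != labels[r]:
                labels[r] = best
                changed = True
        if not changed:
            break
    return labels
```

The neighbour weights show why region coefficients matter: a long shared boundary (large `w`) pulls adjacent regions toward the same label more strongly than a short one.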
-
Deep Distillation Recursive Network for Remote Sensing Imagery Super-Resolution
Abstract: Deep convolutional neural networks (CNNs) have been widely used and have achieved state-of-the-art performance in many image and video processing and analysis tasks. In particular, for image super-resolution (SR), previous CNN-based methods have led to significant improvements over shallow learning-based methods. However, previous CNN-based algorithms with simple direct or skip connections perform poorly when applied to SR of remote sensing satellite images. In this study, a simple but effective CNN framework, the deep distillation recursive network (DDRN), is presented for video satellite image SR. DDRN consists of a group of ultra-dense residual blocks (UDB), a multi-scale purification unit (MSPU), and a reconstruction module. In particular, by adding rich interactive links in and between the multiple-path units of each UDB, features extracted from multiple parallel convolution layers can be shared effectively. Compared with classical dense-connection-based models, DDRN possesses the following main properties: (1) DDRN contains more linking nodes for the same number of convolution layers; (2) a distillation and compensation mechanism performs feature distillation and compensation in different stages of the network, so that the high-frequency components lost during information propagation can be compensated in the MSPU; and (3) the final SR image benefits from both the feature maps extracted from the UDBs and the compensated components obtained from the MSPU. Experiments on the Kaggle Open Source Dataset and Jilin-1 video satellite images show that DDRN outperforms conventional CNN-based baselines and some state-of-the-art feature extraction approaches.
Keywords: feature distillation, compensation unit, ultra-dense connection, super-resolution, video satellite, remote sensing imagery
Updated 2025-09-23 15:22:29
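The channel-wise feature distillation idea can be sketched as follows. This is only a structural illustration under stated assumptions: the splits here are identity maps, whereas DDRN applies learned convolutions at every step and adds the MSPU's multi-scale compensation before reconstruction; `keep` and `depth` are hypothetical parameters.

```python
import numpy as np

def distill_block(feats, keep=0.5):
    """Channel-split 'distillation': a fraction of channels is distilled
    (retained for the final reconstruction) and the rest is passed on
    for further processing."""
    c = int(feats.shape[0] * keep)
    return feats[:c], feats[c:]

def ddrn_like_forward(feats, depth=3):
    """Recursively distill features and concatenate all distilled slices
    plus the final remainder, mimicking how DDRN gathers intermediate UDB
    outputs before the compensation/reconstruction stage."""
    distilled, rest = [], feats
    for _ in range(depth):
        d, rest = distill_block(rest)
        distilled.append(d)
    return np.concatenate(distilled + [rest], axis=0)
```

Because early-stage features are forwarded directly to the output instead of being repeatedly transformed, the high-frequency detail they carry is preserved; the MSPU then compensates for whatever is still lost along the deep path.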