- Title
- Abstract
- Keywords
- Experimental plan
- Product
-
A Generative Discriminatory Classified Network for Change Detection in Multispectral Imagery
Abstract: Multispectral image change detection based on deep learning generally requires a large amount of training data. However, annotating such data is difficult and expensive. To address this problem, we propose a generative discriminatory classified network (GDCN) for multispectral image change detection, which exploits labeled data, unlabeled data, and fake data generated by generative adversarial networks. The GDCN consists of a discriminatory classified network (DCN) and a generator. The DCN classifies input data into a changed class, an unchanged class, and an extra fake class. The generator recovers realistic data from input noise, providing additional training samples that boost the performance of the DCN. Finally, the bitemporal multispectral images are fed into the DCN to obtain the final change map. Experimental results on real multispectral imagery datasets demonstrate that the proposed GDCN, trained with unlabeled data and a small amount of labeled data, achieves competitive performance compared with existing methods.
Keywords: Change detection, deep learning, multispectral imagery, generative adversarial networks (GANs)
Updated 2025-09-23 15:23:52
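The three-way classification described above (changed, unchanged, fake) can be sketched as a per-sample loss in plain Python. This is an illustrative assumption rather than the authors' implementation: labeled real samples and generated samples get standard cross-entropy against their class, while unlabeled real samples are simply pushed away from the fake class.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Class indices for the three-way DCN output.
CHANGED, UNCHANGED, FAKE = 0, 1, 2

def dcn_loss(logits, label=None):
    """Loss for one sample's three-class logits (hypothetical sketch).

    - Labeled real sample (label = CHANGED or UNCHANGED):
      cross-entropy against the true class.
    - Generated sample (label = FAKE): cross-entropy against FAKE.
    - Unlabeled real sample (label = None): push probability
      mass away from the FAKE class.
    """
    p = softmax(logits)
    if label in (CHANGED, UNCHANGED, FAKE):
        return -math.log(p[label])
    # Unlabeled real sample: it should look like either real class.
    return -math.log(1.0 - p[FAKE])
```

A confident, correct prediction yields a near-zero loss in all three cases, which is the qualitative behavior a semi-supervised GAN discriminator needs.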
-
3D auto-context-based locality adaptive multi-modality GANs for PET synthesis
Abstract: Positron emission tomography (PET) has become widely used in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality PET image from a low-dose one to reduce radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize a high-quality FDG PET image from a low-dose one, with accompanying MRI images that provide anatomical information. Our work has four contributions. First, unlike traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary across image locations, so a single unified kernel for the whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we use 1×1×1 kernels to learn this locality adaptive fusion, so the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of the synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
Keywords: Image synthesis, positron emission tomography (PET), locality adaptive fusion, generative adversarial networks (GANs), multi-modality
Updated 2025-09-23 15:21:01
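The locality adaptive fusion idea (a 1×1×1 convolution produces per-voxel mixing weights for the modalities) can be sketched in plain Python. This is a minimal illustration, not the paper's code: the per-voxel weight map is given explicitly here, whereas in the paper it is learned jointly with the synthesis network.

```python
def locality_adaptive_fusion(pet, mri, weight_map):
    """Fuse two modality volumes voxel by voxel (illustrative sketch).

    pet, mri   -- flat lists of voxel intensities (same length)
    weight_map -- per-voxel fusion weight in [0, 1] for the PET
                  modality (in the paper this map is produced by a
                  learned 1x1x1 convolution; here it is supplied
                  directly for illustration)
    """
    assert len(pet) == len(mri) == len(weight_map)
    # A 1x1x1 kernel acts independently at each voxel, so fusion
    # reduces to a per-voxel convex combination of the modalities.
    return [w * p + (1.0 - w) * m
            for p, m, w in zip(pet, mri, weight_map)]
```

The key design point is that a 1×1×1 kernel has no spatial extent, so it adds only a handful of parameters while still letting the fusion weights differ at every location.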
-
Generative Adversarial Networks for Cross-Scene Classification in Remote Sensing Images (IEEE IGARSS 2018, International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22-27 July 2018)
Abstract: In this paper, we present a novel method for cross-scene classification in remote sensing images based on generative adversarial networks (GANs). To this end, we train, in an adversarial manner, an encoder-decoder network coupled with a discriminator network on labeled and unlabeled data from two different domains. The encoder-decoder network aims to reduce the discrepancy between the distributions of the two domains, while the discriminator tries to tell them apart. At the end of the optimization process, we train an extra network on the encoded labeled data and then classify the encoded unlabeled data. Experimental results on two datasets acquired over the cities of Potsdam and Vaihingen, with spatial resolutions of 5 cm and 9 cm, respectively, confirm the promising capability of the proposed method.
Keywords: Cross-scene classification, domain adaptation, generative adversarial networks (GANs)
Updated 2025-09-10 09:29:36
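The adversarial objective in this last abstract (the encoder tries to make the two domains indistinguishable while the discriminator tries to separate them) can be sketched with two complementary losses in plain Python. This is a generic domain-confusion sketch under the usual GAN formulation, not the authors' exact objective:

```python
import math

def bce(p, y):
    # Binary cross-entropy for one prediction p in (0, 1) and
    # target y in {0, 1}, clipped for numerical safety.
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def discriminator_loss(p_source, p_target):
    # The discriminator wants source features scored as 1
    # and target features scored as 0.
    return bce(p_source, 1) + bce(p_target, 0)

def encoder_adversarial_loss(p_target):
    # The encoder-decoder wants target features to be
    # mistaken for source features (target label 1).
    return bce(p_target, 1)
```

Training alternates between the two: the discriminator minimizes `discriminator_loss`, then the encoder minimizes `encoder_adversarial_loss` on the discriminator's output for target-domain features.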