-
[IEEE IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium - Valencia, Spain (2018.7.22-2018.7.27)] Generative Adversarial Networks for Cross-Scene Classification in Remote Sensing Images
Abstract: In this paper, we present a novel method for cross-scene classification in remote sensing images based on generative adversarial networks (GANs). To this end, we train, in an adversarial manner, an encoder-decoder network coupled with a discriminator network on labeled and unlabeled data coming from two different domains. The encoder-decoder network aims to reduce the discrepancy between the distributions of the two domains, while the discriminator tries to discriminate between them. At the end of the optimization process, we train an extra network on the obtained encoded labeled data and then classify the encoded unlabeled data. Experimental results on two datasets acquired over the cities of Potsdam and Vaihingen, with spatial resolutions of 5 cm and 9 cm, respectively, confirm the promising capability of the proposed method.
Keywords: domain adaptation, generative adversarial networks (GANs), cross-scene classification
Updated 2025-09-10 09:29:36
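The training scheme described above (an encoder-decoder aligning two domains against a discriminator, followed by a classifier fit on the encoded source data) can be sketched roughly as follows in PyTorch; the layer sizes, optimizers, and loss weighting are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of adversarial feature alignment between two domains (PyTorch).
# All layer sizes, learning rates, and the shape of the inputs are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 256))
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def train_step(x_src, x_tgt):
    # 1) Discriminator: separate source-encoded from target-encoded features.
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    d_loss = bce(discriminator(z_src.detach()), torch.ones(len(x_src), 1)) + \
             bce(discriminator(z_tgt.detach()), torch.zeros(len(x_tgt), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Encoder-decoder: reconstruct both domains and fool the discriminator,
    #    pushing the two encoded distributions closer together.
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    recon = mse(decoder(z_src), x_src) + mse(decoder(z_tgt), x_tgt)
    adv = bce(discriminator(z_tgt), torch.ones(len(x_tgt), 1))
    g_loss = recon + adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After this adversarial phase, a separate classifier would be trained on `encoder(x_src)` with the source labels and then applied to `encoder(x_tgt)`, as the abstract describes.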
-
[IEEE 2018 IEEE 16th International Conference on Software Engineering Research, Management and Applications (SERA) - Kunming (2018.6.13-2018.6.15)] Motion Deblurring via Using Generative Adversarial Networks for Space-Based Imaging
Abstract: In some NanoSat missions, we find that captured images are disturbed by motion blur caused by NanoSats operating in low-Earth orbit at high speeds. In this paper, we address the problem of deblurring images degraded by shaking of the space-based imaging system or by movements of the observed targets. We propose a motion deblurring strategy based on Generative Adversarial Networks (GANs) to realize end-to-end image processing in orbit without kernel estimation. We combine a Wasserstein GAN (WGAN) with a loss function composed of an adversarial loss and a perceptual loss to optimize the deblurred result. Experimental results on two different datasets prove the feasibility and effectiveness of the proposed strategy, which outperforms state-of-the-art blind deblurring algorithms used for remote sensing images both quantitatively and qualitatively.
Keywords: Space-Based Imaging, Generative Adversarial Networks, NanoSats, Motion Deblurring
Updated 2025-09-09 09:28:46
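A minimal sketch of the adversarial-plus-perceptual loss combination mentioned in the abstract, assuming a PyTorch generator/critic pair and ImageNet-pretrained VGG-19 features for the perceptual term; the feature cut-off and the weight `lam` are assumptions, not values from the paper.

```python
# Sketch of a combined adversarial + perceptual deblurring loss (PyTorch / torchvision).
# Assumes pretrained VGG-19 weights are available; the generator/critic are not shown.
import torch
import torch.nn as nn
import torchvision

vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(restored, sharp):
    # L2 distance between deep VGG feature maps of the deblurred and sharp images.
    return nn.functional.mse_loss(vgg(restored), vgg(sharp))

def generator_loss(critic, restored, sharp, lam=100.0):
    # WGAN-style adversarial term (maximize the critic's score on restored images,
    # hence the minus sign) plus a weighted perceptual content term.
    adversarial = -critic(restored).mean()
    return adversarial + lam * perceptual_loss(restored, sharp)
```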
-
[IEEE 2018 24th International Conference on Pattern Recognition (ICPR) - Beijing, China (2018.8.20-2018.8.24)] Wasserstein Generative Recurrent Adversarial Networks for Image Generating
Abstract: Most generative models generate an image in a single pass, but in practice painting is usually done iteratively and repeatedly. Generative Adversarial Networks (GANs) are well known for generating images; however, they are hard to train stably. To tackle this problem, we propose a framework named Wasserstein generative recurrent adversarial networks (WGRAN), which merges the Wasserstein distance with recurrent neural networks to iteratively generate realistic-looking images and trains the model in an adversarial way. Our generative model therefore gradually generates images using feedback from the discriminative model, and our approach allows us to control the number of generation iterations. We train our model on various image datasets and compare it with generative recurrent adversarial networks (GRAN) and other state-of-the-art generative models using the Generative Adversarial Metric. From these experiments, we show evidence that our model is able to generate high-quality images.
Keywords: recurrent neural networks, image generating, Generative Adversarial Networks, Wasserstein distance
Updated 2025-09-09 09:28:46
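The iterative generation idea, an image refined over several recurrent steps and trained against a Wasserstein critic, might look roughly like the sketch below; the GRUCell-based canvas generator, image size, and step count are assumptions for illustration only.

```python
# Sketch of iterative ("recurrent") image generation with a Wasserstein critic (PyTorch).
import torch
import torch.nn as nn

class RecurrentGenerator(nn.Module):
    def __init__(self, z_dim=64, img_dim=784, steps=5):
        super().__init__()
        self.steps = steps
        self.cell = nn.GRUCell(z_dim, 256)
        self.to_delta = nn.Linear(256, img_dim)

    def forward(self, z):
        h = torch.zeros(z.size(0), 256)
        canvas = torch.zeros(z.size(0), self.to_delta.out_features)
        for _ in range(self.steps):        # the image is refined step by step
            h = self.cell(z, h)
            canvas = canvas + self.to_delta(h)
        return torch.tanh(canvas)

def wasserstein_losses(critic, real, fake):
    # Critic widens the score gap between real and generated images;
    # the generator minimizes the negated critic score of its samples.
    d_loss = critic(fake.detach()).mean() - critic(real).mean()
    g_loss = -critic(fake).mean()
    return d_loss, g_loss
```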
-
[IEEE IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium - Valencia, Spain (2018.7.22-2018.7.27)] Deep Domain Adaptation for Single-Shot Vehicle Detector in Satellite Images
Abstract: In this paper, we designed unsupervised domain adaptation (DA) methods for vehicle detection in high-resolution satellite images. We applied two DA methods, which have advantages in handling image feature differences among various kinds of image data, to the Single Shot MultiBox Detector: Correlation Alignment DA (CORAL DA) and adversarial DA. These novel methods can substantially improve accuracy without annotated data by finding the common feature space of the source and target domains and aligning the features. While the mean of average precision (AP) and F1 score dropped from 84.1% in the source domain to 66.3% in the target domain, CORAL DA and adversarial DA improved it to 76.8% and 75.9%, respectively. These improvements recover more than half of the performance degradation, indicating the usefulness of our methods.
Keywords: CORAL, domain adaptation, vehicle detection, satellite images, single shot multibox detector (SSD), adversarial training
Updated 2025-09-09 09:28:46
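The CORAL component has a standard closed form: the squared Frobenius distance between the covariance matrices of source and target features (Sun & Saenko's Deep CORAL). A minimal sketch, assuming the features come from some intermediate SSD layer and are flattened per box or per image:

```python
# Sketch of the Correlation Alignment (CORAL) loss between source and target feature batches.
import torch

def coral_loss(f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between the covariance matrices of two (batch, dim) batches."""
    d = f_src.size(1)
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)
    return ((cov(f_src) - cov(f_tgt)) ** 2).sum() / (4.0 * d * d)

# Typical use: add `lambda_coral * coral_loss(feat_src, feat_tgt)` to the detection loss
# so that feature statistics align across the source and target imagery.
```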
-
[IEEE IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium - Valencia (2018.7.22-2018.7.27)] Deep Generative Matching Network for Optical and SAR Image Registration
Abstract: Multimodal remote sensing images contain complementary information and thus could potentially benefit many remote sensing applications. Image registration is a common prerequisite for utilizing multimodal images. However, due to the rather different imaging mechanisms, multimodal image registration is much more challenging than ordinary registration, particularly for optical and synthetic aperture radar (SAR) images. In this work, we design a deep matching network that exploits the latent, coherent features between multimodal patch pairs to infer their matching labels. However, the network requires a large amount of training data, which is not usually available. To address this issue, we propose a generative matching network (GMN) to generate coupled optical and SAR images and hence improve the quantity and diversity of the training data. The experimental results show that our proposal significantly improves the performance of optical and SAR image registration and achieves subpixel or near-subpixel error.
Keywords: multimodal images, generative adversarial network, optical and SAR, deep matching network, image registration
Updated 2025-09-09 09:28:46
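A matching network of the kind the abstract describes can be sketched as a two-stream (pseudo-Siamese) classifier over optical/SAR patch pairs; the patch size, channel counts, and pooling below are assumptions, and the GAN that synthesizes extra coupled training pairs is omitted.

```python
# Sketch of a two-stream patch-matching network predicting whether an optical patch
# and a SAR patch correspond (PyTorch). Trained with BCEWithLogitsLoss on match labels.
import torch
import torch.nn as nn

class MatchingNet(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.opt_stream, self.sar_stream = stream(), stream()  # separate weights per modality
        self.head = nn.Sequential(nn.Linear(2 * 64 * 16, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, opt_patch, sar_patch):
        feats = torch.cat([self.opt_stream(opt_patch), self.sar_stream(sar_patch)], dim=1)
        return self.head(feats)  # matching logit for the patch pair

net = MatchingNet()
logit = net(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
```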
-
[IEEE 2018 37th Chinese Control Conference (CCC) - Wuhan (2018.7.25-2018.7.27)] Monocular Image Depth Estimation Using a Conditional Generative Adversarial Net
Abstract: Depth estimation plays an essential part in understanding the three-dimensional (3D) geometric relations of a scene. Compared with other methods such as binocular vision, estimating depth from a monocular image is much more challenging. In this paper, we propose a conditional generative adversarial net (cGAN) to tackle the problem of monocular image depth estimation. To enhance learning in the training phase, cycle consistency is applied to the network to form a closed loop. We use the network to model the mapping between the RGB image domain and the depth image domain. After adequate training, the model outputs a depth image corresponding to the input RGB image. Experiments on the NYU Depth v2 dataset demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Keywords: cycle consistency, conditional generative adversarial nets, monocular depth estimation, deep learning
Updated 2025-09-09 09:28:46
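The closed-loop constraint mentioned in the abstract can be written compactly: two generators map RGB→depth and depth→RGB, and both round trips are penalized. A minimal sketch, with the L1 penalty and the weighting factor assumed rather than taken from the paper:

```python
# Sketch of a cycle-consistency term for RGB<->depth generators (PyTorch).
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(g_rgb2depth, g_depth2rgb, rgb, depth, lam=10.0):
    # RGB -> predicted depth -> reconstructed RGB should return to the original RGB,
    # and depth -> generated RGB -> reconstructed depth should return to the original depth.
    rgb_cycle = l1(g_depth2rgb(g_rgb2depth(rgb)), rgb)
    depth_cycle = l1(g_rgb2depth(g_depth2rgb(depth)), depth)
    return lam * (rgb_cycle + depth_cycle)
```

In a full cGAN setup this term would be added to the adversarial losses of the two generator/discriminator pairs.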
-
[IEEE 2018 25th IEEE International Conference on Image Processing (ICIP) - Athens, Greece (2018.10.7-2018.10.10)] 2018 25th IEEE International Conference on Image Processing (ICIP) - Retinal Vessel Detection in Wide-Field Fluorescein Angiography with Deep Neural Networks: A Novel Training Data Generation Approach
Abstract: Retinal blood vessel detection is a crucial step in automatic retinal image analysis. Recently, deep neural networks have significantly advanced the state of the art for retinal blood vessel detection in color fundus (CF) images. Thus far, similar gains have not been seen in fluorescein angiography (FA) because the FA modality is entirely different from CF and annotated training data has not been available for FA imagery. We address retinal vessel detection in wide-field FA images with generative adversarial networks (GANs) via a novel approach for generating training data. Using a publicly available dataset that contains concurrently acquired pairs of CF and fundus FA images, vessel maps are detected in CF images via a pre-trained neural network and registered with fundus FA images via parametric chamfer matching to a preliminary FA vessel detection map. The co-aligned pairs of vessel maps (detected from CF images) and fundus FA images are used as ground-truth labeled data for de novo training of a deep neural network for FA vessel detection. Specifically, we utilize adversarial learning to train a GAN in which the generator learns to map FA images to binary vessel maps and the discriminator attempts to distinguish generated from ground-truth vessel maps. We highlight several important considerations for the proposed data generation methodology. The proposed method is validated on the VAMPIRE dataset, which contains high-resolution wide-field FA images and manual annotations of vessel segments. Experimental results demonstrate that the proposed method achieves an estimated ROC AUC of 0.9758.
Keywords: retinal image analysis, fluorescein angiography, deep learning, vessel detection, generative adversarial networks
Updated 2025-09-09 09:28:46
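The chamfer-matching step can be illustrated with a distance-transform alignment cost between binary vessel maps; a parametric search (e.g., over affine warps of the CF-derived vessel map) would minimize this cost against the preliminary FA vessel map. The paper's exact parameterization is not reproduced here, so the sketch below only shows the cost itself.

```python
# Sketch of a chamfer-style alignment cost between two binary vessel maps (NumPy/SciPy).
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(moving_vessels: np.ndarray, fixed_vessels: np.ndarray) -> float:
    """Mean distance from each 'on' pixel of the moving map to the nearest fixed vessel pixel."""
    # distance_transform_edt measures distance to the nearest zero, so invert the fixed mask
    # to make vessel pixels the zeros.
    dist_to_fixed = distance_transform_edt(~fixed_vessels.astype(bool))
    on = moving_vessels.astype(bool)
    return float(dist_to_fixed[on].mean()) if on.any() else 0.0

# A registration search would repeatedly warp `moving_vessels` (e.g., with an affine
# transform) and keep the parameters giving the lowest chamfer_cost.
```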
-
[IEEE 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE) - Nara, Japan (2018.10.9-2018.10.12)] Automatic Generation of Facial Expression Using Generative Adversarial Nets
Abstract: With the spread of digital cameras, smartphones, and social networking services, the number of facial images of people has increased. Facial expression generation from a single facial image has been widely applied in entertainment and social communication, and many approaches that apply machine learning techniques have been developed. In our previous study, we developed a makeup simulator system. However, this system is incapable of changing the impression of a made-up face according to changes in facial expression; another challenge is that the user cannot see the impression of the makeup dynamically and objectively. Therefore, in this study, we generate static facial expression images from a natural (expressionless) image by using generative adversarial networks, which is critical to research on dynamic facial expression change. Our experimental results demonstrate that our approach generates convincing expression images.
Keywords: Generative Adversarial Nets, image, Image-to-Image Translation with Conditional Adversarial Networks, facial expression
Updated 2025-09-04 15:30:14
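One common way to condition a generator on a target expression, in the spirit of the conditional image-to-image translation cited in the keywords, is to concatenate a one-hot expression label map with the input face; the tiny network and the seven-class assumption below are purely illustrative.

```python
# Sketch of an expression-conditioned generator (PyTorch). Architecture and class count assumed.
import torch
import torch.nn as nn

N_EXPRESSIONS = 7  # assumed number of target expressions

class ExpressionGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + N_EXPRESSIONS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, face, expr_id):
        # Broadcast the one-hot expression label to a per-pixel map and stack it on the face.
        onehot = nn.functional.one_hot(expr_id, N_EXPRESSIONS).float()
        label_map = onehot[:, :, None, None].expand(-1, -1, face.size(2), face.size(3))
        return self.net(torch.cat([face, label_map], dim=1))

gen = ExpressionGenerator()
out = gen(torch.randn(2, 3, 64, 64), torch.tensor([0, 3]))  # two faces, two target expressions
```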
-
[Lecture Notes in Computer Science] Understanding and Interpreting Machine Learning in Medical Image Computing Applications Volume 11038 (First International Workshops, MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16-20, 2018, Proceedings) || Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks
Abstract: Recently, there have been several successful deep learning approaches for automatically classifying chest X-ray images into different disease categories. However, there is not yet a comprehensive vulnerability analysis of these models against so-called adversarial perturbations/attacks, an analysis that would make deep models more trustworthy in clinical practice. In this paper, we extensively analyzed the performance of two state-of-the-art deep classification networks on chest X-ray images. These two networks were attacked by three different categories (ten methods in total) of adversarial methods (both white- and black-box), namely gradient-based, score-based, and decision-based attacks. Furthermore, we modified the pooling operations in the two classification networks to measure their sensitivity to different attacks on the specific task of chest X-ray classification.
Keywords: Chest X-ray classification, Deep learning, Adversarial perturbation
Updated 2025-09-04 15:30:14
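Of the three attack categories evaluated, the gradient-based family is the simplest to illustrate; below is a minimal FGSM sketch in PyTorch. The epsilon value and the `model` are placeholders, and the paper evaluates ten attack methods, not only this one.

```python
# Sketch of the Fast Gradient Sign Method (FGSM), a basic gradient-based white-box attack.
import torch

def fgsm_attack(model, x, y, epsilon=0.01):
    """Perturb input x one step in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in the valid range
```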
-
[IEEE 2018 25th IEEE International Conference on Image Processing (ICIP) - Athens, Greece (2018.10.7-2018.10.10)] Near InfraRed Imagery Colorization
Abstract: This paper proposes a stacked conditional Generative Adversarial Network-based method for Near InfraRed (NIR) imagery colorization. We propose a variant architecture of the Generative Adversarial Network (GAN) that uses multiple loss functions over a conditional probabilistic generative model. We show that this new architecture/loss function yields better generalization and representation of the generated colorized NIR images. The proposed approach is evaluated on a large test dataset and compared to recent state-of-the-art methods using standard metrics.
Keywords: Convolutional Neural Networks (CNN), Infrared Imagery colorization, Generative Adversarial Network (GAN)
Updated 2025-09-04 15:30:14
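The "stacked" conditional design with multiple loss terms can be sketched as a coarse generator followed by a refinement generator, each contributing to a combined L1-plus-adversarial objective; the tiny architectures and the weighting below are assumptions, not the paper's configuration.

```python
# Sketch of a two-stage (stacked) conditional colorization pipeline with a multi-term loss (PyTorch).
import torch
import torch.nn as nn

def tiny_generator(in_ch, out_ch=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh())

stage1 = tiny_generator(in_ch=1)       # NIR (1 channel) -> coarse RGB
stage2 = tiny_generator(in_ch=1 + 3)   # NIR + coarse RGB -> refined RGB

def colorize(nir):
    coarse = stage1(nir)
    refined = stage2(torch.cat([nir, coarse], dim=1))
    return coarse, refined

def multi_loss(disc, coarse, refined, target, lam=100.0):
    # L1 content terms on both stages plus an adversarial term on the refined output.
    l1 = nn.functional.l1_loss
    logits = disc(refined)
    adv = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam * (l1(coarse, target) + l1(refined, target))
```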