- Title
- Abstract
- Keywords
- Experimental plan
- Product
-
Polarimetric Interferometric SAR Change Detection Discrimination
Abstract: A coherent change detection (CCD) image, computed from a geometrically matched, temporally separated pair of complex-valued synthetic aperture radar (SAR) image sets, conveys the pixel-level equivalence between the two observations. Low coherence values in a CCD image are typically due either to some physical change in the corresponding pixels or to a low signal-to-noise observation. A CCD image does not directly convey the nature of the change that caused the low coherence. In this paper, we introduce a mathematical framework for discriminating between different types of change within a CCD image. We utilize the extra degrees of freedom and information in polarimetric interferometric SAR (PolInSAR) data, together with PolInSAR processing techniques, to define a 29-dimensional feature vector capable of discriminating between different types of change in a scene. We also propose two change-type discrimination functions that can be trained on feature-vector training data, and we demonstrate change-type discrimination on an example image set for three different types of change. Furthermore, we characterize the performance of the two proposed discrimination functions by way of receiver operating characteristic curves, confusion matrices, and pass matrices.
Keywords: polarimetric interferometric synthetic aperture radar (PolInSAR), H/A/α filter, probabilistic feature fusion (PFF) model, feature vector, coherent change detection (CCD), optimum coherence (OC), H/A/α decomposition
Updated 2025-09-23 15:23:52
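The quantity underlying any CCD image is the sample coherence between two co-registered complex images: magnitude near 1 means the scenes match, low magnitude flags change or low SNR. A minimal pure-NumPy sketch at patch level (in practice this is evaluated per pixel over a sliding window; it is not the paper's 29-dimensional PolInSAR feature vector):

```python
import numpy as np

def sample_coherence(f, g):
    """Sample coherence magnitude between two co-registered complex image patches.

    gamma = |sum(f * conj(g))| / sqrt(sum(|f|^2) * sum(|g|^2)), in [0, 1].
    """
    num = np.abs(np.sum(f * np.conj(g)))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / max(den, 1e-12)

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
g = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(sample_coherence(f, f))  # identical scenes -> 1.0
print(sample_coherence(f, g))  # independent clutter -> near 0
```

For independent clutter the estimate decays roughly as 1/sqrt(N) with the number of pixels N, which is why larger estimation windows give cleaner CCD maps at the cost of spatial resolution.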
-
Multi-source Remote Sensing Image Registration Based on Contourlet Transform and Multiple Feature Fusion
Abstract: Image registration is an indispensable component of multi-source remote sensing image processing. In this paper, we propose a remote sensing image registration method that combines an improved multi-scale, multi-direction Harris algorithm with a novel compound feature. Multi-scale circle Gaussian combined invariant moments and a multi-direction gray level co-occurrence matrix are extracted as features for image matching. The proposed algorithm is evaluated on numerous multi-source remote sensing images with noise and illumination changes. Extensive experiments show that the method yields stable, evenly distributed keypoints and robust, accurate correspondence matches, making it a promising scheme for multi-source remote sensing image registration.
Keywords: contourlet transform, multi-source remote sensing image registration, multi-direction gray level co-occurrence matrix, multi-scale circle Gaussian combined invariant moment, feature fusion
Updated 2025-09-23 15:23:52
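The gray level co-occurrence matrix (GLCM) used as a matching feature above counts how often pairs of gray levels co-occur at a fixed pixel displacement; evaluating several displacements gives the multi-direction variant. A minimal NumPy sketch (non-negative offsets only; the quantization level and the particular directions are assumptions, not the paper's exact settings):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray level co-occurrence matrix for one displacement (dx, dy >= 0).

    img: 2-D array of integer gray levels in [0, levels).
    P[i, j] = probability that a pixel with level i has a neighbor at
    offset (dx, dy) with level j.
    """
    a = img[: img.shape[0] - dy, : img.shape[1] - dx]  # reference pixels
    b = img[dy:, dx:]                                  # displaced neighbors
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def contrast(P):
    """Classic GLCM texture feature: sum_ij P[i, j] * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 2]])
# Multi-direction: collect the feature over several displacements
dirs = [(1, 0), (0, 1), (1, 1)]
feats = [contrast(glcm(img, levels=3, dx=dx, dy=dy)) for dx, dy in dirs]
print(feats)
```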
-
Object Tracking Algorithm Based on Dual Color Feature Fusion with Dimension Reduction
Abstract: To address the poor robustness and low effectiveness of target tracking with a single color feature in complex scenes, an object-tracking algorithm based on dual color feature fusion with dimension reduction is proposed within the Correlation Filter (CF)-based tracking framework. First, Color Name (CN) and Color Histogram (CH) features are extracted from the input image; the template and the candidate region are then correlated by CF-based methods to obtain the CH and CN responses of the target region. A self-adaptive feature fusion strategy linearly fuses the CH and CN responses into a dual color feature response that carries both global color distribution information and main color information. Finally, the target position is estimated from the fused response map, with its maximum corresponding to the estimated position. The method builds on the Staple framework and applies Principal Component Analysis (PCA) dimension reduction to the scale estimation, which lowers the algorithm's complexity and further improves tracking performance. Quantitative and qualitative evaluations on challenging benchmark sequences show that the proposed algorithm achieves better tracking accuracy and robustness than other state-of-the-art trackers in complex scenarios.
Keywords: self-adaptive feature fusion, principal component analysis, feature fusion, correlation filter, visual tracking
Updated 2025-09-23 15:22:29
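The fusion step above reduces to a weighted sum of two response maps followed by an argmax. A minimal NumPy sketch; the peak-confidence weight below is one plausible self-adaptive rule, not necessarily the paper's exact strategy:

```python
import numpy as np

def fuse_responses(r_ch, r_cn):
    """Linearly fuse CH and CN response maps and locate the target.

    The weight favors the map with the stronger peak (a hypothetical
    confidence measure standing in for the paper's self-adaptive rule).
    """
    m_ch, m_cn = r_ch.max(), r_cn.max()
    w = m_cn / (m_ch + m_cn + 1e-12)
    fused = (1.0 - w) * r_ch + w * r_cn
    row, col = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, (row, col)

# Two synthetic response maps with peaks at (10, 12) and (11, 12)
yy, xx = np.mgrid[0:40, 0:40]
r_ch = np.exp(-((yy - 10) ** 2 + (xx - 12) ** 2) / 8.0)
r_cn = 0.5 * np.exp(-((yy - 11) ** 2 + (xx - 12) ** 2) / 8.0)
fused, pos = fuse_responses(r_ch, r_cn)
print(pos)  # (10, 12): the stronger CH peak dominates the fused map
```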
-
Covariance Matrix Based Feature Fusion for Scene Classification (IEEE IGARSS 2018, International Geoscience and Remote Sensing Symposium, Valencia, 2018.7.22-2018.7.27)
Abstract: In this paper, a covariance matrix based feature fusion (CMFF) framework is proposed to combine two low-level visual features, the Gabor feature and the color feature, for scene classification. The proposed method consists of the following three steps. First, the Gabor and color features are extracted from the original image and stacked together. Then, a covariance matrix is computed to fuse the two low-level features; each off-diagonal entry of the covariance matrix represents the correlation between two feature dimensions. Finally, the covariance matrix is processed by a kernel linear discriminant analysis algorithm followed by a nearest-neighbor classifier for label assignment. The method is tested on the public 21-class UC Merced land-use dataset and compared with mid-level and high-level visual-feature-oriented methods. The experimental results demonstrate that the proposed CMFF framework not only improves the classification performance of the low-level visual features (the Gabor and color features), but also outperforms conventional mid-level visual-feature-oriented methods.
Keywords: feature representation, scene classification, feature fusion
Updated 2025-09-23 15:22:29
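The fusion operator in step two is the region covariance descriptor: stack the d per-pixel feature maps, then compute a single d x d covariance whose off-diagonal entries capture cross-feature correlation. A minimal NumPy sketch (random maps stand in for real Gabor/color responses; the kernel LDA and nearest-neighbor stages are omitted):

```python
import numpy as np

def region_covariance(feature_maps):
    """Fuse stacked per-pixel feature maps (d, H, W) into one d x d covariance matrix."""
    d = feature_maps.shape[0]
    X = feature_maps.reshape(d, -1)  # one row per feature dimension
    return np.cov(X)                 # off-diagonals = cross-feature correlations

rng = np.random.default_rng(0)
gabor = rng.standard_normal((8, 32, 32))  # stand-in for 8 Gabor responses
color = rng.standard_normal((3, 32, 32))  # stand-in for 3 color channels
C = region_covariance(np.concatenate([gabor, color], axis=0))
print(C.shape)  # (11, 11), symmetric
```

A practical appeal of this fusion is that the descriptor size depends only on the number of feature dimensions d, not on the image size.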
-
A Cortical Based Diagnosis System for MCI Based on sMRI Features Fusion (2018 IEEE International Conference on Imaging Systems and Techniques (IST), Krakow, 2018.10.16-2018.10.18)
Abstract: Alzheimer's disease (AD) is one of the most common neurodegenerative disorders targeting the central nervous system, with more than 5 million sufferers in the United States. According to the literature, detecting the disease at an early stage is one of the main obstacles facing scientists; the difficulty of this diagnostic task stems from several factors, including the variability of the disease's effects across patients. This paper studies mild cognitive impairment (MCI), an impairment found to increase the risk of progressing to AD. Based on this study, a cortical-region-based computer-aided diagnosis (CAD) system is presented that in turn serves the early diagnosis of AD. This goal is achieved by visualizing a personalized diagnosis of MCI in each cortical region separately. To this end, the proposed CAD system comprises four main stages: (1) preprocessing and cortex extraction, (2) cortex reconstruction and shape-based feature extraction, (3) feature fusion, and (4) local/regional diagnosis followed by a global diagnosis step. Evaluation of the proposed system shows promising results, with a maximum performance of 86.30%, 88.33%, and 84.88% for accuracy, specificity, and sensitivity, respectively.
Keywords: cortical regions, MCI, AD, feature fusion, sMRI
Updated 2025-09-23 15:22:29
-
Integrated Fault-Diagnoses and Fault-Tolerant MPPT Control Scheme for a Photovoltaic System (2019 15th International Conference on Emerging Technologies (ICET), Peshawar, Pakistan, 2019.12.2-2019.12.3)
Abstract: Visual tracking using multiple features has proven to be a robust approach because the features can complement each other. Since different types of variation, such as illumination changes, occlusion, and pose changes, may occur in a video sequence, especially a long one, properly selecting and fusing appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the advantages of sparse representation to dynamically remove unreliable features from the fusion used for tracking. To capture the non-linear similarity of features, we extend the method into a general kernelized framework able to perform feature fusion in various kernel spaces, yielding robust tracking performance. Qualitative and quantitative experiments on publicly available videos show that the proposed method outperforms both sparse-representation-based and fusion-based trackers.
Keywords: joint sparse representation, feature fusion, visual tracking
Updated 2025-09-23 15:19:57
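The "dynamically removes unreliable features" behavior of a joint sparse model typically comes from an l2,1-norm penalty, whose proximal operator zeroes whole coefficient rows across all feature types at once. A minimal NumPy sketch of that operator (one inner step of a generic solver, not the paper's full tracker):

```python
import numpy as np

def prox_l21(B, tau):
    """Proximal operator of tau * ||B||_{2,1}: row-wise group soft-thresholding.

    Rows of B (one row per template/feature group) whose l2 norm is below tau
    are zeroed jointly, enforcing the shared sparsity pattern that discards
    unreliable components across all feature types at once.
    """
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return B * scale

B = np.array([[3.0, 4.0],    # norm 5    -> kept, shrunk toward 0
              [0.1, 0.1]])   # norm ~0.14 -> zeroed as a group
print(prox_l21(B, tau=1.0))
```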
-
Chaotic Communications in the Coupled Fiber Optic System (2019 IEEE 8th International Conference on Advanced Optoelectronics and Lasers (CAOL), Sozopol, Bulgaria, 2019.9.6-2019.9.8)
Updated 2025-09-23 15:19:57
-
The Spectral-Spatial Joint Learning for Change Detection in Multispectral Imagery
Abstract: Change detection is one of the most important applications in the remote sensing domain, and deep-neural-network-based change detection methods are receiving increasing attention. However, many such methods do not take both spectral and spatial information into account, and the underlying information of the fused features is not fully explored. To address these problems, a Spectral-Spatial Joint Learning Network (SSJLN) is proposed. SSJLN contains three parts: spectral-spatial joint representation, feature fusion, and discrimination learning. First, the spectral-spatial joint representation is extracted by a network similar to a Siamese CNN (S-CNN). Second, the extracted features are fused to represent the difference information, which proves effective for the change detection task. Third, discrimination learning explores the underlying information of the fused features to improve discrimination. Moreover, we present a new loss function that considers the losses of both the spectral-spatial joint representation procedure and the discrimination learning procedure. The effectiveness of SSJLN is verified on four real datasets, and extensive experimental results show that it outperforms other state-of-the-art change detection methods.
Keywords: discrimination learning, feature fusion, change detection, spectral-spatial representation, multispectral imagery, Siamese CNN
Updated 2025-09-19 17:15:36
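A common way to realize the fusion and joint-loss ideas above is to concatenate the two Siamese branch features with their absolute difference, then train with a weighted sum of the representation and discrimination losses. A minimal NumPy sketch; the concatenation operator and the weight lam are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def fuse(feat_a, feat_b):
    """Hypothetical Siamese fusion: both branch features plus their absolute
    difference, which carries the change (difference) information."""
    return np.concatenate([feat_a, feat_b, np.abs(feat_a - feat_b)], axis=-1)

def joint_loss(loss_rep, loss_disc, lam=0.5):
    """Joint objective: representation loss plus weighted discrimination loss."""
    return loss_rep + lam * loss_disc

fa = np.ones(16)   # branch feature for the image at time t1
fb = np.zeros(16)  # branch feature for the image at time t2
fused = fuse(fa, fb)
print(fused.shape)           # (48,)
print(joint_loss(0.8, 0.4))  # 1.0
```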
-
Pedestrian Detection Using Regional Proposal Network with Feature Fusion (2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA), Xi'an, China, 2018.11.7-2018.11.10)
Abstract: Pedestrian detection, which has broad application prospects in video surveillance, robotics, self-driving vehicles, etc., is one of the most important research fields in computer vision. Recently, deep learning methods such as the Region Proposal Network (RPN) have achieved major performance improvements in pedestrian detection. To further exploit the deep pedestrian features of the RPN, this paper proposes a novel region proposal network model based on feature fusion (RPN FeaFus) for pedestrian detection. RPN FeaFus adopts an asymmetric dual-path deep model, built from VGGNet and ZFNet, to extract pedestrian features at different levels; these are combined through PCA dimension reduction and feature stacking to provide a more discriminative representation. The low-dimensional fused features are then used to detect region proposals and train the classifier. Experimental results on three widely used pedestrian detection databases, i.e., the Caltech, Daimler, and TUD databases, demonstrate that RPN FeaFus achieves clear performance improvements over its baseline RPN BF and is competitive with state-of-the-art methods.
Keywords: region proposal network, dual-path model, pedestrian detection, feature fusion
Updated 2025-09-19 17:15:36
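The dual-path combination above is essentially: PCA-reduce each path's features, then stack them. A minimal NumPy sketch with random features standing in for the VGGNet/ZFNet activations; all dimensions here are illustrative assumptions:

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-vector features X (n, d) onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuse_paths(f_a, f_b, k=64):
    """Hypothetical dual-path fusion: PCA-reduce each path, then stack."""
    return np.concatenate([pca_reduce(f_a, k), pca_reduce(f_b, k)], axis=1)

rng = np.random.default_rng(0)
f_vgg = rng.standard_normal((100, 512))  # stand-in for VGGNet features
f_zf = rng.standard_normal((100, 256))   # stand-in for ZFNet features
fused = fuse_paths(f_vgg, f_zf, k=64)
print(fused.shape)  # (100, 128)
```

Reducing each path separately before stacking keeps the fused dimension fixed even when the two backbones output features of different sizes.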
-
A High-Selectivity D-Band Mixed-Mode Filter Based on the Coupled Overmode Cavities
Updated 2025-09-19 17:13:59