- Title
- Abstract
- Keywords
- Experimental plan
- Product
-
2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece (2018.10.7-2018.10.10) - Integrating Multi-Level Convolutional Features for Correlation Filter Tracking
Abstract: Discriminative correlation filters (DCFs) have drawn increasing interest in visual tracking. In particular, a few recent works treat the DCF as a special layer and add it to a Siamese network for visual tracking. However, they adopt shallow networks to learn target representations, which lack the robust semantic information of deeper layers, so these works fail to handle significant appearance changes. In this paper, we design a novel network to fuse multi-level convolutional features, each level of which characterizes the target from a different perspective. We then integrate our network with the DCF layer to construct an end-to-end deep architecture for visual tracking. The overall architecture is trained offline end-to-end to adaptively learn target representations, which not only encode high-level semantic features and low-level spatial detail features, but are also closely related to the correlation filters. Experiments show that our proposed tracker achieves superior performance against state-of-the-art trackers.
Keywords: correlation filters, visual tracking, convolutional neural networks
Updated 2025-09-23 15:22:29
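The correlation-filter layer at the core of this line of work can be illustrated with a minimal single-channel DCF in the Fourier domain (a MOSSE/KCF-style ridge regression sketch, not the paper's actual multi-level network; `lam` is an assumed regularization weight):

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    # Desired correlation output: a Gaussian peak, rolled so the peak
    # sits at (0, 0) as DCF formulations expect.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-cy, -cx), axis=(0, 1))

def train_filter(x, y, lam=1e-2):
    # Closed-form ridge-regression solution in the Fourier domain.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (X * np.conj(X) + lam)

def detect(H, z):
    # Correlate the learned filter with a new search patch; the peak of
    # the response map gives the target translation.
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))
```

Detecting on a circularly shifted copy of the training patch moves the response peak by exactly that shift, which is the property the DCF layer exploits.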
-
Minimum Barrier Distance-Based Object Descriptor for Visual Tracking
Abstract: In most visual tracking tasks, the target is tracked by a bounding box given in the first frame. Complex and redundant background information inevitably exists inside the bounding box and affects tracking performance. To alleviate the influence of the background, we propose a robust object descriptor for visual tracking in this paper. First, we decompose the bounding box into non-overlapping patches and extract color and gradient histogram features for each patch. Second, we adopt the minimum barrier distance (MBD) to calculate patch weights. Specifically, we treat the boundary patches as background seeds and calculate the MBD from each patch to the seed set as that patch's weight, since the weight calculated by the MBD represents the difference between each patch and the background more effectively. Finally, we impose the weight on the extracted feature to obtain the descriptor of each patch and then incorporate our MBD-based descriptor into the structured support vector machine algorithm for tracking. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed approach.
Keywords: minimum barrier distance, patch-based, visual tracking, patch descriptor
Updated 2025-09-23 15:22:29
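The MBD patch-weighting step described above can be sketched with a Dijkstra-style pass over the patch grid. The barrier of a path is the difference between the maximum and minimum values along it; this greedy relaxation is a common approximation of the exact MBD, and the scalar per-patch value here stands in for the paper's color/gradient histograms:

```python
import heapq
import numpy as np

def mbd_weights(patch_vals):
    # patch_vals: 2-D array of one scalar per patch (e.g. mean intensity).
    # Boundary patches form the background seed set; the returned map
    # approximates each patch's minimum barrier distance to that set.
    h, w = patch_vals.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for i in range(h):
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):      # boundary seed
                dist[i, j] = 0.0
                v = patch_vals[i, j]
                heapq.heappush(heap, (0.0, v, v, i, j))
    while heap:
        d, hi, lo, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                                     # stale entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nhi = max(hi, patch_vals[ni, nj])
                nlo = min(lo, patch_vals[ni, nj])
                nd = nhi - nlo            # barrier of the extended path
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, nhi, nlo, ni, nj))
    return dist
```

Patches that differ strongly from every boundary seed receive large weights, matching the intuition that they are likely foreground.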
-
2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece (2018.10.7-2018.10.10) - Fusion of Template Matching and Foreground Detection for Robust Visual Tracking
Abstract: In this paper, we present an end-to-end framework for visual tracking that contains a fully convolutional template-matching network and a fully convolutional foreground-detection network. It fuses the response maps of foreground detection and template matching for robust tracking and inherits the merits of both. Moreover, our network does not need additional datasets for training; only the object information in the first frame is needed at the training stage. We conduct extensive experiments on OTB2013 and OTB2015, and our tracker achieves state-of-the-art performance in both efficiency and accuracy.
Keywords: end-to-end, matching network, foreground detection, visual tracking
Updated 2025-09-23 15:22:29
-
Object Tracking Algorithm Based on Dual Color Feature Fusion with Dimension Reduction
Abstract: To address the poor robustness and low effectiveness of tracking with a single color feature in complex scenes, an object-tracking algorithm based on dual color feature fusion with dimension reduction is proposed within the correlation filter (CF)-based tracking framework. First, Color Name (CN) and Color Histogram (CH) features are extracted from the input image, the template and the candidate region are correlated by CF-based methods, and the CH and CN responses of the target region are obtained. A self-adaptive feature fusion strategy is then proposed to linearly fuse the CH and CN responses into a dual color feature response that carries both global color distribution information and main color information. Finally, the target position is estimated from the fused response map, with the maximum of the fused map corresponding to the estimated position. The proposed method performs fusion in the framework of the Staple algorithm and applies Principal Component Analysis (PCA) for dimension reduction on the scale features; this reduces the complexity of the algorithm and further improves tracking performance. Quantitative and qualitative evaluations on challenging benchmark sequences show that the proposed algorithm achieves better tracking accuracy and robustness than other state-of-the-art tracking algorithms in complex scenarios.
Keywords: self-adaptive feature fusion, principal component analysis, feature fusion, correlation filter, visual tracking
Updated 2025-09-23 15:22:29
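A minimal sketch of the two building blocks above: linear response fusion with a self-adaptive weight, and PCA dimension reduction. The peak-to-mean confidence rule used to set the fusion weight is an illustrative assumption, not necessarily the paper's exact strategy:

```python
import numpy as np

def fuse_responses(r_cn, r_ch):
    # Self-adaptive linear fusion: weight each response map by a simple
    # confidence score (peak-to-mean ratio). This scoring rule is an
    # assumption chosen for illustration.
    def conf(r):
        return r.max() / (r.mean() + 1e-12)
    a, b = conf(r_cn), conf(r_ch)
    gamma = a / (a + b)
    return gamma * r_cn + (1 - gamma) * r_ch

def pca_reduce(feats, k):
    # feats: (n_samples, n_dims). Centre the data and project it onto
    # the top-k principal components via SVD.
    mu = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mu, full_matrices=False)
    return (feats - mu) @ vt[:k].T
```

The target is then located at the argmax of the fused map, as in the abstract's final step.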
-
2019 15th International Conference on Emerging Technologies (ICET), Peshawar, Pakistan (2019.12.2-2019.12.3) - Integrated Fault-Diagnoses and Fault-Tolerant MPPT Control Scheme for a Photovoltaic System
Abstract: Visual tracking using multiple features has proved to be a robust approach because the features can complement each other. Since different types of variation such as illumination, occlusion, and pose changes may occur in a video sequence, especially in long sequences, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features from the fusion by exploiting the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework that can perform feature fusion in various kernel spaces. As a result, robust tracking performance is obtained. Both qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion-based trackers.
Keywords: joint sparse representation, feature fusion, visual tracking
Updated 2025-09-23 15:19:57
-
[IEEE 2019 IEEE 8th International Conference on Advanced Optoelectronics and Lasers (CAOL) - Sozopol, Bulgaria (2019.9.6-2019.9.8)] 2019 IEEE 8th International Conference on Advanced Optoelectronics and Lasers (CAOL) - Chaotic Communications in the Coupled Fiber Optic System
Updated 2025-09-23 15:19:57
-
2018 24th International Conference on Pattern Recognition (ICPR), Beijing (2018.8.20-2018.8.24) - Visual Tracking with Breeding Fireflies using Brightness from Background-Foreground Information
Abstract: Visual target tracking involves object localization in image sequences. This is achieved by optimizing image-feature-similarity objective functions over the object state space. Meta-heuristic algorithms have shown promising results on hard optimization problems where gradients are not available, which motivated us to use firefly algorithms in visual object tracking. The object state is represented by its bounding-box parameters and the target is modeled by its color distribution. This work makes two significant contributions. First, we propose a hybrid firefly algorithm in which genetic operations are performed using a Real-coded Genetic Algorithm (RGA); here, the crossover operation is modified by incorporating parent velocity information. Second, the firefly brightness is computed from both foreground and background information (as opposed to foreground only), which helps in handling scale implosion and explosion problems. The proposed approach is benchmarked on challenging sequences from the VOT2014 dataset and compared against other baseline trackers and meta-heuristic algorithms.
Keywords: genetic algorithm, foreground-background information, optimization, visual tracking, firefly algorithm, object localization
Updated 2025-09-19 17:15:36
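One iteration of the canonical firefly algorithm underlying this tracker can be sketched as follows; the RGA crossover and the foreground/background brightness computation are omitted, so `brightness` is simply supplied by the caller (in the tracker it would score candidate bounding boxes):

```python
import numpy as np

def firefly_step(pop, brightness, beta0=1.0, gamma=0.5, alpha=0.05, rng=None):
    # One iteration of the basic firefly algorithm: every firefly moves
    # toward each brighter firefly, with attractiveness decaying with
    # squared distance, plus a small random perturbation (alpha).
    if rng is None:
        rng = np.random.default_rng()
    new = pop.copy()
    n = len(pop)
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((pop[j] - new[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                new[i] = new[i] + beta * (pop[j] - new[i]) \
                         + alpha * rng.normal(size=pop.shape[1])
    return new
```

With `alpha=0` the update is deterministic: dimmer fireflies strictly move toward brighter ones, while the brightest firefly stays put.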
-
Robust Visual Tracking Based on Adaptive Extraction and Enhancement of Correlation Filter
Abstract: In recent years, correlation filter (CF)-based tracking methods have demonstrated competitive performance. However, conventional CF-based methods suffer from unwanted boundary effects because of the periodic assumption on the training and detection samples. The spatially regularized discriminative CF (SRDCF) greatly alleviates boundary effects by introducing spatial regularization weights that penalize the CF coefficients during learning. However, the SRDCF uses a naive exponentially decaying model to passively update the CF from previous results at a fixed rate. Therefore, if the target is occluded or out of view, the SRDCF may over-fit to recent polluted samples, which can lead to tracking drift and failure. In this paper, we present a novel CF-based tracking method that resolves this issue by dynamically and adaptively correcting the weights of the learned CFs and fusing them together to promote more robust tracking. Thus, if the recent samples are inaccurate because of occlusion or the target being out of view, our method down-weights the corresponding CFs, and vice versa. Moreover, to decrease computational complexity and ensure memory efficiency, we extract the key CFs from previous frames and remove redundant CFs under a contiguous-frame-index constraint; we therefore do not need to store all CFs, which decreases the computational burden. Benefiting from the extraction and enhancement of CFs, our method improves tracking precision on the OTB-2015, VOT-2016 and UAV123 benchmarks and achieves a 56.0% relative gain in speed over the SRDCF. Extensive experimental results demonstrate that our method is competitive with state-of-the-art algorithms.
Keywords: sample learning, adaptive extraction and enhancement, visual tracking, correlation filter
Updated 2025-09-19 17:15:36
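A sketch of adaptive filter weighting in this spirit: score each stored CF's recent response map with the peak-to-sidelobe ratio (PSR) and down-weight unreliable ones before fusing. The PSR-based rule is an illustrative assumption, not the paper's exact update:

```python
import numpy as np

def psr(response, exclude=5):
    # Peak-to-sidelobe ratio: a common confidence score for CF response
    # maps. A window around the peak is excluded; the remaining pixels
    # form the sidelobe.
    pi, pj = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, pi - exclude):pi + exclude + 1,
         max(0, pj - exclude):pj + exclude + 1] = False
    side = response[mask]
    return (response.max() - side.mean()) / (side.std() + 1e-12)

def fuse_filters(filters, responses):
    # Weight each stored CF by the PSR of its recent response, so that
    # filters polluted by occlusion (low PSR) contribute little.
    ws = np.array([psr(r) for r in responses])
    ws = ws / (ws.sum() + 1e-12)
    return sum(w * f for w, f in zip(ws, filters))
```

A sharp, isolated peak yields a high PSR and thus a large fusion weight; a flat or noisy response yields a low one.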
-
A High-Selectivity D-Band Mixed-Mode Filter Based on the Coupled Overmode Cavities
Updated 2025-09-19 17:13:59
-
A Vision-Based Approach to UAV Detection and Tracking in Cooperative Applications
Abstract: This paper presents a vision-based approach that allows an Unmanned Aerial Vehicle (UAV) to autonomously detect and track a cooperative flying vehicle using a monocular camera. The algorithms are based on template matching and morphological filtering, and are thus able to operate within a wide range of relative distances (i.e., from a few meters up to several tens of meters), while ensuring robustness against variations in illumination conditions, target scale and background. Furthermore, the image processing chain takes full advantage of navigation hints (i.e., relative positioning and own-ship attitude estimates) to improve computational efficiency and optimize the trade-off between correct detections, false alarms and missed detections. Clearly, the required exchange of information is enabled by the cooperative nature of the formation through a reliable inter-vehicle data link. Performance assessment is carried out using flight data collected during an ad hoc experimental campaign. The proposed approach is a key building block of cooperative architectures designed to improve UAV navigation performance either under nominal GNSS coverage or in GNSS-challenging environments.
Keywords: autonomous navigation, morphological filtering, visual detection, unmanned aerial vehicles, visual tracking, template matching, cooperative UAV applications
Updated 2025-09-09 09:28:46
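The template-matching component can be illustrated with a brute-force zero-normalised cross-correlation search (a generic building block, not the paper's full pipeline with morphological filtering and navigation hints):

```python
import numpy as np

def ncc_match(image, template):
    # Zero-normalised cross-correlation: slide the template over the
    # image and return the top-left position of the best match and its
    # score in [-1, 1]. Brute force for clarity; a real tracker would
    # use FFTs or restrict the search window with navigation hints.
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            p = image[i:i + th, j:j + tw]
            p = p - p.mean()
            score = (p * t).sum() / ((np.linalg.norm(p) + 1e-12) * tn)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

Because both the patch and the template are zero-meaned and normalised, the score is invariant to affine changes in brightness, which is part of what makes template matching robust to illumination variations.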