-
Sensor Fusion and Registration of Lidar and Stereo Camera without Calibration Objects
Abstract: Environment perception is an important task in intelligent-vehicle applications. Typically, multiple sensors with different characteristics are employed to perceive the environment, and to perceive it robustly, the information from the different sensors is integrated or fused. In this article, we propose to perform sensor fusion and registration of a LIDAR and a stereo camera using the particle swarm optimization algorithm, without the aid of any external calibration objects. The proposed algorithm automatically calibrates the sensors and registers the LIDAR range image with the stereo depth image. The registered LIDAR range image functions as the disparity map for stereo disparity estimation, resulting in an effective sensor fusion mechanism. Additionally, we denoise the input image with a modified non-local means filter during stereo disparity estimation to improve robustness, especially at night. To evaluate the proposed algorithm, the calibration and registration algorithm is compared with baseline algorithms on multiple datasets acquired under varying illumination. Compared to the baseline algorithms, our proposed algorithm demonstrates better accuracy. We also demonstrate that integrating the LIDAR range image into the stereo disparity estimation yields an improved disparity map with a significant reduction in computational complexity.
Keywords: stereo camera, LIDAR, sensor fusion
Updated 2025-09-23 15:23:52
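The calibration described above searches the extrinsic parameter space for the alignment that best matches the projected LIDAR range image to the stereo depth image. A minimal global-best particle swarm sketch in Python; the one-dimensional toy cost, bounds, and swarm constants here are illustrative assumptions, not the paper's actual registration cost:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # per-particle best positions
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia / cognitive / social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())

# Toy registration cost: the "LIDAR" depth row is the "stereo" row plus an
# unknown offset of 0.3; the swarm recovers that offset.
stereo = np.linspace(1.0, 10.0, 50)   # stereo depth profile along one row
lidar = stereo + 0.3                  # LIDAR profile offset by the unknown bias
def cost(p):
    return float(np.sum((lidar - (stereo + p[0])) ** 2))

best, best_cost = pso_minimize(cost, dim=1)
```

Here the optimizer recovers a known 1-D offset; the paper's cost instead compares the full LIDAR range image with the stereo depth image over a full extrinsic transform.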
-
[American Society of Agricultural and Biological Engineers 2017, Spokane, Washington, July 16-19, 2017] Design and modeling of grain impact sensor utilizing two crossed polyvinylidene fluoride films
Abstract: To reduce the unavoidable grain losses during harvesting, the combine harvester's operational parameters should be adjusted accordingly, so it is important to develop a real-time sensor that can monitor grain losses. A grain impact sensor utilizing crossed piezoelectric polyvinylidene fluoride (PVDF) films as the sensing material is described. The sensor is composed of two crossed layers of sensor-unit arrays, a damping layer, and a support plate. The two layers are insulated from each other but can detect an impact simultaneously. The sensor-unit arrays of the two layers are perpendicular, and the sensor units in each layer are independent and parallel. Each sensor unit has its own signal-processing circuit, composed of a charge amplifier, band-pass filter, envelope detector, and voltage comparator. The two signals from the two layers yield two-dimensional impact-position information through multi-sensor fusion, allowing the sensor to obtain the spatial distribution of grain loss accurately and reduce the error-recognition ratio. Moreover, the grain impact sensor was simulated by the finite element method to obtain the best number and size of sensor units for higher sensitivity, detection speed, stress-transfer efficiency, and deformation-transfer efficiency.
Keywords: double layers, grain impact sensor, multi-sensor fusion, grain loss detecting, PVDF film
Updated 2025-09-23 15:22:29
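Because the two layers are perpendicular, an impact only needs one active unit per layer to be located in two dimensions. A small sketch of that fusion step (array sizes and indices are illustrative):

```python
import numpy as np

def localize_impacts(row_hits, col_hits):
    """Fuse detections from two perpendicular sensor-unit layers.

    row_hits / col_hits are boolean arrays indicating which unit in each
    layer crossed its comparator threshold; each active (row, col) pair
    maps to a grid cell where an impact may have occurred.
    """
    rows = np.flatnonzero(row_hits)
    cols = np.flatnonzero(col_hits)
    return [(int(r), int(c)) for r in rows for c in cols]

# One impact: unit 2 in one layer and unit 5 in the perpendicular layer
# fire simultaneously, localizing the impact to grid cell (2, 5).
hits = localize_impacts(np.arange(8) == 2, np.arange(8) == 5)
```

With m + n sensor units this scheme resolves m × n grid cells, which keeps the channel count low; simultaneous impacts produce multiple candidate cells, which is where the error-recognition ratio mentioned above comes in.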
-
Deep Belief Network for Spectral–Spatial Classification of Hyperspectral Remote Sensor Data
Abstract: With the development of high-resolution optical sensors, the classification of ground objects using multivariate optical sensors is currently a hot topic. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel deep belief network (DBN) hyperspectral image classification method, based on multivariate optical sensors and stacked restricted Boltzmann machines, is proposed. We introduced the DBN framework to classify spatial hyperspectral sensor data, and then verified the improved method that combines spectral and spatial information. After unsupervised pretraining and supervised fine-tuning, the DBN model could successfully learn features. Additionally, we added a logistic regression layer to classify the hyperspectral images. The proposed training method, which fuses spectral and spatial information, was tested on the Indian Pines and Pavia University datasets. The advantages of this method over traditional methods are as follows: (1) the network has a deep structure, and its feature-extraction ability is stronger than that of traditional classifiers; (2) experimental results indicate that our method outperforms traditional classification and other deep learning approaches.
Keywords: classification, feature extraction, multi-sensor fusion, remote sensors, deep learning, hyperspectral image
Updated 2025-09-23 15:22:29
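A DBN is stacked from restricted Boltzmann machines that are pretrained greedily, one layer at a time, before supervised fine-tuning. The following is a minimal NumPy sketch of one RBM trained with single-step contrastive divergence (CD-1); the layer sizes, learning rate, epoch count, and toy data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=200, lr=0.2, seed=0):
    """Train one RBM with single-step contrastive divergence (CD-1)."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    a = np.zeros(data.shape[1])   # visible biases
    b = np.zeros(n_hidden)        # hidden biases
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W + b)                        # hidden probabilities
        hs = (rng.random(h0.shape) < h0).astype(float)  # sampled hidden states
        v1 = sigmoid(hs @ W.T + a)                      # reconstruction
        h1 = sigmoid(v1 @ W + b)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)   # CD-1 update
        a += lr * (v0 - v1).mean(axis=0)
        b += lr * (h0 - h1).mean(axis=0)
    return W, a, b

# Pretrain on a single repeated binary pattern; stacking RBMs (feeding each
# layer's hidden probabilities to the next) plus a logistic-regression output
# layer yields the kind of DBN classifier described above.
data = np.tile([[1.0, 1.0, 0.0, 0.0]], (20, 1))
W, a, b = train_rbm(data, n_hidden=3)
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
recon_error = float(np.mean((data - recon) ** 2))
```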
-
An interprojection sensor fusion approach to estimate blocked projection signal in synchronized moving grid-based CBCT system
Abstract: Purpose: A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach that acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and uses an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose because the grid blocks half of the incident radiation. Methods: The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least-squares regression of these observations to determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal-to-noise ratio (SNR), which represents the difference between the recovered CBCT image and the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench-top CBCT system. Results: In the simulated SMOG experiment, the SNRs increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB, compared with a conventional interpolation (inpainting) method, for a projection and the reconstructed 3D image, respectively, suggesting that the IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors successfully reconstructed a CBCT image using the IPSF-SMOG approach. The detailed geometric features in the Catphan phantom were mostly recovered according to visual evaluation, and scatter-related artifacts, such as cupping artifacts, were almost completely removed. Conclusions: The IPSF-SMOG approach is promising for reducing scatter artifacts and improving image quality while reducing radiation dose.
Keywords: moving grids, scatter correction, interpolation, sensor fusion, geometric model, SMOG, dose reduction, CBCT
Updated 2025-09-23 15:22:29
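For a single blocked pixel, the weighted least-squares step at the heart of the IPSF reduces to a weighted mean: minimizing the weighted sum of squared residuals over one scalar. A sketch with illustrative weights (the paper's actual weighting of the paired observations is not reproduced here):

```python
import numpy as np

def wls_estimate(observations, weights):
    """Weighted least-squares estimate of one blocked detector value.

    Minimizing sum_i w_i * (s - y_i)^2 over s yields the weighted mean,
    so each blocked pixel becomes a confidence-weighted blend of the
    approximations taken from the adjacent gantry angles.
    """
    y = np.asarray(observations, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * y).sum() / w.sum())

# Three candidate approximations of one blocked pixel, the middle one
# trusted most (e.g. smallest angular distance to the blocked projection):
s = wls_estimate([102.0, 100.0, 97.0], [1.0, 4.0, 1.0])
```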
-
[IEEE 2019 International Symposium on Ocean Technology (SYMPOL), Ernakulam, India, 2019.12.11-2019.12.13] Performance Analysis of Underwater Wireless Optical Communication in terms of Received Optical Power
Abstract: This paper presents a wearable inertial measurement system and its associated spatiotemporal gait-analysis algorithm to obtain quantitative measurements and explore clinical indicators from the spatiotemporal gait patterns of patients with stroke or Parkinson's disease. The wearable system is composed of a microcontroller, a triaxial accelerometer, a triaxial gyroscope, and an RF wireless transmission module. The spatiotemporal gait-analysis algorithm, consisting of inertial signal acquisition, signal preprocessing, gait-phase detection, and ankle range-of-motion estimation, was developed to extract gait features from accelerations and angular velocities. To estimate the ankle range of motion accurately, we integrated the accelerations and angular velocities in a complementary filter to reduce the accumulated integration error of the inertial signals. All 24 participants mounted the system on the foot and walked along a straight 10 m line at normal speed, and their walking recordings were collected to validate the effectiveness of the proposed system and algorithm. Experimental results show that the proposed inertial measurement system with the designed spatiotemporal gait-analysis algorithm is a promising tool for automatically analyzing spatiotemporal gait information, providing clinical indicators for monitoring therapeutic efficacy in stroke or Parkinson's disease.
Keywords: stroke, Inertial sensing, complementary filter, Parkinson's disease, gait analysis, sensor fusion
Updated 2025-09-23 15:21:01
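The complementary filter mentioned in the abstract fuses the two inertial signals by trusting the integrated gyroscope rate in the short term and the accelerometer-derived angle in the long term. A minimal sketch on simulated signals; the mixing constant, sampling rate, bias, and noise levels are illustrative assumptions:

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt, alpha=0.98):
    """Fuse accelerometer tilt angle and gyroscope angular rate.

    The gyroscope is integrated for short-term accuracy while the
    accelerometer-derived angle corrects long-term drift.
    """
    angle = acc_angle[0]
    out = [angle]
    for a, g in zip(acc_angle[1:], gyro_rate[1:]):
        angle = alpha * (angle + g * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

# Constant true angle of 10 deg over 20 s: the gyro has a +0.5 deg/s bias,
# the accelerometer is noisy but unbiased around the true angle.
rng = np.random.default_rng(1)
n, dt = 2000, 0.01
acc = 10.0 + rng.normal(0.0, 2.0, n)    # noisy tilt from accelerometer (deg)
gyro = 0.5 + rng.normal(0.0, 0.1, n)    # biased rate gyro (deg/s)
est = complementary_filter(acc, gyro, dt)
```

Pure integration of the same gyro data would drift by about +10 degrees over the 20 s window, while the fused estimate stays near the true 10-degree angle.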
-
Target Localization and Tracking by Fusing Doppler Differentials from Cellular Emanations with a Multi-Spectral Video Tracker
Abstract: We present an algorithm for fusing data from a constellation of RF sensors detecting cellular emanations with the output of a multi-spectral video tracker to localize and track a target carrying a specific cell phone. The RF sensors measure the Doppler shift caused by the moving cellular emanation, and Doppler differentials between all sensor pairs are then calculated. The multi-spectral video tracker uses a Gaussian mixture model to detect foreground targets and SIFT features to track targets through the video sequence. The data are fused by associating the Doppler differentials from the RF sensors with the theoretical Doppler differentials computed from the multi-spectral tracker output; the absolute difference and the root-mean-square difference are used to score the association between the two sensor systems. Performance of the algorithm was evaluated using synthetically generated datasets of an urban scene with multiple moving vehicles. The presented fusion algorithm correctly associates the cellular emanation with the corresponding video target for low measurement uncertainty and in the presence of favorable motion patterns; for nearly all objects, it associates the emanation with the correct multi-spectral target, rather than the most probable background target, with high confidence.
Keywords: localization, target tracking, identification, sensor fusion
Updated 2025-09-23 15:21:01
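The association step can be sketched as computing pairwise Doppler differentials for each candidate track and scoring them with the root-mean-square difference; the sensor count, Doppler values, and noise below are illustrative, not the paper's data:

```python
import numpy as np

def doppler_differentials(dopplers):
    """All pairwise Doppler differences between sensors."""
    d = np.asarray(dopplers, dtype=float)
    i, j = np.triu_indices(len(d), k=1)
    return d[i] - d[j]

def associate(measured, candidates):
    """Pick the video track whose predicted differentials best match the
    RF measurement, using the root-mean-square difference as the score."""
    scores = [float(np.sqrt(np.mean((measured - doppler_differentials(c)) ** 2)))
              for c in candidates]
    return int(np.argmin(scores)), scores

# Measured differentials from a 3-sensor RF constellation vs. theoretical
# differentials predicted for two tracked vehicles:
measured = doppler_differentials([120.0, 95.0, 60.0]) + np.array([0.5, -0.3, 0.2])
idx, scores = associate(measured,
                        [[121.0, 94.0, 61.0],    # vehicle A (close match)
                         [40.0, 80.0, 130.0]])   # vehicle B
```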
-
LIDAR–camera fusion for road detection using fully convolutional neural networks
Abstract: In this work, a deep learning approach has been developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either using data from a single sensor or using one of three fusion strategies: early, late, and the newly proposed cross fusion. Whereas the former two fusion approaches integrate multimodal information at a predefined depth level, the cross-fusion FCN is designed to learn directly from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and camera processing branches. To further highlight the benefits of a multimodal system for road detection, a data set of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set, whereas a multimodal system is still able to provide high accuracy. Finally, the proposed cross-fusion FCN was evaluated on the KITTI road benchmark, where it achieved excellent performance, with a MaxF score of 96.03%, ranking it among the top-performing approaches.
Keywords: Deep learning, Road detection, Intelligent vehicles, Sensor fusion
Updated 2025-09-23 15:21:01
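The first stage described above, projecting the sparse point cloud onto the camera image plane, is a pinhole projection through the LIDAR-to-camera extrinsics. A sketch; the intrinsic matrix and identity extrinsics are illustrative placeholders, not KITTI's calibration:

```python
import numpy as np

def project_to_image(points, K, T):
    """Project 3-D LIDAR points into the camera image plane.

    points: (N, 3) points in the LIDAR frame
    K:      (3, 3) camera intrinsic matrix
    T:      (4, 4) LIDAR-to-camera extrinsic transform
    Returns (u, v) pixel coordinates and depths for points in front
    of the camera.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    cam = (T @ pts_h.T).T[:, :3]                             # into camera frame
    cam = cam[cam[:, 2] > 0.0]                               # keep points in front
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                            # perspective divide
    return uv, cam[:, 2]

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)   # identity extrinsics for the sketch
uv, depth = project_to_image(np.array([[0.0, 0.0, 10.0],
                                       [1.0, 0.0, 10.0]]), K, T)
```

The resulting sparse depth pixels are what the paper then upsamples into dense 2D images for the FCN.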
-
[IEEE 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, 2018.7.8-2018.7.13] DHA: Lidar and Vision data Fusion-based On Road Object Classifier
Abstract: In this paper, we first extract three kinds of high-level features from the LIDAR point cloud and combine them into DHA (Depth, Height, and Angle) channels. Integrated with the traditional RGB image from the camera, we build a rich feature-based road-object classifier by training a deep convolutional neural network on the six-channel (RGB-DHA) data. The deep convolutional neural network is thus fed with the integration of spatial and RGB information. With additionally upsampled LIDAR data, the classifier reaches higher accuracy than methods based on the RGB image alone. Experiments on the well-known KITTI autonomous-vehicle benchmark show that our fusion-based classifier outperforms RGB-based approaches by about 15% and reaches an average accuracy of 96%.
Keywords: Deep learning, Autonomous vehicle, LIDAR, Sensor fusion
Updated 2025-09-19 17:15:36
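A per-point version of the DHA encoding might look as follows; the exact channel definitions in the paper may differ, and the rasterization of these values into camera-aligned image channels is omitted:

```python
import numpy as np

def dha_channels(points):
    """Encode LIDAR points as Depth, Height, Angle features per point.

    Depth:  Euclidean range from the sensor
    Height: the z coordinate
    Angle:  elevation angle of the point relative to the sensor plane
    (An assumed per-point encoding for illustration only.)
    """
    pts = np.asarray(points, dtype=float)
    depth = np.linalg.norm(pts, axis=1)
    height = pts[:, 2]
    horiz = np.linalg.norm(pts[:, :2], axis=1)
    angle = np.arctan2(height, horiz)
    return np.stack([depth, height, angle], axis=1)

dha = dha_channels([[3.0, 4.0, 0.0],    # on the sensor's horizontal plane
                    [0.0, 1.0, 1.0]])   # 45 degrees above it
```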
-
[IEEE 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, 2018.6.21-2018.6.22] Sensor Technology in Autonomous Vehicles: A review
Abstract: This paper reviews the main sensor technologies used to create an autonomous vehicle. Sensors are key components of all types of autonomous vehicles because they enable the vehicle to perceive its surrounding environment and thereby aid the decision-making process. This paper explains how each of these sensors works, their advantages and disadvantages, and how sensor fusion techniques can be utilised to create a more effective and efficient system for autonomous vehicles.
Keywords: Sensor Fusion, Localization, Perception
Updated 2025-09-19 17:15:36
-
[IEEE 2019 46th Photovoltaic Specialists Conference (PVSC), Chicago, IL, USA, 2019.6.16-2019.6.21] A Novel Approach to Amine-Thiol Molecular Precursors for Fabrication of High Efficiency Thin Film CISSe/CIGSSe Devices
Abstract: 3-D coordinate transformation, which is based on aligning two sets of common reference points, is frequently applied in large-scale combined measurement to unify coordinate frames and tie individual measurement systems together. However, it introduces uncertainty into the final measurement results, and this uncertainty must be quantified to make the results complete. This paper presents a novel approach to the uncertainty analysis of 3-D coordinate transformation based on the weighted total least-squares adjustment. The approach takes full account of the uncertainty characteristics of the measuring instruments and is simple to compute. The transformation uncertainty of a point in the world frame is analyzed carefully. Simulations show that the transformation uncertainty has a distribution of concentric ellipsoids and is affected by the measurement uncertainties and the layout of the common points. Strategies for minimizing the transformation uncertainty are also recommended. Experimental results from a laser tracker prove that the proposed approach is valid under normal instrument operating conditions and that these strategies are feasible and efficient.
Keywords: Coordinate transformation, error analysis, uncertainty, large-scale metrology, position measurement, sensor fusion
Updated 2025-09-19 17:13:59
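The underlying alignment of the two common-point sets is a rigid-transform fit. Below is a sketch of the standard unweighted SVD (Kabsch) solution; the paper's weighted total least-squares adjustment additionally weights each point by instrument uncertainty, which this sketch omits:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src points to dst,
    via the SVD of the cross-covariance (the Kabsch method)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)          # centroids
    H = (src - cs).T @ (dst - cd)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 90-degree rotation about z plus a translation:
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = fit_rigid_transform(src, dst)
```

With noise-free common points the transform is recovered exactly; the paper's uncertainty analysis characterizes how measurement noise in `src` and `dst` propagates into R and t.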