- Title
- Abstract
- Keywords
- Experimental plan
- Product
-
2D Mapping and Localization using Laser Range Finder for Omnidirectional Mobile Robot. 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 2019.9.27-2019.9.28
Abstract: Simultaneous Localization and Mapping (SLAM) is a common technique for creating a map of the environment and localizing the robot within that map. SLAM usually combines information from odometry and a lidar sensor. However, odometry-based SLAM typically suffers from accumulated errors due to wheel slipping. In this paper, we combine an omnidirectional-wheel floor odometry sensor and a 360° lidar sensor within an Iterative Closest Point (ICP) SLAM framework. The proposed technique consistently reduces the errors caused by wheel slipping and refines the map.
Keywords: omnidirectional mobile robot, lidar, SLAM, ICP, odometry
Updated 2025-09-16 10:30:52
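A minimal sketch of the point-to-point ICP alignment step underlying the entry above, assuming NumPy/SciPy and 2D scans given as (N, 2) arrays; function names are illustrative and this is not the authors' implementation:

```python
# Minimal point-to-point ICP sketch (illustrative; not the authors' code).
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=20):
    """Align a 2D 'source' scan to a 'target' scan; returns a 3x3 homogeneous transform."""
    T = np.eye(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        # 1. Nearest-neighbour correspondences in the target scan.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Rigid transform from the SVD of the cross-covariance (Kabsch).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate it into T.
        src = src @ R.T + t
        dT = np.eye(3); dT[:2, :2] = R; dT[:2, 2] = t
        T = dT @ T
    return T
```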
-
3D Mapping Based on Fusion of 2D Laser and IMU Data Acquired by Unmanned Aerial Vehicle. 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France, 2019.4.23-2019.4.26
Abstract: The paper deals with three-dimensional (3D) modelling based on data acquired from a 2D laser sensor and an IMU (Inertial Measurement Unit) attached to a UAV (Unmanned Aerial Vehicle). The multi-sensor unit produces 2D scans and provides Euler angles or quaternions, which describe the orientation of the UAV, and hence of the laser sensor, in 3D space. To generate a 3D occupancy map, the laser sensor is rotated around its axis and the 2D scans are mapped into 3D space using the quaternions. Inertial sensor and Sweep LiDAR measurements are transmitted to an Odroid XU4 single-board computer (SBC). The data fusion is performed under ROS (Robot Operating System) installed on the SBC, producing 3D points. These points are transmitted over the network to a central computer, on which the UAV localization and mapping processes run within ROS. The effectiveness of the proposed system for 3D modelling of the UAV environment is verified by experiment.
Keywords: 2D laser, Octree, ICP, UAV, ROS, 3D mapping, IMU, SLAM
Updated 2025-09-16 10:30:52
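To make the projection step in the entry above concrete, here is a small sketch (assuming NumPy, a (w, x, y, z) quaternion convention and idealized frames; not the paper's code) of mapping a single 2D laser return into 3D using the IMU orientation:

```python
# Map a single 2D laser return (range, bearing) into 3D using the IMU
# orientation quaternion. Illustrative sketch, not the paper's implementation.
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def scan_point_to_world(r, bearing, q_imu, p_uav):
    """r, bearing: laser return in the scan plane; q_imu: orientation; p_uav: UAV position."""
    p_laser = np.array([r * np.cos(bearing), r * np.sin(bearing), 0.0])  # point in the 2D scan plane
    return quat_to_rot(q_imu) @ p_laser + p_uav                           # rotate into 3D and translate

# Example: a 5 m return at 30 deg bearing, UAV pitched 10 deg about the y-axis.
q = np.array([np.cos(np.deg2rad(5)), 0.0, np.sin(np.deg2rad(5)), 0.0])
print(scan_point_to_world(5.0, np.deg2rad(30), q, np.zeros(3)))
```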
-
Experimental Research on Feature Extraction of Laser SLAM Based on Artificial Landmarks. 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 2019.6.3-2019.6.5
Abstract: Simultaneous Localization and Mapping (SLAM) is a core issue in the field of mobile robots, and laser SLAM is one of the most widely used solutions for applying SLAM in engineering. Laser SLAM estimates the change of the robot pose from features of the environment. However, it is difficult to extract natural features in a highly self-similar environment, so artificial landmarks need to be placed in the environment when necessary. In this paper, the reflective characteristics of artificial landmarks are analyzed experimentally, and two feature parameters for extracting them are proposed based on these characteristics: the intensity of the reflected light received by the laser ranger, and the number of laser points that the landmark continuously reflects back in each scan. Through statistical analysis of the laser data collected in the experiments, specific values of these two parameters are determined, and the detection range within which the laser ranger can effectively extract the landmark under the selected parameters is measured experimentally. These results provide experimental support for the deployment of landmarks in the environment.
Keywords: Artificial Landmarks, Feature Extraction, Laser SLAM
Updated 2025-09-12 10:27:22
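A hedged sketch of how the two proposed parameters could be applied to one scan: threshold the return intensities, then keep only runs of consecutive bright points of sufficient length. The threshold values below are placeholders, not the values determined in the paper:

```python
# Sketch of extracting reflective landmarks from one laser scan using the two
# parameters described above: an intensity threshold and a minimum number of
# consecutive high-intensity returns. Thresholds here are placeholders.
import numpy as np

INTENSITY_THRESH = 1500   # placeholder: reflected-intensity threshold
MIN_RUN_LENGTH = 4        # placeholder: minimum consecutive points on the landmark

def extract_landmarks(intensities):
    """Return (start, end) index pairs of candidate landmark segments in a scan."""
    high = np.asarray(intensities) > INTENSITY_THRESH
    segments, start = [], None
    for i, flag in enumerate(high):
        if flag and start is None:
            start = i                         # run of bright returns begins
        elif not flag and start is not None:
            if i - start >= MIN_RUN_LENGTH:   # keep only sufficiently long runs
                segments.append((start, i - 1))
            start = None
    if start is not None and len(high) - start >= MIN_RUN_LENGTH:
        segments.append((start, len(high) - 1))
    return segments
```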
-
Research on 2D-SLAM of Indoor Mobile Robot based on Laser Radar. Proceedings of the 2019 4th International Conference on Automation, Control and Robotics Engineering (CACRE 2019), Shenzhen, China, 2019.07.19-2019.07.21
Abstract: Simultaneous localization and mapping (SLAM) of indoor mobile robots in unknown environments is a popular topic in robotics research, and laser radar is widely used for it. In this paper, three 2D-SLAM algorithms based on laser radar in the Robot Operating System (ROS) were compared and evaluated, namely Gmapping, Hector-SLAM and Cartographer. A mobile robot experimental platform based on ROS was first built in a real environment. To assess the mapping ability of the three SLAM algorithms, experiments were carried out in a simple corridor and in a laboratory with many obstacles. Ten points in the real environment were selected, and the distances measured on the maps were compared against the real distances obtained with a laser range finder for error analysis. Finally, the strengths and weaknesses of each SLAM algorithm were discussed based on the experimental results. It is concluded that Gmapping has the highest mapping accuracy in simple, small-scale environments, Hector-SLAM is more suitable for long corridor environments, and Cartographer has more advantages in complex environments.
Keywords: Gmapping, Cartographer, Hector, SLAM, Laser radar
Updated 2025-09-12 10:27:22
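A small illustrative snippet of the distance-based error analysis described above, comparing distances measured on a generated map against ground-truth distances; the numbers are hypothetical, not the paper's measurements:

```python
# Compare map-measured distances with laser-range-finder ground truth and
# report per-measurement relative error and overall RMSE. Data is hypothetical.
import numpy as np

def distance_errors(map_dists, true_dists):
    """Return relative errors (%) per measurement and RMSE in the input unit."""
    map_dists, true_dists = np.asarray(map_dists), np.asarray(true_dists)
    rel_err = 100.0 * (map_dists - true_dists) / true_dists
    rmse = np.sqrt(np.mean((map_dists - true_dists) ** 2))
    return rel_err, rmse

# Hypothetical distances (metres) between selected point pairs on one map vs. ground truth.
rel, rmse = distance_errors([3.02, 5.10, 7.95], [3.00, 5.00, 8.00])
print(rel, rmse)
```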
-
A Visual SLAM System with Laser Assisted Optimization. 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 2019.7.8-2019.7.12
Abstract: Current robots mainly perform positioning and environment reconstruction through laser radar or visual sensors. Visual positioning is not as accurate as laser-based positioning, but it captures more environmental information and allows reconstruction of the surroundings. In this paper, the advantages of the two methods are combined: the robot coordinate transformation obtained by the laser part is passed to the visual part and converted into the initial value of the pose optimization in the visual tracking thread. This initial value is more accurate than the one obtained from the reference keyframe or the motion model, which improves positioning accuracy and reduces tracking loss to some extent. Finally, a more accurate environment point cloud map is constructed for subsequent work.
Keywords: robot positioning, visual sensors, sensor fusion, laser radar, SLAM
Updated 2025-09-11 14:15:04
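One way to read the fusion step above is that the laser odometry increment, expressed in the camera frame through the camera-laser extrinsic, seeds the visual pose optimization. A minimal sketch under that assumption (4x4 homogeneous transforms, NumPy; not the authors' code):

```python
# Seed the visual tracking thread with the laser-derived pose increment instead
# of the constant-velocity motion model. T_cam_laser is the assumed-known
# camera-laser extrinsic; all transforms are 4x4 homogeneous matrices.
import numpy as np

def initial_camera_pose(T_w_cam_prev, T_laser_prev_curr, T_cam_laser):
    """Predict the current camera pose from the laser odometry increment."""
    # Express the laser increment in the camera frame, then compose it with the
    # previous camera pose to obtain the optimization's initial value.
    T_cam_prev_curr = T_cam_laser @ T_laser_prev_curr @ np.linalg.inv(T_cam_laser)
    return T_w_cam_prev @ T_cam_prev_curr
```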
-
Mobile Laser Scanning for Estimating Tree Stem Diameter Using Segmentation and Tree Spine Calibration
Abstract: Mobile laser scanning (MLS) could make forest inventories more efficient by using algorithms that automatically derive tree stem center positions and stem diameters. In this work we present a novel method for calibrating the positions of laser returns based on tree spines derived from the laser data. A first calibration of positions was made for sequential laser scans, and further calibration of laser returns was possible after segmentation, in which laser returns were associated with individual tree stems. The segmentation made it possible to model tree stem spines (i.e., the center lines of tree stems). Assumptions of coherent tree spine positions were used to correct laser return positions on the tree stems, thereby improving the estimation of stem profiles (i.e., stem diameters at different heights above ground level). The method was validated on six 20-m-radius field plots. Stem diameters were estimated with a root-mean-square error (RMSE) of 1 cm for safely linked trees (maximum link distance of 0.5 m) and with a requirement on the minimum amount of data per height interval supporting the circle estimates. The accuracy was high for plot-level estimates of basal-area-weighted mean stem diameter (relative RMSE 3.4%) and basal area (relative RMSE 8.5%), because small trees have little influence on these aggregated estimates. The spine calibration also made it possible to derive 3D stem profiles from 3D laser data computed from sensor positions with large errors caused by disturbed below-canopy signals from global navigation satellite systems.
Keywords: SLAM, tree map, forest inventory, MLS, personal laser scanner
Updated 2025-09-11 14:15:04
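The diameter estimates rest on fitting circles to horizontal slices of stem returns; below is a minimal algebraic (Kåsa) circle-fit sketch in that spirit, with hypothetical data and none of the paper's segmentation or spine calibration:

```python
# Estimate a stem diameter at one height interval by fitting a circle
# (algebraic / Kasa least squares) to the laser returns in that slice.
# Illustrative only; the full pipeline adds segmentation and spine calibration.
import numpy as np

def fit_circle(xy):
    """Least-squares circle fit; returns (cx, cy, diameter) for an (N, 2) point slice."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, 2.0 * radius

# Hypothetical slice of returns on the visible side of a 30 cm stem.
theta = np.linspace(0, np.pi, 25)
pts = np.column_stack([0.15 * np.cos(theta) + 2.0, 0.15 * np.sin(theta) + 5.0])
print(fit_circle(pts))   # approximately (2.0, 5.0, 0.30)
```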
-
Enhancing SLAM method for mapping and tracking using a low cost laser scanner. 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece, 2019.7.15-2019.7.17
Abstract: This paper presents a SLAM technique that does not use odometry information. It is based on the Hector SLAM method from Technische Universität Darmstadt, but uses different hardware from that originally proposed and does not require an IMU. The method relies on modified settings of Hector SLAM and optimizes it for COTS hardware.
Keywords: laser scanner, SLAM, Robotic Vision, ROS
Updated 2025-09-11 14:15:04
-
EMoVI-SLAM: Embedded Monocular Visual Inertial SLAM with Scale Update for Large Scale Mapping and Localization. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 2018.8.27-2018.8.31
Abstract: Monocular simultaneous localization and mapping (SLAM) remains a well-known technique for ego-motion tracking; however, it suffers significantly from scale drift. Depth estimation in a monocular vision system, which remains challenging, is closely related to this drift, and hence monocular SLAM is unsuitable for large-scale mapping and localization. This paper presents a novel solution, a wearable and embedded EMoVI-SLAM system, which resolves scale drift through a multi-sensor fusion architecture integrating visual and inertial data, with monocular SLAM as the basis of the visual framework. First, the unknown scale parameter of the monocular vision system is estimated from the IMU measurements, while the gravity direction and gyroscope bias are initialized. Second, the pose estimates from the monocular visual sensor and the IMU are fused using an Unscented Kalman Filter (UKF). Furthermore, to minimize scale drift, the scale is re-computed whenever the IMU bias errors exceed a safe threshold. Finally, experiments are carried out by mounting the embedded SLAM system on headgear in two different test environments, covering indoor and outdoor large-scale motion, as well as on the EuRoC dataset. Experimental results show that the proposed algorithm performs better than state-of-the-art visual-inertial SLAM systems.
Keywords: scale drift, monocular SLAM, UKF, visual-inertial fusion, large-scale mapping
Updated 2025-09-10 09:29:36
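A sketch of the scale-estimation idea only: over a short window, the metric translation integrated from the IMU divided by the up-to-scale translation from the monocular front-end approximates the map scale. This omits the gravity/bias initialization and UKF fusion described above; data and names are hypothetical:

```python
# Estimate the monocular map scale from matched position tracks over a short window.
# The real system also initializes gravity/gyro bias and fuses poses with a UKF.
import numpy as np

def estimate_scale(visual_positions, imu_positions):
    """visual_positions: up-to-scale (N, 3) track; imu_positions: metric (N, 3) track."""
    d_vis = np.linalg.norm(np.diff(visual_positions, axis=0), axis=1).sum()
    d_imu = np.linalg.norm(np.diff(imu_positions, axis=0), axis=1).sum()
    return d_imu / d_vis   # multiply monocular translations by this factor

# Hypothetical window: monocular SLAM reports 0.5 "units", IMU integration says 1.0 m.
vis = np.array([[0, 0, 0], [0.25, 0, 0], [0.5, 0, 0]], dtype=float)
imu = np.array([[0, 0, 0], [0.5, 0, 0], [1.0, 0, 0]], dtype=float)
print(estimate_scale(vis, imu))   # -> 2.0
```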
-
HW/SW co-design of a visual SLAM application
Abstract: Vision-based advanced driver assistance systems (ADAS), which appeared in the 2000s, are increasingly integrated on board mass-produced vehicles, as off-the-shelf low-cost cameras are now available. However, ADAS most of the time implement specific and basic functionalities such as lane departure warning or control of the distance to other vehicles. Integrating accurate localization and mapping functionalities that meet the constraints of ADAS (high throughput, low consumption, and small design footprint) would pave the way towards obstacle detection, identification and tracking on board vehicles, potentially at high speed. While the SLAM problem has been widely addressed by the robotics community, very few embedded operational implementations can be found, and they do not meet the ADAS-related constraints. In this paper, we implement the first 3D monocular EKF-SLAM chain on a heterogeneous architecture, on a single System on Chip (SoC), meeting these constraints. To do so, we adopted a standard co-design method (Shaout et al., Specification and Modeling of HW/SW Co-Design for Heterogeneous Embedded Systems, 2009) and adapted it to the implementation of potentially any such complex processing chain. The refined method encompasses a hardware-in-the-loop approach that allows hardware accelerators to be integrated progressively on the basis of a systematic rule. We have also designed original hardware accelerators for all the image processing functions involved, and for some of the algebraic operations in the filtering process.
Keywords: SLAM, ADAS, FPGA, Co-design, Machine-vision
Updated 2025-09-09 09:28:46
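For context on the "algebraic operations in the filtering process", a textbook EKF measurement update is sketched below; it makes the dominant matrix products and inversion explicit and is a generic sketch, not the paper's processing chain:

```python
# Generic EKF measurement-update sketch, showing the algebraic operations
# (matrix products, inversion) that hardware accelerators typically target.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x, P: state and covariance; z: measurement; h: predicted measurement; H, R: Jacobian and noise."""
    y = z - h                                   # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x + K @ y                           # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P        # corrected covariance
    return x_new, P_new
```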
-
Real-time Vehicle Localization and Tracking Using Monocular Panomorph Panoramic Vision. 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018.8.20-2018.8.24
Abstract: This paper presents a feasibility analysis of ORB-SLAM [1] for real-time vehicle localization and tracking using a monocular camera providing 360° panoramic views. The method described in [1] was initially designed and developed for conventional cameras; it detects and tracks visual features and estimates the camera trajectory while reconstructing the environment. The accuracy of the tracking depends on the ability of the method to robustly detect and match sufficient visual features. This work aims to extend the method to wide monocular round views using fisheye-like cameras, increasing the number of visual features with the aim of improving localization robustness. The main challenge in using a standard fisheye camera for generating panoramic views is the reduction in visual performance due to potentially higher distortion and lower spatial resolution compared to a standard camera lens. The objective of this research is to analyse the feasibility of a method combining a camera equipped with a panomorph lens, to generate real-time panoramic views with minimal distortion, and ORB-SLAM, to robustly detect and track visual features for real-time camera localization and tracking. A quantitative evaluation is performed on a vehicle driving in an outdoor natural scene with the monocular panomorph camera mounted at the front and no additional sensors. The results, analysis and a concluding summary are included as well.
Keywords: panoramic vision, visual SLAM, localization and tracking, panomorph
Updated 2025-09-09 09:28:46