
oe1 (光电查) - Scientific Papers

13 records
  • [IEEE 2018 IEEE International Conference on Intelligent Transportation Systems (ITSC) - Maui, HI, USA (2018.11.4-2018.11.7)] 2018 21st International Conference on Intelligent Transportation Systems (ITSC) - Vehicle Detection and Localization using 3D LIDAR Point Cloud and Image Semantic Segmentation

    Abstract: This paper presents a real-time approach to detect and localize surrounding vehicles in urban driving scenes. We propose a multimodal fusion framework that processes both 3D LIDAR point cloud and RGB image to obtain robust vehicle position and size in a Bird's Eye View (BEV). Semantic segmentation from RGB images is obtained using our efficient Convolutional Neural Network (CNN) architecture called ERFNet. Our proposal takes advantage of accurate depth information provided by LIDAR and detailed semantic information processed from a camera. The method has been tested using the KITTI object detection benchmark. Experiments show that our approach outperforms or is on par with other state-of-the-art proposals, even though our CNN was trained on another dataset, showing a good generalization capability to any domain, a key point for autonomous driving.

    Keywords: localization, ERFNet, image semantic segmentation, KITTI, autonomous driving, vehicle detection, CNN, point cloud, multimodal fusion, 3D LIDAR

    Updated 2025-09-23 15:22:29
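The fusion recipe in the abstract above — project LIDAR points into a semantically segmented camera image, keep the points that land on vehicle pixels, and box them in the Bird's Eye View — can be sketched as follows. This is an illustrative stand-in, not the paper's ERFNet pipeline: the intrinsics matrix `K`, the vehicle label id, and the camera-frame point convention (z forward) are all assumptions.

```python
import numpy as np

def fuse_lidar_semantics(points_xyz, sem_mask, K, vehicle_label=1):
    """Project LIDAR points (camera frame, z forward) into the image
    with intrinsics K, keep those landing on vehicle-labeled pixels,
    and return their axis-aligned BEV box (x_min, x_max, z_min, z_max)."""
    pts = points_xyz[points_xyz[:, 2] > 0]          # points in front of the camera
    uvw = (K @ pts.T).T                              # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)      # pixel coordinates
    h, w = sem_mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts, uv = pts[inside], uv[inside]
    on_vehicle = sem_mask[uv[:, 1], uv[:, 0]] == vehicle_label
    veh = pts[on_vehicle]
    if len(veh) == 0:
        return None
    # BEV box over lateral (x) and forward (z) coordinates
    return veh[:, 0].min(), veh[:, 0].max(), veh[:, 2].min(), veh[:, 2].max()
```

The semantic mask prunes LIDAR points that belong to background, while the LIDAR depth gives the metric BEV extent that a monocular image alone cannot.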

  • [IEEE 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS) - BALI, Indonesia (2018.11.1-2018.11.3)] 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS) - Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices

    Abstract: Applications such as autonomous driving and virtual reality (VR) require low-latency transfer of high-definition (HD) video. The proposed ultra-low-latency video coding method, which adopts line-based processing, achieves a minimum latency of 0.44 μs for Full-HD video. With multiple line-based image-prediction methods, image-adaptive quantization, and optimized entropy coding, the proposed method compresses video to 39.0% of its original data size at an image quality of 45.4 dB. The proposed basic algorithm and the optional 1D-DCT mode achieve compression to 33% and 20%, respectively, without significant visual degradation. These results are comparable to those of H.264 Intra, even though the proposed method's latency is roughly one thousandth as long. With the proposed video coding, autonomous vehicles and VR devices can transfer HD video using 20% of the bandwidth of the source video without significant latency or visual degradation.

    Keywords: low latency, video coding, virtual reality (VR), autonomous driving

    Updated 2025-09-23 15:22:29
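Line-based processing keeps latency to a fraction of a frame time because each scan line is predicted, quantized, and emitted before the next line arrives. A minimal closed-loop DPCM over one line illustrates the idea; the left-neighbour prediction, quantizer step `q`, and initial predictor value are toy choices, not the paper's actual algorithm.

```python
def dpcm_line(line, q=4):
    """Closed-loop DPCM over one scan line: each pixel is predicted
    from the last *reconstructed* pixel (so encoder and decoder stay
    in sync), and only the quantized residual would be entropy-coded."""
    recon = 128                        # arbitrary initial predictor value
    residuals, reconstructed = [], []
    for px in line:
        qr = round((px - recon) / q)   # quantized residual to transmit
        recon += qr * q                # decoder-side reconstruction
        residuals.append(qr)
        reconstructed.append(recon)
    return residuals, reconstructed
```

The residual stream is what the entropy coder would compress; reconstruction error stays bounded by half the quantizer step for smooth content.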

  • [IEEE 2019 IEEE Intelligent Transportation Systems Conference - ITSC - Auckland, New Zealand (2019.10.27-2019.10.30)] 2019 IEEE Intelligent Transportation Systems Conference (ITSC) - Graph-based Map-Aided Localization using Cadastral Maps as Virtual Laser Scans

    Abstract: Environment-based localization algorithms, such as laser odometry, can achieve remarkable accuracy on the local scale. For autonomous driving, however, it is mandatory to combine these estimates with global information to overcome large-scale drift. Our approach uses freely accessible cadastral plans (building footprints) together with 2D laser information and odometry in a graph-based approach to realize real-time global localization. The main contributions of our work reside in the way we create a virtual laser scan from cadastral plans, and in the fact that we consider observation integrity by identifying corridor-like environment configurations (ambiguous positioning along the longitudinal axis). In addition, we evaluate our approach on a vehicle in two urban scenarios. We present a comparison of the obtained precision using different relevant combinations of the proposed contributions and show that we can reach an average positioning accuracy of 55 cm at best, without requiring a first passage of an equipped vehicle to build a map.

    Keywords: laser scan, autonomous driving, localization, cadastral maps, graph-based

    Updated 2025-09-16 10:30:52
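Casting virtual beams from a candidate pose against building-footprint polygons is the core of treating cadastral maps as virtual laser scans. The 2D ray casting below is a minimal sketch under assumed conventions (beams swept counter-clockwise from the heading, a fixed max range); the paper's scan generation and graph integration are more involved.

```python
import math

def ray_segment_range(ox, oy, angle, p1, p2):
    """Distance along the ray from (ox, oy) at `angle` to segment
    p1-p2, or None if the ray misses the segment."""
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                       # ray parallel to segment
        return None
    t = ((p1[0] - ox) * ey - (p1[1] - oy) * ex) / denom   # range along ray
    s = ((p1[0] - ox) * dy - (p1[1] - oy) * dx) / denom   # position on segment
    return t if t >= 0 and 0 <= s <= 1 else None

def virtual_scan(pose, footprints, n_beams=8, max_range=50.0):
    """Simulate a 2D laser scan against building-footprint polygons
    (each a list of (x, y) vertices) from pose = (x, y, heading)."""
    ox, oy, heading = pose
    ranges = []
    for i in range(n_beams):
        a = heading + 2 * math.pi * i / n_beams
        best = max_range
        for poly in footprints:
            for j in range(len(poly)):
                r = ray_segment_range(ox, oy, a, poly[j], poly[(j + 1) % len(poly)])
                if r is not None and r < best:
                    best = r
        ranges.append(best)
    return ranges
```

The resulting virtual scan can then be matched against the real 2D laser scan like any scan-to-scan registration, anchoring the pose graph globally.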

  • [IEEE 2019 IEEE Intelligent Vehicles Symposium (IV) - Paris, France (2019.6.9-2019.6.12)] 2019 IEEE Intelligent Vehicles Symposium (IV) - A decision-making architecture for automated driving without detailed prior maps

    Abstract: Autonomous driving requires general methods that can generalize to unpredictable situations and reason in complex scenarios where safety is critical and the vehicle must react reliably. In this sense, digital maps are a crucial component for relating the location of the vehicle to the different road features. In this work, we present a decision-making architecture that does not require detailed prior maps. Instead, OpenStreetMap (OSM) is used to plan a global route and to automatically generate driving corridors, which are adapted using a proposed vision-based algorithm. Moreover, a grid-based approach is applied to account for localization uncertainty. These self-generated driving corridors are used by the local planner to plan the trajectories the vehicle will follow. Our approach integrates global, local, and HMI components to provide the functionality required for autonomous driving in a general manner.

    Keywords: grid-based approach, decision-making architecture, autonomous driving, vision-based algorithm, OpenStreetMap

    Updated 2025-09-12 10:27:22

  • [IEEE 2018 IEEE Intelligent Vehicles Symposium (IV) - Changshu (2018.6.26-2018.6.30)] 2018 IEEE Intelligent Vehicles Symposium (IV) - Vision-based Semantic Mapping and Localization for Autonomous Indoor Parking

    Abstract: In this paper, we propose a novel and practical solution for real-time indoor localization of autonomous vehicles in parking lots. High-level landmarks, the parking slots, are extracted and enriched with labels to avoid the aliasing of low-level visual features. We then propose a robust method for detecting incorrect data associations between parking slots, and further extend the optimization framework by dynamically eliminating suboptimal data associations. Visual fiducial markers are introduced to improve the overall precision. As a result, a semantic map of the parking lot can be established fully automatically and robustly. We evaluated real-time localization based on the map using our autonomous driving platform TiEV; an average track-tracing accuracy of 0.3 m is achieved at a speed of 10 km/h.

    Keywords: parking slots, autonomous driving, indoor localization, visual fiducial markers, semantic mapping

    Updated 2025-09-10 09:29:36

  • [IEEE 2018 IEEE 3rd Optoelectronics Global Conference (OGC) - Shenzhen, China (2018.9.4-2018.9.7)] 2018 IEEE 3rd Optoelectronics Global Conference (OGC) - Towards Autonomous Driving Technology: A Method to Enhance Visibility in Fog Based on Low-Position Road Lighting

    Abstract: Seeing through thick fog is an important challenge for autonomous driving technology. This experiment developed a new lighting method to improve visibility in thick fog, based on the principle of visual acuity in the biology of vision. The experiment was conducted in a fog chamber measuring 1.5 m × 1.2 m × 4.0 m, built at a 1:15 scale relative to a real road. The method doubled the visual distance in thick fog, from 1.5 meters to 3.0 meters. These results are of great significance to the development of autonomous driving technology.

    Keywords: low-position road lighting, visibility enhancement, autonomous driving technology, foggy weather

    Updated 2025-09-09 09:28:46

  • Aerial LaneNet: Lane-Marking Semantic Segmentation in Aerial Imagery Using Wavelet-Enhanced Cost-Sensitive Symmetric Fully Convolutional Neural Networks

    Abstract: The knowledge about the placement and appearance of lane markings is a prerequisite for the creation of maps with high precision, necessary for autonomous driving, infrastructure monitoring, lanewise traffic management, and urban planning. Lane markings are one of the important components of such maps. Lane markings convey the rules of roads to drivers. While these rules are learned by humans, an autonomous driving vehicle should be taught to learn them to localize itself. Therefore, accurate and reliable lane-marking semantic segmentation in the imagery of roads and highways is needed to achieve such goals. We use airborne imagery that can capture a large area in a short period of time by introducing an aerial lane-marking data set. In this paper, we propose a symmetric fully convolutional neural network enhanced by wavelet transform in order to automatically carry out lane-marking segmentation in aerial imagery. Due to a heavily unbalanced problem in terms of the number of lane-marking pixels compared with background pixels, we use a customized loss function as well as a new type of data augmentation step. We achieve a high accuracy in pixelwise localization of lane markings compared with the state-of-the-art methods without using third-party information. In this paper, we introduce the first high-quality data set used within our experiments, which contains a broad range of situations and classes of lane markings representative of today's transportation systems. This data set will be publicly available, and hence, it can be used as the benchmark data set for future algorithms within this domain.

    Keywords: aerial imagery, wavelet transform, autonomous driving, traffic monitoring, remote sensing, fully convolutional neural networks (FCNNs), lane-marking segmentation, infrastructure monitoring, mapping

    Updated 2025-09-09 09:28:46
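The "heavily unbalanced" pixel problem mentioned above is commonly addressed with a cost-sensitive loss that up-weights the rare class. A minimal weighted binary cross-entropy in NumPy shows the idea; the weight value and the exact form of the paper's customized loss are assumptions here.

```python
import numpy as np

def weighted_bce(probs, labels, pos_weight):
    """Cost-sensitive binary cross-entropy: lane-marking pixels
    (label 1) are up-weighted by pos_weight relative to the abundant
    background pixels, so the rare class is not drowned out."""
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)   # avoid log(0)
    per_pixel = -(pos_weight * labels * np.log(p)
                  + (1 - labels) * np.log(1 - p))
    return per_pixel.mean()
```

With `pos_weight` set near the background-to-foreground pixel ratio, a misclassified lane-marking pixel costs roughly as much as the many background pixels it competes with.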

  • Automatic Glare Detection via Photometric, Geometric, and Global Positioning Information

    Abstract: Glare due to sunlight, moonlight, or other light sources can be a serious impediment during autonomous or manual driving. Automatically detecting the presence, location, and severity of such glare can be of critical importance for an autonomous driving system, which may then give greater priority to other sensors or cues/parts of the scene. We present an algorithm for automatic real-time glare detection that uses a combination of: (1) the intensity, saturation, and local contrast of the input frame; (2) shape detection; and (3) solar azimuth and elevation computed based on the position and heading information from the GPS (used under daylight conditions). These data are used to generate a glare occurrence map that indicates the center location(s) and extent(s) of the glare region(s). Testing on a variety of daytime and nighttime scenes demonstrates that the proposed system is effective at glare detection and is capable of real-time operation.

    Keywords: autonomous driving, photometric, glare detection, geometric, GPS

    Updated 2025-09-09 09:28:46
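Cue (1) above — bright, washed-out pixels — can be turned into a crude glare occurrence map with two thresholds. The sketch below covers only this photometric cue; the shape-detection and GPS-based solar-position cues from the paper are omitted, and the threshold values are illustrative, not the paper's.

```python
import numpy as np

def glare_map(intensity, saturation, i_thresh=0.9, s_thresh=0.2):
    """Toy glare occurrence map: flag pixels that are both very bright
    and desaturated (washed out), the photometric signature of glare.
    Inputs are arrays normalized to [0, 1]; thresholds are illustrative."""
    return (intensity >= i_thresh) & (saturation <= s_thresh)

def glare_center(mask):
    """Centroid (row, col) of the flagged region, or None if no glare."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()
```

The mask's connected regions give the extent of each glare blob, and the centroid gives its center location, matching the two outputs the abstract describes.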

  • [IEEE 2018 IEEE International Conference on Intelligent Transportation Systems (ITSC) - Maui, HI, USA (2018.11.4-2018.11.7)] 2018 21st International Conference on Intelligent Transportation Systems (ITSC) - Camera-Based Semantic Enhanced Vehicle Segmentation for Planar LIDAR

    Abstract: Vehicle segmentation is an important step in perception for autonomous driving vehicles, providing object-level environmental understanding. Its performance directly affects other functions in the autonomous driving car, including decision-making and trajectory planning. However, this task is challenging for planar LIDAR due to its limited vertical field of view (FOV) and quality of points. In addition, directly estimating the 3D location, dimensions, and heading of vehicles from an image is difficult due to the limited depth information of a monocular camera. We propose a method that fuses a vision-based instance segmentation algorithm and a LIDAR-based segmentation algorithm to achieve accurate 2D bird's-eye-view object segmentation. This method combines the advantages of both camera and LIDAR sensors: the camera helps to prevent over-segmentation in LIDAR, and LIDAR segmentation removes false-positive areas in the regions of interest in the vision results. A modified T-linkage RANSAC is applied to further remove outliers. A better segmentation also results in a better orientation estimation. We achieved a promising improvement in average absolute heading error and 2D IOU on both a reduced-resolution KITTI dataset and our Cadillac SRX planar LIDAR dataset.

    Keywords: autonomous driving, vehicle segmentation, T-linkage RANSAC, fusion, camera, semantic segmentation, LIDAR

    Updated 2025-09-04 15:30:14
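The outlier-removal role that the modified T-linkage RANSAC plays above can be illustrated with plain two-point RANSAC line fitting — a deliberately simpler stand-in, not the paper's T-linkage variant. The iteration count, tolerance, and seed below are arbitrary.

```python
import math
import random

def ransac_line(points, n_iters=200, tol=0.2, seed=0):
    """Plain RANSAC: repeatedly fit a line through two random points
    and keep the hypothesis with the most inliers (points within `tol`
    of the line). Returns the inlier set of the best hypothesis."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # implicit line a*x + b*y + c = 0 through the two sampled points
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = math.hypot(a, b)
        if norm == 0:
            continue                      # degenerate sample
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Applied to, say, candidate points along a vehicle edge, the consensus set rejects stray LIDAR returns before the orientation is estimated from the surviving points.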

  • [IEEE 2018 IEEE International Conference on Intelligent Transportation Systems (ITSC) - Maui, HI, USA (2018.11.4-2018.11.7)] 2018 21st International Conference on Intelligent Transportation Systems (ITSC) - Automatic Vector-based Road Structure Mapping Using Multi-beam LiDAR

    Abstract: In this paper, we study a SLAM method for vector-based road structure mapping using multi-beam LiDAR. We propose to use the polyline as the primary mapping element instead of grid cells or point clouds, because the vector-based representation is precise and lightweight, and it can directly generate the vector-based High-Definition (HD) driving maps demanded by autonomous driving systems. We explore: 1) the extraction and vectorization of road structures based on local probabilistic fusion; 2) efficient vector-based matching between frames of road structures; 3) loop closure and optimization based on the pose graph. In this study, we take a specific road structure, the road boundary, as an example. We applied the proposed matching method in three different scenes and achieved an average absolute matching error of 0.07 m. We further applied the vector-based mapping system to an 860-meter road and achieved an average global accuracy of 0.466 m without the aid of GPS.

    Keywords: vector-based mapping, autonomous driving, multi-beam LiDAR, road boundary, SLAM

    Updated 2025-09-04 15:30:14
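The "average absolute matching error" between two polyline representations of the same road boundary can be computed as the mean vertex-to-polyline distance. This is a minimal sketch of such a metric under the assumption that one polyline's vertices are scored against the other's segments; the paper's matching procedure itself is more elaborate.

```python
import math

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    ex, ey = bx - ax, by - ay
    seg_len2 = ex * ex + ey * ey
    if seg_len2 == 0:                     # degenerate segment: a point
        return math.hypot(px - ax, py - ay)
    # clamp the projection parameter to stay on the segment
    t = max(0.0, min(1.0, ((px - ax) * ex + (py - ay) * ey) / seg_len2))
    return math.hypot(px - (ax + t * ex), py - (ay + t * ey))

def mean_matching_error(polyline_a, polyline_b):
    """Average absolute distance from each vertex of polyline_a to its
    nearest point on polyline_b."""
    segs = list(zip(polyline_b[:-1], polyline_b[1:]))
    return sum(min(point_to_segment(p, a, b) for a, b in segs)
               for p in polyline_a) / len(polyline_a)
```

Because polylines carry only a handful of vertices per boundary, this distance is cheap to evaluate inside a frame-to-frame matching loop, which is part of why the vector representation is described as lightweight.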