
oe1 (光电查) - Scientific Papers

262 records
  • Deep hybrid scattering image learning

    Abstract: A well-trained deep neural network is shown to gain the capability of simultaneously restoring two kinds of images that are completely destroyed by two distinct scattering media. The network, based on the U-net architecture, can be trained on a blended dataset of speckle-reference image pairs. We experimentally demonstrate the power of the network in reconstructing images strongly diffused by a glass diffuser or a multi-mode fiber. The learning model further shows good generalization ability, reconstructing images distinct from the training dataset. Our work facilitates the study of optical transmission and expands machine learning's application in optics.

    Keywords: Image processing, diffractive optics, machine learning

    Updated 2025-09-23 15:23:52
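The "blended dataset" idea above, training one network on speckle/reference pairs drawn from two different scattering media, can be sketched as follows. This is a minimal illustration only: the pair lists are made-up placeholders, and the actual model in the paper is a U-net, not shown here.

```python
import random

def blend_datasets(diffuser_pairs, fiber_pairs, seed=0):
    """Merge (speckle, reference) pairs from two scattering media into
    one shuffled training set, so a single network learns both mappings."""
    blended = list(diffuser_pairs) + list(fiber_pairs)
    random.Random(seed).shuffle(blended)
    return blended

# Hypothetical toy pairs standing in for speckle/reference image tensors.
diffuser = [("speckle_d%d" % i, "ref_d%d" % i) for i in range(3)]
fiber = [("speckle_f%d" % i, "ref_f%d" % i) for i in range(3)]
train_set = blend_datasets(diffuser, fiber)
```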

  • Research on feature point extraction and matching machine learning method based on light field imaging

    Abstract: At present, there are many methods for matching images by their features; the basic components are image feature point detection, feature description, and image matching. Against this background, this article explores all three aspects. Image feature point detection is studied first; common approaches include edge-based detection, corner-based detection, and various interest operators. However, the traditional detection methods all suffer from heavy computation and long run times. To address this, a feature detection method based on image grayscale information, the FAST operator, is used in this paper and combined with decision-tree theory to effectively speed up feature point extraction. Then the BRIEF feature description operator is studied, which expresses detected image feature points locally through descriptors. Since the descriptor lacks rotation invariance, this paper assigns the detection operator an orientation and then performs local feature description to generate a binary string array containing direction information. Finally, the feature matching machine learning method is analyzed; a nearest-neighbor search is used to find the feature point pair closest in Euclidean distance, which keeps the computational burden small. The simulation results show that the proposed nearest-neighbor search and matching algorithm achieves higher matching accuracy and faster calculation than classical feature matching algorithms, a clear advantage when processing the large number of array images captured by a light field camera.

    Keywords: Nearest neighbor search, Light field imaging, Image matching, Machine learning

    Updated 2025-09-23 15:23:52
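As a minimal sketch of the matching step described above: the abstract matches in Euclidean distance, but for binary BRIEF-style descriptors the usual equivalent is nearest-neighbor search under Hamming distance, shown here with descriptors packed into integers (the descriptors below are invented):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_nearest(query_descs, train_descs):
    """For each query descriptor, return (query_idx, nearest_train_idx)."""
    matches = []
    for qi, q in enumerate(query_descs):
        best = min(range(len(train_descs)),
                   key=lambda ti: hamming(q, train_descs[ti]))
        matches.append((qi, best))
    return matches
```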

  • [IEEE 2018 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube) - Quetta, Pakistan (2018.11.12-2018.11.13)] 2018 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube) - Mathematical Modeling of Photonic Crystal based Optical Filters using Machine Learning

    Abstract: In this paper, we present a new approach to designing photonic-crystal-based optical filters using a machine-learning-based mathematical model. The presented optical filter device finds application in the near-infrared spectral range. The design and spectral response of the filter can be predicted using the proposed mathematical model, which can considerably reduce simulation time and effort. The numerical simulation of the optical filter device, along with its spectral results and mathematical modeling, is described.

    Keywords: machine learning in optics, machine learning, optical filters, guided-mode resonance, dielectric photonic crystals, low-index contrast, FDTD

    Updated 2025-09-23 15:22:29
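The abstract does not give the model's form; as an illustration only, a guided-mode resonance peak shifts roughly linearly with grating period (λ ≈ n_eff · Λ), so even a least-squares linear fit to a handful of FDTD samples can act as a fast surrogate for full simulation. The data points below are invented:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Invented (grating period in nm, resonance wavelength in nm) samples,
# exactly linear here: lambda = 1.8 * period.
periods = [500.0, 550.0, 600.0]
peaks = [900.0, 990.0, 1080.0]
a, b = fit_linear(periods, peaks)
```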

  • A review of image-based automatic facial landmark identification techniques

    Abstract: The accurate identification of landmarks within facial images is an important step in a number of higher-order computer vision tasks such as facial recognition and facial expression analysis. While an intuitive and simple task for human vision, it has taken decades of research, an increase in the availability of quality datasets, and a dramatic improvement in computational processing power to achieve near-human accuracy in landmark localisation. The intent of this paper is to review the current facial landmarking literature, outlining the significant progress made in the field from classical generative methods to more modern techniques such as sophisticated deep neural network architectures. The review considers a generalised facial landmarking problem, provides experimental examples for each stage in the process, reports repeatable benchmarks across a number of publicly available datasets, and links the results of these examples to recently reported performance in the literature.

    Keywords: Vision, Landmarking, Face, Registration, Survey, Image, Review, Artificial neural networks, Deep learning, Machine learning

    Updated 2025-09-23 15:22:29

  • [IEEE 2018 IEEE 6th Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE) - Vilnius, Lithuania (2018.11.8-2018.11.10)] 2018 IEEE 6th Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE) - Deep Neural Network-based Feature Descriptor for Retinal Image Registration

    Abstract: Feature description is an important step in the image registration workflow. The discriminative power of feature descriptors affects feature matching performance and the overall results of image registration. Deep Neural Network-based (DNN) feature descriptors are an emerging trend in image registration tasks, often performing as well as or better than hand-crafted ones. However, there are no learned local feature descriptors trained specifically for human retinal image registration. In this paper we propose a DNN-based feature descriptor trained on retinal image patches and compare it to well-known hand-crafted feature descriptors. The training dataset of image patches was compiled from nine online datasets of eye fundus images. The learned feature descriptor was compared to other descriptors on the Fundus Image Registration dataset (FIRE) by measuring the number of correctly matched ground truth points (Rank-1 metric) after feature description. We compare the performance of various feature descriptors applied to retinal image feature matching.

    Keywords: artificial neural networks, biomedical imaging, machine learning, image registration, retinal images, feature descriptors

    Updated 2025-09-23 15:22:29
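The Rank-1 metric mentioned above can be read as: the fraction of ground-truth points whose nearest descriptor in the other image is the true correspondence. A minimal sketch, with a squared-Euclidean distance and made-up descriptor vectors (correspondences share an index):

```python
def sqdist(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def rank1(query_descs, ref_descs):
    """Descriptors at the same index are true correspondences; count how
    often the nearest reference descriptor is the correct one."""
    correct = 0
    for i, q in enumerate(query_descs):
        nearest = min(range(len(ref_descs)),
                      key=lambda j: sqdist(q, ref_descs[j]))
        correct += (nearest == i)
    return correct / len(query_descs)
```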

  • [IEEE 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP) - Stuttgart, Germany (2018.11.20-2018.11.22)] 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP) - Generative models for direct generation of CNC toolpaths

    Abstract: Today, numerical controls (CNC) are the standard for controlling machine tools and industrial robots in production and enable highly flexible and efficient production, especially for frequently changing production tasks. A numerical control has discrete inputs and outputs. Within the NC channel, however, curves must be described analytically to calculate position setpoints and limit jerk. The resulting switching between discrete and continuous descriptions, together with the considerable restrictions on parallelising the interpolation of continuous curves within the NC channel, creates an overhead that limits the NC channel's performance in computing new position setpoints. This can lead to a drop in production speed and thus to longer production times. To solve this problem, we propose a new approach based on deep generative models that allows the direct generation of interpolated toolpaths without calculating continuous curves and subsequently discretising them. The generative models are trained to create curves of certain types, such as linear and parabolic curves or splines, directly as discrete point sequences. This approach parallelises well and reduces the computing effort within the NC channel. First results with straight lines and parabolic curves show the feasibility of this new approach for generating CNC toolpaths.

    Keywords: machine learning, computerized numerical control, interpolation, CNC, generative adversarial networks

    Updated 2025-09-23 15:22:29
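The training targets described above, curves emitted directly as discrete point sequences at the interpolation cycle, could be generated along these lines. The parameterisation (start, velocity, acceleration) is an assumption for illustration, not taken from the paper:

```python
def sample_curve(kind, params, n, cycle_time):
    """Emit n position setpoints, one per interpolation cycle."""
    ts = [i * cycle_time for i in range(n)]
    if kind == "line":        # params = (start, velocity)
        p0, v = params
        return [p0 + v * t for t in ts]
    if kind == "parabola":    # params = (start, velocity, acceleration)
        p0, v, a = params
        return [p0 + v * t + 0.5 * a * t * t for t in ts]
    raise ValueError("unknown curve type: %s" % kind)
```

A generative model would then be trained to map the curve parameters straight to such point sequences, skipping the continuous-curve stage.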

  • [IEEE IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium - Valencia (2018.7.22-2018.7.27)] IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium - Feature Design for Classification from Tomosar Data

    Abstract: While previous work primarily focused on using Tomographic Synthetic Aperture Radar (TomoSAR) data to analyze the 3D structure of the imaged scene, we study its potential for generating semantic land cover maps in a supervised framework. We extract different features from the covariance matrices of a tomographic image stack as well as from the tomograms computed by tomographic focusing. To assess the impact of our approach, we compare our results to classification maps obtained from a fully polarimetric image. We show that it is possible to outperform classification results from polarimetric data by carefully designing hand-crafted features, which can be extracted either from multi-baseline single-polarization covariance matrices or from tomograms obtained after tomographic focusing. Our experiments show a significant gain in classification accuracy, especially on challenging classes such as heterogeneous city and road.

    Keywords: machine learning, Synthetic Aperture Radar, feature extraction, tomography

    Updated 2025-09-23 15:22:29
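The abstract does not list the hand-crafted features; as an illustration only, two natural candidates computed from a multi-baseline covariance matrix are the total power (trace) and the pairwise coherence magnitudes:

```python
import math

def covariance_features(C):
    """C: Hermitian sample covariance matrix (list of lists of complex).
    Returns [total power, |coherence| for each antenna pair]."""
    n = len(C)
    feats = [sum(C[i][i].real for i in range(n))]   # span / total power
    for i in range(n):
        for j in range(i + 1, n):
            # normalized cross-correlation magnitude between tracks i and j
            feats.append(abs(C[i][j]) / math.sqrt(C[i][i].real * C[j][j].real))
    return feats
```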

  • [IEEE NAECON 2018 - IEEE National Aerospace and Electronics Conference - Dayton, OH, USA (2018.7.23-2018.7.26)] NAECON 2018 - IEEE National Aerospace and Electronics Conference - Onboard Image Processing for Small Satellites

    Abstract: In general, the computational ability of spacecraft and satellites has lagged behind terrestrial computers by several generations. Moore's Law turns the supercomputers of yesterday into the laptops of today, but space computing remains relatively underpowered due to the harsh radiation environment and the low risk tolerance of most space missions. Space missions are generally low risk because of the high cost of components and launch. However, launch costs are dropping drastically, and innovations such as CubeSats are changing the risk equation. By accepting more risk and using commercial off-the-shelf (COTS) parts, it is possible to cheaply build and launch extremely capable computing platforms into space. High-performance satellites will be required for advanced interplanetary exploration due to latency challenges: the long transmission times between planets mean satellites or robotic explorers need onboard processing to perform tasks in real time. This paper explores one possible application that could be hosted onboard the next generation of high-performance satellites: performing object classification on satellite imagery. Automated satellite imagery processing is currently performed by servers or workstations on Earth, but this paper shows that those algorithms can be moved onboard satellites using COTS components. First, traditional computer vision techniques such as edge detection and sliding windows are used to detect possible objects on the open ocean. Then a modern neural network architecture classifies each object as a ship or not. The application is implemented on an NVIDIA Jetson TX2, and measurements of its power use confirm that it fits within the Size, Weight and Power (SWaP) requirements of SmallSats and possibly even CubeSats.

    Keywords: Satellite Imagery, Machine Learning, Neural Networks, Onboard Processing

    Updated 2025-09-23 15:22:29
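The detection stage described above, sliding a window over an edge map and keeping windows with enough edge pixels as ship candidates, might look like this. The window size and density threshold are arbitrary choices for illustration; in the real pipeline each hit would then be passed to the CNN classifier:

```python
def sliding_windows(h, w, win, stride):
    """Yield top-left corners of all win x win windows over an h x w grid."""
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield y, x

def detect_candidates(edge_map, win, stride, thresh):
    """edge_map: 2D list of 0/1 edge pixels. Return window corners whose
    edge-pixel density meets the threshold (likely objects on open ocean)."""
    h, w = len(edge_map), len(edge_map[0])
    hits = []
    for y, x in sliding_windows(h, w, win, stride):
        density = sum(edge_map[yy][xx]
                      for yy in range(y, y + win)
                      for xx in range(x, x + win)) / (win * win)
        if density >= thresh:
            hits.append((y, x))
    return hits
```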

  • Wind Speed Extrapolation using Machine Learning Methods and LiDAR Measurements

    Abstract: Accurate wind energy assessments require wind speed (WS) at the hub height. The cost of WS measurements grows enormously with height. This paper uses a deep neural network (DNN) algorithm to extrapolate the WS to greater heights based on values measured at lower heights. LiDAR measurements at lower heights are used for training and at greater heights for performance analysis. These measurements are made at heights of 10, 20, . . . , and 120 m. First, the measured WS values at 10-40 m were used to extrapolate values up to 120 m. In the second scenario, the WS at 10-50 m was used to extrapolate values up to 120 m. This continued until the last scenario, in which the WS at 10-100 m was used to estimate values at 110 and 120 m. A relationship between the measurement heights and the accuracy of the WS estimate at hub height is presented. The WS extrapolated with the present approach is compared with the measured values and with WS extrapolated using the local wind shear exponent (LWSE). Furthermore, to compare the DNN with other machine learning methods, its performance is measured against classical feedforward artificial neural networks trained using a genetic algorithm to find the initial weights and the Levenberg-Marquardt (LM) method for training (GANN). The mean absolute percent errors between measured and extrapolated WS at a height of 120 m, based on measurements between 10-50 m, using DNN, GANN, and LWSE are 9.65%, 12.77%, and 9.79%, respectively.

    Keywords: wind speed profile, renewable energy, machine learning, Extrapolation

    Updated 2025-09-23 15:22:29
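The LWSE baseline above is the standard power-law wind profile v(h) = v_ref · (h/h_ref)^α, with the shear exponent α fitted from two measured heights; the quoted error figures are mean absolute percent errors. A minimal sketch of both:

```python
import math

def wind_shear_alpha(v1, h1, v2, h2):
    """Local wind shear exponent from speeds measured at two heights."""
    return math.log(v2 / v1) / math.log(h2 / h1)

def extrapolate_ws(v_ref, h_ref, h_target, alpha):
    """Power-law extrapolation of wind speed to a target height."""
    return v_ref * (h_target / h_ref) ** alpha

def mape(measured, predicted):
    """Mean absolute percent error between measured and predicted series."""
    return 100.0 * sum(abs(m - p) / m
                       for m, p in zip(measured, predicted)) / len(measured)
```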

  • [IEEE 2018 26th European Signal Processing Conference (EUSIPCO) - Rome (2018.9.3-2018.9.7)] 2018 26th European Signal Processing Conference (EUSIPCO) - Wavelet-Based Classification of Transient Signals for Gravitational Wave Detectors

    Abstract: The detection of gravitational waves opened a new window on the cosmos. The Advanced LIGO and Advanced Virgo interferometers will probe a larger volume of the Universe and discover new gravitational wave emitters. Characterizing these detectors is of primary importance in order to recognize the main sources of noise and optimize the sensitivity of the searches. Glitches are transient noise events that can impact the data quality of the interferometers, and their classification is an important task in detector characterization. In this paper we present a classification method for short transient signals based on wavelet decomposition and de-noising, followed by classification of the extracted features with the XGBoost algorithm. Although the results show the accuracy is lower than that obtained with deep learning, this method, which extracts features while detecting signals in real time, can be configured as a fast classification system.

    Keywords: machine learning classification, signal processing, wavelet decomposition

    Updated 2025-09-23 15:22:29
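The wavelet step above can be illustrated with the simplest case: one level of a Haar decomposition followed by soft-thresholding of the detail coefficients (the paper does not specify the wavelet family, so Haar here is an assumption). The thresholded coefficients would then serve as features for the XGBoost classifier:

```python
def haar_step(x):
    """One level of the orthonormal Haar transform: approximation and
    detail coefficients for an even-length signal."""
    s = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; small (noisy) ones vanish."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]
```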