
oe1 (光电查) - Scientific Papers

262 records
  • Snow Loss Prediction for Photovoltaic Farms Using Computational Intelligence Techniques

    Abstract: With the recent widespread deployment of photovoltaic (PV) panels in northern snow-prone areas, performance analysis of these panels is gaining importance. Partial or full reduction in energy yield due to snow accumulation on the surface of PV panels, referred to as snow loss, reduces their operational efficiency. This article addresses, for the first time, the development of intelligent algorithms to accurately predict the future snow loss of PV farms. It proposes daily snow-loss prediction models using machine learning algorithms based solely on meteorological data. The algorithms include regression trees, gradient boosted trees, random forest, feed-forward and recurrent artificial neural networks, and support vector machines. The prediction models are built on the snow loss of a PV farm located in Ontario, Canada, calculated using a three-stage model and hourly data records over a four-year period: stage I, yield determination; stage II, power loss calculation; and stage III, snow loss extraction. The prediction models are validated on the historical data, and optimal hyperparameters are selected for each model to achieve the best results. Among all the models, gradient boosted trees obtained the minimum prediction error and thus the best performance. The results demonstrate the effectiveness of the proposed models for predicting the daily snow loss of PV farms.

    Keywords: snow loss, intelligent prediction, snowfall, photovoltaic (PV) farm, machine learning

    Updated 2025-09-23 15:21:01
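The gradient-boosted-tree approach that performed best in the abstract above can be sketched as follows. This is an illustrative example with synthetic data, not the authors' dataset or hyperparameters; the feature set (snowfall, temperature, irradiance) and the toy target relationship are assumptions.

```python
# Hedged sketch: daily snow-loss prediction from meteorological features
# using gradient boosted trees. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 1000
# Hypothetical daily features: snowfall (cm), temperature (C), irradiance (kWh/m^2)
X = np.column_stack([
    rng.gamma(2.0, 2.0, n_days),    # snowfall
    rng.normal(-5.0, 8.0, n_days),  # temperature
    rng.uniform(0.5, 6.0, n_days),  # irradiance
])
# Toy target: snow loss grows with snowfall, shrinks with warm temperatures
y = 0.1 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.05, n_days)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
```

In a real setup, `mae` would be compared across the model families the article lists (regression trees, random forest, neural networks, SVMs) during hyperparameter selection.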

  • An Ensemble Learner-Based Bagging Model Using Past Output Data for Photovoltaic Forecasting

    Abstract: Worldwide, energy generation has been shifting from conventional fossil fuels to sustainable sources. To reduce greenhouse gas emissions, the share of renewable energy sources should be increased, and solar and wind power are typically driving this change. However, renewable energy sources depend heavily on weather conditions and generate intermittently, introducing uncertainty and variability into the power system; accurate prediction of renewable energy output is essential to address this. Much research has studied prediction models for this problem, and machine learning is one of the typical methods. In this paper, we use a bagging model to predict solar energy output. Bagging generally uses a decision tree as its base learner. To improve forecasting accuracy, we propose a bagging model that instead uses an ensemble model as the base learner and adds past output data as new features. We set the base learners to ensemble models such as random forest, XGBoost, and LightGBM, and we use past output data as new features. Results show that the ensemble-learner-based bagging model using past-data features performs more accurately than a bagging model using a single-model learner with default features.

    Keywords: ensemble, decision tree, bagging, LightGBM, lagged data, machine learning, random forest, XGBoost, photovoltaic power forecasting

    Updated 2025-09-23 15:21:01

  • Low-Power Optical Sensor for Traffic Detection

    Abstract: 4-D computed tomography (4DCT) provides not only a new dimension of patient-specific information for radiation therapy planning and treatment, but also a challenging scale of data volume to process and analyze. Manual analysis using existing 3-D tools cannot keep up with the vastly increased 4-D data volume, so automated processing and analysis are needed to handle 4DCT data effectively and efficiently. In this paper, we apply ideas and algorithms from image/signal processing, computer vision, and machine learning to 4DCT lung data so that lungs can be reliably segmented in a fully automated manner, lung features can be visualized and measured on the fly via user interactions, and data quality classifications can be computed robustly. Comparisons of our results with an established treatment planning system and calculations by experts showed negligible discrepancies (within ±2%) for volume assessment but one to two orders of magnitude performance enhancement. An empirical Fourier-analysis-based quality measure delivered performance closely emulating human experts. Three machine learners were inspected to justify the viability of machine learning techniques for robustly identifying the data quality of 4DCT images in a scalable manner. The resulting system provides a toolkit that speeds up 4-D tasks in the clinic and facilitates clinical research to improve current clinical practice.

    Keywords: classification algorithms, machine learning algorithms, image analysis, Biomedical image processing, data visualization, computed tomography, morphological operations

    Updated 2025-09-23 15:21:01

  • Implementation and Characterization of Point Field Detectors for Current Mismatch Measurements in Paralleled GaN HEMTs (IEEE 7th Workshop on Wide Bandgap Power Devices and Applications (WiPDA), Raleigh, NC, USA, 2019)

    Abstract: In-vehicle speech-based interaction between a driver and a driving agent should be performed without affecting driving behavior. A driving agent provides information to the driver and assists with driving behavior and non-driving-related tasks, e.g., selecting music and giving weather information. In this paper, we focus on a method for determining utterance timings when a driving agent provides non-driving-related information. If a driving agent provides a driver with non-driving-related information at an inappropriate moment, it will distract the driver and degrade driving safety. To mitigate this problem, we propose a novel method for determining the utterance timing of a driving agent based on a double articulation analyzer, an unsupervised nonparametric Bayesian machine learning method for detecting contextual change points. To verify the effectiveness of the method, we conducted two experiments: one on a short circuit around a park in an urban area, and the other on a long course in a town. The results show that the proposed method enables a driving agent to avoid inappropriate timing better than baseline methods.

    Keywords: driving agent, machine learning, driving data, driver distraction, nonparametric Bayes

    Updated 2025-09-23 15:21:01

  • Autonomous Tuning and Charge-State Detection of Gate-Defined Quantum Dots

    Abstract: Defining quantum dots in semiconductor-based heterostructures is an essential step in initializing solid-state qubits. With growing device complexity and an increasing number of functional devices required for measurements, a manual approach to finding suitable gate voltages to confine electrons electrostatically is impractical. Here, we implement a two-stage device characterization and dot-tuning process, which first determines whether devices are functional and then attempts to tune the functional devices to the single or double quantum-dot regime. We show that automating well-established manual-tuning procedures and replacing the experimenter's decisions with supervised machine learning is sufficient to tune double quantum dots in multiple devices without premeasured input or manual intervention. The quality of measurement results and charge states is assessed by four binary classifiers trained with experimental data, reflecting real device behavior. We compare and optimize eight models and different data preprocessing techniques for each of the classifiers to achieve reliable autonomous tuning, an essential step towards scalable quantum systems in quantum-dot-based qubit architectures.

    Keywords: semiconductor qubits, autonomous tuning, machine learning, quantum dots

    Updated 2025-09-23 15:21:01

  • Gallium–Boron–Phosphide (GaBP2): a new III–V semiconductor for photovoltaics

    Abstract: Using a machine learning (ML) approach, we unearthed a new III–V semiconducting material with an optimal bandgap for highly efficient photovoltaics: Gallium–Boron–Phosphide (GaBP2, space group: Pna21). The ML predictions are further validated by state-of-the-art ab initio density functional theory simulations. The stoichiometric Heyd–Scuseria–Ernzerhof bandgap of GaBP2 is 1.65 eV, close to the ideal range (1.4–1.5 eV) for approaching the theoretical Shockley–Queisser limit. The calculated electron mobility is similar to that of silicon. Unlike perovskites, the newly discovered material is thermally, dynamically, and mechanically stable. Above all, the composition of GaBP2 is non-toxic and its elements are relatively earth-abundant, making it a new generation of PV material. Using ML, we show that with a minimal set of features, the bandgap of III–III–V and II–IV–V semiconductors can be predicted with an RMSE of less than 0.4 eV. We present a set of scaling laws that can be used to estimate the bandgap of new III–III–V and II–IV–V semiconductors, in three different crystal phases, within an RMSE of 0.4 eV.

    Keywords: Gallium–Boron–Phosphide, photovoltaics, GaBP2, III–V semiconductor, density functional theory, machine learning

    Updated 2025-09-23 15:21:01
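The ML step described in the abstract above, regressing a bandgap from a minimal feature set and reporting RMSE, can be sketched as below. The features (mean electronegativity, mean atomic radius, valence count), the data, and the target relationship are synthetic stand-ins, not the authors' descriptors or dataset.

```python
# Hedged sketch: bandgap regression from a minimal feature set,
# evaluated by RMSE as in the abstract. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 400
# Hypothetical minimal features per compound
X = np.column_stack([
    rng.uniform(1.5, 2.5, n),   # mean electronegativity
    rng.uniform(0.8, 1.6, n),   # mean atomic radius (angstrom)
    rng.integers(8, 17, n),     # total valence electron count
])
# Toy bandgap (eV) with 0.2 eV noise
bandgap = 3.0 - 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n)

split = int(0.8 * n)
model = RandomForestRegressor(random_state=4).fit(X[:split], bandgap[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - bandgap[split:]) ** 2))
```

The paper's claim is that a comparable pipeline over real III–III–V and II–IV–V descriptors reaches an RMSE below 0.4 eV.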

  • Adaptive Boosting and Bootstrapped Aggregation based Ensemble Machine Learning Methods for Photovoltaic Systems Output Current Prediction (29th Australasian Universities Power Engineering Conference (AUPEC), Nadi, Fiji, 2019)

    Abstract: Photovoltaic output current prediction has received a great deal of attention in recent years due to the high penetration level of PV utilization. The intermittent nature of PV systems, in addition to fast-varying irradiance levels, has provoked the need for fast, accurate, and reliable forecasting techniques. Machine learning (ML) methods have been proven to effectively solve regression-based prediction problems; ML methods that utilize multiple models to construct decision trees are called ensemble machine learning (EML) algorithms. This paper presents a comparison study of two EML methods, namely AdaBoost and Random Forest, for photovoltaics applications. A dataset of fast-varying environmental conditions was employed, and the terminal current of the experimental setup was augmented based on a mathematical model and the use of an evolutionary algorithm. The mathematical model was examined for several irradiance and temperature levels and adjusted based on the manufacturer datasheet. The results show the superior performance of Random Forest over AdaBoost in terms of absolute error: Random Forest's overall absolute error distribution had the lowest mean, 0.27%, with a standard deviation of 0.91%, whereas AdaBoost's absolute error distribution was scattered with larger quartile limits, with a mean as high as 34.5% and a standard deviation of 15.8% relative to the mathematical model. Accurate predictions can be integrated into an EML-based maximum power point tracking (MPPT) scheme.

    Keywords: ensemble machine learning, adaptive boosting, photovoltaics, regression decision trees, single diode model

    Updated 2025-09-23 15:21:01
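The comparison in the abstract above, mean and standard deviation of absolute error for Random Forest versus AdaBoost, can be sketched as below. The data is a synthetic stand-in for the paper's single-diode-based setup; the reported 0.27% and 34.5% figures are the paper's own and are not reproduced here.

```python
# Hedged sketch: comparing Random Forest and AdaBoost absolute-error
# distributions on a toy PV terminal-current regression task.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor

rng = np.random.default_rng(2)
n = 600
irradiance = rng.uniform(100, 1000, n)
temperature = rng.normal(25, 10, n)
# Toy terminal current: roughly proportional to irradiance, small temp effect
current = 8.0 * irradiance / 1000 * (1 + 0.0005 * (temperature - 25)) \
    + rng.normal(0, 0.05, n)

X = np.column_stack([irradiance, temperature])
split = int(0.8 * n)
errors = {}
for name, model in [("RandomForest", RandomForestRegressor(random_state=2)),
                    ("AdaBoost", AdaBoostRegressor(random_state=2))]:
    model.fit(X[:split], current[:split])
    abs_err = np.abs(model.predict(X[split:]) - current[split:])
    errors[name] = (abs_err.mean(), abs_err.std())  # mean and spread, as in the paper
```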

  • Multi-Spectral Water Index (MuWI): A Native 10-m Multi-Spectral Water Index for Accurate Water Mapping on Sentinel-2

    Abstract: Accurate water mapping depends largely on the water index. However, most previously widely adopted water index methods were developed from 30-m resolution Landsat imagery, with low-albedo commission error (e.g., shadow misclassified as water) and threshold instability identified as the primary issues. Moreover, since the shortwave-infrared (SWIR) spectral band (band 11) on Sentinel-2 has 20 m spatial resolution, current SWIR-based water index methods usually produce water maps at 20 m resolution instead of the highest 10 m resolution of the Sentinel-2 bands, which limits the ability of Sentinel-2 to detect surface water at finer scales. This study aims to develop a water index from Sentinel-2 that improves both the native resolution and the accuracy of water mapping. A Support Vector Machine (SVM) is used to exploit the 10-m spectral bands among the Sentinel-2 bands of three resolutions (10 m, 20 m, 60 m). The new Multi-Spectral Water Index (MuWI), consisting of a complete version and a revised version (MuWI-C and MuWI-R), is designed as a combination of normalized differences for threshold stability. The proposed method is assessed on coincident Sentinel-2 and sub-meter images covering a variety of water types. Compared to previous water indexes, both versions of MuWI produce native 10-m resolution water maps with higher classification accuracies (p-value < 0.01). Commission and omission errors are also significantly reduced, particularly for shadow and sunglint. MuWI obtains consistent accuracy over complex water mapping scenarios due to high threshold stability. Overall, the proposed MuWI method is applicable to accurate water mapping with improved spatial resolution and accuracy, which can facilitate water mapping and related studies and applications on the growing archive of Sentinel-2 images.

    Keywords: MNDWI, OSH, SVM, AWEI, water mapping, water classification, shadow, NDWI, Sentinel-2, MuWI, Landsat, water index, multi-spectral water index, sunglint, machine learning

    Updated 2025-09-23 15:21:01
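The "combination of normalized differences" that MuWI is built from can be illustrated with the classic NDWI-style term below. The green/NIR formula is the standard normalized difference; the toy band values and the simple zero threshold are assumptions here, and the actual MuWI band combination and coefficients are not reproduced.

```python
# Hedged sketch: a normalized-difference water index term of the kind
# MuWI combines. Band reflectances are a toy 2x2 scene, not real imagery.
import numpy as np

def normalized_difference(band_a, band_b):
    """(a - b) / (a + b), guarding against division by zero."""
    a, b = np.asarray(band_a, float), np.asarray(band_b, float)
    denom = a + b
    return np.where(denom == 0, 0.0, (a - b) / denom)

# Left column water-like (green > NIR), right column land-like (NIR > green)
green = np.array([[0.30, 0.10], [0.28, 0.12]])
nir = np.array([[0.05, 0.40], [0.06, 0.35]])
ndwi = normalized_difference(green, nir)
water_mask = ndwi > 0  # single threshold; MuWI combines several such terms for stability
```

Bounding each term to [-1, 1] via the normalized difference is what gives such indexes their relatively stable thresholds across scenes.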

  • Automated Tuning of Double Quantum Dots into Specific Charge States Using Neural Networks

    Abstract: While quantum dots are at the forefront of quantum-device technology, the tuning of multidot systems requires a lengthy experimental process, as multiple parameters need to be accurately controlled. This process becomes increasingly time-consuming and difficult to perform manually as the devices become more complex and the number of tuning parameters grows. In this work, we present a crucial step toward automated tuning of quantum-dot qubits. We introduce an algorithm driven by machine learning that uses a small number of coarse-grained measurements as its input and tunes the quantum-dot system into a preselected charge state. We train and test our algorithm on a GaAs double-quantum-dot device and consistently arrive at the desired state or its immediate neighborhood.

    Keywords: neural networks, automated tuning, charge states, machine learning, quantum dots

    Updated 2025-09-23 15:21:01

  • Damage Online Inspection in Large-Aperture Final Optics (Pattern Recognition and Computer Vision, First Chinese Conference, PRCV 2018, Guangzhou, China, November 23-26, 2018; Lecture Notes in Computer Science, Volume 11256, Proceedings, Part I)

    Abstract: Under the condition of inhomogeneous total internal reflection illumination, a novel machine-learning-based approach is proposed to solve the problem of online damage inspection in large-aperture final optics. Online damage inspection mainly involves three problems: automatic classification of true and false laser-induced damage (LID), automatic classification of input- and exit-surface LID, and size measurement of the LID. We first use the local area signal-to-noise ratio (LASNR) algorithm to segment all candidate sites in the image, then use a kernel-based extreme learning machine (K-ELM) to distinguish true from false damage sites among the candidates, propose an autoencoder-based extreme learning machine (A-ELM) to distinguish input- from exit-surface damage sites among the true damage sites, and finally propose a hierarchical kernel extreme learning machine (HK-ELM) to predict the damage size. The experimental results show that the proposed method performs better than traditional methods: the accuracy is 97.46% in classifying true and false damage and 97.66% in classifying input- and exit-surface damage, and the mean relative error of the predicted size is within 10%. The proposed method thus meets the technical requirements for online damage inspection.

    Keywords: size measurement, damage online inspection, classification, laser-induced damage, machine learning

    Updated 2025-09-23 15:21:01
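The extreme learning machine (ELM) that the K-ELM, A-ELM, and HK-ELM variants above build on can be sketched in its generic form: a hidden layer with random, untrained weights, and output weights solved in closed form by least squares. This is the textbook ELM on a toy task, not the paper's kernel or hierarchical variants.

```python
# Hedged sketch: a basic extreme learning machine (ELM).
# Hidden weights are random and fixed; only the output weights are fit.
import numpy as np

rng = np.random.default_rng(3)

def elm_train(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary task: the label is the sign of the first feature
X = rng.normal(size=(300, 4))
y = np.sign(X[:, 0])
W, b, beta = elm_train(X[:200], y[:200])
pred = np.sign(elm_predict(X[200:], W, b, beta))
accuracy = (pred == y[200:]).mean()
```

Because only `beta` is solved for, training reduces to a single linear least-squares problem, which is what makes ELM variants fast to train on inspection imagery.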