
oe1 (光电查) - Scientific Papers

10 records
  • High-Speed Neuromorphic Reservoir Computing Based on a Semiconductor Nanolaser with Optical Feedback Under Electrical Modulation

    Abstract: A high-speed neuromorphic reservoir computing system based on a semiconductor nanolaser with optical feedback (SNL-based RC) under electrical modulation is proposed for the first time and demonstrated numerically. A Santa-Fe chaotic time series prediction task is employed to quantify the prediction performance of the SNL-based RC system. The effects of the Purcell cavity-enhanced spontaneous emission factor F and the spontaneous emission coupling factor β on the proposed RC system are analyzed extensively. It is found that, in general, increased F and β extend the range of good prediction performance of the SNL-based RC system. Moreover, the influences of bias current and feedback phase are also considered. Due to the ultra-short photon lifetime in the SNL, the information processing rate of the SNL-based RC system reaches 10 Gbps. The proposed high-speed SNL-based RC system provides theoretical guidelines for the design of RC-based integrated neuromorphic photonic systems.

    Keywords: spontaneous emission coupling factor, Purcell factor, reservoir computing, semiconductor nanolaser, delay systems

    Updated 2025-09-23 15:21:01
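A minimal sketch of the reservoir computing principle behind the entry above: a fixed random recurrent network is driven by a chaotic series, and only a linear readout is trained for one-step-ahead prediction. The logistic map stands in for the Santa Fe laser data, and every size and scaling below is an illustrative assumption rather than the paper's values.

```python
# Echo state network (a software analogue of a laser-based reservoir):
# fixed random reservoir, trained linear readout only.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in chaotic series (the paper uses Santa Fe laser data).
u = np.empty(4000)
u[0] = 0.4
for n in range(len(u) - 1):
    u[n + 1] = 4.0 * u[n] * (1.0 - u[n])

N = 200                                   # reservoir size (assumed)
Win = rng.uniform(-0.5, 0.5, N)           # fixed random input weights
W = rng.normal(0.0, 1.0, (N, N))          # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

# Drive the reservoir and collect its states.
x = np.zeros(N)
states = np.empty((len(u) - 1, N))
for n in range(len(u) - 1):
    x = np.tanh(Win * u[n] + W @ x)
    states[n] = x

# Train only the readout (ridge regression) to predict u[n+1].
washout, split = 100, 3000
X, y = states[washout:split], u[washout + 1:split + 1]
Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

pred = states[split:] @ Wout
target = u[split + 1:]
print("test NMSE:", np.mean((pred - target) ** 2) / np.var(target))
```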

  • Parallel information processing using a reservoir computing system based on mutually coupled semiconductor lasers

    Abstract: Via nonlinear channel equalization and Santa-Fe time series prediction, the parallel processing capability of a reservoir computing (RC) system based on two mutually coupled semiconductor lasers is demonstrated numerically. The results show that, for parallel processing of two Santa-Fe time series prediction tasks at rates of 0.25 GSa/s, the minimum prediction errors are 3.8 × 10⁻⁵ and 4.4 × 10⁻⁵, respectively. For parallel processing of two nonlinear channel equalization tasks, the minimum symbol error rate (SER) is 3.3 × 10⁻⁴ for both tasks. For parallel processing of a nonlinear channel equalization task and a Santa-Fe time series prediction task, the minimum SER is 6.7 × 10⁻⁴ for the channel equalization, and the minimum prediction error is 4.6 × 10⁻⁵ for the time series prediction.

    Keywords: Santa-Fe time series prediction, parallel processing, reservoir computing, nonlinear channel equalization, semiconductor lasers

    Updated 2025-09-19 17:13:59
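The nonlinear channel equalization benchmark used above can be generated from the channel model that is standard in the RC literature (Jaeger & Haas, Science 2004). The tap coefficients below are quoted from memory and should be treated as an assumption, not as this paper's exact setup.

```python
# Nonlinear channel equalization task: recover the symbol stream d[n]
# from a signal distorted by inter-symbol interference, a memoryless
# nonlinearity, and additive noise.
import numpy as np

rng = np.random.default_rng(1)
d = rng.choice([-3.0, -1.0, 1.0, 3.0], size=10_000)   # i.i.d. symbols

# Linear channel with inter-symbol interference (taps assumed from the
# standard benchmark; np.roll wraps at the edges, tolerable in a sketch).
taps = {2: 0.08, 1: -0.12, 0: 1.0, -1: 0.18, -2: -0.1,
        -3: 0.091, -4: -0.05, -5: 0.04, -6: 0.03, -7: 0.01}
q = np.zeros_like(d)
for k, c in taps.items():
    q += c * np.roll(d, -k)          # roll(-k)[n] == d[n + k]

# Memoryless nonlinearity plus Gaussian noise (~28 dB SNR, assumed).
u = q + 0.036 * q**2 - 0.011 * q**3
u = u + rng.normal(0.0, 10 ** (-28 / 20) * u.std(), size=u.shape)

# The equalizer's job is to recover d[n] from u. In the parallel scheme
# above, two such streams drive the two mutually coupled lasers at once,
# and each task gets its own trained readout.
```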

  • Machine learning algorithms for predicting the amplitude of chaotic laser pulses

    Abstract: Forecasting the dynamics of chaotic systems from the analysis of their output signals is a challenging problem with applications in most fields of modern science. In this work, we use a laser model to compare the performance of several machine learning algorithms for forecasting the amplitude of upcoming emitted chaotic pulses. We simulate the dynamics of an optically injected semiconductor laser that presents a rich variety of dynamical regimes when changing the parameters. We focus on a particular dynamical regime that can show ultrahigh-intensity pulses, reminiscent of rogue waves. We compare the quality of the forecast for several popular machine learning methods, namely deep learning, support vector machine, nearest neighbors, and reservoir computing. Finally, we analyze how their performance in predicting the height of the next optical pulse depends on the amount of noise and the length of the time series used for training.

    Keywords: chaotic systems, laser pulses, reservoir computing, deep learning, forecasting, support vector machine, machine learning, nearest neighbors

    Updated 2025-09-16 10:30:52
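An illustrative head-to-head of two of the predictor families compared above (nearest neighbors and support vector machine), using scikit-learn on a stand-in chaotic series. The paper works on simulated laser pulse amplitudes, so the data, embedding window, and hyperparameters here are all assumptions.

```python
# Predict the next value of a chaotic series from a window of past values.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Stand-in "pulse amplitude" sequence from the logistic map.
a = np.empty(3000)
a[0] = 0.3
for n in range(len(a) - 1):
    a[n + 1] = 4.0 * a[n] * (1.0 - a[n])

w = 5                                       # embedding window (assumed)
X = np.lib.stride_tricks.sliding_window_view(a[:-1], w)
y = a[w:]                                   # target: the next amplitude

split = 2000
for name, model in [("kNN", KNeighborsRegressor(n_neighbors=5)),
                    ("SVR", SVR(C=10.0, gamma="scale"))]:
    model.fit(X[:split], y[:split])
    mse = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
    print(f"{name} test MSE: {mse:.3e}")
```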

  • Cross-predicting the dynamics of an optically injected single-mode semiconductor laser using reservoir computing

    Abstract: In real-world dynamical systems, technical limitations may prevent complete access to their dynamical variables. Such a lack of information may cause significant problems, especially when monitoring or controlling the dynamics of the system is required or when decisions need to be taken based on the dynamical state of the system. Cross-predicting the missing data is, therefore, of considerable interest. Here, we use a machine learning algorithm based on reservoir computing to perform cross-prediction of unknown variables of a chaotic dynamical laser system. In particular, we choose a realistic model of an optically injected single-mode semiconductor laser. While the intensity of the laser can often be acquired easily, measuring the phase of the electric field and the carriers in real time, although possible, requires a more demanding experimental scheme. We demonstrate that the dynamics of two of the three dynamical variables describing the state of the laser can be reconstructed accurately from the knowledge of only one variable, if our algorithm has been trained beforehand with all three variables for a limited period of time. We analyze the accuracy of the method depending on the parameters of the laser system and the reservoir. Finally, we test the robustness of the cross-prediction method when adding noise to the time series. The suggested reservoir computing state observer might be used in many applications, including reconstructing time series, recovering lost time series data, and testing data encryption security in cryptography based on chaotic synchronization of lasers.

    Keywords: reservoir computing, chaotic dynamics, cross-prediction, machine learning, semiconductor laser

    Updated 2025-09-16 10:30:52
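A sketch of the cross-prediction (state observer) scheme described above: drive a reservoir with the one accessible variable, train a linear readout on a limited window where all variables are known, then reconstruct the hidden variables from that point on. A Lorenz system stands in for the laser's intensity, phase, and carrier variables; every parameter below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in 3-variable chaotic system (Euler-integrated Lorenz).
T, dt = 20_000, 0.01
s = np.empty((T, 3))
s[0] = (1.0, 1.0, 1.0)
for n in range(T - 1):
    x, y, z = s[n]
    s[n + 1] = s[n] + dt * np.array([10.0 * (y - x),
                                     x * (28.0 - z) - y,
                                     x * y - (8.0 / 3.0) * z])
obs, hidden = s[:, 0], s[:, 1:]       # observed variable vs hidden ones

# Fixed random reservoir driven by the observed variable only.
N = 300
Win = rng.uniform(-0.1, 0.1, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

r = np.zeros(N)
R = np.empty((T, N))
for n in range(T):
    r = np.tanh(Win * obs[n] + W @ r)
    R[n] = r

# Train the readout on a window where all variables were measured ...
washout, split = 500, 15_000
A = R[washout:split]
Wout = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N),
                       A.T @ hidden[washout:split])

# ... then reconstruct the hidden variables from the observed one alone.
recon = R[split:] @ Wout
print("reconstruction NMSE:",
      np.mean((recon - hidden[split:]) ** 2) / hidden[split:].var())
```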

  • Distributed Kerr Non-linearity in a Coherent All-Optical Fiber-Ring Reservoir Computer

    Abstract: We investigate, both numerically and experimentally, the usefulness of a distributed non-linearity in a passive coherent photonic reservoir computer. This computing system is based on a passive coherent optical fiber-ring cavity in which part of the non-linearities are realized by the Kerr non-linearity. Linear coherent reservoirs can solve difficult tasks but are aided by non-linear components in their input and/or output layer. Here, we compare the impact of non-linear transformations of information in the reservoir's input layer, its bulk (the fiber-ring cavity), and its readout layer. For the injection of data into the reservoir, we compare a linear input mapping to the non-linear transfer function of a Mach-Zehnder modulator. For the reservoir bulk, we quantify the impact of the optical Kerr effect. For the readout layer, we compare a linear output to a quadratic output implemented by a photodiode. We find that optical non-linearities in the reservoir itself, such as the optical Kerr non-linearity studied in the present work, enhance the task solving capability of the reservoir. This suggests that such non-linearities will play a key role in future coherent all-optical reservoir computers.

    Keywords: Kerr, fiber-ring, distributed non-linearity, photonic, coherent, reservoir computing, passive

    Updated 2025-09-16 10:30:52
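The comparison above concerns where nonlinearity enters an otherwise linear coherent reservoir. The assumed functional forms below (not the paper's exact device models) make the three options concrete: linear versus Mach-Zehnder input mapping, a distributed Kerr phase rotation in the cavity, and linear versus photodiode readout.

```python
import numpy as np

def input_linear(u):
    # Linear input mapping: the field amplitude encodes u directly.
    return u

def input_mzm(u, bias=np.pi / 4):
    # Mach-Zehnder modulator: the transmitted field is a sinusoidal
    # function of the drive, a nonlinearity in the input layer.
    return np.cos(bias + u)

def kerr_roundtrip(x, gamma=0.01):
    # Distributed Kerr non-linearity: an intensity-dependent phase
    # rotation accumulated by the field on each cavity round trip.
    return x * np.exp(1j * gamma * np.abs(x) ** 2)

def readout_linear(x):
    # Coherent (field) detection: a linear readout.
    return x.real

def readout_photodiode(x):
    # A photodiode measures intensity |x|^2: a quadratic readout.
    return np.abs(x) ** 2
```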

  • Reservoir computing and decision making using laser dynamics for photonic accelerator

    Abstract: The photonic accelerator is a new paradigm of photonic technologies for artificial intelligence, wherein photonic systems accelerate the information processing performed in electronic computing. Reservoir computing and decision-making schemes are promising examples of the photonic accelerator. In this work, we describe recent advances in architectures for reservoir computing and decision making based on complex photonics, wherein the physics of light, including ultrafast dynamics in semiconductor lasers with optical feedback, plays a crucial role.

    Keywords: reservoir computing, optical feedback, photonic accelerator, decision making, semiconductor lasers

    Updated 2025-09-16 10:30:52

  • Task-Independent Computational Abilities of Semiconductor Lasers with Delayed Optical Feedback for Reservoir Computing

    Abstract: Reservoir computing has rekindled neuromorphic computing in photonics. One of the simplest technological implementations of reservoir computing consists of a semiconductor laser with delayed optical feedback. In this delay-based scheme, virtual nodes are distributed in time with a certain node distance and form a time-multiplexed network. The information processing performance of a semiconductor laser-based reservoir computing (RC) system is usually analysed by testing the laser-based reservoir computer on specific benchmark tasks. In this work, we illustrate the optimal performance of the system on a chaotic time-series prediction benchmark. However, the goal is to analyse the reservoir's performance in a task-independent way. This is done by calculating the computational capacity, a measure of the total number of independent calculations that the system can handle. We focus on the dependence of the computational capacity on the specifics of the masking procedure. We find that the computational capacity depends strongly on the virtual node distance, with an optimal node spacing of 30 ps. In addition, we show that the computational capacity can be further increased by allowing for a well-chosen mismatch between the delay and the input data sample time.

    Keywords: reservoir computing, neuromorphic computing, delay, feedback, semiconductor laser

    Updated 2025-09-12 10:27:22
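A sketch of the masking procedure analysed above: each input sample is held for one delay period and multiplied by a piecewise-constant mask, which time-multiplexes Nv virtual nodes separated by the node distance theta. The toy nonlinearity and sizes are assumptions; only the 30 ps node distance comes from the entry.

```python
import numpy as np

rng = np.random.default_rng(3)

Nv = 50                          # virtual nodes per delay interval (assumed)
theta = 30e-12                   # node distance, cf. the 30 ps optimum above
tau = Nv * theta                 # delay matched to the input sample time;
                                 # the entry shows a deliberate mismatch helps

mask = rng.choice([-1.0, 1.0], size=Nv)   # binary mask, one value per node
u = rng.normal(size=200)                  # one input sample per delay period

# Time-multiplexed drive: sample n is held for tau and modulated by the
# mask, so node k of sample n occupies the slot starting at n*tau + k*theta.
drive = (u[:, None] * mask[None, :]).ravel()

# In the real system `drive` modulates the laser, and the reservoir state
# is the laser response sampled once per theta; a toy nonlinearity here.
states = np.tanh(drive).reshape(len(u), Nv)   # Nv virtual nodes per sample
```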

  • Optical Neural Network by Disordered Tumor Spheroids (IEEE CLEO/Europe-EQEC 2019, Munich, Germany, 23-27 June 2019)

    Abstract: Optical neuromorphic computing processes information at the speed of light, but requires a careful design and fabrication of the deep layers, which strongly hampers the development of large-scale photonic learning machines [1,2]. New paradigms, such as reservoir computing [3], suggest that brain-inspired complex systems such as disordered and biological materials may realize artificial neural networks with thousands of computational nodes trained only at the input and at the readout. Here we use real brain cells to realize a bio-inspired optical neural network able to extract information about cancer morphodynamics and chemotherapy that is inaccessible to imaging methods [4]. Specifically, we consider glioblastoma tumor spheroids as three-dimensional deep computational reservoirs with thousands of cells acting as wave-mixing nodes for the input light beam. These tumor models are widely used in oncology and are a promising platform for studying complex cell-to-cell interactions and anti-cancer therapeutics. In our hybrid bio/photonic scheme, the tumor model cellular layers are the diffractive deep layers of the optical neural network [Fig. 1(a)]. By exploiting structured light propagation in the disordered assembly [5], we show that the random neural network is a universal optical interpolant able to perform programmed functions in the transmission plane. Through external stimuli on the tumor brain cells, either thermal or chemical in nature, we control the internal weights of the living reservoir and its functionality. Once trained, the response of the living optical neural network follows subcellular cancer morphodynamics not detected by more invasive and destructive optical imaging. In Fig. 1(b) we demonstrate morphodynamics sensing by inducing hyperthermia with an infrared pump laser. Moreover, we track cellular processes in the tumor model beyond simple unconstrained growth; in Fig. 1(c) the network output allows us to quantify the effect of chemotherapy inhibiting tumor growth. In this case, we realize a non-invasive smart probe for cytotoxicity assays that is at least one order of magnitude more sensitive than conventional imaging. Our random, hybrid photonic/living system is a novel artificial machine for computing and for the real-time investigation of tumor dynamics.

    Keywords: optical neural network, cancer morphodynamics, disordered tumor spheroids, reservoir computing, neuromorphic computing, chemotherapy

    Updated 2025-09-11 14:15:04

  • Phase Noise Robustness of a Coherent Spatially Parallel Optical Reservoir (IEEE CLEO/Europe-EQEC 2019, Munich, Germany, 23-27 June 2019)

    Abstract: Reservoir computing is a machine learning technique that is particularly suited to processing time-dependent information. It also lends itself well to experimental implementation, in particular photonic implementation. In the present work we propose strategies to improve the phase noise robustness of a spatially parallel reservoir computer based on a coherent photonic cavity without active stabilization. The system, described in [4], is currently under development in our laboratory. It is based on a linear Fabry-Pérot resonator, with neurons encoded as a 9 × 9 grid of focused spots on the input mirror plane, and a nonlinear readout operating at 0.5 GHz. The coupling between neurons is realized by a Spatial Light Modulator (SLM) placed at the back plane of the cavity and is optimized for each task. We present numerical results obtained for the 4-level channel equalization task, and we find similar results for the NARMA10 task.

    Keywords: Fabry-Pérot resonator, spatial light modulator, phase noise robustness, reservoir computing, photonic implementations

    Updated 2025-09-11 14:15:04
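The NARMA10 benchmark mentioned above is a tenth-order nonlinear auto-regressive task driven by i.i.d. uniform input. The generator below uses the recurrence and coefficients common in the RC literature, which may differ in detail from this group's exact setup.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000
u = rng.uniform(0.0, 0.5, T)     # input sequence
y = np.zeros(T)                  # target sequence
for t in range(9, T - 1):
    y[t + 1] = (0.3 * y[t]
                + 0.05 * y[t] * y[t - 9:t + 1].sum()   # last 10 outputs
                + 1.5 * u[t - 9] * u[t]
                + 0.1)
# The reservoir is trained to output y[t] given u[t]; note the series
# can diverge for unlucky draws, so some papers clip or regenerate it.
```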

  • Reservoir Computing with Untrained Convolutional Neural Networks for Image Recognition (IEEE ICPR 2018, Beijing, China, 20-24 August 2018)

    Abstract: Reservoir computing has attracted much attention for its easy training process as well as its ability to deal with temporal data. A reservoir computing system consists of a reservoir part, represented as a sparsely connected recurrent neural network, and a readout part, represented as a simple regression model. In machine learning tasks, the reservoir part is fixed and only the readout part is trained. Although reservoir computing has mainly been applied to time series prediction and recognition, it can be applied to image recognition as well by treating image data as a sequence of pixel values. However, to achieve a high performance in image recognition with raw image data, a large-scale reservoir with a large number of neurons is required. This is a bottleneck in terms of computer memory and computational cost. To overcome this bottleneck, we propose a new method which combines reservoir computing with untrained convolutional neural networks. We use an untrained convolutional neural network to transform raw image data into a set of smaller feature maps in a preprocessing step of the reservoir computing. We demonstrate that our method achieves a high classification accuracy in an image recognition task with a much smaller number of trainable parameters compared with a previous study.

    Keywords: reservoir computing, image recognition, untrained networks, convolutional neural networks

    Updated 2025-09-09 09:28:46
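A sketch of the hybrid scheme above: an untrained (random) convolutional layer compresses the raw image into feature maps, which are then fed as a short sequence to a reservoir whose linear readout alone would be trained. The single convolutional layer and all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_conv_features(img, n_filters=8, k=5, stride=4):
    """Untrained conv layer: random k x k filters, ReLU, and striding."""
    patches = np.lib.stride_tricks.sliding_window_view(img, (k, k))
    patches = patches[::stride, ::stride]              # (6, 6, k, k) here
    filters = rng.normal(0.0, 1.0, (n_filters, k, k))  # never trained
    return np.maximum(0.0, np.einsum('ijkl,fkl->fij', patches, filters))

img = rng.random((28, 28))                 # stand-in for an input image
fmaps = random_conv_features(img)          # (8, 6, 6) feature maps
seq = fmaps.reshape(fmaps.shape[0], -1).T  # 36-step sequence of 8-dim inputs

# Drive a small reservoir with the feature sequence; only a readout on
# the final state (e.g., ridge or logistic regression) would be trained.
N = 100
Win = rng.uniform(-0.5, 0.5, (N, seq.shape[1]))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
x = np.zeros(N)
for v in seq:
    x = np.tanh(Win @ v + W @ x)
# `x` is the image's compact reservoir representation for classification.
```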