
oe1 (光电查) - Scientific Papers

10 records
  • [IEEE 2019 IEEE 46th Photovoltaic Specialists Conference (PVSC) - Chicago, IL, USA (2019.6.16-2019.6.21)] 2019 IEEE 46th Photovoltaic Specialists Conference (PVSC) - Observation of Dislocations in Graded Buffer Layers of IMM Single Junction InGaAs Solar Cells by Two-Photon Excitation Photoluminescence

    Abstract: In binary classification, two-way confusion matrices, with corresponding measures such as sensitivity and specificity, have become so ubiquitous that those who review results may not realize there are other and more realistic ways to visualize data. This is particularly true when risk and reward considerations are important. The approach suggested here proposes that classification need not offer a conclusion on every instance within a data set. If an algorithm finds instances (e.g., patient cases in a medical data set) in which attributes pertaining to a patient’s disease offer zero to nil information, there should be no classification offered. From the physician’s perspective, disclosure of nil information should be welcome because it might prevent potentially harmful treatment. It follows that the developer of a classifier can provide summary results amenable to helping the consumer decide whether it is prudent to pass or act (commission versus omission). It is not always about balancing sensitivity and specificity in all cases, but about optimizing action on some cases. The explanation is centered on John Kelly’s link between gambling and Shannon information theory. In addition, Graham’s margin of safety, Bernoulli’s utiles, and the Hippocratic Oath are important. An example problem is provided using a Netherlands Cancer Institute breast cancer data set. Recurrence score, a popular molecular-based assay for breast cancer prognosis, was found to have an uninformative zone. The uninformative subset had been grouped with positive results to garner higher sensitivity. Yet, because of a positive result, patients might be advised to undergo potentially harmful treatment in the absence of useful information.

    Keywords: data compression, cancer, clinical diagnosis, sensitivity and specificity, data analysis, genetic expression, entropy

    Updated 2025-09-23 15:19:57
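The abstract's central idea, that a classifier should be allowed to abstain on instances carrying zero-to-nil information, can be illustrated with a hypothetical three-way decision rule: scores inside an uninformative band receive no label, and sensitivity and specificity are computed only over the decided cases. The function name, thresholds, and data below are illustrative, not taken from the paper.

```python
def three_way_classify(scores, labels, low, high):
    """Three-way rule: score >= high -> positive, score <= low -> negative,
    scores in between fall in the uninformative zone and get no label."""
    tp = fp = tn = fn = abstain = 0
    for s, y in zip(scores, labels):
        if low < s < high:
            abstain += 1            # no decision offered on this case
        elif s >= high:
            tp += y
            fp += 1 - y
        else:
            tn += 1 - y
            fn += y
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec, abstain

# Illustrative recurrence-score-like values with true outcome labels.
scores = [0.9, 0.8, 0.55, 0.5, 0.45, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,    0,   0]
print(three_way_classify(scores, labels, low=0.4, high=0.6))  # → (1.0, 1.0, 3)
```

Note how grouping the three abstained cases with the positives (the practice the abstract criticizes) would raise sensitivity while recommending action on cases that carry no useful information.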

  • [IEEE 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE) - Nara, Japan (2018.10.9-2018.10.12)] 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE) - Accelerating Data Loading for Photo-Based Augmented Reality on Web Browser

    Abstract: A photo-based augmented reality (Photo AR) system is an AR system that uses photographic images instead of computer graphics of three-dimensional models. Photo AR can display realistic AR views to users, but it requires large disk space due to its many images. Although all images can be significantly compressed into one video-format file, the decoding and memory-storage processes are time-consuming. In this study, we propose a method to accelerate data loading for Photo AR operating in a web browser on a mobile device. The experimental results demonstrate the effectiveness of our method.

    Keywords: web browser, data compression, Photo AR, augmented reality

    Updated 2025-09-23 15:19:57

  • Adaptive particle representation of fluorescence microscopy images

    Abstract: Modern microscopes create a data deluge, with gigabytes of data generated each second and terabytes per day. Storing and processing this data is a severe bottleneck, not fully alleviated by data compression. We argue that this is because images are processed as grids of pixels. To address this, we propose a content-adaptive representation of fluorescence microscopy images, the Adaptive Particle Representation (APR). The APR replaces pixels with particles positioned according to image content. The APR overcomes storage bottlenecks, as data compression does, but additionally overcomes memory and processing bottlenecks. Using noisy 3D images, we show that the APR adaptively represents the content of an image while maintaining image quality, and that it enables orders-of-magnitude benefits across a range of image processing tasks. The APR provides a simple and efficient content-aware representation of fluorescence microscopy images.

    Keywords: fluorescence microscopy, content-aware representation, image processing, data compression, Adaptive Particle Representation

    Updated 2025-09-23 15:19:57
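The APR itself is considerably more sophisticated (particle cells, resolution guarantees), but the core idea of placing samples according to content can be sketched with a hypothetical 1-D greedy scheme: keep a sample (a "particle") only where linear interpolation between kept neighbors would exceed a tolerance, so smooth regions get few particles and edges get many. This sketch is an illustration, not the paper's algorithm.

```python
def adaptive_particles(signal, tol):
    """Greedy content-adaptive sampling of a 1-D signal.

    Keeps index i only if predicting signal[i] by linear interpolation
    between the last kept sample and the next sample errs by more than tol.
    Endpoints are always kept."""
    keep = [0]
    for i in range(1, len(signal) - 1):
        a, b = keep[-1], i + 1
        t = (i - a) / (b - a)
        pred = signal[a] * (1 - t) + signal[b] * t
        if abs(pred - signal[i]) > tol:
            keep.append(i)
    keep.append(len(signal) - 1)
    return keep

# Flat region, then a step edge: particles cluster around the edge.
print(adaptive_particles([0, 0, 0, 0, 5, 5, 5, 5], tol=1))  # → [0, 3, 4, 7]
```

Only 4 of 8 samples survive, with density concentrated where the content changes, which is the storage/processing win the abstract describes.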

  • [IEEE 2018 25th IEEE International Conference on Image Processing (ICIP) - Athens (2018.10.7-2018.10.10)] 2018 25th IEEE International Conference on Image Processing (ICIP) - A Fast Palette Reordering Technique Based on GPU-Optimized Genetic Algorithms

    Abstract: Color re-indexing is one of the main approaches for improving the lossless compression of color-indexed images. Reducing the zero-order entropy of the index matrix is the key to obtaining a high compression ratio. However, obtaining the optimal re-indexed palette is a challenging problem that cannot be solved by brute-force approaches. In this paper, we propose a novel re-indexing approach in which the Travelling Salesman Problem is solved through Ant Colony Optimization. Our method is shown to achieve high-quality results, outperforming state-of-the-art methods in terms of compression gain. Additionally, we exploit clustering and GPU computing to make our solution extremely fast.

    Keywords: Ant Colony Optimization, data compression, color, entropy, image coding

    Updated 2025-09-19 17:15:36

  • Scalable Hardware-Based On-Board Processing for Run-Time Adaptive Lossless Hyperspectral Compression

    Abstract: Hyperspectral data processing is a computationally intensive task that is usually performed in high-performance computing clusters. However, in remote sensing scenarios, where communications are expensive, a compression stage is required at the edge of data acquisition before transmitting information to ground stations for further processing. Moreover, hyperspectral image compressors need to meet minimum performance and energy-efficiency levels to cope with the real-time requirements imposed by the sensors and the available power budget. Hence, they are usually implemented as dedicated hardware accelerators in expensive space-grade electronic devices. In recent years, though, these devices have started to coexist with low-cost commercial alternatives in which unconventional techniques, such as run-time hardware reconfiguration, are evaluated within research-oriented space missions (e.g., CubeSats). In this paper, a run-time reconfigurable implementation of a low-complexity lossless hyperspectral compressor (i.e., CCSDS 123) on a commercial off-the-shelf device is presented. The proposed approach leverages an FPGA-based on-board processing architecture with a data-parallel execution model to transparently manage a configurable number of resource-efficient hardware cores, dynamically adapting both throughput and energy efficiency. The experimental results show that this solution is competitive with the current state-of-the-art hyperspectral compressors and that the impact of the parallelization scheme on the compression rate is acceptable when considering the improvements in performance and energy consumption. Moreover, scalability tests prove that run-time adaptation of the compression throughput and energy efficiency can be achieved by modifying the number of hardware accelerators, a feature that can be useful in space scenarios, where requirements change over time (e.g., communication bandwidth or power budget).

    Keywords: dynamic and partial reconfiguration, FPGAs, data compression, high-performance embedded computing, on-board processing, hyperspectral images

    Updated 2025-09-19 17:15:36
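Prediction-based lossless compressors such as CCSDS 123 encode residuals between each sample and a prediction from its neighborhood; the actual standard uses an adaptive 3-D predictor across spectral bands, but the principle can be shown with a toy previous-sample predictor (this sketch is not the CCSDS 123 algorithm):

```python
def predict_residuals(band):
    """Toy 1-D previous-sample predictor. For smooth data the residuals
    cluster near zero, so they entropy-code far more cheaply than the
    raw samples while remaining exactly invertible."""
    prev, out = 0, []
    for x in band:
        out.append(x - prev)
        prev = x
    return out

def reconstruct(residuals):
    """Inverse transform: cumulative sum restores the original samples."""
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

band = [100, 101, 103, 103, 102, 104]
res = predict_residuals(band)
assert reconstruct(res) == band     # lossless round trip
print(res)  # → [100, 1, 2, 0, -1, 2]
```

In hyperspectral data the strong inter-band correlation makes such residuals especially small, which is why the compressor operates at the sensor edge before downlink.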

  • [IEEE 2019 19th International Conference on Advanced Robotics (ICAR) - Belo Horizonte, Brazil (2019.12.2-2019.12.6)] 2019 19th International Conference on Advanced Robotics (ICAR) - Closed-Loop Control of a Magnetically Actuated Fiber-Coupled Laser for Computer-Assisted Laser Microsurgery

    Updated 2025-09-19 17:13:59

  • [IEEE 2019 Photonics North (PN) - Quebec City, QC, Canada (2019.5.21-2019.5.23)] 2019 Photonics North (PN) - In-band pumped composite Nd:YVO/Nd:GVO laser

    Updated 2025-09-19 17:13:59

  • A Global Maximum Power Point Tracking Algorithm for Photovoltaic Systems Under Partially Shaded Conditions Using Modified Maximum Power Trapezium Method

    Abstract: This paper is about how the SP theory of intelligence and its realization in the SP machine (both outlined in this paper) may help in the design of the brains of autonomous robots, meaning robots that do not depend on external intelligence or power supplies, are mobile, and have human-like versatility and adaptability in intelligence. This paper addresses three main problems: 1) how to increase the computational and energy efficiency of computers and to reduce their size and weight; 2) how to achieve human-like versatility in intelligence; and 3) likewise for human-like adaptability in intelligence. Regarding the first problem, the SP system has the potential for substantial gains in computational efficiency, with corresponding cuts in energy consumption and the bulkiness of computers: 1) by reducing the size of data to be processed; 2) by exploiting statistical information that the system gathers as an integral part of how it works; and 3) via a new version of Donald Hebb’s concept of a cell assembly. Toward human-like versatility in intelligence, the SP system has strengths in unsupervised learning, natural language processing, pattern recognition, information retrieval, several kinds of reasoning, planning, problem solving, and more, with seamless integration among structures and functions. The SP system’s strengths in unsupervised learning and other aspects of intelligence may help in achieving human-like adaptability in intelligence via: 1) one-trial learning; 2) learning of natural language; 3) learning to see; 4) building 3-D models of objects and of a robot’s surroundings; 5) learning regularities in the workings of a robot and in the robot’s environment; 6) exploration and play; 7) learning major skills; and 8) learning via demonstration.
Also discussed are how the SP system may process parallel streams of information, generalization of knowledge, correction of over-generalizations, learning from dirty data, how to cut the cost of learning, and reinforcements and motivations.

    Keywords: data compression, pattern recognition, robots, unsupervised learning, artificial intelligence, cognitive science

    Updated 2025-09-19 17:13:59

  • AIP Conference Proceedings [AIP Publishing LLC NEW OPERATIONAL TECHNOLOGIES (NEWOT’2015): Proceedings of the 5th International Scientific Conference “New Operational Technologies” - Tomsk, Russia (29–30 September 2015)] - X-ray microtomographic scanners

    Abstract: The article studies the operating procedures of an X-ray microtomographic scanner, and in particular the module for reconstructing and analysing the 3D image of a test sample. An algorithm for 3D image reconstruction based on shadow projections and the mathematical methods used for processing are described. Chapter 1 describes the basic principles of X-ray tomography and the general procedures of the device developed. Chapters 2 and 3 are devoted to the problem of saving system resources during the X-ray tomography procedure, which is achieved by preprocessing the initial shadow projections. Preprocessing includes removing background noise from the images, which reduces the overall size of the shadow projections and increases the efficiency of compressing groups of shadow projections. In conclusion, the main applications of X-ray tomography are presented.

    Keywords: non-destructive testing, shadow projections, 3D image reconstruction, data compression, X-ray microtomography

    Updated 2025-09-10 09:29:36
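The preprocessing idea in this abstract, zeroing background noise so that shadow projections compress better, can be demonstrated with a small sketch using a general-purpose compressor. The threshold, data, and function name are illustrative; the paper's actual pipeline is not reproduced here.

```python
import random
import zlib

def zero_background(pixels, threshold):
    """Set below-threshold samples to zero so the background becomes
    long runs of identical bytes that compress very well."""
    return bytes(0 if p < threshold else p for p in pixels)

random.seed(0)
# Illustrative shadow-projection row: weak random background noise
# surrounding a short bright signal region.
noise = lambda n: [random.randrange(10) for _ in range(n)]
row = bytes(noise(240) + [200, 220, 210, 230] * 5 + noise(240))

raw = zlib.compress(row)
cleaned = zlib.compress(zero_background(row, threshold=10))
print(len(cleaned) < len(raw))  # → True: denoised projection is smaller
```

Random background noise is essentially incompressible, so removing it before grouping projections is where the abstract's resource savings come from.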

  • [Mathematics and Visualization] Imaging, Vision and Learning Based on Optimization and PDEs (IVLOPDE, Bergen, Norway, August 29 – September 2, 2016) - Relaxed Optimisation for Tensor Principal Component Analysis and Applications to Recognition, Compression and Retrieval of Volumetric Shapes

    Abstract: The mathematical and computational backgrounds of pattern recognition are, respectively, the geometries in Hilbert space used for functional analysis and the applied linear algebra used for numerical analysis. The organs, cells and microstructures in cells dealt with in biomedical image analysis are volumetric data. From the viewpoint of object-oriented data analysis, we are required to process and analyse these data as volumetric data, without embedding them into higher-dimensional vector spaces. Therefore, sampled values of volumetric data are expressed as three-way array data. The aim of the paper is to develop relaxed closed forms of tensor principal component analysis (PCA) for the recognition, classification, compression and retrieval of volumetric data. Tensor PCA derives the tensor Karhunen-Loève transform, which compresses volumetric data, such as organs, cells in organs and microstructures in cells, preserving both the geometric and statistical properties of objects and spatial textures in the space.

    Keywords: pattern recognition, Karhunen-Loève transform, tensor principal component analysis, data compression, volumetric data

    Updated 2025-09-09 09:28:46
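The Karhunen-Loève transform that tensor PCA generalizes can be shown in its classical (vectorized) form: project centered data onto the top eigenvectors of the sample covariance and reconstruct. The synthetic data and component count below are illustrative; the paper's tensor formulation, which avoids this flattening, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 100 "volumes", each flattened to 16 samples,
# generated from 2 latent factors plus small noise (so rank ≈ 2).
factors = rng.normal(size=(100, 2))
basis = rng.normal(size=(2, 16))
data = factors @ basis + 0.01 * rng.normal(size=(100, 16))

# Karhunen-Loève transform: eigendecompose the sample covariance,
# keep the top k = 2 components, project, and reconstruct.
mean = data.mean(axis=0)
centered = data - mean
cov = centered.T @ centered / len(data)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
top = eigvecs[:, -2:]                      # dominant 2 eigenvectors
compressed = centered @ top                # 16 numbers -> 2 per volume
reconstructed = compressed @ top.T + mean

err = np.abs(reconstructed - data).max()
print(err < 0.1)  # → True: 8x compression with tiny reconstruction error
```

The tensor version in the paper applies this idea mode-by-mode to the three-way array directly, which is what preserves the geometric structure the abstract emphasizes.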