- Title
- Abstract
- Keywords
- Experimental protocol
- Products
-
Accelerating single molecule localization microscopy through parallel processing on a high-performance computing cluster
Abstract: Super-resolved microscopy techniques have revolutionized the ability to study biological structures below the diffraction limit. Single molecule localization microscopy (SMLM) techniques are widely used because they are relatively straightforward to implement and can be realized at relatively low cost, e.g. compared to laser scanning microscopy techniques. However, while the data analysis can be readily undertaken using open-source or other software tools, large SMLM data volumes and the complexity of the algorithms used often lead to long image data processing times that can hinder the iterative optimization of experiments. There is increasing interest in high-throughput SMLM, but its further development and application is inhibited by these data processing challenges. We present here a widely applicable approach to accelerating SMLM data processing via a parallelized implementation of ThunderSTORM on a high-performance computing (HPC) cluster and quantify the speed advantage of a four-node cluster (with 24 cores and 128 GB RAM per node) over a high-specification (28 cores, 128 GB RAM, SSD-equipped) desktop workstation. The data processing speed can be readily scaled by accessing more HPC resources. Our approach is not specific to ThunderSTORM and can be adapted for a wide range of SMLM software.
Keywords: super-resolved microscopy, high-performance computing, automated image analysis
Updated 2025-11-21 11:24:58
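The speedup described in this abstract rests on frames being mutually independent, so the stack can be split into chunks and the per-chunk localization tables merged afterwards. The following is a minimal sketch of that structure only; `localize_frame` is a hypothetical stand-in for a ThunderSTORM-style fitter, and on a real cluster each chunk would be a separate scheduler job rather than a thread.

```python
# Hedged sketch: embarrassingly parallel per-frame SMLM analysis.
# Frames are independent, so they can be mapped over a worker pool
# and the localization tables merged. `localize_frame` is a made-up
# placeholder, not ThunderSTORM's actual algorithm or API.
from concurrent.futures import ThreadPoolExecutor

def localize_frame(frame):
    # placeholder "detector": report (pixel index, value) above a threshold
    threshold = 0.5
    return [(i, v) for i, v in enumerate(frame) if v > threshold]

def localize_stack(frames, workers=2):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_frame = list(pool.map(localize_frame, frames))
    # merge, tagging each localization with its frame number
    return [(f, i, v) for f, locs in enumerate(per_frame) for i, v in locs]
```

On an HPC cluster the worker pool would be replaced by one batch job per chunk (e.g. one headless ThunderSTORM instance per node), with the merge step run once all jobs finish.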
-
A Direct Time Parallel Solver by Diagonalization for the Wave Equation
Abstract: With the advent of very large scale parallel computers, it has become increasingly important to also use the time direction for parallelization when solving evolution problems. While there are many successful algorithms for diffusive problems, only some of them are also effective for hyperbolic problems. We present here a mathematical analysis of a new method based on the diagonalization of the time-stepping matrix proposed by Maday and Rønquist in 2007. Like many time-parallelization methods, at first this does not seem a very promising approach: the matrix is essentially triangular, or, for equidistant time steps, actually a Jordan block, and thus not diagonalizable. If, however, one chooses different time steps, diagonalization is possible, and one must trade off the accuracy lost by necessarily using different time steps against the numerical errors in the diagonalization of these almost non-diagonalizable matrices. We present for the first time such a diagonalization technique for the Newmark scheme for solving wave equations, and derive a mathematically rigorous optimization strategy for the choice of the parameters in the special case when the Newmark scheme reduces to Crank–Nicolson. Our analysis shows that small- to medium-scale time parallelization is possible with this approach. We illustrate our results with numerical experiments for model wave equations in various dimensions, and also with an industrial test case for the elasticity equations with variable coefficients.
Keywords: time parallelism, high performance computing, direct solver
Updated 2025-09-23 15:22:29
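The core idea summarized above can be illustrated on the simplest possible case: backward Euler for the scalar test equation u' = a·u (not the Newmark/Crank–Nicolson scheme the paper actually analyzes). With distinct time steps, the all-at-once time-stepping matrix is lower bidiagonal with distinct diagonal entries 1/Δt_n, hence diagonalizable, and the solve decouples into N independent (parallelizable) scalar solves. All names here are illustrative.

```python
# Hedged sketch of diagonalization in time for u' = a*u, backward Euler.
# Equation row n: (u_n - u_{n-1})/dt_n = a*u_n, i.e. (T - a*I) u = f,
# where T is lower bidiagonal. Distinct dt_n => T is diagonalizable.
import numpy as np

def sequential(u0, a, dts):
    # reference: step through time one step at a time
    u, out = u0, []
    for dt in dts:
        u = u / (1.0 - a * dt)
        out.append(u)
    return np.array(out)

def diagonalized(u0, a, dts):
    N = len(dts)
    T = np.zeros((N, N))          # all-at-once time-stepping matrix
    f = np.zeros(N)
    for n, dt in enumerate(dts):
        T[n, n] = 1.0 / dt
        if n > 0:
            T[n, n - 1] = -1.0 / dt
    f[0] = u0 / dts[0]
    D, S = np.linalg.eig(T)       # distinct steps -> T = S diag(D) S^{-1}
    g = np.linalg.solve(S, f)
    y = g / (D - a)               # N independent solves: the parallel step
    return (S @ y).real
```

As the abstract notes, the catch is conditioning: the closer the steps are to equal, the closer T is to a Jordan block and the worse the eigenvector matrix S is conditioned, which is exactly the trade-off the paper optimizes.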
-
Automated Analysis of Remotely Sensed Images Using the UNICORE Workflow Management System (IEEE IGARSS 2018 - International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018.7.22-2018.7.27)
Abstract: The progress of remote sensing technologies leads to an increased supply of high-resolution image data. However, solutions for processing large volumes of data are lagging behind: desktop computers can no longer cope with the requirements of macro-scale remote sensing applications, so parallel methods running in High-Performance Computing (HPC) environments are essential. Managing an HPC processing pipeline is non-trivial for a scientist, especially when the computing environment is heterogeneous and the set of tasks has complex dependencies. This paper proposes an end-to-end scientific workflow approach based on the UNICORE workflow management system for automating the full chain of Support Vector Machine (SVM)-based classification of remotely sensed images. The high-level nature of UNICORE workflows makes it possible to deal with the heterogeneity of HPC computing environments and offers powerful workflow operations such as those needed for parameter sweeps. As a result, the remote sensing workflow of SVM-based classification becomes reusable across different computing environments, increasing usability and reducing effort for the scientist.
Keywords: High-Performance Computing (HPC), Remote Sensing, Scientific Workflows, UNICORE, Support Vector Machine (SVM)
Updated 2025-09-23 15:21:21
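The workflow shape this abstract describes, a chain with dependencies plus a parameter sweep fanning out and joining back in, can be sketched abstractly as a dependency-ordered task runner. This toy runner and the task names (`preprocess`, `train_C*`, `classify`) are illustrative only; UNICORE's actual workflow language and API are quite different.

```python
# Hedged sketch: dependency-ordered execution of a task graph with a
# parameter sweep, mimicking the shape of an SVM classification chain
# (preprocess -> one training task per sweep point -> classify).
def run_workflow(tasks, deps):
    """Execute callables in `tasks` so that every task in deps[t] runs before t."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in sorted(tasks) if t not in done and deps.get(t, set()) <= done]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for t in ready:           # on a real system, `ready` tasks run in parallel
            tasks[t]()
            done.add(t)
            order.append(t)
    return order

log = []
sweep = [0.1, 1.0, 10.0]          # hypothetical SVM cost parameters C
tasks = {"preprocess": lambda: log.append("preprocess")}
deps = {}
for C in sweep:                   # fan-out: one training task per sweep point
    name = f"train_C{C}"
    tasks[name] = (lambda n: (lambda: log.append(n)))(name)
    deps[name] = {"preprocess"}
tasks["classify"] = lambda: log.append("classify")
deps["classify"] = {f"train_C{C}" for C in sweep}   # fan-in join
order = run_workflow(tasks, deps)
```

In UNICORE, the same structure would be expressed declaratively (sweep as a for-each construct, tasks as job descriptions), which is what makes the chain portable across heterogeneous HPC sites.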
-
How Dubai is Becoming a Smart City? (IEEE 2019 International Workshop on Fiber Optics in Access Networks (FOAN), Sarajevo, Bosnia and Herzegovina, 2019.9.2-2019.9.4)
Abstract: Quantitative retrieval is a growing area in remote sensing due to the rapid development of remote instruments and retrieval algorithms. Aerosol optical depth (AOD) is a significant optical property of aerosols involved in further applications such as the atmospheric correction of remotely sensed surface features and the monitoring of volcanic eruptions, forest fires, air quality, and even climate change from satellite data. AOD retrieval can be computationally expensive as a result of the huge amounts of remote sensing data and the compute-intensive algorithms involved. In this paper, we present two efficient implementations of an AOD retrieval algorithm for moderate resolution imaging spectroradiometer (MODIS) satellite data, employing two different high performance computing architectures: multicore processors and a graphics processing unit (GPU). Compute unified device architecture C (CUDA-C) was used for the GPU implementation on NVIDIA graphics cards, and open multiprocessing (OpenMP) for thread parallelism in the multicore implementation. We observe for the GPU accelerator a maximal overall speedup of 68.x for the studied data, whereas the multicore processor achieves a reasonable 7.x speedup. Additionally, for the largest benchmark input dataset, the GPU implementation also shows a great advantage in energy efficiency, with an overall consumption of 3.15 kJ compared to 58.09 kJ on a CPU with 1 thread and 38.39 kJ with 16 threads. Furthermore, the retrieval accuracy of all implementations has been checked and analyzed. Altogether, the GPU accelerator shows clear advantages for AOD retrieval in both performance and energy efficiency, while the multicore processor offers easier programmability for the majority of today's programmers. Our work examines the parallel implementations, performance, and energy efficiency of GPU accelerators and multicore processors, and aims to offer guidance to geoscientists who need efficient desktop solutions.
Keywords: high performance computing (HPC), OpenMP, quantitative remote sensing retrieval, graphics processing unit (GPU), aerosol optical depth (AOD)
Updated 2025-09-19 17:13:59
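What makes this retrieval amenable to both OpenMP and CUDA is that it is per-pixel independent. A minimal sketch of that data-parallel structure follows, using a vectorized nearest-match against a lookup table of simulated reflectances; the table values and the matching rule are invented for illustration and are not the paper's MODIS retrieval physics.

```python
# Hedged sketch: per-pixel AOD retrieval as a data-parallel operation.
# Each pixel's reflectance is matched against a (hypothetical) lookup
# table of simulated reflectances, and the AOD of the closest entry is
# taken. Every pixel is independent, which is exactly the structure
# that maps onto OpenMP threads or CUDA thread blocks.
import numpy as np

def retrieve_aod(reflectance, table_refl, table_aod):
    # reflectance: (H, W); table_refl, table_aod: matched (K,) pairs
    diff = np.abs(reflectance[..., None] - table_refl)   # (H, W, K)
    best = diff.argmin(axis=-1)    # closest simulated reflectance per pixel
    return table_aod[best]         # per-pixel AOD estimate
```

In the OpenMP version the outer pixel loop would carry a `#pragma omp parallel for`; in CUDA, each pixel (or tile of pixels) would be handled by one thread, which is why the paper's GPU speedup scales with image size.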
-
Scaling Support Vector Machines Towards Exascale Computing for Classification of Large-Scale High-Resolution Remote Sensing Images (IEEE IGARSS 2018 - International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018.7.22-2018.7.27)
Abstract: Progress in sensor technology leads to an ever-increasing amount of remote sensing data which needs to be classified in order to extract information. This large volume of data requires parallel processing, i.e. running parallel implementations of classification algorithms, such as Support Vector Machines (SVMs), on High-Performance Computing (HPC) clusters. Tomorrow's supercomputers will be able to provide exascale computing performance by using specialised hardware accelerators. However, existing software processing chains need to be adapted to make use of the best-fitting accelerators. To address this problem, a mapping of an SVM remote sensing classification chain to the Dynamical Exascale Entry Platform (DEEP), a European pre-exascale platform, is presented. This will make it possible to scale SVM-based classifications onto tomorrow's hardware towards exascale performance.
Keywords: Exascale Computing, High-Performance Computing (HPC), Hardware Accelerators, Remote Sensing, Support Vector Machines (SVMs)
Updated 2025-09-10 09:29:36