-
3D auto-context-based locality adaptive multi-modality GANs for PET synthesis
Abstract: Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality PET image from a low-dose one, thereby reducing radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one, using the accompanying MRI images that provide anatomical information. Our work makes four contributions. First, unlike traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary across image locations, so a single unified kernel for the whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we use a 1×1×1 kernel to learn this locality adaptive fusion, so that the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of the synthesized images. Experimental results show that our method outperforms both the traditional multi-modality fusion methods used in deep networks and the state-of-the-art PET estimation approaches.
Keywords: Image synthesis, Positron emission tomography (PET), Locality adaptive fusion, Generative adversarial networks (GANs), Multi-modality
Updated 2025-09-23 15:21:01
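Because a 1×1×1 convolution acts on each voxel independently, the locality adaptive fusion described above reduces to a per-voxel weighted combination of the modalities. The following NumPy sketch illustrates the idea with hypothetical, fixed score maps `w_pet` and `w_mri`; in the paper these weights are produced by learned 1×1×1 convolutions trained jointly with the GAN:

```python
import numpy as np

def locality_adaptive_fusion(pet_lowdose, mri, w_pet, w_mri):
    """Fuse two co-registered 3D volumes with per-voxel weights.

    A 1x1x1 convolution over stacked modalities is equivalent to a
    voxel-wise linear combination, so the learned fusion reduces to one
    weight map per modality (hypothetical fixed maps here; the paper
    learns them jointly with the synthesis network).
    """
    scores = np.stack([w_pet, w_mri])            # (2, D, H, W)
    # Softmax over the modality axis so weights sum to 1 at every voxel.
    e = np.exp(scores - scores.max(axis=0))
    weights = e / e.sum(axis=0)
    vols = np.stack([pet_lowdose, mri])          # (2, D, H, W)
    return (weights * vols).sum(axis=0)          # (D, H, W)

# Toy example: equal scores everywhere give a plain voxel-wise average.
a = np.ones((4, 4, 4))
b = 3.0 * np.ones((4, 4, 4))
fused = locality_adaptive_fusion(a, b, np.zeros_like(a), np.zeros_like(a))
```

With equal scores the softmax assigns weight 0.5 to each modality, so `fused` is the average of the two volumes; spatially varying scores shift the balance voxel by voxel.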
-
TENCON 2019 - 2019 IEEE Region 10 Conference (Kochi, India, 2019.10.17-2019.10.20) - Use of Novel Hybrid Plasmonic Nanoparticle Complexes to Increase the Efficiency of Thin-film Solar Cells
Abstract: To exploit the synergy between computed tomography (CT) and magnetic resonance imaging (MRI) data sets acquired from the same object at the same time, an edge-guided dual-modality image reconstruction approach is proposed. The key is to establish a knowledge-based connection between these two data sets for the tight fusion of the different imaging modalities. Our scheme consists of four inter-related elements: 1) segmentation; 2) initial guess generation; 3) CT image reconstruction; and 4) MRI image reconstruction. Our experiments show that, aided by the image obtained from one imaging modality, we can better reconstruct the image of the other modality, even with highly under-sampled data. This approach is potentially useful for a simultaneous CT-MRI system.
Keywords: l1-norm minimization, image reconstruction, CT-MRI system, multi-modality imaging
Updated 2025-09-23 15:19:57
-
Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
Abstract: Multi-modal image registration is the primary step in integrating information stored in two or more images captured using multiple imaging modalities. In addition to intensity variations and structural differences between the images, they may overlap only partially, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that enables the direct application of well-founded mono-modal registration methods, in order to obtain accurate alignment of multi-modal images in both cases: complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering large scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. On the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-MRIs, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering multi-modal, partially overlapped images.
Keywords: partially overlapped images, multi-modality, manifold learning, medical image registration
Updated 2025-09-19 17:15:36
-
Computer Vision – ECCV 2018 Workshops, Part V (Lecture Notes in Computer Science, Volume 11133; Munich, Germany, September 8-14, 2018) - Multi-modal Spectral Image Super-Resolution
Abstract: Recent advances have shown the great power of deep convolutional neural networks (CNNs) to learn the relationship between low- and high-resolution image patches. However, these methods take only a single-scale image as input and require a large amount of training data to avoid overfitting. In this paper, we tackle the problem of multi-modal spectral image super-resolution while constraining ourselves to a small dataset. We propose the use of different modalities to improve the performance of neural networks on the spectral super-resolution problem. First, we use multiple downscaled versions of the same image to infer a better high-resolution image for training; we refer to these inputs as a multi-scale modality. Furthermore, color images are usually captured at a higher resolution than spectral images, so we use color images as another modality to improve the super-resolution network. By combining both modalities, we build a pipeline that learns to super-resolve using multi-scale spectral inputs guided by a color image. Finally, we validate our method and show that it is economical in terms of parameters and computation time, while still producing state-of-the-art results (code at https://github.com/IVRL/Multi-Modal-Spectral-Image-Super-Resolution).
Keywords: Image completion, Spectral reconstruction, Spectral image super-resolution, Multi-modality, Residual learning
Updated 2025-09-19 17:15:36
-
Optimal combined proton-photon therapy schemes based on the standard BED model
Abstract: This paper investigates the potential of combined proton-photon treatments in radiation oncology, with a special emphasis on fractionation. Several combined-modality models, with and without fractionation, are discussed, and conditions under which combined-modality treatments add value are demonstrated analytically and numerically. The combined-modality optimal fractionation problem with multiple normal tissues is formulated based on the Biologically Effective Dose (BED) model and tested on real patient data. The results indicate that for several patients a combined-modality treatment gives better results in terms of biological dose (up to 14.8% improvement) than single-modality proton treatments. For several other patients, a combined-modality treatment is found that offers an alternative to the optimal single-modality proton treatment: it is only marginally worse but uses significantly fewer proton fractions, putting less pressure on the limited availability of proton slots. Overall, these results indicate that combined-modality treatments can be a viable option, which is expected to become more important as proton therapy centers proliferate while the price of proton therapy remains high.
Keywords: biologically effective dose (BED), proton therapy, optimization, intensity-modulated radiation therapy (IMRT), multi-modality treatment
Updated 2025-09-19 17:15:36
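The standard BED model referenced above has the closed form BED = n·d·(1 + d/(α/β)) for n fractions of physical dose d, and under the usual additive assumption a combined schedule simply sums the BED contributions of the proton and photon parts. A minimal Python sketch (the fraction numbers and α/β value below are illustrative choices, not figures from the paper):

```python
def bed(n, d, alpha_beta):
    """Biologically Effective Dose of n fractions of physical dose d (Gy)
    for a tissue with ratio alpha/beta (Gy): n * d * (1 + d / (a/b))."""
    return n * d * (1.0 + d / alpha_beta)

def combined_bed(n_p, d_p, n_x, d_x, alpha_beta):
    """BED of a combined schedule: n_p proton fractions of d_p Gy plus
    n_x photon fractions of d_x Gy, assuming BEDs add across modalities."""
    return bed(n_p, d_p, alpha_beta) + bed(n_x, d_x, alpha_beta)

# Example: 15 proton + 15 photon fractions of 2 Gy each, with tumor
# alpha/beta = 10 Gy, gives BED = 30 * 2 * (1 + 2/10) = 72 Gy.
total = combined_bed(15, 2.0, 15, 2.0, 10.0)
```

The optimization problem in the paper then chooses the per-modality fraction numbers and doses to maximize tumor BED subject to BED constraints on the normal tissues; the snippet only shows the objective's building block.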
-
Human Umbilical Cord Wharton's Jelly-derived Mesenchymal Stem Cells Labeled with Mn2+ and Gd3+ Co-doped CuInS2-ZnS Nanocrystals for Multi-modality Imaging in Tumor Mice Model
Abstract: Mesenchymal stem cell (MSC) therapy has recently received considerable interest as a targeting platform in cancer theranostics due to the inherent tumor-homing ability of MSCs. However, terminal tracking of MSC engraftment by fluorescent in situ hybridization, immunohistochemistry, and flow cytometry remains challenging to translate into the clinic, owing to the dearth of inherent MSC-specific markers and of FDA approval for genetic modification of MSCs. To address this challenge, a cost-effective, non-invasive imaging technology based on multi-functional nanocrystals (NCs), with enhanced detection sensitivity, spatio-temporal resolution, and deep-tissue diagnosis, needs to be developed to track the transplanted stem cells. A hassle-free labeling of human umbilical-cord Wharton's Jelly (WJ)-derived MSCs with Mn2+ and Gd3+ co-doped CuInS2-ZnS (CIS-ZMGS) NCs is demonstrated in 2 h, without requiring electroporation or transfection agents. WJ-MSC labeling did not affect their multi-lineage differentiation (adipocyte, osteocyte, chondrocyte), immunophenotypes (CD44+, CD105+, CD90+), protein expression (β-actin, vimentin, CD73, α-SMCA), or gene expression. Interestingly, CIS-ZMGS-NC-labeled WJ-MSCs exhibit near-infrared (NIR) fluorescence with a quantum yield (QY) of 84%, radiant intensity of ~3.999 x 10^11 (p/sec/cm2/sr)/(μW/cm2), magnetic relaxivity (longitudinal r1 = 2.26 mM-1 s-1, transverse r2 = 16.47 mM-1 s-1), and X-ray attenuation (78 HU), enabling early non-invasive multi-modality imaging of a subcutaneous melanoma in B16F10-tumor-bearing C57BL/6 mice within 6 h. Ex vivo imaging and inductively coupled plasma mass spectrometry (ICP-MS) analyses of excised organs, together with confocal microscopy and immunofluorescence of the tumor, further confirmed the positive tropism of CIS-ZMGS-NC-labeled WJ-MSCs toward the tumor environment. Hence, we propose magneto-fluorescent CIS-ZMGS-NC-labeled WJ-MSCs as a next-generation nano-bioprobe for three commonly used imaging modalities in stem cell-assisted anti-cancer therapy and the tracking of tissue/organ regeneration.
Keywords: multi-modality imaging, stem cell labeling, CuInS2-ZnS, in vivo tracking, microwave refluxing
Updated 2025-09-12 10:27:22
-
2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM) (Xi'an, 2018.9.13-2018.9.16) - A Deep-Learning Based Multi-Modality Sensor Calibration Method for USV
Abstract: Automatic obstacle avoidance and other tasks of an unmanned surface vehicle (USV) rely on the fusion of multi-modality onboard sensors, and an accurate calibration method is the foundation of sensor fusion. This paper proposes an online, deep-learning-based calibration method for visual and depth sensors. Through an end-to-end network, we combine the feature extraction, feature matching, and global optimization steps of sensor calibration. After initial training, the network can continuously calibrate the multi-modality sensors, which addresses the challenges of the USV operating environment. In both simulated and real environments, we conducted fast online calibration of a camera, LIDAR, and depth camera, demonstrating the effectiveness of the algorithm.
Keywords: multi-modality sensor, calibration, USV, deep learning
Updated 2025-09-04 15:30:14
-
2018 26th European Signal Processing Conference (EUSIPCO) (Rome, Italy, 2018.9.3-2018.9.7) - Performance Evaluation of No-Reference Image Quality Metrics for Visible Wavelength Iris Biometric Images
Abstract: Image quality assessment plays an important role in iris recognition systems because system performance is degraded by low-quality iris images. With the development of electronic color imaging, there is a growing body of research on visible wavelength (VW) iris recognition. Compared to near-infrared iris images, using VW iris images acquired under unconstrained imaging conditions is a more challenging task for an iris recognition system. However, the number of quality assessment methods for VW iris images is limited. It is therefore of interest to investigate whether existing no-reference image quality metrics (IQMs) designed for natural images can assess the quality of VW iris images. In this paper, we evaluate the performance of 15 selected no-reference IQMs on VW iris biometrics. The experimental results show that several IQMs can assess iris sample quality in accordance with system performance.
Keywords: image quality assessment, visible wavelength iris, image-based attributes, performance evaluation, multi-modality
Updated 2025-09-04 15:30:14
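As a concrete illustration of what a no-reference IQM computes, the sketch below scores sharpness as the variance of a discrete Laplacian, using only the image itself and no reference. This is a generic example metric chosen for illustration, not one of the 15 IQMs evaluated in the paper:

```python
import numpy as np

def laplacian_variance(img):
    """No-reference sharpness score: variance of the 4-neighbor discrete
    Laplacian. Blurry images have weak Laplacian responses, so a higher
    score suggests a sharper, better-focused image."""
    img = np.asarray(img, dtype=float)
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    # Drop the border, where np.roll wraps around the image edges.
    return lap[1:-1, 1:-1].var()

# A flat image has no detail and scores zero; a checkerboard is full of
# edges and scores higher.
flat = np.zeros((16, 16))
checker = np.indices((16, 16)).sum(axis=0) % 2
```

Full-featured no-reference IQMs add models of natural-scene statistics and learned quality mappings, but they share this shape: a scalar quality score from the distorted image alone.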