Research Objective
To synthesize high-quality standard-dose FDG PET images from low-dose PET images and accompanying multimodal MRI, reducing radiation exposure while maintaining image quality.
Research Findings
The proposed 3D auto-context-based locality adaptive multi-modality GANs model effectively synthesizes high-quality PET images from low-dose PET and multimodal MRI, outperforming traditional multi-modality fusion methods and state-of-the-art PET estimation approaches. The method shows promise for reducing radiation exposure in PET scans while preserving diagnostic image quality.
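The core idea behind "locality adaptive" fusion is that the contribution of each modality (low-dose PET, T1 MRI, etc.) varies from region to region, so fusion weights are computed per voxel rather than globally. Below is a minimal NumPy sketch of that per-voxel weighted fusion, assuming softmax-normalized weights; in the actual model the weight maps are learned by 3D convolutions inside the GAN generator, and the function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def locality_adaptive_fusion(modalities, weight_logits):
    """Fuse multi-modality volumes with per-voxel weights.

    modalities:    array of shape (M, D, H, W), one channel per modality
                   (e.g. low-dose PET plus MRI contrasts).
    weight_logits: array of shape (M, D, H, W), unnormalized per-voxel
                   scores (learned by the network in the real model).
    Returns a fused (D, H, W) volume.
    """
    # Softmax over the modality axis gives per-voxel mixing weights.
    e = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)
    return (weights * modalities).sum(axis=0)

# Toy example with two small "volumes".
rng = np.random.default_rng(0)
pet = rng.random((4, 4, 4))
mri = rng.random((4, 4, 4))
vols = np.stack([pet, mri])

# Equal logits everywhere reduce the fusion to a simple voxel-wise average.
logits = np.zeros_like(vols)
fused = locality_adaptive_fusion(vols, logits)
print(np.allclose(fused, (pet + mri) / 2))  # → True
```

With learned, spatially varying logits the same function weights each modality differently at each voxel, which is what distinguishes this fusion from a fixed channel-wise average or concatenation.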
Research Limitations
The study is limited by the small number of training images, by phantom data that simulates only healthy brains, and by the current model's inability to handle missing modalities. Future work aims to involve more subjects, simulate lesions, and integrate the image transformation and multi-modality fusion procedures directly into deep neural networks.