Deep learning can generate traditional retinal fundus photographs using ultra-widefield images via generative adversarial networks
Background and objective:
Retinal imaging has two major modalities, traditional fundus photography (TFP) and ultra-widefield fundus photography (UWFP). This study demonstrates the feasibility of a state-of-the-art deep learning-based domain transfer from UWFP to TFP.
Methods:
A cycle-consistent generative adversarial network (CycleGAN) was used to automatically translate UWFP images to the TFP domain. The model was trained on an unpaired dataset of 451 anonymized UWFP and 745 TFP images. To evaluate CycleGAN on independent data, we randomly divided the dataset into training (90%) and test (10%) sets. After automated image registration and masking of dark frames, the generator and discriminator networks were trained. An additional twelve publicly available paired TFP and UWFP images were used to compute intensity histograms and structural similarity (SSIM) indices.
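The key idea behind CycleGAN training on unpaired data is the cycle-consistency loss: a generator G (UWFP to TFP) and a reverse generator F (TFP to UWFP) are jointly penalized when a round trip fails to reproduce the input. The following toy sketch is not the paper's model; G and F are hypothetical placeholder linear maps used only to make the loss terms concrete.

```python
import numpy as np

def G(x):
    # Hypothetical UWFP -> TFP generator (placeholder; a real CycleGAN
    # would use a convolutional network here).
    return 2.0 * x

def F(y):
    # Hypothetical TFP -> UWFP generator (placeholder).
    return 0.5 * y

def cycle_consistency_loss(x, y, lam=10.0):
    """L_cyc = lam * (E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1).

    In CycleGAN this term is added to the two adversarial (GAN) losses,
    weighted by lambda (10 in the original CycleGAN paper).
    """
    forward = np.abs(F(G(x)) - x).mean()   # UWFP -> TFP -> UWFP round trip
    backward = np.abs(G(F(y)) - y).mean()  # TFP -> UWFP -> TFP round trip
    return lam * (forward + backward)

x = np.random.rand(4, 8, 8)  # batch of synthetic "UWFP" images
y = np.random.rand(4, 8, 8)  # batch of synthetic "TFP" images
print(cycle_consistency_loss(x, y))  # 0.0: these toy maps are exact inverses
```

Because the placeholder maps invert each other exactly, the loss is zero; with real generators this term stays positive and drives the round trip toward identity.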
Results:
All UWFP images were successfully translated into TFP-style images by CycleGAN, and the main structural information of the retina and optic nerve was retained. The model did not generate spurious features in the output images. Averaged histograms showed that the intensity distribution of the generated images closely matched that of the ground-truth images, with a mean SSIM of 0.802.
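The two evaluation measures reported here, intensity-histogram agreement and SSIM, can both be sketched in a few lines. The version below is a minimal illustration, not the paper's evaluation code: it uses the global-statistics form of SSIM (Wang et al., 2004) over the whole image, whereas standard toolkits compute SSIM over local sliding windows, so values will differ slightly from library implementations.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global SSIM: ((2*mu_x*mu_y + C1)(2*cov + C2)) /
    ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2))."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hist_intersection(x, y, bins=64, data_range=255.0):
    """Overlap of normalized intensity histograms, in [0, 1]."""
    hx, _ = np.histogram(x, bins=bins, range=(0.0, data_range))
    hy, _ = np.histogram(y, bins=bins, range=(0.0, data_range))
    hx = hx / hx.sum()
    hy = hy / hy.sum()
    return np.minimum(hx, hy).sum()

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(ssim_global(img, img))        # ~1.0 for identical images
print(hist_intersection(img, img))  # ~1.0 for identical histograms
```

Identical images score 1.0 on both measures; the reported mean SSIM of 0.802 against ground-truth TFP images indicates close, though not perfect, structural agreement.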
Conclusions:
Our approach enables automated synthesis of TFP images directly from UWFP images without manual pre-conditioning. The generated TFP images may help clinicians investigate the posterior pole and help researchers integrate TFP and UWFP databases. The approach is also likely to reduce scan time and to be more cost-effective for patients by avoiding additional examinations for an accurate diagnosis.
Link: https://www.sciencedirect.com/science/article/abs/pii/S0169260720315947
Source: The Lancet Digital Health