Synthesizing Micro-CT from CT of the inner ear with 3D-conditional GANs
Cochlear implants are surgically implanted neural prosthetic devices used to treat severe hearing loss. These devices are programmed post-implantation, and estimates of patient-specific neural activation patterns can help audiologists with programming. Recently, we developed physics-based electro-anatomical models to estimate patient-specific neural stimulation patterns. A high-resolution tissue classification map of the cochlea, in which each tissue type corresponds to a different electrical conductivity, is a vital input for these models. However, creating this map requires a high-resolution micro-CT image that cannot be acquired in vivo. To overcome this limitation, we aim to develop a deep-learning-based approach to synthesize high-resolution micro-CT from low-resolution clinical CT. For this purpose, we implemented two approaches. The first uses a 3D conditional generative adversarial network (cGAN) to generate high-resolution micro-CT from clinical CT, followed by thresholding the resulting synthetic micro-CT to produce the tissue classification map. The second is a cGAN-based multitask model that generates the micro-CT and the tissue classification map simultaneously. Both approaches were evaluated with a leave-K-out strategy on a dataset of CT/micro-CT pairs from six cochlea specimens. The single-task model achieves mean Dice scores of 0.92, 0.56, and 0.84 for the air, soft tissue, and bone classes, respectively; the multitask model achieves 0.95, 0.66, and 0.87 for the same classes. These results indicate the promise of deep-learning image synthesis for creating the high-resolution tissue classification maps needed by patient-specific electro-anatomical models.
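The single-task pipeline thresholds the synthetic micro-CT into three tissue classes and evaluates each class with a Dice score. A minimal NumPy sketch of those two steps is shown below; the threshold values and function names are illustrative assumptions for this note, not the calibrated values or code used in the study.

```python
import numpy as np

# Illustrative intensity thresholds (assumed, not the study's calibration):
# voxels at or below AIR_MAX are air, at or above BONE_MIN are bone.
AIR_MAX, BONE_MIN = -500.0, 300.0

def classify_tissue(volume):
    """Map a synthetic micro-CT volume to labels: 0=air, 1=soft tissue, 2=bone."""
    labels = np.ones(volume.shape, dtype=np.uint8)  # default: soft tissue
    labels[volume <= AIR_MAX] = 0                   # air
    labels[volume >= BONE_MIN] = 2                  # bone
    return labels

def dice(pred, truth, cls):
    """Dice overlap for one class between two label volumes of equal shape."""
    p, t = (pred == cls), (truth == cls)
    denom = p.sum() + t.sum()
    # Convention: if the class is absent from both volumes, score 1.0.
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
```

The multitask variant would skip the thresholding step, since the network predicts the label volume directly; the same per-class Dice then compares predicted and ground-truth labels.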