MACHINE LEARNING-BASED TECHNIQUES FOR AUTOMATING IMAGE-GUIDED COCHLEAR IMPLANT PROGRAMMING
Cochlear implants (CIs) are neural prosthetics used to treat patients with profound hearing loss. In surgery, an electrode array is threaded into the cochlea; post-operatively, the CI must be programmed. CIs generally lead to substantial hearing improvement, but a non-negligible number of recipients receive only marginal benefit. Studies have shown that hearing outcomes with CIs are correlated with the spatial relationship between the electrodes and the intra-cochlear anatomy. In recent years, an image-guided cochlear implant programming (IGCIP) system has been developed. Using intra-cochlear anatomy segmentation and electrode localization techniques, IGCIP determines this electrode-anatomy spatial relationship and recommends patient-specific CI configurations to audiologists. Though effective, IGCIP includes several steps that still require human intervention, which can be a long and tedious process or one that can only be performed by well-trained specialists. In addition, although some IGCIP techniques are already in use, we are interested in improving their accuracy by employing state-of-the-art approaches such as deep learning. In this dissertation, I present my work toward resolving these issues: (1) a series of efforts toward fully automatically documenting the content of head CT images in terms of the inner ears they include and initializing image registration by localizing a set of landmarks, using random forests and deep neural networks; (2) a two-level training scheme for 3D deep networks that segments the inner ear anatomy with limited ground-truth training data; and (3) an automatic electrode configuration selection technique for CI programming based on template matching.