3D Supervised Learning for CT Hematoma Segmentation via Transfer Learning from a 2D Trained Network
Supervised learning approaches are the most widely used models in medical image processing and tend to offer the highest accuracy when paired data and labels are available. However, the manual labeling process is time-consuming and requires expert knowledge, so generating training data has become a severe bottleneck. Here, we explore feedback-driven augmentation of training data in the context of hematoma segmentation for computed tomography (CT) imaging of traumatic brain injury. Briefly, a previously published 2D patch-based segmentation model was trained on 33 manually labeled CT scans. This model was applied to 11477 scans from 4033 patients that were retrospectively acquired in deidentified form from consecutive trauma patients. Each of the resulting 11477 predictions was visually inspected for quality assurance and manually scored as either (0) a good segmentation with hematoma (n=1199), (1) a reasonable-quality segmentation with some errors (n=2475), (2) a failed segmentation (n=2340), (3) a good segmentation without hematoma (n=4995), or (4) invalid data. 2400 good segmentation scans with predicted masks were used as training data, along with the original 33 manually labeled scans, to train a 3D model using 3D U-Net. The mean Dice Similarity Coefficient (DSC) obtained from the model was 0.729 on the same testing set as the 2D network. In summary, feeding back successful automated results offers an efficient way to transfer a 2D model to 3D and to increase the robustness of supervised deep learning algorithms at substantially reduced manual effort (here, 35 hours for quality assurance versus an estimated 11477 hours for full tracing of 11477 scans).
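The reported DSC of 0.729 is the standard Dice overlap between the predicted and manually traced masks. A minimal sketch of that metric for binary 3D volumes (the function name and the `eps` smoothing term are illustrative choices, not taken from the paper):

```python
import numpy as np

def dice_similarity(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]; 1.0 means perfect overlap.
    `eps` guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example on small 3D volumes (stand-ins for CT segmentation masks)
a = np.zeros((8, 8, 8), dtype=bool)
a[2:6, 2:6, 2:6] = True          # "predicted" hematoma region
b = np.zeros((8, 8, 8), dtype=bool)
b[3:7, 3:7, 3:7] = True          # "manual" hematoma region
score = dice_similarity(a, b)
```

The same formula applies whether the masks come from the 2D patch-based model or the 3D U-Net, which is why the abstract can compare both on a common testing set.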