The Deep Aion Project: Exploring How Different Temporal Representations Can Benefit Deep Longitudinal Models
Lenert, Matthew Charles
Many predictive models deployed in healthcare settings abstract away or ignore the temporal nature of disease and the healthcare process. Those that do model time face myriad development decisions. One such decision is the representation of time. Some researchers have found performance advantages in regularly spaced inputs for deep longitudinal models such as the recurrent neural network, but the best representation of time remains an open question. We studied how different temporal representations affect the predictive performance of the LSTM and Attention Encoder deep neural architectures. We generated artificial data using a longitudinal mixed-effects statistical model. This statistical model enabled us to produce data sets with varying temporal parameters such as feature collinearity, sampling scheme, and outcome type (link function). We also varied how time itself was represented as a feature. Our experiments on artificial data not only helped us determine whether there was a universally dominant temporal representation, but also provided insights into the data characteristics that might lead one representation to be favored over another. We evaluated our theoretical findings by predicting the best temporal representation for two well-benchmarked learning problems (24-hour in-hospital mortality and ICU discharge prediction) using real intensive care unit data from Beth Israel Deaconess' MIMIC III dataset. These results not only provide useful insights for model builders, but also reinforce our systematic approach to the experimental design. We performed additional explanatory experiments based on the results of the MIMIC III learning problems to better explain why one temporal representation had such an impact on performance, relative to the others, for the 24-hour in-hospital mortality prediction problem.
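As an illustrative sketch only (not the dissertation's actual simulator, whose parameters are not given here), a longitudinal mixed-effects data generator with per-subject random intercepts and slopes, irregular sampling times, and a logistic link function for a binary outcome might look like the following; all parameter names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_longitudinal(n_subjects=100, n_obs=10,
                          beta0=1.0, beta1=0.5,
                          sd_intercept=0.8, sd_slope=0.3, sd_noise=0.2):
    """Simulate irregularly sampled trajectories from a mixed-effects model.

    Linear predictor: eta_ij = (beta0 + b0_i) + (beta1 + b1_i) * t_ij + eps_ij,
    where b0_i, b1_i are subject-level random effects. A logit link maps eta
    to a probability, from which a binary outcome is drawn.
    Returns a list of (subject_id, time, eta, outcome) rows.
    """
    rows = []
    for i in range(n_subjects):
        b0 = rng.normal(0, sd_intercept)   # random intercept for subject i
        b1 = rng.normal(0, sd_slope)       # random slope for subject i
        # Irregular sampling scheme: exponential gaps between observations
        t = np.cumsum(rng.exponential(1.0, size=n_obs))
        eta = (beta0 + b0) + (beta1 + b1) * t + rng.normal(0, sd_noise, size=n_obs)
        p = 1.0 / (1.0 + np.exp(-eta))     # logit link -> event probability
        y = rng.binomial(1, p)             # binary outcome per observation
        for j in range(n_obs):
            rows.append((i, float(t[j]), float(eta[j]), int(y[j])))
    return rows

data = simulate_longitudinal()
```

Swapping the link function (e.g. identity for a continuous outcome) or the gap distribution (e.g. a constant for regular sampling) yields the kinds of data-set variations described above.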