
Performance Drift of Clinical Prediction Models: Impact of modeling methods on prospective model performance

dc.creator: Davis, Sharon Elizabeth
dc.date.accessioned: 2020-08-22T00:08:33Z
dc.date.available: 2019-04-05
dc.date.issued: 2017-04-05
dc.identifier.uri: https://etd.library.vanderbilt.edu/etd-03272017-091807
dc.identifier.uri: http://hdl.handle.net/1803/11533
dc.description.abstract: Integrating personalized risk predictions into clinical decision support requires well-calibrated models, yet model accuracy deteriorates as patient populations shift. Understanding the influence of modeling methods on performance drift is essential for designing updating protocols. Using national cohorts of Department of Veterans Affairs hospital admissions, we compared the temporal performance of seven regression and machine learning models for hospital-acquired acute kidney injury and 30-day mortality after admission. All modeling methods were robust in terms of discrimination but experienced deteriorating calibration. Random forest and neural network models experienced lower levels of calibration drift than regressions. The L2-penalized logistic regression for mortality demonstrated drift similar to the random forest. Increasing overprediction by all models correlated with declining event rates. Diverging patterns of calibration drift among acute kidney injury models coincided with changes in predictor-outcome associations. The mortality models revealed reduced susceptibility of random forest, neural network, and L2-penalized logistic regression models to case mix-driven calibration drift. These findings support the advancement of clinical predictive analytics and lay a foundation for systems to maintain model accuracy. As calibration drift impacted each method, all clinical prediction models should be routinely reassessed and updated as needed. Regression models have a greater need for frequent evaluation and updating than machine learning models, highlighting the importance of tailoring updating protocols to variations in the susceptibility of models to patient population shifts. While the suite of best practices remains to be developed, modeling methods will be an essential component in determining when and how models are updated.
dc.format.mimetype: application/pdf
dc.subject: Clinical prediction
dc.subject: calibration drift
dc.subject: machine learning
dc.title: Performance Drift of Clinical Prediction Models: Impact of modeling methods on prospective model performance
dc.type: thesis
dc.contributor.committeeMember: Thomas A Lasko
dc.contributor.committeeMember: Guanhua Chen
dc.type.material: text
thesis.degree.name: MS
thesis.degree.level: thesis
thesis.degree.discipline: Biomedical Informatics
thesis.degree.grantor: Vanderbilt University
local.embargo.terms: 2019-04-05
local.embargo.lift: 2019-04-05
dc.contributor.committeeChair: Michael E Matheny

