Information Retrieval in Clinical Chart Reviews

dc.creator: Ye, Cheng
dc.date.accessioned: 2020-08-24T11:49:09Z
dc.date.available: 2020-03-26
dc.date.issued: 2019-03-26
dc.identifier.uri: https://etd.library.vanderbilt.edu/etd-03242019-195208
dc.identifier.uri: http://hdl.handle.net/1803/15393
dc.description.abstract: Medical researchers rely on chart reviews, in which a user manually goes through a large number of electronic medical records (EMRs) to search for evidence to answer a specific medical question. Unfortunately, scrolling through vast amounts of clinical text to produce labels is time-consuming and expensive. For example, at Vanderbilt University Medical Center, it currently costs $109 per hour for a service in which a nurse reviews patient charts and produces labels. Therefore, methods are needed to (i) reduce the cost of chart reviews and (ii) help medical researchers identify relevant text within medical notes more efficiently. First, to reduce the cost of chart reviews, we developed the VBOSSA crowdsourcing platform, which protects patient privacy and maintains a professional clinical crowd including medical students, nursing students, and faculty from Vanderbilt University Medical Center. With the support of VBOSSA, medical researchers have saved over 700 hours of manual chart review with relatively accurate results (average accuracy of 86%) at an average cost of around $20 per hour. Second, to boost the efficiency of crowd workers in retrieving information from unstructured medical notes, we developed a Google-style EMR search engine that provides high-quality query recommendations and automatically refines queries while the user searches and reviews documents. Underpinning the EMR search engine are three novel approaches: (1) extract clinically similar terms from multiple EMR-based word embeddings; (2) represent the medical contexts of clinical terms in a usage vector space, and leverage that space to better learn the user's preferred similar terms; (3) define two novel ranking metrics, the negative guarantee ratio (NGR) and the critical document, based on an analysis of user experience in chart reviews. The EMR search engine was systematically evaluated and achieved high performance in information retrieval tasks, user studies, timing studies, and query recommendation tasks. We also evaluated different ranking and learning-to-rank methods using the NGR and critical-document metrics, and we discuss future directions for developing high-quality ranking methods to support chart reviews.
dc.format.mimetype: application/pdf
dc.subject: query expansion
dc.subject: clinically similar terms
dc.subject: medical usage contexts
dc.subject: vector space model
dc.subject: search engines
dc.subject: electronic medical records (EMR)
dc.title: Information Retrieval in Clinical Chart Reviews
dc.type: dissertation
dc.contributor.committeeMember: Yevgeniy Vorobeychik
dc.contributor.committeeMember: You Chen
dc.contributor.committeeMember: Maithilee Kunda
dc.contributor.committeeMember: Bradley Malin
dc.type.material: text
thesis.degree.name: PHD
thesis.degree.level: dissertation
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Vanderbilt University
local.embargo.terms: 2020-03-26
local.embargo.lift: 2020-03-26
dc.contributor.committeeChair: Daniel Fabbri
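
As an illustration of approach (1) named in the abstract, the following is a minimal sketch, not the dissertation's implementation: it ranks candidate terms by their average cosine similarity to a query term across multiple word embeddings. The embeddings, terms, and function names here are illustrative assumptions; a real system would load embedding models trained on separate EMR corpora.

```python
# Minimal sketch (not the dissertation's method) of extracting similar terms
# by combining cosine similarities across multiple word embeddings.
# The toy dicts below stand in for embeddings trained on clinical notes.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similar_terms(query, embeddings, k=5):
    """Rank candidate terms by average cosine similarity to `query`
    over every embedding that contains both the query and the candidate."""
    scores = {}
    for emb in embeddings:              # one dict (term -> vector) per model
        if query not in emb:
            continue
        q = emb[query]
        for term, vec in emb.items():
            if term == query:
                continue
            scores.setdefault(term, []).append(cosine(q, vec))
    ranked = sorted(scores.items(), key=lambda kv: -np.mean(kv[1]))
    return [(term, float(np.mean(sims))) for term, sims in ranked[:k]]

# Toy usage: two 3-dimensional "embeddings" with made-up clinical terms.
emb_a = {"mi": np.array([1.0, 0.1, 0.0]),
         "myocardial_infarction": np.array([0.9, 0.2, 0.1]),
         "fracture": np.array([0.0, 1.0, 0.2])}
emb_b = {"mi": np.array([0.8, 0.0, 0.3]),
         "myocardial_infarction": np.array([0.7, 0.1, 0.4]),
         "fracture": np.array([0.1, 0.9, 0.0])}
print(similar_terms("mi", [emb_a, emb_b], k=2))
```

Averaging across embeddings is one simple way to combine evidence from multiple models; the dissertation's actual combination and the usage-vector-space refinement it describes may differ.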