Information Retrieval in Clinical Chart Reviews
Medical researchers rely on chart reviews, in which a user manually reads through a large number of electronic medical records (EMRs) to search for evidence that answers a specific medical question. Unfortunately, scrolling through vast amounts of clinical text to produce labels is time-consuming and expensive: at Vanderbilt University Medical Center, for example, it currently costs $109 per hour for a service in which a nurse reviews patient charts and produces labels. Methods are therefore needed to i) reduce the cost of chart reviews and ii) help medical researchers identify relevant text within medical notes more efficiently.

First, to reduce the cost of chart reviews, we developed VBOSSA, a crowdsourcing platform that protects patient privacy and maintains a professional clinical crowd of medical students, nursing students, and faculty from Vanderbilt University Medical Center. With the support of VBOSSA, medical researchers have saved over 700 hours of manual chart review with relatively accurate results (average accuracy of 86%) at an average cost of around $20 per hour.

Second, to boost the efficiency of crowd workers in retrieving information from unstructured medical notes, we developed a Google-style EMR search engine that provides high-quality query recommendations and automatically refines queries while the user searches and reviews documents. Underpinning the EMR search engine are three novel approaches: (1) extracting clinically similar terms from multiple EMR-based word embeddings; (2) representing the medical contexts of clinical terms in a usage vector space and leveraging that space to better learn users' preferred similar terms; and (3) proposing two novel ranking metrics, the negative guarantee ratio (NGR) and the critical document, based on user experience analysis in chart reviews.
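The abstract does not specify how similar terms are extracted from multiple embeddings, but the general idea of approach (1) can be illustrated with a minimal sketch: rank candidate terms by cosine similarity to a query term in each embedding space, then average the similarities across the spaces that contain the term. The function name `nearest_terms`, the aggregation by averaging, and the toy vocabulary below are all assumptions for illustration, not the dissertation's actual method.

```python
import numpy as np

def nearest_terms(query, embeddings, k=3):
    """Return the k terms most similar to `query`, where similarity is the
    cosine similarity averaged over every embedding (a dict term -> vector)
    that contains both the query and the candidate term.

    NOTE: averaging cosine scores is an assumed aggregation strategy,
    chosen only to illustrate combining multiple EMR-based embeddings.
    """
    scores = {}  # candidate term -> list of per-embedding similarities
    for emb in embeddings:
        if query not in emb:
            continue
        q = emb[query]
        q = q / np.linalg.norm(q)  # unit-normalize the query vector
        for term, vec in emb.items():
            if term == query:
                continue
            sim = float(np.dot(q, vec / np.linalg.norm(vec)))
            scores.setdefault(term, []).append(sim)
    # Average similarity across the embeddings that contain the term.
    avg = {t: sum(s) / len(s) for t, s in scores.items()}
    return sorted(avg, key=avg.get, reverse=True)[:k]
```

On a toy example with two 2-dimensional "embeddings" in which "mi" and "heart attack" point in similar directions, `nearest_terms("mi", [emb1, emb2], k=1)` would return `["heart attack"]`, mirroring the kind of clinically similar term a chart reviewer might want suggested.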
The EMR search engine was systematically evaluated and achieved high performance across information retrieval tasks, user studies, timing studies, and query recommendation tasks. We also evaluated several ranking and learning-to-rank methods using the NGR and critical-document metrics, and we discuss future directions for developing high-quality ranking methods to support chart reviews.