
    Information Retrieval in Clinical Chart Reviews

    Author: Ye, Cheng
    URI: https://etd.library.vanderbilt.edu/etd-03242019-195208
         http://hdl.handle.net/1803/15393
    Date: 2019-03-26

    Abstract

    Medical researchers rely on chart reviews, in which a user manually goes through a large number of electronic medical records (EMRs) to search for evidence to answer a specific medical question. Unfortunately, scrolling through vast amounts of clinical text to produce labels is time-consuming and expensive: at Vanderbilt University Medical Center, for example, it currently costs $109 per hour for a service in which a nurse reviews patient charts and produces labels. Methods are therefore needed to i) reduce the cost of chart reviews and ii) help medical researchers identify relevant text within medical notes more efficiently.

    First, to reduce the cost of chart reviews, we developed VBOSSA, a crowdsourcing platform that protects patient privacy and maintains a professional clinical crowd of medical students, nursing students, and faculty from Vanderbilt University Medical Center. With the support of VBOSSA, medical researchers have saved over 700 hours of manual chart review with relatively accurate results (average accuracy of 86%) at an average cost of around $20 per hour.

    Second, to boost the efficiency of crowd workers in retrieving information from unstructured medical notes, we developed a Google-style EMR search engine that provides high-quality query recommendations and automatically refines queries while the user searches and reviews documents. Underpinning the EMR search engine are three novel approaches: (1) extract clinically similar terms from multiple EMR-based word embeddings; (2) represent the medical contexts of clinical terms in a usage vector space, then leverage that space to better learn users' preferred similar terms; (3) propose two novel ranking metrics, negative guarantee ratio (NGR) and critical document, based on user-experience analysis of chart reviews. The EMR search engine was systematically evaluated and achieved high performance across information retrieval tasks, user studies, timing studies, and query recommendation tasks. We also evaluated different ranking and learning-to-rank methods using the NGR and critical-document metrics and discuss future directions for developing high-quality ranking methods to support chart reviews.
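    The abstract does not spell out how similar-term candidates from multiple embeddings are merged, so the following is only a rough, hypothetical sketch, not the dissertation's actual method: one simple aggregation averages a candidate term's cosine similarity to the query across every embedding space that contains both terms. All names below (similar_terms, the toy vocabularies and vectors) are invented for this illustration.

        # Hypothetical sketch: aggregating similar-term candidates from
        # several EMR-derived word embeddings. Illustrative only; not the
        # aggregation described in the dissertation.
        import numpy as np

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        def similar_terms(query, embeddings, top_k=5):
            """Rank candidate terms by their mean cosine similarity to
            `query`, averaged over every embedding that contains both."""
            scores = {}  # term -> list of per-embedding similarities
            for emb in embeddings:  # emb: dict mapping term -> vector
                if query not in emb:
                    continue
                qv = emb[query]
                for term, vec in emb.items():
                    if term == query:
                        continue
                    scores.setdefault(term, []).append(cosine(qv, vec))
            ranked = sorted(scores.items(),
                            key=lambda kv: np.mean(kv[1]), reverse=True)
            return [(t, float(np.mean(s))) for t, s in ranked[:top_k]]

        # Toy vectors standing in for embeddings trained on different EMR corpora.
        rng = np.random.default_rng(0)
        vocab = ["mi", "myocardial_infarction", "heart_attack", "fracture"]
        emb_a = {w: rng.normal(size=16) for w in vocab}
        emb_b = {w: rng.normal(size=16) for w in vocab}
        print(similar_terms("mi", [emb_a, emb_b]))

    Averaging across spaces rewards terms that are consistently close to the query in several corpora, which is one plausible way to filter out artifacts of any single embedding; the dissertation's actual approach may differ.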

    Files in this item

    Name: YE.pdf
    Size: 10.31 MB
    Format: PDF

    This item appears in the following collection(s):

    • Electronic Theses and Dissertations
