    Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions

    Burruss, Matthew
ORCID: 0000-0003-4180-995X
URI: http://hdl.handle.net/1803/10082
Date: 2020-03-25

    Abstract

The ability of deep neural networks (DNNs) to achieve state-of-the-art performance on complicated tasks has increased their adoption in safety-critical cyber-physical systems (CPS). However, DNNs are susceptible to a variety of attacks, including 1) adversarial attacks, where imperceptible perturbations of the input can induce mispredictions; 2) data poisoning attacks, where the training data is manipulated to encode a malicious backdoor key; and 3) physically realizable point anomalies that exploit corner cases of the model. Simple two-layer radial basis function (RBF) networks are known to exhibit low confidence on point anomalies and adversarial images; until recently, however, deeper RBFs have been difficult to train on complicated data. This paper extends recent advancements in deep RBFs and presents novel methods to address major security threats to DNNs. First, we show that, in a self-driving task, a deep RBF version of NVIDIA's DAVE-II architecture can reliably detect physically realizable point anomalies, and we assess its sensitivity. Second, we show that deep RBFs are less susceptible to data poisoning attacks and describe a novel algorithm that cleans sparsely poisoned data without relying on a verified, clean data set. Finally, we train a deep RBF based on the InceptionV3 architecture and evaluate its robustness against a range of adversarial attacks, including FGSM, I-FGSM, DeepFool, Carlini & Wagner, and Projected Gradient Descent. Most importantly, we demonstrate that deep RBFs should replace traditional classifiers in CPS tasks due to their robustness against a variety of security threats and their ability to simultaneously achieve production-grade performance.
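The distance-based output layer central to this line of work can be sketched briefly. The following is a minimal PyTorch illustration, not the thesis's exact formulation: the Gaussian kernel and the names `RBFOutputLayer`, `gamma`, and the rejection threshold `tau` are illustrative assumptions. Class scores decay with distance from learned per-class centers, so an input far from every center scores low for all classes and can be flagged rather than classified.

    import torch

    class RBFOutputLayer(torch.nn.Module):
        """Illustrative RBF head: class scores are Gaussian kernels on the
        distance between an embedding and learned per-class centers."""
        def __init__(self, feature_dim, num_classes, gamma=1.0):
            super().__init__()
            self.centers = torch.nn.Parameter(torch.randn(num_classes, feature_dim))
            self.gamma = gamma

        def forward(self, h):
            # h: (batch, feature_dim) -> (batch, num_classes) scores in (0, 1]
            sq_dists = torch.cdist(h, self.centers) ** 2
            return torch.exp(-self.gamma * sq_dists)

    def classify_or_reject(scores, tau=0.5):
        # Inputs far from every class center (max score below tau) are
        # flagged as anomalies instead of being assigned a label.
        conf, labels = scores.max(dim=1)
        labels[conf < tau] = -1  # -1 marks a rejected / anomalous input
        return labels

For context on the adversarial evaluation, FGSM (the simplest attack in the list above) perturbs each input one step in the direction of the sign of the loss gradient. The sketch below is the standard textbook formulation, with `eps` and the [0, 1] pixel range as assumptions rather than the thesis's settings.

    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        # One-step FGSM: x_adv = clip(x + eps * sign(grad_x loss), 0, 1)
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()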

Files in this item

Name: BURRUSS-THESIS-2020.pdf
Size: 6.081 MB
Format: PDF

    This item appears in the following collection(s):

    • Electronic Theses and Dissertations
