Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions

dc.contributor.advisor  Dubey, Abhishek
dc.creator  Burruss, Matthew
dc.date.accessioned  2020-06-30T23:52:45Z
dc.date.available  2020-06-30T23:52:45Z
dc.date.created  2020-05
dc.date.issued  2020-03-25
dc.date.submitted  May 2020
dc.identifier.uri  http://hdl.handle.net/1803/10082
dc.description.abstract  The ability of deep neural networks (DNNs) to achieve state-of-the-art performance on complicated tasks has increased their adoption in safety-critical cyber-physical systems (CPS). However, DNNs are susceptible to a variety of attacks, including 1) adversarial attacks, where imperceptible perturbations of the input induce mispredictions; 2) data poisoning attacks, where the training data is manipulated to encode a malicious backdoor key; and 3) physically realizable point anomalies that exploit corner cases of the model. Simple two-layer radial basis function (RBF) networks are known to exhibit low confidence on point anomalies and adversarial images; until recently, however, deeper RBFs were difficult to train on complicated data. This thesis extends recent advances in deep RBFs and presents novel methods to address major security threats to DNNs. First, we show that, on a self-driving task, a deep RBF version of NVIDIA's DAVE-II architecture can reliably detect physically realizable point anomalies, and we assess its sensitivity. Second, we show that deep RBFs are less susceptible to data poisoning attacks, and we describe a novel algorithm that cleans sparsely poisoned data without relying on a verified, clean data set. Finally, we train a deep RBF based on the InceptionV3 architecture and evaluate its robustness against a range of adversarial attacks, including FGSM, I-FGSM, DeepFool, Carlini & Wagner, and Projected Gradient Descent. Most importantly, we demonstrate that deep RBFs should replace traditional classifiers in CPS tasks because they are robust against a variety of security threats while simultaneously achieving production-grade performance.
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.subject  Neural Network Robustness, Anomalies, Adversarial Attacks, Poisoning Attacks, Radial Basis Functions
dc.title  Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions
dc.type  Thesis
dc.date.updated  2020-06-30T23:52:45Z
dc.type.material  text
thesis.degree.name  MS
thesis.degree.level  Masters
thesis.degree.discipline  Computer Science
thesis.degree.grantor  Vanderbilt University
dc.creator.orcid  0000-0003-4180-995X
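
The abstract describes two technical ideas: an RBF output layer whose low confidence on out-of-distribution inputs can be used to reject anomalies, and gradient-based adversarial attacks such as FGSM. The following minimal Python sketch is not taken from the thesis; it illustrates both mechanisms under stated assumptions. The RBFOutputLayer class, the toy architecture, and the rejection threshold are all hypothetical names and values chosen for the example, not the author's implementation.

    # Illustrative sketch only; not the thesis code.
    import torch
    import torch.nn as nn

    class RBFOutputLayer(nn.Module):
        # Replaces a softmax head: activations are exp(-gamma_k * ||x - c_k||^2),
        # so inputs far from every learned center c_k produce uniformly low
        # scores that can be thresholded to reject anomalies instead of
        # forcing a class prediction.
        def __init__(self, in_features, num_classes):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, in_features))
            self.log_gamma = nn.Parameter(torch.zeros(num_classes))  # per-class width

        def forward(self, x):
            d2 = torch.cdist(x, self.centers).pow(2)      # squared distances, shape (batch, classes)
            return torch.exp(-self.log_gamma.exp() * d2)  # RBF activations in (0, 1]

    def fgsm(model, x, y, eps):
        # One-step Fast Gradient Sign Method: perturb x along the sign of the
        # loss gradient with step size eps.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    # Usage on an untrained toy model: inputs whose best RBF activation is
    # low are rejected rather than classified.
    model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), RBFOutputLayer(16, 10))
    x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
    confidence, _ = model(fgsm(model, x, y, eps=0.1)).max(dim=1)
    rejected = confidence < 0.5  # the 0.5 threshold is illustrative

The key design point, consistent with the abstract, is that the RBF head measures distance to known class prototypes, so adversarial or anomalous inputs tend to land far from every center and receive low activations across the board, whereas a softmax head is forced to assign high probability somewhere.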

