Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions
The ability of deep neural networks (DNNs) to achieve state-of-the-art performance on complicated tasks has increased their adoption in safety-critical cyber-physical systems (CPS). However, DNNs are susceptible to a variety of attacks, including 1) adversarial attacks, where imperceptible perturbations of the input can induce mispredictions, 2) data poisoning attacks, where the training data is manipulated to encode a malicious backdoor key, and 3) physically realizable point anomalies that exploit corner cases of the model. Simple two-layer radial basis function (RBF) networks are known to exhibit low confidence on point anomalies and adversarial images; until recently, however, deeper RBFs have been difficult to train on complex data. This paper extends recent advancements in deep RBFs and presents novel methods to address major security threats of DNNs. First, we show that, in a self-driving task, a deep RBF version of NVIDIA's DAVE-II architecture can reliably detect physically realizable point anomalies, and we assess its sensitivity. Second, we show that deep RBFs are less susceptible to data poisoning attacks, and we describe a novel algorithm to clean sparsely poisoned data without relying on a verified, clean data set. Finally, we train a deep RBF based on the InceptionV3 architecture and evaluate its robustness against a range of adversarial attacks, including FGSM, I-FGSM, DeepFool, Carlini & Wagner, and Projected Gradient Descent. Most importantly, we demonstrate that deep RBFs should replace traditional classifiers in CPS tasks due to their robustness against a variety of security threats and their ability to simultaneously achieve production-grade performance.
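To illustrate the property the abstract relies on (a sketch with illustrative values, not code from the paper): a Gaussian RBF unit's activation decays with the input's distance from a learned center, so inputs far from every center — such as point anomalies — produce uniformly low activations, and hence low confidence.

```python
import numpy as np

def rbf_layer(x, centers, gamma=1.0):
    """Gaussian RBF activations: exp(-gamma * ||x - c||^2) for each center c.

    Activations approach zero for inputs far from all centers, which is why
    RBF networks tend to report low confidence on out-of-distribution points.
    """
    d2 = np.sum((centers - x) ** 2, axis=1)  # squared distance to each center
    return np.exp(-gamma * d2)

# Two centers in a 2-D feature space (hypothetical values for illustration).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])

near = rbf_layer(np.array([0.1, 0.0]), centers)    # close to a center: high activation
far = rbf_layer(np.array([10.0, 10.0]), centers)   # far from all centers: near zero
```

Here `near.max()` is close to 1 while `far.max()` is essentially 0; a softmax classifier head, by contrast, can remain highly confident arbitrarily far from the training data.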