
Verification of Learning-enabled Cyber-Physical Systems

dc.contributor.advisor: JOHNSON, TAYLOR
dc.creator: Tran, Dung
dc.date.accessioned: 2020-09-15T23:36:24Z
dc.date.available: 2020-09-15T23:36:24Z
dc.date.created: 2020-08
dc.date.issued: 2020-07-24
dc.date.submitted: August 2020
dc.identifier.uri: http://hdl.handle.net/1803/15957
dc.description.abstract: Deep Neural Networks (DNNs) have increasingly been applied in safety-critical applications such as self-driving cars, unmanned underwater vehicles (UUVs), and medical image diagnostics. The ability to learn sophisticated features and functions makes DNNs stand out from other techniques in solving complicated tasks. At the same time, due to their high nonlinearity, DNN behaviors are generally unpredictable. They are also vulnerable to adversarial attacks, where a slight change in the input can cause a dramatic change in the output of a network. Since the failure of DNN-based components in safety-critical systems can cause the loss of human lives, there is an urgent need for methods and tools to verify and certify the safety and robustness of such systems. In this thesis, I propose a formal framework for verifying the safety and robustness of DNNs and learning-enabled Cyber-Physical Systems (CPS) using reachability analysis. The crux of our approach is a collection of reachability algorithms operating on novel geometrical set data structures that efficiently represent the bounded uncertainty in the input of a DNN or the initial conditions of a learning-enabled CPS. The proposed reachability algorithms construct a reachable set containing all possible outputs corresponding to the bounded uncertainty of a system; the reachable set is then used to reason about the safety or robustness of the system. Our framework has been implemented in NNV, a neural network verification tool written in Matlab and Python that is gaining much attention from the research community and industry. Our generic framework has successfully tackled the following challenging problems: 1) safety and robustness verification of deep feed-forward neural networks (FNNs), 2) safety verification of neural-network-based control systems, 3) robustness verification of deep image classification convolutional neural networks (CNNs), and 4) robustness verification of deep semantic segmentation networks (SSNs).
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Deep Neural Networks, Verification, Reachability Analysis, Cyber-Physical Systems
dc.title: Verification of Learning-enabled Cyber-Physical Systems
dc.type: Thesis
dc.date.updated: 2020-09-15T23:36:24Z
dc.type.material: text
thesis.degree.name: PhD
thesis.degree.level: Doctoral
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Vanderbilt University Graduate School
dc.creator.orcid: 0000-0001-6946-9526
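
The abstract above describes the core idea of the framework: propagate a bounded input set through a network and check the resulting output reachable set against a safety property. The short Python sketch below illustrates that idea using simple interval (box) bounds propagated through a tiny ReLU feed-forward network. It is only a minimal, illustrative approximation of the concept under assumed inputs; the network weights and the safety property are hypothetical, and this is not NNV's actual algorithm.

import numpy as np

def interval_affine(lb, ub, W, b):
    # Exact image of the box [lb, ub] under the affine map x -> W x + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def reach_box(lb, ub, layers):
    # Over-approximate the output reachable set of a ReLU FNN as a box:
    # propagate bounds layer by layer; ReLU is monotone, so applying it
    # to the bounds of each hidden layer is sound.
    for i, (W, b) in enumerate(layers):
        lb, ub = interval_affine(lb, ub, W, b)
        if i < len(layers) - 1:  # no ReLU on the output layer
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

# Hypothetical 2-2-1 network and input box [-0.1, 0.1]^2 (illustrative only).
layers = [(np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.1])),
          (np.array([[1.0, 1.0]]), np.array([-0.2]))]
out_lb, out_ub = reach_box(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
print("output reachable box:", out_lb, out_ub)
# Safety check against a hypothetical property "output stays below 2.0":
print("verified safe (y < 2.0):", bool(out_ub[0] < 2.0))

Interval boxes are used here only to keep the sketch self-contained; the thesis and NNV rely on more expressive set representations (such as star sets) that yield tighter, and in some cases exact, reachable sets.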

