Resilient Decision Making in Adversarial and Uncertain Environments
Benefiting from recent progress in AI research, computerized decision support has been deployed across a wide variety of computer systems, attracting research interest in decision support programs, systems, methods, and techniques. However, recent incidents have begun to reveal the vulnerabilities of decision support, where human intervention becomes infeasible for handling problems such as failures or deliberate attacks. On the other hand, such situations are not hopeless: numerous approaches addressing resilience and robustness have been explored on the theoretical side, leading to substantial success in many areas. This dissertation aims to bridge this gap and improve the resilience of decision support systems in several distinct settings. We first consider decentralized traffic light control and develop a cloud computing framework for simulation-based optimization driven by real-time data. We further study path planning for multiple autonomous vehicles under motion uncertainty, model the interactions among strategic agents in a game-theoretic framework, and investigate the gap between centralized and decentralized control schemes. We then focus on machine-learning-enhanced decision support systems in medical imaging and anomaly detection: we reveal the vulnerabilities of such systems and address them with our resilient algorithm. Finally, we present a game-theoretic approach to the threat of adversarial examples in machine-learning-enhanced systems by computing the optimal randomization scheme over multiple learning models.