
DEEP REINFORCEMENT LEARNING FOR ADAPTIVE CONTROL IN ROBOTICS

dc.contributor.advisor: Biswas, Gautam
dc.contributor.advisor: Quiñones-Grueiro, Marcos
dc.creator: Bhan, Luke
dc.date.accessioned: 2022-05-19T17:47:09Z
dc.date.available: 2022-05-19T17:47:09Z
dc.date.created: 2022-05
dc.date.issued: 2022-03-28
dc.date.submitted: May 2022
dc.identifier.uri: http://hdl.handle.net/1803/17433
dc.description.abstract: Adaptive control of robotic systems is challenging due to nonlinear system dynamics and time-varying parameters that govern the system's behavior. Deep reinforcement learning (DRL) is therefore a natural choice for controlling nonlinear systems that lack reliable physics-based models. However, DRL algorithms require extensive data to learn an optimal control policy, and the resulting policies struggle to generalize across the unknown environments commonly encountered in real-world tasks. In this work, I propose a set of DRL-based architectures ranging from policy blending to data-driven model predictive control (MPC). With these approaches, I achieve enhanced sample efficiency and successfully generalize across environments with different parameterizations. Given the reduced training times and broad task generalization, I am able to complete robotic control tasks involving a parameter-varying robotic arm as well as UAV flights subject to complex motor faults.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Reinforcement Learning
dc.subject: Robotics
dc.title: DEEP REINFORCEMENT LEARNING FOR ADAPTIVE CONTROL IN ROBOTICS
dc.type: Thesis
dc.date.updated: 2022-05-19T17:47:09Z
dc.type.material: text
thesis.degree.name: MS
thesis.degree.level: Masters
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Vanderbilt University Graduate School
dc.creator.orcid: 0000-0002-6734-8314

