
A computational neuroscience model with application to robot perceptual learning

dc.creator: Tugcu, Mert
dc.date.accessioned: 2020-08-22T20:41:24Z
dc.date.available: 2008-08-01
dc.date.issued: 2007-08-01
dc.identifier.uri: https://etd.library.vanderbilt.edu/etd-08012007-163119
dc.identifier.uri: http://hdl.handle.net/1803/13770
dc.description.abstract: In robotics, one important objective is the ability to teach a robot new skills and have it reason about the tasks at hand without explicit programming. This idea is central to open-ended development, developmental robotics, and autonomous mental development. One approach is to have the robot learn from its own past experience, which helps it adapt to changing environments. In any learning process, however, a critical issue for both robots and biological creatures is efficient use of the limited resources available for survival. A robot operating in a complex, unstructured environment will encounter many percepts, most of which are typically irrelevant to the current task. This suggests the need for an ability to focus attention on the small number of items that are relevant. Prefrontal cortex working memory models may therefore be a good fit for learning to associate perception with action, and perhaps other concepts as well, in order to perform a task. Many systems in the literature have only crude perceptual capabilities, and as a result their environments are usually greatly simplified, for example through the use of artificial percepts. Such systems may fail in complex, uncontrolled environments, especially under changing lighting conditions; in these environments, successful task execution depends strongly on a reliable perceptual system. In this work, a novel implementation of a perceptual system that operates on an extremely high-dimensional feature space is combined with a biologically inspired working memory model. The perceptual system does not rely on parametric techniques (e.g., computing eigenvectors or covariance matrices), and its computational cost does not depend strongly on the number of dimensions. Vision is the system's only sensory input. The resulting system first learns basic behaviors and skills, which in turn are used to learn more complex behaviors. The success of the system is demonstrated with a vision-guided navigation task in a complex, noisy, and unmodified environment.
dc.format.mimetype: application/pdf
dc.subject: Computer Vision
dc.subject: Perceptual Learning
dc.subject: Working Memory
dc.subject: Developmental Robotics
dc.subject: Machine Learning
dc.subject: Computational Neuroscience
dc.title: A computational neuroscience model with application to robot perceptual learning
dc.type: dissertation
dc.contributor.committeeMember: Prof. Kazuhiko Kawamura
dc.contributor.committeeMember: Dr. David C. Noelle
dc.contributor.committeeMember: Dr. Richard A. Peters
dc.contributor.committeeMember: Dr. Nilanjan Sarkar
dc.type.material: text
thesis.degree.name: PHD
thesis.degree.level: dissertation
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Vanderbilt University
local.embargo.terms: 2008-08-01
local.embargo.lift: 2008-08-01
dc.contributor.committeeChair: Prof. D. Mitchell Wilkes

