Electronic Theses and Dissertations

Identifier

1178

Date

2014

Date of Award

7-7-2014

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Major

Computer Science

Committee Chair

Stan Franklin

Committee Member

Dipankar Dasgupta

Committee Member

Lan Wang

Committee Member

Vasile Rus

Abstract

A comprehensive systems-level cognitive architecture attempts to provide a blueprint for generally capable intelligent software agents or cognitive robots. While such architectures can be conceived and studied at a high level of abstraction, this work focuses primarily on some of the low-level algorithms underlying the architecture. For instance, one might study logical reasoning, decision-making, etc., while ignoring, or making simplifying assumptions about, the perceptual processes producing the representations involved in these higher-level processes. In contrast, I argue that critical aspects of cognitive architectures lie in the low-level details of the traditionally identified, abstract modules and processes. Natural selection has imbued biological agents with motivations driving them to act for survival and reproduction. Likewise, artificial agents also require motivations to act in a goal-directed manner. In this context, I present a motivational extension to the LIDA cognitive architecture integrated within LIDA's cognitive cycle at a fundamental level. This motivational extension provides a repertoire of motivational capacities including alarms, feelings, affective valence, incentive salience, emotion, appraisal, reinforcement learning, and model-free and model-based learning. A LIDA-based agent implementing the proposed motivational extension replicates a reinforcer devaluation experiment, testing its ability to learn, and later revise, the reward-predicting attributes of stimuli that drive its behavior. Intelligent software agents must also autonomously navigate complex, dynamic, uncertain environments with bounded resources. In my view, this requires that they continually update a hierarchical, dynamic, uncertain internal model of their current situation, via approximate Bayesian inference, incorporating both the sensory data and a generative model of its causes.
To explicate my approach, I identify perceptual principles for cognitive architectures influencing perceptual representation, perceptual inference, and the associated learning processes. Guided by these, I propose a predictive coding extension to the HTM Cortical Learning Algorithms, termed PC-CLA, as a potential foundational building block for the systems-level LIDA cognitive architecture. PC-CLA fleshes out LIDA's internal representations, memory, learning, and attentional processes, and takes an initial step towards the comprehensive use of distributed and probabilistic (uncertain) representation throughout the architecture. I conclude with reports on a battery of new tests of the original CLA as well as proof-of-concept tests of PC-CLA.

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to the local University of Memphis Electronic Theses & Dissertation (ETD) Repository.