Training a neurocontrol for talking heads

Abstract

Talking heads are anthropomorphic representations of a software agent used to facilitate interaction with human users. Talking heads have commonly been programmed and controlled by ontologies designed according to intuitive and heuristic considerations that may have little to do with the applications at hand, and are thus unlikely to be truly expressive, meaningful, or ergonomic for human users. Here we present preliminary results on the design and training of an autonomous neural control capable of generating facial expressions that convey meaningful emotional content to users, on a continuous scale from negative through neutral to positive feedback, in the context of tutoring sessions in a particular domain (computer literacy). The ultimate goal of the project is to have the control autonomously synchronize the movements of facial features (lips, eyes, and eyebrows) so as to produce facial animations that are not only valid and meaningful to untrained human users but can also easily interface with the semantic processing modules of larger agents operating in real time, such as tutoring systems.
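
The abstract does not detail the controller itself, but the mapping it describes, from a continuous feedback scale to coordinated facial-feature movements, can be illustrated with a minimal sketch. The following Python example is an assumption-laden toy, not the authors' implementation: it trains a small feedforward network to map a scalar valence in [-1, 1] (negative to positive feedback) to three illustrative, invented facial parameters (brow raise, eye openness, lip curvature), with the target expressions chosen by hand.

```python
import numpy as np

# Hypothetical sketch, not the paper's implementation: a tiny feedforward
# "neurocontrol" mapping one scalar valence value to normalized facial
# parameters. All parameter names and target values below are assumptions.

rng = np.random.default_rng(0)

# Illustrative targets: valence -> (brow_raise, eye_openness, lip_curve)
valence = np.array([[-1.0], [0.0], [1.0]])
targets = np.array([
    [-0.8, 0.3, -0.9],   # frown: lowered brows, narrowed eyes, downturned lips
    [ 0.0, 0.6,  0.0],   # neutral expression
    [ 0.7, 0.8,  0.9],   # smile: raised brows, open eyes, upturned lips
])

# One hidden layer; tanh keeps outputs on a continuous [-1, 1] scale.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, np.tanh(h @ W2 + b2)

lr = 0.1
for _ in range(5000):
    h, y = forward(valence)
    err = y - targets                  # squared-error gradient at the output
    d2 = err * (1 - y**2)              # backprop through output tanh
    d1 = (d2 @ W2.T) * (1 - h**2)      # backprop through hidden tanh
    W2 -= lr * h.T @ d2;       b2 -= lr * d2.sum(0)
    W1 -= lr * valence.T @ d1; b1 -= lr * d1.sum(0)

# The trained control interpolates smoothly between the labeled expressions,
# so intermediate feedback values yield intermediate facial configurations.
for v in (-1.0, -0.5, 0.0, 0.5, 1.0):
    _, y = forward(np.array([[v]]))
    print(f"valence {v:+.1f} -> brow/eye/lip params {np.round(y[0], 2)}")
```

In a full system of the kind the abstract envisions, the output parameters would drive an animation rig and the valence input would come from the semantic modules of the tutoring agent; the continuous output scale is what lets the face blend between negative, neutral, and positive expressions rather than switching between discrete poses.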

Publication Title

Proceedings of the International Joint Conference on Neural Networks
