Neural net generation of facial displays in talking heads
Abstract
Anthropomorphic representations of software agents (talking heads) are increasingly used to facilitate interaction with human users. They are commonly programmed and controlled by ontologies designed on intuitive and heuristic grounds. Here we describe the successful training of a recurrent neural network that controls emotional displays in talking heads on a continuous scale of negative, neutral, and positive feedback; the displays are meaningful to users in the context of tutoring sessions in a particular domain (computer literacy). The controller autonomously synchronizes the movements of the lips, eyes, eyebrows, and gaze to produce facial displays that untrained human users find valid and meaningful. The control is modular and can be easily integrated with the semantic-processing modules of larger agents that operate in real time, such as tutoring and face-recognition systems.
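The abstract describes a recurrent network that maps a continuous feedback (valence) signal to synchronized movements of several facial features. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: an Elman-style recurrent network, with randomly initialized weights standing in for trained ones, unrolled over a sequence of valence values to emit one vector of feature commands per frame.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): an Elman-style recurrent
# net mapping a valence signal in [-1, 1] (negative .. neutral .. positive
# feedback) to synchronized facial-feature commands.
rng = np.random.default_rng(0)

N_HIDDEN = 8
FEATURES = ["lips", "eyes", "eyebrows", "gaze"]  # features named in the abstract

# Randomly initialized weights stand in for the trained parameters.
W_in = rng.normal(scale=0.5, size=(N_HIDDEN, 1))
W_rec = rng.normal(scale=0.5, size=(N_HIDDEN, N_HIDDEN))
W_out = rng.normal(scale=0.5, size=(len(FEATURES), N_HIDDEN))

def run(valence_sequence):
    """Unroll the recurrent net over a sequence of valence values,
    returning one vector of feature displacements (in [-1, 1]) per step.
    The hidden state carries context, so successive frames vary smoothly."""
    h = np.zeros(N_HIDDEN)
    frames = []
    for v in valence_sequence:
        h = np.tanh(W_in @ np.array([v]) + W_rec @ h)  # recurrent update
        frames.append(np.tanh(W_out @ h))              # per-feature commands
    return np.array(frames)

# Neutral -> positive -> strongly positive -> negative feedback:
frames = run([0.0, 0.5, 1.0, -1.0])
```

Because the recurrent hidden state is shared across all output features, the lip, eye, eyebrow, and gaze commands are coupled and evolve together, which is one plausible way to realize the synchronization the abstract mentions.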
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Recommended Citation
Garzon, M., & Rajaya, K. (2003). Neural net generation of facial displays in talking heads. Lecture Notes in Computer Science, 2686, 750-757. https://doi.org/10.1007/3-540-44868-3_95