A neurocontrol for automatic reconstruction of facial displays
Anthropomorphic representations of software agents (avatars) are used as user interfaces to increase communication bandwidth and enrich interaction with human users. Traditionally, they have been programmed and controlled by ontologies designed on intuitive and heuristic grounds. More recently, recurrent neural nets have been trained as neurocontrollers for emotional displays in avatars, on a continuous scale of negative, neutral, and positive feedback, that is meaningful to users in the context of tutoring sessions in a particular domain (computer literacy). We report on a new neurocontrol, implemented as a recurrent network, that autonomously and dynamically generates and synchronizes the movements of facial features such as lips, eyes, eyebrows, and gaze in order to produce facial displays that convey high nonverbal information content to untrained human users. The neurocontrol is modular and can be easily integrated with the semantic processing modules of larger agents that operate in real time, such as videoconference systems, tutoring systems, and, more generally, user interfaces coupled with affective computing modules for naturalistic communication. A novel technique, cascade inversion, provides an alternative to backpropagation through time in cases where the latter fails to train recurrent neural nets built from previously learned modules that play a role in the final solution. © 2010 IEEE.
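The abstract describes a recurrent network that maps an affect signal to synchronized trajectories for several facial features. The paper's actual architecture, dimensions, and training scheme are not given here, so the following is only a minimal illustrative sketch, assuming an Elman-style recurrent net with a scalar affect input and one displacement output per feature; all names, sizes, and weights are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a recurrent neurocontroller for facial displays.
# An Elman-style RNN maps a scalar affect signal (negative..positive) at
# each time step to synchronized displacement parameters for four facial
# features. Dimensions and random weights are illustrative only; the
# paper's network and training method (cascade inversion) are not shown.

FEATURES = ["lips", "eyes", "eyebrows", "gaze"]

class FacialNeurocontrol:
    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (hidden, 1))             # affect -> hidden
        self.W_rec = rng.normal(0.0, 0.5, (hidden, hidden))       # hidden recurrence
        self.W_out = rng.normal(0.0, 0.5, (len(FEATURES), hidden))  # hidden -> features
        self.h = np.zeros(hidden)                                 # recurrent state

    def step(self, affect):
        """One time step: affect in [-1, 1] -> feature displacements in [-1, 1]."""
        self.h = np.tanh(self.W_in @ np.array([affect]) + self.W_rec @ self.h)
        return np.tanh(self.W_out @ self.h)

# Drive the controller with a ramp from negative to positive affect;
# each row of the trajectory holds one displacement per facial feature.
ctrl = FacialNeurocontrol()
trajectory = np.array([ctrl.step(a) for a in np.linspace(-1.0, 1.0, 10)])
print(trajectory.shape)  # (10, 4)
```

Because the hidden state persists across calls to `step`, the feature outputs at each frame depend on the history of the affect signal, which is what lets a single recurrent state synchronize lips, eyes, eyebrows, and gaze over time.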
Proceedings of the International Joint Conference on Neural Networks
Garzon, M., & Sivakumar, B. (2010). A neurocontrol for automatic reconstruction of facial displays. Proceedings of the International Joint Conference on Neural Networks. https://doi.org/10.1109/IJCNN.2010.5596799