Intent Prediction in Human-Human Interactions

Abstract

The human ability to infer others' intent is innate and crucial to development. Machines ought to acquire this ability for seamless interaction with humans. In this article, we propose an agent model for predicting the intent of actors in human-human interactions. This requires simultaneous generation and recognition of an interaction at any time, for which end-to-end models are scarce. The proposed agent actively samples its environment via a sequence of glimpses. At each sampling instant, the model infers the observation class and completes the partially observed body motion. It learns the sequence of body locations to sample by jointly minimizing the classification and generation errors. The model is evaluated on videos of two-skeleton interactions under two settings: (first person) one skeleton is the modeled agent and the other skeleton's joint movements constitute its visual observation, and (third person) an audience is the modeled agent and the two interacting skeletons' joint movements constitute its visual observation. Three methods for implementing the attention mechanism are analyzed using benchmark datasets. One of them, where attention is driven by sensory prediction error, achieves the highest classification accuracy in both settings by sampling less than 50% of the skeleton joints, while also being the most efficient in terms of model size. This is the first known attention-based agent to learn end-to-end from two-person interactions for intent prediction, with high accuracy and efficiency.
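The glimpse loop described above, where attention is driven by sensory prediction error, can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's architecture: the skeleton, the zero-initialized belief, and the use of the true frame to score error (standing in for a learned generative completion) are all assumptions for illustration.

```python
import numpy as np

# Toy sketch of prediction-error-driven attention (illustrative only).
rng = np.random.default_rng(0)

NUM_JOINTS = 25   # hypothetical number of skeleton joints
GLIMPSES = 12     # fewer than 50% of the joints, as in the abstract

# A "true" skeleton frame of 3-D joint positions the agent observes piecewise.
frame = rng.normal(size=(NUM_JOINTS, 3))

# The agent's running reconstruction of the frame; starts unknown (zeros).
belief = np.zeros_like(frame)
observed = np.zeros(NUM_JOINTS, dtype=bool)

for t in range(GLIMPSES):
    # Per-joint sensory prediction error: gap between belief and frame.
    # (A learned model would use its generative completion's error here.)
    error = np.linalg.norm(frame - belief, axis=1)
    error[observed] = -np.inf      # never resample an already-seen joint
    j = int(np.argmax(error))      # attend where prediction error is largest
    belief[j] = frame[j]           # glimpse: observe that joint's position
    observed[j] = True

coverage = observed.mean()
residual = np.linalg.norm(frame - belief)
print(f"sampled {coverage:.0%} of joints, residual error {residual:.3f}")
```

Each glimpse shrinks the reconstruction error the most where the model is currently most wrong, which is why the agent can classify well while sampling only a minority of the joints.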

Publication Title

IEEE Transactions on Human-Machine Systems
