Automatic robot programming by visual demonstration of task execution


We propose a novel approach to programming a robot by demonstrating the task multiple times in front of a vision system. The system analyzes the visual data captured while a human operator demonstrates the task and produces manipulator-level commands, so that the robot can replicate the task. In this way, human dexterity and sensory data are integrated in a single platform using computer vision techniques. This paper describes a fast and efficient algorithm by which the vision system processes a binocular video sequence to obtain the trajectory of each component of the end-effector. The concept of a trajectory bundle is introduced to avoid singularities and to obtain an optimal path.
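The abstract does not detail how a trajectory is chosen from a bundle, but the idea of keeping a set of demonstrated trajectories and selecting one that stays clear of singular configurations can be sketched as follows. This is a minimal illustration under assumed simplifications: a toy two-link planar arm (where manipulability is proportional to |sin q2| and vanishes at the elbow singularity), trajectories represented as lists of joint configurations, and hypothetical helper names (`manipulability`, `pick_from_bundle`) not taken from the paper.

```python
import math

def manipulability(q):
    # Toy 2-link planar arm: manipulability is proportional to |sin(q2)|
    # and vanishes at the elbow singularity q2 = 0 or q2 = pi.
    return abs(math.sin(q[1]))

def pick_from_bundle(bundle, eps=1e-3):
    """Discard trajectories that pass through (or graze) a singularity,
    then keep the one whose worst configuration is best conditioned."""
    feasible = [t for t in bundle
                if min(manipulability(q) for q in t) > eps]
    return max(feasible,
               key=lambda t: min(manipulability(q) for q in t))

# A bundle of three demonstrated trajectories
# (each a list of (q1, q2) joint configurations):
bundle = [
    [(0.0, 0.1), (0.2, 0.05)],             # grazes the elbow singularity
    [(0.0, 1.2), (0.3, 1.4), (0.5, 1.5)],  # stays well-conditioned
    [(0.0, 0.0), (0.1, 0.2)],              # starts at a singularity
]
best = pick_from_bundle(bundle)  # selects the well-conditioned trajectory
```

A real implementation would score full Jacobian conditioning along trajectories recovered from the binocular video, but the selection principle, preferring the bundle member farthest from singular configurations, is the same.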

Publication Title

International Conference on Advanced Robotics, Proceedings, ICAR
