Automatic generation of robot program code: Learning from perceptual data

Abstract

We propose a novel approach to programming a robot by demonstrating the task several times in front of a vision system. The method integrates human dexterity with sensory data, using computer vision techniques, in a single platform. A simultaneous feature detection and tracking framework tracks the relevant features (the fingertips and the wrist joint): a Kalman filter performs the tracking by predicting the tentative location of each feature, and a HOS-based data clustering algorithm extracts it. Color information of the features is used to establish correspondences. The resulting fast, efficient, and robust vision system processes a binocular video sequence to obtain the trajectories and orientation of the end effector. The concept of a trajectory bundle is introduced to avoid singularities and to obtain an optimal path.
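The predict-then-extract loop described above can be sketched with a standard constant-velocity Kalman filter for a single 2-D feature. This is a minimal illustration, not the paper's implementation: the state layout `[x, y, vx, vy]`, the noise covariances, and the simulated detections are all assumptions introduced here.

```python
import numpy as np

dt = 1.0  # one frame interval (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is observed
Q = np.eye(4) * 0.01   # process noise (illustrative value)
R = np.eye(2) * 1.0    # measurement noise (illustrative value)

def predict(x, P):
    """Predict the tentative feature location for the next frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the extracted feature position z."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a simulated feature moving 2 px/frame in x and 1 px/frame in y.
x = np.zeros(4)
P = np.eye(4)
for t in range(1, 6):
    x, P = predict(x, P)                 # tentative location
    z = np.array([2.0 * t, 1.0 * t])     # stand-in for the HOS-based extraction
    x, P = update(x, P, z)
print(np.round(x[:2], 1))
```

In the full system one such filter would run per feature, with the predicted location restricting the image region in which the clustering step searches, and color cues resolving which detection belongs to which filter.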

Publication Title

Proceedings of the IEEE International Conference on Computer Vision
