Learning and generalization of task-parameterized skills through few human demonstrations

In robotics, the demand for adaptable skills capable of handling diverse situations has surpassed the reliance on repetitive tasks. To improve the generalization of motion policies, task-parameterized learning has emerged as a valuable approach: it encodes relevant contextual information in task parameters, enabling flexible task execution. This approach, however, requires collecting multiple demonstrations across varied situations, and generating a set of situations that covers all possible cases is complex. Training with fewer demonstrations is therefore highly desirable. In this article, we present a novel algorithm that generates synthetic information to facilitate the generalization of parameterized tasks. The algorithm builds on Kinesthetic Fast Marching Learning, a Learning from Demonstration (LfD) method that derives optimal movement paths from velocity fields. It generates data autonomously, producing demonstrations that are at least as optimal as those provided by users. Evaluation relies on a Wasserstein-distance-based metric that accounts for the probabilistic data of the generated paths. The algorithm has been evaluated in simulated environments, where it showed greater efficiency in generalization and produced more optimal paths than two widely used algorithms, and in real-world tests of a task-oriented scenario (sweeping) carried out by the ADAM robot.
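To make the evaluation idea concrete, the sketch below shows one simple way a Wasserstein-based comparison between a demonstrated path and a generated path could be computed. This is an illustrative approximation, not the paper's exact metric: it uses the 1-D Wasserstein distance per coordinate (for equal-size, uniformly weighted samples this reduces to sorting and averaging absolute differences) and averages across dimensions; the function names `wasserstein_1d` and `path_distance` are hypothetical.

```python
def wasserstein_1d(u, v):
    """1-D Wasserstein (earth mover's) distance between two equal-size
    samples with uniform weights: sort both and average the absolute
    pairwise differences of the sorted values."""
    if len(u) != len(v):
        raise ValueError("this simplified form needs equal-size samples")
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

def path_distance(path_a, path_b):
    """Average per-coordinate 1-D Wasserstein distance between two paths,
    each given as a list of points (tuples of equal dimension)."""
    dims = len(path_a[0])
    return sum(
        wasserstein_1d([p[d] for p in path_a], [p[d] for p in path_b])
        for d in range(dims)
    ) / dims

# Identical paths have distance 0; shifted paths have distance equal to the shift.
demo = [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]
shifted = [(x + 1.0, y) for x, y in demo]
print(path_distance(demo, demo))     # 0.0
print(path_distance(demo, shifted))  # 0.5 (shift of 1.0 averaged over 2 dims)
```

A full implementation would compare distributions over many generated paths (the "probabilistic data" mentioned above) rather than two single trajectories, e.g. via a multivariate or sliced Wasserstein distance.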