Robot learning frameworks, such as Programming by Demonstration, learn tasks from sets of user demonstrations. In their naïve implementation, these frameworks assume that all the data from the user demonstrations has been correctly sensed and is relevant to the task. Analogous to feature selection, which selects a subset of relevant features for use in model construction, this paper presents a demonstration selection process that is additionally applied to feature selection for further data filtering. The demonstration and feature selection process presented is called Dissimilarity Mapping Filtering (DMF). DMF involves three steps: obtaining a measurement of dissimilarity (e.g. Dynamic Time Warping), reducing dimensions through a mapping algorithm (e.g. sum of dissimilarities, Multidimensional Scaling), and filtering (e.g. z-score based, DBSCAN). As a demonstration selector, DMF discards demonstrations that are outliers with respect to all the considered features simultaneously. As a feature selector, DMF discards features that are highly inconsistent across demonstrations. We apply DMF to our Continuous Goal-Directed Actions (CGDA) robot learning framework, presented in previous works.
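The three-step pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: it picks Dynamic Time Warping as the dissimilarity measure, sum of dissimilarities as the mapping, and a z-score filter; the function names and the threshold value are assumptions for the example.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic Dynamic Time Warping cost between two 1-D sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dmf_select(demos, z_thresh=2.0):
    """Return the indices of demonstrations kept by a minimal DMF pass."""
    k = len(demos)
    # Step 1 -- Dissimilarity: pairwise DTW matrix between demonstrations.
    diss = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            d = dtw_distance(demos[i], demos[j])
            diss[i, j] = diss[j, i] = d
    # Step 2 -- Mapping: reduce each row to a scalar (sum of dissimilarities).
    scores = diss.sum(axis=1)
    # Step 3 -- Filtering: discard demonstrations whose score lies more than
    # z_thresh standard deviations above the mean (one-sided: only a high
    # total dissimilarity marks an outlier).
    z = (scores - scores.mean()) / scores.std()
    return [i for i in range(k) if z[i] <= z_thresh]
```

For example, given six similar sine-shaped demonstrations and one linear ramp, `dmf_select` keeps the six sines and discards the ramp, whose summed dissimilarity to the rest is far above the mean.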