In recent years, social robots have been gaining popularity in our society, but learning from humans remains a challenging problem that needs to be addressed. This paper presents an experiment where, after teaching poses to a robot, a group of users is asked several questions whose answers are used to create feature filters in the robot's learning space. We study how the answers to different types of questions affect the learning accuracy of a social robot when it is trained to recognize human poses. We consider three types of questions: "Free Speech Queries", "Yes/No Queries", and "Rank Queries", building a feature filter for each type of question. In addition, we provide another filter that helps the robot reduce the effects of inaccurate answers:
the Extended Filter. We compare the performance of a
robot that learned the same poses with Active Learning
(using the four feature filters) versus Passive Learning
(without filters). Our results show that, although Active Learning can improve the robot's learning accuracy, there are some cases where this approach, using the feature filters, achieves significantly worse results
than Passive Learning if the user provides inaccurate
feedback when asked. However, the Extended Filter has
proven to maintain the benefits of Active Learning even
when the user's answers are not accurate.