Analyzing the Impact of Different Feature Queries in Active Learning for Social Robots


Description

In recent years, social robots have become increasingly common in our society, but learning from humans remains a challenging problem that needs to be addressed. This paper presents an experiment in which, after teaching poses to a robot, a group of users are asked several questions whose answers are used to create feature filters in the robot's learning space. We study how the answers to different types of questions affect the learning accuracy of a social robot trained to recognize human poses. We considered three types of questions: "Free Speech Queries", "Yes/No Queries", and "Rank Queries", building a feature filter for each type of question. In addition, we propose a fourth filter, the Extended Filter, designed to reduce the effects of inaccurate answers. We compare the performance of a robot that learned the same poses with Active Learning (using the four feature filters) against Passive Learning (without filters). Our results show that, although Active Learning can improve the robot's learning accuracy, in some cases this approach, using the feature filters, achieves significantly worse results than Passive Learning when the user provides inaccurate feedback. The Extended Filter, however, has proven to maintain the benefits of Active Learning even when the user's answers are not accurate.
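To illustrate the core idea, here is a minimal, hypothetical sketch of a feature filter in a pose-learning space: a user's answer to a query is turned into a boolean mask that restricts which features a nearest-neighbor classifier considers. All names, feature counts, and data in this example are invented for illustration; they do not reproduce the paper's actual setup or results.

```python
import random

random.seed(0)

# Hypothetical pose samples with 4 features: the first two (e.g. arm
# angles) distinguish the two poses, while the last two are noisy
# features a user's answer could rule out.
def make_sample(label):
    informative = [label * 2.0 + random.gauss(0, 0.3) for _ in range(2)]
    noise = [random.gauss(0, 3.0) for _ in range(2)]
    return informative + noise

train = [(make_sample(y), y) for y in (0, 1) for _ in range(20)]
test = [(make_sample(y), y) for y in (0, 1) for _ in range(50)]

def distance(a, b, mask):
    # A feature filter is applied as a boolean mask on the metric:
    # masked-out features do not contribute to the distance.
    return sum((x - y) ** 2 for x, y, m in zip(a, b, mask) if m)

def accuracy(mask):
    correct = 0
    for x, y in test:
        nearest = min(train, key=lambda t: distance(x, t[0], mask))
        correct += nearest[1] == y
    return correct / len(test)

# Passive learning: all features used, no filter.
no_filter = accuracy([True, True, True, True])
# Active learning: the user's answer filters out the noisy features.
with_filter = accuracy([True, True, False, False])
```

Under these assumptions, masking the noisy features should raise accuracy; conversely, an inaccurate answer that masked an informative feature would hurt it, which is the failure mode the Extended Filter is meant to mitigate.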
