How Much Should a Robot Trust the User Feedback? Analyzing the Impact of Verbal Answers in Active Learning

External link: springer link

Description

This paper assesses how the accuracy of the user's answers influences the learning of a social robot trained to recognize poses using Active Learning (AL). We compare the performance of a robot trained to recognize the same poses actively and passively, and show that users sometimes give simplistic answers that negatively impact the robot's learning. To reduce this effect, we propose a method based on lowering the trust placed in the user's responses. Experiments with 24 users indicate that our method maintains the benefits of AL even when the users' answers are inaccurate. With this method, the robot incorporates domain knowledge from the users while mitigating the impact of low-quality answers.
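
The paper's own implementation is not reproduced here; the sketch below only illustrates the general idea of placing reduced trust in user-provided labels during pool-based active learning, by adding each verbal answer with a lowered sample weight. The pose features, the simulated noisy user, and the trust value of 0.6 are all hypothetical choices for demonstration, not the authors' settings.

```python
# Minimal sketch (assumptions noted above): uncertainty-sampling active learning
# where each user-provided label is added with a sample weight equal to a
# "trust" factor, so less reliable verbal answers influence the model less.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pose-feature pool (e.g., joint angles) for two pose classes.
X_pool = rng.normal(size=(200, 4)) + np.repeat([[0.0], [1.5]], 100, axis=0)
y_pool = np.repeat([0, 1], 100)

# Small seed set with fully trusted labels; the rest is unlabeled.
labeled = [0, 50, 100, 150]
X_train = [X_pool[i] for i in labeled]
y_train = [y_pool[i] for i in labeled]
weights = [1.0] * len(labeled)

def noisy_user_answer(i, error_rate=0.3):
    """Simulated user who sometimes gives a simplistic, incorrect answer."""
    return y_pool[i] if rng.random() > error_rate else 1 - y_pool[i]

trust = 0.6  # hypothetical trust placed in the user's verbal answers

clf = LogisticRegression()
for _ in range(20):
    clf.fit(np.array(X_train), np.array(y_train), sample_weight=np.array(weights))
    # Uncertainty sampling: ask about the pose the robot is least sure of.
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    proba = clf.predict_proba(X_pool[unlabeled])
    query = unlabeled[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]
    # Incorporate the answer with reduced weight instead of full trust.
    X_train.append(X_pool[query])
    y_train.append(noisy_user_answer(query))
    weights.append(trust)
    labeled.append(query)

print("accuracy on pool:", clf.score(X_pool, y_pool))
```

Setting `trust` to 1.0 recovers standard active learning that takes every answer at face value, which is the baseline the lowered-trust variant is meant to improve on when answers are noisy.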
