Speeding-Up Action Learning in a Social Robot With Dyna-Q+: A Bioinspired Probabilistic Model Approach


Description

Robotic systems developed for social and dynamic environments require adaptive
mechanisms to operate successfully. Consequently, learning from rewards has produced meaningful results in
applications involving human-robot interaction. When the robot's state space and action set
are large, the dimensionality of the problem becomes intractable, which drastically slows down the learning
process. This effect is especially pronounced in one-step temporal-difference methods, since only one update
is performed per robot-environment interaction. In this paper, we show how the action-based learning of a
social robot can be improved by combining classical temporal-difference reinforcement learning methods,
such as Q-learning or Q(λ), with a probabilistic model of the environment. This architecture, which we
call Dyna, allows the robot to act and plan simultaneously using the experience obtained during real
human-robot interactions. Crucially, Dyna improves on the classical algorithms in terms of convergence speed and
stability, which strengthens the learning process. Hence, in this work we have embedded a Dyna architecture
in our social robot, Mini, to endow it with the ability to autonomously maintain an optimal internal state while
living in a dynamic environment.
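
To make the approach concrete, below is a minimal, illustrative sketch in Python of a tabular Dyna-Q+ loop of the kind the abstract describes: real experience drives one-step Q-learning updates, a learned model replays simulated transitions for planning, and a staleness bonus kappa * sqrt(tau) encourages revisiting actions not tried recently. The env interface (reset, step, n_actions) and all parameter names are assumptions for illustration, and the paper's probabilistic model is simplified here to a deterministic last-seen-transition model; this is not the authors' implementation.

    import math
    import random
    from collections import defaultdict

    def dyna_q_plus(env, n_episodes=100, n_planning=20,
                    alpha=0.1, gamma=0.95, epsilon=0.1, kappa=1e-3):
        """Tabular Dyna-Q+ sketch: direct RL from real experience plus
        model-based planning with an exploration bonus for stale pairs."""
        Q = defaultdict(float)          # Q[(s, a)] action values
        model = {}                      # model[(s, a)] = (reward, next_state)
        last_visit = defaultdict(int)   # time step of last real visit
        t = 0

        for _ in range(n_episodes):
            s = env.reset()
            done = False
            while not done:
                t += 1
                # Epsilon-greedy action selection over the current Q-table.
                if random.random() < epsilon:
                    a = random.randrange(env.n_actions)
                else:
                    a = max(range(env.n_actions), key=lambda a_: Q[(s, a_)])

                s2, r, done = env.step(a)

                # (a) Direct one-step Q-learning update from real experience.
                target = r + gamma * max(Q[(s2, a_)]
                                         for a_ in range(env.n_actions))
                Q[(s, a)] += alpha * (target - Q[(s, a)])

                # (b) Update the model and record when this pair was last seen.
                model[(s, a)] = (r, s2)
                last_visit[(s, a)] = t

                # (c) Planning: replay simulated transitions from the model,
                # adding the Dyna-Q+ bonus kappa * sqrt(tau) for staleness.
                for _ in range(n_planning):
                    sp, ap = random.choice(list(model))
                    rp, sp2 = model[(sp, ap)]
                    tau = t - last_visit[(sp, ap)]
                    target = rp + kappa * math.sqrt(tau) + gamma * max(
                        Q[(sp2, a_)] for a_ in range(env.n_actions))
                    Q[(sp, ap)] += alpha * (target - Q[(sp, ap)])

                s = s2
        return Q

Each real interaction thus yields n_planning additional simulated updates, which is the mechanism behind the convergence speed-up the abstract claims over one-step temporal-difference methods.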