Robots that are devised for assisting and interacting with humans are
becoming fundamental in many applications, including in healthcare,
education, and entertainment. For these robots, the capacity to exhibit
affective states plays a crucial role in creating emotional bonding with
the user. In this work, we present a biologically grounded affective
architecture that shapes the affective state of the Mini social robot
through mood and emotion blending. The affective state depends on the
perception of stimuli in the environment, which influences how the
robot behaves and affectively communicates with other
peers. According to research in neuroscience, mood typically governs
our affective state over the long term, whereas emotions act in the
short term, although both processes can overlap. Consequently, the
model presented in this manuscript blends emotion and mood to express
the robot's internal state to the users. Thus, the primary novelty of
our affective model is the expression of (i) mood, (ii) momentary
emotional reactions to stimuli, and (iii) the decay that
mood and emotion undergo over time. The system evaluation explored
whether users can correctly perceive the mood and emotions that the
robot expresses. In an online survey, users evaluated the robot's
expressions showing different moods and emotions. The results reveal
that users could correctly perceive the robot's mood and emotion.
However, emotions were recognized more easily, probably because they
are more intense affective states and mainly arise as reactions to
stimuli.
To conclude the manuscript, a case study shows how our model modulates
Mini's expressiveness depending on its affective state during a
human-robot interaction scenario.