Visual human-robot interaction - RoboticsLab

Visual human-robot interaction


Description


Cognitive Emotive Visual System


“The intelligent glance acts in the following way: it anticipates, it prevents, it uses information already known, it recognizes, it interprets.”

(José A. Marina, “Teoría de la inteligencia creadora”)

Why do we call it Cognitive vision?
Because knowledge plays an important role in human vision: we anticipate, we use previously acquired information and our knowledge of the environment, we recognize, we interpret. Human vision is an intelligent vision, not limited to the data produced by the physical reaction to a visual stimulus; it also incorporates a series of mechanisms that try to ensure that all available knowledge, both internal (the observer's own) and knowledge of the environment, is used.
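The use of prior knowledge in recognition can be illustrated with a minimal sketch (the numbers, object labels, and the kitchen scenario are illustrative assumptions, not the group's actual model): an ambiguous visual measurement is disambiguated by what the observer already knows about the environment.

```python
# Toy sketch of "cognitive" recognition: prior knowledge of the
# environment biases an ambiguous visual measurement.
# All numbers and labels are illustrative assumptions.

def posterior(likelihood: dict, prior: dict) -> dict:
    """Combine a per-class likelihood from the vision system with a
    prior learned from the environment (Bayes' rule, normalized)."""
    unnorm = {c: likelihood[c] * prior.get(c, 0.0) for c in likelihood}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# The detector alone finds "mug" and "ball" equally likely...
likelihood = {"mug": 0.5, "ball": 0.5}
# ...but the robot knows it is in a kitchen, where mugs are common.
kitchen_prior = {"mug": 0.8, "ball": 0.2}

print(posterior(likelihood, kitchen_prior))  # {'mug': 0.8, 'ball': 0.2}
```

The same ambiguous measurement would resolve the other way in a playground, which is exactly the kind of anticipation the paragraph above describes.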

Why do we call it Emotive vision?
Because affective states play a very important role in many aspects of human activity, and above all in interaction with others. Including emotional assessment in an artificial vision system adds information that can explain behaviors that could not be understood without the affective factor. Therefore, if we want to equip a robot with human communication skills, its vision system will need to incorporate a mechanism for the visual assessment of emotions.
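As a purely hypothetical illustration of why the affective factor matters (the recognizer outputs and behavior names below are assumptions, not the group's software), the same gesture can be interpreted differently once an emotion estimate is available:

```python
# Toy sketch: an affective channel changes the interpretation of the
# same visual input. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Percept:
    gesture: str   # output of a hypothetical gesture recognizer
    emotion: str   # output of a hypothetical facial-emotion classifier

def interpret(p: Percept) -> str:
    """Interpret a waving gesture differently depending on the
    detected affective state of the person."""
    if p.gesture == "wave":
        if p.emotion == "happy":
            return "greeting"    # friendly wave -> approach and greet
        if p.emotion == "angry":
            return "dismissal"   # same gesture, negative affect -> back off
    return "unknown"

print(interpret(Percept("wave", "happy")))  # greeting
print(interpret(Percept("wave", "angry")))  # dismissal
```

Without the emotion estimate, both inputs would collapse into the same gesture label, and the difference in behavior could not be explained.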

Entries:
Facial Emotion Recognition and Adaptative Postural Reaction by a Humanoid based on Neural Evolution
International Journal of Advanced Computer Science, num. 10, vol. 3, pages 481–493, 2013
J.G. Bueno M. González-Fierro L. Moreno
Teaching Human Poses Interactively to a Social Robot
Sensors, num. 9, vol. 13, pages 12406–12430, 2013
V. Gonzalez Pacheco M. Malfaz M.A. Salichs
Maggie: A Social Robot as a Gaming Platform
International Journal of Social Robotics, num. 4, vol. 3, pages 371–381, 2011
A. Ramey V. Gonzalez Pacheco F. Alonso A. Castro-Gonzalez M.A. Salichs

Entries:
Facial gesture recognition and postural interaction using neural evolution algorithm and active appearance models
Robocity2030 9th Workshop. Robots colaborativos e interacción humano-robot, 2011, Madrid, Spain
J.G. Bueno M. González-Fierro L. Moreno
Playzones: A robust detector of game boards for playing visual games with robots
Robot 2011 – III Workshop de Robótica: Robótica Experimental, Sevilla, Spain
A. Ramey M.A. Salichs

Entries:
Robots personales y asistenciales
chapter: ASIBOT: robot portátil de asistencia a discapacitados. Concepto, arquitectura de control y evaluación clínica, pages 127–144. Universidad Carlos III de Madrid, ISBN: 978-84-691-3824, 2008
R. Pacheco R. Correal A. Gimenez S. Martinez A. Jardon R. Barber M.A. Salichs
Robots Personales y Asistenciales
chapter: Desarrollo de un sistema de detección de caras y gestos para el robot personal Maggie, pages 77–96. Universidad Carlos III de Madrid, ISBN: 978-84-691-3824, 2008