Vision-based people detection using depth information for social robots. An experimental evaluation


Description

Robots are starting to be deployed in areas where they must share space with humans. In particular, social robots and people will coexist closely, because the former are intended to interact with the latter. In this context, it is crucial that robots are aware of the people around them. Traditionally, people detection has been performed on streams of 2D images. In nature, however, animals perceive their surroundings using both color and depth information. In this work, we present new people detectors that use data from depth sensors together with RGB images to cope with the characteristics of human-robot interaction scenarios. These detectors build on previous work with 2D images and on existing people detectors from other domains. The heterogeneity of the input and output data used by these algorithms usually complicates their integration into robot control architectures. We propose a common interface that can be used by any people detector, which brings several advantages. Several people detectors using depth information and the common interface have been implemented and evaluated. The results show considerable diversity among the algorithms: each has a particular domain of use, which is reflected in its performance. A well-chosen combination of several algorithms appears to be a promising way to achieve a flexible, reliable people detector.
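To illustrate the idea of a common interface behind which different detectors can be swapped or combined, the sketch below shows one possible shape for it. It is only an assumption for illustration, not the interface defined in the paper: the names (`PersonDetection`, `PeopleDetector`, `NaiveDepthBlobDetector`) are hypothetical, and NumPy arrays are assumed as the RGB and depth inputs.

```python
# Minimal sketch of a common people-detector interface (hypothetical names).
# Assumes RGB and depth frames arrive as NumPy arrays; not the paper's API.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class PersonDetection:
    """One detected person in a shared output format."""
    position_3d: Tuple[float, float, float]   # metres, camera frame
    bbox_2d: Tuple[int, int, int, int]        # x, y, width, height in pixels
    confidence: float                         # 0.0 .. 1.0


class PeopleDetector(ABC):
    """Common interface: every detector consumes RGB + depth, emits detections."""

    @abstractmethod
    def detect(self, rgb: np.ndarray, depth: np.ndarray) -> List[PersonDetection]:
        ...


class NaiveDepthBlobDetector(PeopleDetector):
    """Toy detector: reports one detection if enough pixels fall in a
    person-like depth range. Only shows how a concrete algorithm plugs
    into the shared interface."""

    def __init__(self, min_depth_m: float = 0.5, max_depth_m: float = 4.0,
                 min_pixels: int = 5000):
        self.min_depth_m = min_depth_m
        self.max_depth_m = max_depth_m
        self.min_pixels = min_pixels

    def detect(self, rgb: np.ndarray, depth: np.ndarray) -> List[PersonDetection]:
        mask = (depth > self.min_depth_m) & (depth < self.max_depth_m)
        if mask.sum() < self.min_pixels:
            return []
        ys, xs = np.nonzero(mask)
        x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
        mean_z = float(depth[mask].mean())
        return [PersonDetection(
            position_3d=(0.0, 0.0, mean_z),
            bbox_2d=(int(x0), int(y0), int(x1 - x0), int(y1 - y0)),
            confidence=min(1.0, mask.sum() / (3 * self.min_pixels)),
        )]


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.full((480, 640), 10.0)      # background far away
    depth[100:400, 250:400] = 2.0          # a person-sized blob at 2 m
    detector: PeopleDetector = NaiveDepthBlobDetector()
    print(detector.detect(rgb, depth))
```

With an interface of this kind, the control architecture only depends on the shared detection type, so individual detectors can be replaced, run in parallel, or fused, which is in line with the combination of algorithms suggested in the abstract.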
