Perceiving when, where, and how a robot is touched is an important step towards natural Human-Robot Interaction (HRI). To date, several technologies have been used in Social Robotics to determine the area where a touch occurs, in some cases requiring many sensors. Moreover, most approaches do not address the kind of touch performed.
In this paper, we introduce a novel technique based on audio analysis and machine learning. We present a proof of concept intended to provide several advantages over state-of-the-art touch-sensing technologies for HRI: cost-efficiency, since a few microphones can cover the robot's shell completely; robustness, as the microphones are not affected by electromagnetic interference or by external sounds; and accuracy, given the preliminary results.
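To illustrate the general idea (not the specific pipeline used in this paper), the sketch below shows a hypothetical audio-based touch classifier: log band-energy features are extracted from a short microphone clip around a contact event and fed to an SVM that predicts the touch gesture. The sampling rate, clip length, feature extraction, gesture classes, and synthetic training data are all assumptions made for illustration only.

```python
# Illustrative sketch only: NOT the authors' pipeline. It assumes a generic
# audio-based touch classifier: band-energy features from a short microphone
# clip, fed to an SVM that predicts a (hypothetical) touch gesture class.
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 16000          # assumed sampling rate (Hz)
CLIP_LEN = SR // 4  # assumed 250 ms clip around the contact event

def band_energy_features(clip, sr=SR, n_bands=16):
    """Log energy in n_bands frequency bands of a short audio clip."""
    _, _, sxx = spectrogram(clip, fs=sr, nperseg=256)
    bands = np.array_split(sxx.mean(axis=1), n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

# Synthetic stand-in data: three hypothetical gesture classes modeled as
# decaying tones at different centre frequencies plus noise.
rng = np.random.default_rng(0)
X, y = [], []
for label, centre_freq in enumerate([500, 1500, 3000]):
    for _ in range(40):
        t = np.arange(CLIP_LEN) / SR
        tone = np.sin(2 * np.pi * centre_freq * t) * np.exp(-8 * t)
        clip = tone + 0.05 * rng.standard_normal(CLIP_LEN)
        X.append(band_energy_features(clip))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In a real system the synthetic clips would be replaced by recordings captured by the microphones mounted on the robot shell, segmented around detected contact events.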