This paper presents a multi-modal interface for interaction between people with physical disabilities and an assistive robot. The interaction takes place through a dialogue mechanism and augmented 3D vision glasses that provide visual assistance to an end user commanding an assistive robot to perform Daily Life Activities (DLAs). The glasses can overlay augmented-reality menus and information dialogues on the view of the real world, or display them in a simulated environment for laboratory tests and user evaluation. The dialogue itself is implemented as a finite state machine and incorporates Automatic Speech Recognition (ASR) and a Text-to-Speech (TTS) converter. The study evaluates the effectiveness of these visual and auditory aids in enabling the end user to command the assistive robot ASIBOT to perform a given task.
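The dialogue mechanism mentioned above is a finite state machine driven by recognized speech. A minimal sketch of such an FSM is shown below; the state names, utterance keywords, and transition table are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from enum import Enum, auto

class State(Enum):
    """Hypothetical dialogue states for a task-commanding interaction."""
    IDLE = auto()        # waiting for the user to start a dialogue
    AWAIT_TASK = auto()  # waiting for a task request (e.g. via ASR)
    CONFIRM = auto()     # asking the user to confirm (e.g. via TTS prompt)
    EXECUTE = auto()     # robot carries out the confirmed task

# Assumed transition table: (current state, recognized keyword) -> next state
TRANSITIONS = {
    (State.IDLE, "hello"): State.AWAIT_TASK,
    (State.AWAIT_TASK, "drink"): State.CONFIRM,
    (State.CONFIRM, "yes"): State.EXECUTE,
    (State.CONFIRM, "no"): State.AWAIT_TASK,
}

def step(state: State, utterance: str) -> State:
    """Advance the FSM on one recognized utterance; ignore unknown input."""
    return TRANSITIONS.get((state, utterance), state)

# Example run: a short spoken exchange drives the machine to EXECUTE.
state = State.IDLE
for word in ["hello", "drink", "yes"]:
    state = step(state, word)
print(state)  # -> State.EXECUTE
```

In a real system the keyword lookup would be replaced by the ASR output, and each state entry would trigger a TTS prompt or an on-glasses menu update.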
DOI: 10.1007/978-3-319-03413-3_15.