The development of assistive robots for elderly and disabled people is an active field of research in the robotics community. An important part of making these systems usable is to support multimodal Human-Robot Interaction
(HRI). The overall human-machine system is complex, however: the user and the robot operate in a closed loop, and each is potentially capable of adapting to the other. The work presented here has approached the problem from three perspectives, investigating methods for analyzing, implementing, and testing an enabling multimodal interface for the ASIBOT assistive robot. Principles from Information Theory were proposed as the basis for the analysis, with the goal of increasing the information capacity of the human-machine channel, and multimodality was identified as one possible way of achieving this. Information fusion and machine learning methods of potential interest for the implementation were identified, and it was speculated that reinforcement learning could serve as an on-line adaptive component in the interface. Finally, the use of standard movement models and tasks as a basis for testing multimodal HRI was discussed and linked to typical tasks for assistive robots.
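To make the information-theoretic framing concrete: if the user's intended commands are modeled as the input X of a noisy channel and the commands recognized by the interface as its output Y, then the quantity to be increased is the mutual information I(X;Y), whose maximum over input distributions is the channel capacity C. The notation below is standard Shannon information theory, not anything specific to the ASIBOT interface:

\[
I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)},
\qquad
C = \max_{p(x)} I(X;Y) \quad \text{bits per channel use}.
\]

In this framing, adding input modalities amounts to widening the channel, which is one way to read the claim that multimodality can increase the usable information rate between user and robot.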
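To make the reinforcement-learning suggestion concrete, the sketch below shows one way an interface could adapt on-line: a simple epsilon-greedy multi-armed bandit (a restricted form of reinforcement learning) that learns which input modality to favor for a given user from observed task success. The modality names, reward signal, and simulated success rates are hypothetical illustrations, not part of the ASIBOT interface.

```python
import random


class ModalityBandit:
    """Epsilon-greedy bandit that learns which input modality works best for a user."""

    def __init__(self, modalities, epsilon=0.1):
        self.modalities = list(modalities)
        self.epsilon = epsilon
        self.counts = {m: 0 for m in self.modalities}    # times each modality was chosen
        self.values = {m: 0.0 for m in self.modalities}  # running mean reward per modality

    def select(self):
        # Explore with probability epsilon, otherwise exploit the current best estimate.
        if random.random() < self.epsilon:
            return random.choice(self.modalities)
        return max(self.modalities, key=lambda m: self.values[m])

    def update(self, modality, reward):
        # Incremental update of the mean reward for the chosen modality.
        self.counts[modality] += 1
        n = self.counts[modality]
        self.values[modality] += (reward - self.values[modality]) / n


if __name__ == "__main__":
    # Hypothetical per-user success rates, used only to simulate interaction outcomes.
    true_success = {"voice": 0.85, "joystick": 0.70, "touchscreen": 0.60}

    bandit = ModalityBandit(true_success.keys(), epsilon=0.1)
    for _ in range(2000):
        m = bandit.select()
        reward = 1.0 if random.random() < true_success[m] else 0.0  # 1 = task succeeded
        bandit.update(m, reward)

    for m in sorted(bandit.values, key=bandit.values.get, reverse=True):
        print(f"{m:12s} estimated success rate: {bandit.values[m]:.2f}")
```

A full reinforcement-learning formulation would also model state (task context, user fatigue, and so on); the bandit form is only meant to illustrate the on-line adaptation loop alluded to above.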
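The standard movement models and tasks are not named here; the most common candidate in HCI evaluation is Fitts' law with ISO 9241-9-style pointing tasks (an assumption for illustration, not a claim about which models were actually adopted). Fitts' law is a natural fit because it expresses task difficulty in bits, in line with the information-theoretic analysis above:

\[
ID = \log_2\!\left(\frac{D}{W} + 1\right) \;\text{bits},
\qquad
TP = \frac{ID}{MT} \;\text{bits/s},
\]

where D is the distance to the target, W is the target width, MT is the measured movement time, and TP is the resulting throughput. Comparable difficulty indices could, in principle, be defined for the reaching and retrieval tasks typical of assistive robots.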