An important part of developing usable assistive robots for elderly and disabled people is to allow for multimodal Human-Robot Interaction (HRI). However, the overall human-machine system is complex: the user and the robot operate in a closed loop, and both are potentially capable of adapting to the other. The work presented here approaches the problem from three different perspectives, investigating methods for analyzing, implementing, and testing an enabling multimodal interface for the ASIBOT. The main purpose of multimodality here is to reduce control errors caused by inaccurate interpretation of user intentions. A reliable, human-centered software architecture accepts high-level commands from the different modalities; that is, the user can simultaneously coordinate different modalities so as to make his/her intention clearer to the system. Safety and dependability are the underlying evaluation criteria for the new mechanical designs, actuation, and control architectures used to deliver this performance.
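To make the idea of coordinating modalities concrete, the following is a minimal illustrative sketch, not the ASIBOT implementation: it assumes two hypothetical input channels (e.g., speech and joystick gestures), each producing a high-level command with a recognizer confidence score, and fuses them so that agreement across modalities raises the evidence for a command above an assumed decision threshold.

```python
# Illustrative sketch only: hypothetical fusion of simultaneous high-level
# commands from two modalities. All names, scores, and the threshold are
# assumptions for illustration, not part of the ASIBOT software architecture.
from dataclasses import dataclass

@dataclass
class ModalityCommand:
    modality: str      # e.g., "speech" or "joystick"
    command: str       # high-level command, e.g., "grasp_cup"
    confidence: float  # recognizer confidence in [0, 1]

def fuse_commands(inputs: list[ModalityCommand], threshold: float = 0.7) -> str | None:
    """Combine simultaneous commands; cross-modality agreement raises confidence."""
    scores: dict[str, float] = {}
    for cmd in inputs:
        # Accumulate confidence per high-level command across modalities.
        scores[cmd.command] = scores.get(cmd.command, 0.0) + cmd.confidence
    if not scores:
        return None
    best, score = max(scores.items(), key=lambda kv: kv[1])
    # Act only when the fused evidence is strong enough; otherwise defer to the user.
    return best if score >= threshold else None

if __name__ == "__main__":
    fused = fuse_commands([
        ModalityCommand("speech", "grasp_cup", 0.55),
        ModalityCommand("joystick", "grasp_cup", 0.40),
    ])
    print(fused)  # -> "grasp_cup": the two weak inputs reinforce each other
```

The sketch only illustrates the stated principle: redundant, simultaneous input lets the system accept a command that neither modality alone would have carried past the decision threshold, which is how multimodality reduces misinterpretation of the user's intention.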