Nowadays, robots and humans coexist in real settings where robots must interact autonomously, making their own decisions. Many applications require robots to adapt their behavior to different users and to remember each user's preferences in order to keep them engaged in the interaction. To this end, we propose a decision-making system for social robots that drives their actions taking into account both the user's and the robot's state. This system is based on bio-inspired concepts, such as motivations, drives, and wellbeing, that facilitate the emergence of natural behaviors and ease the acceptance of the robot by users. The system has been designed to promote human-robot interaction by using drives and motivations related to social aspects, such as the users' satisfaction or the need for social interaction. Furthermore, the changes of state produced by the users' exogenous actions have been modeled as transitional states that are considered when the robot's next action is selected. Our system has been evaluated considering two different user profiles. In the proposed system, each user's preferences are taken into account and alter the homeostatic process that controls the decision-making system. As a result, using reinforcement learning algorithms and taking the robot's wellbeing as the reward function, the social robot Mini has learned from scratch two different action policies, one per user, that fit the users' preferences. The robot learned behaviors that maximize its wellbeing while keeping the users engaged in the interactions.
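The core idea above can be sketched in a few lines: wellbeing is highest when homeostatic drives sit at their ideal values, and the change in wellbeing after an action serves as the reward for a standard tabular Q-learning update. The drive names, states, and actions below are illustrative placeholders, not the ones used in the actual system:

```python
# Hypothetical drives; each value is the deviation from its ideal (0 = satisfied).
# Wellbeing decreases as drive deviations grow (a simple homeostatic formulation).
def wellbeing(drives):
    return 100.0 - sum(drives.values())

# Tabular Q-learning update; the reward is derived from the robot's wellbeing.
def q_learning_step(Q, state, action, next_state, reward,
                    alpha=0.1, gamma=0.9, actions=("greet", "play", "rest")):
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q

# Toy example: for a user who enjoys playing, the "play" action reduces the
# social drive's deviation, so wellbeing rises and the action is reinforced.
drives_before = {"social": 30.0, "energy": 10.0}
drives_after = {"social": 5.0, "energy": 15.0}
reward = wellbeing(drives_after) - wellbeing(drives_before)  # +20.0

Q = {}
Q = q_learning_step(Q, "user_present", "play", "user_engaged", reward)
print(Q[("user_present", "play")])  # 2.0 (alpha=0.1, empty Q table)
```

Because the reward depends on how each user's actions move the drives, two users with different preferences naturally induce two different learned policies, as described above.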