The future of robotics strives to embed robots ever more deeply in human environments. One task in which robots show a promising application is the monitoring and surveillance of homes. To achieve this goal, they need to gather information from the environment in a way similar to humans, since the idea is that humans should be able to interact with them as naturally as they interact with one another. This paper presents an object detection algorithm focused on semantic information; it builds on current trends in robotics and provides flexibility, so it can be exported to other environments. Semantic navigation enables modeling the environment at a level of abstraction close to the one used by humans, facilitating the interaction between the robot and the user, allowing clearer task communication, and yielding richer information from the environment, in addition to increasing the autonomy of the robot. The proposed methods recognize objects based on contours and on descriptors, and the two are combined so that each compensates for the deficiencies of the other. Finally, to show that the approach is accurate and efficient, the code is tested on a real robot.
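The combination of a shape cue (contours) with an appearance cue (descriptors) described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the bounding-box "contour" extraction, the histogram descriptor, and all function names (`contour_boxes`, `detect`, etc.) are simplified stand-ins chosen only to show the two-stage propose-then-confirm structure.

```python
import numpy as np

def contour_boxes(mask):
    """Toy 'contour' cue: bounding box of the foreground pixels in a binary mask.
    A real system would extract individual object contours instead."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return []
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

def descriptor(patch, bins=8):
    """Toy appearance descriptor: a normalized intensity histogram.
    A real system would use local feature descriptors (e.g. ORB, SIFT)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def match_score(patch, template, bins=8):
    """Histogram intersection between the patch and a template descriptor (0..1)."""
    return float(np.minimum(descriptor(patch, bins), descriptor(template, bins)).sum())

def detect(image, mask, template, threshold=0.5):
    """Combine both cues: contours propose candidate regions,
    descriptors confirm which candidates match the target object."""
    hits = []
    for (x0, y0, x1, y1) in contour_boxes(mask):
        patch = image[y0:y1 + 1, x0:x1 + 1]
        if match_score(patch, template) >= threshold:
            hits.append((x0, y0, x1, y1))
    return hits
```

The design point is that each stage covers the other's weakness: contours localize objects but cannot identify them, while descriptors identify appearance but are expensive to evaluate everywhere, so they are applied only to contour-proposed regions.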