The future
of robotics points toward robots that are ever more closely integrated with
human beings and their environments. To achieve this integration, robots need to
acquire information about the environment and the objects it contains.
Algorithms that provide robots with such perception skills are in high demand,
whether objects must be located to accomplish a task or serve as semantic
information about the environment itself. This paper presents a method that
gives mobile robots the ability to detect objects for semantic navigation. The
approach builds on current trends in robotics while remaining portable to other
platforms. Two object-detection methods are proposed, contour detection and a
descriptor-based technique, and the two are combined to overcome their
respective limitations. Finally, the code is tested on a real robot to
demonstrate its accuracy and efficiency.
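The combination of the two detectors might be sketched as follows: contour-like region proposals generate candidate objects, and a descriptor check confirms each candidate, so that neither method's weakness dominates. This is a minimal illustrative sketch, not the paper's implementation; the threshold values, the connected-component stand-in for contour detection, and the histogram descriptor are all assumptions made for the example.

```python
import numpy as np

def find_contour_regions(gray, thresh=128):
    """Candidate regions via thresholding + 4-connected components.
    A simple stand-in for a contour detector: labels each foreground
    blob and returns its bounding box as (y0, x0, y1, x1)."""
    mask = gray > thresh
    labels = np.zeros(gray.shape, dtype=int)
    current = 0
    boxes = []
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                stack = [(i, j)]
                ys, xs = [i], [j]
                while stack:  # flood fill the blob
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < gray.shape[0] and 0 <= nx < gray.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

def region_descriptor(gray, box, bins=8):
    """Toy appearance descriptor: normalized intensity histogram
    of the boxed region (a placeholder for a real feature descriptor)."""
    y0, x0, y1, x1 = box
    patch = gray[y0:y1 + 1, x0:x1 + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def confirm_match(desc_a, desc_b, max_dist=0.5):
    """Accept a contour-based detection only when its descriptor
    is close (L1 distance) to the reference object's descriptor."""
    return float(np.abs(desc_a - desc_b).sum()) < max_dist
```

In this sketch, the contour stage supplies localization (where a candidate object is) while the descriptor stage supplies identity (whether it matches a known object), which is one natural way to let each technique cover the other's limitations.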