Nowadays, image processing is one of the biggest challenges in robotics. Some tasks that are trivial for humans (such as distinguishing one object from another) are far from obvious when performed by a computer.
The problem arises when associating points that belong to the same object. Our brain performs this process automatically, but doing it digitally is far less straightforward: the machine does not perceive the environment as a set of elements, but as a set of unrelated pixels.
Following this line of argument, we want to design an algorithm that detects the human hand and segments it from the scene. Once the segmentation is done, we discriminate among its physiological elements (palm and fingers), so that the hand can be represented graphically in real time while gesturing.
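As a rough illustration of the first step of such a pipeline, the sketch below separates skin-coloured pixels from the background with a crude per-channel threshold and then locates the candidate hand region by its bounding box. This is a minimal assumption-laden sketch, not the method developed in this work: the threshold values and the synthetic test frame are illustrative choices only.

```python
import numpy as np

def skin_mask(frame):
    """Boolean mask of pixels whose RGB values fall inside a crude skin-tone range.

    The thresholds are illustrative assumptions, not tuned parameters.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) \
        & ((r - np.minimum(g, b)) > 15)

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the True pixels, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic 8x8 frame: blue background with a 3x3 skin-toned patch standing in
# for the hand. A real system would read frames from a camera instead.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[..., 2] = 200                    # blue background
frame[2:5, 3:6] = [200, 120, 90]       # skin-toned patch
print(bounding_box(skin_mask(frame)))  # (2, 3, 4, 5)
```

In a real-time setting this mask would feed the later stages (palm/finger discrimination), typically after morphological cleanup of the binary mask.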
The final goal is to teach the robot to manipulate objects without human intervention.