The aim of this project is to investigate strategies for approaching and manipulating 3D objects with a robotic manipulator using vision and force control. The grasp and the approach can be determined according to the shape of the object and the free space available to reach it.
This research line is being carried out in collaboration with the LISIF laboratory of the University Pierre & Marie Curie in Paris, as part of an Integrated Action.
The increasing need for remote human intervention requires the development of robotic systems capable of performing tasks with full autonomy and high dexterity. Current remotely controlled applications, or those performed by autonomous robotic systems, are concentrated in hostile settings such as radioactive environments, undersea sites or space, but can also be found in fields closer to everyday life such as manufacturing or security. New applications cover surgical operations in tele-medicine and extend to assistive tasks that help disabled persons. Robots are then required to carry out operations in environments which may not be adapted to them and which are crowded with mobile and fixed obstacles. The objective is therefore to equip modern robots with the ability to reach a desired position, identify the target object, grasp it and manipulate it as required by the task at hand. Several robotic systems with a similar philosophy have been developed in recent years; however, the work of integration is still far from complete. If these robotic systems are called upon to operate in the real world, and consequently to interact actively with an environment which is neither structured nor easy to model, then a greater effort should be made to equip them with vision systems.

The problem is then to develop algorithms and strategies for the grasping and manipulation of 3D objects using visual information provided by a multi-camera system in charge of perceiving the scene. Since the robot is expected to come into contact with its environment, the force exerted at the end-effector is also of major importance for achieving an optimal interaction. It is therefore clear that hybrid vision/force/position control is required to interact correctly with the environment and to ensure optimal manipulation of objects by the robotic system.
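To illustrate the hybrid vision/force idea mentioned above, the following is a minimal sketch (not the project's actual controller): a selection matrix S assigns each Cartesian axis of the end-effector either to an image-based visual-servoing law or to a force-feedback law, and the two velocity contributions are summed. The gains, the selection matrix, the desired wrench and the stand-in interaction matrix are all illustrative assumptions.

```python
# Hypothetical sketch of a hybrid vision/force controller for a 6-DOF end-effector.
# Gains, selection matrix S and helper functions are illustrative assumptions.
import numpy as np

# Selection matrix: 1 = axis driven by vision (motion), 0 = axis driven by force.
# Here translation along z (the approach/contact axis) is force-controlled.
S = np.diag([1.0, 1.0, 0.0, 1.0, 1.0, 1.0])

K_v = 0.5          # visual-servoing gain
K_f = 0.002        # force-feedback gain (m/s per N)
f_desired = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # desired contact wrench

def vision_velocity(s, s_star, L_pinv):
    """Image-based visual servoing: map the image-feature error to a
    Cartesian velocity through the pseudo-inverse of the interaction matrix."""
    return -K_v * L_pinv @ (s - s_star)

def force_velocity(f_measured):
    """Map the wrench error to a corrective Cartesian velocity."""
    return K_f * (f_desired - f_measured)

def hybrid_command(s, s_star, L_pinv, f_measured):
    """Vision drives the axes selected by S; force drives the
    complementary axes (I - S)."""
    return S @ vision_velocity(s, s_star, L_pinv) + \
           (np.eye(6) - S) @ force_velocity(f_measured)

if __name__ == "__main__":
    # Toy example: 4 image-point features (8 coordinates) and a flat contact.
    rng = np.random.default_rng(0)
    L_pinv = 0.1 * rng.standard_normal((6, 8))  # stand-in interaction matrix pseudo-inverse
    s, s_star = rng.standard_normal(8), np.zeros(8)
    f_meas = np.array([0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
    print(hybrid_command(s, s_star, L_pinv, f_meas))  # end-effector velocity command
```

In a real system the interaction matrix would be computed from the observed image features and the camera model, and the selection matrix would reflect which directions are constrained by contact; the sketch only shows how the two feedback loops can be combined on complementary axes.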