iCub, humanoid robot, Italian Institute of Technology (IIT), Genova, Italy, RobotCub Consortium, Europe


iCub - the robot child

Uploaded on May 13, 2009

Interview with IIT by Euronews about the RobotCub project.


iCub - Humanoid Platform

Uploaded on Dec 6, 2011

The iCub is the humanoid robot developed as part of the EU project RobotCub and subsequently adopted by more than 20 laboratories worldwide. It can see and hear, and it has senses of proprioception and movement.

Credits: Laura Taverna, Matteo Tamboli, Vadim Tikhanoff, Carlo Ciliberto, Ugo Pattacini, Lorenzo Natale, Francesco Nori, Francesco Becchi, Giorgio Metta, Giulio Sandini. Robotics, Brain & Cognitive Sciences - Italian Institute of Technology


All gestures you can 2.0

Published on Nov 5, 2013

This video shows a new version of the memory game "All gestures you can", in which a human challenges the humanoid robot iCub. "All gestures you can" is a real-time memory game: the goal is to perform the longest sequence of hand gestures that you or your opponent can remember. The game is built on a gesture recognition system that exploits 3D features based on motion and appearance; these features are then enriched with a sparse coding stage and classified by linear SVMs. In the previous version we used a Kinect to obtain 3D information; here we instead rely directly on the iCub's stereo vision. Furthermore, we can now train new gestures in real time from a single demonstration.
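
As a rough illustration of such a pipeline (not the authors' code), the sketch below uses scikit-learn: random vectors stand in for the 3D motion/appearance descriptors, a learned dictionary provides the sparse coding stage, and a linear SVM does the classification. All feature sizes, class counts and data are made up.

```python
# Sketch of the described pipeline: 3D motion/appearance features are
# sparse-coded over a learned dictionary and classified with a linear SVM.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 32))    # per-sequence descriptors (placeholder)
y_train = rng.integers(0, 5, size=100)  # 5 gesture classes (placeholder)

# Sparse coding stage: learn an overcomplete dictionary, encode features sparsely.
coder = DictionaryLearning(n_components=64, transform_algorithm="lasso_lars",
                           random_state=0)
Z_train = coder.fit_transform(X_train)

# Linear SVM on the sparse codes.
clf = LinearSVC().fit(Z_train, y_train)

# Classify a newly observed gesture descriptor.
X_new = rng.normal(size=(1, 32))
print(clf.predict(coder.transform(X_new)))
```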
 

Enhancing Software module reusability using port plug-ins: an iCub Experiment

Published on Apr 17, 2014

Systematically developing high-quality reusable software components is a difficult task and requires careful design to find a proper balance between potential reuse, functionality and ease of implementation. Extensibility is an important property of software that helps reduce development cost and significantly boosts reusability. This work introduces an approach to enhancing component reusability by extending component functionality using plug-ins at the level of the connection points (ports). Application-dependent functionality such as data monitoring and arbitration can be implemented in a conventional scripting language and plugged into the ports of components. The main advantage of our approach is that it avoids introducing application-dependent modifications to existing components, thus reducing development time and fostering the development of simpler and therefore more reusable components. Another advantage is that it reduces communication and deployment overhead, because extra functionality can be added without introducing additional modules. The video demonstrates the port plug-ins in a clean-the-table scenario on the iCub humanoid robot.
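
The sketch below illustrates the general idea in plain Python (it is not the actual YARP API): monitoring and arbitration logic is attached to a port's connection point as a plug-in, so the components on either side stay unmodified.

```python
# Conceptual port plug-in: application-dependent logic lives at the
# connection point, not inside the communicating components.
class PortPlugin:
    def accept(self, data):       # arbitration: decide whether to deliver
        return True
    def update(self, data):       # monitoring/transformation of the data
        return data

class ThresholdFilter(PortPlugin):
    """Example plug-in: drop messages whose 'force' field is negligible."""
    def accept(self, data):
        return abs(data.get("force", 0.0)) > 0.1

class Port:
    def __init__(self):
        self.plugins, self.callbacks = [], []
    def attach(self, plugin):     # plug behaviour into the connection point
        self.plugins.append(plugin)
    def on_receive(self, callback):
        self.callbacks.append(callback)
    def deliver(self, data):
        for p in self.plugins:
            if not p.accept(data):
                return            # arbitration: message suppressed
            data = p.update(data) # monitoring/rewriting
        for cb in self.callbacks:
            cb(data)

port = Port()
port.attach(ThresholdFilter())
port.on_receive(lambda d: print("component received:", d))
port.deliver({"force": 0.5})      # delivered
port.deliver({"force": 0.01})     # filtered out by the plug-in
```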
 

Execution of Pushing Action with Semantic Event Chains: iCub First Integration

Published on Apr 17, 2014

Here we present the first integration on the iCub robot of a framework for manipulation execution based on the so-called "Semantic Event Chain". The Semantic Event Chain is an abstract description of the relations between the objects in the scene. It captures how those relations change during a manipulation and thereby provides the decisive temporal anchor points by which a manipulation is critically defined.
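
A minimal way to picture a Semantic Event Chain, with invented objects and simplified relation labels, is a table of pairwise relations sampled at the key frames where some relation changes:

```python
# Illustrative SEC encoding: each row tracks the relation between one object
# pair; each column is a key frame. Labels: T = touching, N = not touching.
pairs = [("hand", "pusher"), ("pusher", "box"), ("box", "table")]
sec = {
    ("hand",   "pusher"): ["N", "T", "T", "T", "N"],
    ("pusher", "box"):    ["N", "N", "T", "N", "N"],
    ("box",    "table"):  ["T", "T", "T", "T", "T"],
}

# The column-to-column transitions are the temporal anchor points that
# define the manipulation (e.g. grasp tool, make contact, release).
for k in range(4):
    changes = [p for p in pairs if sec[p][k] != sec[p][k + 1]]
    print(f"event {k}->{k + 1}: relations changed for {changes}")
```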
 

The 10th iCub birthday

Published on May 26, 2014

The iCub is a humanoid robot shaped like a four-year-old child. It is available as an open platform under the GPL license. The iCub was originally designed by a consortium of 11 partners led by the Italian Institute of Technology, with backgrounds ranging from engineering to neurophysiology and developmental psychology, within the RobotCub Integrated Project funded by the European Commission through its Cognitive Systems and Robotics Unit.

The iCub can crawl on all fours and sit up. Its hands allow dexterous manipulation, and its head and eyes are fully articulated. It has visual, vestibular, auditory, and tactile sensory capabilities. In the past few years the community of researchers working on the iCub has grown at a constant pace; today there are more than 25 iCub platforms in use worldwide. At the same time, the platform has evolved significantly in terms of its sensors and actuators. Thanks to recent improvements, the iCub is now equipped with whole-body distributed tactile and force/torque sensing, series elastic actuators for compliant walking experiments (the COMAN actuators), and a movable head with microphones, a speaker, and actuated eyes, eyelids and lips for speech and human-robot interaction studies.

The key advantage of the iCub is that it is an integrated platform that allows the study of complex tasks requiring speech and auditory perception, vision, and proper coordination and integration of sensory data and control. We believe that this, more than ever, requires that researchers with different expertise join forces and start working together. Having platforms with compatible software and hardware is clearly a unique opportunity for collaboration.
 

iCub balancing by controlling external forces: CoDyCo project results

Published on May 29, 2014

iCub, the humanoid robot of the Italian Institute of Technology, can stand and balance even when interacting with people. Thanks to its artificial skin, which equips the robot with 4000 sensitive contact points, the iCub's control system measures the external forces and regulates these interactions in order to keep its balance.

These new capabilities will be pivotal when the iCub comes to cohabit with human beings in domestic environments. The results have been achieved by researchers working at the Italian Institute of Technology and, in particular, by those funded by the European project CoDyCo, coordinated by Dr. Francesco Nori.
 

3D Estimation and Fully Automated Learning of Eye-Hand Coordination in Humanoid Robots

Published on Oct 20, 2014

This work deals with the problem of 3D estimation and eye-hand calibration in humanoid robots. Using the iCub humanoid robot, we developed a fully automatic procedure based on optimization techniques that does not require any human supervision. The end-effector of the humanoid robot is automatically detected in the stereo images. We demonstrate the usefulness and the effectiveness of the proposed system in two typical robotic scenarios: (1) object grasping; (2) 3D scene reconstruction.

Fanello S.R., Pattacini U., Gori I., Tikhanoff V., Randazzo M., Roncone A., Odone F. & Metta G. 2014, ‘3D Stereo Estimation and Fully Automated Learning of Eye-Hand Coordination in Humanoid Robots’, IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, November 18-20, 2014.
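
Two ingredients of such a system can be sketched as follows, assuming a calibrated stereo pair and NumPy. The linear triangulation and the Kabsch fit below are standard textbook methods standing in for the paper's optimization, not its actual code.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one point from two camera matrices."""
    (u1, v1), (u2, v2) = uv_left, uv_right
    A = np.vstack([u1 * P_left[2] - P_left[0],
                   v1 * P_left[2] - P_left[1],
                   u2 * P_right[2] - P_right[0],
                   v2 * P_right[2] - P_right[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def fit_rigid(A, B):
    """Kabsch fit: rotation R and translation t such that R @ A_i + t ~ B_i,
    e.g. aligning kinematic hand predictions A with visual estimates B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

# Toy demo: two axis-aligned cameras with a 7 cm baseline.
K = np.array([[320.0, 0, 160], [0, 320, 120], [0, 0, 1]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), [[-0.07], [0.0], [0.0]]])
X_true = np.array([0.05, -0.02, 0.5])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
print(triangulate(P_l, P_r, proj(P_l, X_true), proj(P_r, X_true)))  # ~ X_true
```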
 

Automatic kinematic chain calibration using artificial skin

Published on Oct 22, 2014

This video deals with the problem of self-touch (or double touch): the robot touches itself on a specific region of its skin. This behaviour represents an unprecedented opportunity for a humanoid robot, since it achieves the simultaneous activation of multiple skin parts (in this video, patches belonging to the right hand and the left forearm). This wealth of information can then be used to perform a completely autonomous calibration of the body model (i.e. kinematic or tactile calibration). In the paper cited below, this capability was used to perform a kinematic calibration of both the right and the left arms.

Roncone A., Hoffmann M., Pattacini U. & Metta G. 2014, ‘Automatic kinematic chain calibration using artificial skin: self-touch in the iCub humanoid robot’, IEEE International Conference on Robotics and Automation (ICRA 2014), pp. 2305-2312, Hong Kong, China, May 31-June 7, 2014.
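
The core idea can be sketched as a least-squares problem: at each self-touch configuration, the forward kinematics of the touching chain must agree with the measured taxel position, with the joint offsets as unknowns. The toy model below (SciPy, planar two-link arm) only illustrates the principle, not the iCub's kinematics.

```python
import numpy as np
from scipy.optimize import least_squares

def fk(q, offsets):
    """Toy planar 2-link forward kinematics with unknown joint offsets."""
    a = q + offsets
    return np.array([np.cos(a[0]) + np.cos(a[0] + a[1]),
                     np.sin(a[0]) + np.sin(a[0] + a[1])])

# Each sample pairs the touching chain's joint angles with the taxel
# position measured on the touched chain (synthesised with true offsets).
true_offsets = np.array([0.05, -0.03])
qs = np.random.default_rng(1).uniform(-1, 1, size=(30, 2))
taxels = np.array([fk(q, true_offsets) for q in qs])

def residuals(offsets):
    return np.concatenate([fk(q, offsets) - p for q, p in zip(qs, taxels)])

sol = least_squares(residuals, x0=np.zeros(2))
print("estimated offsets:", sol.x)   # ~ [0.05, -0.03]
```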
 

Learning grasp dependent pull affordances of tools on the iCub humanoid robot

Published on Nov 11, 2014

This video shows a condensed version of the experiment carried out in order to study how tool affordances can be learned and predicted based on the tool’s functional/geometrical features. In a nutshell, what the robot does on each trial is to associate the geometrical features of the tool with the affordance of the action performed with that tool grasped in a particular orientation. Affordance here is defined as the vector representing the measure of the effect as a function of an action parameter. In this case the action parameter is the position of the tool with respect to the object while the effect is the displacement of the object after the pull action.
In this experiment we use 3 grasp orientations and 4 tools. After enough trials have been recorded, a classifier is trained to predict the affordance of a tool based on its geometrical features. Based on this prediction, the iCub can determine how best to position the tool in order to pull the object closer.
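
A minimal sketch of this learning setup, with synthetic features and a scikit-learn regressor standing in for the actual classifier, might look like this:

```python
# Sketch: map a tool's geometric features (plus grasp orientation) to its
# affordance vector, i.e. object displacement at each tool-vs-object position.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Placeholder features: e.g. tool length, hook angle, grasp orientation id.
X = rng.normal(size=(120, 3))
# Placeholder affordance vectors: displacement at 4 relative tool positions.
Y = rng.normal(size=(120, 4))

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, Y)

new_tool = rng.normal(size=(1, 3))
affordance = model.predict(new_tool)[0]
best_position = int(np.argmax(affordance))  # position with largest predicted effect
print("use tool position", best_position)
```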
 

Improvements of the iCub balancing

Published on Nov 15, 2014

This video shows some of the latest results achieved in the whole-body control of iCub, the humanoid robot of the Italian Institute of Technology. In particular, it shows improvements to the balancing controller, which now optimizes the internal torques subject to bounds on the external wrenches (i.e. foot forces and torques). These bounds ensure that, for instance, the robot's feet do not slip even when performing highly dynamic tasks.
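
Schematically, this is a constrained optimization of the following kind; the matrices below are random placeholders for the robot's dynamics, and SciPy's SLSQP stands in for whatever solver the controller actually uses.

```python
# Sketch: minimize internal torques subject to a balancing task and to
# bounds on the external (contact) wrenches, so the feet do not slip.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_tau = 6                                 # number of torques (placeholder)
A_task = rng.normal(size=(3, n_tau))      # balancing task: A_task @ tau = b_task
b_task = rng.normal(size=3)
W = rng.normal(size=(4, n_tau))           # torques -> contact wrench (placeholder)
w_max = np.full(4, 5.0)                   # bounds on the external wrenches

res = minimize(
    lambda tau: tau @ tau,                # minimize the internal torques
    x0=np.zeros(n_tau),
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda tau: A_task @ tau - b_task},
        # two-sided bound |W @ tau| <= w_max as two linear inequalities
        {"type": "ineq", "fun": lambda tau: np.concatenate([w_max - W @ tau,
                                                            w_max + W @ tau])},
    ],
)
print("torques:", res.x)
```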

The results have been achieved by researchers working at the Italian Institute of Technology and, in particular, by those funded by the European project CoDyCo, coordinated by Dr. Francesco Nori.
 

iCub, humanoid robot, Italian Institute of Technology, at ET2014 (ST Microelectronics booth)

Published on Nov 20, 2014
 

iCub Philosophy, Some History & Recent Results - video lecture by Prof. Giorgio Metta

Published on Dec 19, 2014

The iCub project was started 10 years ago within the field of human robotics, focused mostly on building models of cognitive behaviours. The goals of the project include understanding how the human brain solves certain problems and then using this knowledge within robotics to build models that allow robots to solve the same problems autonomously, hence becoming highly intelligent systems.

20 years ago neurophysiologists discovered a special type of neuron, the visuomotor neuron. This discovery triggered a shift in the way robots were programmed to learn and perceive: mirroring the function of these neurons, the goal became not only for the robot to perceive its surroundings, but also to act upon the newly acquired information, to carry out a task, solve a problem, or imitate what a person would do when he or she perceives and processes external information.

To translate these goals into practice, the first experiments focused on recording human grasping actions via data gloves and trackers; these recorded actions, translated into images, were then used to build a classifier that helps the robot predict which action to carry out when it perceives certain information. Furthermore, neuroscience studies showing that it is not essential to recognise the object to be grasped, only its shape, helped Prof. Metta and his team build models that prepare the robot to recognise the shape of an object just before executing an action such as grasping it. This made it possible for the robot to execute more complex behaviours, such as recognising not only the objects to be grasped or pushed, but also tools that may help the robot do something with the object in question. In one of the experiments led by Prof. Metta, this translated into an iCub grasping a tool to bring closer an object placed too far away for the robot to reach directly.

Watch Prof. Metta's presentation to find out more about his amazing experiments using the iCub robot and what his latest research efforts are all about.
 

iCub balancing on one foot while interacting with humans

Published on Feb 24, 2015

This video shows the latest results achieved in the whole-body control of the iCub, the humanoid robot developed by the Italian Institute of Technology. In particular, it shows the performance of the balancing controller when the robot stands on one foot. Knowledge of the robot's dynamics and measurement of the external perturbations allow the robot to interact safely with humans as well as to perform highly dynamic motions.

The control of the robot is achieved by regulating the interaction forces between the robot and its surrounding environment. In particular, the force and torque exchanged between the robot's foot and the floor are regulated so that the robot keeps its balance even when strongly perturbed.
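
A concrete piece of this picture is the feasibility condition on the regulated foot wrench: the centre of pressure must stay inside the support polygon, and the tangential force inside the friction cone, for the robot not to tip or slip. The check below is an illustration with made-up foot dimensions and friction coefficient.

```python
import numpy as np

def foot_wrench_ok(f, tau, foot_half_x=0.06, foot_half_y=0.04, mu=0.5):
    """Check a measured foot force f and torque tau (about the ankle,
    at the sole) against tipping and slipping conditions."""
    fz = f[2]
    if fz <= 0:                                 # contact must push, not pull
        return False
    cop_x, cop_y = -tau[1] / fz, tau[0] / fz    # centre of pressure
    inside = abs(cop_x) <= foot_half_x and abs(cop_y) <= foot_half_y
    no_slip = np.hypot(f[0], f[1]) <= mu * fz   # friction-cone test
    return inside and no_slip

print(foot_wrench_ok(f=np.array([5.0, 2.0, 300.0]),
                     tau=np.array([3.0, -6.0, 0.0])))   # True: balanced contact
```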

These new capabilities will be pivotal when the iCub comes to cohabit with human beings in domestic environments. The results have been achieved by researchers working at the Italian Institute of Technology and, in particular, by those funded by the European projects CoDyCo and Koroibot, with Dr. Francesco Nori as principal investigator.
 

iCub balancing while performing goal directed actions: CoDyCo 2nd year validation

Published on Apr 22, 2015

iCub, the humanoid robot of the Italian Institute of Technology, can stand, balance, and perform goal-directed actions while interacting with people. Thanks to its artificial skin, which equips the robot with 4000 sensitive contact points, the iCub's control system measures the external forces and regulates these interactions in order to keep its balance.

These new capabilities will be pivotal when the iCub comes to cohabit with human beings in domestic environments. The results have been achieved by researchers working at the Italian Institute of Technology and, in particular, by those funded by the European project CoDyCo, coordinated by Dr. Francesco Nori.
 

Interactive object learning on the iCub robot with Caffe libraries

Published on May 5, 2015

This video shows the new visual recognition capabilities of the iCub. Recognition is performed by a classifier trained on top of the output of a deep convolutional neural network (implemented with the Caffe libraries).
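
A sketch of this scheme is shown below; the original work used the Caffe libraries, while here torchvision and scikit-learn are used purely as stand-ins.

```python
# Sketch: a lightweight classifier trained on top of features from a
# pretrained deep CNN, so new objects can be learned interactively.
import torch
from torchvision import models
from sklearn.svm import LinearSVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # expose the 512-d feature vector
backbone.eval()

@torch.no_grad()
def features(batch):                   # batch: (N, 3, 224, 224) normalised images
    return backbone(batch).numpy()

# Cropped views of each shown object (random stand-ins here) become
# training data for a linear classifier that is cheap to retrain on the fly.
X = features(torch.randn(20, 3, 224, 224))
y = [0] * 10 + [1] * 10                # two object labels given by the human
clf = LinearSVC().fit(X, y)
print(clf.predict(features(torch.randn(1, 3, 224, 224))))
```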
 