# Topics > Entities > Scientific institutions > Cluster of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, North Rhine-Westphalia, Germany

## Airicist

Website - cit-ec.de

youtube.com/citecbielefeld

Bielefeld University on Wikipedia

Projects:

HECTOR, hexapod walking robot

----------


## Airicist

The Curious Robot

Uploaded on Jul 25, 2010

> In Bielefeld, work is carried out on a bimanual anthropomorphic platform including the torso BARTHOC as a communication partner. We study interactive robot learning within an object learning scenario, i.e. labeling, grasping, and removing objects, aiming at a more natural human-robot cooperation. In particular, our research focuses on: bimanual action representation and execution; tactile sensors and manipulation based on tactile feedback; online learning of object detection; integration and coordination of perception and action; and principles of human-robot dialog, including non-verbal communication and the combination of exploratory and guided learning.

----------


## Airicist

HECTOR, the novel hexapod robot from Bielefeld

Uploaded on Apr 13, 2011

> A novel hexapod cognitive robot named HECTOR has been developed in CITEC's Mulero-project. HECTOR possesses the scaled-up morphology of a stick insect. The robot uses a new type of bioinspired, self-contained, elastic joint drives for the 18 joints of its 6 legs and 2 drives for body segment actuation. Both types of drives have been developed within the research group 'Mechatronics of Biomimetic Actuators' at Bielefeld University. HECTOR will serve as a test-bed for advanced concepts in autonomous walking which also include planning-ahead capabilities. The video shows the very first presentation of the design concept together with the prototype of HECTOR's legs. Beyond CITEC, HECTOR will also serve as the biomechatronic foundation for the EU-project EMICAB.

----------


## Airicist

Team ToBI | RoboCup 2014 Qualification Video

Published on Feb 7, 2014

----------


## Airicist

Interactive disambiguation of object references for grasping tasks

Published on Jul 18, 2014

> Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects and their properties (color, size, elongation, position). Ambiguous references to objects are resolved in an interactive dialogue by asking for the most informative object property in the given situation. Ultimately, pointing gestures can be used to resolve a reference. The robot system is able to pick and place objects at a new target location (which might be changing as well), to hand an object over to the user, and to talk about the current scene state.
> 
> [1] A. Ückermann, R. Haschke, and H. Ritter, "Realtime 3D segmentation for human-robot interaction," in Proc. IROS, 2013, pp. 2136–2143.
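
The "most informative property" query can be illustrated with a small entropy-based sketch. This is a hypothetical toy (the function names and candidate format are mine, not the authors' system): among the remaining object hypotheses, ask about the property whose value distribution has the highest entropy, then filter by the answer.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of one property's value distribution."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def most_informative_property(candidates, properties):
    """Pick the property whose values best split the remaining hypotheses."""
    return max(properties, key=lambda p: entropy([obj[p] for obj in candidates]))

def disambiguate(candidates, properties, ask):
    """One dialogue turn: query the best property, keep matching hypotheses."""
    prop = most_informative_property(candidates, properties)
    value = ask(prop)  # stand-in for the speech interface
    return prop, [obj for obj in candidates if obj[prop] == value]

# Toy scene with three object hypotheses: size is uninformative here
# (all small), so the system asks about color first.
scene = [
    {"color": "red", "size": "small"},
    {"color": "red", "size": "small"},
    {"color": "blue", "size": "small"},
]
prop, remaining = disambiguate(scene, ["color", "size"], lambda p: "red")
```

After this turn two red hypotheses remain; a pointing gesture, as in the video, would settle the final choice.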

----------


## Airicist

Real-Time Hierarchical Scene Segmentation and Classification

Published on Aug 28, 2014

> We present an extension to our previously reported real-time scene segmentation approach which generates a complete hierarchy of segmentation hypotheses. An object classifier traverses the hypothesis tree in a top-down manner, returning good object hypotheses and thus helping to select the correct level of abstraction for segmentation and to avoid over- and under-segmentation. Combining model-free, bottom-up segmentation results with trained, top-down classification results, our approach improves both classification and segmentation results.
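
The top-down traversal described above can be sketched in a few lines. A hypothetical illustration, not the authors' implementation: each node of the hypothesis tree is scored by a classifier; a confident score accepts the node as an object, an unconfident one descends to the node's children, i.e. to a finer segmentation.

```python
class Hypothesis:
    """One node in the segmentation hierarchy."""
    def __init__(self, label, children=None):
        self.label = label              # stand-in for the segment's features
        self.children = children or []

def select_objects(node, classify, threshold=0.5):
    """Traverse the hypothesis tree top-down, returning accepted hypotheses."""
    score, label = classify(node)
    if score >= threshold:
        return [(node, label)]          # confident: stop at this abstraction level
    # unconfident: the segment is likely under-segmented, so try its parts
    return [h for c in node.children for h in select_objects(c, classify, threshold)]

# Toy classifier that knows "cup" and "ball" but not the merged blob, so the
# traversal descends one level and accepts the two child hypotheses.
tree = Hypothesis("blob", [Hypothesis("cup"), Hypothesis("ball")])

def toy_classify(node):
    known = {"cup": 0.9, "ball": 0.8}
    return known.get(node.label, 0.1), node.label

accepted = select_objects(tree, toy_classify)
```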

----------


## Airicist

Robot Christmas Elf CITEC

Published on Dec 16, 2014

> A Production by the Neuroinformatics Group

----------


## Airicist

Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch

Published on Sep 29, 2015

> This video accompanies our Humanoids 2015 paper, "Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch".
> 
> To augment traditionally vision-based body schema learning with a sensory channel that provides more accurate positional information, we propose a tactile-servoing feedback controller that allows a robot to continuously acquire self-touch information while sliding a fingertip across its own ...
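
The core of a tactile-servoing loop like the one mentioned can be sketched as a simple proportional controller. This is a minimal toy under assumed values (the gain, contact stiffness, and cycle time are invented), not the paper's controller: regulate the normal contact force toward a setpoint while commanding a constant tangential sliding velocity.

```python
def tactile_servo_step(measured_force, target_force, slide_speed, kp=0.1):
    """One 10 ms control cycle: returns (normal velocity, tangential velocity)."""
    force_error = target_force - measured_force
    v_normal = kp * force_error    # press harder if contact is too light
    return v_normal, slide_speed

# Simulate contact with a linear spring model; the force converges to the
# 1.0 N setpoint while the fingertip keeps sliding at 1 cm/s.
force, stiffness, dt = 0.0, 50.0, 0.01   # toy stiffness and 10 ms cycle
for _ in range(200):
    v_n, v_t = tactile_servo_step(force, target_force=1.0, slide_speed=0.01)
    force += stiffness * v_n * dt        # dF = k * dx for the toy contact
```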

----------


## Airicist

A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects

Published on Sep 29, 2015

> This video accompanies our Humanoids 2015 paper, "A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects".
> 
> We present a novel hierarchical control framework that unifies our previous work on tactile-servoing with visual-servoing approaches to allow for robust manipulation and exploration of unknown objects, including, but not limited to, robust grasping, online grasp optimization, in-hand ...
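
The hierarchical unification of tactile and visual servoing can be illustrated with a tiny null-space projection example. The numbers and names below are purely hypothetical, assuming the high-priority tactile task owns the contact-normal direction and the visual task may only act tangentially:

```python
def project_out(v, n):
    """Remove from v its component along the unit vector n."""
    dot = sum(a * b for a, b in zip(v, n))
    return [a - dot * b for a, b in zip(v, n)]

def hierarchical_command(v_tactile, v_visual, contact_normal):
    """Higher-priority tactile velocity plus the visual velocity projected
    into its null space (the plane tangent to the contact)."""
    tangential_visual = project_out(v_visual, contact_normal)
    return [t + u for t, u in zip(v_tactile, tangential_visual)]

v = hierarchical_command(
    v_tactile=[0.0, 0.0, -0.01],      # press along -z to maintain contact
    v_visual=[0.05, 0.0, 0.02],       # visual servoing toward a target
    contact_normal=[0.0, 0.0, 1.0],   # assumed unit contact normal
)
# the visual z-component is discarded; the tactile command passes through
```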

----------

