View Full Version : Cluster of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, North Rhine-Westphalia, Germany



Airicist
15th February 2014, 11:36
Website - cit-ec.de (https://www.cit-ec.de)

youtube.com/citecbielefeld (https://www.youtube.com/citecbielefeld)

Bielefeld University (https://en.wikipedia.org/wiki/Bielefeld_University) on Wikipedia

Projects:

HECTOR (https://pr.ai/showthread.php?9324), hexapod walking robot

Airicist
15th February 2014, 11:38
https://youtu.be/D8Q8Udh7CMg

The Curious Robot

Uploaded on Jul 25, 2010


In Bielefeld, work is carried out on a bimanual anthropomorphic platform including the torso BARTHOC as a communication partner. We study interactive robot learning within an object-learning scenario, i.e. labeling, grasping, and removing objects, aiming at more natural human-robot cooperation. In particular, our research focuses on: bimanual action representation and execution; tactile sensors and manipulation based on tactile feedback; online-learning object detection; integration and coordination of perception and action; and principles of human-robot dialog, including non-verbal communication and the combination of exploratory and guided learning.

Airicist
15th February 2014, 11:39
https://youtu.be/sRB6G1OlXJI

HECTOR, the novel hexapod robot from Bielefeld

Uploaded on Apr 13, 2011


A novel hexapod cognitive robot named HECTOR has been developed in CITEC's Mulero project. HECTOR possesses the scaled-up morphology of a stick insect. The robot uses a new type of bioinspired, self-contained, elastic joint drive for the 18 joints of its 6 legs, plus 2 further drives for body segment actuation. Both types of drives have been developed within the research group 'Mechatronics of Biomimetic Actuators' at Bielefeld University. HECTOR will serve as a test-bed for advanced concepts in autonomous walking, including planning-ahead capabilities. The video shows the very first presentation of the design concept together with the prototype of HECTOR's legs. Beyond CITEC, HECTOR will also serve as the biomechatronic foundation for the EU project EMICAB.
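
As a back-of-the-envelope illustration of the actuation layout described above (6 legs with 3 elastic joint drives each, plus 2 body-segment drives, i.e. 20 actuators in total), here is a minimal Python sketch. The alpha/beta/gamma joint naming follows common stick-insect terminology and is an assumption, not CITEC's actual naming.

# Minimal sketch of HECTOR's drive layout as described above: 6 legs with
# 3 elastic joint drives each (18 total) plus 2 drives for body segment
# actuation. The alpha/beta/gamma names are assumed, not CITEC's naming.

LEGS = ["L1", "L2", "L3", "R1", "R2", "R3"]   # front/middle/hind, left/right
LEG_JOINTS = ["alpha", "beta", "gamma"]       # protraction, elevation, extension
BODY_JOINTS = ["body_1", "body_2"]            # two inter-segment drives

def all_drives():
    """Enumerate every actuated joint of the robot."""
    drives = [f"{leg}_{joint}" for leg in LEGS for joint in LEG_JOINTS]
    return drives + BODY_JOINTS

drives = all_drives()
assert len(drives) == 20    # 18 leg drives + 2 body drives
print(drives)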

Airicist
15th February 2014, 11:40
https://youtu.be/JrxRPpCdC4E

Team ToBI | RoboCup 2014 Qualification Video

Published on Feb 7, 2014

Airicist
18th July 2014, 16:37
https://youtu.be/mkGp_V0oDvo

Interactive disambiguation of object references for grasping tasks

Published on Jul 18, 2014


Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects and their properties (color, size, elongation, position). Ambiguous references to objects are resolved in an interactive dialogue by asking for the most informative object property in the given situation. Ultimately, pointing gestures can be used to resolve a reference. The robot system is able to pick and place objects at a new target location (which might change as well), to hand over an object to the user, and to talk about the current scene state.

[1] A. Ückermann, R. Haschke, and H. Ritter, "Real-time 3D segmentation for human-robot interaction," in Proc. IROS, 2013, pp. 2136--2143.
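
To make the "most informative property" step concrete, here is a hedged Python sketch: given several object hypotheses matching a reference, ask about the property whose value distribution has the highest entropy, i.e. whose answer narrows the candidate set the most. The property list comes from the description above; the entropy-based scoring and all names are illustrative assumptions, not the system's actual implementation.

# Hedged sketch of the disambiguation idea described above: when several
# objects match a reference, ask about the property whose answer is
# expected to narrow the candidate set the most (highest entropy).
import math
from collections import Counter

PROPERTIES = ["color", "size", "elongation", "position"]  # from the description

def entropy(values):
    """Shannon entropy of a value distribution; higher = more informative question."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def most_informative_property(candidates):
    """candidates: list of dicts mapping property name -> discrete value."""
    return max(PROPERTIES, key=lambda p: entropy([obj[p] for obj in candidates]))

scene = [
    {"color": "red",  "size": "small", "elongation": "long",  "position": "left"},
    {"color": "red",  "size": "small", "elongation": "short", "position": "center"},
    {"color": "blue", "size": "small", "elongation": "long",  "position": "right"},
]
print(most_informative_property(scene))  # prints "position": it splits all three objects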

Airicist
28th August 2014, 11:04
https://youtu.be/gI7c9RC7gKg

Real-Time Hierarchical Scene Segmentation and Classification

Published on Aug 28, 2014


We present an extension to our previously reported real-time scene segmentation approach which generates a complete hierarchy of segmentation hypotheses. An object classifier traverses the hypothesis tree in a top-down manner, returning good object hypotheses and thus helping to select the correct level of abstraction for segmentation and to avoid over- and under-segmentation. Combining model-free, bottom-up segmentation results with trained, top-down classification results, our approach improves both classification and segmentation results.
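
The top-down traversal can be pictured as follows: a minimal Python sketch, assuming a classifier that returns a (label, confidence) pair per segment. The Node layout, the classify interface, and the 0.8 threshold are illustrative assumptions, not the authors' implementation.

# Sketch of the top-down traversal described above: walk the segmentation
# hypothesis tree from coarse to fine and keep a node when the classifier
# is confident, otherwise descend into its children.
from dataclasses import dataclass, field

@dataclass
class Node:
    segment: object                      # e.g. a point-cloud patch
    children: list = field(default_factory=list)

def traverse(node, classify, threshold=0.8):
    """Return accepted (segment, label) pairs at the right abstraction level.

    classify: callable mapping a segment to (label, confidence); stands in
    for the trained object classifier.
    """
    label, confidence = classify(node.segment)
    if confidence >= threshold or not node.children:
        # Confident hit (or leaf): accept this level, avoiding over-segmentation.
        return [(node.segment, label)]
    # Otherwise recurse: the node is likely under-segmented (merged objects).
    results = []
    for child in node.children:
        results.extend(traverse(child, classify, threshold))
    return results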

Airicist
16th December 2014, 21:53
https://youtu.be/2naQbWa-Sv8

Robot Christmas Elf CITEC

Published on Dec 16, 2014


A Production by the Neuroinformatics Group

Airicist
29th September 2015, 17:00
https://youtu.be/_4WALbxJUxc

Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch

Published on Sep 29, 2015


This video accompanies our Humanoids 2015 paper, "Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch".

To augment traditionally vision-based body schema learning with a sensory channel that provides more accurate positional information, we propose a tactile-servoing feedback controller that allows a robot to continuously acquire self-touch information while sliding a fingertip across its own body. In this manner one can quickly acquire a large amount of training data representing the body shape.

We compare three approaches to tracking the common contact point observed when one robot arm touches the other in a bimanual setup: feed-forward control, relying solely on CAD-based kinematics, performs worst; a controller based only on tactile feedback typically lags behind; only the combination of both approaches yields satisfactory results.

As a first, preliminary application, we use the self-touch capability to calibrate the closed kinematic chain formed by the two arms touching each other. The obtained homogeneous transform, describing the relative mounting pose of the two arms, improves end-effector position estimates by an order of magnitude compared to a traditional, vision-based approach.
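
A minimal sketch of the combination the abstract reports as working best: a feed-forward sliding velocity from the CAD-based kinematics, corrected by a proportional tactile-servoing term that re-centers the measured contact on the sensor array. The 2-D error model, gain value, and function names are assumptions for illustration.

# Minimal sketch of the combined controller described above: feed-forward
# fingertip velocity from CAD kinematics plus a tactile correction that
# keeps the contact's center of pressure centered on the sensor array.
import numpy as np

K_TACTILE = 0.5   # proportional gain on the tactile error (assumed value)

def tactile_error(pressure_image):
    """Offset of the contact's center of pressure from the array center (in taxels)."""
    ys, xs = np.nonzero(pressure_image)
    if len(xs) == 0:
        return np.zeros(2)                      # no contact: no correction
    weights = pressure_image[ys, xs].astype(float)
    cop = np.array([xs @ weights, ys @ weights]) / weights.sum()
    center = (np.array(pressure_image.shape[::-1]) - 1) / 2.0
    return cop - center

def commanded_velocity(v_feedforward, pressure_image):
    """Feed-forward sliding velocity (from CAD kinematics) corrected by tactile feedback.

    v_feedforward: 2-D in-plane fingertip velocity predicted from the model.
    """
    return v_feedforward - K_TACTILE * tactile_error(pressure_image)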

Airicist
29th September 2015, 17:01
https://youtu.be/Lxnd1XwOA8M

A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects

Published on Sep 29, 2015


This video accompanies our Humanoids 2015 paper, "A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects".

We present a novel hierarchical control framework that unifies our previous work on tactile servoing with visual-servoing approaches to allow for robust manipulation and exploration of unknown objects, including, but not limited to, robust grasping, online grasp optimization, in-hand manipulation, and exploration of object surfaces.

The framework is divided into three layers: a joint-level layer, a tactile-servoing layer, and a visual-servoing layer. While the middle layer provides "blind" surface exploration skills, maintaining desired contact patterns, the visual layer monitors and controls the actual object pose, providing high-level fingertip motion commands that are merged with the tactile-servoing control commands.

We illustrate the versatility of the proposed framework using a series of manipulation actions performed with two KUKA LWR arms, each equipped with a tactile sensor array as a "sensitive fingertip". The two considered objects are unknown to the robot, i.e. neither shape nor friction properties are available.
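
The three-layer structure can be sketched as follows in Python, with the tactile and visual twists merged additively before being mapped to joint velocities. All interfaces, gains, and the simple additive merge are assumptions; the paper's actual projection and merging scheme may differ.

# Rough sketch of the three-layer framework described above: the visual
# layer issues high-level fingertip motion from the observed object pose,
# the tactile layer adds corrections maintaining the desired contact
# pattern, and the merged twist is handed to joint-level control.
import numpy as np

def visual_layer(object_pose, target_pose, gain=0.8):
    """High-level fingertip twist driving the object toward the target pose.

    Treats the 6-D pose error directly as a twist: a small-error
    approximation, assumed here for brevity.
    """
    return gain * (target_pose - object_pose)

def tactile_layer(measured_contact, desired_contact, gain=0.5):
    """Correction twist maintaining the desired contact pattern ("blind" skill)."""
    twist = np.zeros(6)
    twist[:3] = gain * (desired_contact - measured_contact)
    return twist

def joint_layer(twist, jacobian_pinv):
    """Map the merged Cartesian twist to joint velocities (joint-level layer)."""
    return jacobian_pinv @ twist

def control_step(object_pose, target_pose, measured_contact, desired_contact,
                 jacobian_pinv):
    merged = (visual_layer(object_pose, target_pose)
              + tactile_layer(measured_contact, desired_contact))
    return joint_layer(merged, jacobian_pinv)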