
Thread: Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, North Rhine-Westphalia, Germany

  1. #1

    Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, North Rhine-Westphalia, Germany

    Website - cit-ec.de

    youtube.com/citecbielefeld

    Bielefeld University on Wikipedia

    Projects:

    HECTOR, a hexapod walking robot

  2. #2


    The Curious Robot

    Uploaded on Jul 25, 2010

    In Bielefeld, work is carried out on a bimanual anthropomorphic platform, including the torso BARTHOC, as a communication partner. We study interactive robot learning within an object learning scenario (labeling, grasping, and removing objects), aiming at more natural human-robot cooperation. In particular, our research focuses on: bimanual action representation and execution; tactile sensors and manipulation based on tactile feedback; online learning of object detection; integration and coordination of perception and action; principles of human-robot dialog, including non-verbal communication; and the combination of exploratory and guided learning.
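    As a rough illustration of how such a combination of exploratory and guided learning can work, here is a toy Python sketch of a nearest-mean classifier that asks the human tutor only when its own prediction is uncertain. All names and interfaces are illustrative assumptions, not code from the CITEC system.

    import numpy as np

    class InteractiveObjectLearner:
        """Toy nearest-mean classifier that asks a human tutor when uncertain."""

        def __init__(self, confidence_threshold=0.7):
            self.prototypes = {}                 # label -> mean feature vector
            self.threshold = confidence_threshold

        def predict(self, features):
            if not self.prototypes:
                return None, 0.0
            # Confidence decays with distance to the closest prototype.
            label, dist = min(((l, np.linalg.norm(features - p))
                               for l, p in self.prototypes.items()),
                              key=lambda x: x[1])
            return label, float(np.exp(-dist))

        def process(self, features, ask_human):
            label, confidence = self.predict(features)
            if confidence < self.threshold:
                # Guided learning: fall back to the human communication partner.
                label = ask_human()
            # Exploratory learning: every labeled object refines its prototype.
            prev = self.prototypes.get(label, features)
            self.prototypes[label] = 0.5 * (prev + features)
            return label

    # Usage: the first, unfamiliar object triggers a question to the tutor.
    learner = InteractiveObjectLearner()
    print(learner.process(np.array([0.9, 0.1]), ask_human=lambda: "apple"))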

  3. #3


    HECTOR, the novel hexapod robot from Bielefeld

    Uploaded on Apr 13, 2011

    A novel hexapod cognitive robot named HECTOR has been developed in CITEC's Mulero project. HECTOR possesses the scaled-up morphology of a stick insect. The robot uses a new type of bioinspired, self-contained, elastic joint drive for the 18 joints of its six legs, plus two drives for body-segment actuation. Both drive types were developed within the research group 'Mechatronics of Biomimetic Actuators' at Bielefeld University. HECTOR will serve as a test bed for advanced concepts in autonomous walking, including planning-ahead capabilities. The video shows the very first presentation of the design concept together with the prototype of HECTOR's legs. Beyond CITEC, HECTOR will also serve as the biomechatronic foundation for the EU project EMICAB.

  4. #4


    Team ToBI | RoboCup 2014 Qualification Video

    Published on Feb 7, 2014

  5. #5


    Interactive disambiguation of object references for grasping tasks

    Published on Jul 18, 2014

    Using 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects and their properties (color, size, elongation, position). Ambiguous references to objects are resolved in an interactive dialogue by asking for the most informative object property in a given situation. Ultimately, pointing gestures can be used to resolve a reference. The robot system is able to pick and place objects at a new target location (which may change as well), to hand an object over to the user, and to talk about the current scene state.

    [1] A. Ückermann, R. Haschke, and H. Ritter, "Realtime 3D segmentation for human-robot interaction," in Proc. IROS, 2013, pp. 2136–2143.
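    The dialogue strategy above hinges on picking the "most informative object property" for the current scene. A minimal sketch of one plausible criterion, assuming a simple attribute-dictionary scene representation (all names here are hypothetical): ask about the property whose value distribution over the candidate objects has the highest entropy, since its answer rules out the most candidates.

    from collections import Counter
    from math import log2

    def most_informative_property(candidates, properties=("color", "size",
                                                          "elongation", "position")):
        """Return the property whose answer best splits the candidate objects,
        measured by the entropy of its value distribution."""
        def entropy(prop):
            counts = Counter(obj[prop] for obj in candidates)
            total = sum(counts.values())
            return -sum(c / total * log2(c / total) for c in counts.values())
        return max(properties, key=entropy)

    candidates = [
        {"color": "red",  "size": "small", "elongation": "long",  "position": "left"},
        {"color": "red",  "size": "large", "elongation": "short", "position": "left"},
        {"color": "blue", "size": "small", "elongation": "long",  "position": "left"},
    ]
    # "position" is useless here (all values identical); "color" splits best.
    print(most_informative_property(candidates))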

  6. #6


    Real-Time Hierarchical Scene Segmentation and Classification

    Published on Aug 28, 2014

    We present an extension to our previously reported real-time scene segmentation approach that generates a complete hierarchy of segmentation hypotheses. An object classifier traverses the hypothesis tree in a top-down manner, returning good object hypotheses and thus helping to select the correct level of abstraction for segmentation, avoiding both over- and under-segmentation. By combining model-free, bottom-up segmentation results with trained, top-down classification results, our approach improves both classification and segmentation results.
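    A minimal sketch of such a top-down traversal, assuming a simple tree node layout and a classifier returning a (label, score) pair (both illustrative, not the authors' interfaces): a hypothesis is accepted as soon as the classifier is confident; otherwise the search descends to the finer child segments.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        segment: object                  # e.g. a point-cloud patch
        children: list = field(default_factory=list)

    def select_hypotheses(node, classify, threshold=0.8):
        """Return (segment, label) pairs at the best abstraction level."""
        label, score = classify(node.segment)
        if score >= threshold or not node.children:
            # Confident object hypothesis (or a leaf): stop descending.
            return [(node.segment, label)]
        # Not a good object at this level: recurse into finer segments
        # to avoid under-segmentation.
        hypotheses = []
        for child in node.children:
            hypotheses += select_hypotheses(child, classify, threshold)
        return hypotheses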

  7. #7


    Robot Christmas Elf CITEC

    Published on Dec 16, 2014

    A Production by the Neuroinformatics Group

  8. #8


    Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch

    Published on Sep 29, 2015

    This video accompanies our Humanoids 2015 paper, "Towards Body Schema Learning using Training Data Acquired by Continuous Self-touch".

    To augment traditionally vision-based body schema learning with a sensory channel that provides more accurate positional information, we propose a tactile-servoing feedback controller that allows a robot to continuously acquire self-touch information while sliding a fingertip across its own body. In this manner, one can quickly acquire a large amount of training data representing the body shape.

    We compare three approaches to tracking the common contact point observed when one robot arm touches the other in a bimanual setup: feed-forward control, relying solely on CAD-based kinematics, performs worst; a controller based only on tactile feedback typically lags behind; only the combination of both approaches yields satisfactory results.

    As a first, preliminary application, we use the self-touch capability to calibrate the closed kinematic chain formed by both arms touching each other. The resulting homogeneous transform, describing the relative mounting pose of the two arms, improves end-effector position estimates by an order of magnitude compared to a traditional, vision-based approach.
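    The calibration step reduces to estimating a single homogeneous transform between the two arms' base frames from corresponding self-touch contact points (each expressed in its own arm's base frame via forward kinematics). A generic way to do this is least-squares rigid alignment (the Kabsch/SVD method), sketched below; this is a standard stand-in, not necessarily the paper's exact procedure.

    import numpy as np

    def rigid_transform(points_a, points_b):
        """Return a 4x4 homogeneous T such that points_b ~= T @ points_a."""
        a, b = np.asarray(points_a), np.asarray(points_b)
        ca, cb = a.mean(axis=0), b.mean(axis=0)          # centroids
        H = (a - ca).T @ (b - cb)                        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Correct an improper rotation (reflection) if the SVD produces one.
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                               # proper rotation
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = cb - R @ ca                           # translation
        return T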

  9. #9


    A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects

    Published on Sep 29, 2015

    This video accompanies our Humanoids 2015 paper, "A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects".

    We present a novel hierarchical control framework that unifies our previous work on tactile servoing with visual-servoing approaches to allow for robust manipulation and exploration of unknown objects, including, but not limited to, robust grasping, online grasp optimization, in-hand manipulation, and exploration of object surfaces.

    The framework is divided into three layers: a joint-level layer, a tactile-servoing layer, and a visual-servoing layer. While the middle layer provides “blind” surface exploration skills, maintaining desired contact patterns, the visual layer monitors and controls the actual object pose, providing high-level fingertip motion commands that are merged with the tactile-servoing control commands.

    We illustrate the versatility of the proposed framework in a series of manipulation actions performed with two KUKA LWR arms, each equipped with a tactile sensor array as a “sensitive fingertip”. The two objects considered are unknown to the robot, i.e. neither their shape nor their friction properties are available.
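    One plausible reading of the command-merging step is a projection-based split: the tactile-servoing layer keeps authority along the contact normal (to maintain the desired contact pattern), while the visual-servoing commands act in the tangent plane. The sketch below illustrates this idea; the merging rule is an assumption, not taken from the paper.

    import numpy as np

    def merge_commands(v_tactile, v_visual, contact_normal):
        """Merge two 3D fingertip velocity commands into one."""
        n = np.asarray(contact_normal, dtype=float)
        n /= np.linalg.norm(n)
        P_normal = np.outer(n, n)          # projects onto the contact normal
        P_tangent = np.eye(3) - P_normal   # projects onto the tangent plane
        return P_normal @ v_tactile + P_tangent @ v_visual

    # Example: tactile control presses against the surface while vision
    # slides the fingertip along it.
    v = merge_commands(v_tactile=np.array([0.0, 0.0, -0.01]),
                       v_visual=np.array([0.02, 0.0, 0.0]),
                       contact_normal=np.array([0.0, 0.0, 1.0]))
    print(v)   # -> [ 0.02  0.   -0.01]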
