
Thread: Optimus, bomb disposal robot, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

  1. #1

  2. #2


    Optimus: plan and execute

    Published on Jan 1, 2015

    -The motion plan is computed with Drake, using a given trajectory bundle for the end effector as constraints. The trajectory bundle was learned from human motion.
    -The plan is shown on the interface so the operator can review it.
    -The operator approves the plan and sends it for execution.
    -The robot executes the plan.
    -The current robot state is visualized at 10 Hz during execution (a skeleton of this loop is sketched below).
    Uploaded time: December 23, 2014 10:16 AM
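
    A minimal sketch of that plan/review/approve/execute loop, with stub functions standing in for the actual Drake and Director calls (plan_with_drake, show_plan_in_director, and the other names are placeholders, not the real Optimus API):

        import time

        # Placeholder stubs -- the real system calls into Drake (planning) and
        # Director (operator interface); these names are illustrative only.
        def plan_with_drake(trajectory_bundle): ...
        def show_plan_in_director(plan): ...
        def operator_approves(plan): ...
        def send_plan_to_robot(plan): ...
        def plan_finished(plan): ...
        def get_robot_state(): ...
        def visualize_state(state): ...

        def supervised_plan_and_execute(trajectory_bundle, rate_hz=10.0):
            # 1. Plan: end-effector constraints come from a trajectory bundle
            #    learned from human motion.
            plan = plan_with_drake(trajectory_bundle)

            # 2. Review: the candidate plan is shown on the operator interface.
            show_plan_in_director(plan)

            # 3. Approve: nothing is executed without operator approval.
            if not operator_approves(plan):
                return

            # 4. Execute, visualizing the current robot state at ~10 Hz.
            send_plan_to_robot(plan)
            while not plan_finished(plan):
                visualize_state(get_robot_state())
                time.sleep(1.0 / rate_hz)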

  3. #3


    Optimus' affordance grasp using hand seed

    Published on Jul 1, 2015

    This video shows our robot executing a grasping task using a shared autonomy approach. The system uses Director and Drake for visualization and motion planning.
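
    One plausible reading of the "hand seed" workflow, sketched with hypothetical names (neither function below is the actual Director/Drake API): the operator drops a candidate hand pose next to a fitted affordance, and the grasp target is stored relative to the affordance frame so it follows any later refinement of the affordance estimate.

        import numpy as np

        def seed_relative_to_affordance(T_world_affordance, T_world_seed):
            """Express an operator-placed hand seed in the affordance frame.

            Both arguments are 4x4 homogeneous transforms. Storing the seed
            relative to the affordance lets the grasp target track the object
            if its pose estimate is updated.
            """
            return np.linalg.inv(T_world_affordance) @ T_world_seed

        def grasp_target_in_world(T_world_affordance, T_affordance_seed):
            """Recover the world-frame grasp target used to seed the planner."""
            return T_world_affordance @ T_affordance_seed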

  4. #4


    Optimus mobile manipulator simulation in Gazebo

    Published on Jul 19, 2015

    Initial development of a Gazebo simulator for the Optimus robot at MIT.
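
    As a rough illustration of what such a setup involves (assuming the standard gazebo_ros spawn service and a hypothetical optimus.urdf robot description; the actual Optimus simulator packaging is not described here), a robot model can be loaded into a running Gazebo instance like this:

        import rospy
        from gazebo_msgs.srv import SpawnModel
        from geometry_msgs.msg import Pose

        rospy.init_node("spawn_optimus")
        rospy.wait_for_service("/gazebo/spawn_urdf_model")
        spawn = rospy.ServiceProxy("/gazebo/spawn_urdf_model", SpawnModel)

        # optimus.urdf is a hypothetical path to the robot description.
        with open("optimus.urdf") as f:
            urdf = f.read()

        spawn(model_name="optimus",
              model_xml=urdf,
              robot_namespace="/optimus",
              initial_pose=Pose(),
              reference_frame="world")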

  5. #5


    Optimus robot - shared autonomy

    Published on Dec 2, 2015

    This video shows MIT’s robot OPTIMUS working under remote human control using a shared autonomy approach. Our system uses Director and Drake (drake.mit.edu) for visualization and motion planning.

    Optimus is a next-generation bomb disposal robot: a 16-DOF, dual-arm, highly dexterous mobile manipulator that integrates a Highly Dexterous Manipulation System (HDMS) by RE2, a Husky UGV by Clearpath, 3-finger grippers by Robotiq, and a Multisense SL by Carnegie Robotics. Optimus can be remotely controlled by a human operator to execute complex manipulation tasks and is being used in our supervised teleautonomy project at the Interactive Robotics Group at CSAIL, MIT.

  6. #6


    Manipulation tasks using shared autonomy. Optimus robot @ MIT

    Published on Jun 9, 2016

    - This video demonstrates the Optimus robot performing manipulation tasks using shared autonomy: a human operator performs the task planning with assisted perception and assisted motion planning (a toy sketch of the perception step follows this list).
    - The robot has no prior knowledge of the tasks and the objects carry no fiducials. Only on-board sensing is used (no sensors external to the robot).
    - Robot motion is shown at 1X and has not been edited. Planning actions are shown in Director; some parts are shown at 4X speed, and planning time has been edited for brevity.
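
    A minimal sketch of what "assisted perception" can look like in this setting, under the assumption that the operator crops a point-cloud segment around the object and the system fits an object frame to it (the function below is illustrative, not the actual Optimus pipeline):

        import numpy as np

        def fit_object_frame(points):
            """Fit a rough object frame to an operator-selected point-cloud segment.

            points: (N, 3) array of 3D points cropped around the object.
            Returns a 4x4 pose with origin at the centroid and axes along the
            principal directions of the segment, which can seed grasp and
            motion planning without fiducials.
            """
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            R = vt.T                       # principal axes as columns
            if np.linalg.det(R) < 0:       # keep a right-handed rotation
                R[:, -1] *= -1
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = centroid
            return T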

    Interactive Robotics Group @ MIT

