Multiple depth sensors can be used to avoid unobservable zones (gray areas) in the robot workspace. They can also remove situations that a single sensor falsely classifies as incipient collisions, when the "shadow" of an obstacle falls close to the robot. Recently, we developed a very efficient method to estimate online the distances between a number of points of interest placed on the links of a robot and dynamic objects detected by a single depth camera. We present here results of the extension of this method to multiple depth cameras. A depth-space oriented discretization of the Cartesian space (a depth grid map) is built offline to represent the workspace monitored by one or more depth cameras, and is then used online to fuse the information provided by the multiple sensors in a very simple and fast way. The video shows collision avoidance experiments with two Kinects monitoring a continuous human-robot coexistence task. The algorithm runs at 300 Hz, ten times faster than the frame rate of the sensors.
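The offline/online split described above can be sketched as follows. This is a minimal illustration, not the authors' exact method: the camera model, the fusion rule (a cell counts as free if at least one sensor measures depth behind it, which is what removes shadow-induced false obstacles), and all function names are assumptions for the sake of the example.

```python
import numpy as np

def project(point, K, T):
    """Project a 3-D world point into one depth camera (assumed pinhole model).
    K: 3x3 intrinsics, T: 4x4 world-to-camera transform."""
    p_cam = T @ np.append(point, 1.0)
    u, v, w = K @ p_cam[:3]
    return int(u / w), int(v / w), p_cam[2]  # pixel coordinates and depth

def build_depth_grid_map(cell_centers, cameras):
    """Offline step (sketch): for every Cartesian grid cell, precompute the
    pixel it projects to and its depth in each camera, so the online step
    reduces to table lookups."""
    return {i: [project(c, K, T) for (K, T) in cameras]
            for i, c in enumerate(cell_centers)}

def cell_is_free(entry, depth_images):
    """Online fusion (assumed rule): the cell is declared free if at least
    one sensor measures a depth *behind* the cell, i.e. sees through it.
    This discards cells that lie only in another sensor's shadow."""
    for (u, v, d), img in zip(entry, depth_images):
        h, w = img.shape
        if 0 <= v < h and 0 <= u < w and img[v, u] > d:
            return True
    return False
```

Because the projections are precomputed per cell, the online cost per sensor frame is a constant-time check per cell, which is what makes a control-rate (here 300 Hz) update loop feasible even with several cameras.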
Reference: F. Flacco, A. De Luca, "Real-time computation of distances to dynamic obstacles with multiple depth sensors," submitted to IEEE Robotics and Automation Letters, August 2015.