Robot's in-hand eye maps surroundings in 3D

Researchers including an Indian-origin scientist from Carnegie Mellon University have found that a camera attached to a robot's hand can rapidly create a 3D model of its environment and locate the hand within that 3D world.

The team found they can improve the accuracy of the map by incorporating the arm itself as a sensor, using the angles of its joints to better determine the pose of the camera.
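
In principle, that fusion begins with forward kinematics: the joint encoders report angles, and chaining the per-joint transforms gives the camera's pose in the arm's base frame. Below is a minimal sketch of that idea for a simple planar (2D) arm; the function names, link lengths and the planar simplification are illustrative assumptions, not details of the CMU system.

```python
import numpy as np

def joint_transform(theta, link_length):
    """Homogeneous transform for one revolute joint of a planar arm:
    rotate by the joint angle, then translate along the link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, link_length * c],
                     [s,  c, link_length * s],
                     [0.0, 0.0, 1.0]])

def camera_pose_from_joints(joint_angles, link_lengths):
    """Chain the per-joint transforms (forward kinematics) to get the
    pose of a camera mounted on the last link, in the base frame."""
    pose = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        pose = pose @ joint_transform(theta, length)
    return pose

# Hypothetical 3-joint arm: encoder angles in radians, link lengths in metres.
print(camera_pose_from_joints([0.3, -0.5, 0.2], [0.30, 0.25, 0.15]))
```

Because encoder angles are far less noisy than visually estimated camera motion, a kinematic pose estimate like this can tightly constrain where the camera could be while the map is built.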

This will be important for a number of applications including inspection tasks. “Placing a camera or other sensor in the hand of a robot has become feasible as sensors have grown smaller and more power-efficient,” said Siddhartha Srinivasa, associate professor of robotics.

That is important because robots “usually have heads that consist of a stick with a camera on it”. They can't bend over as a person can to get a better view of a workspace.

But an eye in the hand isn't much good if the robot can't see its hand and doesn't know where its hand is relative to objects in its environment. It's a problem shared with mobile robots that must operate in an unknown environment.

A popular solution for mobile robots is called simultaneous localization and mapping (SLAM), in which the robot pieces together input from sensors such as cameras, laser radar and wheel odometry to create a 3D map of its new environment.
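
The heart of SLAM is a predict-and-correct loop: dead-reckon from odometry, then refine both the robot's pose and the map from new sensor readings. The toy one-dimensional sketch below, a plain Kalman filter jointly estimating the robot's position and one landmark's position, is only an illustration of that loop under assumed noise values; real systems like the one described here run it over full 6-DoF poses and dense depth maps.

```python
import numpy as np

# Toy 1D SLAM: jointly estimate the robot's position and one landmark's
# position from noisy odometry and noisy range readings, as a linear
# Kalman filter over the state [robot_x, landmark_x].
rng = np.random.default_rng(0)

true_robot, true_landmark = 0.0, 5.0
x = np.array([0.0, 3.0])      # initial guess: robot at 0 m, landmark at 3 m
P = np.diag([0.01, 4.0])      # robot pose well known, landmark very uncertain
Q = np.diag([0.05, 0.0])      # motion noise (only the robot moves)
R = 0.1                       # range-measurement noise variance
H = np.array([[-1.0, 1.0]])   # measurement model: range = landmark_x - robot_x

for _ in range(20):
    # Predict: apply a commanded 0.2 m move (noisy in reality).
    u = 0.2
    true_robot += u + rng.normal(0.0, np.sqrt(Q[0, 0]))
    x = x + np.array([u, 0.0])
    P = P + Q

    # Correct: fuse a noisy range measurement to the landmark.
    z = (true_landmark - true_robot) + rng.normal(0.0, np.sqrt(R))
    y = z - (H @ x)[0]                  # innovation
    S = (H @ P @ H.T)[0, 0] + R         # innovation variance
    K = (P @ H.T / S).ravel()           # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H)) @ P

print(f"robot x: est {x[0]:.2f} vs true {true_robot:.2f}")
print(f"landmark x: est {x[1]:.2f} vs true {true_landmark:.2f}")
```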

"There are several algorithms available to build these detailed worlds, but they require accurate sensors and a ridiculous amount of computation," Srinivasa noted.

The researchers demonstrated their Articulated Robot Motion for SLAM (ARM-SLAM) using a small depth camera attached to a lightweight manipulator arm, the Kinova Mico.

Using it to build a 3D model of a bookshelf, they found that it produced reconstructions equivalent to or better than those of other mapping techniques. “We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation,” Srinivasa pointed out.

The researchers presented their findings at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden, on Tuesday.
