Stanford synthetic object grasping point data


We present a labeled training set, i.e., a set of images of objects labeled with the 2-d location of the grasping point in each image. Collecting real-world data of this sort is cumbersome, and manual labeling is prone to errors. Thus, we instead chose to generate, and learn from, synthetic data that is automatically labeled with the correct grasps.

In detail, we generate synthetic images, along with the correct grasp labels, using a computer graphics ray tracer, which produces more realistic images than simpler rendering methods. One advantage of using synthetic images is that once a synthetic model of an object has been created, a large number of training examples can be generated automatically by rendering the object under different (randomly chosen) lighting conditions, camera positions and orientations, and so on.
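As a rough illustration of this randomization, the sketch below samples camera poses and light configurations that could be handed to a renderer. It is only a sketch under assumed conventions: the function names, parameter ranges, and the render_scene() call are placeholders, not the actual pipeline used to generate this dataset.

    # Sketch: sample random camera poses and lighting for rendering one object.
    # render_scene() is a placeholder for whatever ray tracer is available.
    import math
    import random

    def random_camera_pose(radius_range=(0.5, 1.5)):
        """Sample a camera position on a sphere around an object at the origin."""
        r = random.uniform(*radius_range)
        theta = random.uniform(0.0, 2.0 * math.pi)   # azimuth
        phi = random.uniform(0.1, 0.5 * math.pi)     # elevation above the table plane
        return (r * math.cos(theta) * math.cos(phi),
                r * math.sin(theta) * math.cos(phi),
                r * math.sin(phi))

    def random_lighting(n_lights=2):
        """Sample light positions and intensities."""
        return [{"position": random_camera_pose((1.0, 3.0)),
                 "intensity": random.uniform(0.5, 1.5)} for _ in range(n_lights)]

    # The grasp labels are fixed in the object's frame, so each draw of rendering
    # parameters yields a new automatically labeled training example.
    for i in range(5):
        camera = random_camera_pose()
        lights = random_lighting()
        print(i, camera, lights)   # in practice: render_scene(model, camera, lights)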

Each dataset below consists of:
(a) image, (b) grasp labels (binary 0-1 image), (c) depthmap (range image), (d) 6-dof grasping point, (e) object orientation, (f) grasping parameters such as gripper width.
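A minimal Python sketch of loading one such example follows; the file names, formats, and directory layout here are assumptions made for illustration, not the dataset's actual naming scheme (the README and Matlab scripts below define the authoritative format).

    # Sketch: load one training example and recover the labeled 2-d grasp pixels.
    # File names (_image.png, _labels.png, _depth.npy) are hypothetical.
    import numpy as np
    from PIL import Image

    def load_example(prefix):
        image = np.asarray(Image.open(prefix + "_image.png"))                     # (a) rendered image
        labels = np.asarray(Image.open(prefix + "_labels.png").convert("L")) > 0  # (b) binary grasp mask
        depth = np.load(prefix + "_depth.npy")                                    # (c) range image
        return image, labels, depth

    def grasp_pixels(labels):
        """Return the (row, col) locations marked as grasping points."""
        rows, cols = np.nonzero(labels)
        return list(zip(rows.tolist(), cols.tolist()))

    if __name__ == "__main__":
        image, labels, depth = load_example("example_0001")
        print(len(grasp_pixels(labels)), "labeled grasp pixels")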

Explanation of the data format: README. Matlab scripts to read the data: getCorrectedValuesStanfordData.m, getXYZfromDepth_fast.m.
* For data marked with an asterisk, the Matlab scripts will not work; write your own.
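For readers working outside Matlab, the sketch below shows the standard pinhole-camera conversion from a range image to 3-d points, which is roughly the role getXYZfromDepth_fast.m plays; the focal length and principal point used here are placeholder values, not the dataset's calibration.

    # Sketch: back-project a depth (range) image to camera-frame 3-d points.
    # fx, fy, cx, cy are assumed intrinsics, not the dataset's calibration values.
    import numpy as np

    def xyz_from_depth(depth, fx=525.0, fy=525.0, cx=None, cy=None):
        h, w = depth.shape
        cx = (w - 1) / 2.0 if cx is None else cx
        cy = (h - 1) / 2.0 if cy is None else cy
        us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
        x = (us - cx) * depth / fx
        y = (vs - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1)            # (h, w, 3) points

    # Example with a synthetic ramp of depths:
    points = xyz_from_depth(np.linspace(0.5, 1.5, 640 * 480).reshape(480, 640))
    print(points.shape)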

Code for calculating the features.

Any report/publication that uses the data above should cite:
Robotic Grasping of Novel Objects, Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Andrew Y. Ng. In Neural Information Processing Systems (NIPS) 19, 2006.
Learning to Grasp Novel Objects using Vision, Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Chioma Osondu, Andrew Y. Ng. 10th International Symposium on Experimental Robotics (ISER), 2006.

Outdoor Scene Range Image Data

For image+stereo+laser data of outdoor scenes, visit here.


Note: Use of this data is restricted to research purposes only.
For any commercial use of this data, please contact the authors.