Overview

We also tested our method on a set of artificially generated scenes. These scenes contain instances of several object classes: vehicles, trees, houses, and the ground. Models of the objects were obtained from the Princeton Shape Benchmark [1] and combined in various ways. From each scene, we generated synthetic range scans by placing a virtual sensor inside the scene and casting rays to determine the scan distance in each direction. We corrupted the scan readings with additive white noise, triangulated the resulting point cloud, and subsampled it down to the desired resolution.

Point Features

We used the same spin-image features [2] as in the puppet dataset, adjusted to the scale of the objects in this dataset.

Edge Features

We used the surface links output by the scanner as the edges of the MRF. We obtained the best results with a single bias feature (set to 1 for all edges).
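The scan-generation step above (virtual sensor, ray casting, additive white noise) can be sketched as follows. This is not the authors' actual pipeline, which used Princeton Shape Benchmark meshes; as a minimal, self-contained assumption we model the scene as a few spheres and cast rays from a sensor at the origin over a grid of viewing angles.

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Distance along a unit ray to the nearest sphere hit, or inf on a miss."""
    oc = center - origin
    b = direction @ oc
    disc = b * b - oc @ oc + radius * radius
    if disc < 0.0:
        return np.inf
    t = b - np.sqrt(disc)
    return t if t > 0.0 else np.inf

def synthetic_scan(spheres, n_az=90, n_el=30, noise_sigma=0.01, seed=0):
    """Cast rays over an azimuth/elevation grid and return noisy hit points."""
    rng = np.random.default_rng(seed)
    points = []
    for az in np.linspace(0.0, 2 * np.pi, n_az, endpoint=False):
        for el in np.linspace(-0.3, 0.3, n_el):
            d = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
            t = min(ray_sphere(np.zeros(3), d, c, r) for c, r in spheres)
            if np.isfinite(t):
                # corrupt the range reading with additive white (Gaussian) noise
                points.append((t + rng.normal(0.0, noise_sigma)) * d)
    return np.asarray(points)

# hypothetical two-sphere "scene": (center, radius) pairs
scene = [(np.array([3.0, 0.0, 0.0]), 1.0),
         (np.array([0.0, 4.0, 0.5]), 1.5)]
cloud = synthetic_scan(scene)
```

Triangulation and subsampling of `cloud` would follow as separate steps; they are omitted here.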
[Figures: segmentation results on synthetic scenes 1–5 and 1–8]
Results

|                          | AMN    | SVM    |
|--------------------------|--------|--------|
| Training set accuracy    | 98.41% | 88.90% |
| Testing set accuracy     | 93.76% | 82.23% |
| Precision on testing set | 93.60% | 89.35% |
| Recall on testing set    | 75.29% | 17.05% |
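The accuracy, precision, and recall figures in the table are per-point classification metrics. As a sketch of how such numbers are typically computed (the exact per-class averaging used in the experiments is not specified here, so this assumes a single positive class):

```python
import numpy as np

def scores(y_true, y_pred, positive=1):
    """Accuracy over all points, plus precision and recall for one class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    accuracy = float(np.mean(y_pred == y_true))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# toy labeling: 3 of 5 points correct
acc, prec, rec = scores([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# acc = 0.6, prec = 2/3, rec = 2/3
```

The SVM's very low recall (17.05%) against a comparable precision is exactly the pattern these formulas expose: many positive points are missed (large `fn`), while the points it does label positive are mostly correct.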