Stanford Aerial Pedestrian Dataset

Introduction

When humans navigate a crowded space such as a university campus or the sidewalks of a busy street, they follow common-sense rules based on social etiquette. To enable the design of new algorithms that can take full advantage of these rules to better solve tasks such as target tracking or trajectory forecasting, we need access to better data. To that end, we contribute the first large-scale dataset (to the best of our knowledge) of images and videos of various types of agents (not just pedestrians, but also bicyclists, skateboarders, cars, buses, and golf carts) navigating a real-world outdoor environment such as a university campus. In the annotation samples shown on this page, pedestrians are labeled in pink, bicyclists in red, skateboarders in orange, and cars in green.

Publication

  • Alexandre Robicquet, Alexandre Alahi, Amir Abbas Sadeghian, Bryan Anenberg, Eli Wu, Silvio Savarese. Forecasting Social Navigation in Crowded Complex Scenes

Statistics

The dataset consists of eight unique scenes. The number of videos in each scene and the percentage of each agent type in each scene are reported below.

Scene         Videos   Bicyclist (%)   Pedestrian (%)   Skateboarder (%)   Cart (%)   Car (%)   Bus (%)
gates            9        51.94           43.36              2.55            0.29       1.08      0.78
little           4        56.04           42.46              0.67            0          0.17      0.67
nexus           12         4.22           64.02              0.60            0.40      29.51      1.25
coupa            4        18.89           80.61              0.17            0.17       0.17      0
bookstore        7        32.89           63.94              1.63            0.34       0.83      0.37
deathCircle      5        56.30           33.13              2.33            3.10       4.71      0.42
quad             4        12.50           87.50              0               0          0         0
hyang           15        27.68           70.01              1.29            0.43       0.50      0.09
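
As a rough guide to how such per-scene statistics can be derived, the sketch below (in Python) tallies agent-type percentages from annotation files. It is a minimal sketch only: it assumes a hypothetical layout of annotations/<scene>/<video>/annotations.txt in which the first column is a track ID and the last column is a quoted agent label, and it counts each track once rather than once per frame. Adjust the parsing to the actual release format.

    # Sketch: per-scene agent-type percentages from annotation files.
    # Assumed (hypothetical) layout: annotations/<scene>/<video>/annotations.txt,
    # with the track ID in the first column and a quoted agent label in the last.
    from collections import Counter
    from pathlib import Path

    def agent_percentages(scene_dir: Path) -> dict:
        counts = Counter()
        for ann_file in scene_dir.glob("*/annotations.txt"):
            seen = set()  # count each track once, not once per frame
            for line in ann_file.read_text().splitlines():
                fields = line.split()
                if not fields:
                    continue
                track_id, label = fields[0], fields[-1].strip('"')
                key = (ann_file, track_id)
                if key not in seen:
                    seen.add(key)
                    counts[label] += 1
        total = sum(counts.values())
        return {label: 100.0 * n / total for label, n in counts.items()}

    if __name__ == "__main__":
        root = Path("annotations")  # assumed root of the extracted annotations
        for scene in sorted(root.iterdir()):
            if scene.is_dir():
                print(scene.name, agent_percentages(scene))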

Annotation samples

Contact : anenberg at stanford dot edu

Last update : 12/19/2015