Abstract

We propose a weakly supervised framework for action labeling in video, where only the order of the occurring actions is required at training time. The key challenge is that the per-frame alignments between the input (video) and label (action) sequences are unknown during training. We address this by introducing the Extended Connectionist Temporal Classification (ECTC) framework, which efficiently evaluates all possible alignments via dynamic programming and explicitly enforces their consistency with frame-to-frame visual similarities. This protects the model from being distracted by visually inconsistent or degenerate alignments, without the need for temporal supervision. We further extend our framework to the semi-supervised case, in which a few frames per video are sparsely annotated. With less than 1% of frames labeled per video, our method outperforms existing semi-supervised approaches and achieves performance comparable to that of fully supervised approaches.
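To make the alignment marginalization concrete, below is a minimal NumPy sketch of a CTC-style forward pass with similarity-weighted transitions, in the spirit of ECTC. Everything in it is an illustrative assumption rather than the released implementation: the function name ectc_forward, the (T, C) log-probability input, and the particular scheme of weighting "stay" transitions by frame-to-frame similarity and "advance" transitions by dissimilarity are simplifications of the formulation in the paper.

import numpy as np

def ectc_forward(log_probs, labels, sim):
    # Simplified ECTC-style forward pass (a sketch, not the authors' code).
    #
    # log_probs: (T, C) per-frame log class probabilities.
    # labels:    ordered list of S action indices (the weak supervision).
    # sim:       (T-1,) frame-to-frame visual similarities in [0, 1], used
    #            here to up-weight alignments that keep the same action
    #            across visually similar consecutive frames.
    # Returns the log total weighted score of all monotonic alignments.
    T, _ = log_probs.shape
    S = len(labels)
    neg_inf = -np.inf

    # alpha[s] = log score of aligning frames 0..t to labels 0..s
    alpha = np.full(S, neg_inf)
    alpha[0] = log_probs[0, labels[0]]

    for t in range(1, T):
        new_alpha = np.full(S, neg_inf)
        for s in range(S):
            # "stay" on the same action: weighted by visual similarity
            stay = alpha[s] + np.log(sim[t - 1] + 1e-8)
            # "advance" to the next action: weighted by dissimilarity
            if s > 0:
                move = alpha[s - 1] + np.log(1.0 - sim[t - 1] + 1e-8)
            else:
                move = neg_inf
            new_alpha[s] = np.logaddexp(stay, move) + log_probs[t, labels[s]]
        alpha = new_alpha

    # All frames consumed and all labels visited, in order.
    return alpha[S - 1]

# Example: 10 frames, 3 action classes, ordered transcript [0, 2, 1]
rng = np.random.default_rng(0)
log_probs = np.log(rng.dirichlet(np.ones(3), size=10))
sim = rng.uniform(size=9)
print(ectc_forward(log_probs, [0, 2, 1], sim))

In this sketch, the semi-supervised case would amount to clamping alpha at a sparsely annotated frame t so that only the state matching the given label keeps a finite score; the dynamic program otherwise proceeds unchanged.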

Download

The ECTC loss can be downloaded here.

Bibtex

@inproceedings{huang2016connectionist,
  title={Connectionist Temporal Modeling for Weakly Supervised Action Labeling},
  author={Huang, De-An and Fei-Fei, Li and Niebles, Juan Carlos},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2016}
}

Example Results

Acknowledgments

This work was supported by a grant from the Stanford AI Lab-Toyota Center for Artificial Intelligence Research.