I am a Ph.D. student in the Stanford Vision Lab, advised by Prof. Fei-Fei Li. My research interests are in computer vision, machine learning, and deep learning, with a particular focus on video understanding, human action recognition, and healthcare applications.

I interned at Facebook AI Research in Summer 2016 and at Google Cloud AI in Summer 2017. I also co-taught Stanford's CS231n: Convolutional Neural Networks for Visual Recognition course in 2017 with Justin Johnson and Fei-Fei Li.

Before starting my Ph.D., I received a B.S. in Electrical Engineering in 2010 and an M.S. in Electrical Engineering in 2013, both from Stanford. I also worked as a software engineer at Rockmelt (acquired by Yahoo) from 2009 to 2011.


Publications

Bedside Computer Vision -- Moving Artificial Intelligence from Driver Assistance to Patient Safety
Serena Yeung, N. Lance Downing, Li Fei-Fei, Arnold Milstein
New England Journal of Medicine 2018
Scaling Human-Object Interaction Recognition through Zero-Shot Learning
Liyue Shen, Serena Yeung, Judy Hoffman, Greg Mori, Li Fei-Fei
WACV 2018
Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks
Amy Jin, Serena Yeung, Jeffrey Jopling, Jonathan Krause, Dan Azagury, Arnold Milstein, Li Fei-Fei
NIPS 2017 Machine Learning for Health Workshop (Best Paper Award)
Tackling Over-pruning in Variational Autoencoders
Serena Yeung, Anitha Kannan, Yann Dauphin, Li Fei-Fei
ICML 2017 Workshop on Principled Approaches to Deep Learning
Towards Vision-Based Smart Hospitals: A System for Tracking and Monitoring Hand Hygiene Compliance
Albert Haque, Michelle Guo, Alexandre Alahi, Serena Yeung, Zelun Luo, Alisha Rege, Jeffrey Jopling, Lance Downing, William Beninati, Amit Singh, Terry Platchek, Arnold Milstein, Li Fei-Fei
MLHC 2017
Learning to Learn from Noisy Web Videos
Serena Yeung, Vignesh Ramanathan, Olga Russakovsky, Liyue Shen, Greg Mori, Li Fei-Fei
CVPR 2017
EnergyNet: Predicting Energy Expenditures with Egocentric Multimodal Signals
Katsuyuki Nakamura, Serena Yeung, Alexandre Alahi, Li Fei-Fei
CVPR 2017
[pdf] [project page]
Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos
Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, Li Fei-Fei
IJCV 2017
[pdf] [project page] [data]
End-to-end Learning of Action Detection from Frame Glimpses in Videos
Serena Yeung, Olga Russakovsky, Greg Mori, Li Fei-Fei
CVPR 2016
[pdf] [project page] [code]
Towards Viewpoint Invariant 3D Human Pose Estimation
Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, Li Fei-Fei
ECCV 2016
[pdf] [project page]
Vision-Based Hand Hygiene Monitoring in Hospitals
Serena Yeung, Alexandre Alahi, Zelun Luo, Boya Peng, Albert Haque, Amit Singh, Terry Platchek, Arnold Milstein, Li Fei-Fei
NIPS 2015 Machine Learning for Health Care Workshop
VideoSET: Video Summary Evaluation through Text
Serena Yeung, Alireza Fathi, Li Fei-Fei
CVPR 2014 Egocentric Vision Workshop
[pdf] [data] [project page]
Learning Hierarchical Invariant Spatio-Temporal Features for Action Recognition with Independent Subspace Analysis
Quoc Le, Will Zou, Serena Yeung, Andrew Ng
CVPR 2011

AI-Assisted Healthcare

I am a member of the Stanford Program in AI-Assisted Care (PAC), a collaboration between the Stanford AI Lab and the Stanford Clinical Excellence Research Center that aims to use computer vision and machine learning to create AI-assisted smart healthcare spaces. My focus is on applying my research in human action recognition to hand hygiene-based infection control, the logging of clinical care activities, and continuous patient assessment in hospital intensive care units.

Our collaborators include Lucile Packard Children's Hospital at Stanford, Stanford Health Care, and Intermountain Healthcare.


Outreach

I enjoy helping future generations of researchers and engineers discover the exciting field of AI. I was a Research Instructor for SAILORS 2015, a two-week summer program run by the AI Lab for 10th-grade girls. In the past I have also co-organized Stanford's JETS Engineering Outreach Day for high school students, and have been involved with programs including Exploring New Worlds, for elementary school students from underprivileged areas, and TechGyrls.