Bio. I currently lead the Optimization team at OpenAI. Before that, I was a Ph.D. student in Computer Science at Stanford University, advised by John Duchi. My Ph.D. research was in machine learning, optimization, and privacy. Previously, I was an M.S. student at Stanford University advised by Stefano Ermon, working on probabilistic models and reinforcement learning. I completed my undergraduate studies at Lycée Louis-le-Grand and École Polytechnique, from which I obtained a B.S. in 2014 and an M.S. in 2015. I also interned at Facebook Applied Machine Learning in 2016, at Google Brain in 2017, where I worked with Jascha Sohl-Dickstein and Matt Hoffman, and at Google Research in 2020, where I worked with Ananda Theertha Suresh, Satyen Kale, and Mehryar Mohri.
Publications
- Learning with User-Level Privacy
- Daniel Levy*, Ziteng Sun*, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh.
- NeurIPS, 2021.
- [pdf]
- Adapting to Function Difficulty and Growth Conditions in Private Optimization
- Hilal Asi*, Daniel Levy*, John C. Duchi.
- NeurIPS, 2021.
- [pdf]
- Distributionally Robust Multilingual Machine Translation
- Chunting Zhou*, Daniel Levy*, Marjan Ghazvininejad, Xian Li, Graham Neubig.
- EMNLP, 2021.
- [pdf][code]
- Large-Scale Methods for Distributionally Robust Optimization
- Daniel Levy*, Yair Carmon*, John C. Duchi, Aaron Sidford.
- NeurIPS, 2020.
- [pdf][code]
- Necessary and Sufficient Geometries for Gradient Methods
- Daniel Levy, John C. Duchi.
- NeurIPS, 2019. Selected for oral presentation.
- [pdf]
- Bayesian Optimization and Attribute Adjustment
- Stephan Eismann, Daniel Levy, Rui Shu, Stefan Bartzsch, Stefano Ermon.
- UAI, 2018.
- [pdf]
- Generalizing Hamiltonian Monte Carlo with Neural Networks
- Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein.
- ICLR, 2018.
- [pdf][code]
- Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces
- Daniel Levy, Stefano Ermon.
- AAAI, 2018.
- [pdf]
- Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search
- Stephen Mussmann*, Daniel Levy*, Stefano Ermon.
- UAI, 2017.
- [pdf]
- Data Noising as Smoothing in Neural Network Language Models
- Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Levy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng.
- ICLR, 2017.
- [pdf]
Teaching
Winter 2021: Teaching assistant for EE364A: Convex Optimization, taught by John Duchi.
Fall 2016: Teaching assistant for CS229: Machine Learning, taught by Andrew Ng and John Duchi.
Service
Reviewer: ICML 2021, NeurIPS 2020, ICLR 2020, AAAI 2020, ICML 2019, ICLR 2019, AABI 2018, R2L Workshop (at NeurIPS 2018).