Bio. I am a second-year Ph.D. student in Computer Science at Stanford University advised by John Duchi. My research interests are in optimization and machine learning.

Previously, I was an M.S. student at Stanford University advised by Stefano Ermon, working on probabilistic models and reinforcement learning. I completed my undergraduate studies at Lycée Louis-le-Grand and École Polytechnique, from which I obtained a B.S. in 2014 and an M.S. in 2015. I also interned at Facebook Applied Machine Learning in 2016 and at Google Brain in 2017, where I worked with Jascha Sohl-Dickstein and Matt Hoffman.


Publications.

Necessary and Sufficient Geometries for Gradient Methods.
Daniel Levy, John Duchi.
To appear in NeurIPS, 2019. Selected for oral presentation.
Bayesian Optimization and Attribute Adjustment
Stephan Eismann, Daniel Levy, Rui Shu, Stefan Bartzsch, Stefano Ermon.
UAI, 2018.
Generalizing Hamiltonian Monte Carlo with Neural Networks
Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein.
ICLR, 2018.
[pdf] [code]
Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces
Daniel Levy, Stefano Ermon.
AAAI, 2018.
Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search
Stephen Mussmann*, Daniel Levy*, Stefano Ermon. (* equal contribution)
UAI, 2017.
Data Noising as Smoothing in Neural Network Language Models
Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Levy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng.
ICLR, 2017.


Teaching.

In Fall 2016, I was a teaching assistant for CS229: Machine Learning, taught by Andrew Ng and John Duchi.


Service.

Reviewer: ICLR 2020, AAAI 2020, ICML 2019, ICLR 2019, AABI 2018, R2L Workshop (at NeurIPS 2018).