Bio. I am a second-year Ph.D. student in Computer Science at Stanford
University, advised by John Duchi. My research interests are in optimization and
machine learning. Previously, I was an M.S. student at Stanford University
advised by Stefano Ermon, working on probabilistic models and
reinforcement learning. I completed my undergraduate studies at École
Polytechnique, from which I obtained a B.S. and an M.S. in 2014 and 2015.
I also spent internships at Facebook Applied Machine Learning in 2016 and
at Google Brain in 2017.

Publications.
- Necessary and Sufficient Geometries for Gradient Methods.
- Daniel Levy, John Duchi.
- To appear in NeurIPS, 2019. Selected for oral presentation.
- Bayesian Optimization and Attribute Adjustment.
- Stephan Eismann, Daniel Levy, Rui Shu, Stefan Bartzsch, Stefano Ermon.
- UAI, 2018.
- Generalizing Hamiltonian Monte Carlo with Neural Networks.
- Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein.
- ICLR, 2018.
- Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces.
- Daniel Levy, Stefano Ermon.
- AAAI, 2018.
- Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search.
- Stephen Mussmann*, Daniel Levy*, Stefano Ermon.
- UAI, 2017.
- Data Noising as Smoothing in Neural Network Language Models.
- Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Levy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng.
- ICLR, 2017.
Teaching. I was a teaching assistant for CS229: Machine Learning, taught by Andrew Ng.
Reviewer: ICLR 2020, AAAI 2020, ICML 2019, ICLR 2019, AABI 2018, R2L Workshop (at NeurIPS 2018).