The Thirty-ninth International Conference on Machine Learning (ICML 2022) is being held July 17th - 23rd. We're excited to share all the work from SAIL that's being presented, and you'll find links to papers, videos, and blogs below. Feel free to reach out to the contact authors directly to learn more about the work that's happening at Stanford!
List of Accepted Papers
No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
Authors: Rylan Schaeffer, Mikail Khona, Ila Rani Fiete
Contact: rschaef@cs.stanford.edu
Keywords: deep learning, neuroscience
Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models
Authors: Eric Mitchell*, Peter Henderson*, Christopher D. Manning, Dan Jurafsky, Chelsea Finn
Contact: phend@stanford.edu
Keywords: foundation models, ai safety, meta-learning
Streaming Inference for Infinite Feature Models
Authors: Rylan Schaeffer, Yilun Du, Gabrielle Kaili-May Liu, Ila Rani Fiete
Contact: rschaef@cs.stanford.edu
Keywords: variational inference, combinatorial stochastic processes, bayesian nonparametrics
A General Recipe for Likelihood-free Bayesian Optimization
Authors: Jiaming Song, Lantao Yu, Willie Neiswanger, Stefano Ermon
Contact: jiaming.tsong@gmail.com
Links: Paper | Website
Keywords: bayesian optimization, likelihood-free inference
A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning
Authors: Archit Sharma*, Rehaan Ahmad*, Chelsea Finn
Contact: architsh@stanford.edu
Links: Paper | Website
Keywords: reinforcement learning, continual learning, adversarial learning
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
Authors: Kendrick Shen*, Robbie Jones*, Ananya Kumar*, Sang Michael Xie*, Jeff Z. HaoChen, Tengyu Ma, Percy Liang
Contact: kshen6@cs.stanford.edu
Award nominations: Long Talk
Links: Paper
Keywords: pre-training, representation learning, domain adaptation, contrastive learning, spectral graph theory
Contrastive Adapters for Foundation Model Group Robustness
Authors: Michael Zhang, Christopher Ré
Contact: mzhang@cs.stanford.edu
Links: Paper
Keywords: robustness, foundation models, lightweight tuning, adapters
Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
Authors: Michael Zhang, Nimit S. Sohoni, Hongyang R. Zhang, Chelsea Finn, Christopher Ré
Contact: mzhang@cs.stanford.edu
Award nominations: Long Talk (Oral)
Links: Paper
Keywords: spurious correlations, robustness, contrastive learning
How to Leverage Unlabeled Data in Offline Reinforcement Learning
Authors: Tianhe Yu*, Aviral Kumar*, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine
Contact: tianheyu@cs.stanford.edu
Links: Paper
Keywords: offline rl, deep rl
Improving Out-of-Distribution Robustness via Selective Augmentation
Authors: Huaxiu Yao*, Yu Wang*, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn
Contact: huaxiu@cs.stanford.edu
Links: Paper | Video
Keywords: out-of-distribution robustness, domain generalization, spurious correlation, distribution shifts, selective augmentation
Inducing Causal Structure for Interpretable Neural Networks
Authors: Atticus Geiger*, Zhengxuan Wu*, Hanson Lu*, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts
Contact: atticusg@gmail.com
Links: Paper | Website
Keywords: causality, interpretability
Integrating Reward Maximization and Population Estimation: Sequential Decision-Making for Internal Revenue Service Audit Selection
Authors: Peter Henderson, Ben Chugg, Brandon Anderson, Kristen Altenburger, Alex Turk, John Guyton, Jacob Goldin, Daniel E. Ho
Contact: phend@cs.stanford.edu
Links: Paper
Keywords: bandits, real world ml, sampling
Joint Entropy Search For Maximally-Informed Bayesian Optimization
Authors: Carl Hvarfner, Frank Hutter, Luigi Nardi
Contact: lnardi@stanford.edu
Links: Paper | Website
Keywords: bayesian optimization, hyperparameter optimization, entropy search
Meaningfully debugging model mistakes using conceptual counterfactual explanations
Authors: Abubakar Abid, Mert Yuksekgonul, James Zou
Contact: merty@stanford.edu
Links: Paper
Keywords: interpretability, counterfactual explanations, concept-based explanations, reliable machine learning
Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning
Authors: Mayee Chen*, Dan Fu*, Avanika Narayan, Michael Zhang, Zhao Song, Kayvon Fatahalian, Chris Ré
Contact: mfchen@stanford.edu, danfu@cs.stanford.edu
Links: Paper | Blog Post | Video | Website
Keywords: contrastive learning, transfer learning, robustness
Understanding Dataset Difficulty with V-Usable Information
Authors: Kawin Ethayarajh, Yejin Choi, Swabha Swayamdipta
Contact: kawin@stanford.edu
Award nominations: Long Talk
Links: Paper
Keywords: dataset, interpretability, data-centric ai, information theory
We look forward to seeing you at ICML 2022!