The International Conference on Learning Representations (ICLR) 2024 is being hosted in Vienna, Austria, from May 7 to May 11. We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos, and blogs below. Feel free to reach out to the contact authors directly to learn more about the work that’s happening at Stanford!

Main conference

Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models

Authors: Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi
Contact: merty@stanford.edu
Links: Paper | Website
Keywords: interpretability, hallucinations, factual errors


Compositional Generative Inverse Design

Authors: Tailin Wu, Takashi Maruyama, Long Wei, Tao Zhang, Yilun Du, Gianluca Iaccarino, Jure Leskovec
Contact: tailin@cs.stanford.edu
Award nominations: Spotlight
Links: Paper | Website
Keywords: inverse design, generative design, pde, physical simulation, compositional


Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data

Authors: Yuhui Zhang, Elaine Sui, Serena Yeung-Levy
Contact: yuhuiz@stanford.edu
Links: Paper | Website
Keywords: multi-modal contrastive learning, representation learning, vision-language, multi-modality


Context-Aware Meta-Learning

Authors: Christopher Fifty, Dennis Duan, Ronald G. Junkins, Ehsan Amid, Jure Leskovec, Christopher Re, Sebastian Thrun
Contact: fifty@cs.stanford.edu
Links: Paper | Video | Website
Keywords: meta-learning, few-shot learning


Contrastive Preference Learning: Learning from Human Feedback without RL

Authors: Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, Dorsa Sadigh
Contact: jhejna@stanford.edu
Links: Paper | Video
Keywords: reinforcement learning from human feedback, preference-based rl, human-in-the-loop rl, preference learning


Counting Graph Substructures with Graph Neural Networks

Authors: Charilaos I. Kanatsoulis, Alejandro Ribeiro
Contact: charilaos@cs.stanford.edu
Links: Paper
Keywords: graph neural networks, equivariance, representation learning, structures, molecular graphs


Adaptive Instrument Design for Indirect Experiments

Authors: Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill
Contact: ychandak@stanford.edu
Links: Paper
Keywords: experiment design, instrumental variable, influence function, causal inference


Denoising Diffusion Bridge Models

Authors: Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon
Contact: lzhou907@stanford.edu
Links: Paper
Keywords: diffusion models, generative models, flow models


Hypothesis Search: Inductive Reasoning with Language Models

Authors: Ruocheng Wang*, Eric Zelikman*, Gabriel Poesia, Yewen Pu, Nick Haber, Noah Goodman
Contact: rcwang@cs.stanford.edu
Links: Paper
Keywords: inductive reasoning, large language models


Identifying the Risks of LM Agents with an LM-Emulated Sandbox

Authors: Yangjun Ruan*, Honghua Dong*, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, Tatsunori Hashimoto
Contact: ruanyangjun@gmail.com
Award nominations: Spotlight
Links: Paper | Website
Keywords: language model agent, tool use, evaluation, safety, language model


Language-Informed Visual Concept Learning

Authors: Sharon Lee*, Yunzhi Zhang*, Shangzhe Wu, Jiajun Wu
Contact: yzzhang@stanford.edu
Links: Paper | Website
Keywords: image generation, visual-language model


Lemur: Integrating Large Language Models in Automated Program Verification

Authors: Haoze (Andrew) Wu, Clark Barrett, Nina Narodytska
Contact: haozewu@stanford.edu
Links: Paper | Website
Keywords: automated reasoning, program verification, llm


Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face

Authors: Xinyu Yang, Weixin Liang, James Zou
Contact: xinyuyang1203@gmail.com, wxliang@stanford.edu
Links: Paper
Keywords: dataset documentation, data-centric ai, large-scale analysis


On the Learnability of Watermarks for Language Models

Authors: Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto
Contact: cygu@stanford.edu
Links: Paper | Website
Keywords: watermarking, large language models, distillation


Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions

Authors: Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto, James Zou
Contact: fede@stanford.edu
Links: Paper | Website
Keywords: safety, llms, foundation models


Principled Federated Domain Adaptation: Gradient Projection and Auto-Weighting

Authors: Enyi Jiang, Yibo Jacky Zhang, Sanmi Koyejo
Contact: yiboz@stanford.edu
Links: Paper
Keywords: federated learning, domain adaptation


Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features

Authors: Annie S. Chen*, Yoonho Lee*, Amrith Setlur, Sergey Levine, Chelsea Finn
Contact: asc8@stanford.edu
Links: Paper
Keywords: distribution-shift robustness, fine-tuning, adaptation, transfer learning


RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval

Authors: Parth Sarthi, Salman Abdullah, Aditi Tuli, Shubh Khanna, Anna Goldie, Christopher D. Manning
Contact: psarthi@cs.stanford.edu
Links: Paper | Website
Keywords: retrieval augmented language models, information retrieval, summarization, qa, llm


Workshops

Development and Evaluation of Deep Learning Models for Cardiotocography Interpretation

Authors: Nicole Chiou, Nichole Young-Lin, Christopher Kelly, Julie Cattiau, Tiya Tiyasirichokchai, Abdoulaye Diack, Sanmi Koyejo, Katherine A Heller, Mercy Nyamewaa Asiedu
Contact: nicchiou@stanford.edu
Workshop: Time Series for Health
Keywords: machine learning, time series, evaluation, distribution shifts, cardiotocography, fetal health, maternal health


A Distribution Shift Benchmark for Smallholder Agroforestry: Do Foundation Models Improve Geographic Generalization?

Authors: Siddharth Sachdeva, Isabel Lopez, Chandrasekhar Biradar, David Lobell
Contact: siddsach@stanford.edu
Workshop: Machine Learning for Remote Sensing
Links: Paper
Keywords: robustness, distribution shifts, remote sensing, benchmark datasets


An Evaluation Benchmark for Autoformalization in Lean4

Authors: Jasdeep Sidhu, Shubhra Mishra, Aryan Gulati, Devanshu Ladsaria, Brando Miranda
Contact: shubhra@stanford.edu
Workshop: Tiny Papers
Links: Paper
Keywords: large language models, llm, autoformalization, theorem proving, dataset


On Fairness of Low-Rank Adaptation of Large Models

Authors: Zhoujie Ding*, Ken Ziyu Liu*, Pura Peetathawatchai, Berivan Isik, Sanmi Koyejo
Contact: d1ng@stanford.edu
Workshop: Mathematical and Empirical Understanding of Foundation Models, Practical ML for Limited/Low Resource Settings, Reliable and Responsible Foundation Models, Secure and Trustworthy Large Language Models
Links: Paper
Keywords: low-rank adaptation, lora, bias, fairness, subgroup fairness, evaluations, llms, large models


We look forward to seeing you at ICLR 2024!