We provide a Bayesian inference framework for in-context learning in large language models like GPT-3 and show empirical evidence for our framework, including an account of how in-context learning can still work well even when the labels in the few-shot examples are randomized.
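For intuition, here is a minimal sketch (our notation, not necessarily the paper's exact formulation) of the kind of decomposition such a framework rests on: the model is viewed as implicitly inferring a latent concept from the prompt and marginalizing over it.

```latex
% In-context prediction viewed as implicit Bayesian inference over a latent
% concept \theta (illustrative notation, not the paper's exact formulation):
\[
  p(y \mid \mathrm{prompt})
    = \int_{\theta} p(y \mid \theta, \mathrm{prompt})\, p(\theta \mid \mathrm{prompt})\, d\theta
\]
```

Under this view, if the posterior over the latent concept is driven largely by the inputs, their distribution, and the prompt format, predictions can remain reasonable even when the example labels are randomized.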
LinkBERT is a new language model pretrained to capture knowledge of document links, such as hyperlinks on the web. It greatly helps knowledge-intensive applications such as question answering.
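As a rough illustration of the idea (hypothetical data structures and helper; this is not the released LinkBERT code, and it omits the training objectives), a pretraining input can pair a text segment with a segment drawn from a hyperlinked document rather than only from the same document:

```python
import random

def make_pretraining_pair(doc, corpus, link_graph):
    """Build a two-segment pretraining input (illustrative sketch only).

    With some probability, the second segment comes from a hyperlinked document
    rather than the same document, so linked knowledge appears in one context.
    """
    seg_a = random.choice(doc["segments"])
    linked_ids = link_graph.get(doc["id"], [])
    if linked_ids and random.random() < 0.5:
        linked_doc = corpus[random.choice(linked_ids)]
        seg_b = random.choice(linked_doc["segments"])  # segment from a linked document
    else:
        seg_b = random.choice(doc["segments"])         # segment from the same document
    return "[CLS] " + seg_a + " [SEP] " + seg_b + " [SEP]"
```

Seeing linked documents in one context is what lets the model pick up knowledge that spans document boundaries, which is useful for question answering over multiple sources.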
We show that selective classification, where models are allowed to abstain when they are uncertain, can fail to improve accuracy on certain subpopulations of the data, and can even hurt it.
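For concreteness, here is a toy sketch (variable names ours) of confidence-threshold selective classification and of measuring accuracy on the retained examples per subpopulation:

```python
import numpy as np

def selective_accuracy(probs, labels, groups, threshold=0.9):
    """Abstain when the max predicted probability is below the threshold;
    report accuracy on the retained examples, overall and per group."""
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold   # mask of examples we keep
    report = {}
    if confident.any():
        report["overall"] = (preds[confident] == labels[confident]).mean()
    for g in np.unique(groups):
        keep = confident & (groups == g)
        if keep.any():
            report[f"group_{g}"] = (preds[keep] == labels[keep]).mean()
    return report
```

Comparing such per-group selective accuracies against the corresponding full-coverage accuracies is one way the failure mode described above becomes visible.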
By tapping into knowledge stored explicitly in text corpora, retrieval helps tackle the inefficiency, opaqueness, and static nature of large language models.
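As a sketch of the general retrieve-then-read recipe (hypothetical retriever and lm objects, not a specific system):

```python
def retrieve_then_read(query, retriever, corpus, lm, k=5):
    """Illustrative retrieval-augmented generation loop (not a specific system).

    The retriever scores corpus passages against the query; the top-k passages
    are prepended to the prompt so the language model can ground its answer in them.
    """
    scored = [(retriever.score(query, passage), passage) for passage in corpus]
    top_passages = [p for _, p in sorted(scored, key=lambda x: x[0], reverse=True)[:k]]
    prompt = "\n\n".join(top_passages) + "\n\nQuestion: " + query + "\nAnswer:"
    return lm.generate(prompt)
```

Because the passages live in an external corpus, the knowledge can be updated or inspected without retraining the model, which is the sense in which retrieval addresses staleness and opacity.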
How can we use machine learning to automatically fix errors in source code (e.g., in C or Python)? We introduce Break-It-Fix-It, a new unsupervised method for training code repair models.
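A high-level sketch of what such a training loop can look like (placeholder fixer, breaker, and critic components; not the released implementation):

```python
def break_it_fix_it(fixer, breaker, bad_code, good_code, critic, rounds=2):
    """Illustrative outer loop of a BIFI-style procedure (placeholder components).

    critic(code) -> bool: e.g., does the code compile or parse without errors?
    fixer and breaker expose hypothetical .fix / .break_ / .train methods.
    """
    for _ in range(rounds):
        # 1. Apply the current fixer to real broken code; keep fixes the critic verifies.
        candidates = [(b, fixer.fix(b)) for b in bad_code]
        verified = [(b, f) for b, f in candidates if critic(f)]
        # 2. Train the breaker on (fixed -> broken) pairs so it produces realistic errors.
        breaker.train([(f, b) for b, f in verified])
        # 3. Use the breaker to corrupt good code, giving more (broken, fixed) pairs.
        synthetic = [(breaker.break_(g), g) for g in good_code]
        # 4. Retrain the fixer on both the verified real pairs and the synthetic pairs.
        fixer.train(verified + synthetic)
    return fixer
```

Here the critic would be something like a compiler or parser that checks whether code is error-free, which is what makes the procedure unsupervised: no hand-labeled (broken, fixed) pairs are needed.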