I'm a Ph.D. student in Computer Science at Stanford University, where I have been fortunate to be advised by Prof. James Zou. I am part of the Stanford Artificial Intelligence Laboratory (SAIL), where I have collaborated with Prof. Daniel Jurafsky, Prof. Daniel A. McFarland, and Prof. Serena Yeung.
I earned my master's degree in Electrical Engineering at Stanford, working with Prof. James Zou and Prof. Zhou Yu. Prior to Stanford, I received a B.S. in Computer Science from Zhejiang University in 2019, where I worked with Prof. Kai Bu and Prof. Mingli Song. I have also spent time interning at Amazon Alexa AI, Apple, and Tencent.
arXiv preprint arXiv:2310.01783 (2023)
Paper Twitter Code
With the breakthrough of large language models (LLMs) such as GPT-4, there is growing interest in using LLMs to generate scientific feedback on research manuscripts. However, the utility of LLM-generated feedback has not been systematically studied. To address this gap, we created an automated pipeline that uses GPT-4 to provide comments on the full PDFs of scientific papers. Our results suggest that LLM and human feedback can complement each other. While human expert review is, and should continue to be, the foundation of the rigorous scientific process, LLM feedback could benefit researchers, especially when timely expert feedback is unavailable and in the earlier stages of manuscript preparation, before peer review.
Assessing the quality and impact of individual data points is critical for improving model performance and mitigating undesirable biases within the training dataset. In this paper, we introduce OpenDataVal, an easy-to-use and unified benchmark framework that empowers researchers and practitioners to apply and compare various data valuation algorithms.
As AI model-building becomes more automated, much of the time and resources in practice are devoted to deciding what data to collect and to data cleaning, annotation, and evaluation. Our article discusses best practices, new challenges, and opportunities for each of these key components of the data-for-AI pipeline.
We should be very cautious when using detectors to classify whether text was written by AI or by a human. Our research shows that such detectors classify over 50% of real text written by non-native English speakers as AI-generated, while most polished essays generated by GPT evade detection. This creates bias and false positives against non-native speakers, since detectors tend to treat more literary language as "human."
Paper Cell.com Twitter Recording
Stanford HAI News: Analyzing 50 Years of Stanford Patents
Finding patterns of success across 50 years of innovation | Scope
OTL 50th Anniversary Report: A Half Century of Pioneering Innovation
Computational analysis of 4,512 inventions marketed by Stanford's Office of Technology Licensing between 1970 and 2020 characterizes how the academic innovation landscape changed over time. We identified factors, such as the composition of the inventors, associated with the commercial success of the inventions. We also identified linguistic differences in how high-revenue and low-revenue inventions in the same field are described and marketed.
A robust peer review process is essential for the advancement of knowledge. However, peer review outcomes can depend on external factors, such as the timing of the submission or the availability of specific editors and reviewers, that are largely random and orthogonal to the quality of the work. While researchers often complain about the luck of the draw in the review process, there has been no systematic analysis of the impact of these external factors.
Recent work empirically finds a strong linear relationship between in-distribution (ID) and out-of-distribution (OOD) performance, but we show that this does not necessarily hold under subpopulation shifts. In this paper, we empirically show that OOD performance often has a nonlinear ("moon shape") correlation with ID performance under subpopulation shifts.
Our new paper explains the intriguing modality gap in multi-modal AI: large gaps in the representation space separate different data types. We show that changing the gap improves zero-shot learning and fairness. Interestingly, the modality gap is created at model initialization and is reinforced by contrastive learning.
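As a rough illustration (my own sketch, not the paper's released code), the gap can be measured as the vector between the centroids of each modality's embeddings, and one modality can be shifted along that vector to change the gap size:

```python
import numpy as np

def modality_gap(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Vector from the text-embedding centroid to the image-embedding centroid."""
    # Contrastive models such as CLIP place embeddings on the unit hypersphere.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return image_embs.mean(axis=0) - text_embs.mean(axis=0)

# Shifting one modality along the gap vector changes the gap size
# (lam is a hypothetical scaling knob):
# shifted_text = text_embs + lam * modality_gap(image_embs, text_embs)
```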
Contributed Talk at ICML 2022 Workshop on Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet
International Conference on Learning Representations (ICLR 2022)
Paper HTML Website Code HuggingFace Recording Blog
MetaShift introduces a collection of >10K sets of images with annotated contexts! Context is missing in many ML datasets but is critical for understanding model performance. MetaShift enables evaluating how ML models work in different contexts (e.g. indoor cats vs. outdoor cats). Bonus: we provide a distance score between contexts.
Machine learning systems that seemingly perform well on average can still make systematic errors on important subsets of data. We introduce Systematic Error Analysis and Labeling (SEAL), an interactive tool that takes a two-step approach: it first identifies high-error slices of data and then attaches human-understandable semantics to those under-performing slices.
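A minimal sketch of the slice-discovery step under my own assumptions (SEAL's actual slicing method may differ): cluster examples in an embedding space and keep clusters with unusually high error rates.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_error_slices(embeddings, is_error, k=10, min_error_rate=0.5):
    """Step 1 of a SEAL-style workflow (illustrative): cluster examples in
    embedding space, then keep clusters whose error rate is unusually high."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    slices = []
    for c in range(k):
        mask = labels == c
        if is_error[mask].mean() >= min_error_rate:
            slices.append(np.where(mask)[0])
    # Step 2 would attach human-readable semantics to each slice,
    # e.g. by describing what its members have in common.
    return slices
```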
To deploy machine learning algorithms in real-world applications, we must pay attention to distribution shift, i.e. when the test distribution differs from the training distribution, which substantially degrades model performance. We propose a simple mixup-based method to learn invariant functions via selective augmentation.
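A minimal NumPy sketch of one selective-augmentation variant (mixing pairs with the same label but different domains); the function name and interface are mine, not the paper's code:

```python
import numpy as np

def selective_mixup(x, y, domain, alpha=2.0, rng=np.random.default_rng()):
    """Mix each example with a partner that shares its label but comes from
    a different domain, averaging out domain-specific (spurious) features
    while the label stays fixed."""
    lam = rng.beta(alpha, alpha)
    partner = np.arange(len(x))
    for i in range(len(x)):
        # Candidates share the label but come from a different domain.
        candidates = np.where((y == y[i]) & (domain != domain[i]))[0]
        if len(candidates) > 0:
            partner[i] = rng.choice(candidates)
    x_mixed = lam * x + (1 - lam) * x[partner]
    return x_mixed, y  # labels are unchanged for same-label pairs
```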
Science Advances (2022)
Machine Learning for Health (ML4H 2021)
Paper Science.org Diverse Dermatology Images (DDI) Dataset
Training physicians and algorithms in dermatology diversity | Scope
In order to train and test AI algorithms in dermatology, we need diverse, validated benchmarks. We curated the Diverse Dermatology Images (DDI) dataset to meet this need—the first publicly available, expertly curated, and pathologically confirmed image dataset with diverse skin tones.
Our new ISIT 2021 paper proposes neural group testing to speed up deep learning inference. The idea is to adaptively apply the network to groups of samples pooled at suitable layers, which greatly reduces total compute.
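A toy PyTorch sketch under my own assumptions (the split point, the element-wise max merge, and binary positive/negative outputs are illustrative choices, not necessarily the paper's design):

```python
import torch

@torch.no_grad()
def group_test(early_layers, late_layers, samples, group_size=4):
    """Neural group testing sketch: pool a group's intermediate activations
    and run the rest of the network once; only if the pooled group tests
    positive do we fall back to testing each sample individually."""
    positives = []
    for start in range(0, len(samples), group_size):
        group = samples[start:start + group_size]
        feats = early_layers(torch.stack(group))        # per-sample features
        pooled = feats.max(dim=0, keepdim=True).values  # merge the group
        if late_layers(pooled).argmax(dim=1).item() == 1:
            preds = late_layers(feats).argmax(dim=1)    # individual retests
            positives += [start + i for i, p in enumerate(preds) if p == 1]
    return positives
```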
We propose a workflow that automatically labels training data with minimal human effort, built upon our previous ACL 2020 work.
Review Ratings: 4, 4, 4.5 on a 5-point scale
Our new EMNLP paper shows how to teach ML models via natural language explanations of contrasts between concepts (e.g. "the difference between COVID and flu is ..."). This is much more efficient than using labeled examples. Excited for more human-like learning!
Review Ratings: 4.5, 4.5, 5 on a 5-point scale
For dialog system evaluation, we found that self-reported dialog ratings are skewed, noisy, and insensitive due to bias and variance among different users. We propose a three-stage pipeline to denoise self-reported ratings and, at the same time, build an automatic comparison-based dialog quality predictor.
We propose an end-to-end framework for task-oriented dialog systems that can flexibly incorporate supervision from multiple intermediate dialog modules (e.g. natural language understanding, dialog state tracking, dialog policy learning, and natural language generation).
A computer architecture conference paper on in-storage hardware acceleration for deep learning.
We are the first to leverage computer vision techniques for image-based, nondestructive textile fiber identification, which is practically useful in the fashion, decoration, and design industries. Existing methods based on physical, chemical, and microscopy techniques are typically limited by long identification cycles, heavy dependence on human factors, high technological barriers, and the damage they cause to samples.
Outstanding Graduation Thesis, Zhejiang University
Access patterns over untrusted memory have long been exploited to infer sensitive information such as program types or even secret keys. We propose a lightweight obfuscation solution to hide real memory accesses.
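As a generic illustration of the idea (a decoy-based scheme of my own, not necessarily the paper's mechanism), each real access can be hidden among dummy accesses so the observed address trace reveals little:

```python
import random

def obfuscated_read(memory, real_addr, n_decoys=3):
    """Hide one real read among decoy reads: an observer of the address
    trace sees n_decoys + 1 accesses and cannot tell which one mattered."""
    addrs = [real_addr] + [random.randrange(len(memory)) for _ in range(n_decoys)]
    random.shuffle(addrs)
    value = None
    for a in addrs:          # every listed address is actually touched
        v = memory[a]
        if a == real_addr:
            value = v
    return value
```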