Home

I am a Research Scientist at Facebook. I spend most of my time making advertising more relevant to users.
At Stanford, I worked with Prof. Andrew Y. Ng in the Stanford AI Lab.

I am interested in machine learning and its applications, especially to large-scale problems and high-dimensional data.
Some of the applications I have worked on: computer vision, text processing, natural language processing, web search, speech/music processing, and online advertising.


Research papers

Learning Relevance in a Heterogeneous Social Network and Its Application in Online Targeting
Chi Wang, Rajat Raina, David Fong, Ding Zhou, Jiawei Han, and Greg Badros.
SIGIR 2011, The 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, Beijing, China. [pdf]

Self-taught Learning
Rajat Raina
Ph.D. thesis, Stanford University.

Large-scale Deep Unsupervised Learning using Graphics Processors
Rajat Raina, Anand Madhavan, and Andrew Y. Ng.
ICML 2009. [pdf]
[Describes efficient methods for learning large deep belief networks and sparse coding models. Using graphics processors, we can train very large models with ~100 million free parameters in a single day (instead of several weeks).]
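
As a minimal sketch (mine, not code from the paper) of why GPUs help here: a contrastive-divergence (CD-1) update for a restricted Boltzmann machine is dominated by large matrix multiplications, exactly the workload GPUs accelerate. The NumPy version below would run the same way on a GPU by swapping in an array library such as CuPy; all names and sizes are illustrative.

    import numpy as np

    def cd1_update(W, bv, bh, v0, lr=0.01, rng=np.random):
        # One CD-1 step for a binary RBM. Illustrative sketch only: the
        # cost is dominated by the matrix products below, which is what
        # makes graphics processors effective for very large models.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        ph0 = sigmoid(v0 @ W + bh)                      # positive phase
        h0 = (rng.random_sample(ph0.shape) < ph0).astype(v0.dtype)
        pv1 = sigmoid(h0 @ W.T + bv)                    # one Gibbs step
        ph1 = sigmoid(pv1 @ W + bh)                     # negative phase
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        bv += lr * (v0 - pv1).mean(axis=0)
        bh += lr * (ph0 - ph1).mean(axis=0)
        return W, bv, bh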

Exponential Family Sparse Coding with Application to Self-taught Learning
Honglak Lee, Rajat Raina, Alex Teichman, and Andrew Y. Ng.
IJCAI 2009. [pdf] (Shorter version appeared at ICML 2008 Workshop on Prior Knowledge for Text and Language Processing [pdf])

Learning Large Deep Belief Networks using Graphics Processors
Rajat Raina and Andrew Y. Ng.
NIPS 2008 Workshop on Parallel Implementations of Learning Algorithms. [abstract]

Shift-invariant sparse coding for audio classification
Roger Grosse, Rajat Raina, Helen Kwong, and Andrew Y. Ng.
UAI 2007. [pdf, bibtex, code]
[Efficient algorithms for shift-invariant sparse coding, with application to self-taught learning for audio classification.]
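
In shift-invariant sparse coding, each basis function may appear at any temporal offset within the signal, so the reconstruction uses convolution rather than a fixed linear combination. In generic notation (mine, not necessarily the paper's), the objective has the form

    \min_{\{b_k\}, \{s_k\}} \Big\| x - \sum_k b_k * s_k \Big\|_2^2 + \beta \sum_k \|s_k\|_1
    \quad \text{s.t. } \|b_k\|_2 \le 1,

where * denotes convolution, the b_k are short basis functions, and the s_k are sparse activation signals.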

Self-taught learning: Transfer learning from unlabeled data
Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng.
ICML 2007. [pdf, bibtex] (Shorter version appeared at NIPS 2006 Workshop on Learning when Test and Training Inputs Have Different Distributions.)
[In self-taught learning, we are given a small amount of labeled data for a supervised learning task, and lots of additional unlabeled data that does not share the labels of the supervised problem and does not arise from the same distribution. This paper introduces an algorithm for self-taught learning based on sparse coding.]
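
A minimal sketch of the self-taught learning pipeline on synthetic data, using scikit-learn's DictionaryLearning as a stand-in for the paper's sparse-coding step (all names, sizes, and parameters below are illustrative, not the paper's):

    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_unlabeled = rng.standard_normal((500, 64))  # plentiful unlabeled data
    X_labeled = rng.standard_normal((40, 64))     # small labeled training set
    y = rng.integers(0, 2, size=40)

    # Step 1: learn a sparse-coding basis from the unlabeled data alone.
    coder = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                               transform_algorithm='lasso_lars',
                               random_state=0).fit(X_unlabeled)

    # Step 2: represent the labeled examples by their sparse codes, then
    # train an ordinary supervised classifier on those features.
    features = coder.transform(X_labeled)
    clf = LogisticRegression(max_iter=1000).fit(features, y)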

Efficient sparse coding algorithms
Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng.
NIPS 2006. [pdf, bibtex, code]
[Performs sparse coding an order of magnitude faster than previous algorithms. Introduces the feature-sign search algorithm for general L1-regularized least squares problems. Demonstrates end-stopping and non-classical receptive field (nCRF) effects.]
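
The inner optimization that feature-sign search targets is L1-regularized least squares over the coefficients s of a signal x in a basis B (generic notation, mine):

    \min_s \| x - B s \|_2^2 + \gamma \|s\|_1

The key observation is that once the sign of each coefficient is fixed, the problem reduces to an unconstrained quadratic with a closed-form solution; feature-sign search iteratively guesses and refines those signs.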

Constructing informative priors using transfer learning
Rajat Raina, Andrew Y. Ng, and Daphne Koller.
ICML 2006. [pdf, bibtex] (Shorter version appeared at NIPS 2005 Workshop on Inductive Transfer.)
[A transfer learning algorithm that learns a Gaussian prior for logistic regression. Better classification with a small training set.]
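
Concretely, with a learned Gaussian prior N(mu, Sigma) on the weights theta, training becomes MAP estimation (generic notation, mine, not necessarily the paper's):

    \min_\theta \sum_{i=1}^m \log\big(1 + e^{-y_i \theta^\top x_i}\big)
    + \tfrac{1}{2} (\theta - \mu)^\top \Sigma^{-1} (\theta - \mu)

where mu and Sigma are estimated from related tasks, so the prior shrinks theta toward parameter values that worked well elsewhere.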

Robust textual inference via learning and abductive reasoning
Rajat Raina, Andrew Y. Ng, and Christopher D. Manning.
AAAI 2005. [ps, pdf, bibtex]
[An algorithm for textual inference that "tries to combine the elegance and preciseness of logic with the robustness and scalability of statistical learning".]

Robust textual inference using diverse knowledge sources
Rajat Raina, Aria Haghighi, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher D. Manning, and Andrew Y. Ng.
First PASCAL Recognizing Textual Entailment Challenge Workshop, 2005. [pdf, bibtex]
[On Stanford's submission to the PASCAL Recognizing Textual Entailment Challenge. Our system placed first on one of the two evaluation metrics.]

Classification with Hybrid Generative/Discriminative Models
Rajat Raina, Yirong Shen, Andrew Y. Ng, and Andrew McCallum.
NIPS 2003. [ps, pdf, bibtex]
[Describes a hybrid generative/discriminative classification algorithm, with theoretical and empirical justification for text classification.]


Other Projects

I worked closely with the NIPS 2007 Program Chairs as the NIPS Workflow Master.
[What does the Workflow Master do? Fun tasks like writing programs to automatically assign initial reviewers to each of the 1000 submitted papers!]
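
As a purely hypothetical illustration of what such an assignment program might look like (not the actual NIPS tooling): given a paper-reviewer affinity matrix, duplicating each reviewer once per available slot turns the task into a standard one-to-one assignment problem.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_initial_reviewers(affinity, slots_per_reviewer=3):
        # affinity[p, r] scores how well reviewer r fits paper p.
        # Each reviewer is duplicated once per slot so the one-to-one
        # assignment solver can hand out several papers per reviewer.
        expanded = np.repeat(affinity, slots_per_reviewer, axis=1)
        papers, cols = linear_sum_assignment(expanded, maximize=True)
        return {p: int(c // slots_per_reviewer) for p, c in zip(papers, cols)}

    rng = np.random.default_rng(0)
    assignment = assign_initial_reviewers(rng.random((10, 5)))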

Microsoft Research
Research Intern in the Machine Learning and Applied Statistics group, Jul-Sep 2006. Mentor: Joshua Goodman.
[Devised and implemented machine learning methods for detecting web spam using properties of linked webpages.]

Google
Intern, Jun-Sep 2004. Mentor: Vibhu Mittal.
[Worked with Google's MapReduce infrastructure to design a method for automatically extracting question-answer pairs from unstructured FAQ webpages.]

IIT Kanpur RoboCup 2002 Simulation League Team
Akhil Gupta, Rajat Raina, Amitabha Mukerjee
RoboCup 2002, Fukuoka, Japan
[RoboCup is the world cup for soccer robots and simulations.]

EPFL (Swiss Federal Institute of Technology), Switzerland
Intern, AI Lab, May-Jul 2001.
[Implemented a software agent architecture using agent ontologies.]

The reading list for my AI Qualifying Exam at Stanford.


The question of whether a computer can think is no more interesting than the question of whether a submarine can swim. -- Edsger W. Dijkstra