Distinguished Speaker Series – Regina Barzilay

Title: Finding Simple Solutions for Hard NLP Problems

Speaker: Regina Barzilay

Abstract: Progress on many well-established problems in Natural Language Processing comes from applying generic machine learning techniques to the task at hand. While successful, this perspective overlooks the hidden simplicity of many tasks, and the ways they can be made simpler to solve. In this talk, I will show how simple methods can be effectively applied to two core NLP tasks: dependency parsing and information extraction.

Dependency parsing as a structured prediction task is a hard combinatorial problem, typically solved by adapting general optimization methods to parsing. However, we demonstrate that, on average, parsing appears easier than its broader complexity class would suggest, and show that a simple, flexible randomized algorithm outperforms state-of-the-art optimization techniques.
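
The flavor of such a randomized approach can be illustrated with a minimal hill-climbing sketch. This is an invented toy illustration, not the speaker's actual algorithm: starting from a random dependency tree, greedily reattach one word's head at a time while the score improves, and restart from fresh random trees. The `arc_score` function and all names here are hypothetical; in practice the scorer would come from a trained model.

```python
import random

def creates_cycle(heads, child, new_head):
    """Would reattaching `child` under `new_head` create a cycle?"""
    node = new_head
    while node != -1:            # walk up until we reach the root (head -1)
        if node == child:
            return True
        node = heads[node]
    return False

def tree_score(heads, arc_score):
    """Score of a tree: sum of arc scores arc_score(head, child)."""
    return sum(arc_score(h, c) for c, h in enumerate(heads))

def random_tree(n, root):
    """Sample a random dependency tree over n words with the given root."""
    heads = [None] * n
    heads[root] = -1
    attached, rest = [root], [i for i in range(n) if i != root]
    random.shuffle(rest)
    for w in rest:               # attach each word under an already-attached one
        heads[w] = random.choice(attached)
        attached.append(w)
    return heads

def hill_climb(n, arc_score, restarts=12, seed=0):
    """Restart from random trees; greedily reattach heads while score improves."""
    random.seed(seed)
    best, best_s = None, float("-inf")
    for r in range(restarts):
        heads = random_tree(n, root=r % n)   # cycle through possible roots
        improved = True
        while improved:
            improved = False
            for child in random.sample(range(n), n):
                if heads[child] == -1:
                    continue                  # keep the root fixed this restart
                for new_head in range(n):
                    if new_head in (child, heads[child]):
                        continue
                    if creates_cycle(heads, child, new_head):
                        continue
                    gain = arc_score(new_head, child) - arc_score(heads[child], child)
                    if gain > 0:              # first-improvement greedy move
                        heads[child] = new_head
                        improved = True
        s = tree_score(heads, arc_score)
        if s > best_s:
            best, best_s = heads[:], s
    return best, best_s
```

With a toy arc scorer that rewards only the gold arcs, the climber recovers the gold tree; the point of the sketch is that each step is a trivial local move, with randomness and restarts doing the combinatorial work.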

Traditional formulations of information extraction focus on learning extraction patterns from a given document. In contrast, we refocus the effort on finding other sources that contain the sought information but express it in a form that a basic extractor can "understand". The resulting system is implemented within a reinforcement learning framework that combines query reformulation, basic extraction, and answer validation. Empirical results show that learning to chase easy answers yields significant gains over traditional extractors.
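
The shape of this idea can be sketched with a tabular Q-learning agent in an invented, deterministic toy world (the actual system described in the talk is far more sophisticated): at each step the agent either accepts the current extraction or reformulates the query against another source, paying a small cost per extra query and earning a validation reward at the end. The names `SOURCES`, `CORRECT_PROB`, and `QUERY_COST` are all assumptions of this sketch.

```python
import random

# Toy world: an extractor reads one "source" at a time; the agent decides to
# accept the current extraction or requery a different source.
SOURCES = ["original article", "rephrased query A", "rephrased query B"]
CORRECT_PROB = [0.0, 0.0, 1.0]  # in this toy, only source 2 yields the right value
QUERY_COST = -0.05              # small penalty per extra query
MAX_STEPS = 4

def run_episode(q, rng, eps=0.2, alpha=0.2, gamma=0.9, learn=True):
    """One episode of tabular Q-learning; state = source of current extraction."""
    n = len(SOURCES)
    state = 0                                  # start from the original article
    correct = rng.random() < CORRECT_PROB[0]
    ret = 0.0
    for _ in range(MAX_STEPS):
        # actions: 0 = accept current extraction, 1..n = requery source a-1
        if learn and rng.random() < eps:
            a = rng.randrange(n + 1)           # epsilon-greedy exploration
        else:
            a = max(range(n + 1), key=lambda x: q[(state, x)])
        if a == 0:                             # accept: answer-validation reward
            r = 1.0 if correct else -1.0
            if learn:
                q[(state, 0)] += alpha * (r - q[(state, 0)])
            return ret + r
        nxt = a - 1                            # reformulate: query another source
        correct = rng.random() < CORRECT_PROB[nxt]
        if learn:
            best_next = max(q[(nxt, x)] for x in range(n + 1))
            q[(state, a)] += alpha * (QUERY_COST + gamma * best_next - q[(state, a)])
        state, ret = nxt, ret + QUERY_COST
    r = 1.0 if correct else -1.0               # out of budget: forced accept
    if learn:
        q[(state, 0)] += alpha * (r - q[(state, 0)])
    return ret + r

rng = random.Random(0)
q = {(s, a): 0.0 for s in range(len(SOURCES)) for a in range(len(SOURCES) + 1)}
for _ in range(3000):
    run_episode(q, rng)
```

After training, the greedy policy reformulates toward the reliable source and then accepts, mirroring the talk's theme of learning to seek out sources where extraction is easy.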

This is joint work with Tommi Jaakkola, Tao Lei, Karthik Narasimhan and Yuan Zhang.

Bio: Regina Barzilay is a professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Her research interests are in natural language processing. She is a recipient of various awards, including the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Microsoft Faculty Fellowship, and several Best Paper Awards at top NLP conferences. She received her Ph.D. in Computer Science from Columbia University and spent a year as a postdoc at Cornell University.

Sponsored by the Stanford Computer Forum
