Distinguished Speaker Series – Yoshua Bengio

Title: The Challenge of Sequential Modeling

Speaker: Yoshua Bengio

Abstract: 
Deep learning research has been moving forward at a rapid pace in recent years, with progress on many fronts and several major challenges still ahead of us. One of the difficult questions is that of properly modelling sequences. The main tool is the recurrent neural network and its extensions, such as neural Turing machines and other trainable state machines. The applications span the modelling of video, speech, natural language and more. This presentation will summarize recent progress in this area at the Montreal Institute for Learning Algorithms. We have greatly improved our understanding of the architectural properties of recurrent networks, in particular as they influence both the optimization difficulty and the ability to capture greater non-linearity and longer-term dependencies. We have also found better ways of training state machines like recurrent nets by injecting and annealing noise. We are developing a better understanding of how higher-level variables, possibly operating at slower time scales, can be used to improve sequence modelling. These ideas have been applied to language modelling as well as dialogue generation, introducing novel ways of backpropagating through stochastic discrete decisions and of learning how to segment sequences causally.
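As one illustration of the noise-injection idea mentioned in the abstract, below is a minimal, hypothetical sketch (not the speaker's actual method): a small recurrent net trained on a toy next-step prediction task, with Gaussian noise added to the hidden state during training and its magnitude annealed toward zero. The model, task, schedule, and all hyperparameters are assumptions chosen for illustration.

    # Illustrative sketch only: noise injected into an RNN's hidden state,
    # with the noise level annealed to zero over training. Toy task: predict
    # the next value of a sine wave. All settings here are assumptions.
    import torch
    import torch.nn as nn

    class NoisyRNN(nn.Module):
        def __init__(self, input_size=1, hidden_size=32):
            super().__init__()
            self.cell = nn.RNNCell(input_size, hidden_size)
            self.readout = nn.Linear(hidden_size, 1)

        def forward(self, x, noise_std=0.0):
            # x has shape (seq_len, batch, input_size)
            h = x.new_zeros(x.size(1), self.cell.hidden_size)
            outputs = []
            for t in range(x.size(0)):
                h = self.cell(x[t], h)
                if self.training and noise_std > 0:
                    # Inject Gaussian noise into the hidden state.
                    h = h + noise_std * torch.randn_like(h)
                outputs.append(self.readout(h))
            return torch.stack(outputs)

    model = NoisyRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    t = torch.linspace(0, 12.56, 101).unsqueeze(1).unsqueeze(2)  # (seq, 1, 1)
    x, y = torch.sin(t[:-1]), torch.sin(t[1:])  # next-step prediction pairs

    for epoch in range(200):
        noise_std = 0.5 * (1 - epoch / 200)  # linearly anneal noise to zero
        pred = model(x, noise_std=noise_std)
        loss = nn.functional.mse_loss(pred, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

The intuition behind such a schedule is that strong noise early in training smooths the optimization landscape and discourages brittle hidden-state dynamics, while annealing it away lets later epochs fine-tune the noiseless recurrence actually used at test time.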

Bio: Yoshua Bengio received a PhD in Computer Science from McGill University, Canada, in 1991. After two post-doctoral years, one at M.I.T. with Michael Jordan and one at AT&T Bell Laboratories with Yann LeCun and Vladimir Vapnik, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of three books and more than 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing and manifold learning. He is among the most cited Canadian computer scientists and is or has been an associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms; he is a Senior Fellow of the Canadian Institute for Advanced Research, and since 2014 he has co-directed its program focused on deep learning. He heads the Montreal Institute for Learning Algorithms (MILA), currently the largest academic research group on deep learning. He is on the board of the NIPS Foundation and has been program chair and general chair for NIPS. He co-organized the Learning Workshop for 14 years and co-created the International Conference on Learning Representations. His current interests are centered around a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, generative models, biologically inspired learning algorithms, natural language understanding and other challenging applications of machine learning.

Organizers: Chris Manning and Stefano Ermon

Sponsored by the Stanford Computer Forum
