Title : Memory Augmented Neural Networks
Speaker : Sarath Chandar , PhD student University of Montreal
Date : Jan 20, 2017 (Fri)
Time : 5:30pm
Venue: KD101

Designing general-purpose learning algorithms is a long-standing goal of artificial intelligence. 
A general-purpose AI agent should have a memory in which it can store information and from which 
it can retrieve it. Despite the success of deep learning, in particular with the introduction of LSTMs and GRUs, 
there remains a set of complex tasks that are challenging for conventional neural networks. 
Such tasks often require a neural network to be equipped with an explicit, external memory in which a 
larger, potentially unbounded, set of facts needs to be stored. These include, but are not limited to, reasoning, 
planning, episodic question answering, and learning compact algorithms. Recently, two promising 
neural-network-based approaches to this type of task have been proposed: Memory Networks and Neural Turing Machines.
In this talk, I will give an overview of this new paradigm of "neural networks with memory". I will present a 
unified architecture for Memory Augmented Neural Networks (MANN) and discuss the ways in which one can address
the external memory and hence read from and write to it. I will then introduce Neural Turing Machines and Memory 
Networks as specific instantiations of this general architecture. In the second half of the talk, we will focus
on recent advances in MANN that address the following questions: How can we read from and write to an extremely 
large memory in a scalable way? How can we design efficient non-linear addressing schemes? How can we do efficient 
reasoning using a large-scale memory and an episodic memory? The answer to each of these questions introduces 
a variant of MANN. I will conclude the tutorial with several open challenges in MANN.
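To make the addressing idea concrete: the common ingredient of Memory Networks and Neural Turing Machines is soft, content-based attention over memory rows. The sketch below is an illustrative NumPy toy (all names, shapes, and the key-strength parameter beta are assumptions for this example, not code from the talk): a query key is compared to each memory slot by cosine similarity, the similarities are turned into attention weights with a softmax, and the read vector is the weighted sum of the rows.

```python
import numpy as np

# Illustrative sketch of content-based memory addressing in the style of
# Neural Turing Machines / Memory Networks. Names and shapes are assumptions.

def cosine_similarity(key, memory):
    """Cosine similarity between a key vector (D,) and each memory row (N, D)."""
    key_norm = key / (np.linalg.norm(key) + 1e-8)
    mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    return mem_norm @ key_norm  # (N,)

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

def content_read(memory, key, beta=1.0):
    """Soft read: attention weights over slots, then a weighted sum of rows.

    beta is a "key strength" that sharpens the attention distribution.
    """
    weights = softmax(beta * cosine_similarity(key, memory))  # (N,)
    return weights, weights @ memory                          # read vector (D,)

# Toy usage: 4 memory slots of dimension 3.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])
key = np.array([1.0, 0.0, 0.0])
weights, read_vec = content_read(memory, key, beta=5.0)
```

Because the read is a differentiable weighted sum rather than a hard lookup, the whole controller-plus-memory system can be trained end to end with gradient descent; the scalability questions above arise because this softmax touches every memory slot on every read.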

Speaker Bio: Sarath Chandar is a PhD student at the University of Montreal under the supervision 
of Yoshua Bengio and Hugo Larochelle. His work mainly focuses on deep learning for complex NLP tasks such as question
answering and dialog systems. He also investigates scalable training procedures and memory access mechanisms for
memory network architectures. In the past, he has worked on multilingual representation learning and transfer 
learning across multiple languages. His research interests include Machine Learning, Natural Language Processing,
Deep Learning, and Reinforcement Learning. Before joining the University of Montreal, he was a Research Scholar at 
IBM Research India for a year. He completed his MS by Research at IIT Madras.

To view the complete publication list and speaker profile, please visit: http://sarathchandar.in/