This course takes a Bayesian statistical modeling approach to machine learning. Key benefits of the Bayesian approach include the ability to quantify uncertainty in parameters and predictions through posterior probability distributions, to incorporate prior knowledge in a principled way, to learn model hyperparameters and the right model size/complexity automatically from data, and to support online learning in a natural way. We will discuss the foundations of Bayesian modeling, especially in the context of machine learning, and, through various case-studies/running-examples, we will look at how to set up a machine learning problem as a Bayesian model and how to design sampling/optimization techniques that perform computationally scalable inference in these models.
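As a tiny illustration of the benefits listed above (this sketch is not part of the course materials), consider a Beta-Bernoulli model of a coin's bias: the posterior distribution quantifies parameter uncertainty, the prior encodes prior knowledge through pseudo-counts, and conjugate updating one observation at a time gives online learning for free. The specific prior and data values below are assumptions for illustration only.

```python
# Prior: Beta(a, b) over the unknown coin bias theta.
a, b = 2.0, 2.0  # assumed weakly informative prior (pseudo-counts)

data = [1, 0, 1, 1, 0, 1]  # assumed sequence of observed coin flips

# Online conjugate updates: each observation updates the posterior
# in place, so the model never needs to revisit past data.
for x in data:
    a += x        # heads increment the "success" count
    b += 1 - x    # tails increment the "failure" count

# The posterior Beta(a, b) summarizes what is known about theta;
# its variance quantifies the remaining uncertainty.
post_mean = a / (a + b)
post_var = (a * b) / ((a + b) ** 2 * (a + b + 1))
print(post_mean, post_var)  # mean 0.6 after 4 heads in 6 flips
```

With more data the posterior variance shrinks, making the reduction in uncertainty explicit rather than leaving it implicit in a point estimate.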
Instructor's consent. Note, however, that this course will make extensive use of concepts from probability, statistics, and optimization, so a solid background in these areas, as well as in introductory machine learning, is essential. Students are also expected to be comfortable programming in MATLAB/Python.
(Tentative break-up) There will be 5 homework assignments (30% total), some of which may include a programming component, a mid-term exam (20%), a final exam (25%), and a course project (25%).
A tentative set of topics to be covered in this course includes:
Treatment of the above topics will be via several case-studies/running-examples, including generalized linear models, finite/infinite mixture models, finite/infinite latent factor models, matrix factorization of real/discrete/count data, sparse linear models, linear Gaussian models, linear dynamical systems and time-series models, and topic models for text data.
We will primarily use lecture notes/slides prepared for this class. In addition, we will refer to monographs and research papers (from top machine learning conferences and journals) for some of the topics. Some recommended, though not required, books are: