CS 773: Online Learning and Optimization

Pre-requisites

Instructor’s consent. Familiarity with the basics of probability, statistics, and linear algebra is essential. Prior exposure to machine learning techniques is desirable.

Short Description

This is a course on advanced techniques in optimization and learning. The techniques introduced can be used for tasks such as sequential prediction and optimization on large-scale streaming data; they are the methods of choice for massive learning tasks and form the basis of a large family of optimization and learning routines. The course will be self-contained, but prior familiarity with machine learning will nevertheless help, both as a source of basic learning-theoretic concepts and as motivation for large-scale learning and optimization.

Topics

  1. Preliminaries from convex analysis, real analysis, and probability theory
  2. Online Prediction with Full Feedback
    • Online classification – Perceptron, Winnow
    • Online regression – gradient descent, exponentiated gradient
    • Learning with expert advice – weighted majority, hedge (see the Hedge sketch after this list)
  3. Online Convex Optimization
    • Review of batch optimization – batch gradient descent
    • Follow the leader – regularized, perturbed
    • Online gradient descent – polynomial and logarithmic regret bounds (see the OGD sketch after this list)
    • Online mirror descent – links to OGD, FTL
    • Stochastic gradient descent – online to batch conversion and application to batch learning
  4. Online Prediction with Limited Feedback
    • Stochastic/adversarial multi-armed bandits – EXP3, UCB, Thompson Sampling (see the UCB1 sketch after this list)
    • Linear and contextual bandits – LinUCB, LinTS
  5. Special Topics (depending on interest and available time)
    • Tracking shifting experts
    • Mini-batch methods
    • Beyond additive regret
    • Online second order methods
    • Derivative-free optimization
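
To give a flavour of topic 2, below is a minimal NumPy sketch of the Hedge (exponential weights) strategy for learning with expert advice. The fixed learning rate eta and the assumption of losses in [0, 1] are choices made for this toy example, not a prescribed setup from the course.

    import numpy as np

    def hedge(expert_losses, eta=0.1):
        """Hedge (exponential weights) for prediction with expert advice (sketch).

        expert_losses: (T, N) array of per-round losses in [0, 1] for N experts.
        Returns the learner's cumulative expected loss.
        """
        T, N = expert_losses.shape
        weights = np.ones(N)
        total_loss = 0.0
        for t in range(T):
            p = weights / weights.sum()                 # play the normalized weights
            total_loss += p @ expert_losses[t]          # expected loss this round
            weights *= np.exp(-eta * expert_losses[t])  # multiplicative update
        return total_loss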
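
Topic 3 focuses on regret bounds for online convex optimization. Below is a minimal sketch of projected online gradient descent with the standard step size eta_t = D / (G * sqrt(t)), which gives O(sqrt(T)) regret for G-Lipschitz convex losses on a feasible set of diameter D; the interface (grad_fns, project, D, G) is an illustrative choice, not a prescribed one.

    import numpy as np

    def online_gradient_descent(grad_fns, project, x0, D=1.0, G=1.0):
        """Projected online gradient descent (sketch).

        grad_fns: sequence of callables; grad_fns[t](x) is the gradient of the
                  round-t convex loss at x.
        project:  Euclidean projection onto the feasible set.
        """
        x = np.asarray(x0, dtype=float)
        iterates = [x.copy()]
        for t, grad in enumerate(grad_fns, start=1):
            eta = D / (G * np.sqrt(t))       # decaying step size
            x = project(x - eta * grad(x))   # gradient step, then project back
            iterates.append(x.copy())
        return iterates

    # Example: losses f_t(x) = (x - z_t)^2 over the interval [-1, 1].
    zs = np.random.uniform(-1, 1, size=100)
    grads = [lambda x, z=z: 2 * (x - z) for z in zs]
    xs = online_gradient_descent(grads, lambda x: np.clip(x, -1.0, 1.0), x0=np.array([0.0]))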
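
For topic 4, the sketch below implements the UCB1 index rule for stochastic multi-armed bandits with Bernoulli rewards, using the standard exploration bonus sqrt(2 ln t / n_i). The arm_means argument only drives the simulated environment in this toy example; the algorithm itself observes rewards alone.

    import numpy as np

    def ucb1(arm_means, T, rng=None):
        """UCB1 for stochastic multi-armed bandits with Bernoulli rewards (sketch)."""
        rng = np.random.default_rng(0) if rng is None else rng
        K = len(arm_means)
        counts = np.zeros(K)   # number of pulls per arm
        means = np.zeros(K)    # empirical mean reward per arm
        total_reward = 0.0
        for t in range(1, T + 1):
            if t <= K:
                arm = t - 1                                # pull each arm once
            else:
                bonus = np.sqrt(2 * np.log(t) / counts)    # optimism bonus
                arm = int(np.argmax(means + bonus))
            r = float(rng.random() < arm_means[arm])       # simulated Bernoulli reward
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]   # running mean update
            total_reward += r
        return total_reward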

References

There is no required textbook for this course. Recent books, monographs, and publications will form the bulk of the reference material.

  1. E. Hazan. Introduction to Online Convex Optimization. Draft, 2015. http://ocobook.cs.princeton.edu/
  2. N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games, Cambridge University Press, 2006.
  3. S. Shalev-Shwartz. Online Learning and Online Convex Optimization, Foundations and Trends in Machine Learning, 4(2): 107-194, 2012.
  4. M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning, The MIT Press, 2012.
  5. S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Foundations and Trends in Machine Learning, 5(1): 1-122, 2012.
  6. S. Boyd and L. Vandenberghe. Convex Optimization, Cambridge University Press, 2004.
  7. Publications at major machine learning venues such as NIPS, ICML, COLT, and AISTATS, and in journals such as the Journal of Machine Learning Research, Machine Learning, and IEEE Transactions on Information Theory.