Seminar by Satyadev Nandakumar

Prediction Games against Stationary Ergodic Processes

Satyadev Nandakumar
IIT Kanpur

    Date:    Thursday, October 18th, 2012
    Time:    5PM
    Venue:   CS101

Abstract:

The problem of prediction is broadly as follows: is there an algorithm which, given a sequence of binary observations x[0], x[1], ..., x[N-1], can predict the next bit x[N]? To formalize this, we fix two criteria: first, what typical property the set of possible outcome sequences possesses, and second, how we measure the performance of a predictor.

Learning theorists have studied an interesting variant: given a collection of "experts" who predict the outcomes, can we do as well as the best among them, without knowing in advance which expert is best? Here the "aggregator" has no extra information about the source sequence, yet it must combine the expert predictions in a reasonable way.
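
As an illustration, here is a minimal Python sketch of one standard aggregation scheme, the exponentially weighted average forecaster; the abstract does not name a specific algorithm, so this choice and all names below are ours:

    import math

    def aggregate(expert_preds, weights):
        # Weighted average of the experts' predictions in [0, 1]
        total = sum(weights)
        return sum(w * p for w, p in zip(weights, expert_preds)) / total

    def update(weights, expert_preds, outcome, eta=0.5):
        # Penalize each expert exponentially in its absolute loss
        return [w * math.exp(-eta * abs(p - outcome))
                for w, p in zip(weights, expert_preds)]

    # One example round: two experts, uniform initial weights
    weights = [1.0, 1.0]
    preds = [0.9, 0.2]   # experts' probabilities that the next bit is 1
    b = aggregate(preds, weights)
    weights = update(weights, preds, outcome=1)

Experts that predict well keep large weights, so the aggregator's prediction tracks the best expert over time without knowing its identity in advance.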

The performance of a predictor is measured with a loss function. Let b be the prediction for the next bit x[N]. Typical loss functions are the absolute loss |x[N] - b| (score 0 if the prediction is correct, 1 otherwise), and the log-loss -log(|(1 - x[N]) - b|), which is related to the Shannon entropy.
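
Concretely, the two loss functions above can be written as follows (a small Python illustration; the function names are ours):

    import math

    def absolute_loss(x, b):
        # 0 when the prediction matches the outcome, 1 otherwise (binary b)
        return abs(x - b)

    def log_loss(x, b):
        # Equals -log(b) when x = 1 and -log(1 - b) when x = 0, i.e. the
        # negative log of the probability the predictor assigned to the outcome
        return -math.log(abs((1 - x) - b))

Here b may be a probability in (0, 1) rather than a hard 0/1 guess, which is what makes the log-loss meaningful.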

There are learning-theoretic algorithms which asymptotically perform as well as the best expert (within a multiplicative constant). In the talk we outline our recent work, which establishes a lower bound for this problem: assuming that the outcomes come from a stationary ergodic distribution, both the "best expert" and the aggregator incur at least the generalized entropy rate of the distribution on almost every infinite sequence of outcomes. (This is joint work with Mrinalkanti Ghosh.)
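
To see why the entropy rate appears as the barrier, consider the simplest stationary ergodic source, an i.i.d. biased coin. The following short simulation (our own illustration; the parameter p = 0.7 is arbitrary) shows the average log-loss of the best predictor for this source converging to its Shannon entropy:

    import math, random

    def entropy(p):
        # Shannon entropy of a Bernoulli(p) source, in nats to match -log
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))

    random.seed(0)
    p = 0.7
    xs = [1 if random.random() < p else 0 for _ in range(100000)]

    # Average log-loss of the predictor that always announces b = p
    avg_loss = sum(-math.log(abs((1 - x) - p)) for x in xs) / len(xs)

    print(avg_loss, entropy(p))   # the two values agree closely

No predictor can do better on average, which is the i.i.d. special case of the lower bound above.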
