Online Learning from Optimal Actions

Yuri Fonseca

[Proceedings link] [PDF]

Session: Online Learning, Game Theory 1 (A)

Session Chair: Haipeng Luo

Poster: Poster Session 1

Abstract: We study the problem of online contextual optimization where, at each period, instead of observing the loss, we observe, after the fact, the optimal action that an oracle with full knowledge of the objective function would have taken. At each period, the decision-maker has access to a new set of feasible actions to select from and to a new contextual function that affects that period's loss function. We aim to minimize regret, defined as the difference between our cumulative losses and those incurred by an all-knowing oracle. We obtain the first regret bound for this problem that is logarithmic in the time horizon. Our results are derived through the development and analysis of a novel algorithmic structure that leverages the underlying geometry of the problem.
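The feedback model described in the abstract can be illustrated with a toy simulation. The sketch below assumes linear losses over a finite feasible set drawn fresh each period; the learner never sees its loss, only the oracle's optimal action revealed afterward. The "mimic the last revealed optimal action" strategy is a hypothetical placeholder for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, k = 200, 3, 5  # periods, dimension, actions per period

total_regret = 0.0
last_opt = None
for t in range(T):
    actions = rng.normal(size=(k, d))  # new feasible action set each period
    c = rng.normal(size=d)             # hidden cost vector (context-driven loss)
    losses = actions @ c               # linear loss of each feasible action
    opt = int(np.argmin(losses))       # oracle's optimal action index

    # Learner receives no loss feedback; as a placeholder strategy,
    # pick the feasible action closest to the last revealed optimal action.
    if last_opt is None:
        pick = 0
    else:
        pick = int(np.argmin(np.linalg.norm(actions - last_opt, axis=1)))

    # Regret: learner's loss minus the oracle's loss this period.
    total_regret += losses[pick] - losses[opt]

    # Feedback: only the oracle's optimal action is revealed, after the fact.
    last_opt = actions[opt]

print(total_regret)
```

Per-period regret is nonnegative by construction, since the oracle minimizes the loss over the same feasible set the learner chooses from.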

Summary presentation

Full presentation

Discussion