Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition

Liyu Chen, Haipeng Luo, Chen-Yu Wei

[Proceedings link] [PDF]

Session: Bandits, RL and Control 1 (A)

Session Chair: Yuxin Chen

Poster: Poster Session 2

Abstract: We study the stochastic shortest path problem with adversarial costs and known transition, and show that the minimax regret is $O(\sqrt{DT_\star K})$ and $O(\sqrt{DT_\star SA K})$ for the full-information setting and the bandit feedback setting respectively, where $D$ is the diameter, $T_\star$ is the expected hitting time of the optimal policy, $S$ is the number of states, $A$ is the number of actions, and $K$ is the number of episodes. Our results significantly improve upon the recent work of Rosenberg and Mansour (2020), which only considers the full-information setting and achieves suboptimal regret. Our work is also the first to consider bandit feedback with adversarial costs. Our algorithms are built on top of the Online Mirror Descent framework with a variety of new techniques that might be of independent interest, including an improved multi-scale expert algorithm, a reduction from general stochastic shortest path to a special loop-free case, a skewed occupancy measure space, and a novel correction term added to the cost estimators. Interestingly, the last two elements reduce the variance of the learner via positive bias and the variance of the optimal policy via negative bias respectively, and having them simultaneously is critical for obtaining the optimal high-probability bound in the bandit feedback setting.
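To make the Online Mirror Descent framework mentioned above concrete, here is a minimal sketch of entropic OMD (multiplicative weights) over a probability simplex against adversarial cost vectors. This is a simplified stand-in, not the paper's algorithm: the paper runs OMD over a (skewed) occupancy-measure space with cost estimators and correction terms, whereas this toy version, with hypothetical names like `omd_entropic`, uses a plain simplex and fully observed costs.

```python
import numpy as np

def omd_entropic(costs, eta):
    """Entropic Online Mirror Descent over the probability simplex.

    costs: (K, n) array; one adversarial cost vector per episode.
    eta:   learning rate.
    Returns the learner's cumulative expected cost.
    """
    K, n = costs.shape
    q = np.full(n, 1.0 / n)            # uniform initial distribution
    total = 0.0
    for c in costs:
        total += q @ c                 # incur expected cost under current play
        q = q * np.exp(-eta * c)       # mirror step with entropy regularizer
        q /= q.sum()                   # normalize back onto the simplex
    return total

# Regret against the best fixed action in hindsight on random costs
rng = np.random.default_rng(0)
costs = rng.random((1000, 5))
learner = omd_entropic(costs, eta=0.05)
best = costs.sum(axis=0).min()
regret = learner - best
```

With costs in $[0,1]$, this scheme guarantees regret at most $\ln(n)/\eta + \eta K/8$, i.e. $O(\sqrt{K \ln n})$ for a tuned $\eta$; the paper's contribution is obtaining analogous guarantees when the decision set is the occupancy-measure polytope of a stochastic shortest path instance and costs are only partially observed.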
