Adversarially Robust Learning with Unknown Perturbation Sets

Omar Montasser, Steve Hanneke, Nathan Srebro


Session: Robustness, Privacy and Fairness (B)

Session Chair: Thomas Steinke

Poster: Poster Session 2

Abstract: We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set. In place of knowledge of the perturbation set, the learner relies on interaction with an adversarial attacker or on access to attack oracles, and we examine several models for such interaction. We obtain upper bounds on the sample complexity, and upper and lower bounds on the number of required interactions (or number of successful attacks) in the different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, without any assumptions on the perturbation set.
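The interaction model described above, in which a learner probes an attack oracle rather than knowing the perturbation set, can be illustrated with a minimal toy sketch. This is not the paper's algorithm: it assumes a hypothetical 1-D threshold hypothesis class, an interval perturbation set hidden inside the oracle, and a simple train-attack-augment loop.

```python
# Hedged sketch: robust learning via an attack oracle, for a toy 1-D
# threshold class. The perturbation set (an interval of radius r) is
# known only to the oracle, not to the learner. All names and the loop
# itself are illustrative assumptions, not the paper's method.

def erm_threshold(data):
    """Return a threshold t minimizing 0/1 error on data.
    Points with x >= t are predicted +1, otherwise -1."""
    candidates = sorted({x for x, _ in data}) + [float("inf")]
    best_t, best_err = None, None
    for t in candidates:
        err = sum((1 if x >= t else -1) != y for x, y in data)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def make_attack_oracle(radius):
    """Attack oracle holding the hidden perturbation set [x-r, x+r].
    Given a threshold t and an example (x, y), it returns a perturbed
    point z with a wrong prediction, or None if (x, y) is robustly
    correctly classified. For threshold predictors it suffices to
    check the interval endpoints (the prediction is monotone in z)."""
    def oracle(t, x, y):
        for z in (x - radius, x, x + radius):
            if (1 if z >= t else -1) != y:
                return z
        return None
    return oracle

def robust_learn(data, oracle, max_rounds=20):
    """Train, query the oracle for successful attacks on each training
    example, fold the returned perturbed points back into the training
    set, and repeat until no attack succeeds."""
    augmented = list(data)
    t = erm_threshold(augmented)
    for _ in range(max_rounds):
        attacks = [(oracle(t, x, y), y) for x, y in data]
        attacks = [(z, y) for z, y in attacks if z is not None]
        if not attacks:
            return t  # no successful attacks remain
        augmented.extend(attacks)
        t = erm_threshold(augmented)
    return t
```

For example, with training set `[(-1.0, -1), (1.0, 1)]` and hidden radius `0.5`, the loop terminates at a threshold that the oracle can no longer attack on any training point.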
