These are the videos recorded at the Conference on Learning Theory (COLT) 2017, held in Amsterdam.
Friday, July 7th
- 09:00
Vitaly Feldman and Thomas Steinke
Generalization for Adaptively-chosen Estimators via Stable Median
- 09:20
Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian and Nathan Srebro
Learning Non-Discriminatory Predictors
- 09:40
Mitali Bafna and Jonathan Ullman
The Price of Selection in Differential Privacy
- 09:50
Pranjal Awasthi, Avrim Blum, Nika Haghtalab and Yishay Mansour
Efficient PAC Learning from the Crowd
- 10:20
Yuchen Zhang, Percy Liang and Moses Charikar
A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics (Best Paper Award)
- 10:40
Maxim Raginsky, Alexander Rakhlin and Matus Telgarsky
Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
- 10:50
Arnak Dalalyan
Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
- 11:00
Nicolas Brosse, Alain Durmus, Eric Moulines and Marcelo Pereyra
Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
- 11:10
Alon Gonen and Shai Shalev-Shwartz
Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
- 11:35
Scott Aaronson
PAC-Learning and Reconstruction of Quantum States
- 14:30
Yury Polyanskiy, Ananda Theertha Suresh and Yihong Wu
Sample complexity of population recovery
- 14:50
Shachar Lovett and Jiapeng Zhang
Noisy Population Recovery from Unknown Noise
- 15:00
Ilias Diakonikolas, Daniel Kane and Alistair Stewart
Learning Multivariate Log-concave Distributions
- 15:10
Constantinos Daskalakis, Manolis Zampetakis and Christos Tzamos
Ten Steps of EM Suffice for Mixtures of Two Gaussians
- 15:20
Ravi Kannan and Santosh Vempala
The Hidden Hubs Problem
- 16:00
Joon Kwon, Vianney Perchet and Claire Vernade
Sparse Stochastic Bandits
- 16:10
Yevgeny Seldin and Gábor Lugosi
An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits
- 16:20
Alekh Agarwal, Haipeng Luo, Behnam Neyshabur and Robert Schapire
Corralling a Band of Bandit Algorithms
- 16:30
Jonathan Scarlett, Ilija Bogunovic and Volkan Cevher
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
- 16:40
Lijie Chen, Jian Li and Mingda Qiao
Towards Instance Optimal Bounds for Best Arm Identification
- 16:50
Tomer Koren, Roi Livni and Yishay Mansour
Bandits with Movement Costs and Adaptive Pricing
- 17:20
Alon Cohen, Tamir Hazan and Tomer Koren
Tight Bounds for Bandit Combinatorial Optimization
- 17:30
Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile and Sébastien Gerchinovitz
Online Nonparametric Learning, Chaining, and the Role of Partial Feedback
- 17:40
Open Problems Session
Saturday, July 8th
- 09:00
Andrea Locatelli, Alexandra Carpentier and Samory Kpotufe
Adaptivity to Noise Parameters in Nonparametric Active Learning
- 09:20
Simon Du, Sivaraman Balakrishnan, Jerry Li and Aarti Singh
Computationally Efficient Robust Estimation of Sparse Functionals
- 09:30
Jerry Li and Ludwig Schmidt
Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities
- 09:40
Daniel Vainsencher, Shie Mannor and Huan Xu
Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization
- 09:50
Yeshwanth Cherapanamjeri, Prateek Jain and Praneeth Netrapalli
Thresholding based Efficient Outlier Robust PCA
- 10:20
Song Mei, Theodor Misiakiewicz, Andrea Montanari and Roberto Oliveira
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
- 10:40
Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik and Colin White
Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems
- 10:50
Moran Feldman, Christopher Harshaw and Amin Karbasi
Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization
- 11:00
Avinatan Hassidim and Yaron Singer
Submodular Optimization under Noise
- 11:10
Alexandr Andoni, Daniel Hsu, Kevin Shi and Xiaorui Sun
Correspondence retrieval
- 11:35
Ashok Cutkosky and Kwabena Boahen
Online Learning Without Prior Information (Best Student Paper Award)
- 11:55
Alexander Rakhlin and Karthik Sridharan
On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities
- 12:15
Gergely Neu and Vicenç Gómez
Fast rates for online learning in Linearly Solvable Markov Decision Processes
- 12:25
Dylan Foster, Alexander Rakhlin and Karthik Sridharan
ZIGZAG: A new approach to adaptive online learning
- 14:50
Avrim Blum and Yishay Mansour
Efficient Co-Training of Linear Separators under Weak Dependence
- 15:10
Amir Globerson, Roi Livni and Shai Shalev-Shwartz
Effective Semisupervised Learning on Manifolds
- 15:20
Lunjia Hu, Ruihan Wu, Tianhong Li and Liwei Wang
Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes
- 15:30
Nader Bshouty, Dana Drachsler Cohen, Martin Vechev and Eran Yahav
Learning Disjunctions of Predicates
Sunday, July 9th
- 09:00
Vitaly Feldman
A General Characterization of the Statistical Query Complexity
- 09:20
Michal Moshkovitz and Dana Moshkovitz
Mixing Implies Lower Bounds for Space Bounded Learning
- 09:40
Salil Vadhan
On Learning versus Refutation
- 09:50
Pasin Manurangsi and Aviad Rubinstein
Inapproximability of VC Dimension and Littlestone’s Dimension
- 10:20
Rafael Frongillo and Andrew Nobel
Memoryless Sequences for Differentiable Losses
- 10:40
Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan and Bo Waggoner
Multi-Observation Elicitation
- 10:50
Clément Canonne, Ilias Diakonikolas, Daniel Kane and Alistair Stewart
Testing Bayesian Networks
- 11:00
Constantinos Daskalakis and Qinxuan Pan
Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing
- 11:10
Debarghya Ghoshdastidar, Ulrike von Luxburg, Maurilio Gutzeit and Alexandra Carpentier
Two-Sample Tests for Large Random Graphs using Network Statistics
- 11:35
Andrea Montanari
Computational barriers in statistical learning
- 14:30
Lijun Zhang, Tianbao Yang and Rong Jin
Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds
- 14:50
Nicolas Flammarion and Francis Bach
Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$
- 15:00
Bin Hu, Peter Seiler and Anders Rantzer
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
- 15:10
Jialei Wang, Weiran Wang and Nathan Srebro
Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox
- 15:20
Eric Balkanski and Yaron Singer
The Sample Complexity of Optimizing a Convex Function
- 16:00
Max Simchowitz, Kevin Jamieson and Benjamin Recht
The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
- 16:20
Arpit Agarwal, Shivani Agarwal, Sepehr Assadi and Sanjeev Khanna
Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons
- 16:40
Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao and Ruosong Wang
Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
- 16:50
Shipra Agrawal, Vashist Avadhanula, Vineet Goyal and Assaf Zeevi
Thompson Sampling for the MNL-Bandit
Monday, July 10th
- 09:00
Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski and Sanjeev Arora
On the Ability of Neural Nets to Express Distributions
- 09:20
Amit Daniely
Depth Separation for Neural Networks
- 09:30
David Helmbold and Phil Long
Surprising properties of dropout in deep networks
- 09:40
Surbhi Goel, Varun Kanade, Adam Klivans and Justin Thaler
Reliably Learning the ReLU in Polynomial Time
- 09:50
Nicholas Harvey, Christopher Liaw and Abbas Mehrabian
Nearly-tight VC-dimension bounds for neural networks
- 10:20
Aaron Potechin and David Steurer
Exact tensor completion with sum-of-squares
- 10:40
Tselil Schramm and David Steurer
Fast and robust tensor decomposition with applications to dictionary learning
- 10:50
Anima Anandkumar, Yuan Deng, Rong Ge and Hossein Mobahi
Homotopy Analysis for Tensor PCA
- 11:00
Marc Lelarge and Léo Miolane
Fundamental limits of symmetric low-rank matrix estimation
- 11:10
David Gamarnik, Quan Li and Hongyi Zhang
Matrix Completion from O(n) Samples in Linear Time
- 11:55
Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet and John Urschel
Rates of estimation for determinantal point processes
- 12:05
Michael Kearns and Zhiwei Steven Wu
Predicting with Distributions
- 12:15
Andreas Maurer
A second-order look at stability and generalization
- 12:25
Nikita Zhivotovskiy
Optimal learning via local entropies and sample compression