Accepted papers

The proceedings of COLT 2017 are available in the Proceedings of Machine Learning Research (PMLR).

Best Paper. Yuchen Zhang, Percy Liang and Moses Charikar. A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics
Best Student Paper. Ashok Cutkosky and Kwabena Boahen. Online Learning Without Prior Information
Constantinos Daskalakis, Manolis Zampetakis and Christos Tzamos. Ten Steps of EM Suffice for Mixtures of Two Gaussians
Shachar Lovett and Jiapeng Zhang. Noisy Population Recovery from Unknown Noise
Lijun Zhang, Tianbao Yang and Rong Jin. Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds
Jonathan Scarlett, Ilija Bogunovic and Volkan Cevher. Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
Avrim Blum and Yishay Mansour. Efficient Co-Training of Linear Separators under Weak Dependence
Michal Moshkovitz and Dana Moshkovitz. Mixing Implies Lower Bounds for Space Bounded Learning
Mitali Bafna and Jonathan Ullman. The Price of Selection in Differential Privacy
Nader Bshouty, Dana Drachsler-Cohen, Martin Vechev and Eran Yahav. Learning Disjunctions of Predicates
Avinatan Hassidim and Yaron Singer. Submodular Optimization under Noise
Debarghya Ghoshdastidar, Ulrike von Luxburg, Maurilio Gutzeit and Alexandra Carpentier. Two-Sample Tests for Large Random Graphs using Network Statistics
Andreas Maurer. A second-order look at stability and generalization
Eric Balkanski and Yaron Singer. The Sample Complexity of Optimizing a Convex Function
Yevgeny Seldin and Gábor Lugosi. An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits
Daniel Vainsencher, Shie Mannor and Huan Xu. Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization
Alekh Agarwal, Haipeng Luo, Behnam Neyshabur and Robert Schapire. Corralling a Band of Bandit Algorithms
David Helmbold and Phil Long. Surprising properties of dropout in deep networks
Nikita Zhivotovskiy. Optimal learning via local entropies and sample compression
Ilias Diakonikolas, Daniel Kane and Alistair Stewart. Learning Multivariate Log-concave Distributions
Max Simchowitz, Kevin Jamieson and Benjamin Recht. The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
Lunjia Hu, Ruihan Wu, Tianhong Li and Liwei Wang. Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes
Alon Gonen and Shai Shalev-Shwartz. Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
Nicolas Brosse, Alain Durmus, Eric Moulines and Marcelo Pereyra. Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
Marc Lelarge and Léo Miolane. Fundamental limits of symmetric low-rank matrix estimation
Anima Anandkumar, Yuan Deng, Rong Ge and Hossein Mobahi. Homotopy Analysis for Tensor PCA
Ravi Kannan and Santosh Vempala. The Hidden Hubs Problem
Amit Daniely. Depth Separation for Neural Networks
Tomer Koren, Roi Livni and Yishay Mansour. Bandits with Movement Costs and Adaptive Pricing
Nicholas Harvey, Christopher Liaw and Abbas Mehrabian. Nearly-tight VC-dimension bounds for neural networks
Arnak Dalalyan. Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
Maxim Raginsky, Alexander Rakhlin and Matus Telgarsky. Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
Alon Cohen, Tamir Hazan and Tomer Koren. Tight Bounds for Bandit Combinatorial Optimization
Moran Feldman, Christopher Harshaw and Amin Karbasi. Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization
Surbhi Goel, Varun Kanade, Adam Klivans and Justin Thaler. Reliably Learning the ReLU in Polynomial Time
Bin Hu, Peter Seiler and Anders Rantzer. A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Michael Kearns and Zhiwei Steven Wu. Predicting with Distributions
Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile and Sébastien Gerchinovitz. Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning
Lijie Chen, Jian Li and Mingda Qiao. Towards Instance Optimal Bounds for Best Arm Identification
Vitaly Feldman. A General Characterization of the Statistical Query Complexity
Jerry Li. Robust Sparse Estimation Tasks in High Dimensions*
Nicolas Flammarion and Francis Bach. Stochastic Composite Least-Squares Regression with convergence rate $O(1/n)$
Yeshwanth Cherapanamjeri, Prateek Jain and Praneeth Netrapalli. Thresholding based Efficient Outlier Robust PCA
Amir Globerson, Roi Livni and Shai Shalev-Shwartz. Effective Semisupervised Learning on Manifolds
Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet and John Urschel. Rates of estimation for determinantal point processes
Ilias Zadik and David Gamarnik. High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition.
Pasin Manurangsi and Aviad Rubinstein. Inapproximability of VC Dimension and Littlestone’s Dimension
Joon Kwon, Vianney Perchet and Claire Vernade. Sparse Stochastic Bandits
Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian and Nathan Srebro. Learning Non-Discriminatory Predictors
Alexander Rakhlin and Karthik Sridharan. On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities
Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik and Colin White. Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems
Arpit Agarwal, Shivani Agarwal, Sepehr Assadi and Sanjeev Khanna. Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons
Jerry Li and Ludwig Schmidt. Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities
Jialei Wang, Weiran Wang and Nathan Srebro. Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox
Alexandr Andoni, Daniel Hsu, Kevin Shi and Xiaorui Sun. Correspondence retrieval
Andrea Locatelli, Alexandra Carpentier and Samory Kpotufe. Adaptivity to Noise Parameters in Nonparametric Active Learning
David Gamarnik, Quan Li and Hongyi Zhang. Matrix Completion from O(n) Samples in Linear Time
Salil Vadhan. On Learning versus Refutation
Simon Du, Sivaraman Balakrishnan and Aarti Singh. Computationally Efficient Robust Estimation of Sparse Functionals*
Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan and Bo Waggoner. Multi-Observation Elicitation
Constantinos Daskalakis and Qinxuan Pan. Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing
Vitaly Feldman and Thomas Steinke. Generalization for Adaptively-chosen Estimators via Stable Median
Shipra Agrawal, Vashist Avadhanula, Vineet Goyal and Assaf Zeevi. Thompson Sampling for the MNL-Bandit
Rafael Frongillo and Andrew Nobel. Memoryless Sequences for Differentiable Losses
Gergely Neu and Vicenç Gómez. Fast rates for online learning in Linearly Solvable Markov Decision Processes
Pranjal Awasthi, Avrim Blum, Nika Haghtalab and Yishay Mansour. Efficient PAC Learning from the Crowd
Tselil Schramm and David Steurer. Fast and robust tensor decomposition with applications to dictionary learning
Song Mei, Theodor Misiakiewicz, Andrea Montanari and Roberto Oliveira. Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
Yury Polyanskiy, Ananda Theertha Suresh and Yihong Wu. Sample complexity of population recovery
Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski and Sanjeev Arora. On the Ability of Neural Nets to Express Distributions
Aaron Potechin and David Steurer. Exact tensor completion with sum-of-squares
Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao and Ruosong Wang. Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
Dylan Foster, Alexander Rakhlin and Karthik Sridharan. ZIGZAG: A new approach to adaptive online learning

*: Papers to be merged.