
ACCEPTED PAPERS
Uniform concentration and symmetrization for weak interactions
Andreas Maurer, Massimiliano Pontil
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
Ohad Shamir
The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
Christopher Tosh, Sanjoy Dasgupta
Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm
Tongyi Cao, Akshay Krishnamurthy
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate
Lijun Zhang, Zhi-Hua Zhou
An Optimal High-Order Tensor Method for Convex Optimization
Bo Jiang, Haoyue Wang, Shuzhong Zhang
Testing Identity of Multidimensional Histograms
Ilias Diakonikolas, Daniel M. Kane, John Peebles
Improved Path-length Regret Bounds for Bandits
Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei
Sorted Top-k in Rounds
Mark Braverman, Jieming Mao, Yuval Peres
Privately Learning High-Dimensional Distributions
Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Surbhi Goel, Adam R. Klivans
On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA
Dan Garber
Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits
Yingkai Li, Yining Wang, Yuan Zhou
Lower Bounds for Parallel and Randomized Convex Optimization
Jelena Diakonikolas, Cristóbal Guzmán
Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff
Stochastic Gradient Descent Learns State Equations with Nonlinear Activations
Samet Oymak
Learning Ising Models with Independent Failures
Surbhi Goel, Daniel M. Kane, Adam R. Klivans
The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint
Stephen Tu, Benjamin Recht
Tight analyses for non-smooth stochastic gradient descent
Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa
Estimating the Mixing Time of Ergodic Markov Chains
Geoffrey Wolfer, Aryeh Kontorovich
Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
Cong Fang, Zhouchen Lin, Tong Zhang
Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard
Jan Hązła, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian
Learning in Non-convex Games with an Optimization Oracle
Naman Agarwal, Alon Gonen, Elad Hazan
Depth Separations in Neural Networks: What is Actually Being Separated?
Itay Safran, Ronen Eldan, Ohad Shamir
Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
Santosh Vempala, John Wilmes
A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise
Francis Bach, Kfir Y. Levy
Multi-armed Bandit Problems with Strategic Arms
Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
Distribution-Dependent Analysis of Gibbs-ERM Principle
Ilja Kuzborskij, Nicolò Cesa-Bianchi, Csaba Szepesvari
Sample complexity of partition identification using multi-armed bandits
Sandeep Juneja, Subhashini Kumarasamy
The Optimal Approximation Factor in Density Estimation
Olivier Bousquet, Daniel Kane, Shay Moran
Planting trees in graphs, and finding them back
Laurent Massoulié, Ludovic Stephan, Don Towsley
Fast determinantal point processes via distortion-free intermediate sampling
Michal Derezinski
Is your function low dimensional?
Anindya De, Elchanan Mossel, Joe Neeman
Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation
Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel
Making the Last Iterate of SGD Information Theoretically Optimal
Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli
Sharp Theoretical Analysis for Nonparametric Testing under Random Projection
Meimei Liu, Zuofeng Shang, Guang Cheng
Robustness of spectral methods for community detection
Ludovic Stephan, Laurent Massoulié
An Information-Theoretic Approach to Minimax Regret in Partial Monitoring
Tor Lattimore, Csaba Szepesvari
Global Convergence of the EM Algorithm for Mixtures of Two Component Linear Regression
Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis
Near-optimal method for highly smooth convex optimization
Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
Sampling and Optimization on Convex Sets in Riemannian Manifolds of Non-Negative Curvature
Navin Goyal, Abhishek Shetty
Learning to Prune: Speeding up Repeated Computations
Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik
Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon
Alexander Rakhlin, Xiyu Zhai
On Communication Complexity of Classification Problems
Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff
Theoretical guarantees for sampling and inference in generative models with latent diffusions
Belinda Tzen, Maxim Raginsky
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
Song Mei, Theodor Misiakiewicz, Andrea Montanari
Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem
Nadav Merlis, Shie Mannor
Combinatorial Algorithms for Optimal Design
Vivek Madan, Mohit Singh, Uthaipon Tantipongpipat, Weijun Xie
Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression
Michal Derezinski, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth
Budgeted Kernelized Bandits for Large Scale Gaussian Process Optimization
Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Combining Online Learning Guarantees
Ashok Cutkosky
Estimation of smooth densities in Wasserstein distance
Jonathan Weed, Quentin Berthet
Artificial Constraints and Hints for Unbounded Online Learning
Ashok Cutkosky
Inference under Local Constraints: Lower Bounds from Chi-Square Contractions
Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi
Communication and Memory Efficient Testing of Discrete Distributions
Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions
Adrien Taylor, Francis Bach
Learning rates for Gaussian mixtures under group invariance
Victor-Emmanuel Brunel
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization
Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, Cesar A. Uribe
Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting
Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang
Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches
Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford
A Robust Spectral Algorithm for Overcomplete Tensor Decomposition
Samuel B. Hopkins, Tselil Schramm, Jonathan Shi
Discrepancy, Coresets, and Sketches in Machine Learning
Zohar Karnin, Edo Liberty
When can unlabeled data improve the learning rate?
Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner
Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance
Ulysse Marteau-Ferey, Dmitrii M. Ostrovskii, Francis Bach, Alessandro Rudi
A Theory of Selective Prediction
Mingda Qiao, Gregory Valiant
Reconstructing Trees from Traces
Sami Davies, Miklos Racz, Cyrus Rashtchian
Solving Empirical Risk Minimization in the Current Matrix Multiplication Time
Yin Tat Lee, Zhao Song, Qiuyi Zhang
Non-convex sampling with the Metropolis-adjusted Langevin algorithm
Oren Mangoubi, Nisheeth K. Vishnoi
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
On Mean Estimation for General Norms with Statistical Queries
Jerry Li, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten
Maximum Entropy Distributions: Bit Complexity and Stability
Damian Straszak, Nisheeth K. Vishnoi
Classification with unknown class conditional label noise on non-compact feature spaces
Henry Reeve, Ata Kaban
Testing Markov Chains Without Hitting
Yeshwanth Cherapanamjeri, Peter Bartlett
Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning
R. Srikant, Lei Ying
Sum-of-squares meets square loss: Fast rates for agnostic tensor completion
Dylan Foster, Andrej Risteski
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free
Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei
Statistical Learning with a Nuisance Component
Dylan Foster, Vasilis Syrgkanis
Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness
Matthew Brennan, Guy Bresler
Affine Invariant Covariance Estimation for Heavy-Tailed Distributions
Dmitrii M. Ostrovskii, Alessandro Rudi
Learning Linear Dynamical Systems with Semi-Parametric Least Squares
Max Simchowitz, Ross Boczar, Benjamin Recht
Non-asymptotic Analysis of Biased Stochastic Approximation Scheme
Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai
Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression
Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain
On the Performance of Thompson Sampling on Logistic Bandits
Shi Dong, Tengyu Ma, Benjamin Van Roy
Lower Bounds for Locally Private Estimation via Communication Complexity
John Duchi, Ryan Rogers
Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models
Ivona Bezakova, Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda
Space lower bounds for linear prediction
Yuval Dagan, Gil Kur, Ohad Shamir
Testing Mixtures of Discrete Distributions
Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld
Universality of Computational Lower Bounds for Submatrix Detection
Matthew Brennan, Guy Bresler, Wasim Huleihel
How Hard is Robust Mean Estimation?
Samuel B. Hopkins, Jerry Li
Approximate Guarantees for Dictionary Learning
Aditya Bhaskara, Wai Ming Tai
Private Center Points and Learning of Halfspaces
Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer
Fast Mean Estimation with Sub-Gaussian Rates
Yeshwanth Cherapanamjeri, Nicolas Flammarion, Peter Bartlett
Better Algorithms for Stochastic Bandits with Adversarial Corruptions
Anupam Gupta, Tomer Koren, Kunal Talwar
Sample-Optimal Low-Rank Approximation of Distance Matrices
Piotr Indyk, Ali Vakilian, Tal Wagner, David Woodruff
High probability generalization bounds for uniformly stable algorithms with nearly optimal rate
Vitaly Feldman, Jan Vondrak
Computational Limitations in Robust Classification and Win-Win Results
Akshay Degwekar, Vinod Vaikuntanathan
The implicit bias of gradient descent on nonseparable data
Ziwei Ji, Matus Telgarsky
Bandit Principal Component Analysis
Wojciech Kotlowski, Gergely Neu
A near-optimal algorithm for approximating the John Ellipsoid
Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang
Pure Entropic Regularization for MTS
Christian Coester, James R. Lee
Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT
Andreas Anastasiou, Krishnakumar Balasubramanian, Murat Erdogdu
Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly
Yingjie Fei, Yudong Chen
Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes
Peter Auer, Pratik Gajane, Ronald Ortner
VC Classes are Adversarially Robustly Learnable, but Only Improperly
Omar Montasser, Steve Hanneke, Nathan Srebro
The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Dylan Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
Towards Testing Monotonicity of Distributions Over General Posets
Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee
On the Computational Power of Online Gradient Descent
Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang
How do infinite width bounded norm networks look in function space?
Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro
Vortices Instead of Equilibria in Min-Max Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games
Yun Kuen Cheung, Georgios Piliouras
Active Regression via Linear-Sample Sparsification
Xue Chen, Eric Price
Faster Algorithms for High-Dimensional Robust Covariance Estimation
Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff
Parameter-free Online Convex Optimization with Sub-Exponential Noise
Kwang-Sung Jun, Francesco Orabona
Computationally and Statistically Efficient Truncated Regression
Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Emmanouil Zampetakis
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning
Zakaria Mhammedi, Wouter M. Koolen, Tim van Erven
The All-or-Nothing Phenomenon in Sparse Linear Regression
Galen Reeves, Jiaming Xu, Ilias Zadik
A Rank-1 Sketch for Matrix Multiplicative Weights
Yair Carmon, John C. Duchi, Aaron Sidford, Kevin Tian
Optimal Learning for Mallows Block Model
Robert Busa-Fekete, Dimitris Fotakis, Balazs Szorenyi, Manolis Zampetakis
Generalization and learning under Dobrushin's condition
Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti

CALL FOR PAPERS
The 32nd Annual Conference on Learning Theory (COLT 2019) will take place in Phoenix, Arizona, June 25-28, 2019, as part of the ACM Federated Computing Research Conference, which also includes EC and STOC. We invite submissions of papers addressing theoretical aspects of machine learning and related topics. We strongly support a broad definition of learning theory, including, but not limited to:
 Design and analysis of learning algorithms
 Statistical and computational complexity of learning
 Optimization methods for learning
 Unsupervised and semi-supervised learning
 Interactive learning, planning and control, and reinforcement learning
 Online learning and decision-making under uncertainty
 Interactions of learning theory with other mathematical fields
 Artificial neural networks, including deep learning
 High-dimensional and nonparametric statistics
 Learning with algebraic or combinatorial structure
 Bayesian methods in learning
 Game theory and learning
 Learning with system constraints (e.g., privacy, computational, memory, communication)
 Learning from complex data: e.g., networks, time series
 Learning in other settings: e.g., computational social science, economics
Submissions by authors who are new to COLT are encouraged. While the primary focus of the conference is theoretical, authors may support their analysis by including relevant experimental results.
All accepted papers will be presented in a single track at the conference. At least one author of each paper should be present at the conference to present the work. Accepted papers will be published electronically in the Proceedings of Machine Learning Research (PMLR). The authors of accepted papers will have the option of opting out of the proceedings in favor of a 1-page extended abstract. The full paper reviewed for COLT will then be placed on the arXiv repository.
Paper Awards:
COLT will award both best paper and best student paper awards. To be eligible for the best student paper award, the primary contributor(s) must be full-time students at the time of submission. For eligible papers, authors must indicate at submission time if they wish their paper to be considered for a student paper award. The program committee may decline to make these awards, or may split them among several papers.
Dual Submissions:
Submissions that are substantially similar to papers that have been previously published, accepted for publication, or submitted in parallel to other peer-reviewed conferences with proceedings may not be submitted to COLT, with the sole exception described below. The same policy applies to journals, unless the submission is a short version of a paper submitted to a journal, and not yet published. Authors must declare such dual submissions through the submission server.
In coordination with the STOC 2019 PC chair, we allow papers submitted to STOC to be simultaneously submitted to COLT provided that the authors (1) explicitly state that the submission is under consideration for STOC, (2) immediately withdraw the COLT submission if the STOC submission is accepted, and (3) agree to allow the (anonymized) STOC reviews to be provided to the COLT reviewers once they are available. The authors will be allowed to submit a rebuttal to the STOC reviews a few days after the STOC reviews are available (via the submission server).
Formatting:
Submissions are limited to 12 PMLR-formatted pages, plus unlimited additional pages for references and appendices. All details, proofs and derivations required to substantiate the results must be included in the submission, possibly in the appendices. However, the contribution, novelty and significance of submissions will be judged primarily based on the main text (without appendices), and so enough details, including proof details, must be provided in the main text to convince the reviewers of the submissions’ merits. Formatting and submission instructions can be found on the conference website: http://learningtheory.org/colt2019/.
Rebuttal Phase:
As in previous years, there will be a rebuttal phase during the review process. Initial reviews will be sent to authors before final decisions have been made. Authors will have the opportunity to provide a short response to the PC’s initial evaluation.
Important Dates:
All dates are in 2019.
 Paper submission deadline: February 1, 11:00 PM EST
 Author feedback: March 22-27
 Author notification: April 17
 Conference: June 25-28 (welcome reception on June 24)