Accepted papers

  • Vianney Perchet, Philippe Rigollet, Sylvain Chassang and Erik Snowberg. Batched Bandit Problems
  • Jean Lafond. Low Rank Matrix Completion with Exponential Family Noise
  • Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng and Shang-Hua Teng. Efficient Sampling for Gaussian Graphical Models via Spectral Sparsification
  • Amit Daniely. A PTAS for Agnostically Learning Halfspaces
  • Jan Leike and Marcus Hutter. Bad Universal Priors and Notions of Optimality
  • Bruce Hajek, Yihong Wu and Jiaming Xu. Computational Lower Bounds for Community Detection on Random Graphs
  • Miroslav Dudik, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins and Masrour Zoghi. Contextual Dueling Bandits
  • Issei Matsumoto, Kohei Hatano and Eiji Takimoto. Online Density Estimation of Bradley-Terry Models
  • Ohad Shamir. On the Complexity of Bandit Linear Optimization
  • Nicolo Cesa-Bianchi, Yishay Mansour and Ohad Shamir. On the Complexity of Learning with Kernels
  • Hubie Chen and Matt Valeriote. Learnability of Solutions to Conjunctive Queries: The Full Dichotomy
  • Hans Simon. An Almost Optimal PAC Algorithm
  • Dean Foster, Howard Karloff and Justin Thaler. Variable Selection is Hard
  • Yash Deshpande and Andrea Montanari. Improved Sum-of-Squares Lower Bounds for Hidden Clique and Hidden Submatrix Problems
  • Peter Chin, Anup Rao and Van Vu. Stochastic Block Model and Community Detection in Sparse Graphs: A Spectral Algorithm with Optimal Rate of Recovery
  • Uriel Feige, Yishay Mansour and Robert Schapire. Learning and inference in the presence of corrupted inputs
  • Junpei Komiyama, Junya Honda, Hisashi Kashima and Hiroshi Nakagawa. Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem
  • Nicolas Flammarion and Francis Bach. From Averaging to Acceleration, There is Only a Step-size
  • Roy Frostig, Rong Ge, Sham Kakade and Aaron Sidford. Competing with the Empirical Risk Minimizer in a Single Pass
  • Prateek Jain and Praneeth Netrapalli. Fast Exact Matrix Completion with Finite Samples
  • Huizhen Yu. On Convergence of Emphatic Temporal-Difference Learning
  • Thomas Steinke and Jonathan Ullman. Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery
  • Patrick Rebeschini and Amin Karbasi. Fast Mixing for Discrete Point Processes
  • Sébastien Bubeck and Ronen Eldan. The entropic barrier: a simple and optimal universal self-concordant barrier
  • Zohar Karnin and Edo Liberty. Online PCA with Spectral Bounds
  • Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel and Tomer Koren. Online Learning with Feedback Graphs: Beyond Bandits
  • Aditya Gopalan and Shie Mannor. Thompson Sampling for Learning Parameterized Markov Decision Processes
  • Varun Kanade and Elchanan Mossel. MCMC Learning
  • Richard Peng, He Sun and Luca Zanetti. Partitioning Well-Clustered Graphs: Spectral Clustering Works!
  • Wouter M. Koolen and Tim van Erven. Second-order Quantile Methods for Experts and Combinatorial Games
  • Sébastien Bubeck, Ofer Dekel, Tomer Koren and Yuval Peres. Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension
  • Pierre Gaillard and Sébastien Gerchinovitz. A Chaining Algorithm for Online Nonparametric Regression
  • Parameswaran Kamalaruban, Robert Williamson and Xinhua Zhang. Exp-Concavity of Proper Composite Losses
  • Rafael Frongillo and Ian Kash. Vector-Valued Property Elicitation
  • Haipeng Luo and Robert Schapire. Achieving All with No Parameters: Adaptive NormalHedge
  • Maria-Florina Balcan, Avrim Blum and Santosh Vempala. Efficient Representations for Lifelong Learning and Autoencoding
  • Christos Papadimitriou and Santosh Vempala. Cortical Learning via Prediction
  • Zaid Harchaoui, Anatoli Juditsky, Arkadi Nemirovski and Dmitry Ostrovsky. Adaptive recovery of signals by convex optimization
  • Gergely Neu. First-order regret bounds for combinatorial semi-bandits
  • Mehrdad Mahdavi, Lijun Zhang and Rong Jin. Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization
  • Nicolas Goix, Anne Sabourin and Stéphan Clémençon. Learning the dependence structure of rare events: a non-asymptotic study
  • Yuxin Chen, Hamed Hassani, Amin Karbasi and Andreas Krause. Sequential Information Maximization: When is Greedy Near-optimal?
  • Rong Ge, Furong Huang, Chi Jin and Yang Yuan. Escaping From Saddle Points — Online Stochastic Gradient for Tensor Decomposition
  • Rasmus Kyng, Anup Rao, Sushant Sachdeva and Daniel A. Spielman. Algorithms for Lipschitz Learning on Graphs
  • Majid Janzamin, Animashree Anandkumar and Rong Ge. Learning Overcomplete Latent Variable Models through Tensor Methods
  • Behnam Neyshabur, Ryota Tomioka and Nathan Srebro. Norm-Based Capacity Control in Neural Networks
  • Sudeep Kamath, Alon Orlitsky, Venkatadheeraj Pichapati and Ananda Theertha Suresh. On Learning Distributions from their Samples
  • Pranjal Awasthi, Moses Charikar, Kevin Lai and Andrej Risteski. Label optimal regret bounds for online local learning
  • Peter Bartlett, Wouter Koolen, Alan Malek, Eiji Takimoto and Manfred Warmuth. Minimax Fixed-Design Linear Regression
  • Mark Reid, Rafael Frongillo, Robert Williamson and Nishant Mehta. Generalized Mixability via Entropic Duality
  • Rachel Cummings, Stratis Ioannidis and Katrina Ligett. Truthful Linear Regression
  • Sanjeev Arora, Rong Ge, Tengyu Ma and Ankur Moitra. Simple, Efficient, and Neural Algorithms for Sparse Coding
  • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri and Manfred Warmuth. On-Line Learning Algorithms for Path Experts with Non-Additive Losses
  • Christos Thrampoulidis, Samet Oymak and Babak Hassibi. Regularized Linear Regression: A Precise Analysis of the Estimation Error
  • Konstantin Makarychev, Yury Makarychev and Aravindan Vijayaraghavan. Correlation Clustering with Noisy Partial Information
  • Santosh Vempala and Ying Xiao. Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
  • Arpit Agarwal and Shivani Agarwal. On Consistent Surrogate Risk Minimization and Property Elicitation
  • Alexandre Belloni, Tengyuan Liang, Hari Narayanan and Alexander Rakhlin. Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions
  • Jacob Steinhardt and John Duchi. Minimax rates for memory-bounded sparse linear regression
  • Matus Telgarsky, Miroslav Dudík and Robert Schapire. Convex Risk Minimization and Conditional Probability Estimation
  • Tengyuan Liang, Alexander Rakhlin and Karthik Sridharan. Learning with Square Loss: Localization through Offset Rademacher Complexity
  • Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky, Venkatadheeraj Pichapati and Ananda Theertha Suresh. Faster Algorithms for Testing under Conditional Sampling
  • Justin Eldridge, Mikhail Belkin and Yusu Wang. Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering
  • Gautam Dasarathy, Robert Nowak and Xiaojin Zhu. $S^2$: An Efficient Graph Based Active Learning Algorithm with Application to Nonparametric Classification
  • Pranjal Awasthi, Maria Florina Balcan, Nika Haghtalab and Ruth Urner. Efficient Learning of Linear Separators under Bounded Noise
  • Yang Cai, Constantinos Daskalakis and Christos Papadimitriou. Optimum Statistical Estimation with Strategic Data Sources
  • Sam Hopkins, Jonathan Shi and David Steurer. Tensor principal component analysis
  • Samory Kpotufe, Ruth Urner and Shai Ben-David. Hierarchical label queries with data-dependent partitions
  • Alexander Rakhlin and Karthik Sridharan. Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints
  • Akshay Balsubramani and Yoav Freund. Optimally Combining Classifiers Using Unlabeled Data