Accepted Papers
- A Depth Hierarchy for Computing the Maximum in ReLU Networks via Extremal Graph Theory
Itay Safran
- Risk Comparisons in Linear Regression: Implicit Regularization Dominates Explicit Regularization
Jingfeng Wu, Peter Bartlett, Jason Lee, Sham Kakade, Bin Yu
- Deep Q-Learning on Hölder Spaces
Qian Qi
- The Hidden Cost of Approximation in Online Mirror Descent
Ofir Schlisselberg, Uri Sherman, Tomer Koren, Yishay Mansour
- Recursively Enumerably Representable Classes and Computable Versions of the Fundamental Theorem of Statistical Learning
David Kattermann, Lothar Sebastian Krapp
- Almost sure null bankruptcy of test-by-betting strategies
Hongjian Wang, Shubhada Agrawal, Aaditya Ramdas
- A Tight Lower Bound for Non-stochastic Multi-armed Bandits with Expert Advice
Zachary Chase, Shinji Ito, Idan Mehalel
- Universality of high-dimensional scaling limits of stochastic gradient descent
Aukosh Jagannath, Reza Gheissari
- Stochastic Safe Action Model Learning
Zihao Deng, Brendan Juba
- Variational Tail Bounds for Norms of Random Vectors and Matrices
Sohail Bahmani
- Faster Newton Methods for Convex and Nonconvex Optimization in Gradient Complexity
Lesi Chen, Chengchang Liu, Luo Luo, Jingzhao Zhang
- The matrix-vector complexity of $Ax=b$
Raphael Meyer, Ethan Epperly, Michał Dereziński
- A Distribution Testing Approach to Clustering Distributions
Gunjan Kumar, Yash Pote, Jonathan Scarlett
- Adaptive Learning Rates with Surrogate Probability for Follow-the-Perturbed-Leader
Jongyeong Lee, Junya Honda, Shinji Ito, Chansoo Kim
- Recovery of Planted Subgraphs
Wasim Huleihel
- Separating Oblivious and Adaptive Models of Variable Selection
Ziyun Chen, Jerry Li, Kevin Tian, Yusong Zhu
- Online Convex Optimization with Sublinear Noisy Probes
Simone Di Gregorio, Anupam Gupta, Stefano Leonardi, Matteo Russo
- Actively learning halfspaces without synthetic data
Hadley Black, Barna Saha, Arya Mazumdar, Kasper Larsen, Geelon So
- Dimension Reduction via Sum-of-Squares and Improved Clustering Algorithms for Non-Spherical Mixtures
Prashanti Anderson, Mitali Bafna, Rares-Darius Buhai, Pravesh K. Kothari, David Steurer
- Wasserstein Policy Learning for Distributional Outcomes
Yiyan Huang, Cheuk Hang Leung, Qi Wu, Zhiheng Zhang
- Second-Order Bounds for [0,1]-Valued Regression via Betting Loss
Yinan Li, Sungjoon Yoon, Ethan Huang, Kwang-Sung Jun
- Testing for a Hidden Geometry in Random Graphs
Amit Silber, Mor Oren, Wasim Huleihel
- Ambiguous Online Learning
Vanessa Kosoy
- Partition Function Estimation under Bounded $f$-Divergence
Adam Block, Abhishek Shetty
- Minimax optimal differentially private synthetic data for smooth queries
Rundong Ding, Yiyun He, Yizhe Zhu
- A Simple, Optimal and Efficient Algorithm for Online Exp-Concave Optimization
Yi-Han Wang, Peng Zhao, Zhi-Hua Zhou
- A Unified Lower Bound on the Noisy Query Complexity of Boolean Functions
Yuzhou Gu, Xin Li, Yinzhan Xu
- Phase Transition for Stochastic Block Model with more than $\sqrt{n}$ Communities
Alexandra Carpentier, Christophe Giraud, Nicolas Verzelen
- Learning Conditional Averages
Marco Bressan, Nataly Brukhim, Nicolò Cesa-Bianchi, Emmanuel Esposito, Yishay Mansour, Shay Moran, Maximilian Thiessen
- Learning Periodic Strategies in Blocking Bandits is as Hard as Bandits with Switching Costs
Nicolò Cesa-Bianchi, Junya Honda, Yuko Kuroki, Atsushi Miyauchi, Lukas Zierahn
- Strongly Polynomial Time Complexity of Policy Iteration for $L_\infty$ Robust MDPs
Ali Asadi, Krishnendu Chatterjee, Ehsan Kafshdar Goharshady, Mehrdad Karrabi, Alipasha Montaseri, Carlo Pagano
- Instance-optimal high-precision shadow tomography with few-copy measurements: A metrological approach
Senrui Chen, Weiyuan Gong, Sisi Zhou
- Near-Optimal Regret for Distributed Adversarial Bandits: A Black-Box Approach
Hao Qiu, Mengxiao Zhang, Nicolò Cesa-Bianchi
- Uniform Laws of Large Numbers in Product Spaces
Ron Holzman, Shay Moran, Alexander Shlimovich
- Tight Bounds for Logistic Regression with Large Stepsize Gradient Descent in Low Dimension
Michael Crawshaw, Mingrui Liu
- A Perfectly Truthful Calibration Measure
Jason Hartline, Lunjia Hu, Yifan Wu
- Optimal Hardness of Online Algorithms for Large Common Induced Subgraphs
David Gamarnik, Miklos Racz, Gabe Schoenbach
- Trajectory Data Suffices for Statistically Efficient Policy Evaluation in Fixed-Horizon Offline RL with Linear q-pi Realizability and Concentrability
Volodymyr Tkachuk, Csaba Szepesvári, Xiaoqi Tan
- Computing Lewis weights to high precision using local relative smoothness
Sander Gribling, Aaron Sidford, Chenyi Zhang
- Tight Long-Term Tail Decay of (Clipped) SGD in Non-Convex Optimization
Aleksandar Armacki, Dragana Bajović, Dušan Jakovetić, Soummya Kar, Ali H. Sayed
- Nearly Linear-Time User-Level DP-SCO with Optimal Rates
Badih Ghazi, Ravi Kumar, Daogao Liu, Pasin Manurangsi
- Adaptive Weighted Averaging
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
- An Exponential Lower Bound for Spectral Density Estimation on Unweighted Graphs
Pan Peng, Yuyang Wang, Qiping Yang, Yichun Yang
- Overlap Analysis of the Shortest Path Problem: Local Search, Landscapes, and Franz–Parisi Potential
Joonhyung Shin, Frederic Koehler
- Polynomial-time sampling despite disorder chaos
Eric Ma, Tselil Schramm
- Quiet Planting for k-SAT, Multiple Solutions of Arbitrary Geometry
Ali Ahmadi, Kiarash Banihashem, Iman Gholami, Mohammad Taghi Hajiaghayi, Jan Olkowski
- Optimal Neural Network Approximation of Smooth Compositional Functions on Sets with Low Intrinsic Dimension
Thomas Nagler, Sophie Langer
- Sample-Efficient Omniprediction for Proper Losses
Isaac Gibbs, Ryan Tibshirani
- Testing Noise Assumptions of Learning Algorithms
Surbhi Goel, Adam Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
- Fixed-Parameter Tractability of Private Synthetic Data Generation
Badih Ghazi, Cristobal Guzman, Pritish Kamath, Alexander Knop, Ravi Kumar, Pasin Manurangsi
- Truly Adapting to Adversarial Constraints in Constrained MABs
Francesco Emanuele Stradi, Kalana Kalupahana, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti
- How fast can you find a good hypothesis?
Anders Aamand, Maryam Aliakbarpour, Justin Chen, Sandeep Silwal
- Statistical Learning from Attribution Sets
Lorne Applebaum, Robert Busa-Fekete, August Chen, Claudio Gentile, Tomer Koren, Aryan Mokhtari
- Spectral Recovery of a Planted Triangle-Dense Subgraph
Sam van der Poel, Cheng Mao, Benjamin McKenna
- Theoretical Compression Bounds for Wide Multilayer Perceptrons
Houssam Elcheairi, David Gamarnik, Rahul Mazumder
- Model Agreement via Anchoring
Eric Eaton, Surbhi Goel, Marcel Hussing, Michael Kearns, Aaron Roth, Sikata Sengupta, Jessica Sorrell
- Provable Learning of Random Hierarchy Models and Hierarchical Shallow-to-Deep Chaining
Yunwei Ren, Yatin Dandi, Florent Krzakala, Jason Lee
- Fast, Parallel, Query-Efficient Binary Classification
Ishani Karmarkar, Liam O'Carroll, Aaron Sidford
- Learning from Equivalence Queries, Revisited
Mark Braverman, Roi Livni, Yishay Mansour, Shay Moran, Kobbi Nissim
- Unified Framework of Distributional Regret in Multi-Armed Bandits and Reinforcement Learning
Harin Lee, Min-hwan Oh
- Randomization for Faster Exact Optimization of Discounted Markov Decision Processes
Andrei Graur, Aaron Sidford, Ta-Wei Tu
- Is Multi-Distribution Learning as Easy as PAC Learning: Sharp Rates with Bounded Label Noise
Rafael Hanashiro, Abhishek Shetty, Patrick Jaillet
- Learning Ising Models from Evolutions
Jason Gaitonde, Ankur Moitra, Elchanan Mossel
- On the Statistical Query Complexity of Learning Semiautomata: a Random Walk Approach
George Giapitzakis, Kimon Fountoulakis, Eshaan Nichani, Jason Lee
- Omniprediction with Long-Term Constraints
Yahav Bechavod, Aaron Roth, Jiuyao Lu
- Accelerated Convex Optimization via Hamiltonian Dynamics with Deterministic Integration Time
Qiang Fu, Siddharth Mitra, Vishwak Srinivasan, Xiuyuan Wang, Andre Wibisono, Ashia Wilson
- Boosting with List-Decodable Codes
Addison Prairie, Li-Yang Tan
- On the Gradient Complexity of Private Optimization with Private Oracles
Michael Menart, Aleksandar Nikolov
- Swap Regret Minimization Through Response-Based Approachability
Ioannis Anagnostides, Gabriele Farina, Maxwell Fishelson, Haipeng Luo, Jon Schneider
- Last-Iterate Convergence of Randomized Kaczmarz and SGD with Greedy Step Size
Michał Dereziński, Xiaoyu Dong
- Query Efficient Structured Matrix Learning
Noah Amsel, Pratyush Avi, Tyler Chen, Feyza Duman Keles, Chinmay Hegde, Christopher Musco, Cameron Musco, David Persson
- Diffusion-Network Alignment: An Efficient Algorithm and Explicit Probability Bounds
Ziao Wang, Lei Ying
- Phase Transition in Convex Relaxations for Graph Alignment
Laurent Massoulié, Sushil Mahavir Varma, Louis Vassaux, Irene Waldspurger
- The Geometry of Efficient Nonconvex Sampling
Santosh Vempala, Andre Wibisono
- Near-optimal Swap Regret Minimization for Convex Losses
Lunjia Hu, Jon Schneider, Yifan Wu
- Learning depth-3 circuits via quantum agnostic boosting
Srinivasan Arunachalam, Arkopal Dutt, Alexandru Gheorghiu, Michael de Oliveira
- Revisiting the (Sub)Optimality of Best-of-N for Inference-Time Alignment
Ved Sriraman, Adam Block
- Algorithmic Thinking Theory
MohammadHossein Bateni, Vincent Cohen-Addad, Yuzhou Gu, Silvio Lattanzi, Simon Meierhans, Christopher Mohri
- Finite Sample Bounds for Learning with Score Matching
Devin Smedira, Abhijith Jayakumar, Sidhant Misra, Marc Vuffray, Andrey Lokhov
- Can SGD Select Good Fishermen? Local Convergence under Self-Selection Biases and Beyond
Alkis Kalavasis, Anay Mehrotra, Felix Zhou
- Language Generation with Infinite Contamination
Anay Mehrotra, Grigoris Velegkas, Xifan Yu, Felix Zhou
- Differentially Private Language Generation in the Limit
Anay Mehrotra, Grigoris Velegkas, Xifan Yu, Felix Zhou
- On the Power of Adaptivity for $\varepsilon$-Best Arm Identification in Linear Bandits
Arnab Maiti, Yunbei Xu, Kevin Jamieson
- DDPM Score Matching and Distribution Learning
Sinho Chewi, Alkis Kalavasis, Anay Mehrotra, Omar Montasser
- Gradient-Variation Regret Bounds for Unconstrained Online Learning
Yuheng Zhao, Andrew Jacobsen, Nicolò Cesa-Bianchi, Peng Zhao
- On Efficient Robust Regression with Subquadratic Samples
Deeksha Adil, Jaroslaw Blasiok, Hongjie Chen, Deepak Narayanan Sridharan
- Optimal Sample Complexity Lower Bounds on Conditional Independence Testing
Jan Seyfried, Neelkanth Mishra, Sayantan Sen, Marco Tomamichel
- On the Stability of Nonlinear Dynamics in GD and SGD: Beyond Quadratic Potentials
Rotem Mulayoff, Sebastian Stich
- Fast algorithms for learning a Gaussian under halfspace truncation with optimal sample complexity
Haitong Liu, Deepak Narayanan Sridharan, David Steurer, Manuel Wiedmer
- Rate-optimal community detection near the KS threshold via node-robust algorithms
Jingqiu Ding, Yiding Hua, Kasper Lindberg, David Steurer, Aleksandr Storozhenko
- On the implicit regularization of Langevin dynamics with projected noise
Austin Stromme, Adrien Vacher, Govind Menon
- Robust Algorithms for Finding Cliques in Random Intersection Graphs via Sum-of-Squares
Andreas Göbel, Janosch Ruff, Leon Schiller
- Cloning is as Hard as Learning for Stabilizer States
Nikhil Bansal, Matthias C. Caro, Gaurav Mahajan
- Recovery thresholds for hidden weighted sparse graphs
Zhe Hou, Jingcheng Liu
- Limitations of SGD for Multi-Index Models Beyond Statistical Queries
Daniel Barzilai, Ohad Shamir
- Avoiding $\exp(k^*)$ Scaling for Thompson Sampling in Combinatorial Semi-Bandits: From Multiple Seeds to a Single Seed
Tianyuan Jin, Heyang Zhao, Vincent Tan, Quanquan Gu
- Rigorous Asymptotics for First-Order Algorithms Through the Dynamical Cavity Method
Francisco Pernice, David Gamarnik, Yatin Dandi, Lenka Zdeborova
- Adversarial Learning in Games with Bandit Feedback: Logarithmic Pure-Strategy Maximin Regret
Shinji Ito, Haipeng Luo, Arnab Maiti, Taira Tsuchiya, Yue Wu
- Reconstructing Riemannian Metrics From Random Geometric Graphs
Han Huang, Elchanan Mossel, Pakawut Jiradilok
- The Median is Easier than it Looks: Approximation with a Constant-Depth, Linear-Width ReLU Network
Abhigyan Dutta, Itay Safran, Paul Valiant
- Functional Stochastic Localization
Anming Gu, Bobby Shi, Kevin Tian
- Information-Computation Gaps in Quantum Learning via Low-Degree Likelihood
Sitan Chen, Weiyuan Gong, Jonas Haferkamp, Yihui Quek
- Tight list replicability bounds via a novel sphere covering theorem
Ari Blondal, Hamed Hatami, Pooya Hatami, Chavdar Lalov, Sivan Tretiak
- High-Dimensional Gaussian Mean Estimation under Realizable Contamination
Ilias Diakonikolas, Daniel Kane, Thanasis Pittas
- Online Realizable Regression and Applications for ReLU Networks
Ilan Doron-Arad, Idan Mehalel, Elchanan Mossel
- Universal priors: solving empirical Bayes via Bayesian inference and pretraining
Nick Cannella, Anzo Teh, Yanjun Han, Yury Polyanskiy
- Characterizing Online and Private Learnability under Distributional Constraints via Generalized Smoothness
Moise Blanchard, Alexander Rakhlin, Abhishek Shetty
- Linear Regression under Missing or Corrupted Coordinates
Ilias Diakonikolas, Jelena Diakonikolas, Daniel Kane, Jasper Lee, Thanasis Pittas
- Active Learning on Adversarially Corrupted Graphs
Marco Bressan, Nicolò Cesa-Bianchi, Tommaso d'Orsi, Emmanuel Esposito, Silvio Lattanzi
- Optimal Reconstruction from Linear Queries
Yuval Filmus, Shay Moran, Elizaveta Nesterova
- Information-Theoretic Thresholds for Bipartite Latent-Space Graphs Under Noisy Observations
Andreas Göbel, Marcus Pappik, Leon Schiller
- On the Importance of Randomization in Discriminative Feature Feedback
Valentio Iverson, Tosca Lechner, Sivan Sabato
- Self-Concordant Perturbations for Linear Bandits
Lucas Lévy, Jean-Lou Valeau, Arya Akhavan, Patrick Rebeschini
- Adaptive Matrix Online Learning through Smoothing with Guarantees for Nonsmooth Nonconvex Optimization
Ruichen Jiang, Zakaria Mhammedi, Mehryar Mohri, Aryan Mokhtari
- Convergence Rates for Distribution Matching with Sliced Optimal Transport
Gauthier Thurin, Claire Boyer, Kimia Nadjahi
- Distribution-Free Sequential Prediction with Abstentions
Jialin Yu, Moise Blanchard
- Efficient Swap Multicalibration of Elicitable Properties
Lunjia Hu, Haipeng Luo, Spandan Senapati, Vatsal Sharan
- Online Learning for Uninformed Markov Games: Empirical Nash-Value Regret and Non-Stationarity Adaptation
Junyan Liu, Haipeng Luo, Zihan Zhang, Lillian Ratliff
- Space-Efficient Language Generation in the Limit
Nicolas Flammarion, Chirag Pabbaraju, Hristo Papazov, Miltiadis Stouras, Ola Svensson
- Efficient Learning and Symmetry Discovery under Exact Invariances
Ashkan Soleymani, Behrooz Tahmasebi, Patrick Jaillet, Stefanie Jegelka
- Graph neural networks extrapolate out-of-distribution for shortest paths
Robert Nerem, Samantha Chen, Sanjoy Dasgupta, Yusu Wang
- Online Learning with Simulators: No Regret in a Computationally Bounded World
Sasha Voitovych, Alexander Rakhlin, Abhishek Shetty, Noah Golowich
- Simultaneous Blackwell Approachability and Applications to Multiclass Omniprediction
Lunjia Hu, Kevin Tian, Chutong Yang
- Sandwiching Polynomials for Geometric Concepts with Low Intrinsic Dimension
Adam Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
- Price of universality in vector quantization is at most 0.11 bit
Alina Harbuzova, Or Ordentlich, Yury Polyanskiy
- Efficient Sampling with Discrete Diffusion Models: Sharp and Adaptive Guarantees
Daniil Dmitriev, Zhihan Huang, Yuting Wei
- Is Memorization Helpful or Harmful? Prior Information Sets the Threshold
Chen Cheng, Rina Barber
- An Empirical Bayes Perspective on Heteroskedastic Mean Estimation
Yanjun Han, Abhishek Shetty, Jacob Shkrob
- A Quasi-Polynomial Time Mean Estimator Under Mean-Shift Contamination with Unknown Covariance
Ilias Diakonikolas, Jingyi Gao, Giannis Iakovidis, Daniel Kane, Sihan Liu, Thanasis Pittas
- Clipping the Price of Adaptivity at the Tail
Itai Kreisler, Oliver Hinder, Yair Carmon
- Ripple Mechanisms for Discrete and Private Statistics
Matthew Joseph, Alex Kulesza, Yuyan Wang, Alexander Yu
- Relatively Smart: A New Approach for Instance-Optimal Learning
Alireza Pour, Shaddin Dughmi
- Lyapunov-Based Sample Complexity Analysis for Weakly-Coupled MDPs
Tianhao Wu, Matthew Zurek, Weina Wang, Qiaomin Xie
- Estimating Ising Models in Total Variation Distance
Constantinos Daskalakis, Vardis Kandiros, Rui Yao
- Optimal Inference Schedules for Masked Diffusion Models
Sitan Chen, Kevin Cong, Jerry Li
- On The Complexity of Best-Arm Identification in Non-Stationary Linear Bandits
Leo Maynard-Zhang, Zhihan Xiong, Kevin Jamieson, Maryam Fazel
- Optimal Variance-Dependent Regret Bounds for Infinite-Horizon MDPs
Guy Zamir, Matthew Zurek, Yudong Chen
- How Does the ReLU Activation Affect the Implicit Bias of Gradient Descent on High-Dimensional Neural Network Regression?
Kuo-Wei Lai, Guanghui Wang, Molei Tao, Vidya Muthukumar
- Regret Minimization with Adaptive Opponents in Repeated Games
Mingyang Liu, Asuman Ozdaglar, Tiancheng Yu, Kaiqing Zhang
- Learning to Reason with Curriculum I: Provable Benefits of Autocurriculum
Nived Rajaraman, Audrey Huang, Miro Dudik, Robert Schapire, Dylan J. Foster, Akshay Krishnamurthy
- On the Curse of Dimensionality in Private Sparse Covariance Estimation and PCA
Syamantak Kumar, Shourya Pandey, Purnamrita Sarkar, Kevin Tian
- Self-normalized martingales under smoothness assumption and uniform regret bounds for sequential linear regression
Fan Chen, Jian Qian, Alexander Rakhlin, Nikita Zhivotovskiy
- An Information-Theoretic Analysis for Active Learning
Abdellah Aznag, Adam Elmachtoub, Rachel Cummings
- Eigen-Spike Emergence, Quadratic Deterministic Equivalents, and the Classification of Nonlinearly-Separable Data
Collin Cranston, Zhichao Wang, Todd Kemp, Michael Mahoney
- Privately Estimating Black-Box Statistics
Gunter Steinke, Thomas Steinke
- How Many Features Can a Language Model Store Under the Linear Representation Hypothesis?
Nikhil Garg, Jon Kleinberg, Kenneth Peng
- Density estimation for Hellinger via minimum-distance estimators: mixtures of Gaussians, log-concave, and more
Spencer Compton, Jerry Li
- High-accuracy log-concave sampling with stochastic gradients
Fan Chen, Sinho Chewi, Constantinos Daskalakis, Alexander Rakhlin
- Equivalence of Coarse and Fine-Grained Models for Learning with Distribution Shift
Adam Klivans, Shyamal Patel, Konstantinos Stavropoulos, Arsen Vasilyan
- Learning Decision-Sufficient Representations for Linear Optimization
Yuhan Ye, Saurabh Amin, Asuman Ozdaglar
- Compact Geometric Representations of Hierarchies
Prashant Gokhale, Piotr Indyk, Yuhao Liu, Sandeep Silwal, Tony Wang, Haike Xu
- On Randomized Algorithms in Online Strategic Classification
Chase Hutton, Adam Melrod, Han Shao
- Sharp analysis of linear ensemble sampling
David Janz, Arya Akhavan, Csaba Szepesvári
- Steering diffusion models with quadratic rewards: a fine-grained analysis
Ankur Moitra, Andrej Risteski, Dhruv Rohatgi
- Toward Simultaneously Optimal Regret in U-Calibration
Rafael Frongillo, Haipeng Luo, Nishant Mehta, Jon Schneider
- Why is score-based sampling so effective? A general and adaptive reduction to SLC sub-problems
Martin Wainwright
- Calibeating Made Simple
Yurong Chen, Zhiyi Huang, Michael Jordan, Haipeng Luo
- Efficient High-Dimensional Online Outcome Indistinguishable Generative Models
Gabriele Farina, Juan Perdomo
- Optimism Stabilizes Thompson Sampling for Adaptive Inference
Shunxing Yan, Han Zhong
- On the Asymptotics of Self-Supervised Pre-training: Two-Stage M-Estimation and Representation Symmetry
Mohammad Tinati, Stephen Tu
- Random Reshuffling Beats Stochastic Gradient Descent
Zijian Liu
- Worst-case Error Bounds for Online Learning of Smooth Functions
Weian Xie
- Minimax Limits of $k$-Fold Cross-Validation via Majority
Ido Nachum, Rudiger Urbanke, Thomas Weinberger
- Low-Degree Method Fails to Predict Robust Subspace Recovery
He Jia, Aravindan Vijayaraghavan
- The Sample Complexity of Multiclass and Sparse Contextual Bandits
Liad Erez, Fan Chen, Alexander Rakhlin, Tomer Koren, Alon Cohen, Yishay Mansour, Shay Moran
- Margin in Abstract Spaces
Yair Ashlagi, Roi Livni, Shay Moran, Tom Waknine
- Tight Sample Complexity of Transformers
Chenxiao Yang, Nathan Srebro, Zhiyuan Li
- Wedge Sampling: Efficient Tensor Completion with Nearly-Linear Sample Complexity
Hengrui Luo, Anna Ma, Ludovic Stephan, Yizhe Zhu
- A Characterization of List Language Identification in the Limit
Moses Charikar, Chirag Pabbaraju, Ambuj Tewari
- Blackwell Approachability Bridges Gradient Equilibrium and No-Regret Learning
Nika Haghtalab, Michael Jordan, Brian Lee, Ryan Tibshirani
- The monotonicity of the Franz-Parisi potential is equivalent to low-degree MMSE lower bounds
Konstantinos Tsirkas, Leda Wang, Ilias Zadik
- Spectral Valleys and Sharp Failures in Greedy Determinant Maximization
Rajiv Khanna
- Taming the Monster Every Context: Complexity Measure and Unified Framework for Offline-Oracle Efficient Contextual Bandits
Hao Qin, Chicheng Zhang
- A Single Stepsize Suffices for Unprojected Linear TD(0): Simultaneous Robust and Fast Rates via Polyak–Ruppert Averaging
Wei-Cheng Lee, Francesco Orabona
- Language Identification with Succinct Machine-Independent Traces
Moses Charikar, Jon Kleinberg, Chirag Pabbaraju
- Online Market Making and the Value of Observing the Order Book
Davide Maran, Marcello Restelli
- Optimal Learning-Rate Schedules under Functional Scaling Laws: Power Decay and Warmup-Stable-Decay
Binghui Li, Zilin Wang, Fengling Chen, Shiyang Zhao, Ruiheng Zheng, Lei Wu
- Almost Linear Convergence under Minimal Score Assumptions: Quantized Transition Diffusion
Xunpeng Huang, Yingyu Lin, Lijing Kuang, Hanze Dong, Difan Zou, Yian Ma, Tong Zhang
- Optimal Prediction-Augmented Algorithms for Testing Independence of Distributions
Maryam Aliakbarpour, Alireza Azizi, Ria Stevens
- Fast and Large-Scale Unbalanced Optimal Transport via its Semi-Dual and Adaptive Gradient Methods
Ferdinand Genans
- High Probability Convergence Guarantees of Stochastic Gradient Descent Ascent in Structured Nonconvex Min-Max Games
Junsoo Ha
- Stable Algorithm Lower Bounds for Estimation from MMSE Discontinuities
Xifan Yu, Ilias Zadik
- On-Average Stability of Multipass SGD and Effective Dimension
Simon Vary, Tyler Farghly, Ilja Kuzborskij, Patrick Rebeschini
- Leveraging Similarities in Multi-Armed Bandits
Khaled Eldowa, Thibaud Rahier, Cablant Augustin, Panayotis Mertikopoulos, Pierre Gaillard
- Data Augmentation: A Fourier Analysis Perspective
Behrooz Tahmasebi, Melanie Weber, Stefanie Jegelka
- Tight Sample Complexity Bounds for Entropic Best Policy Identification
Amer Essakine, Claire Vernade
- Convergence of Continual Learning in Homogeneous Deep Networks
Matan Schliserman, Gon Buzaglo, Itay Evron, Daniel Soudry
- Learning from Biased and Costly Data Sources: Minimax-optimal Data Collection under a Budget
Michael Harding, Kirthevasan Kandasamy, Vikas Singh
- When Both Layers Learn: Training Dynamics of Representing Linear Models via ReLU Networks
Berk Tinaz, Changzhi Xie, Mahdi Soltanolkotabi
- Private Linear Regression via a Down-Sensitivity to Privacy Reduction
Ittai Rubinstein, Chris Ge, Samuel Hopkins
- Continuous time policy evaluation is easier with noisy dynamics
Samuel Robertson, Thomas Newton, Csaba Szepesvári