Invited Speakers

Computational barriers in statistical learning

Within classical theory, the fundamental limits of statistical tasks are due to information-theoretic bottlenecks. This is the case both in estimation theory and in statistical learning. However, in a growing number of statistical problems, the fundamental limits predicted by information-theoretic arguments are not achieved by any known polynomial-time algorithm. Over the last few years, significant progress has been made in understanding this phenomenon through specific examples. I will provide an introduction to this exciting area, discuss our current understanding, and outline outstanding challenges.

Andrea Montanari received a Laurea degree in Physics in 1997 and a Ph.D. in Theoretical Physics in 2001, both from Scuola Normale Superiore in Pisa, Italy. He was a post-doctoral fellow at the Laboratoire de Physique Théorique de l’Ecole Normale Supérieure (LPTENS) in Paris, France, and at the Mathematical Sciences Research Institute in Berkeley, USA. In 2002 he became Chargé de Recherche (with the Centre National de la Recherche Scientifique, CNRS) at LPTENS. In September 2006 he joined Stanford University as a faculty member, and since 2015 he has been Full Professor in the Departments of Electrical Engineering and Statistics.

He was co-awarded the ACM SIGMETRICS best paper award in 2008. He received the CNRS bronze medal for theoretical physics in 2006, the National Science Foundation CAREER award in 2008, the Okawa Foundation Research Grant in 2013, and the Applied Probability Society Best Publication Award in 2015. He is an Information Theory Society distinguished lecturer for 2015-2016. In 2016 he received the James L. Massey Research & Teaching Award of the Information Theory Society for young scholars.


PAC-Learning and Reconstruction of Quantum States

Do we really need ~2^n classical bits in order to describe a quantum state of ~n qubits? In this talk I’ll survey a sequence of results indicating that, if we only care about their behavior on yes-or-no measurements, then quantum states actually contain less information than meets the eye. These include: my 2004 result that quantum advice can be simulated using classical advice plus postselection; my 2006 result on the PAC-learnability of quantum states; my 2010 result (joint with Andy Drucker) that quantum advice can be simulated by trusted classical advice combined with untrusted quantum advice; and finally, a new unpublished result showing that, given poly(n) copies of an n-qubit quantum state |psi>, one can measure those copies in such a way as to learn |psi>’s behavior with respect to exp(n) different two-outcome measurements. I’ll also mention a recent experimental demonstration of the quantum state PAC-learning theorem.

Scott Aaronson is the David J. Bruton Centennial Professor of Computer Science at the University of Texas at Austin. He received his bachelor’s degree from Cornell University and his PhD from UC Berkeley, and held postdoctoral fellowships at the Institute for Advanced Study and the University of Waterloo. Before coming to UT Austin, he spent nine years as a professor of Electrical Engineering and Computer Science at MIT. Aaronson’s research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He has received the National Science Foundation’s Alan T. Waterman Award, the United States PECASE Award, the Vannevar Bush Fellowship, and MIT’s Junior Bose Award for Excellence in Teaching.