
2016: Michael Jordan


Michael Jordan

Professor of Statistics and Computer Science
University of California, Berkeley


Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at the University of California, Berkeley.

His research interests bridge the computational, statistical, cognitive and biological sciences, and have focused in recent years on Bayesian nonparametric analysis, probabilistic graphical models, spectral methods, kernel machines and applications to problems in distributed computing systems, natural language processing, signal processing and statistical genetics.

Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering and a member of the American Academy of Arts and Sciences. He is a Fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM.

Abstracts

General Lecture
(3:30 p.m., March 29, 2017)

On Computational Thinking, Inferential Thinking and Data Science

The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the inferential and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in “Big Data” is apparent from their sharply divergent nature at an elementary level—in computer science, the growth of the number of data points is a source of “complexity” that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of “simplicity” in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as “runtime” in core statistical theory and the lack of a role for statistical concepts such as “risk” in core computational theory. I present several research vignettes aimed at bridging computation and statistics, including the problem of inference under privacy and communication constraints, and methods for trading off the speed and accuracy of inference.
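To make the elementary divergence concrete, the following minimal Python sketch (an illustration only, not material from the lecture) contrasts the two perspectives on growing data: as the sample size n increases, the standard error of a simple mean estimate shrinks roughly as 1/sqrt(n), while the runtime of computing the estimate grows with n. All numbers and parameters are chosen arbitrarily for illustration.

# Illustrative sketch: statistical error shrinks with n, computation grows with n.
import time
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.0

for n in [10_000, 100_000, 1_000_000, 10_000_000]:
    x = rng.normal(loc=true_mean, scale=1.0, size=n)
    start = time.perf_counter()
    estimate = x.mean()                      # computational cost grows with n
    runtime = time.perf_counter() - start
    std_error = x.std(ddof=1) / np.sqrt(n)   # inferential uncertainty shrinks with n
    print(f"n={n:>10,d}  error={abs(estimate - true_mean):.5f}  "
          f"std_error={std_error:.5f}  runtime={runtime * 1e3:.2f} ms")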

Technical Lecture
(2:30 p.m., March 30, 2017)

Communication-Avoiding Statistical Inference

Modern data analysis increasingly takes place on distributed computing platforms. In the distributed setting, procedures that minimize communication among processors can be orders of magnitude faster than naive procedures. This fact has revolutionized numerical linear algebra, but it has yet to have significant impact on statistics. I discuss communication-avoiding approaches to statistical inference, including a novel form of the bootstrap, a primal-dual approach to M-estimation, a surrogate likelihood framework and distributed forms of false discovery rate control.
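As a rough illustration of the communication-avoiding idea (and only an illustration: this is a toy divide-and-conquer average, not the bootstrap, M-estimation, surrogate-likelihood, or false-discovery-rate procedures the lecture covers), consider the simplest one-round scheme: each worker summarizes its local shard and ships a single estimate to a coordinator, so communicating k numbers replaces moving n raw data points. The worker count, data sizes, and distribution below are assumptions made for the sketch.

# Toy sketch of one-round, communication-avoiding estimation.
import numpy as np

rng = np.random.default_rng(1)
n, k = 1_000_000, 20              # n data points split across k workers (illustrative sizes)
true_mean = 2.0
data = rng.normal(loc=true_mean, scale=5.0, size=n)
shards = np.array_split(data, k)  # stands in for data already resident on k machines

local_estimates = [shard.mean() for shard in shards]   # computed in parallel, no communication
global_estimate = np.mean(local_estimates)             # one round: k numbers sent to a coordinator

print(f"centralized estimate : {data.mean():.5f} (requires moving {n:,d} points)")
print(f"one-round estimate   : {global_estimate:.5f} (requires moving {k} numbers)")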