John Lafferty
Department of Statistics and Data Science
Yale University
Professor John D. Lafferty is the John C. Malone Professor of Statistics and Data Science. He is a world-renowned expert on statistical machine learning, with a focus on computational and statistical aspects of nonparametric methods, high-dimensional data, graphical models, and statistical language modeling.
Lafferty earned his doctoral degree in mathematics from Princeton University, where he was a member of the Program in Applied and Computational Mathematics. He worked as a research staff member at the IBM Thomas J. Watson Research Center before joining the faculty of Carnegie Mellon University. He was the Louis Block Professor of Statistics and Computer Science at the University of Chicago before joining Yale in July 2017 as professor of statistics and data science, with a secondary appointment in computer science.
He has won four “Test of Time” awards from the International Conference on Machine Learning, and in 2015 he delivered an Institute of Mathematical Statistics Medallion Lecture.
Abstracts
General Lecture
April 24, 2018
Fruit Flies in Statistics and Machine Learning
The fruit fly is a model scientific organism, employed in the laboratory to study the principles of life. Important advances in statistical methodology and theory have come from probing various statistical “fruit flies,” as we illustrate with some examples. Such fruit flies are less prevalent, however, in recent machine learning research. With the accelerating use of machine learning to engineer complex systems, it is increasingly difficult to understand the statistical principles at play. We argue that there is a need for more methodical inquiry in machine learning research, through the study of simpler “organisms.” Traditional statistical perspectives can be of service in this effort, but there is also room for more empirical approaches.
Technical Lecture
April 25, 2018
Structure and Adaptivity in Optimization and Statistical Learning
Problem structure must typically be exploited for optimization and learning algorithms to perform optimally, yet adaptive procedures are required to avoid making strong assumptions about that structure. We discuss recent work related to this general theme. First, we present a notion of fine-scale adaptivity in convex optimization, bridging concepts from statistics and numerical analysis. Next, we discuss adaptivity and structure for shape-constrained problems that generalize classical isotonic and convex regression. Finally, we describe recent work on testing for structure in random networks, showing that global community structure can be detected using only local subgraph statistics.
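To make the shape-constrained setting concrete, here is a minimal sketch of classical isotonic regression via the pool adjacent violators algorithm (PAVA). This is standard background for the lecture, not code from it, and the example data are invented for illustration.

import numpy as np

def isotonic_regression(y):
    # Pool Adjacent Violators Algorithm (PAVA): finds the
    # nondecreasing sequence closest to y in squared error.
    sums, counts = [], []
    for v in np.asarray(y, dtype=float):
        sums.append(v)
        counts.append(1)
        # Merge blocks backwards while monotonicity is violated.
        while len(sums) > 1 and sums[-2] / counts[-2] > sums[-1] / counts[-1]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    # Expand block means back to a fitted sequence of length len(y).
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

# Noisy observations of a monotone trend.
rng = np.random.default_rng(0)
y = np.linspace(0, 1, 20) ** 2 + rng.normal(scale=0.1, size=20)
print(isotonic_regression(y))

Similarly, the idea that local subgraph statistics can reveal global community structure can be illustrated with a toy simulation: under an assortative two-block stochastic block model, the triangle density exceeds that of an Erdős–Rényi graph with the same edge density. The model and parameters below are illustrative assumptions, not the specific tests developed in the lecture.

import numpy as np
from math import comb

def sample_sbm(n, p_in, p_out, rng):
    # Two-block stochastic block model on n nodes (n even).
    z = np.repeat([0, 1], n // 2)                 # community labels
    P = np.where(z[:, None] == z[None, :], p_in, p_out)
    A = np.triu((rng.random((n, n)) < P).astype(int), 1)
    return A + A.T                                # simple, undirected graph

def triangle_density(A):
    # Fraction of node triples forming triangles: a purely local statistic.
    n = len(A)
    return np.trace(A @ A @ A) / 6 / comb(n, 3)   # each triangle counted 6x

rng = np.random.default_rng(1)
n, p_in, p_out = 200, 0.10, 0.02
A_sbm = sample_sbm(n, p_in, p_out, rng)
p_match = A_sbm.sum() / (n * (n - 1))             # matching edge density
A_er = sample_sbm(n, p_match, p_match, rng)       # Erdős–Rényi null
print(triangle_density(A_sbm), triangle_density(A_er))

In this simulation the block-model graph shows a visibly higher triangle density than the Erdős–Rényi graph with the same number of edges, so counting a local subgraph suffices to flag the presence of global community structure.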