###### Dr. Peter Bühlmann

###### Department of Mathematics

###### ETH Zürich, Switzerland

Professor Bühlmann is Professor in the Department of Mathematics at ETH Zurich, where he served as chair of the department from 2013 to 2017. He is a world-renowned expert in statistical machine learning, causal inference, and computational biology. His many honors include election as a Fellow of the Institute of Mathematical Statistics (IMS) and of the American Statistical Association, service as co-editor of the Annals of Statistics, and invitations to deliver prestigious lectures such as the Bahadur Memorial Lecture (University of Chicago), the Neyman Lecture (IMS), and the Hartigan Lecture (Yale University).

#### Abstracts

##### General Lecture-Video

*January 31, 2019*

**Causality, Invariance and Robustness**

Is it a cause or an effect? This simple but fundamental question has a long history in science. Randomized studies serve as the gold standard for inferring causality, but they are often expensive or even impossible to conduct for ethical reasons. Recent approaches try to “substitute in part” the randomized studies with models, algorithms and statistical methods. Perhaps surprisingly, heterogeneity in potentially large-scale data can be beneficially exploited for causal inference and novel robustness, with wide-ranging prospects for various applications. The key idea relies on a notion of probabilistic invariance: it opens up new insights for formulating causality as a certain risk minimization problem with a corresponding notion of predictive robustness.
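To make the invariance idea concrete, here is a minimal sketch in the spirit of invariant causal prediction (Peters, Bühlmann and Meinshausen, 2016), on simulated data of my own devising rather than anything from the lecture: across two environments, we regress the target on each subset of candidate predictors and keep the subsets whose residual distribution does not change between environments.

```python
# Hedged sketch of invariance-based causal discovery on a toy example.
# The data-generating model, sample sizes, and test are illustrative choices,
# not the lecture's exact method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(env_shift, n=500):
    # Toy structural model X1 -> Y -> X2; the environment shifts X1 only,
    # so {X1} is the causal predictor set for Y.
    x1 = rng.normal(env_shift, 1.0, n)
    y = 2.0 * x1 + rng.normal(0.0, 1.0, n)
    x2 = y + rng.normal(0.0, 1.0, n)
    return np.column_stack([x1, x2]), y

envs = [simulate(0.0), simulate(2.0)]            # two heterogeneous environments
X = np.vstack([e[0] for e in envs])
y = np.concatenate([e[1] for e in envs])
labels = np.repeat([0, 1], [len(e[1]) for e in envs])

invariant_sets = []
for subset in [(), (0,), (1,), (0, 1)]:
    if subset:
        Xs = X[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        res = y - Xs @ beta                       # pooled fit, residuals per env
    else:
        res = y - y.mean()
    # Invariance check: same residual distribution in both environments.
    _, p = stats.ks_2samp(res[labels == 0], res[labels == 1])
    if p > 0.05:
        invariant_sets.append(subset)

print("invariant predictor sets:", invariant_sets)
```

Only subsets containing the true cause X1 should survive the invariance test; subsets through the effect X2 produce residuals whose distribution shifts with the environment.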

##### Technical Lecture-Video

*February 1, 2019*

**Hidden Confounders: Adjustment with some User-Friendly Methods**

Hidden confounding variables lead to biased regression estimators. We propose some adjustment methods for different settings that are very easy to use: the data are transformed, and subsequent estimation proceeds “as usual”, e.g. with the Lasso or ordinary least squares. The first scenario concerns high-dimensional i.i.d. data, while the second exploits structure in heterogeneous data (and thus ties in with the topic of the first lecture). The methods provide robustification against hidden confounders, and in the latter setting with heterogeneous data we generalize instrumental variables regression for causal inference.
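The abstract does not spell out the transformation; one published instance from Bühlmann's group is the “trim transform” of spectral deconfounding (Ćevid, Bühlmann and Meinshausen), which caps the singular values of the design matrix at their median and then estimates “as usual”. The sketch below, on simulated data with made-up dimensions and noise levels, illustrates that pattern with ordinary least squares as the downstream estimator.

```python
# Hedged sketch of transform-then-estimate deconfounding (trim transform).
# Data-generating choices (n, p, q, confounding strength) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 500, 10, 2                       # samples, predictors, hidden confounders

# Hidden confounder H drives both X and y, so naive OLS is biased.
H = rng.normal(size=(n, q))
Gamma = rng.normal(size=(q, p))
X = H @ Gamma + rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
delta = 2.0 * rng.normal(size=q)           # confounding effect on y
y = X @ beta_true + H @ delta + rng.normal(size=n)

# Trim transform F: cap the singular values of X at their median.
U, d, Vt = np.linalg.svd(X, full_matrices=False)
F = U @ np.diag(np.minimum(d, np.median(d)) / d) @ U.T

# Estimation "as usual" (here OLS) on the transformed data F X, F y.
beta_naive, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_deconf, *_ = np.linalg.lstsq(F @ X, F @ y, rcond=None)

print("coef error, naive  :", np.linalg.norm(beta_naive - beta_true))
print("coef error, trimmed:", np.linalg.norm(beta_deconf - beta_true))
```

Because the hidden confounding tends to concentrate in the leading singular directions of X, dampening those directions before the regression reduces the bias while leaving the sparse or moderate-dimensional signal largely intact.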