First-Class Professor of Statistics
Center for Research in Decision Mathematics
Christian Robert is one of the world’s leading researchers in Bayesian statistics and the theory and applications of Monte Carlo methods. He has compiled an outstanding record of publication in the leading journals in statistics, and is the author of 13 advanced texts and monographs. He has served as co-editor of the Journal of the Royal Statistical Society, Series B (Statistical Methodology), and as an associate editor for the Annals of Statistics, the Journal of the American Statistical Association, and Statistical Science. He is a senior member of the Institut Universitaire de France; a fellow of the Institute of Mathematical Statistics, the American Statistical Association, and the Royal Statistical Society; past-president of the International Society for Bayesian Analysis; and former Head of the Statistics Laboratory of the Centre de Recherche en Économie et Statistique (CREST), Institut National de la Statistique et des Études Économiques (INSEE), Paris.
Nov 13, 2014
My Life as a Mixture
Mixtures of distributions are fascinating objects for statisticians in that they both constitute a straightforward extension of standard distributions and offer a complex benchmark for evaluating statistical procedures, with a likelihood that is computable in linear time yet exhibits an exponential number of local modes (and sometimes infinitely many modes). This fruitful playground appeals in particular to Bayesians, as it constitutes an easily understood challenge to the use of improper priors and of objective Bayes solutions. This talk will review some ancient and some more recent works of mine on mixtures of distributions, from the 1990 Gibbs sampler to the 2000 label switching and to later studies of Bayes factor approximations, nested sampling performances, improper priors, improved importance samplers, ABC, and an inverse perspective on the Bayesian approach to testing of hypotheses.
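To make the first of these themes concrete, here is a minimal sketch (not taken from the talk) of a Gibbs sampler for a two-component Gaussian mixture, alternating between the latent allocations and the component means. The known unit variances, equal weights, data-generating values, and prior variance `tau2` are all illustrative assumptions.

```python
import math, random

random.seed(1)

# Simulate data from a two-component Gaussian mixture
# (illustrative values: weights 0.4/0.6, means -2 and 2, unit variances).
data = [random.gauss(-2, 1) if random.random() < 0.4 else random.gauss(2, 1)
        for _ in range(200)]

def gibbs_mixture(y, n_iter=500, tau2=10.0):
    """Gibbs sampler for a two-component N(mu_k, 1) mixture with known
    equal weights and independent N(0, tau2) priors on the means."""
    mu = [min(y), max(y)]  # crude but well-separated initialisation
    draws = []
    for _ in range(n_iter):
        # 1. Sample the latent allocations given the current means.
        z = []
        for yi in y:
            p0 = math.exp(-0.5 * (yi - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (yi - mu[1]) ** 2)
            z.append(0 if random.random() < p0 / (p0 + p1) else 1)
        # 2. Sample each mean from its Gaussian full conditional:
        #    precision n_k + 1/tau2, mean s_k / (n_k + 1/tau2).
        for k in (0, 1):
            yk = [yi for yi, zi in zip(y, z) if zi == k]
            var = 1.0 / (len(yk) + 1.0 / tau2)
            mu[k] = random.gauss(var * sum(yk), math.sqrt(var))
        draws.append(tuple(mu))
    return draws

draws = gibbs_mixture(data)
# Ergodic averages of the two component means, after burn-in.
post = [sum(d[k] for d in draws[100:]) / len(draws[100:]) for k in (0, 1)]
print(post)
```

With well-separated components such as these, the chain settles quickly; the label-switching issue discussed in the talk arises when the components overlap and the symmetric posterior modes are all visited.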
Nov 14, 2014
Approximate Bayesian Computing (ABC) for Model Choice: from Statistical Sufficiency to Machine Learning
Since its introduction in the late 1990s, the performance of the ABC method has been analysed from several perspectives, starting with the purely practical motivations of the population geneticists who created it, then as an approximate Bayesian method, and later as a non-parametric one. We cover in this talk a new vision of the specific case of model selection, showing how we originally developed convergent methods for Gibbs random fields, before moving to a pessimistic view of the consistency of the method and producing necessary and sufficient conditions for this consistency to hold, and then to the realisation that generic machine learning tools like k-nearest neighbours (KNN) and random forests should be put to use to run model selection in the complex models covered by ABC techniques. Our perspective radically alters the way model selection is operated, as we ban approximations of posterior probabilities for the models under comparison, since they cannot be reliably estimated, and propose instead to compute the performances of the selection method. As an aside, we argue that both KNN and random forest methods can be adapted to the settings of interest, with a recommendation on the automated selection of the tolerance level and on sparse implementations of the random forest tree construction, using subsampling and reduced reference tables. This talk is based on joint work with Jean-Marie Cornuet, Arnaud Estoup, Jean-Michel Marin, Natesh Pillai, Pierre Pudlo and Judith Rousseau.
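The mechanics of KNN-based ABC model choice can be sketched as follows. This is a toy example, not from the talk: the two candidate models (Gaussians with unit versus triple standard deviation, sharing a location drawn from a common prior), the summary statistics, the reference-table size, and the choice k = 100 are all illustrative assumptions.

```python
import math, random

random.seed(2)

N = 50  # sample size of each (pseudo-)dataset

def summaries(x):
    """Illustrative summary statistics: sample mean and standard deviation."""
    m = sum(x) / len(x)
    sd = math.sqrt(sum((xi - m) ** 2 for xi in x) / len(x))
    return (m, sd)

def simulate(model):
    """One pseudo-dataset under model 0 (sd 1) or model 1 (sd 3),
    with the common location drawn from a N(0, 2) prior."""
    loc = random.gauss(0, 2)
    sd = 1.0 if model == 0 else 3.0
    return [random.gauss(loc, sd) for _ in range(N)]

# Reference table: model index paired with the summaries of one simulation.
table = [(m, summaries(simulate(m))) for m in (0, 1) for _ in range(2000)]

def knn_model_choice(obs, k=100):
    """Majority vote among the k simulations whose summaries are closest
    to the observed ones.  The vote frequency is NOT read as a posterior
    probability of the model, only as the output of a classifier."""
    s_obs = summaries(obs)
    nearest = sorted(table, key=lambda row: sum((a - b) ** 2
                                                for a, b in zip(row[1], s_obs)))
    votes = sum(m for m, _ in nearest[:k])
    return 1 if votes > k / 2 else 0

obs = [random.gauss(0.5, 1) for _ in range(N)]  # truly model-0 data
print(knn_model_choice(obs))

# In the spirit of assessing the selection method rather than quoting
# posterior probabilities: estimate its error rate on fresh simulations.
errors = sum(knn_model_choice(simulate(m)) != m
             for m in (0, 1) for _ in range(50))
print(errors / 100)
```

The closing error-rate computation mirrors the abstract's proposal to report the performance of the selection procedure itself; a random forest would replace the nearest-neighbour vote with a classifier trained on the same reference table.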