Statistics Seminar

Fall 2019

Seminars are held from 4:00 p.m. – 5:00 p.m. in Griffin-Floyd 100 unless otherwise noted. Refreshments are available before the seminars from 3:30 p.m. – 4:00 p.m. in Griffin-Floyd Hall 103.

Date    Speaker                                         Title
Sep 5   Bikas Sinha (Indian Statistical Institute)      Reliability Estimation in Exponential Populations
Sep 26  Yuexiao Dong (Temple University)                On dual model-free variable selection with two groups of variables
Oct 17  Yiyuan She (Florida State University)           Supervised Clustering with Low Rank
Nov 14  Matey Neykov (Carnegie Mellon University)
Nov 21  Robert Strawderman (University of Rochester)

Reliability Estimation in Exponential Populations
Bikas Sinha, Indian Statistical Institute
Abstract: We restrict attention to the simple exponential population as the life distribution of a given product, and to point estimation of the reliability R(t) = Pr[X > t] at a given/specified time point t > 0. We adopt the unbiasedness criterion in the exact sense, requiring E[R̂(t)] = R(t). We discuss the theory underlying the development of the subject matter. The computations are routine and are illustrated with a few examples.
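As a concrete instance of exact unbiasedness in this setting: for a sample X_1, …, X_n from an exponential distribution with mean θ, R(t) = e^(−t/θ), and the classical estimator (1 − t/T)^(n−1), with T = ΣX_i (and 0 when t ≥ T), is exactly unbiased for R(t). The Python sketch below is our own illustration, not material from the talk; it checks the unbiasedness by simulation.

```python
import numpy as np

def umvue_reliability(x, t):
    """Exactly unbiased estimator of R(t) = exp(-t/theta) from an
    exponential sample x: (1 - t/T)^(n-1) if t < T, else 0,
    where n = len(x) and T = sum(x)."""
    x = np.asarray(x, dtype=float)
    n, T = x.size, x.sum()
    return (1.0 - t / T) ** (n - 1) if t < T else 0.0

# Simulation check: the Monte Carlo average of the estimator should
# match R(t) = exp(-t/theta) up to simulation error.
rng = np.random.default_rng(0)
theta, t, n = 2.0, 1.0, 10
estimates = [umvue_reliability(rng.exponential(theta, n), t)
             for _ in range(20000)]
print(np.mean(estimates), np.exp(-t / theta))  # both approximately 0.61
```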
On dual model-free variable selection with two groups of variables
Yuexiao Dong, Temple University


Abstract: In the presence of two groups of variables, existing model-free variable selection methods only reduce the dimensionality of the predictors. We extend the popular marginal coordinate hypotheses of Cook (2004) from the sufficient dimension reduction literature and consider the dual marginal coordinate hypotheses, where the roles of the predictor and the response are not distinguished. Motivated by canonical correlation analysis (CCA), we propose a CCA-based test for the dual marginal coordinate hypotheses and devise a joint backward selection algorithm for dual model-free variable selection. The performance of the proposed test and the variable selection procedure is evaluated through synthetic examples and a real data analysis.
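The quantity underlying a CCA-based approach is the canonical correlation between the two variable groups. The sketch below is not the authors' proposed test; it is only an illustration (with our own toy data and function names) of computing the leading canonical correlation via a QR/SVD decomposition, and of how it drops when an active variable is removed from one group.

```python
import numpy as np

def leading_canonical_correlation(X, Y):
    """Leading canonical correlation between two data blocks.

    Canonical correlations are the singular values of Qx' Qy, where
    Qx and Qy are orthonormal bases (QR factors) of the column-centered
    X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

# Toy example: only X[:, 0] is associated with Y. Dropping it should
# sharply reduce the leading canonical correlation.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=n),
                     rng.normal(size=n)])
full = leading_canonical_correlation(X, Y)
drop0 = leading_canonical_correlation(X[:, 1:], Y)
print(full, drop0)  # full is near 1; drop0 is much smaller
```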

Supervised Clustering with Low Rank
Yiyuan She, Florida State University

Abstract: Modern clustering applications are often faced with challenges from high dimensionality, nonconvexity, and parameter tuning. This paper gives a mathematical formulation of low-rank supervised clustering that can automatically group the predictors in building a multivariate predictive model. By use of linearization and block coordinate descent, a simple-to-implement algorithm is developed that performs subspace learning and clustering iteratively, with guaranteed convergence. We show a tight error bound for the proposed method, study its minimax optimality, and propose a new information criterion for parameter tuning, all with rates distinct from those in the large body of literature based on sparsity. Extensive simulations and real-data experiments demonstrate the excellent performance of rank-constrained inherent clustering.
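The talk's algorithm is its own; as a loose, generic illustration of its two ingredients — a rank-constrained multivariate fit and grouping predictors by clustering the rows of the coefficient matrix — here is a sketch that substitutes classical reduced-rank regression (SVD truncation of the OLS fit) and a bare-bones k-means. All names and the toy data are ours; this is not the paper's method.

```python
import numpy as np

def reduced_rank_fit(X, Y, rank):
    """Rank-constrained least-squares fit of Y ~ X B: project the OLS
    coefficients onto the leading right-singular subspace of the fitted
    values (classical reduced-rank regression)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]  # projection onto leading response subspace
    return B_ols @ P

def group_predictors(B, k, iters=50):
    """Group predictors by k-means on the rows of the coefficient matrix,
    using deterministic farthest-point initialization."""
    B = np.asarray(B, dtype=float)
    centers = [B[0]]
    for _ in range(k - 1):
        d = np.min([((B - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(B[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        dists = ((B[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = B[labels == j].mean(axis=0)
    return labels

# Toy demo: a rank-1 coefficient matrix whose rows fall into two groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
B_true = np.outer([1.0, 1.0, -2.0, -2.0], [1.0, 0.5])
Y = X @ B_true + 0.01 * rng.normal(size=(60, 2))
B_hat = reduced_rank_fit(X, Y, rank=1)
print(group_predictors(B_hat, k=2))  # predictors {0, 1} and {2, 3} separate
```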