Invited Sessions Details

Advances in nonparametric and semiparametric statistics

Presenter: Marco Carone

When: Thursday, July 14, 2016      Time: 11:00 AM - 12:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Automating the construction of asymptotically efficient estimators

Despite the risk of misspecification that they carry, parametric models remain widely used in statistical practice because they are accessible to all. In particular, efficient estimation procedures in parametric models are simple to describe and implement. Unfortunately, the same cannot be said of semiparametric and nonparametric models. While these flexible alternatives often reflect the available scientific knowledge much more faithfully, performing efficient inference in these models is generally challenging. The efficient influence function of the parameter of interest is a key analytic object from which the construction of asymptotically efficient estimators can potentially be streamlined. However, deriving the efficient influence function theoretically is often a very difficult task, even for seasoned researchers. In this talk, we propose and discuss an automated numerical procedure for approximating the efficient influence function of a given parameter in a given statistical model. The proposed approach generalizes to arbitrary models the simple procedures recently described by Frangakis et al. (2015) and Luedtke, Carone & van der Laan (2015) for the case of nonparametric models. This proposal may be a significant step toward automating asymptotically efficient estimation within general statistical models, thereby making the use of realistic models in statistical analyses much more accessible.
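The numerical idea can be illustrated on a functional whose efficient influence function is known in closed form. The sketch below is not the procedure of the talk, only a minimal illustration of the underlying principle: approximate the Gateaux derivative of a functional at the empirical distribution in the direction of a point mass, and compare it with the known nonparametric efficient influence function of the variance, (x - mu)^2 - sigma^2. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(loc=2.0, scale=1.5, size=5000)  # sample representing P

def variance_of_mixture(sample, x, eps):
    # Closed-form variance of the mixture (1 - eps) * P_n + eps * delta_x,
    # where P_n is the empirical distribution of `sample`.
    m1 = (1 - eps) * sample.mean() + eps * x
    m2 = (1 - eps) * np.mean(sample**2) + eps * x**2
    return m2 - m1**2

def numerical_eif(sample, x, eps=1e-6):
    # Finite-difference Gateaux derivative of the variance functional at P_n,
    # in the direction of a point mass at x.
    return (variance_of_mixture(sample, x, eps)
            - variance_of_mixture(sample, x, 0.0)) / eps

grid = np.linspace(-2.0, 6.0, 9)
numeric = np.array([numerical_eif(x_obs, x) for x in grid])

# Known efficient influence function of the variance in a nonparametric
# model, evaluated at the empirical distribution: (x - mu)^2 - sigma^2.
mu, sig2 = x_obs.mean(), x_obs.var()
analytic = (grid - mu)**2 - sig2

print(np.max(np.abs(numeric - analytic)))  # small discretization error
```

The only tuning choice is the perturbation size eps, which trades off discretization error against floating-point cancellation.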

Advances in nonparametric and semiparametric statistics

Presenter: Mark van der Laan

When: Thursday, July 14, 2016      Time: 11:00 AM - 12:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Circumventing the curse of dimensionality in asymptotically efficient estimation

Suppose we observe n independent and identically distributed observations of a finite-dimensional bounded random variable. This talk is concerned with the construction of an efficient targeted minimum loss-based estimator (TMLE) of a pathwise differentiable target parameter based on a realistic statistical model. The only smoothness condition we enforce on the statistical model is that the relevant nuisance parameters of the data distribution are multivariate real-valued càdlàg functions with finite supremum and variation norms. We develop a loss-based super-learner whose library contains minimum loss-based estimators that minimize the empirical risk over the parameter space under the additional constraint that the variation norm is bounded by a given constant, across a set of constants whose maximum converges to infinity with sample size. We show that this super-learner is guaranteed to converge at a rate faster than the critical rate n^{-1/4}. Subsequently, we establish that a TMLE using such a super-learner as initial estimator is asymptotically efficient at any data-generating distribution in the model, under very weak structural conditions on the target parameter mapping and the model. We demonstrate our general theorem by constructing such a one-step TMLE of the average causal effect in a nonparametric model and presenting the corresponding efficiency theorem. We also discuss the practical implementation of this estimator.
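The targeting step at the heart of a TMLE can be shown in a toy setting. The sketch below is not the variation-norm-constrained super-learner of the talk: it is a minimal one-step TMLE of the average treatment effect that starts from a deliberately crude (constant) initial outcome estimator and a known treatment mechanism, then fluctuates the initial fit along the "clever covariate" so that the plug-in estimate solves the efficient score equation. All variable names and the data-generating process are illustrative.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1.0 - p))

rng = np.random.default_rng(1)
n = 20000
W = rng.uniform(size=n)                      # confounder
g = expit(W - 0.5)                           # known treatment mechanism
A = rng.binomial(1, g)                       # treatment
Y = rng.binomial(1, expit(-1.0 + A + W))     # binary outcome; true ATE ~ 0.240

# Deliberately crude initial outcome estimator: a constant. TMLE corrects its
# plug-in bias by fluctuating it along the clever covariate H.
Q_init = np.full(n, Y.mean())
H = A / g - (1 - A) / (1 - g)

# Solve the logistic-fluctuation score equation sum(H * (Y - Q_eps)) = 0 for
# eps by Newton's method, with logit(Q_eps) = logit(Q_init) + eps * H.
eps = 0.0
for _ in range(50):
    Qe = expit(logit(Q_init) + eps * H)
    score = np.sum(H * (Y - Qe))
    hess = -np.sum(H**2 * Qe * (1 - Qe))
    eps -= score / hess

# Targeted plug-in estimate of the average treatment effect, evaluating the
# fluctuated fit at A = 1 (H = 1/g) and A = 0 (H = -1/(1-g)).
Q1_star = expit(logit(Q_init) + eps / g)
Q0_star = expit(logit(Q_init) - eps / (1 - g))
psi = np.mean(Q1_star - Q0_star)
print(psi)
```

Because the targeted fit solves the efficient score equation and the treatment mechanism here is correct, the plug-in estimate recovers the true effect despite the poor initial outcome fit; this is the double robustness that the targeting step buys.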

Advances in nonparametric and semiparametric statistics

Presenter: Rajarshi Mukherjee

When: Thursday, July 14, 2016      Time: 11:00 AM - 12:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Minimax and Adaptive Estimation of Nonlinear Functionals

We consider asymptotically minimax as well as adaptive minimax estimation of a class of "smooth" nonlinear functionals in nonparametric models. Of special interest among the class of functionals considered is that of a treatment effect in the presence of many continuous confounders. Assuming a semiparametric model for the treatment effect functional, we discuss relevant results and subtleties in drawing inference under low regularity conditions on the underlying infinite-dimensional parameters of the model. In the context of more general models and functionals, we provide adaptive upper bounds for estimators based on second-order U-statistics arising from finite-dimensional approximation of the infinite-dimensional models using projection-type kernels. An accompanying general tool for adaptive lower bounds is provided, yielding bounds on the chi-square divergence between mixtures of product measures. Appealing to the theory of higher-order influence functions, we provide examples of a class of functionals where such estimation procedures might be relevant.
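The role that second-order U-statistics play in removing plug-in bias can be seen in a far simpler setting than the projection-kernel constructions of the talk. The sketch below (purely illustrative) estimates the quadratic functional (E X)^2: the naive plug-in estimator has bias Var(X)/n, while the degree-2 U-statistic averaging x_i * x_j over distinct pairs is exactly unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)

def ustat_mean_squared(x):
    # Degree-2 U-statistic for (E X)^2: the average of x_i * x_j over i != j,
    # computed in O(n) via the identity sum_{i != j} x_i x_j = (sum x)^2 - sum x^2.
    n = x.size
    s, s2 = x.sum(), np.sum(x**2)
    return (s**2 - s2) / (n * (n - 1))

# Monte Carlo comparison of the two estimators' bias at a small sample size.
n, reps, mu = 30, 20000, 1.0
plug_in, ustat = [], []
for _ in range(reps):
    x = rng.normal(mu, 1.0, size=n)
    plug_in.append(x.mean()**2)              # plug-in: biased upward by ~ 1/30
    ustat.append(ustat_mean_squared(x))      # U-statistic: exactly unbiased

print(np.mean(plug_in) - mu**2)
print(np.mean(ustat) - mu**2)
```

For genuinely infinite-dimensional functionals the same idea requires truncating to a finite-dimensional approximation first, which is where the projection-type kernels and the adaptation questions of the talk enter.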

Advances in nonparametric and semiparametric statistics

Presenter: Aditya Guntuboyina

When: Thursday, July 14, 2016      Time: 11:00 AM - 12:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

On shape constrained density estimation

I will describe accuracy results for the maximum likelihood estimator (MLE) in shape-constrained density estimation. I will first review classical results due to Birgé on monotone density estimation, and then present new results on log-concave density estimation. Possible extensions to multidimensional shape-constrained estimation will be outlined. This is based on joint work with Arlene Kim and Richard Samworth.
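In the monotone case the MLE has a concrete geometric form, the Grenander estimator: the left derivative of the least concave majorant of the empirical distribution function. The sketch below (an illustration, not code from the speakers) computes it with a monotone-chain upper-hull scan; the resulting density is piecewise constant, nonincreasing, and integrates to one by construction.

```python
import numpy as np

def grenander(x):
    """Grenander MLE of a nonincreasing density on [0, inf): the left
    derivative of the least concave majorant (LCM) of the empirical CDF."""
    x = np.sort(x)
    n = x.size
    # Vertices of the empirical CDF, including the origin.
    px = np.concatenate(([0.0], x))
    py = np.arange(n + 1) / n
    # Build the LCM with an upper-hull (monotone chain) scan.
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = ((px[a] - px[o]) * (py[i] - py[o])
                     - (py[a] - py[o]) * (px[i] - px[o]))
            if cross >= 0:   # vertex a lies on or below the chord o--i: drop it
                hull.pop()
            else:
                break
        hull.append(i)
    # The estimated density on each hull segment is the segment's slope.
    hx = px[hull]
    f = np.diff(py[hull]) / np.diff(hx)
    return hx, f          # breakpoints and (nonincreasing) density values

rng = np.random.default_rng(3)
hx, f = grenander(rng.exponential(scale=1.0, size=500))
print(np.sum(f * np.diff(hx)))   # total mass of the estimate
```

The log-concave MLE discussed next in the talk has no such closed geometric form and is instead computed by convex optimization, which is one reason its theory is more recent.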