Session 17 - Department of Statistics - Purdue University

New Developments in Foundations of Statistical Inference

Speaker(s)

  • Ryan Martin (University of Illinois at Chicago)
  • Huiping Xu (Indiana University-Purdue University Indianapolis)
  • Duncan Ermini Leaf (Pacific Quantitative Research Services)
  • Jun Xie (Purdue University)

Description

Modern high-dimensional inference problems call for renewed attention to the foundations of statistical inference. This session focuses on a new and promising inferential system, the inferential model (IM) framework, which produces prior-free, frequency-calibrated probabilistic assessments of uncertainty. Four speakers will introduce IMs and demonstrate, through a collection of "benchmark" problems, including constrained-parameter problems, multiple testing, and variable selection, that in addition to its sound logic for reasoning with uncertainty, the IM framework performs as well as or better than existing methods.

As the old Chinese saying reminds us, "It takes ten years to grow a tree, but a hundred to rear people," there is no doubt that IMs will meet some initial skepticism. Perhaps the most important message from the speakers is that, as long as we statisticians keep trying, future generations will have solid foundations of statistical inference for making data-driven scientific discoveries. And such a day may not be far away.

Schedule

Sun, June 24 - Location: STEW 206

Time | Speaker | Title
8:30 - 9:15AM | Ryan Martin | Inferential models: A framework for prior-free posterior probabilistic inference
Abstract: Posterior probabilistic inference without priors is an important goal but, of the existing attempts to reach this goal, including Fisher's fiducial inference, none has been able to give a completely satisfactory picture. In this talk I will introduce a new framework for probabilistic inference, based on inferential models (IMs), which not only provides data-dependent probabilistic measures of uncertainty about the unknown parameter, but does so with an almost automatic long-run frequency calibration property. The key to this new approach is the focus on predicting unobserved auxiliary variables with random sets. I will start with simple examples to build up the basic ideas, and then move on to more interesting and challenging problems which require special auxiliary variable dimension reduction techniques.
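The basic construction can be illustrated on the textbook single-observation example X ~ N(theta, 1): write the association X = theta + Z, predict the unobserved auxiliary variable Z with the default symmetric random set, and read off a plausibility for each assertion about theta. The sketch below is illustrative only (the function names and the one-observation setup are ours, not the speaker's); with this random set the plausibility of {theta = theta0} reduces to the familiar two-sided p-value, and the 95% plausibility region recovers the interval x ± 1.96.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def plausibility(theta0, x):
    """Plausibility of the singleton assertion {theta = theta0} under the
    default symmetric random set S = {z : |z| <= |Z'|}, Z' ~ N(0, 1).
    With X = theta + Z this reduces to 2 * (1 - Phi(|x - theta0|))."""
    return 2.0 * (1.0 - norm_cdf(abs(x - theta0)))

x = 1.3  # a single observation assumed drawn from N(theta, 1)
# The 95% plausibility region {theta : pl(theta) >= 0.05} is x +/- 1.96,
# matching the classical 95% confidence interval.
print(plausibility(x, x))                   # 1.0 at theta = x
print(round(plausibility(x - 1.96, x), 3))  # ~0.05 at the boundary
```

The long-run calibration property in the abstract corresponds to pl(theta0) being stochastically no smaller than uniform when theta0 is the true value, so thresholding plausibility at alpha controls error at level alpha.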
9:15 - 9:55AM | Huiping Xu | Inferential models for variable selection in linear regression
Abstract: Linear regression is arguably one of the most widely used statistical methods in applications. However, important problems, especially variable selection, remain a challenge for classical modes of inference. This talk introduces a recently proposed framework of inferential models (IMs) in the linear regression context. In general, the IM approach produces meaningful probabilistic summaries of the statistical evidence for and against assertions about the unknown parameter of interest. Moreover, these summaries are properly calibrated in a frequentist sense, allowing results to be easily interpreted both within and across experiments. Numerical examples together with a simulation study show that the IM framework is promising for linear regression analysis.
10:00 - 10:30AM | Break
10:30 - 11:15AM | Duncan Ermini Leaf | Inferential models for constrained parameter problems
Abstract: In statistical inference problems, there can be a priori constraints on the parameters of a probability model. As an example, measurements of an inherently non-negative quantity, such as a mass, may have a Normal error distribution. An observed measurement from this distribution can be negative, but the mean parameter cannot be.

This parameter constraint is external to the specified Normal error model, and incorporating such constraints into statistical inference procedures remains a challenging problem. I will discuss four methods for handling parameter constraints in the inferential model framework: the elastic belief method, Dempster's conditioning rule, generalized inferential models, and a data adjustment method. All of these approaches satisfy the desired validity criteria for inferential models. I will compare these methods and show their connections to some confidence interval constructions for constrained parameters.
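To see why the constraint bites, consider the naive fix of intersecting the unconstrained 95% plausibility region for a N(theta, 1) mean with the constraint theta >= 0: for sufficiently negative observations the intersection is empty, which is precisely the pathology the four methods above (e.g., the elastic belief method, which stretches the random set until it respects the constraint) are designed to avoid. The sketch below is ours, not the speaker's construction; it only exhibits the failure mode.

```python
def naive_region(x, half_width=1.96):
    """Unconstrained 95% plausibility region [x - 1.96, x + 1.96] for a
    N(theta, 1) mean, naively intersected with the constraint theta >= 0.
    Returns None when the intersection is empty."""
    lo, hi = x - half_width, x + half_width
    if hi < 0.0:
        return None          # constraint wipes out the whole region
    return (max(lo, 0.0), hi)

print(naive_region(-1.0))    # boundary value theta = 0 is still plausible
print(naive_region(-2.5))    # None: naive truncation leaves nothing
```

An empty region is uninterpretable as an uncertainty statement, which is why constraint-aware constructions modify the random set (or the data) rather than simply truncating the answer.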
11:15 - 11:55AM | Jun Xie | Probabilistic inference for multiple testing
Abstract: A probabilistic inferential model is developed for large-scale simultaneous hypothesis testing. Starting with a simple hypothesis testing problem, the inferential model produces lower and upper probabilities on an assertion about the null or alternative hypothesis. The construction uses the data-generating mechanism and is based on the observed data. More specifically, the inferential model is obtained by improving on Fisher's fiducial argument and the Dempster-Shafer theory of belief functions, so that it produces probabilistic inferential results with desirable frequency properties. For a large set of hypotheses, a sequence of assertions concerning the total number of true alternative hypotheses is proposed. The inferential model is derived to calculate the lower and upper probabilities on this sequence of assertions and hence offers a new way to perform large-scale multiple testing. The proposed method is applied to identify differentially expressed genes in microarray data analysis.
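For a single z-statistic, the lower and upper probabilities on the alternative assertion {theta != 0} take a simple closed form in the basic IM with the default symmetric random set: the belief in the alternative is one minus the plausibility of the point null, while the plausibility of the alternative is one (a singleton null attracts no belief of its own). The sketch below illustrates that lower/upper pair only; it is our simplification, not the paper's full multiple-testing construction over counts of true alternatives.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lower_upper_alt(z):
    """Lower (belief) and upper (plausibility) probabilities for the
    alternative assertion {theta != 0}, given one z-statistic and the
    default symmetric random set for the N(0, 1) auxiliary variable."""
    pl_null = 2.0 * (1.0 - norm_cdf(abs(z)))  # plausibility of {theta = 0}
    bel_alt = 1.0 - pl_null                   # belief in the alternative
    pl_alt = 1.0                              # point null carries no belief
    return bel_alt, pl_alt

print(lower_upper_alt(0.0)[0])  # 0.0: no evidence for the alternative
bel, pl = lower_upper_alt(2.8)
print(bel < pl)                 # True: the pair brackets the evidence
```

The gap between the two numbers quantifies what the data cannot resolve; ranking hypotheses by belief in the alternative is one natural route from this pair to a multiple-testing procedure.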

Purdue Department of Statistics, 150 N. University St, West Lafayette, IN 47907

Phone: (765) 494-6030, Fax: (765) 494-0558

© 2023 Purdue University | An equal access/equal opportunity university | Copyright Complaints
