Session 9 - Department of Statistics - Purdue University

Statistical Estimation and Decision Theory

Speaker(s)

  • Daniela Szatmari-Voicu (Kettering University)
  • Zhihua (Sophia) Su (University of Minnesota, Twin Cities)
  • Herman Rubin (Purdue University)
  • J.T. Gene Hwang (Cornell University; National Chung Cheng University, Taiwan)


Schedule

Fri, June 22 - Location: STEW 202

Time | Speaker | Title
1:30 - 1:55 | Daniela Szatmari-Voicu | Robust M- and L-estimators of Scale Parameter
Abstract: We first consider the class of M-estimators of scale that are location-scale equivariant and Fisher consistent at the error distribution of the shrinking contamination neighborhood. For a suitably regular score function, we derive an expression for the maximal asymptotic mean squared error, followed by a lower bound on it. We then show that the minimax asymptotic mean squared error is attained by an M-estimator of scale with the truncated MLE score function which, when specialized to the standard normal error distribution, takes the form of Huber's Proposal 2. The same minimax property is also shown to hold for the α-trimmed variance as an L-estimator of scale.
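A minimal sketch may help fix ideas for Huber's Proposal 2, which the abstract mentions as the standard-normal specialization. The sketch below (not the speaker's estimator; the location is fixed at 0 and the tuning constant c = 1.345 is an illustrative choice) solves the Proposal-2 scale equation (1/n) Σ ψ_c(x_i/s)² = β with ψ_c(t) = max(-c, min(t, c)) and β = E[ψ_c(Z)²] for Z ~ N(0, 1), using a simple fixed-point iteration:

```python
import math

def huber_proposal2_scale(x, c=1.345, tol=1e-8, max_iter=100):
    """Illustrative Huber Proposal-2 scale M-estimator (location fixed at 0):
    solve (1/n) * sum(psi_c(x_i/s)^2) = beta, where beta = E[psi_c(Z)^2], Z ~ N(0,1)."""
    phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)   # normal density
    Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))          # normal CDF
    # beta = E[min(|Z|, c)^2] under the standard normal (Fisher consistency constant)
    beta = (2 * Phi(c) - 1) - 2 * c * phi(c) + 2 * c * c * (1 - Phi(c))
    s = (sum(xi * xi for xi in x) / len(x)) ** 0.5 or 1.0  # start at the sample RMS
    for _ in range(max_iter):
        mean_psi2 = sum(min(abs(xi / s), c) ** 2 for xi in x) / len(x)
        s_new = s * math.sqrt(mean_psi2 / beta)  # fixed-point rescaling step
        if abs(s_new - s) < tol * s:
            return s_new
        s = s_new
    return s
```

Because the score is truncated at c, gross outliers contribute at most c² each to the defining equation, which is the source of the robustness discussed in the abstract.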
2:00 - 2:25 | Zhihua (Sophia) Su | Envelope Models and Methods
Abstract: This talk presents a new statistical concept called an envelope. An envelope has the potential to achieve substantial efficiency gains in multivariate analysis by identifying and cleaning up immaterial information in the data. The efficiency gains will be demonstrated both by theory and example. If time permits, some recent developments in this area will also be discussed. They refine and extend the enveloping idea, adapting it to more data types and increasing the potential to achieve efficiency gains. Applications of envelopes and their connection to other fields will also be mentioned.
2:30 - 2:55 | Herman Rubin | The Difference Between Statistical Decision Theory and “Plugin” Bayes
Abstract: In all of my papers concerning the foundations of statistical decision theory, my emphasis was on action, not belief. Making some strong assumptions, it is possible to get from action to belief, but these are unnecessary and not that reasonable. The conclusions from the action assumptions are that decisions should be based on the evaluation of all the costs in all states of nature, and that this can be done by minimizing a linear combination of the expected utilities in the various states. IF the minimization can be carried out, it will be a version of Bayes, but one can compare procedures even if Bayes procedures cannot be computed, and make intelligent decisions on what action to take, even if procedures are restricted.
3:00 - 3:30 | Break
3:30 - 3:55 | J.T. Gene Hwang | Statistical Inference after Selection: What Can We Do to Be Statistically Valid and Efficient?

Abstract: Modern statistical applications often involve many parameters, and the scientific interest often lies in estimating or making inference about the parameters that were selected by the data. For example, in microarray analysis, statistical inference for the parameters corresponding to the most significant genes is of great interest. This type of statistical inference is called post-selection statistical inference. We shall assume that no further data are collected, which is often the case; hence the same data used for selection are now used for statistical inference.

Naive statistical inference that ignores the selection incurs severe bias, especially in the large p, small n (i.e., many parameters and small sample size) scenario. Bonferroni-type procedures are valid for post-selection inference but are very conservative.

We shall demonstrate how empirical Bayes procedures, including estimators and confidence intervals, are superior to the naive and Bonferroni procedures. The empirical Bayes (Lindley-James-Stein) estimator has virtually no selection bias. The empirical Bayes interval centered at the empirical Bayes estimator is short and is valid for post-selection inference in the sense that its coverage probabilities with respect to a class of priors are numerically shown to be above the nominal level. It can turn the “curse of dimensionality” into a “blessing of dimensionality.” If time allows, we shall report results relating to the false coverage rate (FCR), which parallels the false discovery rate (FDR) for hypothesis testing.
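The selection-bias phenomenon described above can be illustrated with a small simulation (a sketch only, not the speaker's procedure; the dimensions, seed, and use of the positive-part James-Stein estimator shrinking toward 0 are assumptions for the demo). When every true mean is 0, the naive estimate for the coordinate selected as largest is badly biased upward, while the shrinkage estimate is nearly unbiased:

```python
import random

def james_stein(x):
    """Positive-part James-Stein estimator shrinking toward 0."""
    p = len(x)
    s2 = sum(v * v for v in x)
    shrink = max(0.0, 1.0 - (p - 2) / s2)
    return [shrink * v for v in x]

def selection_bias_demo(p=500, reps=200, seed=1):
    """All true means are 0; we estimate the mean of the coordinate selected
    as largest. Returns (avg naive estimate, avg shrinkage estimate)."""
    rng = random.Random(seed)
    naive_sum = js_sum = 0.0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(p)]
        i = max(range(p), key=lambda j: x[j])  # select the apparent winner
        naive_sum += x[i]                # naive: reuse the selected observation
        js_sum += james_stein(x)[i]      # shrinkage nearly removes the bias
    return naive_sum / reps, js_sum / reps
```

With p = 500 the selected observation averages around 3 (roughly sqrt(2 log p)) even though its true mean is 0, whereas the shrinkage estimate stays near 0, which is the "virtually no selection bias" property described in the abstract.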

Purdue Department of Statistics, 150 N. University St, West Lafayette, IN 47907

Phone: (765) 494-6030, Fax: (765) 494-0558

© 2023 Purdue University | An equal access/equal opportunity university | Copyright Complaints
