GSO Spring Speaker 2008


A Review of Surprises Encountered in Bayesian Model Selection

James O. Berger
The Arts and Sciences Professor of Statistics, Duke University, and Director of the Statistical and Applied Mathematical Sciences Institute (SAMSI)

Joint with the Department of Statistics Research Colloquium

Start Date and Time: Thu, 21 Feb 2008, 4:30 PM

End Date and Time: Thu, 21 Feb 2008, 6:00 PM

Venue: MATH 175


This talk reviews the following ideas, all of which I at one time thought to be true but now believe to be false.

  • Use of p-values is better than fixed alpha-level testing, since p-values are conditional on the data.
  • Frequentist testing and Bayesian testing are incompatible; for instance, Bayes tests do not depend on the stopping rule in sequential settings, whereas frequentist tests do, necessitating "spending alpha" across interim looks at the data.
  • The best single model to a Bayesian is the highest posterior probability model.
  • Model selection priors cannot be derived from the data.
  • Only a relatively small number of models will typically receive significant posterior probability (or other "weight"), and hence description of model uncertainty can focus on a few best models.

Again, I now view all of these statements as false, and will discuss why. Many of these issues will be illustrated through an example from high-energy physics.
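The first bullet connects to a known calibration result (Sellke, Bayarri, and Berger, 2001, The American Statistician): for a p-value p < 1/e, the Bayes factor in favor of the null hypothesis can be no smaller than −e·p·ln(p). A minimal sketch in Python (function names are my own, not from the talk):

```python
import math

def bayes_factor_bound(p):
    """Lower bound on the Bayes factor in favor of the null,
    valid for 0 < p < 1/e:  B(p) >= -e * p * ln(p)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration valid only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def posterior_prob_null(p, prior_null=0.5):
    """Corresponding lower bound on P(H0 | data),
    assuming equal prior odds on H0 and H1 by default."""
    b = bayes_factor_bound(p)
    prior_odds = prior_null / (1 - prior_null)
    posterior_odds = prior_odds * b
    return posterior_odds / (1 + posterior_odds)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:<6}  B >= {bayes_factor_bound(p):.3f}  "
          f"P(H0 | data) >= {posterior_prob_null(p):.3f}")
```

For p = 0.05 the bound gives P(H0 | data) of at least about 0.29, far from the "1 in 20" a naive reading of the p-value suggests, which is one sense in which the first bullet fails.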


Purdue Department of Statistics, 250 N. University St, West Lafayette, IN 47907
