GSO Spring Speaker 2008

A Review of Surprises Encountered in Bayesian Model Selection
James O. Berger
The Arts and Sciences Professor of Statistics, Duke University
Director, Statistical and Applied Mathematical Sciences Institute (SAMSI)
Joint with the Department of Statistics Research Colloquium
Venue: MATH 175
Abstract:
This talk reviews the following ideas, all of which I at one time thought to be true but now believe to be false.
- Use of p-values is better than fixed alpha-level testing, since p-values are conditional on the data (see the first sketch below).
- Frequentist testing and Bayesian testing are incompatible; for instance, Bayes tests do not depend on the stopping rule in sequential settings, whereas frequentist tests do, necessitating "spending alpha" for interim looks at the data.
- The best single model to a Bayesian is the highest posterior probability model (see the second sketch below).
- Model selection priors cannot be derived from the data.
- Only a relatively small number of models will typically receive significant posterior probability (or other "weight"), and hence description of model uncertainty can focus on a few best models.
Again, I now view all of these statements as false and will discuss why. Many of these issues will be illustrated through an example involving high-energy physics.
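
To give a concrete sense of the first bullet, the calibration of Sellke, Bayarri, and Berger (2001) bounds the Bayes factor in favor of a precise null hypothesis from below by -e * p * log(p) whenever p < 1/e, so even a "significant" p-value leaves substantial posterior probability on the null. The following is a minimal Python sketch of that calibration (illustrative only, not part of the talk materials):

```python
import math

def bayes_factor_bound(p):
    """Lower bound on the Bayes factor in favor of H0 given p-value p.

    Sellke, Bayarri, and Berger (2001) calibration: for 0 < p < 1/e,
    B(p) >= -e * p * log(p). The bound shows the evidence against H0
    is far weaker than the raw p-value suggests.
    """
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    b = bayes_factor_bound(p)
    # Minimum posterior probability of H0 under equal prior odds: B / (1 + B).
    print(f"p = {p:<6} Bayes factor bound = {b:.3f}  "
          f"P(H0 | data) >= {b / (1 + b):.3f}")
```

For p = 0.05 the bound gives a posterior probability of the null of at least about 0.29 under equal prior odds, far from the "1 in 20" reading often attached to that p-value.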
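
On the third bullet, Barbieri and Berger (2004) show that for prediction under squared-error loss (in, e.g., orthogonal designs) the optimal single model is the median probability model: the model containing exactly those variables whose posterior inclusion probability is at least 1/2. It need not coincide with the highest posterior probability model. A minimal Python sketch with invented posterior model probabilities (the variable names and numbers below are hypothetical):

```python
# Hypothetical posterior model probabilities over two candidate
# predictors; the numbers are invented for illustration only.
post = {
    frozenset(): 0.05,
    frozenset({"x1"}): 0.40,
    frozenset({"x2"}): 0.35,
    frozenset({"x1", "x2"}): 0.20,
}

# Highest posterior probability model (HPM).
hpm = max(post, key=post.get)

# Posterior inclusion probability of each predictor: total posterior
# probability of all models containing it.
predictors = sorted(set().union(*post))
incl = {v: sum(pr for m, pr in post.items() if v in m) for v in predictors}

# Median probability model (MPM): the model containing exactly those
# predictors with inclusion probability >= 1/2 (Barbieri & Berger, 2004).
mpm = frozenset(v for v, pr in incl.items() if pr >= 0.5)

print("HPM:", set(hpm) or "{}", "with probability", post[hpm])
print("inclusion probabilities:", incl)
print("MPM:", set(mpm) or "{}", "with probability", post[mpm])
```

Here the highest probability model is {x1} with probability 0.40, while the median probability model is {x1, x2}, which itself has posterior probability only 0.20.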