**Selected Ongoing Projects:**

**Global-local shrinkage and the horseshoe+ estimator:** The horseshoe estimator of Carvalho, Polson and Scott (2010) was one of the first works to demonstrate the power of "global-local" shrinkage in ultra-sparse Bayesian variable selection problems. Since then, multiple attractive theoretical properties of this estimator have been discovered. In collaborative work with Nick Polson, Jyotishka Datta and Brandon Willard, we propose a new estimator, termed the "horseshoe+ estimator," that improves upon the horseshoe both theoretically and empirically; Bhadra et al. (2017) give the details. It also appears that global-local shrinkage priors are good candidates for default priors on low-dimensional, nonlinear functions in a normal means model, where the so-called "flat" priors fail; Bhadra et al. (2016) demonstrate their use in several such problems. Bhadra et al. (2016a) demonstrate the use of two integral identities for generating global-local mixtures. Bhadra et al. (2016b) formally demonstrate that the prediction performance of global shrinkage regression methods (ridge regression, principal components regression, etc.) can be improved by using local, component-specific shrinkage parameters. Bhadra et al. (2017a) derive fast computational algorithms for feature selection with the non-convex horseshoe regularization penalty. Li, Craig and Bhadra (2017) propose the horseshoe prior for estimating the precision matrix of multivariate Gaussian data. Bhadra et al. (2017b) is a review article summarizing the important developments in global-local shrinkage methods between 2010 and 2017.
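The global-local construction behind the horseshoe is simple to sketch: each coefficient gets its own heavy-tailed half-Cauchy local scale on top of a shared global scale, so the prior shrinks noise aggressively while leaving large signals nearly untouched. A minimal simulation (assuming NumPy; the value `tau = 0.1` is an arbitrary illustrative choice, not taken from the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_prior_draws(p, tau=1.0, rng=rng):
    """Draw p coefficients from the horseshoe prior:
    beta_i | lambda_i, tau ~ N(0, lambda_i^2 * tau^2),
    lambda_i ~ C+(0, 1)  (half-Cauchy local scales).
    """
    lam = np.abs(rng.standard_cauchy(p))   # half-Cauchy local scales
    beta = rng.normal(0.0, lam * tau)      # conditionally Gaussian coefficients
    return beta, lam

tau = 0.1
beta, lam = horseshoe_prior_draws(10_000, tau=tau)

# The implied shrinkage weight kappa_i = 1 / (1 + lambda_i^2 * tau^2)
# piles up near 1 (total shrinkage: noise) and near 0 (no shrinkage:
# signals) -- the horseshoe-shaped density that gives the prior its name.
kappa = 1.0 / (1.0 + lam**2 * tau**2)
```

The U-shaped distribution of `kappa` is what distinguishes global-local priors from a single global ridge-type penalty, which shrinks every coefficient by the same factor.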

**Bayesian models for joint mean-covariance estimation and for mixed discrete-continuous data:** Bayesian
variable and covariance selection have been treated separately in the
statistics literature for a long time. We perform a combined analysis in the
context of a Gaussian
sparse seemingly unrelated regression (SSUR) model to infer jointly the
important sparse set of predictors as well as the important sparse set of
non-zero partial correlations in the responses. We apply our technique to
expression quantitative trait loci (eQTL) analysis where the expression level
of a gene (response) is typically affected by a set of
important SNPs (predictors) and the responses exhibit
conditional dependence among themselves. Both the number of predictors and the number of
correlated responses routinely exceed the sample size. We find that a
marginalization-based collapsed Gibbs sampler offers a computationally efficient solution. The first ideas appeared in Bhadra and Mallick (2013). Building on that, Feldman, Bhadra and Kirshner (2014) found a way to relax the restriction to decomposable graphs. Bhadra and Baladandayuthapani (2013) is an application of the methodology to brain cancer (glioblastoma) data. Bhadra, Rao and Baladandayuthapani (2018) developed a technique for network inference in the presence of multivariate data of mixed discrete and continuous type.
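The marginalization idea can be illustrated in a much simpler setting than the SSUR model: single-response Bayesian variable selection, where integrating out the regression coefficients analytically leaves a Gibbs sampler that moves only over the inclusion indicators. A hedged sketch (the use of Zellner's g-prior, the value `g = 100`, the implicit uniform prior over models, and the toy data are all illustrative assumptions, not the model of the papers above):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_marginal(y, X, gamma, g=100.0):
    """Log marginal likelihood log p(y | gamma), up to a constant, for a
    Gaussian linear model with a g-prior on the included coefficients.
    The coefficients are integrated out analytically -- this is what
    makes the resulting Gibbs sampler 'collapsed'."""
    n = len(y)
    yty = y @ y
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        rss, k = yty, 0
    else:
        Xg = X[:, idx]
        beta_hat, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        fit = y @ (Xg @ beta_hat)          # y' P y, projection onto span(Xg)
        rss = yty - (g / (1.0 + g)) * fit
        k = idx.size
    return -0.5 * k * np.log(1.0 + g) - 0.5 * n * np.log(rss)

def collapsed_gibbs(y, X, n_iter=200, rng=rng):
    """Single-site collapsed Gibbs over inclusion indicators gamma
    (uniform prior over models is implicit)."""
    p = X.shape[1]
    gamma = np.zeros(p, dtype=bool)
    for _ in range(n_iter):
        for j in range(p):
            lp = np.empty(2)
            for val in (0, 1):
                gamma[j] = val
                lp[val] = log_marginal(y, X, gamma)
            # P(gamma_j = 1 | rest), computed stably on the log scale
            prob1 = 1.0 / (1.0 + np.exp(np.clip(lp[0] - lp[1], -700, 700)))
            gamma[j] = rng.random() < prob1
    return gamma

# Toy data: only the first 2 of 8 predictors matter
n, p = 100, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
gamma = collapsed_gibbs(y, X)
```

Because the coefficients never appear as sampled quantities, the chain mixes over a discrete model space only, which is the same computational advantage the marginalization buys in the joint mean-covariance setting.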

**Selected Past Projects:**

**Iterated filtering and its
applications in modeling infectious disease dynamics:** Iterated
filtering is a simulation-based technique for maximum likelihood inference in hidden
Markov models with intractable likelihood. Particle filters (i.e., sequential Monte Carlo filters) are
used in iterated filtering to devise a stochastic approximation scheme that
converges to the maximum likelihood estimate of the model parameters. We
provide theoretical results on iterated filtering in Ionides et al. (2011), proving that the method
yields consistent estimates and showing it to be a special case of a broad
class of stochastic approximation techniques. We apply iterated filtering to
estimate parameters in a compartment model of epidemic malaria to capture the
spread of the disease in Northwest
India and answer scientific questions regarding the role of rainfall in the spread
of the epidemic in Bhadra et al. (2011) and Laneri et al. (2010). This is joint work
with Ed Ionides
and Mercedes Pascual, among others.
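The likelihood estimate at the heart of these methods comes from a bootstrap particle filter. A self-contained sketch on a linear-Gaussian toy model (the model and parameter values here are illustrative stand-ins for the epidemic compartment models, whose likelihoods are intractable):

```python
import numpy as np

def bootstrap_loglik(y, phi, sigma=1.0, tau=0.5, n_particles=1000, seed=0):
    """Bootstrap particle filter estimate of log p(y | phi) for the model
    x_t = phi * x_{t-1} + N(0, sigma^2),  y_t = x_t + N(0, tau^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n_particles)            # initial particle cloud
    loglik = 0.0
    for obs in y:
        x = phi * x + rng.normal(0.0, sigma, n_particles)       # propagate
        logw = (-0.5 * ((obs - x) / tau) ** 2                   # Gaussian
                - np.log(tau) - 0.5 * np.log(2 * np.pi))        # log-weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                 # log p(y_t | y_{1:t-1})
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # multinomial resample
    return loglik

# Simulate T = 100 observations from the model with phi = 0.8
rng = np.random.default_rng(42)
x, y = 0.0, []
for _ in range(100):
    x = 0.8 * x + rng.normal(0.0, 1.0)
    y.append(x + rng.normal(0.0, 0.5))
```

Evaluating `bootstrap_loglik(y, phi)` over a grid of `phi` values and maximizing is the brute-force version of what iterated filtering accomplishes by perturbing the parameters inside the filter itself.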

**Adaptive particle allocation for off-line iterated particle filters:**
In many off-line
sequential Monte Carlo (SMC) based techniques, the filter is used repeatedly to estimate
model parameters or the likelihood of the data. Examples include the iterated
filtering of Ionides et al. or the particle MCMC of Andrieu et al. In the
off-line setting, we formulate a way to minimize
the variance of the likelihood estimate resulting from SMC given a constraint on
the total number of particles, i.e., the available computing power. Results in Bhadra and Ionides (2016)
indicate up to 55% computational savings.
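The flavor of the budget-constrained optimization can be seen in a generic Neyman-style allocation: if step t contributes variance proportional to sigma_t^2 / J_t to the overall estimate, minimizing the sum subject to a fixed total budget gives J_t proportional to sigma_t. The sketch below uses this standard Lagrange-multiplier result for illustration only; it is an assumption that it resembles, and is not necessarily, the exact scheme of Bhadra and Ionides (2016):

```python
import numpy as np

def allocate_particles(var_contrib, total):
    """Allocate `total` particles across filter steps in proportion to the
    square root of each step's estimated variance contribution (e.g. from
    a cheap pilot run with equal particle counts). Assumes total >= #steps."""
    w = np.sqrt(np.asarray(var_contrib, dtype=float))
    alloc = np.floor(total * w / w.sum()).astype(int)
    alloc = np.maximum(alloc, 1)            # at least one particle per step
    # Hand any leftover budget to the highest-variance steps first
    order = np.argsort(-w)
    i = 0
    while alloc.sum() < total:
        alloc[order[i % len(w)]] += 1
        i += 1
    return alloc

# Pilot variance contributions 1 : 4 : 9 give an allocation ratio of 1 : 2 : 3
alloc = allocate_particles([1.0, 4.0, 9.0], 600)
```

The payoff is the same in spirit as the reported savings: for a fixed total particle count, an uneven allocation tracking the per-step variances yields a lower-variance likelihood estimate than splitting the budget evenly.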
