**Selected Ongoing Projects:**

**Beyond Matérn: the Confluent Hypergeometric covariance function for Gaussian process models:** One of my most recent research interests concerns Gaussian process (GP) models, which appear in at least three distinct areas of great contemporary interest: as models for spatial and spatiotemporal processes, as surrogate models for computer experiments, and as limits of deep neural networks. My first work in this area concerns the design of a covariance function. The Matérn covariance function remains very popular in spatial statistics, in part because of the control it affords the user over the mean squared differentiability of the GP realizations. However, the Matérn covariance possesses an exponentially decaying tail, which may not be the best choice in situations where distant observations can display high correlations. This problem can of course be remedied by using Cauchy or rational quadratic covariances, but at a great cost: the control over smoothness is completely lost! Ma and Bhadra (2023, JASA) design a new covariance class, called the *Confluent Hypergeometric (CH)* class, as a mixture of the Matérn class that allows simultaneous flexibility in smoothness and polynomial tail decay via two distinct parameters. A key observation is a connection between the Matérn class and the normalizing constant of the generalized inverse Gaussian distribution of Barndorff-Nielsen. Yarger and Bhadra (2023, arXiv) provide valid multivariate generalizations of the CH covariance function. Fang and Bhadra (2023, arXiv) demonstrate that Gaussian process priors with rescaled Matérn and CH covariance functions achieve the nonparametric minimax rate in estimation, even when the smoothness of the true function and that of the covariance function do not match. All of the above papers consider geospatial applications.
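For reference, a standard parameterization of the Matérn covariance is (the notation below, with range parameter $\ell$, is one common convention, not necessarily the paper's):

```latex
C_\nu(d) \;=\; \sigma^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)}
\left( \frac{\sqrt{2\nu}\, d}{\ell} \right)^{\!\nu}
K_\nu\!\left( \frac{\sqrt{2\nu}\, d}{\ell} \right),
```

where $K_\nu$ is the modified Bessel function of the second kind: $\nu$ controls the mean squared differentiability, and the exponential decay of $K_\nu$ for large arguments produces the exponentially decaying tail noted above. The CH class, roughly speaking, mixes this kernel over its range parameter, replacing the exponential tail with a polynomial one while retaining the smoothness parameter $\nu$.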

**Non-Gaussian infinite-width limits of Bayesian neural networks:** Since the early work of Neal (1996), it has been well known that a Bayesian neural network with one hidden layer converges to a Gaussian process in the infinite-width scaling limit, *provided the network weights have bounded prior variance*. The tractable properties of Gaussian processes then allow straightforward posterior uncertainty quantification. Neural network weights with unbounded variance, however, pose unique challenges. In this case, the classical central limit theorem breaks down, and it is well known that the scaling limit is an α-stable process under suitable conditions. However, the current literature is primarily limited to forward simulations under these processes, and the problem of posterior inference under such a scaling limit remains largely unaddressed, unlike in the Gaussian process case. Loría and Bhadra (2024, UAI) provide a computationally feasible approach for fully probabilistic posterior uncertainty quantification in this setting for a network with one hidden layer. Extension to networks with multiple hidden layers via deep α-stable kernel machines is ongoing work.
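The bounded-variance Gaussian limit is easy to verify numerically. The sketch below (a generic illustration with a hypothetical architecture and i.i.d. standard normal weights, not the paper's model) draws a one-hidden-layer network at a fixed input many times and checks that the output distribution is approximately Gaussian by its excess kurtosis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 1_000   # hidden-layer width; the CLT kicks in as this grows
n_draws = 4_000    # number of independent prior draws of the network

x = np.array([0.7, -1.2])  # an arbitrary fixed input point

# f(x) = (1/sqrt(n_hidden)) * sum_j v_j * tanh(w_j . x),
# with all weights i.i.d. N(0, 1) -- bounded prior variance.
W = rng.standard_normal((n_draws, n_hidden, 2))
v = rng.standard_normal((n_draws, n_hidden))
f = (v * np.tanh(W @ x)).sum(axis=1) / np.sqrt(n_hidden)

# Under the CLT the draws of f(x) are near-Gaussian, so the
# excess kurtosis should be close to 0 (it is O(1/n_hidden)).
m, s = f.mean(), f.std()
excess_kurtosis = ((f - m) ** 4).mean() / s**4 - 3.0
```

With heavy-tailed (infinite-variance) weight priors the same experiment instead produces markedly non-Gaussian, heavy-tailed draws, which is the α-stable regime discussed above.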

**Computationally efficient inference and uncertainty quantification in probabilistic graphical models:** Probabilistic graphical models are a longstanding interest of mine, and I have several ongoing projects in this area. Sagar et al. (2024, EJS) establish posterior concentration results under global-local shrinkage priors on precision matrices, including the graphical horseshoe. Bhadra et al. (2022, arXiv) provide a resolution to the problem of computing the evidence, or marginal likelihood, in Gaussian graphical models (GGMs) for a class of priors considerably broader than what was previously feasible. Sagar et al. (2024, Stat) develop a fast MAP estimation procedure using a novel local linear approximation scheme for GGMs. Likelihood-based inference in probabilistic graphical models with intractable likelihood, including partially observed cases such as Boltzmann machines, is developed by Chen et al. (2024, arXiv). See also other related papers on my webpage.
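The "evidence" referred to here is the marginal likelihood. In standard GGM notation (generic definitions, not specific to the cited paper), for $n$ i.i.d. samples from $\mathcal{N}_p(0, \Omega^{-1})$ with sample covariance $S$:

```latex
p(\mathbf{Y} \mid \Omega) \;\propto\; |\Omega|^{n/2}
\exp\!\left\{ -\tfrac{n}{2}\operatorname{tr}(S\Omega) \right\},
\qquad
m(\mathbf{Y}) \;=\; \int_{\mathcal{M}_G^{+}} p(\mathbf{Y} \mid \Omega)\, \pi(\Omega)\, d\Omega,
```

where $\mathcal{M}_G^{+}$ is the cone of positive-definite matrices with zeros dictated by the graph $G$. This integral is intractable for most prior choices $\pi(\Omega)$, which is the computational obstacle in evidence computation.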

**Selected Past Projects:**

**Global-local shrinkage and the horseshoe+ estimator:** The horseshoe estimator of Carvalho, Polson and Scott (2010, Biometrika) was one of the first works to demonstrate the power of "global-local" shrinkage in ultra-sparse Bayesian variable selection problems. Since then, multiple attractive theoretical properties of this estimator have been discovered. In a collaborative work with Nick Polson, Jyotishka Datta and Brandon Willard, we propose a new estimator, termed the "horseshoe+ estimator," that improves upon the horseshoe, both theoretically and empirically. Bhadra et al. (2017, BA) give the details. It also appears that global-local shrinkage priors are good candidates for default priors for low-dimensional, nonlinear functions in a normal means model, where the so-called "flat" priors fail. Bhadra et al. (2016, Biometrika) demonstrate their use in a few such problems. Bhadra et al. (2020a, Sankhya A) demonstrate the use of two integral identities for generating global-local mixtures. Bhadra et al. (2019a, JMLR) formally demonstrate that the prediction performance for the class of global shrinkage regression methods (ridge regression, principal components regression etc.) can be improved by using local, component-specific shrinkage parameters. Bhadra et al. (2020b, Sankhya B) derive fast computational algorithms to perform feature selection using the non-convex horseshoe regularization penalty. Li, Craig and Bhadra (2019, JCGS) propose the use of the horseshoe prior in estimating the precision matrix for multivariate Gaussian data, and Sagar et al. (2022+) provide relevant results on posterior concentration and Bayes-frequentist duality. Bhadra et al. (2019b, Stats. Sci.) is a review article summarizing the important developments in global-local shrinkage methods in linear models in the past decade. Bhadra et al. (2020c, ISR) is another review article focusing on recently emerging uses of horseshoe regularization in modern machine learning applications, specifically in complex and deep models.
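In the sparse normal means setting, the hierarchies take the following commonly cited form (reproduced here from the standard formulations; the papers above should be consulted for the exact versions used):

```latex
\begin{aligned}
(y_i \mid \theta_i) &\sim \mathcal{N}(\theta_i, \sigma^2), &
(\theta_i \mid \lambda_i, \tau) &\sim \mathcal{N}(0, \lambda_i^2 \tau^2),\\
\text{horseshoe:}\quad \lambda_i &\sim \mathcal{C}^{+}(0,1), &
\text{horseshoe+:}\quad (\lambda_i \mid \eta_i) &\sim \mathcal{C}^{+}(0,\eta_i),
\quad \eta_i \sim \mathcal{C}^{+}(0,1),
\end{aligned}
```

where $\mathcal{C}^{+}(0,1)$ is the standard half-Cauchy distribution, $\tau$ is the "global" shrinkage parameter (itself typically given a half-Cauchy prior), and the $\lambda_i$ are the "local" parameters. The extra half-Cauchy layer in the horseshoe+ further fattens the tails of the local scales, which drives its improved behavior in ultra-sparse problems.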

**Bayesian models for joint mean-covariance estimation and for mixed discrete-continuous data:** Bayesian variable selection and covariance selection were long treated separately in the statistics literature. We perform a combined analysis in the context of a Gaussian sparse seemingly unrelated regression (SSUR) model to jointly infer the important sparse set of predictors as well as the important sparse set of non-zero partial correlations among the responses. We apply our technique to expression quantitative trait loci (eQTL) analysis, where the expression level of a gene (response) is typically affected by a set of important SNPs (predictors) and the responses exhibit conditional dependence among themselves. Both the number of predictors and the number of correlated responses routinely exceed the sample size. We find that a marginalization-based collapsed Gibbs sampler offers a computationally efficient solution. The first ideas appeared in Bhadra and Mallick (2013, Biometrics). Building on that, Feldman, Bhadra and Kirshner (2014, Stat) found a way to relax the restriction to decomposable graphs. Bhadra and Baladandayuthapani (2013, GENSIPS) is an application of the methodology to brain cancer (glioblastoma) data. Bhadra, Rao and Baladandayuthapani (2018, Biometrics) developed a technique to perform network inference in the presence of multivariate data of mixed discrete and continuous nature, and Chakraborty et al. (2022+) extended it to chain graph models for multiplatform genomic data in lung cancer. Li et al. (2021, JMVA) perform joint mean-covariance estimation in SUR models combining the horseshoe and graphical horseshoe priors.
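The generic SUR-type model underlying this joint analysis can be written as (a generic form; the sparsity priors on $B$ and $\Omega$ are the contribution of the cited papers):

```latex
Y = XB + E, \qquad E_{i\cdot} \overset{\text{i.i.d.}}{\sim} \mathcal{N}_q(0, \Sigma),
\quad i = 1, \dots, n,
```

where variable selection acts on the entries of the coefficient matrix $B$, and zeros in the precision matrix $\Omega = \Sigma^{-1}$ encode the conditional independence structure among the $q$ responses; the combined analysis infers both sparsity patterns at once.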

**Iterated filtering and its applications in modeling infectious disease dynamics:** Iterated filtering is a simulation-based technique for maximum likelihood inference in hidden Markov models with intractable likelihood. Particle filters (i.e., sequential Monte Carlo filters) are used in iterated filtering to devise a stochastic approximation scheme that converges to the maximum likelihood estimate of the model parameters. We provide theoretical results on iterated filtering in Ionides et al. (2011, AoS), proving that the method yields consistent estimates and showing it to be a special case of a broad class of stochastic approximation techniques. We apply iterated filtering to estimate parameters in a compartment model of epidemic malaria, capturing the spread of the disease in Northwest India and answering scientific questions regarding the role of rainfall in the spread of the epidemic, in Bhadra et al. (2011, JASA) and Laneri et al. (2010, PLoS Comp. Biol.). This is joint work with Ed Ionides and Mercedes Pascual, among others.
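Iterated filtering builds on repeated particle-filter evaluations of the likelihood. A minimal bootstrap particle filter on a toy linear-Gaussian state-space model (purely illustrative; the model and parameter values are hypothetical, not the malaria model above) might look like:

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Bootstrap particle filter for the toy model
       x_1 ~ N(0, sigma_x^2),  x_t = 0.9 x_{t-1} + N(0, sigma_x^2),
       y_t = x_t + N(0, sigma_y^2),
    returning an unbiased estimate of log p(y_{1:T})."""
    rng = rng or np.random.default_rng()
    x = rng.normal(0.0, sigma_x, n_particles)  # particles at t = 1
    loglik = 0.0
    for t, yt in enumerate(y):
        if t > 0:  # propagate particles through the state equation
            x = 0.9 * x + rng.normal(0.0, sigma_x, n_particles)
        # weight each particle by the observation density N(y_t; x_t, sigma_y^2)
        logw = -0.5 * ((yt - x) / sigma_y) ** 2 \
               - 0.5 * np.log(2.0 * np.pi * sigma_y**2)
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())  # log of the average weight
        # multinomial resampling to fight particle degeneracy
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]
    return loglik

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.5, size=20)            # synthetic observations
ll = bootstrap_filter(y, n_particles=1000, rng=rng)
```

Iterated filtering embeds such a filter in a stochastic approximation loop, perturbing the model parameters at each pass and shrinking the perturbations so the parameter estimates converge to the maximizer of the likelihood.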

**Adaptive particle allocation for off-line iterated particle filters:** In many off-line sequential Monte Carlo (SMC) based techniques, the filter is applied repeatedly to estimate model parameters or the likelihood of the data. Examples include the iterated filtering of Ionides et al. and the particle MCMC of Andrieu et al. In the off-line setting, we formulate a way to minimize the variance of the likelihood estimate resulting from SMC given a constraint on the total number of particles, i.e., the available computing power. Results in Bhadra and Ionides (2016, Stats. and Computing) indicate up to 55% computational savings.
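As a generic illustration of budget-constrained variance minimization (a textbook Neyman-style square-root allocation, not the algorithm of Bhadra and Ionides, and with hypothetical numbers): if stage $t$ contributes variance $\sigma_t^2 / n_t$ when given $n_t$ particles, then minimizing $\sum_t \sigma_t^2 / n_t$ subject to $\sum_t n_t = N$ gives $n_t \propto \sigma_t$ by Cauchy-Schwarz.

```python
import numpy as np

def allocate(sigmas, total):
    """Minimize sum_t sigma_t^2 / n_t subject to sum_t n_t = total.
    By Cauchy-Schwarz the optimum allocates n_t proportional to sigma_t."""
    sigmas = np.asarray(sigmas, dtype=float)
    return total * sigmas / sigmas.sum()

sigmas = np.array([4.0, 1.0, 1.0])  # hypothetical per-stage std. deviations
total = 600.0                       # total particle budget

n_opt = allocate(sigmas, total)                            # -> [400, 100, 100]
var_opt = (sigmas**2 / n_opt).sum()                        # variance, adaptive
var_uniform = (sigmas**2 / (total / sigmas.size)).sum()    # variance, uniform
```

Here the adaptive split cuts the total variance relative to splitting the budget uniformly, which is the kind of saving the adaptive allocation scheme above targets.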
