Remembering Professor Herman Rubin
Writer(s): Anirban DasGupta
Herman Rubin, Professor of Statistics and Mathematics at Purdue University, passed away in West Lafayette, IN on April 23, 2018; he was 91. Herman Rubin was among the last of the great polymaths of the twentieth century. To all who knew him or had heard about him, Herman was an inexplicable outlier in numerous ways. His unique ability to understand a new problem and arrive at the answer almost instantly baffled even the wittiest mathematicians. He never forgot a fact or a theorem or a proof that he had ever seen in his life. He would solve a complete stranger’s problem without ever expecting co-authorship or anything in return. He would fight for someone who opposed him at every corner. He would stand on his principles to the last drop of blood in his body. Herman Rubin’s death brings near the end of a unique era following the Second World War, one that saw the simultaneous emergence of a group of supremely talented statisticians who would mold the foundations of the subject for decades to come.
Herman obtained his PhD from the University of Chicago at a very young age; he was a student of Paul Halmos. After a stint at the Cowles Commission, he formed a productive intellectual affinity with Ted Anderson, Charles Stein, and Ingram Olkin; around the same time, he also became professionally close to David Blackwell and Meyer Girshick. With Ted Anderson, he wrote two phenomenal papers on fundamental multivariate analysis that worked out the fixed-sample as well as asymptotic distribution theory of MLEs in factor analysis models and structural equation models. These results have entered all standard multivariate analysis and econometrics texts and have remained there for more than half a century. Herman’s most famous and classic contribution to inference is the widely used and fundamental idea of monotone likelihood ratio families. Anyone who has ever taken a course on testing of hypotheses knows the fundamental importance of the idea and the results in the 1956 paper with Samuel Karlin. It was this work that led to Sam Karlin’s hugely influential TP2 and variation-diminishing families, with shadows of Isaac Schoenberg and Bill Studden lurking in the background.
Following this period of fundamental papers in statistics, we see a drift. He makes novel entries into various aspects of probability and asymptotics. With J. Sethuraman, he develops the theory of moderate deviations. With Herman Chernoff, he attacks the then completely novel problem of estimating locations of singularities, where the asymptotics are completely new. With Prakasa Rao as his student, he gets into the problem of cube-root asymptotics for monotone densities. With C. R. Rao, he gives the classic Rao-Rubin characterization theorem. And, to many, the jewel in the crown is the invention of the idea of a Stratonovich integral.
Herman really did enjoy particular problems, as long as they were not mundane. Classic examples are his quite shocking papers with Rick Vitale showing that, degeneracies aside, sets of independent events characterize an underlying probability measure; the pretty work with Jeesen Chen and Burgess Davis on how nonuniform a uniform sample can look to the eye; the work with Tom Sellke on roots of smooth characteristic functions; the Bayesian formulation of quality control with Meyer Girshick; the hilariously bizarre but hard problem of estimating a rational mean; his papers with Andrew Rukhin on the positive normal mean; the work on the notorious binomial N problem; and Bayesian robustness of frequentist nonparametric tests. There are others. Herman never thought about who would cite or read a result; if he wanted to solve a problem, he did it.
Herman was probably one of the few lifelong Bayesians; but a purely axiomatic Bayesian. He really did take most of Savage’s theory and axioms literally, though he expanded on them later. An expansion was published in Statistics and Decisions (to my knowledge, with extremely active help from Jim Berger). He would not budge an inch from his conviction that the likelihood and the prior are inseparable. He would refuse to discuss what an appropriate loss function is; he would insist you ask the client. He would nevertheless want to see the full risk function of a procedure, and would study Bayes through the lens of Bayes risk, even exclusively Bayes risk, namely the double integral. On asymptotic behavior of procedures, he did not appear to care for second-order terms. Time and again he showed concern only for calculating a limit. A glorious example of this is his work with J. Sethuraman on efficiency defined through Bayes risks; this was so novel that it entered the classic asymptotics text of Robert Serfling. He came back to it many years later in joint work with Kai-Sheng Song in an Annals of Statistics article.
In certain ways, Herman came across as self-contradictory. He would publicly say only Bayes procedures should be used, but he would oppose the use of a single prior tooth and nail. He would be technically interested in the robustness of traditional frequentist procedures, although he would portray them as coming out of wrong formulations. Well-known examples are his well-cited papers with Joe Gastwirth on the performance of the t-test under dependence. He had no personal desire to burn the midnight oil writing a comprehensive review of some area, but he would be an invaluable asset to someone who was writing one. An example is his review of infinitely divisible distributions with Arup Bose (and this writer). An all-time classic is his text Equivalents of the Axiom of Choice, jointly written with his wife Jean Rubin. Jim Berger thanks Herman profusely in the preface of his classic Springer book on decision theory and Bayesian analysis; Charles Stein acknowledges Herman and Herbert Robbins in his first shrinkage paper.
There was a fairly long period in the history of statistics when nearly every paper written in his home department had Herman’s contributions in it. He never asked for or got credit for them. He defined the term scholar in its literal dictionary sense. With the passing of Herman Rubin, a shining beacon of knowledge and wisdom has just moved on. Herman was an absolute and consummate master of simulation, characteristic functions, and infinitely divisible distributions. He kept to himself a mountain of facts and results on these and other topics that never saw the light of day. There was never a person who did not respect Herman Rubin’s brain; even Paul Erdős did. He was an inaugural Fellow of the AMS and a Fellow of the IMS. Herman had sophisticated taste in music and literature. He was often seen at classical concerts and operas. He helped mathematical causes financially. Herman was probably one of the very few people anywhere who could work out the NY Times crossword puzzle on any day in about an hour. Herman is survived by his son Arthur and daughter Leonore.