Probability mass function
[plot of the probability mass function]

Cumulative distribution function
[plot of the cumulative distribution function]

Parameters  n ∈ N_0 — number of trials
            α > 0 (real)
            β > 0 (real)
Support     k ∈ { 0, ..., n }
PMF         C(n,k) B(k+α, n−k+β) / B(α, β)
CDF         expressible in terms of the generalized hypergeometric function _3F_2
Mean        nα / (α+β)
Variance    nαβ(α+β+n) / [(α+β)²(α+β+1)]
Skewness    [(α+β+2n)(β−α) / (α+β+2)] · sqrt[(1+α+β) / (nαβ(n+α+β))]
Ex. kurtosis  See text
MGF         _2F_1(−n, α; α+β; 1−e^t)
CF          _2F_1(−n, α; α+β; 1−e^{it})
In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers, arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. The beta-binomial distribution is the binomial distribution in which the probability of success at each trial is not fixed but is random and follows the beta distribution. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial-type distributed data.
It reduces to the Bernoulli distribution as a special case when n = 1. For α = β = 1, it is the discrete uniform distribution from 0 to n. It also approximates the binomial distribution arbitrarily well for large α and β. The beta-binomial is a one-dimensional version of the Dirichlet-multinomial distribution, as the binomial and beta distributions are univariate versions of the multinomial and Dirichlet distributions, respectively.
The beta distribution is a conjugate distribution of the binomial distribution. This fact leads to an analytically tractable compound distribution, in which one can think of the parameter p in the binomial distribution as being randomly drawn from a beta distribution. Namely, if

    X | p ~ Bin(n, p),   p ~ Beta(α, β),

where Bin(n, p) stands for the binomial distribution and p is a random variable with a beta distribution, then the compound distribution is given by

    f(k | n, α, β) = ∫₀¹ Bin(k | n, p) Beta(p | α, β) dp
                   = C(n, k) · [1 / B(α, β)] · ∫₀¹ p^(k+α−1) (1 − p)^(n−k+β−1) dp
                   = C(n, k) · B(k + α, n − k + β) / B(α, β).

Using the properties of the beta function, this can alternatively be written

    f(k | n, α, β) = [Γ(n+1) / (Γ(k+1) Γ(n−k+1))] · [Γ(k+α) Γ(n−k+β) / Γ(n+α+β)] · [Γ(α+β) / (Γ(α) Γ(β))].
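As a sanity check on the compound derivation above, the closed-form pmf C(n,k) B(k+α, n−k+β)/B(α,β) can be compared against direct numerical integration of the binomial likelihood against the beta density. This is a minimal sketch; the parameter values (n = 10, α = 2, β = 3) are illustrative choices, not from the text.

```python
# Verify that integrating Bin(n, p) against a Beta(alpha, beta) prior over p
# reproduces the closed-form beta-binomial pmf.
from math import comb
from scipy.integrate import quad
from scipy.special import beta as B

n, a, b = 10, 2.0, 3.0   # illustrative values

def betabinom_pmf(k):
    """Closed-form compound distribution: C(n,k) B(k+a, n-k+b) / B(a, b)."""
    return comb(n, k) * B(k + a, n - k + b) / B(a, b)

def betabinom_pmf_by_integration(k):
    """Integrate the binomial pmf against the Beta(a, b) density over p in [0, 1]."""
    integrand = lambda p: comb(n, k) * p**k * (1 - p)**(n - k) \
                          * p**(a - 1) * (1 - p)**(b - 1) / B(a, b)
    val, _ = quad(integrand, 0.0, 1.0)
    return val

for k in range(n + 1):
    assert abs(betabinom_pmf(k) - betabinom_pmf_by_integration(k)) < 1e-10
```

The integral form and the beta-function form agree to numerical precision, and the pmf sums to 1 over the support.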
The beta-binomial distribution can also be motivated via an urn model for positive integer values of α and β, known as the Pólya urn model. Specifically, imagine an urn containing α red balls and β black balls, from which random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, then two black balls are returned to the urn. If this is repeated n times, then the probability of observing k red balls follows a beta-binomial distribution with parameters n, α and β.
Note that if the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a binomial distribution, and if the random draws are made without replacement, the distribution follows a hypergeometric distribution.
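The urn scheme is easy to simulate. The following sketch draws repeatedly from a Pólya urn and compares the empirical distribution of red-ball counts with the beta-binomial pmf (here via `scipy.stats.betabinom`); the parameter values and simulation size are illustrative choices.

```python
# Simulate the Polya urn model: each observed ball is returned together with one
# extra ball of the same colour. The count of red balls after n draws should
# follow a beta-binomial(n, alpha, beta) distribution.
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(0)
alpha, beta, n, sims = 2, 3, 5, 100_000   # illustrative values

counts = np.zeros(n + 1)
for _ in range(sims):
    red, black = alpha, beta   # initial urn contents
    k = 0
    for _ in range(n):
        if rng.random() < red / (red + black):
            red += 1           # red drawn: return it plus one extra red
            k += 1
        else:
            black += 1         # black drawn: return it plus one extra black
    counts[k] += 1

empirical = counts / sims
exact = betabinom.pmf(np.arange(n + 1), n, alpha, beta)
print(np.max(np.abs(empirical - exact)))   # small Monte Carlo error expected
```

With 100,000 replications the empirical frequencies match the exact pmf to within Monte Carlo noise.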
The first three raw moments are

    μ₁ = nα / (α + β)
    μ₂ = nα [n(1 + α) + β] / [(α + β)(1 + α + β)]
    μ₃ = [nα / (α + β)] · { (n−1)(n−2)(α+1)(α+2) / [(α+β+1)(α+β+2)] + 3(n−1)(α+1) / (α+β+1) + 1 }

and the kurtosis is

    [(α+β)²(1+α+β) / (nαβ(α+β+2)(α+β+3)(α+β+n))] · [(α+β)(α+β−1+6n) + 3αβ(n−2) + 6n² − 3αβn(6−n)/(α+β) − 18αβn²/(α+β)²].

Letting π = α / (α + β), we note, suggestively, that the mean can be written as

    μ = nα / (α + β) = nπ

and the variance as

    σ² = nαβ(α+β+n) / [(α+β)²(α+β+1)] = nπ(1−π) [1 + (n−1)ρ],

where ρ = 1 / (α + β + 1). The parameter ρ is known as the "intra-class" or "intra-cluster" correlation. It is this positive correlation which gives rise to overdispersion.
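The reparameterized mean and variance can be checked against a library implementation. This sketch compares the formulas μ = nπ and σ² = nπ(1−π)[1+(n−1)ρ] with the moments reported by `scipy.stats.betabinom`; the parameter values are illustrative.

```python
# Check mu = n*pi and var = n*pi*(1-pi)*(1 + (n-1)*rho), with pi = a/(a+b)
# and rho = 1/(a+b+1), against scipy's beta-binomial moments.
from scipy.stats import betabinom

n, a, b = 12, 34.135, 31.608   # illustrative values
pi = a / (a + b)
rho = 1.0 / (a + b + 1.0)

mean_formula = n * pi
var_formula = n * pi * (1 - pi) * (1 + (n - 1) * rho)

mean_scipy, var_scipy = betabinom.stats(n, a, b, moments='mv')
assert abs(mean_formula - float(mean_scipy)) < 1e-8
assert abs(var_formula - float(var_scipy)) < 1e-8
```

Note that 1 + (n−1)ρ > 1 whenever n > 1, so the variance always exceeds the binomial variance nπ(1−π), which is the overdispersion described above.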
The method-of-moments estimates can be obtained by noting the first and second raw moments of the beta-binomial, namely

    μ₁ = nα / (α + β)
    μ₂ = nα [n(1 + α) + β] / [(α + β)(1 + α + β)],

setting these raw moments equal to the first and second raw sample moments m₁ and m₂ respectively, and solving for α and β:

    α̂ = (n m₁ − m₂) / (n (m₂/m₁ − m₁ − 1) + m₁)
    β̂ = (n − m₁)(n − m₂/m₁) / (n (m₂/m₁ − m₁ − 1) + m₁).

Note that these estimates can be nonsensically negative, which is evidence that the data are either undispersed or underdispersed relative to the binomial distribution. In this case, the binomial distribution and the hypergeometric distribution are alternative candidates, respectively.
While closed-form maximum likelihood estimates are impractical, given that the pdf consists of common functions (gamma and/or beta functions), the estimates can easily be found via direct numerical optimization. Maximum likelihood estimates from empirical data can be computed using general methods for fitting multinomial Pólya distributions, as described in Minka (2003). The R package VGAM, through the function vglm, facilitates the fitting of GLM-type models with responses distributed according to the beta-binomial distribution by maximum likelihood. Note also that there is no requirement that n be fixed throughout the observations.
The following data give the number of male children among the first 12 children in families of size 13, for 6115 families taken from hospital records in 19th-century Saxony (Sokal and Rohlf, p. 59, from Lindsey). The 13th child is ignored to mitigate the effect of families non-randomly stopping when a desired gender is reached.
Males  0  1  2  3  4  5  6  7  8  9  10  11  12 
Families  3  24  104  286  670  1033  1343  1112  829  478  181  45  7 
We note the first two sample moments are

    m₁ = 6.23058
    m₂ = 42.3094

and therefore the method-of-moments estimates are

    α̂ = 34.135
    β̂ = 31.608
The maximum likelihood estimates can be found numerically

    α̂_ml ≈ 34.096
    β̂_ml ≈ 31.572

and the maximized log-likelihood is

    log L ≈ −12492.9

from which we find the AIC

    AIC = 24989.74
The AIC for the competing binomial model is AIC = 25070.34, and thus we see that the beta-binomial model provides a superior fit to the data, i.e. there is evidence for overdispersion. Trivers and Willard posit a theoretical justification for heterogeneity (also known as "burstiness") in gender-proneness among mammalian offspring (i.e. overdispersion).
The superior fit is especially evident in the tails:
Males  0  1  2  3  4  5  6  7  8  9  10  11  12 
Observed Families  3  24  104  286  670  1033  1343  1112  829  478  181  45  7 
Fitted Expected (Beta-Binomial)  2.3  22.6  104.8  310.9  655.7  1036.2  1257.9  1182.1  853.6  461.9  177.9  43.8  5.2 
Fitted Expected (Binomial p = 0.519215)  0.9  12.1  71.8  258.5  628.1  1085.2  1367.3  1265.6  854.2  410.0  132.8  26.1  2.3 
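The fit above can be reproduced from the observed table. The sketch below computes the sample moments and method-of-moments estimates using the formulas given earlier, then finds the maximum likelihood estimates by direct numerical optimization of the beta-binomial log-likelihood (here with `scipy.stats.betabinom` and `scipy.optimize.minimize`, one possible choice of tools).

```python
# Fit the beta-binomial to the Saxony sex-ratio data by the method of moments
# and by maximum likelihood, and compare AICs with the competing binomial model.
import numpy as np
from scipy.stats import betabinom, binom
from scipy.optimize import minimize

males = np.arange(13)   # number of male children among the first 12
families = np.array([3, 24, 104, 286, 670, 1033, 1343, 1112, 829, 478, 181, 45, 7])
n, N = 12, families.sum()   # trials per family, number of families (6115)

# First and second raw sample moments.
m1 = (males * families).sum() / N
m2 = (males**2 * families).sum() / N

# Method-of-moments estimates.
denom = n * (m2 / m1 - m1 - 1) + m1
a_mom = (n * m1 - m2) / denom
b_mom = (n - m1) * (n - m2 / m1) / denom

# Maximum likelihood: optimize over log(alpha), log(beta) to keep both positive.
def negloglik(theta):
    a, b = np.exp(theta)
    return -(families * betabinom.logpmf(males, n, a, b)).sum()

res = minimize(negloglik, x0=np.log([a_mom, b_mom]), method='Nelder-Mead')
a_ml, b_ml = np.exp(res.x)
aic_bb = 2 * 2 + 2 * res.fun   # two parameters

# Competing binomial model (one parameter, p estimated by m1/n).
p_hat = m1 / n
aic_bin = 2 * 1 - 2 * (families * binom.logpmf(males, n, p_hat)).sum()
print(m1, m2, a_mom, b_mom, a_ml, b_ml, aic_bb, aic_bin)
```

The lower AIC of the beta-binomial model confirms the overdispersion relative to the binomial fit.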
It is convenient to reparameterize the distributions so that the expected mean of the prior is a single parameter: Let

    π(θ | μ, M) = Beta(Mμ, M(1 − μ))

where

    μ = α / (α + β)   (the mean of the prior)
    M = α + β         (the "sample size" of the prior)

so that

    α = Mμ,   β = M(1 − μ).

The posterior distribution π(θ | k) is also a beta distribution:

    π(θ | k) ∝ ℓ(k | θ) π(θ | μ, M) = Beta(k + Mμ, n − k + M(1 − μ)),

and its mean is

    E[θ | k] = (k + Mμ) / (n + M),

while the marginal distribution m(k | μ, M) is given by

    m(k | μ, M) = ∫₀¹ ℓ(k | θ) π(θ | μ, M) dθ = C(n, k) B(k + Mμ, n − k + M(1 − μ)) / B(Mμ, M(1 − μ)).

Substituting back M and μ, in terms of α and β, this becomes

    m(k | α, β) = C(n, k) B(k + α, n − k + β) / B(α, β),

which is the expected beta-binomial distribution with parameters α and β.
We can also use the method of iterated expectations to find the expected value of the marginal moments. Let us write our model as a two-stage compound sampling model. Let k_i be the number of successes out of n_i trials for event i:

    k_i | θ_i ~ Bin(n_i, θ_i)
    θ_i ~ Beta(Mμ, M(1 − μ)), i.i.d.

We can find iterated moment estimates for the mean and variance using the moments for the distributions in the two-stage model:

    E[k_i] = E[ E[k_i | θ_i] ] = E[n_i θ_i] = n_i μ

    Var(k_i) = E[ Var(k_i | θ_i) ] + Var( E[k_i | θ_i] )
             = E[ n_i θ_i (1 − θ_i) ] + Var( n_i θ_i )
             = n_i μ(1 − μ) [1 + (n_i − 1) ρ],   where ρ = 1 / (M + 1).

(Here we have used the law of total expectation and the law of total variance.)
We want point estimates for μ and M. The estimated mean μ̂ is calculated from the sample:

    μ̂ = Σᵢ kᵢ / Σᵢ nᵢ.

The estimate of the hyperparameter M is obtained using the moment estimates for the variance of the two-stage model:
Solving:
where
Since we now have parameter point estimates μ̂ and M̂ for the underlying distribution, we would like to find a point estimate θ̃_i for the probability of success for event i. This is the weighted average of the raw event estimate k_i / n_i and the prior mean μ̂. Given our point estimates for the prior, we may now plug in these values to find a point estimate for the posterior:

    θ̃_i = E[θ | k_i] = (k_i + M̂ μ̂) / (n_i + M̂).

We may write the posterior estimate as a weighted average:

    θ̃_i = B̂_i μ̂ + (1 − B̂_i) (k_i / n_i),

where B̂_i = M̂ / (M̂ + n_i) is called the shrinkage factor.
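The shrinkage behaviour is easy to see numerically: events with few trials are pulled strongly toward the prior mean, while events with many trials are barely moved. This is a minimal sketch; the values of μ, M and the two events are illustrative, not from the text.

```python
# Empirical-Bayes point estimate for event i as a weighted average of the prior
# mean mu and the raw proportion k_i/n_i; M = alpha + beta acts as a prior
# "sample size".
def shrinkage_estimate(k_i, n_i, mu, M):
    """Posterior mean (k_i + M*mu) / (n_i + M), written as a weighted average."""
    B_i = M / (M + n_i)   # shrinkage factor
    return B_i * mu + (1 - B_i) * (k_i / n_i)

mu, M = 0.5, 20.0   # illustrative prior
small = shrinkage_estimate(3, 4, mu, M)       # raw proportion 0.75, few trials
large = shrinkage_estimate(300, 400, mu, M)   # raw proportion 0.75, many trials
print(small, large)
```

Both events have raw proportion 0.75, but the small event's estimate is shrunk most of the way toward μ = 0.5, while the large event's estimate stays close to 0.75; the weighted-average form agrees exactly with (k_i + Mμ)/(n_i + M).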