The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.
Probability mass function
In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0, 1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function

$$f(k; n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$
for k = 0, 1, 2, ..., n, where

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: k successes occur with probability $p^k$ and n − k failures occur with probability $(1-p)^{n-k}$. However, the k successes can occur anywhere among the n trials, and there are $\binom{n}{k}$ different ways of distributing k successes in a sequence of n trials.
In creating reference tables for binomial distribution probability, usually the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated via its complement as

$$f(k; n, p) = f(n-k; n, 1-p).$$
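These identities are easy to check numerically. The following sketch (plain Python; the values n = 10 and p = 0.3 are illustrative) evaluates the mass function directly and verifies the complement relation.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
for k in range(n + 1):
    # Complement relation used when filling tables only up to n/2:
    assert abs(binom_pmf(k, n, p) - binom_pmf(n - k, n, 1 - p)) < 1e-12

print(binom_pmf(4, n, p))  # approximately 0.2001
```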
Looking at the expression f(k, n, p) as a function of k, there is a value of k that maximizes it. This value can be found by calculating

$$\frac{f(k+1; n, p)}{f(k; n, p)} = \frac{(n-k)\,p}{(k+1)(1-p)}$$

and comparing it to 1. There is always an integer M that satisfies

$$(n+1)p - 1 \le M < (n+1)p.$$

f(k, n, p) is monotone increasing for k < M and monotone decreasing for k > M, with the exception of the case where (n + 1)p is an integer. In that case there are two values for which f is maximal: (n + 1)p and (n + 1)p − 1. M is the most probable (most likely) outcome of the Bernoulli trials and is called the mode. Note that the probability of it occurring can be fairly small.
If X ~ B(n, p), that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is

$$\operatorname{E}[X] = np.$$
For example, if n = 100, and p = 1/4, then the average number of successful results will be 25.
Proof: We calculate the mean, μ, directly from its definition

$$\mu = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k} = np \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} (1-p)^{(n-1)-(k-1)} = np\,\bigl(p + (1-p)\bigr)^{n-1} = np,$$

where the second equality uses $k\binom{n}{k} = n\binom{n-1}{k-1}$ and the third is the binomial theorem.
Proof: Let $X = X_1 + \cdots + X_n$, where the $X_i$ are independent Bernoulli(p) random variables. Since $\operatorname{E}[X_i] = p$ for each i, we get by linearity of expectation:

$$\operatorname{E}[X] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.$$
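As a quick numerical illustration of the n = 100, p = 1/4 example above, one can compare a simulated sample mean against np (a sketch assuming NumPy is available; the sample size of one million is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 0.25
samples = rng.binomial(n, p, size=1_000_000)
print(samples.mean())   # close to np = 25
print(n * p)            # 25.0
```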
Usually the mode of a binomial B(n, p) distribution is equal to $\lfloor (n+1)p \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode is 0 or n, respectively. These cases can be summarized as follows:

$$\text{mode} = \begin{cases} \lfloor (n+1)p \rfloor & \text{if } (n+1)p \text{ is 0 or a noninteger}, \\ (n+1)p \text{ and } (n+1)p - 1 & \text{if } (n+1)p \in \{1, \dots, n\}, \\ n & \text{if } (n+1)p = n + 1. \end{cases}$$
For p = 0, only f(0) has a nonzero value, with f(0) = 1. For p = 1 we find f(n) = 1 and f(k) = 0 for k ≠ n. This proves that the mode is 0 for p = 0 and n for p = 1.
Let 0 < p < 1. We find

$$\frac{f(k+1)}{f(k)} = \frac{(n-k)\,p}{(k+1)(1-p)}.$$

From this follows

$$\begin{aligned} k > (n+1)p - 1 &\Rightarrow f(k+1) < f(k), \\ k = (n+1)p - 1 &\Rightarrow f(k+1) = f(k), \\ k < (n+1)p - 1 &\Rightarrow f(k+1) > f(k). \end{aligned}$$

So when (n + 1)p − 1 is an integer, both (n + 1)p − 1 and (n + 1)p are modes. When (n + 1)p − 1 is not an integer, only $\lfloor (n+1)p \rfloor$ is a mode.
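The mode formula can be checked against a brute-force argmax of the mass function. The sketch below assumes SciPy is available and uses arbitrary illustrative parameters:

```python
from math import floor
from scipy.stats import binom

n, p = 11, 0.4
pmf = [binom.pmf(k, n, p) for k in range(n + 1)]
brute_mode = max(range(n + 1), key=lambda k: pmf[k])
# (n + 1) p = 4.8 is not an integer, so the mode is floor((n + 1) p) = 4.
print(brute_mode, floor((n + 1) * p))   # both print 4
```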
In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However several special results have been established:
If np is an integer, then the mean, median, and mode coincide and equal np.
Any median m must lie within the interval ⌊np⌋ ≤ m ≤ ⌈np⌉.
A median m cannot lie too far away from the mean: |m − np| ≤ min{ln 2, max{p, 1 − p}}.
The median is unique and equal to m = round(np) in cases when either p ≤ 1 − ln 2, or p ≥ ln 2, or |m − np| ≤ min{p, 1 − p} (except for the case when p = 1/2 and n is odd).
When p = 1/2 and n is odd, any number m in the interval ½(n − 1) ≤ m ≤ ½(n + 1) is a median of the binomial distribution. If p = 1/2 and n is even, then m = n/2 is the unique median.
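Since no general closed form exists, in practice a median can be read off the cumulative distribution function as the smallest m with F(m) ≥ 1/2. A sketch assuming SciPy, with illustrative parameters:

```python
from scipy.stats import binom

def binom_median(n, p):
    """Smallest m with Pr(X <= m) >= 1/2."""
    m = 0
    while binom.cdf(m, n, p) < 0.5:
        m += 1
    return m

print(binom_median(100, 0.25))  # 25, equal to round(np) since np is an integer
print(binom_median(7, 0.5))     # 3; any m in [3, 4] is a median when p = 1/2 and n is odd
```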
Covariance between two binomials
If two binomially distributed random variables X and Y are observed together, estimating their covariance can be useful. The covariance is

$$\operatorname{Cov}(X, Y) = \operatorname{E}(XY) - \mu_X \mu_Y.$$
In the case n = 1 (the case of Bernoulli trials), XY is non-zero only when both X and Y are one, and μ_X and μ_Y are equal to the two success probabilities. Defining p_B as the probability of both happening at the same time, this gives

$$\operatorname{Cov}(X, Y) = p_B - p_X\, p_Y,$$
and for n independent pairwise trials

$$\operatorname{Cov}(X, Y)_n = n\,(p_B - p_X\, p_Y).$$
If X and Y are the same variable, this reduces to the variance formula given above.
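A simulation can illustrate the formula. The coupling below, in which each trial's outcomes for X and Y are driven by the same uniform draw, is only one hypothetical way to make the pairs dependent; it gives p_B = min(p_X, p_Y). The sketch assumes NumPy and uses illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 200_000
p_x, p_y = 0.3, 0.6

# Each per-trial success for X and Y is driven by the same uniform draw,
# so both succeed together with probability p_B = min(p_x, p_y) = 0.3.
u = rng.random((trials, n))
X = (u < p_x).sum(axis=1)
Y = (u < p_y).sum(axis=1)

p_b = min(p_x, p_y)
print(np.cov(X, Y)[0, 1])        # sample covariance
print(n * (p_b - p_x * p_y))     # n (p_B - p_X p_Y) = 20 * (0.3 - 0.18) = 2.4
```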
Sums of binomials
If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z = X + Y ~ B(n + m, p).
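This can be verified numerically by convolving the two mass functions (a sketch assuming NumPy and SciPy; the parameters are illustrative):

```python
import numpy as np
from scipy.stats import binom

n, m, p = 6, 9, 0.35
pmf_x = binom.pmf(np.arange(n + 1), n, p)
pmf_y = binom.pmf(np.arange(m + 1), m, p)

# The distribution of X + Y is the convolution of the two mass functions.
pmf_sum = np.convolve(pmf_x, pmf_y)
pmf_direct = binom.pmf(np.arange(n + m + 1), n + m, p)
print(np.allclose(pmf_sum, pmf_direct))   # True
```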
The following result about the ratio of two binomial proportions was first derived by Katz et al. in 1978.
Let X ~ B(n, p1) and Y ~ B(m, p2) be independent, and let T = (X/n)/(Y/m).
Then log(T) is approximately normally distributed with mean log(p1/p2) and variance ((1/p1) - 1)/n + ((1/p2) - 1)/m.
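A simulation sketch (assuming NumPy, with illustrative parameters large enough that zero counts are negligible) comparing the empirical mean and variance of log(T) with the stated approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 400
p1, p2 = 0.3, 0.5
reps = 100_000

X = rng.binomial(n, p1, reps)
Y = rng.binomial(m, p2, reps)
logT = np.log((X / n) / (Y / m))   # assumes no zero counts, safe for these parameters

print(logT.mean(), np.log(p1 / p2))                 # both approximately -0.51
print(logT.var(), (1/p1 - 1)/n + (1/p2 - 1)/m)      # both approximately 0.0072
```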
If X ~ B(n, p) and, conditional on X, Y ~ B(X, q), then Y is a simple binomial variable with distribution Y ~ B(n, pq).
For example, imagine throwing n balls at a basket U_X and taking the balls that hit and throwing them at another basket U_Y. If p is the probability of hitting U_X, then X ~ B(n, p) is the number of balls that hit U_X. If q is the probability of hitting U_Y, then the number of balls that hit U_Y is Y ~ B(X, q), and therefore Y ~ B(n, pq).
Since X ~ B(n, p) and, conditional on X, Y ~ B(X, q), the law of total probability gives

$$\Pr[Y = m] = \sum_{k=m}^{n} \Pr[Y = m \mid X = k]\,\Pr[X = k] = \sum_{k=m}^{n} \binom{n}{k}\binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}.$$

Since $\binom{n}{k}\binom{k}{m} = \binom{n}{m}\binom{n-m}{k-m}$, factoring and pulling all the terms that do not depend on k out of the sum now yields

$$\Pr[Y = m] = \binom{n}{m} p^m q^m \sum_{k=m}^{n} \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m}.$$

After substituting $i = k - m$ in the expression above, we get

$$\Pr[Y = m] = \binom{n}{m} (pq)^m \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^{i} (1-p)^{n-m-i}.$$

Notice that the sum (in the parentheses) above equals $(p - pq + 1 - p)^{n-m} = (1 - pq)^{n-m}$ by the binomial theorem. Substituting this in finally yields

$$\Pr[Y = m] = \binom{n}{m} (pq)^m (1 - pq)^{n-m},$$

and thus Y ~ B(n, pq), as desired.
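The two-basket example can also be checked by simulation (a sketch assuming NumPy and SciPy; the parameters are illustrative):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
n, p, q = 30, 0.6, 0.5
reps = 200_000

X = rng.binomial(n, p, reps)   # balls that hit the first basket
Y = rng.binomial(X, q)         # of those, balls that then hit the second basket

# Compare the empirical distribution of Y with B(n, pq).
emp = np.bincount(Y, minlength=n + 1) / reps
print(np.abs(emp - binom.pmf(np.arange(n + 1), n, p * q)).max())  # small, on the order of 1e-3
```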
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ Bernoulli(p). Conversely, any binomial distribution, B(n, p), is the distribution of the sum of n independent Bernoulli trials, Bernoulli(p), each with the same probability p.
If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution

$$\mathcal{N}\bigl(np,\; np(1-p)\bigr),$$
and this basic approximation can be improved in a simple way by using a suitable continuity correction.
The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:
One rule is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 1/3; that is, if

$$\left|\frac{1-2p}{\sqrt{np(1-p)}}\right| = \frac{1}{\sqrt{n}}\left|\sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}}\right| < \frac{1}{3}.$$
A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of the mean is within the range of possible values; that is, only if

$$\mu \pm 3\sigma = np \pm 3\sqrt{np(1-p)} \in (0, n).$$
This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
The rule $np \pm 3\sqrt{np(1-p)} \in (0, n)$ is equivalent to requiring that

$$np - 3\sqrt{np(1-p)} > 0 \quad\text{and}\quad np + 3\sqrt{np(1-p)} < n.$$

Moving terms around yields:

$$np > 3\sqrt{np(1-p)} \quad\text{and}\quad n(1-p) > 3\sqrt{np(1-p)}.$$

Since $0 < p < 1$, we can square both sides and divide by the respective factors $np^2$ and $n(1-p)^2$ to obtain the desired conditions:

$$n > 9\,\frac{1-p}{p} \quad\text{and}\quad n > 9\,\frac{p}{1-p}.$$

Notice that these conditions automatically imply that $n > 9$. On the other hand, taking the square root again and dividing by 3 gives

$$\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} > 0 \quad\text{and}\quad \frac{\sqrt{n}}{3} > \sqrt{\frac{p}{1-p}} > 0.$$

Subtracting the second set of inequalities from the first one yields

$$\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} > -\frac{\sqrt{n}}{3},$$

and so the desired first rule is satisfied:

$$\left|\sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}}\right| < \frac{\sqrt{n}}{3}.$$
Another commonly used rule is that both values np and n(1 − p) must be greater than or equal to 5. However, the specific number varies from source to source and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.
Assume that both values np and n(1 − p) are greater than 9. Since 0 < p < 1, we easily have that

$$np > 9 > 9(1-p) \quad\text{and}\quad n(1-p) > 9 > 9p.$$

We only have to divide now by the respective factors p and 1 − p to deduce the alternative form of the 3-standard-deviation rule:

$$n > 9\,\frac{1-p}{p} \quad\text{and}\quad n > 9\,\frac{p}{1-p}.$$
The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ k) for a binomial random variable X. If Y has the distribution given by the normal approximation, then Pr(X ≤ k) is approximated by Pr(Y ≤ k + 0.5). The addition of 0.5 is the continuity correction.
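A sketch (assuming SciPy, with illustrative parameters) comparing the exact binomial CDF with the normal approximation, with and without the 0.5 continuity correction:

```python
import numpy as np
from scipy.stats import binom, norm

n, p, k = 20, 0.4, 8
mu, sigma = n * p, np.sqrt(n * p * (1 - p))

exact = binom.cdf(k, n, p)
plain = norm.cdf(k, mu, sigma)            # no correction
corrected = norm.cdf(k + 0.5, mu, sigma)  # with continuity correction
print(exact, plain, corrected)   # the corrected value is noticeably closer to the exact one
```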
For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation

$$\sigma = \sqrt{\frac{p(1-p)}{n}}.$$
The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed, or at least while p tends to zero. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to the binomial distribution B(n, p) if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and np ≤ 10.
Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.
The notation in the formula below differs from the previous formulas in two respects:
Firstly, z_x has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the xth quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − x)-th quantile'.
Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use z = z_{α/2} to get the lower bound, or use z = z_{1 − α/2} to get the upper bound. For example: for a 95% confidence level the error α = 0.05, so one gets the lower bound by using z = z_{0.025} = −1.96, and one gets the upper bound by using z = z_{0.975} = 1.96.
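For concreteness, the two quantiles mentioned above can be obtained numerically (a sketch assuming SciPy):

```python
from scipy.stats import norm

alpha = 0.05                       # error for a 95% confidence level
z_lower = norm.ppf(alpha / 2)      # z_{0.025} = -1.96, used for the lower bound
z_upper = norm.ppf(1 - alpha / 2)  # z_{0.975} = +1.96, used for the upper bound
print(z_lower, z_upper)
```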
One way to generate random samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability P(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then, by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform these uniform samples into discrete numbers by using the probabilities calculated in the first step.
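A minimal sketch of this inversion algorithm in plain Python (the helper name binomial_inversion_sample is introduced here for illustration and is not a standard library function):

```python
import random
from math import comb

def binomial_inversion_sample(n, p, rng=random):
    """Draw one B(n, p) sample by inverting the cumulative distribution."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    u = rng.random()              # uniform on [0, 1)
    cumulative = 0.0
    for k, prob in enumerate(pmf):
        cumulative += prob
        if u < cumulative:
            return k
    return n                      # guard against rounding when the pmf sums to slightly less than 1

print([binomial_inversion_sample(10, 0.3) for _ in range(5)])
```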
For k ≤ np, upper bounds for the lower tail of the distribution function can be derived. Recall that F(k; n, p) = Pr(X ≤ k), the probability that there are at most k successes. Chernoff's inequality yields the bound

$$F(k; n, p) \le \exp\left(-\frac{(np-k)^2}{2np}\right),$$

and Hoeffding's inequality yields

$$F(k; n, p) \le \exp\left(-2n\left(p - \frac{k}{n}\right)^2\right).$$
Moreover, these bounds are reasonably tight when p = 1/2, since the following expression holds for all k ≥ 3n/8:

$$F(k; n, 1/2) \ge \frac{1}{15} \exp\left(-16n\left(\frac{1}{2} - \frac{k}{n}\right)^2\right).$$
However, these bounds do not work well for extreme values of p. In particular, as p → 1, the value F(k; n, p) goes to zero (for fixed k, n with k < n), while the upper bounds above go to a positive constant. In this case a better bound is given by

$$F(k; n, p) \le \exp\left(-n\, D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right) \quad\text{for } \frac{k}{n} \le p,$$
where D(a ‖ p) is the relative entropy between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distributions):

$$D(a \,\|\, p) = a \ln\frac{a}{p} + (1-a) \ln\frac{1-a}{1-p}.$$
Asymptotically, this bound is reasonably tight; see the references for details. An equivalent formulation of the bound is

$$\Pr(X \ge k) = F(n-k;\, n,\, 1-p) \le \exp\left(-n\, D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right) \quad\text{for } \frac{k}{n} \ge p.$$
Both these bounds are derived directly from the Chernoff bound.
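A numerical comparison of the exact lower-tail probability with the Hoeffding bound and the relative-entropy bound above (a sketch assuming SciPy; the parameters are illustrative):

```python
import numpy as np
from scipy.stats import binom

def relative_entropy(a, p):
    """D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 30           # k <= np, so we are bounding the lower tail
exact = binom.cdf(k, n, p)
hoeffding = np.exp(-2 * n * (p - k / n) ** 2)
chernoff = np.exp(-n * relative_entropy(k / n, p))
print(exact, hoeffding, chernoff)   # the relative-entropy bound is the tighter of the two here
```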
It can also be shown that

$$F(k; n, p) \ge \frac{1}{n+1} \exp\left(-n\, D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right) \quad\text{for } \frac{k}{n} \le p.$$
This is proved using the method of types (see for example chapter 12 of Elements of Information Theory by Cover and Thomas).
We can also change the n + 1 in the denominator to $\sqrt{2\pi n\,\frac{k}{n}\left(1 - \frac{k}{n}\right)}$ by approximating the binomial coefficient with Stirling's formula.
This distribution was derived by James Bernoulli. He considered the case where p = r/(r + s) where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2.
Hamza, K. (1995). "The smallest uniform upper bound on the distance between the mean and the median of the binomial and Poisson distributions". Statistics & Probability Letters. 23: 21-25. doi:10.1016/0167-7152(94)00090-U.
Katz, D.; et al. (1978). "Obtaining confidence intervals for the risk ratio in cohort studies". Biometrics. 34: 469-474.