Binomial Distribution Probabilities With Charts
Calculate exact and cumulative binomial probabilities for n trials with success probability p. Explore the distribution with interactive charts and key statistics.
Binomial distribution calculations require two parameters: n (number of trials) and p (success probability). Enter incorrect values and every downstream probability is wrong. Before running the calculator, confirm these assumptions hold.
The trial count n must be fixed in advance—known before the experiment begins. Flipping a coin 10 times means n = 10. Testing 50 products means n = 50. The binomial model breaks if you keep running trials until some stopping rule triggers (use negative binomial instead) or if n is unknown or varies randomly.
Each trial has identical success probability p, which stays constant across all n trials. A fair coin has p = 0.5 for heads. A manufacturing defect rate of 3% means p = 0.03. If success probability changes trial-to-trial—due to learning, fatigue, or batch differences—the binomial assumption fails and results become unreliable.
Trials must be independent: the outcome of one trial cannot affect another. Drawing cards with replacement qualifies—each draw has the same probabilities. Drawing without replacement violates independence because the deck composition changes. For sampling from finite populations without replacement, use the hypergeometric distribution instead.
Each trial produces exactly two outcomes: success or failure. Guessing on a multiple-choice question is either right (success) or wrong (failure). Rolling a die for "six" is either six (success) or not-six (failure). Problems with more than two categories need multinomial distributions.
Common error: Using percentages instead of decimals. Enter p = 0.20, not p = 20. The calculator expects values between 0 and 1.
Binomial queries come in two forms: exact probabilities for a specific outcome and cumulative probabilities that sum ranges. Selecting the wrong mode returns the wrong answer.
The probability of exactly k successes in n trials. The formula is P(X = k) = C(n,k) × pᵏ × (1−p)ⁿ⁻ᵏ, where C(n,k) counts the ways to arrange k successes among n positions. Use this mode for "exactly" questions: What is the probability of exactly 7 heads in 10 flips?
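The formula translates directly into a few lines of code. Below is a minimal Python sketch using the standard library's `math.comb`; the helper name `binom_pmf` is ours for illustration, not something the calculator exposes:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Exactly 7 heads in 10 fair flips: C(10, 7) = 120 orderings,
# each with probability 0.5^10.
print(round(binom_pmf(7, 10, 0.5), 6))  # 0.117188
```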
The probability of k or fewer successes—summing P(X = 0) through P(X = k). Use this mode for "at most" questions: What is the probability of at most 7 heads in 10 flips? The cumulative probability at a given k is always at least as large as the exact probability (they are equal only at k = 0) because it includes every outcome up through k.
The probability of k or more successes. Computed as 1 − P(X ≤ k−1). Use this mode for "at least" questions: What is the probability of at least 3 defectives in a batch of 50? This equals summing P(X = k) through P(X = n).
The probability of successes falling between a and b (inclusive). Computed as P(X ≤ b) − P(X ≤ a−1). Use this mode for interval questions: What is the probability of 5 to 8 conversions out of 20 visitors?
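All three cumulative modes reduce to sums of exact probabilities, so one summation routine covers them. A Python sketch under the same assumptions (`binom_pmf` and `binom_cdf` are illustrative helper names):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_cdf(k, n, p):
    """P(X <= k): sum the exact probabilities from 0 through k."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

n, p = 20, 0.3
at_most_7   = binom_cdf(7, n, p)                       # P(X <= 7)
at_least_3  = 1 - binom_cdf(2, n, p)                   # P(X >= 3) = 1 - P(X <= 2)
between_5_8 = binom_cdf(8, n, p) - binom_cdf(4, n, p)  # P(5 <= X <= 8)
```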
Quick check: Exact probabilities generally decrease as k moves away from the mode, which sits near the mean n × p. Cumulative probabilities increase monotonically toward 1 as k increases.
Three summary statistics describe where the distribution centers and how spread out it is. These formulas depend only on n and p—no summation required.
The expected number of successes over many repetitions. With n = 100 trials and p = 0.30, expect about 30 successes on average. The mean shifts linearly with both n and p—double n and the mean doubles; double p and the mean doubles.
Variance measures spread around the mean. It peaks when p = 0.5 (maximum uncertainty) and shrinks as p approaches 0 or 1 (near-certain outcomes). Larger n increases variance because more trials create more potential deviation from the mean.
The square root of variance puts spread in the same units as the success count. About 68% of outcomes fall within μ ± σ, and roughly 95% fall within μ ± 2σ. For n = 100, p = 0.5: μ = 50, σ ≈ 5, so most outcomes land between 40 and 60.
| Statistic | Formula | n=20, p=0.3 | n=100, p=0.5 |
|---|---|---|---|
| Mean | n × p | 6.0 | 50.0 |
| Variance | n × p × (1−p) | 4.2 | 25.0 |
| Std Dev | √(n × p × (1−p)) | ≈ 2.05 | 5.0 |
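The table's columns can be reproduced with the closed-form formulas alone; no summation over the distribution is needed. A small Python sketch (`binom_stats` is an illustrative helper name):

```python
from math import sqrt

def binom_stats(n, p):
    """Mean, variance, and standard deviation of Binomial(n, p)."""
    mean = n * p
    variance = n * p * (1 - p)
    return mean, variance, sqrt(variance)

print(binom_stats(20, 0.3))   # ≈ (6.0, 4.2, 2.05)
print(binom_stats(100, 0.5))  # (50.0, 25.0, 5.0)
```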
Before trusting calculator output, run quick sanity checks based on known binomial properties.
When p = 0.5, the distribution is symmetric around n/2. P(X = k) equals P(X = n − k). For n = 10 and p = 0.5, P(X = 3) equals P(X = 7). If your results violate this symmetry, recheck inputs.
When p < 0.5, the distribution is right-skewed—more mass on lower k values, tail extending right. When p > 0.5, the distribution is left-skewed—more mass on higher k values, tail extending left. Check the chart to confirm the expected skew direction.
If p = 0 (no chance of success), all probability concentrates at k = 0. If p = 1 (certain success), all probability concentrates at k = n. These degenerate cases collapse the distribution to a single spike.
For k < 0 or k > n, P(X = k) = 0 by definition. You cannot have negative successes or more successes than trials. The calculator should return zero for such inputs.
Summing P(X = 0) through P(X = n) must equal 1.0. Similarly, P(X ≤ n) must equal 1.0 and P(X ≥ 0) must equal 1.0. If the distribution table shows values that don't sum to 1, something is wrong.
Mode sanity: The mode (most likely k) sits near n × p. Exactly, the mode is floor((n + 1) × p); when (n + 1) × p happens to be an integer, both (n + 1) × p and (n + 1) × p − 1 are modes. If the highest probability bar in the chart is far from n × p, recheck your parameters.
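These sanity checks can be automated. The following Python sketch asserts the symmetry, total-probability, and mode properties for n = 10, p = 0.5 (helper name `pmf` is ours):

```python
from math import comb, isclose

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.5
# Symmetry at p = 0.5: P(X = k) == P(X = n - k).
assert isclose(pmf(3, n, p), pmf(7, n, p))
# Total probability across k = 0..n must be 1.
assert isclose(sum(pmf(k, n, p) for k in range(n + 1)), 1.0)
# The mode sits at floor((n + 1) * p).
mode = max(range(n + 1), key=lambda k: pmf(k, n, p))
assert mode == int((n + 1) * p)
print("all sanity checks passed")
```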
The probability mass function (PMF) chart displays bars for each possible k from 0 to n, with bar heights proportional to P(X = k). Use the chart to visualize distribution shape and locate high-probability regions.
Each bar represents the exact probability of that specific k. The tallest bar marks the mode—the most likely outcome. Adjacent bars show how probability tapers as k moves away from the mode. Compare bar heights to gauge relative likelihoods.
Symmetric distributions (p = 0.5) show mirror-image bars around the center. Right-skewed distributions (p < 0.5) pile bars on the left with a long right tail. Left-skewed distributions (p > 0.5) pile bars on the right with a long left tail. The chart makes skewness immediately visible.
The distribution table accompanies the chart, showing both P(X = k) and P(X ≤ k) for each k. Watch the cumulative column climb toward 1.0 as k increases. The jump between consecutive cumulative values equals the corresponding exact probability.
Screenshot the PMF chart for reports, slides, or homework. The visual clarifies where probability mass concentrates—helpful when explaining results to stakeholders who find tables harder to parse than graphs.
C(n,k) counts the number of ways to choose k positions (for successes) out of n total trials. It equals n! / (k! × (n−k)!). For n = 10 and k = 3, C(10,3) = 120 means there are 120 different orderings that yield exactly 3 successes in 10 trials.
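The count of orderings can be verified by brute force: enumerate all 2¹⁰ success/failure sequences and count those with exactly 3 successes. A short Python check:

```python
from itertools import product
from math import comb

# Enumerate all 2^10 success/failure sequences and count those with
# exactly 3 successes; the count must match C(10, 3).
orderings = sum(1 for seq in product((0, 1), repeat=10) if sum(seq) == 3)
print(orderings, comb(10, 3))  # 120 120
```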
As n grows, the number of possible outcomes expands, spreading probability across more values of k. Exact probabilities shrink because each specific k captures a smaller share of total probability. Cumulative probabilities remain stable because they sum ranges.
For large n (typically n > 50) with moderate p (both n×p ≥ 5 and n×(1−p) ≥ 5), the binomial distribution approximates a normal with mean n×p and standard deviation √(n×p×(1−p)). Normal approximation is faster and avoids numerical overflow for very large n. Exact binomial is more accurate for small n or extreme p.
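The quality of the normal approximation is easy to check directly. This Python sketch compares the exact cumulative probability against the normal CDF with a continuity correction, using the standard library's `math.erf` (the helper names are ours):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    # Exact cumulative probability by direct summation.
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    # Normal CDF expressed via the error function.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p = 100, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))
exact  = binom_cdf(55, n, p)
approx = normal_cdf(55 + 0.5, mu, sigma)  # +0.5 is the continuity correction
print(round(exact, 4), round(approx, 4))  # both ≈ 0.864
```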
Dependent trials invalidate the binomial model. Sampling without replacement from finite populations requires hypergeometric distribution. Sequential processes with memory (each trial affects the next) need Markov chains or other dependent models. Do not force binomial on dependent data.
The mean μ = n×p tells you the expected success count on average. The standard deviation σ tells you typical deviation from that average. Most outcomes fall within μ ± 2σ. If μ = 30 and σ = 5, outcomes between 20 and 40 cover roughly 95% of probability.
Yes—binomial distributions underpin acceptance sampling plans. Given sample size n and defect probability p, compute the probability of finding k or fewer defects. Compare against accept/reject criteria to determine whether a lot passes inspection.
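An acceptance-sampling check is just a cumulative probability. A Python sketch with an illustrative plan (the sample size, defect rate, and acceptance number below are made-up parameters, not a standard plan):

```python
from math import comb

def accept_prob(n, p, c):
    """Probability a lot is accepted: c or fewer defects in a sample of n."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Illustrative plan: sample 50 items from a lot with a 3% defect rate,
# accept the lot if at most 2 defects are found.
print(round(accept_prob(50, 0.03, 2), 4))  # ≈ 0.81
```

A higher true defect rate lowers the acceptance probability, which is the behavior an accept/reject criterion relies on.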
Fixed n required: The binomial model demands a predetermined trial count. Variable or unknown n requires alternative distributions (negative binomial, Poisson).
Independence assumed: Outcomes must not influence each other. Correlated trials or sampling without replacement violate this assumption.
Constant p: Success probability must stay identical across all trials. Learning effects, fatigue, or batch variability break the model.
Numerical limits: For very large n (> 170), factorials overflow standard precision. The calculator handles this via logarithms, but extreme values may show minor rounding.
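The logarithm technique mentioned above can be sketched with the standard library's `math.lgamma`: compute log C(n,k) as a difference of log-factorials, add the log probability terms, and exponentiate only at the end (assumes 0 < p < 1; `log_binom_pmf` is an illustrative helper name):

```python
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    """log P(X = k) via lgamma, avoiding factorial overflow (assumes 0 < p < 1)."""
    log_c = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_c + k * log(p) + (n - k) * log(1 - p)

# n = 1000 is far beyond the ~170! overflow limit for direct float factorials.
print(exp(log_binom_pmf(500, 1000, 0.5)))  # ≈ 0.0252
```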
Disclaimer: This calculator is for educational and informational purposes. Verify results with professional statistical software (R, Python SciPy, SAS, SPSS) for research, quality control, or critical decisions. Consult qualified statisticians for important analyses.
Common questions about binomial distributions, probability calculations, PMF and CDF formulas, assumptions, and how to use this calculator for homework and statistics practice.
The binomial distribution is used to model the number of successes in a fixed number of independent trials where each trial has only two possible outcomes (success or failure). Common applications include quality control (counting defects), clinical trials (counting positive responses), survey analysis (counting 'yes' answers), and game theory (counting wins in repeated games).
The binomial model assumes: (1) A fixed number of trials n, (2) Each trial is independent of the others, (3) Each trial has exactly two outcomes (success or failure), and (4) The probability of success p is constant for every trial. If these assumptions are violated, the binomial model may not be appropriate.
P(X = k) is the probability of getting exactly k successes—no more, no less. P(X ≤ k) is the cumulative probability of getting k or fewer successes, which includes all outcomes from 0 through k. The cumulative probability is always greater than or equal to the exact probability for the same k.
Use P(X ≥ k) when you want to know the probability of getting at least k successes. This is common in scenarios like 'What's the chance of passing if I need at least 7 correct answers?' P(X ≤ k) is for 'at most' questions, while P(X ≥ k) is for 'at least' questions.
The mean μ = n × p represents the expected number of successes if you were to repeat the experiment many times. For example, if you flip a fair coin 100 times (n=100, p=0.5), you expect about 50 heads on average. Individual experiments will vary around this expected value.
The standard deviation σ = √(n × p × (1-p)) measures the typical spread of the number of successes around the mean. About 68% of outcomes fall within one standard deviation of the mean, and about 95% fall within two standard deviations. A larger standard deviation means more variability in your results.
For very large n, the binomial distribution approaches the normal distribution (by the Central Limit Theorem). When n is extremely large, computing exact binomial probabilities can be slow and may have numerical precision issues. For n > 200, consider using the normal approximation instead.
When p = 0, every trial fails, so P(X = 0) = 1 and all other probabilities are 0. When p = 1, every trial succeeds, so P(X = n) = 1 and all other probabilities are 0. These are degenerate cases where there's no randomness—the outcome is certain.
This calculator is an educational tool designed to help you understand binomial distributions and verify your work. While it provides accurate calculations, you should use it to learn the concepts and check your manual calculations, not as a substitute for understanding the material. Always verify important results independently.
The calculator uses standard floating-point arithmetic with results displayed to 6 decimal places. For most practical purposes, this precision is more than sufficient. Very small probabilities (less than 10^-10) may be displayed as 0 due to numerical limitations.
Calculate probabilities under the normal curve
Convert between z-scores and probabilities
Compute various probability calculations
Build confidence intervals for means and proportions
Calculate mean, median, standard deviation, and more
Perform matrix calculations and linear algebra
Calculate probabilities for rare event occurrences
Apply linear transformations to random variables
Enter the number of trials (n), probability of success (p), and select a probability query to explore the binomial distribution.