Poisson Distribution Calculator
Calculate exact and cumulative Poisson probabilities given an average event rate λ. Explore the distribution with interactive charts and key statistics.
The Poisson distribution is one of the most important discrete probability distributions in statistics, modeling the number of events occurring in a fixed interval of time or space when events happen independently at a known constant average rate. Named after French mathematician Siméon Denis Poisson, this distribution is particularly useful for modeling rare events, arrival processes, and count data. This tool helps you calculate exact probabilities P(X = k), cumulative probabilities P(X ≤ k) and P(X ≥ k) for Poisson processes. Whether you're a student learning probability theory, a researcher analyzing count data, a quality control engineer monitoring defect rates, or a business professional modeling customer arrivals, understanding Poisson distributions enables you to quantify uncertainty, make predictions, and assess the likelihood of specific event counts in time or space intervals.
For students and researchers, this tool demonstrates practical applications of probability theory, exponential functions, and statistical modeling. The Poisson distribution calculation shows how the exponential function e^(-λ), power functions λ^k, and factorials k! combine to produce meaningful probability assessments. Students can use this tool to verify homework calculations, understand how the rate parameter λ affects probability distributions, and explore concepts like mean, variance, and standard deviation in discrete probability contexts. Researchers can apply Poisson distributions to analyze count data, model arrival processes, test hypotheses about event rates, and understand rare event probabilities in fields ranging from physics and biology to engineering and social sciences.
For business professionals and practitioners, Poisson distributions provide essential tools for decision-making under uncertainty. Call center managers use Poisson models to predict call volumes, determine staffing needs, and assess service levels. Quality control engineers use Poisson distributions to model defect rates, calculate acceptance probabilities for sampling plans, and determine whether production processes meet quality standards. Insurance actuaries use Poisson models to analyze claim frequencies, assess risk, and price insurance policies. Operations managers use Poisson distributions to model customer arrivals, optimize queue systems, and plan resource allocation. Healthcare professionals use Poisson models to analyze patient arrivals, model disease outbreaks, and assess treatment effectiveness.
For the common person, this tool answers practical probability questions: What's the chance of receiving exactly 5 phone calls in an hour? What's the probability of at most 3 defects in a batch of 100 products? What's the likelihood that at least 10 customers arrive at a store per hour? The tool calculates exact probabilities (P(X = k)), cumulative probabilities up to a value (P(X ≤ k)), and cumulative probabilities from a value (P(X ≥ k)), providing comprehensive probability assessments for any Poisson scenario. Taxpayers and budget-conscious individuals can use Poisson distributions to model financial events, assess risk in decision-making, and understand probability in everyday situations like customer arrivals, defect rates, and rare event occurrences.
A Poisson process is a stochastic process that models events occurring randomly in time or space with a constant average rate λ (lambda). The key characteristics are: (1) Events occur independently—the occurrence of one event doesn't affect the probability of another, (2) Constant rate—the average rate λ remains constant over the interval, (3) No simultaneous events—two events cannot occur at exactly the same instant, (4) Proportionality—the probability of an event in a small interval is proportional to the interval length. Examples include phone calls arriving at a call center, customers arriving at a store, defects occurring in manufacturing, bacteria appearing in a solution, or meteorites hitting Earth.
Lambda (λ) is the average rate of events per interval—it represents the expected number of occurrences in a fixed interval. Lambda can be any positive real number (not just integers). For example, if a call center receives an average of 10 calls per hour, λ = 10. If a manufacturing process produces an average of 2.5 defects per 100 units, λ = 2.5. Uniquely in the Poisson distribution, λ is both the mean (expected value) and the variance of the distribution, a property called "equidispersion." This means that as λ increases, both the expected number of events and the variability increase proportionally.
The probability mass function P(X = k) gives the probability of observing exactly k events in a fixed interval. The formula is: P(X = k) = (e^(-λ) × λ^k) / k!, where e is Euler's number (approximately 2.71828), λ is the average rate, and k! is the factorial of k. The factor e^(-λ) normalizes the distribution (since Σ λ^k/k! = e^λ, multiplying by e^(-λ) makes the probabilities sum to 1) and also equals the probability of zero events, P(X = 0). The formula is derived from the Poisson process assumptions and gives the probability of exactly k events when the average rate is λ. For numerical stability, the calculation uses logarithmic methods: log(P(X=k)) = -λ + k×log(λ) - log(k!).
The cumulative distribution function P(X ≤ k) gives the probability of observing k or fewer events in a fixed interval. It's calculated by summing the PMF values from 0 to k: P(X ≤ k) = Σ(i=0 to k) P(X = i). This cumulative probability answers "at most" or "no more than" questions. For example, P(X ≤ 7) is the probability of getting 7 or fewer events. The complementary cumulative probability P(X ≥ k) gives the probability of getting k or more events, calculated as P(X ≥ k) = 1 - P(X ≤ k-1) = 1 - CDF(k-1). This answers "at least" or "no fewer than" questions. For example, P(X ≥ 3) is the probability of getting 3 or more events.
The mean (expected value) of a Poisson distribution is μ = λ, representing the average number of events you expect in a fixed interval. The variance is σ² = λ, which equals the mean—this unique property is called "equidispersion." The standard deviation is σ = √λ, which is easier to interpret since it's in the same units as the number of events. About 68% of outcomes fall within one standard deviation of the mean (λ ± √λ), and about 95% fall within two standard deviations (λ ± 2√λ). As λ increases, the distribution becomes more symmetric and approaches a normal distribution (by the Central Limit Theorem).
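The 68%/95% coverage figures above are approximations borrowed from the normal distribution; for a discrete Poisson variable the exact mass inside λ ± σ and λ ± 2σ can be checked directly. A minimal Python sketch (the function name `poisson_pmf` is illustrative, not the tool's actual code):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    # PMF computed in log space: exp(-lam + k*log(lam) - log(k!))
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

lam = 10.0
sigma = math.sqrt(lam)

def mass_within(n_sigmas: float) -> float:
    # Sum exact PMF values for all integer k inside lam +/- n_sigmas * sigma
    lo = max(0, math.ceil(lam - n_sigmas * sigma))
    hi = math.floor(lam + n_sigmas * sigma)
    return sum(poisson_pmf(k, lam) for k in range(lo, hi + 1))

within_1 = mass_within(1)   # about 0.73 for lam = 10
within_2 = mass_within(2)   # about 0.96 for lam = 10
```

For λ = 10 the one-sigma mass is closer to 73% than 68%, a reminder that the normal-style rules of thumb are approximate for discrete distributions.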
The shape of a Poisson distribution depends on the rate parameter λ. When λ is small (< 5), the distribution is noticeably right-skewed with most probability mass near 0 and a long tail extending to the right. When λ is moderate (5-20), the distribution becomes less skewed and more symmetric. When λ is large (> 20), the distribution becomes approximately symmetric and bell-shaped, closely approximating a normal distribution with mean μ = λ and standard deviation σ = √λ. The skewness decreases as λ increases, making normal approximation appropriate for large λ values.
The Poisson distribution is related to several other distributions: (1) From Binomial—Poisson is a limiting case of the binomial distribution when n is large, p is small, and λ = n×p is moderate (Law of Rare Events). (2) To Normal—For large λ (typically λ > 20), Poisson can be approximated by a normal distribution with μ = λ and σ = √λ. (3) Exponential Connection—If events follow a Poisson process with rate λ, the time between consecutive events follows an exponential distribution with parameter λ. (4) Gamma Connection—The sum of n independent exponential random variables (inter-arrival times) follows a gamma distribution. These relationships provide alternative calculation methods and approximations.
The tool supports three types of probability queries: (1) Exact Probability P(X = k)—the probability of observing exactly k events, useful for specific outcome questions like "What's the chance of exactly 5 calls in an hour?" (2) Cumulative Up To P(X ≤ k)—the probability of observing k or fewer events, useful for "at most" or "no more than" questions like "What's the chance of at most 3 defects per batch?" (3) Cumulative From P(X ≥ k)—the probability of observing k or more events, useful for "at least" or "no fewer than" questions like "What's the chance of at least 10 customers per hour?" Each query type answers different practical questions and requires different calculations.
Start by entering the average rate λ (lambda) in the "Average Rate (λ)" field. This is the expected number of events per interval. Lambda can be any positive real number (not just integers). For example, if a call center receives an average of 10 calls per hour, λ = 10. If a manufacturing process produces an average of 2.5 defects per 100 units, λ = 2.5. The tool accepts λ values from 0.01 to 1000. Make sure λ represents the average rate for your specific interval (time, space, or other measure).
Choose the type of probability you want to calculate: "Exact" for P(X = k), "Cumulative Up To" for P(X ≤ k), or "Cumulative From" for P(X ≥ k). Select the option that matches your question. For example, if you want to know the probability of exactly 5 events, choose "Exact". If you want to know the probability of at most 5 events, choose "Cumulative Up To". If you want to know the probability of at least 5 events, choose "Cumulative From".
Enter the value x (the number of events you're interested in) in the "Value (x)" field. This must be a non-negative integer (0, 1, 2, 3, ...). For example, if you want to know the probability of exactly 5 events, enter x = 5. If you want to know the probability of at most 7 events, enter x = 7. The tool will validate your input and show an error if x is negative or not an integer.
Click "Calculate" or submit the form to compute the requested probability. The tool displays the calculated probability (as both a decimal and percentage), key statistics (mean, variance, standard deviation), a complete distribution table showing probabilities for all possible values of k from 0 to approximately λ + 6√λ (covering 99.9%+ of probability mass), and an interactive chart visualizing the probability distribution. The interpretation summary explains what the probability means in practical terms, helping you understand the result in context.
Review the distribution table to see probabilities for all possible outcomes (k = 0 to approximately λ + 6√λ). Each row shows k (number of events), the exact probability P(X = k), and the cumulative probability P(X ≤ k). The chart visualizes the probability mass function, showing how probability is distributed across different values of k. Use the table and chart to understand the shape of the distribution, identify the most likely outcomes (highest probabilities), and see how probabilities change as k increases. The mean, variance, and standard deviation provide summary statistics that characterize the distribution's center and spread.
Use the interpretation summary to understand how λ affects the distribution shape. For small λ (< 5), the distribution is right-skewed with most probability near 0. For moderate λ (5-20), the distribution becomes less skewed. For large λ (> 20), the distribution becomes approximately symmetric and can be approximated by a normal distribution. The summary also provides the mean and standard deviation, which help you understand the expected number of events and typical variability around that expectation.
The PMF P(X = k) is calculated using the Poisson probability formula with logarithmic methods for numerical stability:
If k < 0 or k is not an integer: P(X = k) = 0
If λ = 0: P(X = k) = 1 if k = 0, else 0
Otherwise (logarithmic method):
log(P(X=k)) = -λ + k×log(λ) - log(k!)
If k ≤ 170: Calculate log(k!) exactly by summing log(2) + log(3) + ... + log(k)
If k > 170: Use Stirling's approximation: log(k!) ≈ k×log(k) - k + 0.5×log(2πk)
P(X = k) = exp(log(P(X=k))), clamped to [0, 1]
The logarithmic method avoids computing very large or very small numbers directly, which can cause numerical overflow or underflow. By working in log space, we add and subtract logarithms instead of multiplying and dividing probabilities, then exponentiate at the end. Stirling's approximation is used for large k to avoid computing very large factorials, providing accurate approximations when k > 170.
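The log-space PMF steps above can be sketched in Python as follows (function names are illustrative; this mirrors the described method rather than reproducing the tool's implementation):

```python
import math

def log_factorial(k: int) -> float:
    """Exact log(k!) for k <= 170; Stirling's approximation beyond."""
    if k <= 170:
        return sum(math.log(i) for i in range(2, k + 1))
    return k * math.log(k) - k + 0.5 * math.log(2 * math.pi * k)

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for Poisson(lam), computed in log space for stability."""
    if k < 0 or k != int(k):
        return 0.0
    if lam == 0:
        return 1.0 if k == 0 else 0.0
    log_p = -lam + k * math.log(lam) - log_factorial(k)
    # Exponentiate at the end and clamp to [0, 1]
    return min(max(math.exp(log_p), 0.0), 1.0)
```

For example, `poisson_pmf(7, 5.0)` returns roughly 0.1044, matching the worked example later in this page.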
The CDF P(X ≤ k) is calculated by summing PMF values from 0 to k:
If k < 0: P(X ≤ k) = 0
Otherwise: P(X ≤ k) = Σ(i=0 to k) P(X = i)
The CDF is computed by iterating from i = 0 to k, calculating each PMF value P(X = i) using the logarithmic method, and summing them. The calculation stops early if the cumulative sum reaches 1 (for numerical stability). The result is clamped to [0, 1] to handle floating-point errors. For efficiency, the calculation can be optimized using recurrence relations, but the direct summation method is used here for clarity and accuracy.
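A compact Python sketch of the CDF summation, using the recurrence relation mentioned above (P(X = i+1) = P(X = i) × λ/(i+1)) to avoid recomputing each PMF term from scratch; names are illustrative:

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k), summing PMF terms via the recurrence
    P(X = i + 1) = P(X = i) * lam / (i + 1)."""
    if k < 0:
        return 0.0
    term = math.exp(-lam)      # P(X = 0)
    total = 0.0
    for i in range(k + 1):
        total += term
        if total >= 1.0:       # early stop once all mass is captured
            break
        term *= lam / (i + 1)
    return min(total, 1.0)     # clamp against floating-point error
```

For example, `poisson_cdf(7, 5.0)` gives about 0.8666 and `poisson_cdf(3, 2.5)` about 0.7576, matching the examples on this page.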
The complementary cumulative probability P(X ≥ k) is calculated using the CDF:
P(X ≥ k) = 1 - P(X ≤ k-1) = 1 - CDF(k-1)
This formula uses the complement rule: the probability of getting k or more events equals 1 minus the probability of getting k-1 or fewer events. This is more efficient than summing PMF values from k to infinity, especially when k is large. The formula handles edge cases: if k = 0, P(X ≥ 0) = 1 (always true). For very large k, P(X ≥ k) approaches 0 as k increases beyond λ + several standard deviations.
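The complement rule translates directly to code; a minimal sketch (illustrative names, not the tool's implementation):

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) via the PMF recurrence."""
    if k < 0:
        return 0.0
    term, total = math.exp(-lam), 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return min(total, 1.0)

def poisson_at_least(k: int, lam: float) -> float:
    """P(X >= k) = 1 - P(X <= k - 1); P(X >= 0) is always 1."""
    if k <= 0:
        return 1.0
    return 1.0 - poisson_cdf(k - 1, lam)
```

For example, `poisson_at_least(12, 8.0)` gives about 0.1119 and `poisson_at_least(2, 0.1)` about 0.0047, matching the case studies below.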
The mean, variance, and standard deviation are calculated using standard formulas:
Mean (Expected Value): μ = λ
Variance: σ² = λ
Standard Deviation: σ = √λ
These formulas are derived from the properties of the Poisson distribution and don't require summing over all possible outcomes. The mean represents the expected number of events, the variance equals the mean (equidispersion property), and the standard deviation provides an interpretable measure of variability in the same units as the number of events.
The distribution table includes values from k = 0 to approximately λ + 6√λ:
maxK = min(ceil(λ + 6×√λ), 200)
Stop early if: cumulativeSum > 0.99999 and k > λ
The range λ + 6√λ extends six standard deviations above the mean, which covers 99.9%+ of the probability mass for moderate and large λ (by the normal approximation; even the distribution-free Chebyshev bound guarantees over 97% within six standard deviations). The table is capped at k = 200 for performance reasons. The calculation stops early if the cumulative sum exceeds 0.99999 and k is greater than λ, indicating that essentially all probability mass has been captured. This optimization reduces computation time for large λ values while maintaining accuracy.
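The table bound reduces to a one-line helper; a sketch in Python (the name `table_max_k` is illustrative):

```python
import math

def table_max_k(lam: float, cap: int = 200) -> int:
    """Upper bound of the distribution table: ceil(lam + 6*sqrt(lam)), capped."""
    return min(math.ceil(lam + 6 * math.sqrt(lam)), cap)
```

For example, λ = 5 gives a table running from k = 0 to 19, while λ = 1000 hits the performance cap of 200.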
Let's calculate the probability of receiving exactly 7 calls in an hour when the average rate is 5 calls per hour:
Given: λ = 5, k = 7
Step 1: Calculate log(P(X = 7))
log(P(X=7)) = -5 + 7×log(5) - log(7!)
= -5 + 7×1.6094 - log(5040)
= -5 + 11.2658 - 8.5252 = -2.2594
Step 2: Calculate P(X = 7)
P(X = 7) = exp(-2.2594) ≈ 0.1044 (10.44%)
Step 3: Calculate Summary Statistics
Mean: μ = 5.0
Variance: σ² = 5.0
Standard Deviation: σ = √5 ≈ 2.24
Interpretation:
The probability of receiving exactly 7 calls in an hour when the average rate is 5 calls per hour is approximately 10.44%. You expect 5 calls on average, with a standard deviation of about 2.24 calls.
Now let's calculate the cumulative probability P(X ≤ 7):
Calculate P(X ≤ 7)
P(X ≤ 7) = P(X = 0) + P(X = 1) + ... + P(X = 7)
Using the PMF formula for each value and summing:
P(X ≤ 7) ≈ 0.8666 (86.66%)
Interpretation:
The probability of receiving 7 or fewer calls in an hour when the average rate is 5 calls per hour is approximately 86.66%. This means there's an 86.66% chance you'll receive at most 7 calls.
This example demonstrates how the Poisson distribution models the probability of events in fixed intervals. The exact probability P(X = 7) = 10.44% is relatively low because getting exactly 7 calls is a specific outcome, while the cumulative probability P(X ≤ 7) = 86.66% is much higher because it includes all outcomes from 0 to 7. The mean of 5 calls per hour represents the expected value, and the standard deviation of 2.24 calls represents typical variability around that expectation.
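The worked example can be reproduced in a few lines of Python using the log-space PMF (variable names here are illustrative):

```python
import math

# Worked example: lam = 5 calls/hour, k = 7
lam = 5.0
pmf = lambda k: math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

p_exactly_7 = pmf(7)                         # about 0.1044
p_at_most_7 = sum(pmf(k) for k in range(8))  # about 0.8666
std_dev = math.sqrt(lam)                     # about 2.24
```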
A student studying operations research needs to model call arrivals at a call center that receives an average of 10 calls per hour. They want to know the probability of receiving exactly 7 calls in an hour. Using the tool with λ=10 and "Exact" mode with x=7, the tool calculates P(X = 7) ≈ 0.0901 (9.01%). The student learns that the probability of exactly 7 calls is 9.01%, which helps them understand arrival variability. The mean is 10 (expected calls), variance is 10, and standard deviation is about 3.16, showing that most outcomes cluster around 10 calls with typical variability of about 3 calls.
A quality control engineer monitors a production line where the historical defect rate is 2.5 defects per 100 units. They want to know the probability of finding at most 3 defects in a batch of 100 units. Using the tool with λ=2.5 and "Cumulative Up To" mode with x=3, the tool calculates P(X ≤ 3) ≈ 0.7576 (75.76%). The engineer learns that there's a 75.76% chance of finding 3 or fewer defects, which helps assess whether the production process is meeting quality standards. The mean is 2.5 (expected defects), variance is 2.5, and standard deviation is about 1.58.
A hospital administrator models patient arrivals at an emergency department where the average arrival rate is 8 patients per hour. They want to know the probability that at least 12 patients arrive in an hour. Using the tool with λ=8 and "Cumulative From" mode with x=12, the tool calculates P(X ≥ 12) ≈ 0.1119 (11.19%). The administrator learns that the probability of 12 or more arrivals is 11.19%, which helps plan staffing levels and assess capacity needs. The mean is 8 (expected arrivals), variance is 8, and standard deviation is about 2.83.
A person receives an average of 5 emails per hour and wants to know the probability of receiving exactly 3 emails in the next hour. Using the tool with λ=5 and "Exact" mode with x=3, the tool calculates P(X = 3) ≈ 0.1404 (14.04%). The person learns that the probability of exactly 3 emails is 14.04%, which helps them understand email arrival patterns. They can also check the distribution table to see probabilities for all outcomes (0 to approximately 20 emails) and understand that 5 emails is the most likely outcome (mean = 5.0).
A store manager models customer arrivals where the average arrival rate is 15 customers per hour. They want to know the probability that at most 10 customers arrive in an hour. Using the tool with λ=15 and "Cumulative Up To" mode with x=10, the tool calculates P(X ≤ 10) ≈ 0.1185 (11.85%). The manager learns that the probability of 10 or fewer arrivals is relatively low (11.85%), which suggests that 10 or fewer customers is an unusual outcome when the average is 15. This helps assess staffing needs and understand arrival variability. The mean is 15 (expected arrivals), variance is 15, and standard deviation is about 3.87.
A researcher studies meteorite impacts where the average rate is 0.1 impacts per year globally. They want to know the probability of at least 2 impacts in a year. Using the tool with λ=0.1 and "Cumulative From" mode with x=2, the tool calculates P(X ≥ 2) ≈ 0.0047 (0.47%). The researcher learns that the probability of 2 or more impacts is very low (0.47%), which demonstrates the rarity of such events. The mean is 0.1 (expected impacts), variance is 0.1, and standard deviation is about 0.32. This example shows how Poisson distributions model rare events effectively.
A user wants to understand how changing λ affects probabilities. They compare three scenarios: (1) λ=2, x=3 gives P(X=3) ≈ 0.1804 (18.04%), (2) λ=5, x=3 gives P(X=3) ≈ 0.1404 (14.04%), (3) λ=10, x=3 gives P(X=3) ≈ 0.0076 (0.76%). The user learns that as λ increases, the probability of a specific low value (like 3) decreases because the distribution shifts toward higher values. When λ=2, the mean is 2, so x=3 is close to the mean. When λ=10, the mean is 10, so x=3 is far from the mean and has much lower probability. This helps them understand how λ affects the distribution shape and probability calculations.
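The three-scenario comparison above can be checked with a short Python loop (illustrative sketch, not the tool's code):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

# P(X = 3) shrinks as lam moves the bulk of the distribution away from 3
probs = {lam: poisson_pmf(3, lam) for lam in (2.0, 5.0, 10.0)}
```

The resulting probabilities are roughly 0.1804, 0.1404, and 0.0076, confirming the pattern described above.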
The Poisson distribution requires that events occur independently—the occurrence of one event must not affect the probability of another. Don't use Poisson distributions for dependent events, such as events that cluster together, events that repel each other, or situations where one event influences the next. If events are dependent, consider using other models like negative binomial (for overdispersion) or consider the dependence structure in your model. Always verify that events are independent before applying Poisson models.
The Poisson distribution assumes that the average rate λ remains constant over the interval. Don't use Poisson distributions when λ varies significantly, such as during peak hours vs. off-peak hours, seasonal variations, or time-dependent rates. If the rate varies, consider using time-varying Poisson models, segmenting your analysis by periods with constant rates, or using other distributions that account for rate variation. Always verify that the rate is approximately constant before applying Poisson models.
P(X = k) is the probability of exactly k events, while P(X ≤ k) is the probability of k or fewer events (cumulative). Don't confuse these—P(X ≤ k) is always greater than or equal to P(X = k) because it includes all outcomes from 0 to k. For example, P(X = 7) might be 0.1044, while P(X ≤ 7) might be 0.8666. Make sure you select the correct query type ("Exact" vs "Cumulative Up To" vs "Cumulative From") based on whether your question asks for "exactly", "at most", or "at least".
The Poisson distribution models count data (non-negative integers: 0, 1, 2, 3, ...). Don't use Poisson distributions for continuous variables, negative values, or non-integer counts. For continuous variables, use appropriate continuous distributions like normal, exponential, or others. For negative values or non-integer counts, Poisson is not applicable. Always verify that your data represents counts of events before applying Poisson models.
The mean μ = λ is the expected value (average over many repetitions), not necessarily the most likely single outcome. Don't assume that the mean is always the outcome with the highest probability. The mode (most likely outcome) of a Poisson distribution is floor(λ); when λ is a positive integer, both λ and λ-1 are modes. For example, with λ=2.5, the mode is 2 (not 2.5, since counts must be integers). Check the distribution table to find the mode, which may differ from the mean, especially for non-integer or small λ.
For large λ (typically λ > 20), the Poisson distribution becomes approximately symmetric and can be approximated by a normal distribution with mean μ = λ and standard deviation σ = √λ. Don't use exact Poisson calculations when λ is very large if normal approximation would be faster and sufficiently accurate. Normal approximation is appropriate when λ > 20 and provides good approximations for large λ, making calculations easier and avoiding numerical precision issues with very large factorials.
Make sure your inputs are within valid ranges: λ must be a positive real number (0.01 to 1000 in this tool), and x must be a non-negative integer (0, 1, 2, 3, ...). Don't enter negative values, λ = 0, or non-integer x values. The tool validates inputs, but understanding valid ranges helps you avoid errors and interpret results correctly. Invalid inputs will produce errors or meaningless results. Lambda can be any positive number (including decimals like 2.5), but x must always be an integer.
After calculating a specific probability, review the complete distribution table to see probabilities for all possible outcomes (k = 0 to approximately λ + 6√λ). This helps you understand the shape of the distribution, identify the most likely outcomes (highest probabilities), see how probabilities change as k increases, and understand the relationship between exact and cumulative probabilities. The table provides context for your specific calculation and helps you interpret results in the broader distribution.
Use the mean μ = λ to understand the expected number of events, and the standard deviation σ = √λ to understand typical variability. About 68% of outcomes fall within one standard deviation of the mean (λ ± √λ), and about 95% fall within two standard deviations (λ ± 2√λ). This helps you understand not just the expected value, but also the range of likely outcomes. For example, if λ = 10 and σ = 3.16, you expect most outcomes to fall between 7 and 13 events.
The shape of the Poisson distribution depends on λ: when λ is small (< 5), the distribution is right-skewed with most probability near 0; when λ is moderate (5-20), the distribution becomes less skewed; when λ is large (> 20), the distribution becomes approximately symmetric and can be approximated by a normal distribution. Understanding the shape helps you interpret probabilities—for right-skewed distributions, probabilities are higher for lower values of k, while for symmetric distributions, probabilities are more balanced around the mean. Use the chart visualization to see the distribution shape.
For large λ (typically λ > 20), consider using normal approximation instead of exact Poisson calculations. The normal approximation uses mean μ = λ and standard deviation σ = √λ, and provides good approximations when λ is large. Normal approximation is faster, avoids numerical precision issues with very large factorials, and is justified by the Central Limit Theorem. Use normal approximation when λ > 20 and you need quick estimates, but use exact Poisson calculations when λ is small or when high precision is required.
Use complementary probabilities to verify calculations: P(X ≥ k) = 1 - P(X ≤ k-1) and P(X ≤ k) = 1 - P(X ≥ k+1). For example, if you calculate P(X ≤ 7) = 0.8666, you can verify by calculating P(X ≥ 8) = 1 - P(X ≤ 7) = 1 - 0.8666 = 0.1334, and checking that P(X ≤ 7) + P(X ≥ 8) = 1. This helps catch calculation errors and ensures probabilities sum correctly. Note that P(X ≤ k) + P(X ≥ k+1) = 1, not P(X ≤ k) + P(X ≥ k) = 1.
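This sanity check is easy to automate; a minimal Python sketch (illustrative names):

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    # P(X <= k) via the PMF recurrence P(X=i+1) = P(X=i) * lam/(i+1)
    term, total = math.exp(-lam), 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return total

lam = 5.0
p_le_7 = poisson_cdf(7, lam)      # P(X <= 7), about 0.8666
p_ge_8 = 1.0 - p_le_7             # P(X >= 8) by the complement rule
check = p_le_7 + p_ge_8           # must equal 1
```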
Remember that Poisson is a limiting case of the binomial distribution when n is large, p is small, and λ = n×p is moderate (Law of Rare Events). If you have a binomial scenario with large n and small p, you can approximate it using Poisson with λ = n×p. This approximation is useful when n is very large and exact binomial calculations are slow or numerically unstable. Use Poisson approximation when n ≥ 20, p ≤ 0.05, and n×p ≤ 10, or when n ≥ 100 and p ≤ 0.1.
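The Law of Rare Events can be demonstrated numerically by comparing binomial and Poisson PMFs term by term; a sketch in Python, assuming an arbitrary large-n, small-p example:

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

# Large n, small p: Binomial(1000, 0.005) vs Poisson(lam = n*p = 5)
n, p = 1000, 0.005
max_gap = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, n * p))
              for k in range(21))
```

The largest pointwise gap is well under 0.01, showing how good the approximation is in this regime.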
Read the interpretation summary provided by the tool, which explains what your calculated probability means in practical terms. The summary includes the mean and standard deviation, explains the calculated probability in context, and provides insights about distribution shape (right-skewed, symmetric, or approaching normal). This helps you understand not just the number, but what it means for your specific scenario and how to use it in decision-making. The summary also helps you understand whether your result is expected or unusual given the average rate.
• Constant Rate Assumption: The Poisson distribution assumes events occur at a constant average rate (λ) throughout the observation interval. Time-varying rates, seasonal patterns, or rate changes due to external factors violate this assumption and require non-homogeneous Poisson or other time-series models.
• Independence of Events: Events must occur independently—one event cannot trigger, prevent, or influence subsequent events. Clustering effects, contagion processes, or self-exciting phenomena (where one event increases the likelihood of another) violate this assumption.
• Equidispersion Requirement: The Poisson distribution requires that the variance equals the mean (σ² = λ). Real count data often exhibit overdispersion (variance > mean) or underdispersion (variance < mean), requiring negative binomial, quasi-Poisson, or other alternative models.
• Single-Event Occurrence: The model assumes events cannot occur simultaneously at exactly the same instant. Scenarios with batch arrivals, simultaneous occurrences, or events with duration may require compound Poisson or other specialized distributions.
Important Note: This calculator is strictly for educational and informational purposes only. It does not provide professional statistical consulting, operations research advice, or scientific conclusions. The Poisson model is an approximation—real-world arrival processes, count data, and rare events often deviate from model assumptions. Results should be verified using professional statistical software (R, Python SciPy, SAS, SPSS) for any research, queueing analysis, reliability engineering, or professional applications. For critical decisions in healthcare capacity planning, call center staffing, insurance claims modeling, or quality control, always consult qualified statisticians or operations researchers who can validate model assumptions and recommend appropriate methodologies.
The mathematical formulas and statistical concepts used in this calculator are based on established probability theory and standard statistical references.
Common questions about Poisson distributions, lambda parameter, PMF and CDF formulas, assumptions, relationship to other distributions, and how to use this calculator for homework and statistics practice.
The Poisson distribution models the number of times an event occurs in a fixed interval when events happen independently at a constant average rate. It's commonly used for: counting arrivals (customers, calls, emails), quality control (defects per batch), biology (bacteria counts), insurance (claims per period), and any scenario involving rare, random events in time or space.
Lambda (λ) is the average rate of events per interval—it's the expected number of occurrences. For example, if a call center receives an average of 10 calls per hour, λ = 10. Uniquely in the Poisson distribution, λ is both the mean and the variance of the distribution.
The Poisson distribution is appropriate when: (1) events occur one at a time, (2) events are independent of each other, (3) the average rate is constant, and (4) two events cannot occur simultaneously. If events are clustered, correlated, or the rate varies significantly, the Poisson model may not fit well.
P(X = k) is the probability of exactly k events occurring—no more, no less. P(X ≤ k) is the cumulative probability of k or fewer events, which includes all outcomes from 0 through k. For example, P(X = 3) might be 15%, while P(X ≤ 3) includes P(X=0) + P(X=1) + P(X=2) + P(X=3) and might be 65%.
The Poisson distribution is a limiting case of the binomial distribution. When you have many trials (large n), a small probability of success (small p), and a moderate expected count (λ = n×p stays constant), the binomial distribution approaches the Poisson. This is called the 'Law of Rare Events' or 'Poisson limit theorem.'
This is a unique property of the Poisson distribution called 'equidispersion.' It arises from the mathematical derivation of the distribution from the Poisson process. In practice, if your data shows variance significantly different from the mean, it suggests the Poisson model may not fit well (overdispersion or underdispersion).
For large λ (typically λ > 20), the Poisson distribution becomes approximately symmetric and bell-shaped, so you can use a normal distribution with mean μ = λ and standard deviation σ = √λ. This makes calculations easier and is justified by the Central Limit Theorem.
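The quality of the normal approximation for large λ can be checked directly; a sketch in Python comparing the exact PMF with the normal density (illustrative, without the continuity correction that is often applied in practice):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def normal_density(x: float, mu: float, sigma: float) -> float:
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# For lam = 25, compare exact PMF to the normal density at several k values
lam = 25.0
sigma = math.sqrt(lam)
gaps = [abs(poisson_pmf(k, lam) - normal_density(k, lam, sigma))
        for k in (20, 25, 30)]
```

At λ = 25 the pointwise gaps are already below 0.01; they shrink further as λ grows.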
When λ is small (< 5), the distribution is noticeably right-skewed with most probability mass near 0. When λ is large (> 20), the distribution becomes more symmetric and approaches a normal distribution. The calculator handles both cases, but very large λ may truncate the display for performance.
Yes! Lambda can be any positive real number. For example, if you observe 2.7 events per hour on average, λ = 2.7 is perfectly valid. The outputs k (number of events) must be non-negative integers, but the rate λ can be any positive number.
This calculator is an educational tool designed to help you understand the Poisson distribution and verify your work. While it provides accurate calculations, you should use it to learn the concepts and check your manual calculations, not as a substitute for understanding the material. Always verify important results independently.
Enter the average event rate (λ) and a target value to explore the Poisson distribution and calculate probabilities.