Normal Distribution Calculator
Calculate Z-Score, PDF, and CDF of a normal distribution. Visualize the bell curve and shaded probability area.
The normal distribution, also called the Gaussian distribution, is the most fundamental continuous probability distribution in statistics. It models random variables that cluster around a central mean value μ (mu) with spread determined by the standard deviation σ (sigma). The distribution is perfectly symmetric and bell-shaped, with the familiar curve that appears everywhere from test scores to measurement errors to natural phenomena like height and weight distributions.
A normal distribution is fully characterized by just two parameters: the mean μ (which centers the distribution) and the standard deviation σ (which controls its width). The mathematical formula for the probability density function (PDF) is f(x) = (1 / (σ√(2π))) × e^(-(x-μ)²/(2σ²)), but you don't need to memorize this—our calculator handles all the complex math automatically.
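The PDF formula above is straightforward to evaluate directly. The sketch below (using Python's standard-library statistics.NormalDist; the μ = 100, σ = 15 values are just illustrative) writes out the formula by hand and cross-checks it against the built-in implementation:

```python
import math
from statistics import NormalDist

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Evaluate f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)**2 / (2 * sigma**2))."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Cross-check the hand-written formula against the standard library
d = NormalDist(mu=100, sigma=15)
print(normal_pdf(115, mu=100, sigma=15))  # density one sigma above the mean
print(d.pdf(115))                         # same value from statistics.NormalDist
```

Both lines print the same density, which is why you don't need to memorize the formula to use it.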
PDF (Probability Density Function): The PDF gives the height of the bell curve at any point x. The peak occurs at the mean μ, and the curve is symmetric on both sides. The total area under the curve equals 1, representing 100% probability. While the PDF height itself is not a probability, the area under the curve between two points represents the probability of observing a value in that range.
CDF (Cumulative Distribution Function): The CDF at a point x gives the probability that a random variable X is less than or equal to x, written as P(X ≤ x). It represents the area under the PDF curve from negative infinity up to x. The CDF starts at 0 (for x far below the mean) and approaches 1 (for x far above the mean). The CDF is the integral of the PDF and is what you use to answer "what's the probability of getting a value below x?"
z-Score (Standard Score): The z-score is a standardized measure that tells you how many standard deviations a value x is away from the mean. The formula is z = (x - μ) / σ. A z-score of 0 means the value equals the mean; z = 1 means one standard deviation above the mean; z = -2 means two standard deviations below it. Converting to z-scores allows you to compare values from different normal distributions on a common scale—the standard normal distribution with μ = 0 and σ = 1.
Quantile / Inverse CDF: This is the reverse operation of the CDF. Given a probability p (often called a percentile or tail area), the quantile function returns the value x (or z) where the cumulative probability equals p. For example, the 95th percentile returns the value below which 95% of the data falls. This is essential for finding critical values in hypothesis testing and constructing confidence intervals.
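The four quantities defined above (PDF, CDF, z-score, quantile) can all be computed in a few lines. This sketch models exam scores as N(70, 10) purely for illustration, using Python's standard-library statistics.NormalDist (the zscore method needs Python 3.9+):

```python
from statistics import NormalDist

# Exam scores modeled as N(mu=70, sigma=10); the numbers are illustrative
d = NormalDist(mu=70, sigma=10)

print(d.pdf(70))        # curve height at the mean (the peak)
print(d.cdf(80))        # P(X <= 80), the left-tail area
print(d.zscore(80))     # (80 - 70) / 10 = 1.0
print(d.inv_cdf(0.95))  # 95th percentile: the score below which 95% fall
```

Note that inv_cdf undoes cdf: feeding the CDF's output back through the quantile function returns the original x.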
Left-Tail (Lower-Tail): The probability that X is less than or equal to a specific value x. This is simply the CDF value: P(X ≤ x). Use this when you care about values below a threshold, such as "What's the probability a student scores below 70?"
Right-Tail (Upper-Tail): The probability that X is greater than a specific value x, written as P(X > x). This equals 1 - CDF(x). Use this when you care about values above a threshold, such as "What's the probability a measurement exceeds the upper specification limit?"
Two-Tailed: The probability that X falls outside a symmetric interval around the mean, either below -|z| or above +|z|. For a given z-score, the two-tailed probability is p = 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. This is commonly used in hypothesis testing when you care about deviations in either direction, such as "Is this result significantly different from the expected mean?"
Between-Bounds (Interval Probability): The probability that X falls between two values a and b, calculated as P(a ≤ X ≤ b) = CDF(b) - CDF(a). Our calculator supports this directly—just enter lower and upper bounds to see the shaded area and numeric probability without manual subtraction.
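All four tail types reduce to one or two CDF evaluations. A minimal sketch, again using statistics.NormalDist with illustrative parameters:

```python
from statistics import NormalDist

std = NormalDist()  # standard normal: mu=0, sigma=1
z = 1.5

left  = std.cdf(z)                      # left-tail:  P(Z <= 1.5)
right = 1.0 - std.cdf(z)                # right-tail: P(Z > 1.5)
two   = 2.0 * (1.0 - std.cdf(abs(z)))   # two-tailed: P(Z <= -1.5 or Z >= 1.5)

# Between-bounds: P(a <= X <= b) = CDF(b) - CDF(a)
d = NormalDist(mu=50, sigma=5)
between = d.cdf(55) - d.cdf(45)         # probability within mu +/- one sigma

print(left, right, two, between)
```

The left- and right-tail probabilities always sum to 1, and the between-bounds result for μ ± σ lands at about 0.68, matching the empirical rule discussed next.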
For any normal distribution, approximately 68% of values fall within one standard deviation of the mean (μ ± σ), 95% within two standard deviations (μ ± 2σ), and 99.7% within three standard deviations (μ ± 3σ). This rule provides quick mental estimates: if you know μ and σ, you can immediately gauge where most of the data will lie. Values beyond ±3σ are rare (less than 0.3% probability) and often considered outliers in quality control or scientific measurements.
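The 68–95–99.7 figures fall straight out of the CDF, since coverage in z units is the same for every normal distribution:

```python
from statistics import NormalDist

std = NormalDist()  # coverage within k sigma is identical for any mu, sigma
for k in (1, 2, 3):
    coverage = std.cdf(k) - std.cdf(-k)  # P(mu - k*sigma <= X <= mu + k*sigma)
    print(f"within {k} sigma: {coverage:.4f}")
```

This prints 0.6827, 0.9545, and 0.9973, confirming the rule's rounded values.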
The PDF value is the height of the bell curve at a specific point x. It is not a probability itself, but rather a density. The units are "probability per unit of x." For example, if x is measured in inches and PDF(x) = 0.12, that means the density is 0.12 per inch at that point. To get an actual probability, you integrate (sum) the PDF over an interval—this is what the CDF and between-bounds calculations do. The PDF is highest at the mean μ and decreases symmetrically as you move away from the mean in either direction.
CDF(x) gives the probability that a random variable X is less than or equal to x: P(X ≤ x). This is the area under the PDF curve from negative infinity up to x. CDF values range from 0 to 1. A CDF of 0.5 at x means 50% of values fall below x (this occurs at x = μ for a normal distribution). A CDF of 0.975 means 97.5% of values fall below x, leaving 2.5% in the right tail. The right-tail probability is simply 1 - CDF(x).
The probability that X falls between two values a and b is P(a ≤ X ≤ b) = CDF(b) - CDF(a). Our calculator computes this directly when you enter lower and upper bounds. The chart shades the interval from a to b, and the numeric output displays the probability. This is the most intuitive way to answer range questions like "What's the chance of scoring between 80 and 90 on the test?" or "What fraction of parts will fall within the tolerance zone?"
The z-score standardizes any value x to the number of standard deviations it is from the mean: z = (x - μ) / σ. The reverse conversion is x = μ + zσ. Our calculator handles both directions automatically. Use z when you want to compare values from different distributions (e.g., comparing scores from two different exams) or when working with published critical values. Use x when you need results in the original units (e.g., dollars, inches, test points). The probabilities are identical for corresponding z and x values—only the scale changes.
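The two conversion formulas, and the fact that probabilities match on either scale, can be checked directly (the μ = 120, σ = 8 values below are made up for illustration):

```python
from statistics import NormalDist

mu, sigma = 120, 8
d = NormalDist(mu, sigma)
std = NormalDist()          # standard normal

x = 132
z = (x - mu) / sigma        # forward: standardize to z
x_back = mu + z * sigma     # reverse: return to original units

print(z, x_back)            # 1.5 132.0
print(d.cdf(x))             # P(X <= 132) on the original scale
print(std.cdf(z))           # P(Z <= 1.5) -- identical probability
```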
The two-tailed probability for a z-score represents the chance of observing a value at least as extreme in either direction from the mean. Mathematically, for a given |z|, the two-tailed p-value is p = 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. For example, z = 1.96 has a two-tailed probability of approximately 0.05 (5%), meaning there's a 5% chance of observing a z-score below -1.96 or above +1.96 under the null hypothesis. This is the basis for the classic 95% confidence interval and two-sided hypothesis tests at α = 0.05.
The quantile (inverse CDF) function solves the reverse problem: given a probability p and a tail type, find the value x (or z) where the cumulative probability equals p. For example, the 95th percentile (p = 0.95, left-tail) returns the x value below which 95% of the distribution falls. In hypothesis testing, critical values are quantiles that define rejection regions. For a one-tailed test at α = 0.05, the critical z is approximately 1.645 (right-tail at p = 0.95). For a two-tailed test at α = 0.05, the critical z is ±1.96 (each tail has 2.5%).
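The critical values quoted above come straight from the inverse CDF. A sketch for α = 0.05, using statistics.NormalDist:

```python
from statistics import NormalDist

std = NormalDist()
alpha = 0.05

one_tailed = std.inv_cdf(1 - alpha)       # right-tail critical z
two_tailed = std.inv_cdf(1 - alpha / 2)   # each tail holds alpha/2

print(f"one-tailed critical z: {one_tailed:.3f}")    # ~1.645
print(f"two-tailed critical z: +/-{two_tailed:.3f}") # ~1.960
```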
Our calculator uses high-precision numerical approximations for the normal CDF (Φ) and its inverse, based on well-established statistical algorithms. These methods are accurate to many decimal places and suitable for all typical statistical work, including research, quality control, and finance. Extremely far into the tails (e.g., probabilities below 10⁻¹⁰), small numerical errors may appear, but for practical purposes (p-values down to 0.0001 or beyond), the results are reliable and match published statistical tables and software like R, Python SciPy, or Excel.
The normal distribution is appropriate when data are continuous, symmetric, and unimodal. Real-world data often deviate from perfect normality—skewed distributions, outliers, or heavy tails may require transformations or alternative distributions (e.g., log-normal, t-distribution for small samples, or non-parametric methods). Always verify the normality assumption using histograms, Q-Q plots, or formal tests (Shapiro-Wilk, Anderson-Darling) before relying on normal-based calculations for critical decisions. For small sample sizes (n < 30) or unknown population standard deviation, use the t-distribution instead of the normal distribution for inference.
• Normality Assumption: This calculator assumes your data follows a normal (Gaussian) distribution. Real-world data often deviate from perfect normality due to skewness, outliers, or heavy tails. Always verify normality using histograms, Q-Q plots, or formal tests (Shapiro-Wilk, Anderson-Darling) before relying on results.
• Population vs. Sample: Results assume you know the true population mean (μ) and standard deviation (σ). If working with sample data, use sample statistics and consider the t-distribution for small samples (n < 30).
• Numerical Precision: Calculations use high-precision algorithms accurate to many decimal places. Extremely far into the tails (probabilities below 10⁻¹⁰), small numerical errors may appear, but results are reliable for all practical statistical work.
• Independence: Statistical inference using normal distribution assumes observations are independent. Correlated or dependent data require specialized methods.
Important Note: This calculator is strictly for educational and informational purposes only. It does not provide professional statistical consulting, research validation, or decision-making guidance. The normal distribution model is a mathematical abstraction—real-world phenomena may not perfectly follow this distribution. Results should be verified independently using professional statistical software (R, Python SciPy, SAS, SPSS) for any research, academic, business, or critical applications. Always consult with qualified statisticians or data scientists for important analytical decisions. This tool cannot account for data quality issues, sampling bias, measurement error, or domain-specific considerations that affect real statistical analyses.
The mathematical formulas and statistical concepts used in this calculator are based on established statistical theory and authoritative academic sources:
Common questions about normal distribution calculations, PDF vs CDF, z-scores, tail probabilities, and statistical accuracy.
The PDF (Probability Density Function) is the height of the bell curve at a specific point x—it measures density, not probability. The CDF (Cumulative Distribution Function) is the probability that X is less than or equal to x, calculated as the area under the PDF curve from negative infinity up to x. While the PDF gives you the curve's shape, the CDF gives you actual probabilities. For example, if CDF(85) = 0.84, there's an 84% chance the value is 85 or below. Areas under the PDF correspond to probabilities, while the CDF returns that area directly.
Right-tail probability is the chance that X is greater than x, calculated as 1 - CDF(x). For example, if CDF(90) = 0.95, then P(X > 90) = 1 - 0.95 = 0.05 (5% right-tail). Two-tailed probability for a z-score represents extreme values in either direction from the mean. For a given |z|, the two-tailed p-value is 2 × (1 - CDF(|z|)). For instance, z = 1.96 has a two-tailed probability of approximately 0.05, meaning there's a 5% chance of observing z ≤ -1.96 or z ≥ +1.96. Our calculator provides tail type options and shades the corresponding area on the chart.
Use z-scores (standardized scale with mean 0, standard deviation 1) when you want to compare values from different normal distributions or work with published critical values (e.g., z = 1.96 for 95% confidence). Use x (original scale) when you need results in the actual units of your data (e.g., test points, dollars, millimeters). The probabilities are identical—only the scale changes. The conversion is z = (x - μ) / σ or x = μ + zσ. If you're working directly with z-scores from a research paper or textbook, set μ = 0 and σ = 1 in the calculator, or use z-mode if available.
Yes. Enter lower and upper bounds to compute P(lower ≤ X ≤ upper) = CDF(upper) - CDF(lower). The calculator shades the interval on the bell curve and displays the numeric probability directly, so you don't need to manually subtract CDF values. This is the most intuitive way to answer range questions like 'What's the probability a score is between 70 and 85?' or 'What fraction of measurements fall within the tolerance zone of 49.5 to 50.5?' The chart visually confirms the shaded region, making it easy to interpret and communicate results.
Yes. Given a test statistic z (or convert x to z using z = (x - μ) / σ), compute the one-tailed or two-tailed p-value as appropriate for your hypothesis test. For a one-tailed test, the p-value is the right-tail or left-tail probability (depending on the alternative hypothesis). For a two-tailed test, the p-value is 2 × (1 - CDF(|z|)). Compare the p-value to your significance level α (commonly 0.05) to decide whether to reject the null hypothesis. Note: For small samples (n < 30) or unknown population σ, use the t-distribution calculator instead of the normal distribution for more accurate inference.
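The steps above can be sketched as a complete one-sample z-test. All of the numbers here (μ₀ = 100, σ = 12, n = 36, sample mean 103) are hypothetical:

```python
from statistics import NormalDist

# Hypothetical one-sample z-test: is a sample mean of 103 consistent with
# a population of mu=100, sigma=12, given n=36?
mu0, sigma, n, xbar = 100, 12, 36, 103
z = (xbar - mu0) / (sigma / n ** 0.5)   # test statistic: z = 1.5

std = NormalDist()
p_two_tailed = 2 * (1 - std.cdf(abs(z)))  # two-sided p-value
p_right = 1 - std.cdf(z)                  # one-sided (right-tail) p-value

alpha = 0.05
print(f"z = {z:.2f}, two-tailed p = {p_two_tailed:.4f}")
print("reject H0" if p_two_tailed < alpha else "fail to reject H0")
```

Here z = 1.50 gives a two-tailed p-value of about 0.134, so at α = 0.05 the null hypothesis is not rejected.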
We use high-precision numerical approximations for the normal CDF (Φ) and its inverse, based on well-established statistical algorithms (similar to those in R, Python SciPy, and Excel). The results are accurate to many decimal places and suitable for all typical statistical work, including research, quality control, finance, and engineering. For probabilities down to 0.0001 or even lower, the accuracy matches published statistical tables. Extremely far into the tails (probabilities below 10⁻¹⁰), small numerical errors may appear, but for practical purposes, the calculator is highly reliable and exceeds the precision needed for most applications.
Explore other calculators to help with hypothesis testing, confidence intervals, regression analysis, and statistical computations.
Convert z-scores to p-values for one-tailed and two-tailed tests. Find critical z-values for common significance levels (0.05, 0.01) and compute exact probabilities.
Calculate confidence intervals for population means with known or unknown standard deviation. Choose confidence levels (90%, 95%, 99%) and see margin of error.
Determine required sample size for hypothesis tests and estimation based on power, significance level (α), effect size, and desired precision.
Fit linear and polynomial regression models, compute R², residuals, and coefficients. Visualize scatter plots with fitted lines and prediction intervals.
Compute mean, median, mode, standard deviation, variance, quartiles, and outliers. Visualize distributions with histograms and box plots.
Calculate probabilities for common discrete and continuous distributions (binomial, Poisson, uniform, exponential) with interactive visualizations.
Access essential scientific calculators including stats quick calc, molarity, and percentage calculators for everyday calculations.
Enter your values above to calculate the normal distribution probabilities and visualize the bell curve.
You'll get: