PDF(x) — Probability Density
The PDF value is the height of the bell curve at a specific point x. It is not a probability itself, but rather a density. The units are "probability per unit of x." For example, if x is measured in inches and PDF(x) = 0.12, that means the density is 0.12 per inch at that point. To get an actual probability, you integrate (sum) the PDF over an interval—this is what the CDF and between-bounds calculations do. The PDF is highest at the mean μ and decreases symmetrically as you move away from the mean in either direction.
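To make the density-vs-probability distinction concrete, here is a minimal sketch using SciPy (the μ = 70 in, σ = 3 in values are hypothetical, and our calculator's internals may differ):

```python
# Density vs. probability for a normal distribution, using SciPy.
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 70.0, 3.0  # hypothetical heights in inches

# The PDF value at x = 70 is a density ("probability per inch"), not a probability.
density = norm.pdf(70.0, loc=mu, scale=sigma)
print(density)  # ~0.133 per inch

# Integrating the PDF over an interval yields an actual probability.
prob, _ = quad(lambda x: norm.pdf(x, loc=mu, scale=sigma), 69.0, 71.0)
print(prob)  # ~0.261 = P(69 ≤ X ≤ 71)
```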
CDF(x) — Cumulative Probability
CDF(x) gives the probability that a random variable X is less than or equal to x: P(X ≤ x). This is the area under the PDF curve from negative infinity up to x. CDF values range from 0 to 1. A CDF of 0.5 at x means 50% of values fall below x (this occurs at x = μ for a normal distribution). A CDF of 0.975 means 97.5% of values fall below x, leaving 2.5% in the right tail. The right-tail probability is simply 1 - CDF(x).
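As a sketch of these relationships in code (assumed μ = 100, σ = 15; SciPy shown for illustration):

```python
from scipy.stats import norm

mu, sigma = 100.0, 15.0  # hypothetical distribution parameters

# CDF at the mean is 0.5: half of the values fall below μ.
print(norm.cdf(100.0, loc=mu, scale=sigma))  # 0.5

# At the value with CDF = 0.975, the right tail holds the remaining 2.5%.
x = norm.ppf(0.975, loc=mu, scale=sigma)     # ~129.4
print(1 - norm.cdf(x, loc=mu, scale=sigma))  # 0.025
print(norm.sf(x, loc=mu, scale=sigma))       # same right-tail value via the survival function
```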
Between-Bounds Probability
The probability that X falls between two values a and b is P(a ≤ X ≤ b) = CDF(b) - CDF(a). Our calculator computes this directly when you enter lower and upper bounds. The chart shades the interval from a to b, and the numeric output displays the probability. This is the most intuitive way to answer range questions like "What's the chance of scoring between 80 and 90 on the test?" or "What fraction of parts will fall within the tolerance zone?"
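A minimal sketch of the same calculation, assuming test scores with μ = 75 and σ = 8 (hypothetical values):

```python
from scipy.stats import norm

mu, sigma = 75.0, 8.0  # hypothetical test-score distribution

a, b = 80.0, 90.0
prob = norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)
print(prob)  # ~0.236 = P(80 ≤ X ≤ 90)
```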
z ↔ x Conversion
The z-score standardizes any value x to the number of standard deviations it is from the mean: z = (x - μ) / σ. The reverse conversion is x = μ + zσ. Our calculator handles both directions automatically. Use z when you want to compare values from different distributions (e.g., comparing scores from two different exams) or when working with published critical values. Use x when you need results in the original units (e.g., dollars, inches, test points). The probabilities are identical for corresponding z and x values—only the scale changes.
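The conversion is plain arithmetic; here is a sketch with assumed example values:

```python
mu, sigma = 100.0, 15.0  # hypothetical distribution parameters

def x_to_z(x: float, mu: float, sigma: float) -> float:
    """Standardize: number of standard deviations x lies from the mean."""
    return (x - mu) / sigma

def z_to_x(z: float, mu: float, sigma: float) -> float:
    """Convert a z-score back to the original units."""
    return mu + z * sigma

print(x_to_z(130.0, mu, sigma))  # 2.0
print(z_to_x(2.0, mu, sigma))    # 130.0
```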
Two-Tailed Probability
The two-tailed probability for a z-score represents the chance of observing a value at least as extreme in either direction from the mean. Mathematically, for a given |z|, the two-tailed p-value is p = 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. For example, z = 1.96 has a two-tailed probability of approximately 0.05 (5%), meaning there's a 5% chance of observing a z-score below -1.96 or above +1.96 under the null hypothesis. This is the basis for the classic 95% confidence interval and two-sided hypothesis tests at α = 0.05.
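The formula translates directly into code; a sketch using SciPy, where norm.sf computes 1 - Φ:

```python
from scipy.stats import norm

z = 1.96
# Two-tailed p-value: p = 2 × (1 - Φ(|z|)).
p_two_tailed = 2 * norm.sf(abs(z))
print(p_two_tailed)  # ~0.0500
```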
Quantile / Critical Value
The quantile (inverse CDF) function solves the reverse problem: given a probability p and a tail type, find the value x (or z) where the cumulative probability equals p. For example, the 95th percentile (p = 0.95, left-tail) returns the x value below which 95% of the distribution falls. In hypothesis testing, critical values are quantiles that define rejection regions. For a one-tailed test at α = 0.05, the critical z is approximately 1.645 (right-tail at p = 0.95). For a two-tailed test at α = 0.05, the critical z is ±1.96 (each tail has 2.5%).
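A sketch of these quantile lookups with SciPy's inverse CDF, norm.ppf (the μ = 500, σ = 100 example is hypothetical):

```python
from scipy.stats import norm

# One-tailed critical z at α = 0.05 (right tail): the 95th percentile.
print(norm.ppf(0.95))   # ~1.645

# Two-tailed critical z at α = 0.05: 2.5% in each tail.
print(norm.ppf(0.975))  # ~1.960

# The same percentile in original units, e.g. μ = 500, σ = 100.
print(norm.ppf(0.95, loc=500, scale=100))  # ~664.5
```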
Accuracy and Numerical Precision
Our calculator uses high-precision numerical approximations for the normal CDF (Φ) and its inverse, based on well-established statistical algorithms. These methods are accurate to many decimal places and suitable for all typical statistical work, including research, quality control, and finance. In the extreme tails (e.g., probabilities below 10⁻¹⁰), small numerical errors may appear, but for practical purposes (p-values down to 0.0001 and beyond) the results are reliable and match published statistical tables and software such as R, Python's SciPy, or Excel.
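As an illustration of why the extreme tails need care (shown with SciPy; our calculator's internals may differ), naively computing 1 - CDF(x) loses precision to floating-point cancellation, while a dedicated tail (survival) function does not:

```python
from scipy.stats import norm

x = 10.0
# 1 - CDF(x) underflows to 0 because CDF(10) rounds to exactly 1.0 in double precision.
print(1 - norm.cdf(x))  # 0.0
# The survival function computes the right tail directly and stays accurate.
print(norm.sf(x))       # ~7.62e-24
```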
Limitations and Assumptions
The normal distribution is appropriate when data are continuous, symmetric, and unimodal. Real-world data often deviate from perfect normality—skewed distributions, outliers, or heavy tails may require transformations or alternative distributions (e.g., log-normal, t-distribution for small samples, or non-parametric methods). Always verify the normality assumption using histograms, Q-Q plots, or formal tests (Shapiro-Wilk, Anderson-Darling) before relying on normal-based calculations for critical decisions. When the population standard deviation is unknown, especially with small samples (n < 30), use the t-distribution rather than the normal distribution for inference.
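A minimal sketch of one such formal check, running SciPy's Shapiro-Wilk test on a deliberately skewed (log-normal) sample:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # skewed, non-normal sample

stat, p_value = shapiro(data)
print(p_value)  # a small p-value is evidence against normality
```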