Z-Score Interpretation
The z-score tells you how many standard deviations a value is from the mean. A z-score of 0 means the value equals the mean. Positive z-scores indicate values above the mean; negative z-scores indicate values below it. The magnitude gives the distance: |z| = 1 is one standard deviation away, |z| = 2 is two standard deviations, and so on. By the empirical rule, about 68% of values fall within |z| < 1, 95% within |z| < 2, and 99.7% within |z| < 3. Values with |z| > 3 are rare under a normal model and are often flagged as outliers.
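As a quick illustration, here is a small Python sketch (assuming SciPy is available; the sample numbers are made up) that standardizes a raw value and checks the empirical-rule percentages:

```python
from scipy.stats import norm

# Hypothetical example: raw score of 82 with population mean 70 and SD 8
x, mu, sigma = 82, 70, 8
z = (x - mu) / sigma          # z = 1.5: the value is 1.5 SDs above the mean
print(z)                      # 1.5

# Empirical rule: probability mass within 1, 2, and 3 SDs of the mean
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print(k, round(p, 4))     # ~0.6827, ~0.9545, ~0.9973
```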
One-Tailed P-Values
Left-tail p-value: This is P(Z ≤ z), the area under the standard normal curve to the left of your z-score. It represents the probability of observing a value less than or equal to z under the standard normal model. For example, if z = -1.5 gives a left-tail p = 0.0668, there's about a 6.68% chance of getting a z-score of -1.5 or lower.
Right-tail p-value: This is P(Z ≥ z) = 1 - P(Z ≤ z), the area to the right of your z-score. It represents the probability of observing a value greater than or equal to z. For example, if z = 2.0 gives a right-tail p = 0.0228, there's about a 2.28% chance of getting a z-score of 2.0 or higher. This is commonly used in one-sided tests where you're looking for evidence of an increase.
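A minimal sketch of both one-tailed calculations, using SciPy's `norm` for illustration (not necessarily what the calculator uses internally):

```python
from scipy.stats import norm

# Left-tail: P(Z <= -1.5)
print(norm.cdf(-1.5))      # ~0.0668

# Right-tail: P(Z >= 2.0); norm.sf is the survival function, 1 - CDF
print(norm.sf(2.0))        # ~0.0228
print(1 - norm.cdf(2.0))   # same value, written as the complement
```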
Two-Tailed P-Value
The two-tailed p-value represents the probability of observing a z-score at least as extreme in either direction from the mean. It's calculated as 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. For example, if |z| = 1.96, the two-tailed p ≈ 0.05 (5%). This means there's a 5% combined chance of observing z ≤ -1.96 or z ≥ +1.96. Two-tailed p-values are used in non-directional hypothesis tests where you care about deviations in either direction, not just increases or decreases.
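The same formula in a short SciPy sketch, reproducing the |z| = 1.96 example:

```python
from scipy.stats import norm

z = 1.96
p_two_tailed = 2 * (1 - norm.cdf(abs(z)))   # 2 * (1 - Φ(|z|))
print(round(p_two_tailed, 4))               # ~0.05

# Equivalently, using the survival function for the upper tail:
print(round(2 * norm.sf(abs(z)), 4))        # ~0.05
```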
Critical Z from P-Value
When you input a p-value and select a tail type, the calculator solves the inverse problem: finding the z-value at which the cumulative probability equals p for a left tail, 1 - p for a right tail, or 1 - p/2 for the two-tailed case (which yields the symmetric pair ±z). For example, entering p = 0.05 with two-tailed gives z = ±1.96, the critical values that define the rejection region for a 95% confidence interval or an α = 0.05 hypothesis test. These critical values are the boundaries: if your test statistic falls beyond them (i.e., |z| exceeds the critical value), you reject the null hypothesis.
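A sketch of the inverse calculation using SciPy's quantile function `norm.ppf` (shown only to illustrate the convention; the calculator's internal routine may differ):

```python
from scipy.stats import norm

alpha = 0.05

# Left-tail critical value: z such that Φ(z) = alpha
print(norm.ppf(alpha))          # ~ -1.645

# Right-tail critical value: z such that Φ(z) = 1 - alpha
print(norm.ppf(1 - alpha))      # ~ +1.645

# Two-tailed critical values: ±z such that Φ(z) = 1 - alpha/2
z_crit = norm.ppf(1 - alpha / 2)
print(-z_crit, z_crit)          # ~ -1.96, +1.96
```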
Between Two Values
The between-bounds probability is P(a ≤ Z ≤ b) = Φ(b) - Φ(a), where Φ is the standard normal CDF. This tells you the fraction of the distribution that falls within the interval [a, b]. For example, P(-1 ≤ Z ≤ 1) ≈ 0.68 (68% of values), P(-1.96 ≤ Z ≤ 1.96) ≈ 0.95 (95%), and P(-2.576 ≤ Z ≤ 2.576) ≈ 0.99 (99%). When working with raw x values instead of z-scores, the calculator first standardizes them using z = (x - μ) / σ, then computes the interval probability.
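The formula, including the standardization step for raw x values, in a short Python sketch (the mean and SD in the last example are hypothetical):

```python
from scipy.stats import norm

def prob_between(a, b, mu=0.0, sigma=1.0):
    """P(a <= X <= b) for X ~ Normal(mu, sigma); raw bounds are standardized first."""
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    return norm.cdf(zb) - norm.cdf(za)       # Φ(b) - Φ(a)

print(round(prob_between(-1, 1), 4))          # ~0.6827
print(round(prob_between(-1.96, 1.96), 4))    # ~0.95

# Raw-value example (hypothetical mean 100, SD 15): P(85 <= X <= 130)
print(round(prob_between(85, 130, mu=100, sigma=15), 4))   # ~0.8186
```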
Shaded Chart Area
The interactive bell curve chart visually represents your calculation. The shaded area corresponds to the probability region:
- Left-tail: shaded from the left edge up to your z-value
- Right-tail: shaded from your z-value to the right edge
- Two-tailed: shaded in both tails symmetrically around the mean
- Between-bounds: shaded only in the interval between lower and upper values
The shaded area's size corresponds to the probability (p-value). Always verify the shading matches your intent before using the numeric results.
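To make the mapping between shaded regions and probabilities concrete, here is a hypothetical helper (a sketch, not the calculator's actual code) that returns the shaded probability for each mode:

```python
from scipy.stats import norm

def shaded_probability(mode, z=None, lower=None, upper=None):
    """Probability covered by the shaded region in each chart mode (sketch)."""
    if mode == "left":
        return norm.cdf(z)                        # area to the left of z
    if mode == "right":
        return norm.sf(z)                         # area to the right of z
    if mode == "two-tailed":
        return 2 * norm.sf(abs(z))                # both tails beyond ±|z|
    if mode == "between":
        return norm.cdf(upper) - norm.cdf(lower)  # area inside [lower, upper]
    raise ValueError(f"unknown mode: {mode}")

print(round(shaded_probability("two-tailed", z=1.96), 4))           # ~0.05
print(round(shaded_probability("between", lower=-1, upper=1), 4))   # ~0.68
```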
Numerical Precision and Limitations
Our calculator uses high-precision numerical approximations for the standard normal CDF (Φ) and its inverse, based on established statistical algorithms. The results are accurate to many decimal places and match published z-tables and statistical software such as R, Python's SciPy, and Excel. For typical statistical work (p-values down to 0.0001 or beyond), the accuracy is more than sufficient. For probabilities extremely far into the tails (below about 10⁻¹⁰), tiny numerical errors may appear, but these are well beyond the range of practical statistical inference. The z-distribution assumes exact normality; if your data are not normally distributed, consider transformations, non-parametric methods, or bootstrapping.
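As an illustration of the tail-precision point (a SciPy sketch, not a statement about this calculator's internals): computing a far-right-tail probability as 1 - Φ(z) in floating point loses the result to cancellation, while a dedicated survival function preserves it.

```python
from scipy.stats import norm

z = 9.0
print(1 - norm.cdf(z))   # 0.0 — the complement underflows due to cancellation
print(norm.sf(z))        # ~1.13e-19 — the survival function keeps the precision
```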