Z-Score / P-Value Calculator
Convert X to Z and find p-values, or get critical Z from p. Supports standard and custom normal distributions with shaded graph.
A z-score (also called a standard score) is a statistical measure that describes how many standard deviations a data point is from the mean of a distribution. It standardizes any value from a normal distribution N(μ, σ) to the standard normal distribution N(0, 1), making it possible to compare values from different distributions on a common scale. The formula for calculating a z-score is z = (x - μ) / σ.
Where x is the raw score, μ (mu) is the population mean, and σ (sigma) is the population standard deviation. A z-score of 0 means the value equals the mean, a z-score of +1 means one standard deviation above the mean, and a z-score of -2 means two standard deviations below the mean. Large absolute values of z (e.g., |z| > 3) indicate the value is far from the mean and may be considered unusual or an outlier.
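The formula above can be sketched in a few lines of Python. The example values (a raw score of 85 from a population with μ = 75 and σ = 10) are illustrative, not taken from the calculator itself:

```python
def z_score(x: float, mu: float, sigma: float) -> float:
    """Standardize a raw score x from N(mu, sigma) to N(0, 1)."""
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    return (x - mu) / sigma

# A score of 85 when the population mean is 75 and sigma is 10:
z = z_score(85, mu=75, sigma=10)
print(z)  # 1.0 -> one standard deviation above the mean
```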
A p-value (probability value) is the probability, under the assumption that a specified statistical model (the null hypothesis) is true, of observing a test statistic at least as extreme as the value that was actually observed. In the context of z-scores and the normal distribution, the p-value represents the area under the standard normal curve in the tail(s) beyond the observed z-score.
P-values are fundamental to hypothesis testing and help us make decisions about whether observed data are consistent with a null hypothesis. A small p-value (typically < 0.05 or < 0.01) suggests that the observed result is unlikely under the null model, providing evidence against the null hypothesis and in favor of the alternative.
Left-Tailed (Lower-Tail) P-Value: This is the probability that a value is less than or equal to the observed z-score, calculated as P(Z ≤ z). It represents the area under the standard normal curve to the left of z. Use left-tailed tests when your alternative hypothesis states that the parameter is less than the null value (e.g., "the new process reduces defect rate").
Right-Tailed (Upper-Tail) P-Value: This is the probability that a value is greater than or equal to the observed z-score, calculated as P(Z ≥ z) = 1 - P(Z ≤ z). It represents the area under the curve to the right of z. Use right-tailed tests when your alternative hypothesis states that the parameter is greater than the null value (e.g., "the treatment increases average response time").
Two-Tailed P-Value: This represents the probability of observing a z-score at least as extreme in either direction from the mean. It's calculated as 2 × min(P(Z ≤ z), P(Z ≥ z)) or equivalently 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. Two-tailed tests are used when the alternative hypothesis is "not equal to" rather than directional (e.g., "the average differs from the claimed value" without specifying higher or lower).
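The three tail types above can be sketched with nothing more than Python's standard library, using the identity Φ(z) = (1 + erf(z/√2)) / 2. This is a minimal illustration of the definitions, not the calculator's actual implementation:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_values(z: float) -> dict:
    """Left-, right-, and two-tailed p-values for an observed z."""
    left = phi(z)                  # P(Z <= z)
    right = 1.0 - left             # P(Z >= z)
    two = 2.0 * min(left, right)   # 2 x min of the two tails
    return {"left": left, "right": right, "two_tailed": two}

# For z = -1.5: left ≈ 0.0668, right ≈ 0.9332, two_tailed ≈ 0.1336
print(p_values(-1.5))
```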
Our Z-Score / P-Value Calculator offers multiple calculation modes to handle different statistical tasks:
The calculator works with both the standard normal distribution (μ = 0, σ = 1) for direct z-score calculations, and custom normal distributions with any specified μ and σ for converting raw scores to z-scores and probabilities.
The z-score tells you how many standard deviations a value is from the mean. A z-score of 0 means the value equals the mean; positive z-scores indicate values above the mean, and negative z-scores indicate values below it. The magnitude tells you the distance: |z| = 1 is one standard deviation away, |z| = 2 is two standard deviations, and so on. By the empirical rule, about 68% of values fall within |z| < 1, 95% within |z| < 2, and 99.7% within |z| < 3. Values with |z| > 3 are rare and often considered outliers.
Left-tail p-value: This is P(Z ≤ z), the area under the standard normal curve to the left of your z-score. It represents the probability of observing a value less than or equal to z under the standard normal model. For example, if z = -1.5 gives a left-tail p = 0.0668, there's about a 6.68% chance of getting a z-score of -1.5 or lower.
Right-tail p-value: This is P(Z ≥ z) = 1 - P(Z ≤ z), the area to the right of your z-score. It represents the probability of observing a value greater than or equal to z. For example, if z = 2.0 gives a right-tail p = 0.0228, there's about a 2.28% chance of getting a z-score of 2.0 or higher. This is commonly used in one-sided tests where you're looking for evidence of an increase.
The two-tailed p-value represents the probability of observing a z-score at least as extreme in either direction from the mean. It's calculated as 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. For example, if |z| = 1.96, the two-tailed p ≈ 0.05 (5%). This means there's a 5% combined chance of observing z ≤ -1.96 or z ≥ +1.96. Two-tailed p-values are used in non-directional hypothesis tests where you care about deviations in either direction, not just increases or decreases.
When you input a p-value and select a tail type, the calculator solves the inverse problem: finding the z-value where the cumulative probability equals p (or 1-p for right-tail, or the symmetric value for two-tailed). For example, entering p = 0.05 with two-tailed gives z = ±1.96, the critical values that define the rejection region for a 95% confidence interval or α = 0.05 hypothesis test. These critical values are the boundaries: if your test statistic exceeds them, you reject the null hypothesis.
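One simple way to sketch this inverse problem is bisection on the CDF. Production implementations use rational approximations for Φ⁻¹ rather than a search loop, so this is purely illustrative:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi_inv(p: float, lo: float = -10.0, hi: float = 10.0,
            tol: float = 1e-12) -> float:
    """Invert Phi by bisection: find z with Phi(z) = p.

    Phi is strictly increasing, so bisection on [lo, hi] converges.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two-tailed critical value for p = 0.05: z with Phi(z) = 1 - p/2
z_crit = phi_inv(1 - 0.05 / 2)
print(round(z_crit, 2))  # 1.96
```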
The between-bounds probability is P(a ≤ Z ≤ b) = Φ(b) - Φ(a), where Φ is the standard normal CDF. This tells you the fraction of the distribution that falls within the interval [a, b]. For example, P(-1 ≤ Z ≤ 1) ≈ 0.68 (68% of values), P(-1.96 ≤ Z ≤ 1.96) ≈ 0.95 (95%), and P(-2.576 ≤ Z ≤ 2.576) ≈ 0.99 (99%). When working with raw x values instead of z-scores, the calculator first standardizes them using z = (x - μ) / σ, then computes the interval probability.
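The between-bounds formula, including the standardization step for raw x values, can be sketched as follows (the N(75, 10) example is hypothetical):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def between(a: float, b: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """P(a <= X <= b) for X ~ N(mu, sigma): standardize, then Phi(zb) - Phi(za)."""
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    return phi(zb) - phi(za)

print(f"{between(-1, 1):.4f}")          # 0.6827
print(f"{between(-1.96, 1.96):.4f}")    # 0.9500
# Raw scores from N(75, 10): P(55 <= X <= 95) equals P(-2 <= Z <= 2)
print(f"{between(55, 95, mu=75, sigma=10):.4f}")
```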
The interactive bell curve chart visually represents your calculation. The shaded area corresponds to the probability region:
The shaded area's size corresponds to the probability (p-value). Always verify the shading matches your intent before using the numeric results.
Our calculator uses high-precision numerical approximations for the standard normal CDF (Φ) and its inverse, based on established statistical algorithms. The results are accurate to many decimal places and match published z-tables and statistical software like R, Python SciPy, and Excel. For typical statistical work (p-values down to 0.0001 or beyond), the accuracy is more than sufficient. Extremely far into the tails (probabilities below 10⁻¹⁰), tiny numerical errors may appear, but these are well beyond the range of practical statistical inference. The z-distribution assumes exact normality; if your data are not normally distributed, consider transformations, non-parametric methods, or bootstrapping.
• Standard Normal Assumption: Z-scores and p-values are only meaningful when your underlying data follow a normal distribution. Non-normal data may produce misleading results—always verify normality before interpreting z-scores.
• Known Parameters: Z-score calculations assume you know the population mean (μ) and standard deviation (σ). For sample data with unknown population parameters, use the t-distribution instead.
• Hypothesis Testing Context: P-values indicate the probability of observing data as extreme as yours under the null hypothesis—they do NOT indicate the probability that the null hypothesis is true or false.
• Multiple Comparisons: When performing multiple hypothesis tests, p-values must be adjusted (Bonferroni, FDR, etc.) to control for increased false positive rates. This calculator does not perform such adjustments.
Important Note: This calculator is strictly for educational and informational purposes only. It does not provide professional statistical consulting, research validation, or scientific conclusions. P-values are frequently misinterpreted—a p-value below 0.05 does not prove an effect exists, and a p-value above 0.05 does not prove absence of effect. Statistical significance differs from practical significance. Results should be verified using professional statistical software (R, Python SciPy, SAS, SPSS) for any research, academic, or professional applications. Always consult qualified statisticians for important analytical decisions, especially in medical research, clinical trials, regulatory submissions, or any context where statistical conclusions have real-world consequences.
The mathematical formulas and statistical concepts used in this calculator are based on established statistical theory and authoritative academic sources:
Common questions about z-scores, p-values, tail types, hypothesis testing, and statistical accuracy.
A z-score is the number of standard deviations a value lies from the mean of a normal distribution. It standardizes any value using the formula z = (x - μ) / σ. A z-score of 0 means the value equals the mean, z = 1 means one standard deviation above the mean, z = -2 means two standard deviations below. Large absolute values (|z| > 2) indicate the value is far from the mean and relatively rare. By the empirical rule, about 95% of values fall within |z| < 2, and 99.7% within |z| < 3. Z-scores allow you to compare values from different distributions on a common scale and determine how unusual a value is.
Left-tail p-value is P(Z ≤ z), the probability of getting a value less than or equal to z—used when testing if a parameter is lower than expected (H₁: μ < μ₀). Right-tail p-value is P(Z ≥ z), the probability of getting a value greater than or equal to z—used when testing if a parameter is higher than expected (H₁: μ > μ₀). Two-tailed p-value is 2 × min{P(Z ≤ z), P(Z ≥ z)} or equivalently 2 × (1 - Φ(|z|)), representing the probability of observing a value at least as extreme in either direction—used for non-directional tests (H₁: μ ≠ μ₀). The choice depends on your research hypothesis: one-sided hypotheses use left or right tail, while two-sided hypotheses use two-tailed.
To find the probability that a value falls between two bounds a and b, use the between-bounds feature by entering lower and upper values. The calculator computes P(a ≤ Z ≤ b) = Φ(b) - Φ(a), where Φ is the standard normal CDF. For example, to find what fraction of values fall between z = -1 and z = 1, enter lower = -1 and upper = 1 to get approximately 0.68 (68%). If working with raw scores instead of z-scores, enter your x-values along with μ and σ, and the calculator will standardize them first. The chart shades the exact interval, making it easy to visualize and communicate the result.
Use the standard normal distribution (μ = 0, σ = 1) when you already have z-scores or standardized test statistics from hypothesis tests, or when you want to work with published z-tables and critical values. Use custom normal (specify μ and σ) when working with raw scores from a population with known mean and standard deviation—for example, test scores with μ = 75 and σ = 10, or manufacturing measurements with μ = 50.0 and σ = 0.3. The custom option automatically converts your raw x-values to z-scores using z = (x - μ) / σ, then computes probabilities. Both approaches yield the same probabilities for corresponding values; the difference is just the scale.
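The equivalence of the two approaches can be demonstrated directly: a raw score of 85 on the hypothetical N(75, 10) distribution and a z-score of 1.0 on N(0, 1) yield the same cumulative probability, since only the scale differs:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """P(X <= x) for X ~ N(mu, sigma), via standardization."""
    return phi((x - mu) / sigma)

# Same probability whether we standardize first or not:
assert normal_cdf(85, mu=75, sigma=10) == phi(1.0)
print(f"{phi(1.0):.4f}")  # 0.8413
```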
The p-value is the probability, assuming the null hypothesis is true, of observing a test statistic at least as extreme as the one you actually observed. It's NOT the probability that the null hypothesis is true. A small p-value (typically < 0.05 or < 0.01) suggests that your observed data would be unlikely under the null model, providing evidence against the null hypothesis. Compare the p-value to your chosen significance level α: if p < α, reject the null hypothesis; if p ≥ α, fail to reject. Remember that 'statistically significant' doesn't always mean 'practically important'—effect size and context matter. Also, failing to reject the null doesn't prove it's true; it just means you lack sufficient evidence against it.
We use high-precision numerical approximations for the standard normal CDF (Φ) and its inverse, based on well-established statistical algorithms similar to those in R, Python SciPy, and Excel. The results are accurate to many decimal places and suitable for all typical statistical work, including research, quality control, finance, and engineering. For p-values down to 0.0001 or even lower, the calculator matches published statistical tables. Extremely far into the tails (probabilities below 10⁻¹⁰), small numerical errors may appear, but these are well beyond the range of practical statistical inference. The z-distribution assumes perfect normality; if your data are not normally distributed, consider data transformations, non-parametric methods, or robust statistics.
Explore other calculators to help with hypothesis testing, confidence intervals, regression analysis, and statistical computations.
Interactive bell curve with PDF/CDF calculations, z ↔ x conversion, quantiles, and between-bounds probabilities. Visualize shaded areas and compute probabilities for any normal distribution.
Calculate confidence intervals for population means with known or unknown standard deviation. Choose confidence levels (90%, 95%, 99%) and see margin of error with detailed explanations.
Determine required sample size for hypothesis tests and estimation based on statistical power, significance level (α), effect size, and desired precision.
Fit linear and polynomial regression models, compute R², residuals, coefficients, and standard errors. Visualize scatter plots with fitted lines and prediction intervals.
Compute mean, median, mode, standard deviation, variance, quartiles, range, and identify outliers. Visualize data distributions with histograms and box plots.
Calculate probabilities for common discrete and continuous distributions (binomial, Poisson, uniform, exponential, geometric) with interactive visualizations and cumulative functions.
Calculate probabilities for count-based events occurring within a fixed interval. Model rare events and compute cumulative probabilities.
Propagate measurement uncertainties through mathematical formulas. Calculate combined standard deviation for derived quantities.
Choose your calculation mode and enter your values above to compute z-scores, p-values, or critical values with visual representation.