Z-Score / P-Value Calculator

Convert X to Z and find p-values, or get the critical Z from p. Supports standard and custom normal distributions with a shaded graph.

Last Updated: November 23, 2025

Understanding Z-Scores and P-Values

A z-score (also called a standard score) is a statistical measure that describes how many standard deviations a data point is from the mean of a distribution. It standardizes any value from a normal distribution N(μ, σ) to the standard normal distribution N(0, 1), making it possible to compare values from different distributions on a common scale. The formula for calculating a z-score is:

z = (x - μ) / σ

Where x is the raw score, μ (mu) is the population mean, and σ (sigma) is the population standard deviation. A z-score of 0 means the value equals the mean, a z-score of +1 means one standard deviation above the mean, and a z-score of -2 means two standard deviations below the mean. Large absolute values of z (e.g., |z| > 3) indicate the value is far from the mean and may be considered unusual or an outlier.
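
For readers who want to check the arithmetic themselves, here is a minimal sketch in Python using SciPy (not the calculator's own code); the score x = 85 with μ = 75 and σ = 10 is an illustrative assumption:

```python
from scipy.stats import norm

# Illustrative (assumed) values: a raw score from a population with known mu and sigma
x, mu, sigma = 85.0, 75.0, 10.0

z = (x - mu) / sigma               # how many standard deviations x lies from the mean
print(z)                           # 1.0 -> one standard deviation above the mean

# The same left-tail probability can be read off either distribution:
print(norm.cdf(x, loc=mu, scale=sigma))   # P(X <= x) under N(mu, sigma), ~0.8413
print(norm.cdf(z))                        # P(Z <= z) under N(0, 1),      ~0.8413
```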

What is a P-Value?

A p-value (probability value) is the probability, under the assumption that a specified statistical model (the null hypothesis) is true, of observing a test statistic at least as extreme as the value that was actually observed. In the context of z-scores and the normal distribution, the p-value represents the area under the standard normal curve in the tail(s) beyond the observed z-score.

P-values are fundamental to hypothesis testing and help us make decisions about whether observed data are consistent with a null hypothesis. A small p-value (typically < 0.05 or < 0.01) suggests that the observed result is unlikely under the null model, providing evidence against the null hypothesis and in favor of the alternative.

One-Tailed vs Two-Tailed P-Values

Left-Tailed (Lower-Tail) P-Value: This is the probability that a value is less than or equal to the observed z-score, calculated as P(Z ≤ z). It represents the area under the standard normal curve to the left of z. Use left-tailed tests when your alternative hypothesis states that the parameter is less than the null value (e.g., "the new process reduces defect rate").

Right-Tailed (Upper-Tail) P-Value: This is the probability that a value is greater than or equal to the observed z-score, calculated as P(Z ≥ z) = 1 - P(Z ≤ z). It represents the area under the curve to the right of z. Use right-tailed tests when your alternative hypothesis states that the parameter is greater than the null value (e.g., "the treatment increases average response time").

Two-Tailed P-Value: This represents the probability of observing a z-score at least as extreme in either direction from the mean. It's calculated as 2 × min(P(Z ≤ z), P(Z ≥ z)) or equivalently 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. Two-tailed tests are used when the alternative hypothesis is "not equal to" rather than directional (e.g., "the average differs from the claimed value" without specifying higher or lower).
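
The three tail conventions map directly onto the normal CDF and survival function. A short SciPy sketch, using an assumed z-score of 1.96 for illustration:

```python
from scipy.stats import norm

z = 1.96   # assumed observed z-score, for illustration

left_p  = norm.cdf(z)            # P(Z <= z), area to the left
right_p = norm.sf(z)             # P(Z >= z) = 1 - cdf(z); sf() is more accurate in the tail
two_p   = 2 * norm.sf(abs(z))    # 2 * (1 - Phi(|z|)), both tails combined

print(left_p, right_p, two_p)    # ~0.975, ~0.025, ~0.05
```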

What This Calculator Supports

Our Z-Score / P-Value Calculator offers multiple calculation modes to handle different statistical tasks:

  • X → Z → p: Given a raw score x, population mean μ, and standard deviation σ, compute the z-score and corresponding p-value for the selected tail type.
  • Z → p: Given a z-score directly, compute the p-value for left-tail, right-tail, or two-tailed tests. This is useful when working with standardized test statistics.
  • p → Critical Z: Given a desired significance level (p-value), find the critical z-value that defines the rejection region. This is essential for setting up hypothesis test thresholds.
  • Between-Bounds Probability: Enter lower and upper bounds (in z-scores or raw x values) to compute the probability that a value falls within that interval: P(lower ≤ Z ≤ upper) = Φ(upper) - Φ(lower).

The calculator works with both the standard normal distribution (μ = 0, σ = 1) for direct z-score calculations, and custom normal distributions with any specified μ and σ for converting raw scores to z-scores and probabilities.
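
The four modes reduce to a handful of CDF and inverse-CDF calls. The sketch below mirrors them in SciPy; the measurement values (μ = 50.0, σ = 0.3, x = 50.6, bounds 49.5 and 50.5) are assumptions chosen only for illustration:

```python
from scipy.stats import norm

mu, sigma = 50.0, 0.3   # assumed custom normal distribution (e.g., a part dimension in mm)

# X -> Z -> p: standardize a raw score, then take the right-tail p-value
x = 50.6
z = (x - mu) / sigma                 # 2.0
p_right = norm.sf(z)                 # ~0.0228

# Z -> p: two-tailed p-value straight from a z-score
p_two = 2 * norm.sf(abs(z))          # ~0.0455

# p -> critical Z: two-tailed alpha = 0.05 gives +/-1.96
z_crit = norm.ppf(1 - 0.05 / 2)      # ~1.96

# Between-bounds probability: P(49.5 <= X <= 50.5)
p_between = norm.cdf(50.5, mu, sigma) - norm.cdf(49.5, mu, sigma)   # ~0.904

print(z, p_right, p_two, z_crit, p_between)
```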

How to Use the Z-Score / P-Value Calculator

  1. Choose Your Calculation Mode: Select the type of calculation you need based on what information you have and what you're trying to find:
    • Z → p: If you already have a z-score (standardized value) and want to find the corresponding p-value (probability). This is common when working with test statistics from hypothesis tests.
    • p → Z: If you have a significance level or probability and need to find the critical z-value. For example, finding the z-value for α = 0.05 in a hypothesis test.
    • X → z → p: If you have a raw score x and the population parameters (μ and σ) and want to standardize it to a z-score and find the probability.
    • Between-bounds: If you want the probability that a value falls within an interval [lower, upper], entered either as z-scores or as raw x values.
  2. Select Distribution Type: Choose between:
    • Standard Normal (μ = 0, σ = 1): Use this when working directly with z-scores or when your data has already been standardized. This is the default for most statistical tables and is what you'll use for hypothesis testing with standardized test statistics.
    • Custom Normal (specify μ and σ): Use this when working with raw scores from a population with known mean and standard deviation. For example, if you're analyzing test scores with μ = 75 and σ = 10, or measurements with μ = 50.0 and σ = 0.3.
  3. Pick the Tail Type: Select the appropriate tail based on your hypothesis or question:
    • Left-tailed: P(Z ≤ z) — Use when testing if a value is significantly lower than expected (e.g., H₁: μ < μ₀).
    • Right-tailed: P(Z ≥ z) — Use when testing if a value is significantly higher than expected (e.g., H₁: μ > μ₀).
    • Two-tailed: P(|Z| ≥ |z|) = 2 × P(Z ≥ |z|) — Use when testing for any significant difference in either direction (e.g., H₁: μ ≠ μ₀). This is the most conservative choice and is commonly used in scientific research.
  4. Enter Values: Input the required values based on your selected mode:
    • For Z → p: Enter the z-score value.
    • For p → Z: Enter the p-value (probability) as a decimal between 0 and 1 (e.g., 0.05 for 5%).
    • For X → z → p: Enter the raw score x, mean μ, and standard deviation σ.
    • For between-bounds: Enter both lower and upper values (as z-scores or x values depending on distribution type).
  5. Click Calculate: Once you've entered all required values, click the Calculate button. The calculator will display:
    • An interactive bell curve chart with the relevant area shaded (left tail, right tail, both tails, or interval)
    • The z-score (if you entered x) or the x-value (if you entered z)
    • The p-value for the selected tail type
    • Critical values if you're finding z from p
    • For between-bounds, the probability that a value falls in that range
  6. Interpret and Use Results: Review the visual representation on the chart to confirm the shaded region matches your intended question. Use the numeric results for hypothesis testing (compare p-value to α), confidence interval construction (use critical z-values), or probability calculations. If available, use the share or copy link feature to save your calculation with all inputs and results encoded in the URL for easy reference or sharing with colleagues.
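
As a worked example of the full workflow (mode X → z → p, custom normal, right-tailed), the following SciPy sketch uses assumed values: an exam score of 92 from a population with μ = 75 and σ = 10, tested at α = 0.05. It is a sketch of the underlying math, not the calculator's implementation:

```python
from scipy.stats import norm

# Assumed walkthrough values: mode X -> z -> p, custom normal, right-tailed test
mu, sigma = 75.0, 10.0   # known population parameters (step 2)
x = 92.0                 # observed raw score (step 4)
alpha = 0.05             # chosen significance level

z = (x - mu) / sigma           # standardize the raw score
p_right = norm.sf(z)           # right-tailed p-value, P(Z >= z) (step 3)

print(round(z, 2), round(p_right, 4))                            # 1.7, ~0.0446
print("reject H0" if p_right < alpha else "fail to reject H0")   # reject H0 (step 6)
```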

Strategies & Tips

  • Match the Hypothesis to the Tail Type: Your alternative hypothesis determines which tail to use. One-sided tests (H₁: μ < μ₀ or H₁: μ > μ₀) use left or right tail respectively, while two-sided tests (H₁: μ ≠ μ₀) use two-tailed. Choosing the wrong tail can lead to incorrect conclusions. Always align your tail selection with the scientific or practical question you're asking.
  • Memorize Common Critical Values: For hypothesis testing and confidence intervals, certain z-values appear repeatedly:
    • 90% confidence (two-tailed, α = 0.10): z = ±1.645
    • 95% confidence (two-tailed, α = 0.05): z = ±1.96
    • 99% confidence (two-tailed, α = 0.01): z = ±2.576
    • One-tailed α = 0.05: z = 1.645 (right) or -1.645 (left)
    • One-tailed α = 0.01: z = 2.326 (right) or -2.326 (left)
    Knowing these values by heart speeds up hypothesis testing and allows you to quickly estimate rejection regions without a calculator.
  • Use Custom μ, σ for Raw Scores: When working with real-world measurements that haven't been standardized, use the custom distribution option. For example, if you're analyzing classroom exam scores with a known mean of 75 and standard deviation of 10, enter those values and your actual score to get the z-score and percentile (p-value). This converts your specific context to the universal standard normal scale.
  • Between-Bounds is Fastest for Interval Probabilities: Instead of manually computing Φ(upper) - Φ(lower), use the between-bounds feature by entering lower and upper values directly. This is especially useful for questions like "What's the probability a randomly selected value falls between 80 and 90?" or "What fraction of parts will be within specification limits of 49.5 to 50.5 mm?" The calculator handles the subtraction and shades the exact interval.
  • Small Samples or Unknown σ? Use t-Distribution: The z-distribution assumes you know the population standard deviation σ and works best with large samples (n ≥ 30). For small samples (n < 30) or when using the sample standard deviation s instead of σ, use the t-distribution calculator instead. The t-distribution has heavier tails and provides more conservative (wider) confidence intervals and higher p-values, accounting for the additional uncertainty from estimating σ. A brief z-versus-t comparison sketch follows this list.
  • P-Values Are Not Error Probabilities: A common misconception is that a p-value of 0.05 means there's a 5% chance the null hypothesis is true. This is incorrect. The p-value is the probability of observing data as extreme as yours if the null hypothesis were true, not the probability that the null is true given your data. Keep this distinction in mind when communicating results: "The data are unlikely under the null model" rather than "The null is unlikely."
  • Visualize Before Deciding: Always look at the shaded region on the bell curve to confirm it matches your intended question. If you selected left-tail but the shading is on the right, you may have made an input error. The visual feedback helps catch mistakes before you use the results in important decisions or reports.
  • Context Matters for Significance Levels: While α = 0.05 is conventional in many fields, it's not universal. Medical trials often use α = 0.01 for higher confidence, physics uses 5σ (p < 0.0000003) for discovery claims, and exploratory research might use α = 0.10. Choose your significance level based on the consequences of Type I error (false positive) vs Type II error (false negative) in your specific context.
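
To make the z-versus-t tip above concrete, here is a small comparison sketch in SciPy; the test statistic of 2.1 and the sample size n = 10 are assumptions chosen only for illustration:

```python
from scipy.stats import norm, t

# Assumed scenario: the same standardized test statistic from a small sample of n = 10
stat = 2.1
df = 10 - 1   # degrees of freedom when sigma is estimated by the sample s

p_z = 2 * norm.sf(abs(stat))     # two-tailed p-value using the normal (z) distribution, ~0.036
p_t = 2 * t.sf(abs(stat), df)    # two-tailed p-value using the t-distribution with 9 df

print(p_z, p_t)                  # p_t is noticeably larger: the t-test is more conservative
```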

Understanding Your Results

Z-Score Interpretation

The z-score tells you how many standard deviations a value is from the mean. A z-score of 0 means the value equals the mean. Positive z-scores indicate values above the mean, negative z-scores indicate values below. The magnitude tells you the distance: |z| = 1 is one standard deviation away, |z| = 2 is two standard deviations, and so on. By the empirical rule, about 68% of values fall within |z| < 1, 95% within |z| < 2, and 99.7% within |z| < 3. Values with |z| > 3 are rare and often considered outliers.

One-Tailed P-Values

Left-tail p-value: This is P(Z ≤ z), the area under the standard normal curve to the left of your z-score. It represents the probability of observing a value less than or equal to z under the standard normal model. For example, if z = -1.5 gives a left-tail p = 0.0668, there's about a 6.68% chance of getting a z-score of -1.5 or lower.

Right-tail p-value: This is P(Z ≥ z) = 1 - P(Z ≤ z), the area to the right of your z-score. It represents the probability of observing a value greater than or equal to z. For example, if z = 2.0 gives a right-tail p = 0.0228, there's about a 2.28% chance of getting a z-score of 2.0 or higher. This is commonly used in one-sided tests where you're looking for evidence of an increase.

Two-Tailed P-Value

The two-tailed p-value represents the probability of observing a z-score at least as extreme in either direction from the mean. It's calculated as 2 × (1 - Φ(|z|)), where Φ is the standard normal CDF. For example, if |z| = 1.96, the two-tailed p ≈ 0.05 (5%). This means there's a 5% combined chance of observing z ≤ -1.96 or z ≥ +1.96. Two-tailed p-values are used in non-directional hypothesis tests where you care about deviations in either direction, not just increases or decreases.

Critical Z from P-Value

When you input a p-value and select a tail type, the calculator solves the inverse problem: finding the z-value whose left-tail area equals p (left-tailed), whose right-tail area equals p (right-tailed), or the symmetric pair ±z whose combined tail area equals p (two-tailed). For example, entering p = 0.05 with two-tailed gives z = ±1.96, the critical values that define the rejection region for a 95% confidence interval or α = 0.05 hypothesis test. These critical values are the boundaries: if your test statistic falls beyond them, into a shaded tail, you reject the null hypothesis.
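
In SciPy terms, this inverse lookup is the percent-point (quantile) function. A brief sketch with an assumed p = 0.05:

```python
from scipy.stats import norm

p = 0.05   # assumed significance level

z_left  = norm.ppf(p)            # left-tailed critical value:  ~ -1.645
z_right = norm.isf(p)            # right-tailed critical value: ~ +1.645 (same as ppf(1 - p))
z_two   = norm.ppf(1 - p / 2)    # two-tailed critical values:  ~ +/-1.960

print(round(z_left, 3), round(z_right, 3), round(z_two, 3))
```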

Between Two Values

The between-bounds probability is P(a ≤ Z ≤ b) = Φ(b) - Φ(a), where Φ is the standard normal CDF. This tells you the fraction of the distribution that falls within the interval [a, b]. For example, P(-1 ≤ Z ≤ 1) ≈ 0.68 (68% of values), P(-1.96 ≤ Z ≤ 1.96) ≈ 0.95 (95%), and P(-2.576 ≤ Z ≤ 2.576) ≈ 0.99 (99%). When working with raw x values instead of z-scores, the calculator first standardizes them using z = (x - μ) / σ, then computes the interval probability.
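
A short SciPy sketch that reproduces these interval probabilities; the helper function and the raw-score bounds 80 and 90 with μ = 75, σ = 10 are assumptions for illustration:

```python
from scipy.stats import norm

def between(lo, hi, mu=0.0, sigma=1.0):
    """P(lo <= X <= hi) for X ~ N(mu, sigma): standardize, then Phi(upper) - Phi(lower)."""
    return norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)

print(round(between(-1, 1), 4))                     # ~0.6827
print(round(between(-1.96, 1.96), 4))               # ~0.9500
print(round(between(-2.576, 2.576), 4))             # ~0.9900
print(round(between(80, 90, mu=75, sigma=10), 4))   # raw-score interval, ~0.2417
```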

Shaded Chart Area

The interactive bell curve chart visually represents your calculation. The shaded area corresponds to the probability region:

  • Left-tail: shaded from the left edge up to your z-value
  • Right-tail: shaded from your z-value to the right edge
  • Two-tailed: shaded in both tails symmetrically around the mean
  • Between-bounds: shaded only in the interval between lower and upper values

The shaded area's size corresponds to the probability (p-value). Always verify the shading matches your intent before using the numeric results.

Numerical Precision and Limitations

Our calculator uses high-precision numerical approximations for the standard normal CDF (Φ) and its inverse, based on established statistical algorithms. The results are accurate to many decimal places and match published z-tables and statistical software like R, Python SciPy, and Excel. For typical statistical work (p-values down to 0.0001 or beyond), the accuracy is more than sufficient. Extremely far into the tails (probabilities below 10⁻¹⁰), tiny numerical errors may appear, but these are well beyond the range of practical statistical inference. The z-distribution assumes exact normality; if your data are not normally distributed, consider transformations, non-parametric methods, or bootstrapping.
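
The calculator's internal routines are not shown here, but the general point about far-tail precision can be illustrated with SciPy, where the survival function avoids the cancellation that 1 − Φ(z) suffers in double precision:

```python
from scipy.stats import norm

z = 10.0   # an extreme z-score, far beyond ordinary inference

print(1 - norm.cdf(z))   # 0.0 -- cdf(10) rounds to 1.0 in double precision, so the tail is lost
print(norm.sf(z))        # ~7.6e-24 -- the survival function keeps precision in the far tail
print(norm.logsf(z))     # ~-53.2 -- log of the tail probability, usable even further out
```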

Limitations & Assumptions

• Standard Normal Assumption: Z-scores and p-values are only meaningful when your underlying data follows a normal distribution. Non-normal data may produce misleading results—always verify normality before interpreting z-scores.

• Known Parameters: Z-score calculations assume you know the population mean (μ) and standard deviation (σ). For sample data with unknown population parameters, use the t-distribution instead.

• Hypothesis Testing Context: P-values indicate the probability of observing data as extreme as yours under the null hypothesis—they do NOT indicate the probability that the null hypothesis is true or false.

• Multiple Comparisons: When performing multiple hypothesis tests, p-values must be adjusted (Bonferroni, FDR, etc.) to control for increased false positive rates. This calculator does not perform such adjustments.

Important Note: This calculator is strictly for educational and informational purposes only. It does not provide professional statistical consulting, research validation, or scientific conclusions. P-values are frequently misinterpreted—a p-value below 0.05 does not prove an effect exists, and a p-value above 0.05 does not prove absence of effect. Statistical significance differs from practical significance. Results should be verified using professional statistical software (R, Python SciPy, SAS, SPSS) for any research, academic, or professional applications. Always consult qualified statisticians for important analytical decisions, especially in medical research, clinical trials, regulatory submissions, or any context where statistical conclusions have real-world consequences.

Frequently Asked Questions

Common questions about z-scores, p-values, tail types, hypothesis testing, and statistical accuracy.

What is a z-score and how do I interpret it?

A z-score is the number of standard deviations a value lies from the mean of a normal distribution. It standardizes any value using the formula z = (x - μ) / σ. A z-score of 0 means the value equals the mean, z = 1 means one standard deviation above the mean, z = -2 means two standard deviations below. Large absolute values (|z| > 2) indicate the value is far from the mean and relatively rare. By the empirical rule, about 95% of values fall within |z| < 2, and 99.7% within |z| < 3. Z-scores allow you to compare values from different distributions on a common scale and determine how unusual a value is.

What's the difference between left, right, and two-tailed p-values?

Left-tail p-value is P(Z ≤ z), the probability of getting a value less than or equal to z—used when testing if a parameter is lower than expected (H₁: μ < μ₀). Right-tail p-value is P(Z ≥ z), the probability of getting a value greater than or equal to z—used when testing if a parameter is higher than expected (H₁: μ > μ₀). Two-tailed p-value is 2 × min{P(Z ≤ z), P(Z ≥ z)} or equivalently 2 × (1 - Φ(|z|)), representing the probability of observing a value at least as extreme in either direction—used for non-directional tests (H₁: μ ≠ μ₀). The choice depends on your research hypothesis: one-sided hypotheses use left or right tail, while two-sided hypotheses use two-tailed.

How do I compute the area between two values?

To find the probability that a value falls between two bounds a and b, use the between-bounds feature by entering lower and upper values. The calculator computes P(a ≤ Z ≤ b) = Φ(b) - Φ(a), where Φ is the standard normal CDF. For example, to find what fraction of values fall between z = -1 and z = 1, enter lower = -1 and upper = 1 to get approximately 0.68 (68%). If working with raw scores instead of z-scores, enter your x-values along with μ and σ, and the calculator will standardize them first. The chart shades the exact interval, making it easy to visualize and communicate the result.

When should I use standard normal vs custom μ, σ?

Use the standard normal distribution (μ = 0, σ = 1) when you already have z-scores or standardized test statistics from hypothesis tests, or when you want to work with published z-tables and critical values. Use custom normal (specify μ and σ) when working with raw scores from a population with known mean and standard deviation—for example, test scores with μ = 75 and σ = 10, or manufacturing measurements with μ = 50.0 and σ = 0.3. The custom option automatically converts your raw x-values to z-scores using z = (x - μ) / σ, then computes probabilities. Both approaches yield the same probabilities for corresponding values; the difference is just the scale.

How do I interpret a p-value in hypothesis testing?

The p-value is the probability, assuming the null hypothesis is true, of observing a test statistic at least as extreme as the one you actually observed. It's NOT the probability that the null hypothesis is true. A small p-value (typically < 0.05 or < 0.01) suggests that your observed data would be unlikely under the null model, providing evidence against the null hypothesis. Compare the p-value to your chosen significance level α: if p < α, reject the null hypothesis; if p ≥ α, fail to reject. Remember that 'statistically significant' doesn't always mean 'practically important'—effect size and context matter. Also, failing to reject the null doesn't prove it's true; it just means you lack sufficient evidence against it.
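
The decision rule itself reduces to a single comparison. A minimal SciPy sketch with an assumed observed statistic z = 2.31 and α = 0.05:

```python
from scipy.stats import norm

alpha = 0.05    # chosen significance level (assumption for illustration)
z_obs = 2.31    # observed standardized test statistic (assumption)

p_two = 2 * norm.sf(abs(z_obs))    # two-tailed p-value, ~0.0209
print("reject H0" if p_two < alpha else "fail to reject H0")   # reject H0 (p < alpha)
```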

How accurate are the numerical computations?

We use high-precision numerical approximations for the standard normal CDF (Φ) and its inverse, based on well-established statistical algorithms similar to those in R, Python SciPy, and Excel. The results are accurate to many decimal places and suitable for all typical statistical work, including research, quality control, finance, and engineering. For p-values down to 0.0001 or even lower, the calculator matches published statistical tables. Extremely far into the tails (probabilities below 10⁻¹⁰), small numerical errors may appear, but these are well beyond the range of practical statistical inference. The z-distribution assumes perfect normality; if your data are not normally distributed, consider data transformations, non-parametric methods, or robust statistics.

Related Math & Statistics Tools

Explore other calculators to help with hypothesis testing, confidence intervals, regression analysis, and statistical computations.

Normal Distribution Calculator

Interactive bell curve with PDF/CDF calculations, z ↔ x conversion, quantiles, and between-bounds probabilities. Visualize shaded areas and compute probabilities for any normal distribution.

Confidence Interval Calculator

Calculate confidence intervals for population means with known or unknown standard deviation. Choose confidence levels (90%, 95%, 99%) and see margin of error with detailed explanations.

Sample Size Calculator

Determine required sample size for hypothesis tests and estimation based on statistical power, significance level (α), effect size, and desired precision.

Regression Analysis

Fit linear and polynomial regression models, compute R², residuals, coefficients, and standard errors. Visualize scatter plots with fitted lines and prediction intervals.

Descriptive Statistics Calculator

Compute mean, median, mode, standard deviation, variance, quartiles, range, and identify outliers. Visualize data distributions with histograms and box plots.

Probability Calculator

Calculate probabilities for common discrete and continuous distributions (binomial, Poisson, uniform, exponential, geometric) with interactive visualizations and cumulative functions.

Poisson Distribution Calculator

Calculate probabilities for count-based events occurring within a fixed interval. Model rare events and compute cumulative probabilities.

Error Propagation Calculator

Propagate measurement uncertainties through mathematical formulas. Calculate combined standard deviation for derived quantities.
