Understanding Bayesian Updating
What is Bayesian Inference?
Bayesian inference is a method of statistical inference where you update your beliefs about a parameter as you observe new data. It combines your prior belief (what you thought before seeing data) with the likelihood (how probable the data is given different parameter values) to produce a posterior belief (your updated belief after seeing data).
Bayes' Theorem
The posterior distribution is proportional to the product of the prior and likelihood. This captures how our beliefs should rationally update when we observe new evidence.
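In symbols, for a parameter θ and observed data D, this relationship is:

```latex
P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}
\;\propto\; \underbrace{P(D \mid \theta)}_{\text{likelihood}} \; \underbrace{P(\theta)}_{\text{prior}}
```

The denominator P(D) is a normalizing constant that does not depend on θ, which is why the posterior is simply proportional to likelihood times prior.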
The Beta-Binomial Model
This visualizer uses a conjugate prior setup: a Beta distribution prior with Binomial (or Bernoulli) likelihood. This is one of the most common Bayesian models because the math works out elegantly.
Prior: Beta(α, β)
The Beta distribution models a probability between 0 and 1. Parameters α and β control the shape.
- Beta(1, 1): Uniform, no prior preference
- Beta(2, 2): Weak preference for θ ≈ 0.5
- Beta(10, 10): Strong belief θ ≈ 0.5
- Beta(5, 1): Belief θ is high
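A quick way to see how α and β control the shape is to compare the mean and spread of these example priors. This is a minimal sketch using only the standard library; the parameter values mirror the list above.

```python
import math

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: a / (a + b)."""
    return a / (a + b)

def beta_sd(a, b):
    """Standard deviation of Beta(a, b); shrinks as a + b grows."""
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var)

# Same mean of 0.5, but increasing concentration: the spread shrinks.
for a, b in [(1, 1), (2, 2), (10, 10)]:
    print(f"Beta({a}, {b}): mean={beta_mean(a, b):.2f}, sd={beta_sd(a, b):.3f}")

# Beta(5, 1) places most of its mass near 1.
print(f"Beta(5, 1): mean={beta_mean(5, 1):.3f}")
```

Note that α + β acts like a "pseudo-count" of prior observations: the larger it is, the tighter the prior.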
Posterior: Beta(α′, β′)
After observing successes and failures, the posterior is also a Beta distribution:
α′ = α + successes
β′ = β + failures
The posterior parameters simply add the observed counts to the prior parameters!
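The entire update is two additions, as this small sketch illustrates (the specific counts here are made up for the example):

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: add observed counts to the prior parameters."""
    return alpha + successes, beta + failures

# Start from a uniform Beta(1, 1) prior, then observe 7 successes and 3 failures.
a, b = update_beta(1, 1, 7, 3)
print(a, b)          # posterior is Beta(8, 4)
print(a / (a + b))   # posterior mean = 8/12 ≈ 0.667
```

No numerical integration or sampling is needed; conjugacy makes the posterior available in closed form.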
Key Concepts
Credible Interval
A Bayesian credible interval gives you a direct probability statement: "Given the prior and data, there is a 95% probability θ lies in this range." This differs from frequentist confidence intervals, which are about long-run coverage properties.
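One simple way to compute an equal-tailed credible interval is Monte Carlo: draw from the Beta posterior and take the empirical quantiles. This is a stdlib-only sketch (an exact interval would use the Beta quantile function, e.g. `scipy.stats.beta.interval`); the posterior Beta(8, 4) continues the 7-successes, 3-failures example.

```python
import random

def credible_interval(alpha, beta, level=0.95, n=100_000, seed=0):
    """Approximate an equal-tailed credible interval for a Beta(alpha, beta)
    posterior by Monte Carlo, using the stdlib Beta sampler."""
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(alpha, beta) for _ in range(n))
    lo = draws[int(n * (1 - level) / 2)]
    hi = draws[int(n * (1 + level) / 2) - 1]
    return lo, hi

lo, hi = credible_interval(8, 4)
print(f"95% credible interval for theta: ({lo:.3f}, {hi:.3f})")
```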
Prior Sensitivity
With little data, your posterior is heavily influenced by your prior choice. With lots of data, the likelihood dominates and the posterior becomes less sensitive to the prior. Try different priors to see how much they matter!
Posterior Mean vs Mode
Mean: E[θ] = α / (α + β) — the expected value.
Mode: (α - 1) / (α + β - 2) — the most probable value (requires α, β > 1).
For symmetric priors around 0.5 and balanced data, these are similar.
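Both summaries are one-line formulas; a skewed posterior such as Beta(8, 4) shows the gap between them:

```python
def beta_mean(a, b):
    """Posterior mean: a / (a + b)."""
    return a / (a + b)

def beta_mode(a, b):
    """Posterior mode: (a - 1) / (a + b - 2); only defined for a, b > 1."""
    if a <= 1 or b <= 1:
        raise ValueError("mode formula requires a > 1 and b > 1")
    return (a - 1) / (a + b - 2)

print(beta_mean(8, 4))   # 8/12 ≈ 0.667
print(beta_mode(8, 4))   # 7/10 = 0.7
```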
Sequential Updating
Bayesian updating can be done sequentially. Today's posterior becomes tomorrow's prior. As you collect more data, just keep adding to α and β. The order of observations doesn't matter—only the total counts.
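Order invariance is easy to verify directly: update one observation at a time, then repeat with the data reversed. The particular outcome sequence here is illustrative.

```python
def update(alpha, beta, success):
    """One Bernoulli observation: a success bumps alpha, a failure bumps beta."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

data = [True, True, False, True, False]   # 3 successes, 2 failures

# Update sequentially: today's posterior is tomorrow's prior.
a, b = 1, 1
for outcome in data:
    a, b = update(a, b, outcome)

# Same data in reverse order yields the identical posterior.
a2, b2 = 1, 1
for outcome in reversed(data):
    a2, b2 = update(a2, b2, outcome)

print((a, b), (a2, b2))   # both Beta(4, 3)
```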
Common Applications
A/B Testing
Estimate conversion rates and compare variants with natural uncertainty quantification.
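One common Bayesian A/B comparison is estimating P(θ_A > θ_B) by drawing from each variant's Beta posterior. This sketch uses hypothetical conversion counts and uniform priors:

```python
import random

def prob_a_beats_b(a1, b1, a2, b2, n=100_000, seed=0):
    """Monte Carlo estimate of P(theta_A > theta_B) given two Beta posteriors."""
    rng = random.Random(seed)
    wins = sum(rng.betavariate(a1, b1) > rng.betavariate(a2, b2) for _ in range(n))
    return wins / n

# Hypothetical data: variant A converts 30/100, variant B converts 22/100.
p = prob_a_beats_b(1 + 30, 1 + 70, 1 + 22, 1 + 78)
print(f"P(A beats B) ~= {p:.3f}")
```

Unlike a p-value, this number is a direct probability statement about which variant is better, conditional on the priors and the observed data.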
Clinical Trials
Track treatment success rates as patients are enrolled, with credible intervals.
Quality Control
Monitor defect rates in manufacturing, incorporating prior knowledge from historical data.
Sports Analytics
Estimate batting averages, win probabilities, or player skill levels as more games are played.
Important Notes
Prior choice matters: Document and justify your prior. Try sensitivity analysis with different priors to see how robust your conclusions are.
Model assumptions: This model assumes independent, identically distributed trials. If your data violates this (e.g., trending conversion rates), the model may not be appropriate.
Educational tool: This visualizer is for learning and exploration, not for production decision-making in medicine, finance, or other critical domains.
Related Calculators
Binomial Distribution Calculator
Calculate binomial probabilities for discrete success/failure trials
Sample Size for Proportions
Calculate required sample size for proportion-based statistical tests
Normal Distribution
Calculate probabilities and z-scores under the normal curve
Confidence Interval Calculator
Build frequentist confidence intervals for means and proportions
Poisson Distribution Calculator
Calculate probabilities for count-based rare events