
Update Beliefs with Prior, Likelihood, Evidence

From medical diagnosis to spam filters to courtroom evidence — Bayesian reasoning updates beliefs with evidence. Master prior, likelihood, and posterior.

Concept Fundamentals
P(B|A)·P(A)/P(B)
Bayes' Rule
Posterior probability
P(A) — initial belief
Prior
Before evidence
P(B|A)
Likelihood
Evidence given hypothesis
P(A|B) — updated belief
Posterior
After evidence
Calculate Posterior
Prior, Likelihood, Evidence — Step-by-Step Bayesian Reasoning

Why This Statistical Analysis Matters

Why: Beliefs should update when new evidence arrives. Bayes' theorem formalizes this: posterior ∝ likelihood × prior. Base rate neglect and the prosecutor's fallacy are common errors.

How: Enter P(A) prior, P(B|A) likelihood, and P(B) evidence (or P(B|¬A) in full mode). Posterior P(A|B) = P(B|A)×P(A) / P(B).

  • P(A|B) ≠ P(B|A) — prosecutor's fallacy
  • Base rate matters for rare conditions
  • Likelihood ratio measures evidence strength
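The update rule above can be sketched in a few lines of Python. This is an illustrative sketch, not the calculator's actual code; `bayes_full` mirrors the "full mode" where P(B) is derived from P(B|¬A) via the law of total probability.

```python
def bayes_posterior(prior, likelihood, evidence):
    """P(A|B) = P(B|A) * P(A) / P(B), with P(B) supplied directly."""
    return likelihood * prior / evidence

def bayes_full(prior, p_b_given_a, p_b_given_not_a):
    """Full mode: derive P(B) by the law of total probability, then apply Bayes."""
    evidence = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / evidence

# The page's running example: 1% prior, 95% likelihood, 5% false-positive rate.
posterior = bayes_full(0.01, 0.95, 0.05)
print(f"{posterior:.2%}")  # 16.10%
```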
📐
BAYESIAN INFERENCE
Prior, Likelihood, Evidence — Step-by-Step


Calculation Mode

Real-World Scenarios — Click to Load

Inputs

e.g., disease prevalence
e.g., sensitivity
e.g., 1 - specificity
bayes.sh
CALCULATED
P(A|B) — Posterior
16.10%
Prior P(A)
1.00%
Posterior P(A|B)
16.10%
P(B)
5.9000%
Likelihood Ratio
19.0000
Prior Odds
0.0101
Posterior Odds
0.1919
Bayes Factor
19.0000

[Chart: Prior vs Posterior probability]

🌳 Probability Tree

Start
A: 1.0%
→ B|A: 95.0% = 0.95%
→ ¬B|A: 5.0% = 0.05%
¬A: 99.0%
→ B|¬A: 5.0% = 4.95%
→ ¬B|¬A: 95.0% = 94.05%
P(B) = P(B|A)P(A) + P(B|¬A)P(¬A) = 0.0095 + 0.0495 = 5.9000%
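The tree above can be reproduced by enumerating the four joint probabilities — a small sketch using the same numbers (1% prior, 95% sensitivity, 5% false-positive rate):

```python
p_a = 0.01
branches = {
    ("A", "B"):         p_a * 0.95,        # true positive:  0.95%
    ("A", "not B"):     p_a * 0.05,        # false negative: 0.05%
    ("not A", "B"):     (1 - p_a) * 0.05,  # false positive: 4.95%
    ("not A", "not B"): (1 - p_a) * 0.95,  # true negative:  94.05%
}
# The four leaves partition the sample space, so they sum to 1.
assert abs(sum(branches.values()) - 1.0) < 1e-12

# Law of total probability: sum the branches where B occurs.
p_b = branches[("A", "B")] + branches[("not A", "B")]
print(f"P(B) = {p_b:.4%}")  # 5.9000%
```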

📊 Interpretation

Low posterior (16.1%): even with positive evidence B, A remains unlikely. This illustrates base rate neglect — a "positive" result doesn't guarantee A when the prior is very low.

1. Prior P(A)
P(A) = 0.0100
2. Likelihood P(B|A)
P(B|A) = 0.9500
3. Evidence P(B)
P(B) = P(B|A)P(A) + P(B|¬A)P(¬A) = 0.009500 + 0.049500 = 0.059000
4. Bayes' Theorem
P(A|B) = P(B|A) × P(A) / P(B) = 0.009500 / 0.059000
5. Posterior
P(A|B) = 0.1610 (16.10%)

For educational and informational purposes only. Verify with a qualified professional.

📋 Key Takeaways

  • Bayes' theorem updates a belief (prior) when new evidence arrives to produce a revised belief (posterior)
  • P(A|B) ≠ P(B|A) — confusing these is called the "prosecutor's fallacy" and has led to wrongful convictions
  • Base rate neglect: ignoring how rare a condition is leads to overestimating the meaning of positive tests
  • The likelihood ratio measures how much stronger evidence is under one hypothesis vs another
  • Bayesian thinking is iterative — each posterior becomes the next prior as new evidence arrives

💡 Did You Know

🏛️Rev. Thomas Bayes (1702-1761) never published his theorem — his friend Richard Price found it in his papers after his death
⚖️The prosecutor's fallacy (confusing P(evidence|innocent) with P(innocent|evidence)) has contributed to wrongful convictions worldwide
🤖All modern spam filters, recommendation engines, and language models use Bayesian reasoning at their core
📊During WWII, Alan Turing used Bayesian methods to break the Enigma code, estimating the probability of rotor settings
🧬23andMe and other genetic testing services use Bayes' theorem to calculate your probability of having specific genetic traits
🏥A positive mammogram with 80% sensitivity, roughly 90% specificity, and 1% prevalence only means ~7.5% chance of actual cancer — base rate matters!
🔍Nate Silver's election models (FiveThirtyEight) are fundamentally Bayesian, continuously updating predictions with new polling data

📖 How It Works

1. Prior Probability

Your initial belief before seeing evidence. P(A) — e.g., disease prevalence in the population.

2. Likelihood

How probable the evidence is if the hypothesis is true. P(B|A) — e.g., test sensitivity.

3. Evidence

Total probability of observing the evidence. P(B) = P(B|A)P(A) + P(B|¬A)P(¬A) via law of total probability.

4. Posterior

Updated belief after seeing evidence. P(A|B) — the answer we seek.

5. Iterative Updating

Each posterior becomes the next prior as new evidence arrives. Bayesian reasoning is inherently sequential.
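The sequential update can be shown with a short loop — a hedged sketch (not the calculator's code) assuming two independent positive tests, each with 95% sensitivity and a 5% false-positive rate:

```python
def update(prior, likelihood, false_positive_rate):
    """One Bayesian update: returns the posterior, which becomes the next prior."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

belief = 0.01  # start from the 1% base rate
for _ in range(2):  # two positive results in a row
    belief = update(belief, 0.95, 0.05)
print(f"{belief:.1%}")  # ≈ 78.5% — a second positive test is far more convincing
```

In odds form this is simply prior odds × LR² = (1/99) × 19² ≈ 3.65, i.e. about 78.5%.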

🎯 Expert Tips

Always consider base rates

A 99% accurate test for a 1-in-10,000 disease still gives mostly false positives.
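The arithmetic behind that claim, as a quick sketch (interpreting "99% accurate" as 99% sensitivity with a 1% false-positive rate, which is an assumption):

```python
prior = 1 / 10_000          # disease prevalence
sensitivity = 0.99          # P(+ | disease)
false_pos = 0.01            # P(+ | healthy), assumed from "99% accurate"
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence
print(f"P(disease | +) = {posterior:.2%}")  # ≈ 0.98% — positives are mostly false
```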

Use the odds form

Posterior odds = Prior odds × Likelihood ratio is easier for sequential updates.
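A minimal sketch of the odds form, using the page's running numbers (1% prior, 95% sensitivity, 5% false-positive rate):

```python
prior = 0.01
lr = 0.95 / 0.05                  # likelihood ratio = 19
prior_odds = prior / (1 - prior)  # ≈ 0.0101
posterior_odds = prior_odds * lr  # ≈ 0.1919
posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.2%}")  # 16.10% — same answer as the probability form
```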

Beware the prosecutor's fallacy

P(evidence|hypothesis) ≠ P(hypothesis|evidence).

Think about what P(B|¬A) means

The false positive rate dramatically affects the posterior when the prior is low.

⚖️ This Calculator vs. Other Tools

Compared with a general probability calculator, Wolfram Alpha, R, and manual calculation, this calculator is built specifically for Bayes' theorem and bundles, in one place:
  • Dedicated Bayes focus (other tools offer only a Bayes mode or manual setup)
  • Prior vs posterior charts
  • Law of total probability mode
  • Likelihood ratio and odds outputs
  • Educational content
  • Step-by-step LaTeX derivations
  • Example presets
  • Copy & share

❓ Frequently Asked Questions

What is the difference between Bayes' theorem and regular probability?

Regular probability computes P(A) or P(A and B). Bayes' theorem reverses conditionals: given P(B|A), it finds P(A|B). It's the mathematical framework for updating beliefs with evidence.

Why does the base rate matter so much?

When a condition is rare (low P(A)), even a highly accurate test produces many false positives. P(B|¬A) × (1-P(A)) can dominate the denominator, keeping P(A|B) surprisingly low.

What is the prosecutor's fallacy?

Confusing P(evidence|innocent) with P(innocent|evidence). A 1-in-a-million DNA match doesn't mean 1-in-a-million chance of innocence — it depends on the prior and the size of the suspect pool.

How is Bayes' theorem used in medicine?

Diagnostic tests: given sensitivity P(+|disease), specificity (1 - P(+|healthy)), and prevalence P(disease), Bayes computes P(disease|+). Essential for interpreting mammograms, COVID tests, genetic screening.
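The diagnostic-test version parameterized by sensitivity, specificity, and prevalence can be sketched as follows (mammogram-style numbers — 1% prevalence, 80% sensitivity, 90% specificity — are assumptions for illustration):

```python
def p_disease_given_positive(prevalence, sensitivity, specificity):
    """P(disease | +) from the three standard test characteristics."""
    false_positive = 1 - specificity          # P(+ | healthy)
    evidence = sensitivity * prevalence + false_positive * (1 - prevalence)
    return sensitivity * prevalence / evidence

print(f"{p_disease_given_positive(0.01, 0.80, 0.90):.1%}")  # ≈ 7.5%
```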

What is the difference between prior and posterior?

Prior P(A) is your belief before evidence. Posterior P(A|B) is your updated belief after observing B. Bayes' theorem is the update rule.

What is the likelihood ratio and why does it matter?

LR = P(B|A)/P(B|¬A) measures how much more likely the evidence is under A vs ¬A. LR=10 means evidence is 10× more likely if A is true. Posterior odds = Prior odds × LR.

Can Bayes' theorem be applied iteratively?

Yes. After computing P(A|B₁), use it as the new prior for the next piece of evidence B₂: P(A|B₁,B₂) ∝ P(B₂|A,B₁)P(A|B₁). Each posterior becomes the next prior.

How is Bayesian reasoning used in machine learning?

Naive Bayes classifiers, Bayesian networks, probabilistic graphical models. Modern LLMs use transformer attention that can be interpreted as soft Bayesian inference over context. Spam filters, recommendation systems, and A/B testing all rely on Bayesian updating.

📊 Bayes by the Numbers

1763
Bayes' Paper Published
7.5%
True-Positive Chance: 80% Sensitive Test, 1% Prevalence
10⁶
Spam Emails Filtered Daily
P≠P
P(A|B) ≠ P(B|A)

⚠️ Disclaimer: This calculator provides accurate Bayesian computations for educational and professional reference. For medical diagnosis, legal proceedings, or critical decision-making, consult qualified experts. Base rates and test characteristics vary by population and context.

Related Calculators