Bayes' Theorem Calculator

Apply Bayes' theorem to update probabilities with new evidence — compute posterior probability, likelihood ratios, and confusion matrices for medical tests.

About the Bayes' Theorem Calculator

Bayes' theorem updates the probability of a hypothesis after new evidence arrives. Given a prior probability, the likelihood of the evidence if the hypothesis is true, and the likelihood of the evidence if it is false, the calculator computes the posterior probability.

This page supports both a medical-test mode and a general conditional-probability mode. In medical mode it uses sensitivity and specificity; in general mode it uses the raw conditional probabilities directly. The output includes posterior probability, likelihood ratios, and a confusion matrix scaled to 10,000 people so the numbers stay concrete.

That makes the page useful anywhere a base rate and an observed result need to be combined into one updated probability.

Why Use This Bayes' Theorem Calculator?

Bayesian updating is the cleanest way to combine prior belief with new evidence, but it is easy to misread if you focus on the test result alone. The base rate often matters more than intuition suggests.

Putting the posterior, likelihood ratios, and confusion matrix together makes it easier to see why a positive result can still leave substantial uncertainty.

How to Use This Calculator

  1. Choose medical test mode (sensitivity/specificity) or general mode (raw conditional probabilities).
  2. Enter the prior probability P(A) — the base rate or prevalence before observing evidence.
  3. In medical mode, enter sensitivity (true positive rate) and specificity (true negative rate).
  4. In general mode, enter P(B|A) and P(B|¬A) directly.
  5. Read the posterior probability P(A|B) from the output — this is the updated probability after positive evidence.
  6. Review the confusion matrix to understand true/false positives in a population of 10,000.
  7. Check the sensitivity analysis table to see how different base rates change the result.

Formula

P(A|B) = [P(B|A) × P(A)] / [P(B|A) × P(A) + P(B|¬A) × P(¬A)]

Positive Likelihood Ratio: LR+ = Sensitivity / (1 − Specificity)

Posterior Odds = Prior Odds × LR+
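In code, the update and the likelihood ratio come down to a few lines (a minimal sketch in Python; the function names are illustrative, not the calculator's internals):

```python
def posterior(prior, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem."""
    numerator = p_b_given_a * prior
    return numerator / (numerator + p_b_given_not_a * (1 - prior))

def lr_plus(sensitivity, specificity):
    """Positive likelihood ratio: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

# Medical mode maps onto the general formula with
# P(B|A) = sensitivity and P(B|not A) = 1 - specificity.
print(posterior(0.01, 0.95, 1 - 0.90))  # ≈ 0.0876
print(lr_plus(0.95, 0.90))              # ≈ 9.5
```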

Example Calculation

Result: P(Disease | Positive Test) ≈ 0.0876 (8.76%)

With 1% prevalence, 95% sensitivity, and 90% specificity: P(+) = 0.95×0.01 + 0.10×0.99 = 0.1085. Posterior = (0.95×0.01)/0.1085 ≈ 8.76%. Even with a positive test, there's only about a 9% chance of actually having the disease.
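The arithmetic above can be checked directly (a quick sketch; the variable names are ours):

```python
prevalence, sensitivity, specificity = 0.01, 0.95, 0.90

# Total probability of a positive test (law of total probability)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem
post = (sensitivity * prevalence) / p_positive
print(f"P(+) = {p_positive:.4f}, posterior = {post:.4f}")
# → P(+) = 0.1085, posterior = 0.0876
```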

Tips & Best Practices

The Base Rate Fallacy

The most famous illustration of Bayes' theorem is the medical screening paradox. A disease affecting 1% of the population is screened with a 95% sensitive, 90% specific test. Most people guess a positive result means ~95% chance of disease. The actual answer is under 9%. This counterintuitive result occurs because false positives drawn from the 99% of people who are healthy vastly outnumber true positives drawn from the 1% who are sick.
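The imbalance is easiest to see as raw counts in a cohort of 10,000, mirroring the page's confusion-matrix scaling (a sketch with our own variable names):

```python
N = 10_000
prevalence, sens, spec = 0.01, 0.95, 0.90

sick = N * prevalence             # 100 people
healthy = N - sick                # 9,900 people

true_pos = sick * sens            # 95 true positives
false_pos = healthy * (1 - spec)  # 990 false positives

# Positive predictive value: share of positives who are actually sick
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 4))  # → 0.0876
```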

Bayesian vs. Frequentist Thinking

Frequentist statistics evaluates the probability of data given a fixed hypothesis. Bayesian statistics flips this — it evaluates the probability of a hypothesis given observed data. Bayes' theorem is the mathematical bridge between these perspectives.

Serial Testing and Prior Updating

In clinical practice, a second test isn't independent — the patient's prior has been updated by the first test. Use the posterior from the first positive test as the prior for the second test. Two consecutive positives with independent tests dramatically increase the posterior probability.
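Chaining works by feeding each posterior back in as the next prior (a sketch; it assumes the two tests are conditionally independent, which repeat runs of the same test often are not):

```python
def update(prior, sens, spec):
    """One Bayesian update for a positive test result."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)
    return sens * prior / p_pos

after_one = update(0.01, 0.95, 0.90)       # ≈ 0.0876
after_two = update(after_one, 0.95, 0.90)  # ≈ 0.477 after two positives
```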

Sources & Methodology


Frequently Asked Questions

What is Bayes' theorem in plain English?

It's a formula that tells you how to update your belief after seeing new evidence. If you think there's a 1% chance of something, and you get a positive signal, Bayes' theorem tells you the new (higher) probability.

Why is the posterior so low even with a 95% accurate test?

Because the base rate matters enormously. With 1% prevalence and 90% specificity, the 10% false positive rate applied to the 99% of people who are healthy generates far more false positives than the 95% sensitivity finds true positives among the 1% who are sick.

What's the difference between sensitivity and specificity?

Sensitivity is the probability a test is positive when disease is present (TP rate). Specificity is the probability a test is negative when disease is absent (TN rate). Both need to be high, but specificity matters more with rare conditions.

Can I use this for non-medical scenarios?

Absolutely. Use general mode and supply any P(B|A) and P(B|¬A). Common uses include spam detection, fraud analysis, DNA evidence evaluation, and machine learning classification.

What is a likelihood ratio?

The positive likelihood ratio (LR+) is sensitivity divided by the false positive rate. It tells you how much a positive result increases the odds of the hypothesis. LR+ > 10 is strong evidence; LR+ < 2 is weak.
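The odds form of the update makes the LR+ interpretation concrete (a minimal sketch; the helper names are ours):

```python
def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

lr_pos = 0.95 / (1 - 0.90)              # LR+ = 9.5
post = to_prob(to_odds(0.01) * lr_pos)  # ≈ 0.0876, matching the direct formula
```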

How do I interpret the confusion matrix?

It shows how 10,000 people would be classified. Green cells are correct (true positives and true negatives). Red cells are errors (false positives and false negatives). The ratio of TP to (TP+FP) is the PPV.
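From the same 10,000-person counts, the PPV falls out directly (a sketch; the negative predictive value shown alongside is implied by the matrix but not labeled on the page):

```python
# Counts from the worked example: 1% prevalence, 95% sensitivity, 90% specificity
tp, fn = 95, 5      # the 100 sick people
fp, tn = 990, 8910  # the 9,900 healthy people

ppv = tp / (tp + fp)  # ≈ 0.0876 — matches the posterior
npv = tn / (tn + fn)  # ≈ 0.9994 — a negative result is very reassuring
```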
