Apply Bayes' theorem to update probabilities with new evidence — compute posterior probability, likelihood ratios, and confusion matrices for medical tests.
Bayes' theorem updates the probability of a hypothesis after new evidence arrives. Given a prior probability, the likelihood of the evidence if the hypothesis is true, and the likelihood of the evidence if it is false, the calculator computes the posterior probability.
This page supports both a medical-test mode and a general conditional-probability mode. In medical mode it uses sensitivity and specificity; in general mode it uses the raw conditional probabilities directly. The output includes posterior probability, likelihood ratios, and a confusion matrix scaled to 10,000 people so the numbers stay concrete.
That makes the page useful anywhere a base rate and an observed result need to be combined into one updated probability.
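A minimal sketch in Python of how the two modes relate (the helper names `posterior` and `posterior_medical` are illustrative, not the page's actual implementation): medical mode is just general mode with P(evidence | hypothesis) set to the sensitivity and P(evidence | ¬hypothesis) set to 1 − specificity.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """General mode: Bayes' theorem from raw conditional probabilities."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)  # total probability of the evidence
    return numerator / evidence


def posterior_medical(prevalence, sensitivity, specificity):
    """Medical mode: P(+ | disease) = sensitivity, P(+ | healthy) = 1 - specificity."""
    return posterior(prevalence, sensitivity, 1 - specificity)
```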
Bayesian updating is the cleanest way to combine prior belief with new evidence, but it is easy to misread if you focus on the test result alone. The base rate often matters more than intuition suggests.
Putting the posterior, likelihood ratios, and confusion matrix together makes it easier to see why a positive result can still leave substantial uncertainty.
P(A|B) = [P(B|A) × P(A)] / [P(B|A) × P(A) + P(B|¬A) × P(¬A)]
Positive likelihood ratio: LR+ = Sensitivity / (1 − Specificity)
Posterior odds = Prior odds × LR+
Result: P(Disease | Positive Test) ≈ 0.0876 (8.76%)
With 1% prevalence, 95% sensitivity, and 90% specificity: P(+) = 0.95×0.01 + 0.10×0.99 = 0.1085. Posterior = (0.95×0.01)/0.1085 ≈ 8.76%. Even with a positive test, there's only about a 9% chance of actually having the disease.
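Plugging the example numbers into the sketch above reproduces the result:

```python
p = posterior_medical(prevalence=0.01, sensitivity=0.95, specificity=0.90)
print(f"P(disease | positive) = {p:.4f}")  # 0.0876
```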
The most famous illustration of Bayes' theorem is the medical screening paradox. A disease affecting 1% of the population is screened with a test that is 95% sensitive and 90% specific. Most people guess a positive result means a ~95% chance of disease; the actual answer is under 9%. The result is counterintuitive because false positives drawn from the 99% of people who are healthy vastly outnumber true positives drawn from the 1% who are sick.
Frequentist statistics evaluates the probability of data given a fixed hypothesis. Bayesian statistics flips this — it evaluates the probability of a hypothesis given observed data. Bayes' theorem is the mathematical bridge between these perspectives.
In clinical practice, a second test doesn't start from the original base rate: the patient's prior has already been updated by the first result. Use the posterior from the first positive test as the prior for the second test, as sketched below. Two consecutive positives on conditionally independent tests dramatically increase the posterior probability.
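A minimal sketch of that chaining, reusing the `posterior_medical` helper above and assuming the second test is conditionally independent of the first:

```python
first = posterior_medical(0.01, 0.95, 0.90)    # ~0.088 after one positive result
second = posterior_medical(first, 0.95, 0.90)  # the posterior becomes the new prior
print(f"after one positive: {first:.3f}, after two: {second:.3f}")  # 0.088, 0.477
```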
It's a formula that tells you how to update your belief after seeing new evidence. If you think there's a 1% chance of something, and you get a positive signal, Bayes' theorem tells you the new (higher) probability.
Because the base rate matters enormously. With 1% prevalence and 90% specificity, the 10% false-positive rate applied to the 99% of people who are healthy generates far more false positives than the 95% sensitivity finds true positives: per 10,000 people, about 990 false positives against only 95 true positives.
Sensitivity is the probability a test is positive when disease is present (TP rate). Specificity is the probability a test is negative when disease is absent (TN rate). Both need to be high, but specificity matters more with rare conditions.
Absolutely. Use general mode and supply any P(B|A) and P(B|¬A). Common uses include spam detection, fraud analysis, DNA evidence evaluation, and machine learning classification.
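For example, a toy spam-filter update in general mode, using the `posterior` helper above (the prior and conditional probabilities here are purely illustrative):

```python
# Illustrative numbers only: P(phrase | spam) = 0.20, P(phrase | not spam) = 0.001,
# and a prior spam rate of 40%.
p_spam = posterior(prior=0.40, p_e_given_h=0.20, p_e_given_not_h=0.001)
print(f"P(spam | phrase present) = {p_spam:.3f}")  # ~0.993
```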
The positive likelihood ratio (LR+) is sensitivity divided by the false positive rate. It tells you how much a positive result increases the odds of the hypothesis. LR+ > 10 is strong evidence; LR+ < 2 is weak.
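Continuing the sketch, the odds-form update on the screening example gives the same answer as the direct formula:

```python
sensitivity, specificity, prevalence = 0.95, 0.90, 0.01
lr_pos = sensitivity / (1 - specificity)    # 9.5: useful but not overwhelming evidence
prior_odds = prevalence / (1 - prevalence)  # ~0.0101
posterior_odds = prior_odds * lr_pos        # ~0.0960
p = posterior_odds / (1 + posterior_odds)   # ~0.0876, matching the direct formula
print(f"LR+ = {lr_pos:.1f}, posterior = {p:.4f}")
```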
It shows how 10,000 people would be classified. Green cells are correct (true positives and true negatives). Red cells are errors (false positives and false negatives). The ratio of TP to (TP + FP) is the positive predictive value (PPV), which is exactly the posterior probability of disease given a positive result.
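A sketch of how the 10,000-person counts follow from the same three inputs:

```python
n, prevalence, sensitivity, specificity = 10_000, 0.01, 0.95, 0.90
sick = n * prevalence        # 100 people with the disease
healthy = n - sick           # 9,900 without it
tp = sick * sensitivity      # 95 true positives
fn = sick - tp               # 5 false negatives
tn = healthy * specificity   # 8,910 true negatives
fp = healthy - tn            # 990 false positives
ppv = tp / (tp + fp)         # 95 / 1,085 ≈ 0.0876: the PPV equals the posterior
print(f"TP={tp:.0f} FP={fp:.0f} FN={fn:.0f} TN={tn:.0f}, PPV={ppv:.4f}")
```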