Post-Test Probability Calculator

Calculate post-test probability from sensitivity, specificity, and prevalence with likelihood ratios, confusion matrix, sequential testing, and PPV/NPV sensitivity analysis.

About the Post-Test Probability Calculator

The post-test probability calculator shows how a diagnostic test result changes the probability of disease. Starting from a pre-test probability, it combines sensitivity and specificity to estimate the probability after a positive or negative result.

This is a direct application of Bayes' theorem to medical testing. It shows why positive predictive value and negative predictive value depend heavily on prevalence: the same test can look strong in one setting and weak in another.

Enter the test characteristics and prevalence to see the confusion matrix, likelihood ratios, sequential-testing effect, and how predictive values change across different prevalence levels.

Why Use This Post-Test Probability Calculator?

This calculator is useful when the question is not just whether a test is positive, but what that result actually means in context. Prevalence can dominate the answer, so the same sensitivity and specificity can produce very different post-test probabilities.

Showing the confusion matrix, likelihood ratios, and predictive values together makes it easier to see whether a test result is genuinely informative or just looks persuasive on the surface.

How to Use This Calculator

  1. Enter the test's sensitivity (true positive rate) as a percentage.
  2. Enter the test's specificity (true negative rate) as a percentage.
  3. Enter the pre-test probability or disease prevalence.
  4. Set the number of sequential tests to see how repeated positives increase certainty.
  5. Review the confusion matrix per 10,000 people to understand false positive/negative counts.
  6. Study the PPV/NPV vs prevalence table to see prevalence effects.
  7. Use presets for common medical tests.
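The confusion-matrix step (item 5 above) can be sketched in a few lines of Python; the 90% sensitivity, 95% specificity, and 5% prevalence below are illustrative values, not presets from the calculator.

```python
def confusion_matrix(sensitivity, specificity, prevalence, population=10_000):
    """Return (TP, FN, FP, TN) expected counts for a cohort of `population` people."""
    diseased = population * prevalence
    healthy = population - diseased
    tp = diseased * sensitivity   # true positives: sick and correctly flagged
    fn = diseased - tp            # false negatives: sick but missed
    tn = healthy * specificity    # true negatives: healthy and cleared
    fp = healthy - tn             # false positives: healthy but flagged
    return tp, fn, fp, tn

tp, fn, fp, tn = confusion_matrix(0.90, 0.95, 0.05)
print(round(tp), round(fn), round(fp), round(tn))  # 450 50 475 9025
```

Per 10,000 people at these settings, the 475 false positives already outnumber the 450 true positives, which is exactly what the PPV/NPV table makes visible.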

Formula

LR+ = Sensitivity / (1 − Specificity)
LR− = (1 − Sensitivity) / Specificity
Post-test odds = Pre-test odds × LR
PPV = (Sens × Prev) / (Sens × Prev + (1 − Spec) × (1 − Prev))
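These formulas translate directly into code. The sketch below applies the odds-form update; the 90% / 95% / 5% inputs are illustrative.

```python
def post_test_probability(sens, spec, pre_test, positive=True):
    """Update a pre-test probability of disease given a test result."""
    # Pick the likelihood ratio for the observed result.
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    pre_odds = pre_test / (1 - pre_test)   # probability -> odds
    post_odds = pre_odds * lr              # post-test odds = pre-test odds x LR
    return post_odds / (1 + post_odds)     # odds -> probability

# PPV is simply the post-test probability after a positive result:
ppv = post_test_probability(0.90, 0.95, 0.05)
print(round(ppv, 3))  # 0.486
```

The same function with `positive=False` gives 1 − NPV, the residual probability of disease after a negative result.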

Example Calculation

Inputs: sensitivity 90%, specificity 95%, prevalence 5%. Result: PPV = 48.6%, NPV = 99.4%, LR+ = 18

At 5% prevalence, a positive result raises the probability of disease from 5% to 48.6%, and a negative result lowers it to 0.55%. An LR+ of 18 marks a strongly informative test: each independent positive result multiplies the pre-test odds by 18.

Tips & Best Practices

The Base Rate Fallacy in Screening

Mass screening for rare diseases suffers from the base rate fallacy. Even with a 99% sensitive and 99% specific test, screening for a disease with 0.1% prevalence produces 10× more false positives than true positives (PPV ≈ 9%). This is why targeted testing based on clinical risk factors is preferred over universal screening.
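The screening numbers above are easy to verify. This snippet recomputes the PPV for a 99%-sensitive, 99%-specific test at 0.1% prevalence:

```python
def ppv(sens, spec, prev):
    """Positive predictive value from test characteristics and prevalence."""
    tp = sens * prev               # fraction of population: true positives
    fp = (1 - spec) * (1 - prev)   # fraction of population: false positives
    return tp / (tp + fp)

print(round(ppv(0.99, 0.99, 0.001), 3))  # 0.09
```

Even a near-perfect test yields roughly ten false positives for every true positive when the condition is this rare.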

ROC Curves and Optimal Thresholds

The ROC curve plots sensitivity vs (1−specificity) across all possible test thresholds. The area under the ROC curve (AUC) summarizes overall test performance. This calculator evaluates a single point on the ROC curve — the chosen threshold that determines the specific sensitivity/specificity trade-off.

Sequential and Parallel Testing Strategies

Sequential testing (call positive only if every test is positive) raises specificity with each step but may miss cases. Parallel testing (call positive if any test is positive) raises sensitivity at the cost of specificity. The optimal strategy depends on the relative cost of false positives versus false negatives in the clinical context.
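The two strategies can be sketched for a pair of conditionally independent tests A and B; the 90% / 95% characteristics below are illustrative.

```python
def serial(sens_a, spec_a, sens_b, spec_b):
    """Call positive only if BOTH tests are positive."""
    sens = sens_a * sens_b                       # must pass both to be detected
    spec = 1 - (1 - spec_a) * (1 - spec_b)       # false positive needs two errors
    return sens, spec

def parallel(sens_a, spec_a, sens_b, spec_b):
    """Call positive if EITHER test is positive."""
    sens = 1 - (1 - sens_a) * (1 - sens_b)       # missed only if both miss
    spec = spec_a * spec_b                       # one false alarm is enough
    return sens, spec

print(serial(0.90, 0.95, 0.90, 0.95))    # sensitivity drops, specificity rises
print(parallel(0.90, 0.95, 0.90, 0.95))  # sensitivity rises, specificity drops
```

With two identical 90%/95% tests, the serial rule gives roughly 81% sensitivity and 99.75% specificity, while the parallel rule gives roughly 99% sensitivity and 90.25% specificity.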

Sources & Methodology

Frequently Asked Questions

Why is PPV so low when prevalence is low?

At 1% prevalence, 99 of every 100 people are disease-free. Even a 95% specific test produces false positives in 5% of those 99, about 5 people, while at most 1 true positive can come from the single person with disease. Most positives are false, hence the low PPV.

What's the difference between sensitivity and PPV?

Sensitivity asks: of those WITH disease, what fraction tests positive? PPV asks: of those who TEST positive, what fraction has disease? Sensitivity is a test property; PPV depends on prevalence.

How do likelihood ratios work?

LR transforms pre-test odds to post-test odds. Convert probability to odds (p/(1−p)), multiply by LR, convert back. LR+ applies to positive results, LR− to negative results.
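The three-step conversion described above can be written out directly; the 5% pre-test probability and LR+ of 18 are illustrative values.

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

# Convert to odds, multiply by the likelihood ratio, convert back:
post = odds_to_prob(prob_to_odds(0.05) * 18)
print(round(post, 3))  # 0.486
```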

Can I use this for non-medical tests?

Yes! The same framework applies to any binary classifier: spam detection, quality control, fraud detection. Sensitivity = recall, PPV = precision in machine learning terminology.

What if I run the same test twice?

If the two tests are conditionally independent given disease status (e.g. they measure different biological signals), sequential testing multiplies their likelihood ratios. If it is literally the same test repeated, errors tend to correlate, so the independence assumption, and the multiplied likelihood ratios, overstate the gain in certainty.

What is the Fagan nomogram?

A graphical tool connecting pre-test probability, likelihood ratio, and post-test probability on three scales. Drawing a line from pre-test through LR gives post-test probability. This calculator provides the same information numerically.

Related Pages