
# [Phys-L] Bayes' Theorem and Medical Tests

I recently read a very interesting opinion piece in the Washington Post by
Dr. Daniel Morgan, associate professor of epidemiology, public health, and
infectious diseases at the University of Maryland School of Medicine and
chief of hospital epidemiology at the Baltimore VA Medical Center. His
article, "What the Tests Don't Show" (Oct. 5, 2018), argued that medical
doctors frequently misread test results because they don't understand
probability theory. He gave an example of disease X, which
occurs, on average, in 1 out of 1000 patients, and the test to detect it has
a false-positive rate of 5%. A 2014 survey found that when a patient's test
comes back positive, almost half of the doctors surveyed believed the
patient had a 95% chance of having disease X. However, the correct answer
is about 2%. He gave a simple explanation of how to arrive at the correct answer,
as follows. Out of 1000 randomly chosen people only one will, on average,
have the disease, but the 5% false positive rate means that almost 50 of the
remaining 999 people will test positive. Adding in the one correct positive
result (assuming the test has probability 1 of detecting X, if X is present)
yields 51 positive results, so the probability of actually having disease X,
given a positive result, is 1 out of 51 or about 2%.
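The counting argument above can be sketched in a few lines of Python. The numbers (a 1000-person population, 1-in-1000 prevalence, 5% false-positive rate, and a test that always detects the disease when present) come from the article; the variable names are my own:

```python
# Counting argument: out of 1000 people, how many positives are true positives?
population = 1000
prevalence = 1 / 1000
false_positive_rate = 0.05

true_positives = population * prevalence  # 1 person, on average
# The remaining 999 healthy people each have a 5% chance of a false positive
false_positives = (population - true_positives) * false_positive_rate  # ~50 people

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(p_disease_given_positive)  # about 0.02, i.e. 2%
```

The key point is that the false positives (about 50) swamp the single true positive, so a positive result is still far more likely to be wrong than right.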

This rang the "Bayes' Theorem bell" for me, i.e.:

P(X|T) = P(X)P(T|X)/(P(X)P(T|X) + P(not X)P(T|not X))

Where:

P(X|T) = a posteriori probability of X given positive test result T

P(X) = a priori probability of X = 0.001

P(T|X) = probability of positive test result given X (implicitly assumed to
be 1 in the above argument) = 1

P(not X) = probability of not X = 0.999

P(T|not X) = probability of T given not X = 0.05

This yields P(X|T) ≈ 0.02, or 2%.
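Plugging the values above directly into Bayes' Theorem gives the same answer; a minimal sketch (variable names are mine):

```python
# Direct evaluation of Bayes' Theorem with the numbers from the article
p_X = 0.001            # a priori probability of disease X
p_T_given_X = 1.0      # probability of a positive test given X (assumed perfect detection)
p_notX = 1 - p_X       # 0.999
p_T_given_notX = 0.05  # false-positive rate

p_X_given_T = (p_X * p_T_given_X) / (p_X * p_T_given_X + p_notX * p_T_given_notX)
print(f"{p_X_given_T:.3f}")  # about 0.02
```

This is just the counting argument restated: the numerator is the one true positive per thousand, and the denominator is all positives per thousand, true and false.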

It was nice to see that a little training in probability theory could come
in handy. However, my physics training up to the PhD level occurred in the
60s. We were not required to take (and most, like me, did not take) a
dedicated course in probability theory. Whatever probability theory we
needed for quantum or statistical mechanics came from the physics professor
or the physics textbook. I think most of us then, after just receiving our
PhDs, would have answered 95% (or some other incorrect response) to the
above question. In my case, it was a required course in probability and
statistics, taken for a master's in engineering administration (post-PhD),
that introduced me to concepts like Bayes' Theorem.