This is an article from the book "Five-minute mathematics" by Ehrhard Behrends which was published in 2008 by the American Mathematical Society (AMS). It is reproduced here with the kind permission of the AMS.
Evolution has prepared us well to estimate probabilities. In split seconds we evaluate a situation and decide: fight or flight? try to extinguish the flames or run to safety? We are also adept at understanding the effect of new information on the probability of some occurrence. If, for example, you are wondering whether your new acquaintance is interested in classical music, you will likely decide that the odds are diminished if you discover that he confuses Schumann with Schubert.
These rather vague ideas can be made mathematically precise under the rubric of conditional probability. As a mathematical example let us consider the probability of rolling an even number on the throw of a single fair die. It is certainly 1/2. However, if one has the information that the number rolled is a prime number, this probability sinks to 1/3, since of the three prime numbers between 1 and 6, namely 2, 3, 5, only one of them, 2, is even.
There is a mathematical formula, the renowned Bayesian formula, which allows conditional probabilities to be inverted. Imagine a bartender who knows from experience the percentage of customers who leave a tip. Say the average is 40%, and that among tourists, the average rises to 80%. Therefore, the information that a particular customer is a tourist increases the probability that he will leave a tip. The Bayesian formula allows a converse inference to be made: from the fact that a tip was left, one can compute the probability that the customer was a tourist.
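This inversion can be sketched in a few lines of Python. The values 40% (customers who tip) and 80% (tourists who tip) come from the example above; the prior probability that a customer is a tourist is not given in the text, so the value of 30% used here is purely an assumption for illustration.

```python
# Bayes' rule applied to the bartender example:
# from P(tip) and P(tip | tourist), infer P(tourist | tip).

def posterior(p_b, p_a_given_b, p_a):
    """Bayes' rule: P(B | A) = P(A | B) * P(B) / P(A)."""
    return p_a_given_b * p_b / p_a

p_tip = 0.40                 # given in the text: 40% of all customers tip
p_tip_given_tourist = 0.80   # given in the text: 80% of tourists tip
p_tourist = 0.30             # ASSUMPTION: share of tourists among customers

p_tourist_given_tip = posterior(p_tourist, p_tip_given_tourist, p_tip)
print(f"P(tourist | tip) = {p_tourist_given_tip:.2f}")  # prints 0.60
```

With these numbers, learning that a tip was left raises the probability that the customer was a tourist from the assumed 30% to 60%.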
Admittedly, customer tipping probabilities do not constitute a problem of fundamental importance. However, the same techniques are applicable to far more significant questions. A famous example is the efficacy of medical tests. What is the probability that I have a certain disease if the test for that disease comes up positive? Mathematics can reassure anyone who may at some time experience such a positive result: the probability is much less than they might naively suppose. In this case, evolution has programmed us to be much too pessimistic.
We had quite a bit to say on the topic of conditional probabilities and the Bayesian formula in Chapter (the goat problem). Here we summarize the most important points:
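For reference, Bayes' formula in the two-event form used below, where ¬B denotes the complement of the event B, reads:

```latex
P(B \mid A) = \frac{P(A \mid B)\, P(B)}{P(A \mid B)\, P(B) + P(A \mid \neg B)\, P(\neg B)}
```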
Now we can make our medical example more precise. Suppose we are concerned with the diagnosis of a rare disease. Let it not be cancer or AIDS. Suppose it is measles. One morning you observe a red pustule on your face and would like to know whether you have contracted measles. The doctor performs a measles test, and the result is positive. Do you have the disease or not?
For our analysis, let us suppose that A denotes the event "the measles test comes back positive," and B is the event "I have measles." To use the Bayesian formula, we need the numbers P(B), P(A | B), and P(A | ¬B). The first of these is the probability of someone having measles. The disease is rare among adults, and we may set the probability at P(B) = 0.05, or 5%.
The probability P(A | B) describes the reliability of the test: what is the probability that those who are sick with measles have a positive test? If the test were perfect, this probability would be 1.0, or 100%. However, there are no such tests, and one can only hope to approach the ideal. Let us optimistically set this probability at 0.98.
Finally, we need P(A | ¬B): what is the probability that I have a positive measles test even though I do not have measles? Here one would hope for the answer to be zero, but such a goal is unachievable. A realistic probability of a "false positive" result is P(A | ¬B) = 0.20.
Now we can do the calculations. We would like to know P(B | A), the probability that one has measles given a positive test result. Using the Bayesian formula, we obtain

P(B | A) = P(A | B)·P(B) / (P(A | B)·P(B) + P(A | ¬B)·P(¬B)) = (0.98 × 0.05) / (0.98 × 0.05 + 0.20 × 0.95) = 0.049 / 0.239 ≈ 0.205.
The probability of really being sick is a comforting twenty percent. This result is surprising. Most people would expect a higher number. The reason for this is that in estimating the probability, one tends to neglect the fact that the disease itself occurs only rarely.
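The arithmetic is easy to verify. The short check below uses only the three numbers given above: P(B) = 0.05, P(A | B) = 0.98, and P(A | ¬B) = 0.20.

```python
# Numeric check of the measles example via Bayes' rule.
p_b = 0.05            # prior: probability of having measles
p_a_given_b = 0.98    # true-positive rate of the test
p_a_given_not_b = 0.20  # false-positive rate of the test

# P(B | A) = P(A|B) P(B) / [ P(A|B) P(B) + P(A|not B) P(not B) ]
numerator = p_a_given_b * p_b
p_a = numerator + p_a_given_not_b * (1 - p_b)  # total probability of a positive test
p_b_given_a = numerator / p_a
print(f"P(measles | positive test) = {p_b_given_a:.3f}")  # prints 0.205
```

Note how the small prior of 5% drags the answer down: even with a 98% true-positive rate, the positive results from the 95% of healthy people dominate.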
To clarify why we fail at estimating these probabilities, consider the rectangle in the figure:
The rectangle symbolizes all the possible outcomes of interest. The small dark circle stands for outcome B: "measles." The circle is very small because measles is rare. The other circle represents outcome A: "test result positive." It cuts deeply into the small circle, since in the case of actual disease, the test almost always returns a positive result. The portion of the small circle outside the larger circle is small, since we are assuming a negligible number of "false negatives."
However, despite these conditions, the portion of the A circle taken by the B circle is not large: a positive test result does not mean that one almost certainly has measles.