Bayesian reasoning assigns degrees of certainty, expressed as probabilities, to hypotheses about various parts of reality. It then uses any new information to update the probabilities of those hypotheses.
Keith Stanovich brought our rationality and dysrationalia into focus in his book The Rationality Quotient. Systems rationality, my term, includes the forms of rationality congruent with systems thinking. In Stanovich's catalogue of reasoning errors, the lack of Bayesian reasoning accounts for the mistakes we routinely make about medical tests and, I would contend, for many errors in judgement we make every day.
Bayesian reasoning is especially helpful under uncertain circumstances, which is practically all the time. It is congruent with systems thinking and antithetical to single cause-single effect thinking. The idea that a single cause produces a single effect or that a single effect has a single cause is hard to find evidence for in living systems, unless you blind yourself to parts of the available evidence.
Thinking about living systems asserts that single causes have multiple effects, and that single effects are produced by multiple causes, often working together.
A major example of where single cause – single effect thinking produces errors is the interpretation of medical diagnostic tests. A test is commonly touted as highly accurate. For example, a hypothetical test for a disease might be said to have high accuracy, meaning that, say, 90% of a sample of people with the disease score positive on the test and, say, 85% of those without the disease score negative. Single cause – single effect thinking immediately reasons: if I score positive on the test, I have a 90% chance of having the disease.
Bayesian reasoning concludes otherwise. Single cause – single effect reasoning will produce grossly wrong conclusions about a test in situations where the prevalence of the disease is low. An example: schizophrenia has a prevalence of less than 2%. Let's construct a hypothetical test: the presence of a "schizophrenia gene." And the test is highly accurate; for argument's sake, use the previous numbers. Of those with schizophrenia, 90% have that schizophrenia gene, and 85% of those without schizophrenia don't have that gene. Single cause – single effect thinking would contend that if you have that gene, you would probably have schizophrenia.
What is wrong with this reasoning? First, the test has many exceptions. 10% of the schizophrenics don't have that gene, so the test isn't perfect. And 15% of those who aren't schizophrenic do have the gene.
Schizophrenia has low prevalence; over 98% of the population don’t have schizophrenia. 15% of that non-schizophrenic population will have the gene. These are false positives which dwarf the number of true positives, those with schizophrenia who have the gene.
Let's translate that into numbers of people. Suppose we draw 1000 people from the general population. A priori we know that about 20 individuals will be schizophrenic and 980 not schizophrenic. Then assess everybody for that schizophrenia gene. 90% of the 20 schizophrenics, 18 people, will have the gene; 15% of the 980 non-schizophrenics, 147 people, will also have the gene. So, you see that learning that you have the gene doesn't increase your certainty about the prediction all that much. If you apply Bayes' theorem to these numbers, you find that having the gene raises your probability of having schizophrenia from 2% to 10.9%. Having the gene increases your chances of being schizophrenic, but not all that much. A very different conclusion from the crude cause-effect over-reliance on the test for the gene. You need a lot more information.
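The head count above can be checked directly. This sketch uses the article's hypothetical numbers (2% prevalence, 90% sensitivity, 85% specificity), not real schizophrenia data:

```python
# Worked example: 1000 people drawn from the general population.
# All figures are the article's hypothetical values.
population = 1000
prevalence = 0.02
sensitivity = 0.90   # fraction of those with the condition who have the gene
specificity = 0.85   # fraction of those without the condition who lack it

with_condition = population * prevalence                  # 20 people
without_condition = population - with_condition           # 980 people

true_positives = with_condition * sensitivity             # 18 people
false_positives = without_condition * (1 - specificity)   # 147 people

# Of everyone who has the gene, what fraction actually has the condition?
posterior = true_positives / (true_positives + false_positives)
print(round(true_positives), round(false_positives))      # 18 147
print(f"{posterior:.1%}")                                 # 10.9%
```

The false positives (147) swamp the true positives (18), which is exactly why the posterior lands near 11% rather than 90%.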
Be careful. Radio, TV and other media will report this as “If you have this gene, you will be 5.5 times more likely to have schizophrenia.” This way of reporting appeals to the automatic thinking of single cause – single effect. The reporting is correct but neglects to point out the prevalence of schizophrenia, 2%. 5.5 times a small amount is still fairly small.
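The "5.5 times" headline is a ratio of the posterior probability to the prior, and multiplying it back against the small prior shows why the absolute risk stays modest. A minimal check, using the article's hypothetical numbers:

```python
# Risk ratio behind the headline: posterior divided by prior.
prior = 0.02                              # 2% prevalence
posterior = 0.018 / (0.018 + 0.147)       # ~10.9%, from the worked example
risk_ratio = posterior / prior            # ~5.45, reported as "5.5 times"
absolute_risk = risk_ratio * prior        # back to ~10.9% -- still small
```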
Bayesian reasoning is a philosophy of uncertainty. There are some things that are more certain and many that are not. Bayesian reasoning gives a way to work with realistic uncertainty.
In this example, the prevalence of schizophrenia, 2%, is the information you have before taking the test. The test then gives you new information: does the person have this "schizophrenia gene"? Bayesian reasoning asks how much that test result should change your estimate of whether the person has schizophrenia.
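The update the text describes (prior prevalence plus a positive test result yields a revised estimate) can be written as a small function. `bayes_update` is an illustrative helper I've named for this sketch, not something from the article:

```python
def bayes_update(prior: float, sensitivity: float, specificity: float) -> float:
    """Probability of the condition given a positive test, via Bayes' theorem."""
    true_pos = prior * sensitivity              # P(condition) * P(+ | condition)
    false_pos = (1 - prior) * (1 - specificity) # P(no condition) * P(+ | no condition)
    return true_pos / (true_pos + false_pos)

# A 2% prior plus a positive test with the article's hypothetical accuracy:
print(f"{bayes_update(0.02, 0.90, 0.85):.1%}")  # 10.9%
```

The same function works for any prior, which is the point: the value of a test result depends on what you already knew before the test.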
Bayesian reasoning requires you to start with a realistic appraisal of your ignorance and then update that appraisal with new facts. Living systems are full of phenomena about which one has only partial certainty and of new information that gives only small amounts of additional certainty. Bayesian reasoning helps deal with the endemic partial certainty of living systems.