Wednesday, September 28, 2011

Skeptics and Bayesian epistemology

A few prominent skeptics have been arguing that science and medicine should rely upon Bayesian epistemology.  Massimo Pigliucci, in his book Nonsense on Stilts, on the Rationally Speaking podcast, and in his column in the Skeptical Inquirer, has suggested that scientists would best proceed with a Bayesian approach to updating their beliefs.  Steven Novella and Kimball Atwood at the Science-Based Medicine blog (and at the Science-Based Medicine workshops at The Amazing Meeting) have similarly argued that what distinguishes Science-Based Medicine from Evidence-Based Medicine is the use of a Bayesian approach that accounts for the prior plausibility of theories, rather than simply relying upon the outcomes of randomized controlled trials to determine what's a reasonable medical treatment.  And, in the atheist community, Richard Carrier has argued for a Bayesian approach to history, and in particular for assessing claims of Christianity (though in the linked-to case, this turned out to be problematic and error-ridden).

It's worth observing that Bayesian epistemology has some serious unresolved problems, among them the problem of prior probabilities and the problem of considering new evidence to have a probability of 1 [in simple conditionalization].  The former problem is that the prior assessment of the probability of a hypothesis is a huge factor in whether the hypothesis ends up being accepted, and priors based on subjective probability, "gut feel," old evidence, or an arbitrary choice of 0.5 can produce different outcomes and don't necessarily lead to convergence even given substantial agreement on the evidence. So, for example, Stephen Unwin has argued using Bayes' theorem for the existence of God (starting with a prior probability of 0.5), and a lengthy debate between William Jefferys and York Dobyns in the Journal of Scientific Exploration about what a Bayesian approach yields regarding the reality of psi ended without agreement. The latter problem, of new evidence, is that a Bayesian approach considers new evidence to have a probability of 1, but evidence can itself be uncertain.
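To make the prior-sensitivity point concrete, here is a minimal sketch (my illustration, not from any of the authors discussed; the likelihood numbers are arbitrary) of how the same evidence yields very different posteriors under Bayes' theorem depending on the prior one starts with:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hold the evidence fixed (P(E|H)=0.8, P(E|~H)=0.3) and vary only the prior:
for prior in (0.5, 0.1, 0.001):
    print(prior, round(posterior(prior, 0.8, 0.3), 4))
# A 0.5 prior yields a posterior around 0.73; a 0.001 prior yields
# a posterior around 0.003 -- same evidence, opposite verdicts.
```

Two reasoners who agree completely on the likelihoods but start from different priors can thus end up on opposite sides of a hypothesis after conditionalizing on the very same evidence.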

And there are other problems as well--a Bayesian approach to epistemology seems to give special privilege to classical logic, to fail to properly account for old evidence [(or its reduction in probability due to new evidence)] or the introduction of new theories, and to fail as a standard for judging rational belief change in human beings for the same reason that on-the-spot act utilitarian calculations fail as a standard for human moral decision making--it's not a method that is practically psychologically realizable.

The Bayesian approach has certainly been historically useful, as Desiree Schell's interview with Sharon Bertsch McGrayne, author of The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy, demonstrates.  But before concluding that Bayesianism is the objective rational way for individuals or groups to determine what's true, it's worth taking a look at the problems philosophers have pointed out with making it the central thesis of epistemology.  (Also see John L. Pollock and Joseph Cruz, Contemporary Theories of Knowledge, 2nd edition, Rowman & Littlefield, 1999, which includes a critique of Bayesian epistemology.)

UPDATE (August 6, 2013): Just came across this paper by Branden Fitelson (PDF) defending Bayesian epistemology against some of Pollock's critiques (in Pollock's Nomic Probability book, which I've read, and in his later Thinking About Acting, which I've not read).  A critique of how Bayesianism (and not really Bayesian epistemology in the sense defended by Fitelson) is being used by skeptics is here.


Anonymous said...

I don't understand why you would say that "a Bayesian approach considers new evidence to have a probability of 1". This means that the denominator of Bayes' theorem is always 1, which is nonsense!

The total evidence is the marginal probability of E, or p(E) = p(E|H)p(H) + p(E|-H)p(-H), and this is rarely equal to 1!

For instance, take two mutually exclusive and exhaustive hypotheses and set the prior of each to 0.5. Assume that one of them predicts the data perfectly, p(E|H)=1 and that the other predicts it poorly, p(E|-H)=0. So p(E)= .5*1+.5*0 = .5, a value far from 1! The so-called 'problem of old evidence' is a non-problem for exactly the same reason.

Jim Lippard said...

My description was pretty poor, and there's really a whole cluster of problems around conditionalization and the treatment of evidence in Bayesian epistemology. What I called the problem of new evidence is just the objection to simple conditionalization viewing the acquisition of new evidence as coming to a position of certainty about that evidence, which the Stanford Encyclopedia of Philosophy discusses under the name "the problem of uncertain evidence." The problem of old evidence is a different issue and is due to Clark Glymour. See sections 6.2 A & B in that article. You might also find John Pollock's "Problems for Bayesian Epistemology" (PDF) of interest, especially pp. 14-15.