Greg Mayer posted at Jerry Coyne’s blog on “Why I am not a Bayesian.” In his explanation, he goes wrong at three key points. And they are illustrative of common mistakes people make in trying to understand or apply Bayesian reasoning. In reality, Mayer is a Bayesian. He just doesn’t understand why. Here is the breakdown.

Error the First

Mayer’s assertion (quoting Royall) that “pure ignorance cannot be represented by a probability distribution” is false. It is in fact self-refuting. “I don’t know whether h is more likely than ~h” already describes a probability distribution: it entails P(h|b) = 0.50. I made this point in Proving History, where I provide a demonstration refuting Mayer-Royall-style claims generally (pp. 83-85 and 110-14).

One need merely point out: if you cannot define the prior probability, then you can never know whether any likelihood ratio warrants belief. Ever. Saying a claim is ten times more likely to be true given a new piece of evidence doesn’t tell you whether it’s even likely at all. Ten times a 1% prior gets you only to about 10% (a bit less, in fact, once you run the math in odds form) … which means the claim remains probably false. Likelihood ratios are thus logically useless in the absence of a prior probability to update.
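To see that concretely, here is a minimal sketch in Python (the numbers are just the ones from the example above), using the odds form of Bayes’ Theorem:

```python
def posterior(prior: float, bayes_factor: float) -> float:
    """Update a prior with a Bayes factor via the odds form of
    Bayes' Theorem: posterior odds = Bayes factor x prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1 + post_odds)

# A tenfold Bayes factor applied to a 1% prior:
print(posterior(0.01, 10))  # ~0.092: about 9%, i.e. still probably false
```

Notice that without the prior argument to that function, the Bayes factor alone computes nothing at all.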

But more damning is the fact that all prior probabilities are the posterior probabilities of prior likelihood ratios. In other words, every prior is the outcome of a preceding likelihood ratio. So if you believe in likelihood ratios (as Mayer says, “only the likelihoods make a difference … So, why not just use the likelihoods?”), you have to believe in priors. Because the one creates the other, and by transitive logic, the other can be reverse-engineered into the one. (See “iteration” in the index of Proving History.) It’s just a needless waste of time to do that all the way down to the raw uninterpreted data of human sensation (which is what you would have to do, if you wanted to build a Bayesian argument all the way down to its foundational ratios in undeniable data).
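A quick sketch of why that’s true (hypothetical numbers, in Python): chaining likelihood ratios one update at a time, with each posterior serving as the next prior, gives exactly the same result as applying their product to the starting prior.

```python
def update(prior: float, bayes_factor: float) -> float:
    """One Bayesian update in odds form."""
    odds = prior / (1 - prior) * bayes_factor
    return odds / (1 + odds)

p = 0.5  # a starting prior of indifference (hypothetical)
for bf in (3.0, 2.0, 5.0):  # a series of hypothetical Bayes factors
    p = update(p, bf)  # each posterior becomes the next prior

# Identical to one update with the product of all the Bayes factors:
assert abs(p - update(0.5, 3.0 * 2.0 * 5.0)) < 1e-12
print(p)  # ~0.968
```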

Because we don’t need to. Science conclusively establishes priors all the time (and so can human experience of all varieties and in all fields, especially when employing a fortiori reasoning: see “a fortiori” in the index to Proving History). In fact, prior probabilities in the sciences are called base rates. The prior probability that you have cancer, for example, has a well-documented value. Therefore we know the prior probability that you have cancer when a certain test for cancer is applied. The results of that test then tell you the updated probability that you have cancer given that new piece of information: the test coming up positive or negative. And how much the resulting likelihood ratio alters the prior depends on the false positive and false negative rates of the test.
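For instance, a minimal sketch with hypothetical but realistic numbers (a 1% base rate, a test with 90% sensitivity and a 9% false positive rate):

```python
def p_cancer_given_positive(base_rate: float, sensitivity: float,
                            false_positive_rate: float) -> float:
    """Bayes' Theorem for a diagnostic test:
    P(cancer | +) = P(+ | cancer) P(cancer) / P(+)."""
    p_positive = (sensitivity * base_rate
                  + false_positive_rate * (1 - base_rate))
    return sensitivity * base_rate / p_positive

print(p_cancer_given_positive(0.01, 0.90, 0.09))  # ~0.092
```

Notice what happens if you ignore the base rate: you would read the 90% sensitivity as a roughly 90% chance of cancer, when the real posterior is about 9%. That is exactly the error named next.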

Ignoring base rates—ignoring priors, in other words, as Mayer wants—is actually an established logical error called the Base Rate Fallacy.

Scientists should not be promoting fallacious reasoning on science blogs.

Just saying.

Error the Second

Mayer writes that “In the end, only the likelihoods make a difference; but this is less a defense of Bayesianism than a surrender to likelihood” because adding in priors means to “boldly embrace subjectivity,” but “then, since everyone has their own prior, the only thing we can agree upon are the likelihoods. So, why not just use the likelihoods?”

That last question I already answered above: because you can’t. It’s logically impossible to know how likely h is with only the likelihoods. Even a million-to-one favorable Bayes factor can still end up giving you a posterior probability of under 1%, in other words a claim that is almost certainly false. So even saying “we have a million-to-one likelihood ratio in favor of h” tells you nothing about whether you should believe h.
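Concretely (hypothetical numbers): with a prior of one in a billion, a million-to-one Bayes factor still leaves the posterior around a tenth of one percent.

```python
prior = 1e-9                      # a one-in-a-billion prior (hypothetical)
odds = prior / (1 - prior) * 1e6  # apply a million-to-one Bayes factor
print(odds / (1 + odds))          # ~0.001: still almost certainly false
```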

But that’s his first mistake, which I already addressed.

Mayer’s second mistake appears at the point where he complains, in effect, that priors are so subjective anyone can make up any prior they want, willy-nilly (tell that to oncologists and get laughed out of the room). Again, I already refuted that in Proving History (pp. 81-85). Key point there: he is falling victim to an equivocation fallacy, conflating “subjective” with “arbitrary.” There can be subjective elements in estimating a starting prior (though not always significant ones: base rates of cancer, for example), but you can’t just start with any prior. You have to justify your priors with prior data (e.g. Proving History, pp. 229-56; that data is even extendable with previously confirmed hypotheses: pp. 257-65); otherwise, all hypotheses start out equally likely. Which is a prior probability.

You can’t escape the consequences of your own reasoning.

Uncertainty is then accounted for with margins of error, which delimit which prior probabilities the available data can and cannot support (Proving History, pp. 265-80, with pp. 257-65). So it is simply not the case that “everyone has their own prior” in any sense sufficient to carry Mayer’s conclusion. Though people can sometimes differ on where their intuition puts a prior, everyone who is basing their intuition on actual data (as in background knowledge, the b in a Bayesian equation, on which all the probabilities in it are conditioned—including the likelihoods, incidentally!) will be plotting priors in the same region. In other words, within the same margins of error objectively supported by the data. (See “disagreement” in the index to Proving History.)
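A sketch of what that looks like in practice (hypothetical numbers): argue a fortiori by running the same Bayes factor against both ends of the interval of priors the data can support, and check whether the conclusion survives even at the end least favorable to it.

```python
def posterior(prior: float, bayes_factor: float) -> float:
    # Odds form of Bayes' Theorem
    odds = prior / (1 - prior) * bayes_factor
    return odds / (1 + odds)

# Suppose the background data only supports priors somewhere in [0.2, 0.4]
# (a hypothetical margin of error), and the evidence yields a Bayes factor of 8:
print(posterior(0.20, 8), posterior(0.40, 8))  # ~0.67 and ~0.84
# Even on the least favorable defensible prior, h comes out probably true.
```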

The same subjectivity operates on assigning likelihoods, too. And the same rules apply there as well.

The crucial lesson, though, is that Bayesian reasoning forces you to make this fact explicit. Whereas all other methods conceal it, and thus allow people like Mayer to pretend they aren’t being just as subjective as Bayesianism admits to being. Bayesianism is honest. Everything else is in some measure or other a lie. Maybe a lie you tell yourself, but a lie all the same. And that fact is illustrated by Mayer’s third mistake…

Error the Third

Mayer says:

The problem with Bayesianism is that it asks the wrong question. It asks, ‘How should I modify my current beliefs in the light of the data?’, rather than ‘Which hypotheses are best supported by the data?’. Bayesianism tells me (and me alone) what to believe, while likelihood tells us (all of us) what the data say.

He mistakenly thinks these are saying different things. They are not. And that betrays the fact that he doesn’t really understand Bayesian reasoning. What does it mean to say “this hypothesis (h) is better supported by this evidence (e)”? That sentence is vacuous. Unless you can explain how e makes h more likely. Because if e does not make h more likely than it does ~h, there can be no intelligible sense in which e supports h over ~h. Ooops. Guess what that means. ‘Which hypotheses are best supported by the data?’ = ‘How should I modify my current beliefs in the light of the data?’.
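And that equivalence is a theorem, not a slogan. In the odds form of Bayes’ Theorem (using the same notation as above, with b suppressed for brevity):

P(h|e) / P(~h|e) = [ P(e|h) / P(e|~h) ] × [ P(h) / P(~h) ]

The posterior odds exceed the prior odds exactly when P(e|h) > P(e|~h). So “the data support h over ~h” and “the data should raise your degree of belief in h” are the same mathematical fact, read in two directions.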

That Mayer doesn’t notice this is kind of embarrassing.

Likewise, “Bayesianism tells me (and me alone) what to believe, while likelihood tells us (all of us) what the data say” is vacuous. Not only because likelihoods are just as subjective as priors (for all the reasons I’ve surveyed throughout this article already), but more importantly because if you can’t explain to someone why b + e tells “you” what to believe, then it doesn’t in fact tell you what to believe. Sorry, but it doesn’t. Whereas if you can explain to someone why b + e tells “you” what to believe, then you’ve just told them what to believe. Ooops. Guess what that means. ‘Bayesianism tells me (and me alone) what to believe’ = ‘likelihood tells us (all of us) what the data say’.

Because there is no way data can tell you “alone” what to believe—unless you have access to data others do not, in which case obviously your job as a scientist is to provide others with that data. And then it’s no longer telling you “alone” what to believe. And when you can’t do that, you are facing a universal problem in epistemology that has nothing to do with Bayesianism: sometimes you know stuff other people don’t, and therefore sometimes you are warranted in believing something others are not. That’s just true. You can’t escape it by denying Bayesianism.

Hence, if you can explain to someone why b + e tells “you” what to believe, then you’ve just told them what to believe, unless they can show you that you have left something out of b or e, or inserted something into them that doesn’t actually exist. In which case your conclusion will change into alignment with theirs, once you adjust your b + e to align with theirs. (Again, see “disagreement” in the index to Proving History.) This is in fact what is wrong with bad uses of Bayes’ Theorem, as for example to prove God exists: always they are fucking with the data. Stop fucking with the data, and Bayesianism gives you atheism. Every time. The problem with bad Bayesian arguments is not Bayesianism. The problem with bad Bayesian arguments is that they are bad. (Proving History, pp. 67, 79-80, 91, and 305 n. 33.)

So much for the distinctions Mayer thought he was making. They dissolve on simple analysis.

The Major Point

Mayer needs to learn why Prior Assumptions Matter. [See also The Fallacy of Arbitrary Priors.]

The real embarrassment here is that he already is, and always has been, a Bayesian in everything he does—and doesn’t know it. Why, for example, does he not consider, for any study, the likelihood that the CIA or aliens meddled with the experiment or observations and thus the data was unknowingly faked? Well, because he assumes—subjectively!—that those hypotheses have vanishingly small priors. So Mayer is already relying on his feared subjective priors; he just won’t admit it or doesn’t realize it. (Proving History, pp. 104-05.)

How would Mayer defend his refusal to take those hypotheses seriously, even though they have exactly the same likelihoods as any theory any scientist tests? (Because, after all, they are engineered to predict exactly all the same evidence: see “gerrymandering” in the index of Proving History.) He would appeal to background knowledge (i.e., b) which establishes a very low prior frequency of the CIA doing that, or of aliens doing anything, much less that. He would, in other words, admit to relying on subjective priors. Only, he would then have to admit they are not arbitrarily subjective: if someone came along and said “I have my own prior for alien meddling, and it’s 90%” he would duly point out that they have no data on which to base such a high base rate for alien meddling in human affairs at all (much less in science), whereas Mayer has tons of data (a vast database of human experience, including his own) in which the evident frequency of alien meddling is effectively zilch. So if it’s happening, it’s happening exceedingly rarely.

Attempting to bypass that with a Cartesian Demon then fails because it reduces the prior probability even further by adding too many improbable unevidenced assumptions to the hypothesis (and the prior probability of each multiplies against the rest to produce diminishing probabilities overall). (See “Ockham’s Razor” in the index to Proving History.) And note only the prior is thus reduced (unless you dig all the way down to raw sensory data before running your Bayesian series). Hence Mayer’s proposed “likelihoodism” can’t explain why Cartesian Demons aren’t the cause of everything ever. That’s a serious epistemological defect. He might want to tend to that.
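A sketch of the arithmetic (with purely hypothetical numbers): even if each unevidenced assumption the Demon hypothesis needs were granted a generous probability, the joint prior collapses multiplicatively.

```python
from math import prod

# Hypothetical probabilities for each unevidenced assumption the Demon
# hypothesis needs (a demon exists, it has the power, it has the motive,
# it acts undetectably):
assumptions = [0.1, 0.1, 0.1, 0.1]

# Four "generous" 10% assumptions already leave a joint prior of about
# one in ten thousand:
print(prod(assumptions))  # ~0.0001
```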

But to bring this point back to reality, there is a hypothesis the Mayers of the world fail to take into account in published research, one which Frequentism can’t take into account but Bayesianism can: human fraud.

Guess what, that’s a thing. Numerous high-profile examples have appeared in science news and journals in just the last three years. Unlike aliens and a meddlesome CIA, scientific fraud has a real, measurable frequency. It therefore has a base rate. And a minimum one at that: since “fraud that gets caught” is a conjunction, the conjunction rule (the rule whose violation is the famous Conjunction Fallacy of the Linda problem) entails the total rate of fraud must be at least the rate of caught fraud, and it is surely higher, since not all fraud gets caught. One then must use broader background data to try and estimate how much uncaught fraud there is, and come up with a defensible, data-based, high-confidence margin of error for it. And until that’s been done, guess what? Ooops. You have to subjectively guesstimate what it is. Otherwise, no conclusion in science is defensible, because it could all be fraudulent (as Creationists and Climate Denialists would love to hear). So Mayer must be assuming the total base rate of fraud in science—in other words, its Bayesian prior probability—is sufficiently low as to sustain his trust in published scientific conclusions. And he has to be assuming that subjectively.

Because this certainly isn’t being calculated for in science papers. They might tell you that the null hypothesis is to be rejected with 95% certainty and therefore we can be kinda-sorta 95% sure the hypothesis is verified by the data. But that’s simply never actually true, because they didn’t calculate for the probability of fraud. Add that in, and the probability that their conclusion is true is not what any paper ever says, but somewhat lower. How much lower? Well, Mayer has to rely on his subjective intuition to say. But it won’t be arbitrary. Mayer would argue against someone who claimed to have “their own prior” for fraud that’s 90% (like, say, Creationists and Denialists). And he would argue the point with data. And he would be right. Just as in the case of the aliens and meddling CIA. And so we all do. For everything. All the time. In science and out.
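What would that correction look like? A minimal sketch (all numbers here are hypothetical placeholders; real base rates would have to be argued from data), conservatively treating a fraudulent result as carrying no evidential weight:

```python
# P(true) = P(true | honest) * P(honest) + P(true | fraud) * P(fraud),
# here treating P(true | fraud) as ~0 (a conservative simplification):
p_true_given_honest = 0.95  # the paper's stated confidence (hypothetical)
p_fraud = 0.02              # a guesstimated base rate of fraud (hypothetical)
print(p_true_given_honest * (1 - p_fraud))  # ~0.931: lower than advertised
```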

Thus, we are all Bayesians. Including Mayer. He’s just, you know, in denial.

