Bayes’ Theorem is just a logical formula. Like any logic, it can be used to argue silly things (like Sheldon on The Big Bang Theory trying to predict the future of physics on a whiteboard). Because bad premises always lead to bad conclusions, even with straightforward syllogistic logic. As atheists well know when they face-palm at William Lane Craig’s continuing obsession with the Kalam Cosmological Argument. Which is perfectly valid; it’s just that its premises are all bogus. But that does not discredit logic. It’s no argument to say, “logic gave us that Kalam nonsense, therefore we should reject logic.” So it’s also no argument to say, “so-and-so used Bayes’ Theorem to prove God exists, therefore we should reject Bayes’ Theorem.”

I’ll be writing an article on some crank uses of Bayes’ Theorem to “prove God exists” later this month. The problem with them is not Bayes’ Theorem. Just as the problem with the Kalam is not logic. Today, I’m going to cover how it actually works, and how not to fuck it up.

Why Should We All Be Using It?

If you believe in evidence-based reasoning, if you believe in rationality, you must agree all your conclusions should be logically valid, and derived from premises you can be fairly certain are true (not just speculated… otherwise, it’s just speculation in, speculation out). But all conclusions from evidence are probabilistic. You never know anything about the facts of the world for certain. You only know such things to some degree of probability. But probability is mathematics. You therefore can never have a valid conclusion about what’s probable without doing math. Never. Not ever. Sorry, mathophobes. Deal. Because that’s the fact of it: you are doing math. The only question left is, are you doing it correctly or not?

Since there is no way to reach a logically valid conclusion about what’s probable without doing math, what math should you be doing? There is only one formula for reaching a logically valid conclusion about what’s probable that is accessible to the average high school graduate today. And that’s Bayes’ Theorem. To be accessible to such an average person, it generally has to require nothing more than sixth grade math (by 20th century U.S. education standards). You may have forgotten everything you were taught about math in the sixth grade; but trust me, you can re-learn it with ease. All you have to do is care to. It doesn’t even require a textbook. Although this one is fun. But sixth grade math. That’s all you need to use Bayes’ Theorem.

There are more advanced applications of Bayes that require more advanced mathematical knowledge. And they have their uses. But the average person doesn’t need them. Those applications are routine now in science, engineering, business, marketing, government, internet security, spam filtering, cognitive science, AI, economics, political & military intelligence, the insurance industry, search and rescue operations; nearly everywhere now. Alan Turing used it to crack the Enigma code. Nate Silver uses it now for political & economic forecasting. There are strong cases being made that it should be introduced into the legal system and replace over-reliance on mere frequentism in all the sciences. In fact, it already is being used in some legal systems, and many courts are only skeptical now when “the underlying statistics” aren’t firmly established…in other words, when the premises suck.

But that more basic formula?

You are already using it. Every time you reach any conclusion about how probable something is. You are unconsciously assuming a prior probability. You are unconsciously assuming a likelihood of the evidence. You are unconsciously feeling whether the one is enough to outweigh the other. And then that causes you to feel confident that something is true or false. Or more likely the one than the other. Or else you feel you aren’t confident either way. And that feeling? A product of Bayes’ Theorem. Already running in your head. Only, like all intuition, when you don’t examine it, you often fuck it up.

That’s why we know intuition is highly prone to biases and errors. One way to start bypassing or controlling for those is to get serious about understanding the logic of what you are already doing. So you can be more certain you are doing it correctly.

Bayesian Reasoning about the Past

This is why I use Bayes’ Theorem to analyze highly uncertain problems in history. All historians use it, unknowingly, to generate every claim they make about history. But by not examining whether they are using it correctly (often because they don’t even know they are using it at all), they are highly susceptible to being wrong, when the data is not overwhelmingly clear.

Of course, when the data is overwhelmingly clear, you don’t need to do the math anyway. You could. But it’s a needless headache and waste of time. Just like when you assume an asteroid won’t crash into your house tomorrow… that is a mathematical calculation you just did in your head, whether you realize it or not; you made an informed but obviously imprecise guess about the frequency of that happening to people like you, based on a vast body of data. But you didn’t need to “do the math.” Because no matter what a precise working out of the math would get you, you can already tell it’s going to get you some extremely low probability. And that’s all you need to know to plan your party tomorrow.

But then, if the news suddenly reports that indeed, an asteroid is going to vaporize your neighborhood tomorrow, you know the probability of that report being mistaken is lower than the prior probability of such a collision, and therefore the probability of it happening is now nearly certain. The exact opposite of what you concluded in the absence of that evidence. Because otherwise, the news would not likely report it with such confidence without being gainsaid by the science community, which you responsibly would check for. And based on vast background data, you know the science community is not so flippant as to not have already done the math on this, and accounted for the prior probability of a collision in general, and of a specific collision zone. You feel confident the odds of their being wrong about that are lower than the odds of just anyone being hit by an asteroid ever.

That’s Bayesian reasoning.

And accordingly, archaeologists are using it more explicitly now (Richard Carrier, Proving History, p. 49), and philosophers are concluding all historians are already using it (Aviezer Tucker, Our Knowledge of the Past). They just don’t know it’s what they’re doing. Most of the time, they don’t need to. Their intuition gets it right, well enough to warrant their confidence, when the data is overwhelming or at least strong. Because the variables there are so extreme, they wash out most if not all biases. At least often enough to relegate resistance to an obviously irrational fringe. Hence, the data for the Holocaust is vast. So resisting the conclusion that it happened so obviously requires denying and explaining away so many thousands of pieces of really good evidence that even a layperson can tell that’s insane. But when the data is highly scarce, uncertain, and problematic, the innate biases that drive all humans can easily overwhelm the intuited math anyone is doing unconsciously in their head. And that’s when you need to pop the hood and look inside at what logic your intuition is actually following, so you can draw out the math and see what you are actually doing. And thereby evaluate whether you are doing it correctly.

Which requires you to know how to do that. Complaining that you don’t like math is not an excuse. It amounts to saying “thinking correctly is too hard; therefore I just won’t bother.” Which is simply admitting you aren’t thinking correctly. Or don’t know if you are. Which, if you wish to act rationally, warrants agnosticism about any claim you are intuiting in that case. Because you cannot rationally be confident in any conclusion, when you don’t even know whether you are arriving at it correctly.

How Do You Use Bayes’ Theorem?

I’ve written on this before. You can peruse my archive; read my book. But the shortest version is this: the odds that a claim is true equal the prior odds it’s true times the likelihood ratio (a brief code sketch of this arithmetic follows the list below). Meaning:

  • The prior odds are the odds on any such claim being true. That’s the odds you’d always intuitively assign the moment you hear a claim, before you check or hear any of the evidence for or against it. If you are behaving rationally, you will base that assignment on your past experience and knowledge, of what’s typical and what’s not. If you are not behaving rationally, you will simply codify your biases and false beliefs, and substitute them for facts at this point. And then it’s just garbage in, garbage out.
  • Meanwhile, the likelihood ratio is the ratio of two probabilities: (1) how likely is all the evidence we have (including the evidence we don’t have) if the claim is true; and (2) how likely is all that same evidence, if the claim is false. Which means, if the claim is false, something else caused the evidence to be that way; so you are always comparing different explanations of how the evidence got to be the way it is. Bias and error can arise here, when you either fail to consider a plausible alternative explanation of the evidence, or you grossly misestimate how well the evidence we have fits what you’d really expect on each competing explanation.
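
Here’s a minimal sketch of that formula in Python. The numbers are hypothetical, picked only to show the arithmetic (this is an illustration, not a calculation about any real claim):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' Theorem: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# A claim that starts out 10 to 1 against (prior odds in favor = 1/10), where the
# evidence is 20 times more likely if the claim is true than if it's false:
print(posterior_odds(1/10, 20))  # 2.0, i.e., now 2 to 1 odds the claim is true
```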

Now, all that? That’s always true. Always. On matters of fact, you have never reached a valid conclusion in your life that didn’t follow exactly that formula: [prior odds] x [likelihood ratio] = [odds a claim is true]. That is the only valid formula for arriving at the probability of any claim to fact (other than more complicated formulas that still only model the same mathematical relationship). So it certainly helps to know that formula, so you can correct or avoid mistakes in applying it. Just as it certainly helps to know logic, so you can correct or avoid mistakes in applying that. Same principle. Same point. Only, logic is almost useless, really, since it only deals in deductive certainty, which never exists for facts. Almost all assertions you make and conclusions you reach are on matters of fact, which means matters of probability. And when you want to work out how to reach conclusions about that logically, Bayes’ Theorem is what you get.

The intuitive reasoning you always rely on already does this. If you are at all good at critical thought, then the more bizarre a claim you hear, the more it goes against what you know to be true, the more skeptical of it you are. That is an intuitive estimate of prior probability. “Usually, it’s this,” is a statement about priors. Likewise, the more expected the evidence is on one claim than on any other explanation of how that evidence came about, the more that evidence supports that claim over against all competing alternatives. When you think like that, when you see a body or item of evidence as “strongly favoring” one claim over another, you are intuitively “feeling” your brain’s estimate of the likelihood ratio. How you put them together then depends on the logic of Bayes’ Theorem, and that you can do either well or poorly. Better to learn how to do it well.

Every time in your life you’ve been right about something, and it wasn’t simply by blind chance that you were right but because you credibly reasoned out what was correct, you did so using Bayesian reasoning, by importing credible premises, and deriving the correct conclusion from their conjunction. Credible premises here means a prior odds and a likelihood ratio that actually make sense on the evidence of the world actually available to you. Those premises become defensible when you can actually articulate that that’s the case; that is, when you can lay out why you are concluding a claim is initially unlikely or likely, or by however much, and the evidence of the world matches what you’re saying (without having to lie about that evidence, or make any of it up); and likewise when you can lay out why you are concluding the evidence is so many times more likely on one claim than on any other competing explanation… and the evidence of the world matches what you’re saying (without having to lie about that evidence, or make any of it up).

Basically, for example, if the prior odds really are 2 to 1 against a claim, but the evidence really is 4 to 1 more likely on that claim than on any other, then it’s factually the case that the odds are 2 to 1 that the claim is true (1/2 x 4/1 = 4/2 = 2/1). And this requires no precise knowledge. You can be amply certain that the prior odds are at least 2 to 1 against a claim… as for example, when it’s obvious the real odds (if you surveyed the database of human knowledge and worked out all the math) would easily be beyond even 10 to 1. And likewise, you can be amply certain that the likelihood ratio can’t be more than 4 to 1… as for example, when it’s obvious the real ratio (on any closer examination) would easily fall below 3 to 1. So you can say “the prior odds are at least 2 to 1 against a claim, but the evidence is no more than 4 to 1 more likely on that claim than on any other; therefore it’s factually the case that the odds are no more than 2 to 1 that the claim is true.” You can be as sure of that, as you are that the prior can’t be better than 2 to 1 against and the evidence can’t be better than 4 to 1 in favor. And this is literally what you’ve always been doing, the whole of your life.
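
Here’s that bounding argument in code (just a sketch of the arithmetic, using the same illustrative numbers as above):

```python
# "At least 2 to 1 against" caps the prior odds in favor at 1/2.
# "No more than 4 to 1 more likely if true" caps the likelihood ratio at 4.
# Multiplying the two caps gives a cap on the final odds.
max_prior_odds = 1 / 2
max_likelihood_ratio = 4
max_final_odds = max_prior_odds * max_likelihood_ratio
print(max_final_odds)  # 2.0: the odds the claim is true are at most 2 to 1
```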

The odds you end up with are really just a measure of your confidence in the claim. If you feel a claim is only twice as likely to be true as false, what you are feeling is that you’ve been wrong, or will be wrong, one out of every three times you face a similar situation of evidence. Which is a bit too high a rate of failure to gamble on. But when you feel highly confident, then what you are feeling is something more like a 99 to 1 chance you’re right—which means, only once out of a hundred comparable situations would you expect to turn out to be wrong. The probabilities in Bayesian reasoning do start with estimates of the frequencies of events and outcomes; but they end with a frequency of your being wrong about the claim being true (were you to assert it’s true). Because the latter frequency is directly a product of the other.
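
And converting odds into that expected error rate is simple arithmetic. A minimal sketch:

```python
def error_rate(odds_in_favor):
    """Given odds of odds_in_favor to 1 that a claim is true, return how often
    you'd expect to be wrong when asserting such claims in comparable situations."""
    return 1 / (1 + odds_in_favor)

print(error_rate(2))   # ~0.333: wrong one out of every three times
print(error_rate(99))  # 0.01: wrong only once in a hundred comparable situations
```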

Doing It Better & Testing Your Models

I mentioned you can go wrong at any of three places. You can be illogical, and incorrectly multiply your prior and your likelihood. For instance, you might estimate the prior is low and the likelihood is weak, and yet still incorrectly conclude the claim is true, when in fact those two premises wouldn’t validly entail that. In short, you can ignore the premises, and just believe whatever you want. That’s failing at math. Or you can grossly misestimate the prior. Or you can grossly misestimate the likelihood ratio.

For instance, you can have no evidence that a particular kind of cause of the evidence you are observing is frequent, and even plenty of evidence it very much isn’t usually the cause of such evidence, and yet still assign it a high prior. As when Christians argue from an empty tomb to a resurrection. They actually are ignoring the vast database of evidence that when bodies go missing, it is rarely because of a resurrection (if ever). They are misestimating the priors. They are ignoring reality. And that’s failing at reality-based reasoning. Likewise, when those same Christians insist “it’s unlikely there’d be no evidence of a theft if a missing body was stolen, therefore we can reject the thesis that the body was stolen.” Which is a Bayesian argument from likelihood ratio: evidence unexpected on a given theory entails a likelihood ratio that makes odds against that theory. But this again ignores tons of reality.

First, we know from vast background experience with people and the world that when the person advocating a claim controls all the evidence that gets to be preserved, it is not likely that evidence against their claim would survive (if ever there were any), exactly the contrary conclusion from the Christian’s. Moreover, we all well know that throughout all history—and especially before modern forensics—most thieves are never caught. That’s why there are still thieves. Also exactly the contrary conclusion from the Christian’s. Once again, their probability estimates are in defiance of reality. Rather than based on reality.

Second, there is evidence of a theft: the Christians themselves recorded it—inadvertently, by trying to claim an eyewitness report of the theft was a lie (Matthew 28:11-15), which claim itself is more likely to be a lie, because to claim that, they had to claim knowledge of a conversation they weren’t even present at! A secret conversation among a select few of their enemies, regarding a conspiracy nowhere else attested, is among the least likely things a Christian could be telling the truth about. Thus, failing to take into account the reality-based likelihoods of the evidence will of course get you false premises—and thus false conclusions. As in any other logic.

Overestimating or underestimating frequencies is common (whether for priors or likelihood ratios). But you can only evaluate whether that’s happening when someone admits what frequency they are estimating. Thus, we need to be able to articulate our “intuitive” reasoning in Bayesian form, so we can actually spell out what frequency assumptions we are making. Only then can we vet those assumptions against reality, and thus know if they are plausible or ridiculous. And whatever confidence you can maintain at that point logically transfers to the conclusion. “I am reasonably sure the frequencies (of the priors and likelihoods) cannot be more than X and Y” will get you a logically valid conclusion that “I am reasonably sure the probability this claim is true cannot be more than Z.”

So what makes you reasonably sure of those frequencies? What makes you reasonably sure the frequency with which bodies go missing because of resurrections rather than theft, misplacement, or faked or misdiagnosed deaths, is extremely low? What makes you reasonably sure the frequency of having evidence against a false claim is extremely low, when no one against that claim had any control of what evidence you get to see? And so on. The answer will be appeals to real world facts and experience. And lots of it. That’s how you get robust premises into a Bayesian formula. The formula then necessarily entails the conclusion. If those premises are true, then so is the conclusion.

How Misusing Bayes’ Theorem Sustains Delusions

On the matter of avoiding error in all this, two lessons are so basic, yet I keep finding myself having to school people who violate them. Because those errors are so ubiquitously at the heart of delusional thinking: wanting to believe a thing, despite overwhelming evidence against it. Desire-based, rather than evidence-based, belief. The deluded fail to recognize two facts (or at least one of them).

No probability in matters of fact is ever zero. The only things that can ever have a zero probability are things that are logically impossible; and yet even they cannot have a zero probability, because there is always a nonzero probability we are wrong about something being logically impossible! So if you think it makes sense to ever plug a zero into the math, you are wrong. So don’t. That’s acting exactly like creationists, anti-vaxxers, Holocaust and climate-science denialists, and every crank ever: they are immune to evidence. No amount of evidence, no matter how vast, ever convinces them. Which is precisely what happens when you adopt, like they do, a prior probability of zero (or always a likelihood of the evidence of zero) for any alternatives to their own belief. Being immune to any quantity of evidence is irrational. Thus, assigning a zero probability anywhere in any Bayesian equation is irrational. A probability can be cosmically, even absurdly small. But never zero.
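
You can see that immunity to evidence in two lines of arithmetic. A sketch, with deliberately absurd likelihood ratios:

```python
prior_odds = 0.0  # the dogmatist's prior: "that alternative is impossible"
for likelihood_ratio in (10, 1_000, 10**100):  # ever more overwhelming evidence
    print(prior_odds * likelihood_ratio)  # always 0.0: no evidence can ever move it
```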

And then…

Making excuses for why a claim fails to predict the evidence we observe does not rescue it. One of the most common and illogical ways people try to avoid the conclusions of sound logic is to make up reasons to reject them. Reasons that don’t work logically. But that sound satisfying… to fucked up brains that don’t know how to think. When presented with the fact that all observations contradict your pet theory (like, that God exists), you will be tempted to invent a dozen excuses for why, actually, your theory did predict all those observations all along. In Bayesian terms, what you are trying to do is get the likelihood ratio back to where you want it, to make the evidence not be unlikely on your theory. But the problem is, every single “excuse” you add to your theory reduces your theory’s prior probability. You can’t gain a better likelihood that way without “paying for it” with a lower prior. Which leaves you back where you started. Or worse.

This is because every excuse you make up has its own probability of being true. If it’s almost certain to be true already (as one should be able to show on background evidence), then it will have negligible effect and is fine to presume. But if you have no evidence for it, then its probability of being true can’t be better than 50/50; and if there is even evidence against it, then its probability must be lower than even that. And that gets multiplied by the prior you started with before you made that up. Which means even a single ad hoc excuse cuts your prior in half. Two of them will cut it to a quarter. And so on. In geometric progression. Excuses that are actually improbable cut it far more still. But because people don’t know how Bayesian reasoning works, they intuitively think they can stack up excuses to rescue any theory they want to believe in, with no penalty. Because their unconscious brain doesn’t know how to compensate for the trick their conscious brain just pulled. So their intuition continues giving an output as if those stacked up excuses weren’t affecting the prior but only the likelihoods.
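
Here’s that geometric collapse sketched in code. The starting prior is made up; the 50/50 cap per evidence-free excuse is the rule just stated:

```python
prior = 0.20  # hypothetical prior probability of the pet theory, before excuses
for n_excuses in range(5):
    # each evidence-free excuse is at best 50/50, so it multiplies the prior by 0.5
    print(n_excuses, prior * 0.5 ** n_excuses)
# 0 excuses: 0.20; 1: 0.10; 2: 0.05; 3: 0.025; 4: 0.0125 -- and far worse if any
# excuse is actually improbable rather than merely unevidenced
```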

This is how people delude themselves. All by failing at Bayes.

Lessons for the Historicity Debate

All too often critics of my argument in On the Historicity of Jesus neither understand the math nor make any attempt to correct it. Yet, if I am wrong in my conclusions in that book, then I must be wrong in the math. You therefore should be able to show that. If you can’t, then you can’t claim to know I’m wrong. So far, critics of OHJ just make up excuses to ignore the math, or make up different math on no evidence whatever, all just to rationalize the result they want—rather than critically examining if what they want the answer to be is wrong.

You certainly can’t use bad Bayesian arguments to defeat good ones. For example, all too often a critic will say “the prior probability of the historicity of Rank-Raglan heroes can’t be 1 in 3, because it’s possible for a Rank-Raglan hero to be historical.” That statement is 100% illogical. Formally speaking, it’s a non sequitur. If the prior probability of the historicity of Rank-Raglan heroes is 1 in 3, then this premise already asserts that 1 in 3 of them are historical! It therefore cannot be contradicted by claiming some of them are historical. People who make this argument are just like someone saying “the prior probability of winning a lottery cannot be a million to one against, because there are people who win the lottery.” And if you don’t catch the absurdity there, keep rereading that sentence until you do.

If you want to assert that the prior is not 1 in 3, you have to show that it’s something else. There is no other way to get around it. Even if you try to insist “we have no idea what the prior is,” you are saying it’s 1 to 1—because if you admit you don’t know what it is, then you can’t say it’s higher or lower, which leaves you with equal odds for all possibilities so far as you know. And if you wish to deny even that, then it’s even worse for you. Since the probability of a thing is always the prior odds times the likelihood ratio, if you assert no one knows the prior odds, then you are asserting no one knows the final odds either. Which entails agnosticism: you can’t claim to know Jesus probably existed, if you are claiming not to know even the prior probability that Jesus existed. Likewise if you claim no one knows the likelihood ratios. If you don’t know, you don’t know. And that means you don’t know Jesus existed.

So the only way to get to “Jesus probably existed,” is to assert a prior probability that he did. And if you wish to assert it’s different than I find in Chapter 6 of OHJ, you need to actually show that. What critics tend to do is either make irrational arguments like “we don’t know the probabilities, therefore we know Jesus was probable,” or they make shit up to rationalize their prior assumptions; rather than actually engage with the peer-reviewed literature that already exposed those rationalizations as logically ineffective. They are intuitively just “sure” the prior must be higher, and so they scramble around to “invent” any excuse they can come up with to get that result. Ignoring everything I wrote in OHJ already refuting them. Indeed, that they didn’t even check proves they have no rational basis for their belief: like Christian apologists, they need the comfort of anything they can invent; they are unconcerned with whether what they invented actually even works.

The difficulty with getting a different prior is that any reference class you isolate for Jesus, like “founders of religions,” might get you a prior you like. But as soon as you put back in the background evidence you left out (like the fact that Jesus also belongs to several myth-heavy reference classes), you end up back where I did: with at best a 1 in 3 prior expectancy that Jesus really existed. I demonstrate this repeatedly in Chapter 6. If you aren’t engaging with that, then you are simply advertising to all and sundry that you don’t really care whether anything you are saying is correct. Anyone who actually cared would scruple to make sure, by testing their theories against what I’ve already demonstrated regarding them. Finding another reference class won’t work unless it is large enough and distinct enough to be more predictive than the Rank-Raglan class. So far, no one has presented any such reference class for Jesus (see OHJ, Chapter 6.5, “The Alternative Class Objection”).

What critics will try next is to change the frequency for that reference class. By trying to insist more than 1 in 3 Rank-Raglan heroes existed. But as there is absolutely no evidence that that’s the case, that approach is dead on arrival. Be that as it may, it remains the case: if you want to assert that that frequency is higher than 1 in 3, then you need to get and present the evidence that it’s higher than 1 in 3. There is no other logically valid, evidence-based way to proceed here. Because I base my conclusions on the existing evidence. I expect you to do so as well. Everything else is bullshit.

Likewise critics could try arguing for different likelihood ratios, but so far no critic has honestly even understood how, much less actually tried. Yet if they don’t know how to get a different likelihood of the evidence, they can’t know Jesus probably existed.

And that’s the final conundrum…

You can’t assert Jesus probably existed, if you don’t know how you can even know that. And you can’t know the probability Jesus existed, if you don’t even know what the prior probability is that he existed. And you can’t assert a prior, without evidence to back that prior. A prior is a frequency, a frequency of comparable persons turning out to be historical. That means you need actual comparable persons. And enough of them to give you a usable frequency. You need, in other words, evidence.
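
To illustrate what that means, here’s a minimal sketch of turning a reference class into a prior. The counts are entirely hypothetical (they are not the actual survey from OHJ); they only show the arithmetic:

```python
class_size = 15   # hypothetical: comparable persons in the reference class
historical = 5    # hypothetical: how many of them turned out to be historical
prior_probability = historical / class_size               # 1/3
prior_odds = prior_probability / (1 - prior_probability)  # 0.5, i.e., 2 to 1 against
print(prior_probability, prior_odds)
```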

You also can’t know the probability Jesus existed, if you don’t know how much more likely the evidence is if he existed, than if he didn’t. And you can’t assert that it was “a lot” more likely, if you don’t even know what the best competing alternative is—or any of the background evidence again, which tells us how frequently certain things would turn out as they did, given the causes proposed (see my analysis of the fate of King Henry, for example, in Proving History, pp. 273-75).

You likewise have to know what the evidence actually is. And not lie about it. So far, most critics of OHJ simply lie about the evidence; or are literally clueless about it. They certainly have never, so far, tried to argue that the actual evidence there is would be more likely on historicity than I estimate, or less likely on mythicism than I estimate. But that’s what you have to do, if you want to argue for a different likelihood ratio than I end up with. And if you can’t get around my prior, you have to argue for a different likelihood ratio than I end up with. There is literally no other way to argue Jesus probably existed.

One way to do that would be to agree with all my assessments, but claim I left some evidence out. Then present that evidence, derive a credible estimate of its likelihood ratio (one that a sane and honest person can’t reasonably deny is at least plausible), and complete the math, to see what effect it has on the final probability Jesus existed (see OHJ, Chapter 12.2). The only other way to do it, would be to disagree with some of my assessments. But that means you have to show a different likelihood ratio should be preferred. And that requires presenting evidence that that’s the case. But my a fortiori estimates are already wildly generous to historicity, so getting evidence for more favorable likelihoods is going to be really hard. But that’s what an honest critic has to do.
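
And for what “complete the math” looks like, here’s a sketch with made-up numbers. It assumes, as a simplification, that the new item of evidence is independent of the rest, so its likelihood ratio just multiplies in:

```python
prior_odds = 1 / 3           # hypothetical prior odds of historicity
existing_evidence_lr = 2.0   # hypothetical combined ratio for evidence already weighed
new_evidence_lr = 1.5        # hypothetical ratio for the newly presented item
print(prior_odds * existing_evidence_lr * new_evidence_lr)  # 1.0: even odds, on these inputs
```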

Conclusion

The bottom line is, if you want to assert “Jesus probably existed,” then you need to be able to explain how you know that. How do you know that probability is high? If you can’t answer that question, in any logical way from the actual evidence there is, then you cannot honestly claim to know Jesus probably existed. And yet answering that question, requires rolling up your sleeves, figuring out Bayes’ Theorem, and presenting evidence for different frequency estimates than mine, as presented in On the Historicity of Jesus (master table in Chapter 12.1). So if that’s what you want to assert, please get to doing that already.

And this same reasoning follows for every claim you wish to assert or deny. If you want to win any argument, if you want to be right about anything, you have to know you are right and show you are right. And that requires knowing and showing how you get a high probability for your conclusion. And that requires knowing and showing how you get your priors and likelihoods. Because that’s what you are already doing intuitively. So you should know how to do it explicitly. So you can vet the accuracy of your own intuition, and so someone else’s intuition can be educated to see what it’s missing or how it’s erring.

§
