I’ve been dealing with a bunch of doofuses lately. And I can’t tell if they are alone in their quackery, or if their disease is afflicting anyone else. So here’s a primer on how not to be a total doofus about Bayes’ Theorem.
I define a doofus here as someone who not only reasons illogically, but actively defends their illogical reasoning, and resists any evidence or demonstration that they are being illogical. They will even complain when you point out that they are being illogical, and assert that’s not a valid objection to anything they are saying.
Please don’t be a doofus. And if you encounter a doofus, send them an “eye rolling” emoticon and just part ways. Let them stew in their madness. We have no reason to waste any more time engaging them.
-:-
Doofus Statement Number One: “I can’t be wrong about it, since I am the authority on it.”
A doofus actually said that. It’s an exact quote. By a man with a Ph.D. I shit you not. And they said that, in response to my explaining that anyone, even experts, can be wrong about anything (Axioms 3 and 4 in Proving History, pp. 23-26). Even if it’s a low probability, the probability is never zero. So two rules here…
- If you think the probability you are wrong about anything is zero, you are a doofus.
- If you think the probability you are wrong about anything is zero because you’re an expert, you’re a gold star doofus.
Unless the thing in question is present uninterpreted experience. Of course. What you are thinking, feeling, experiencing, your own stipulations about what ‘you’ will mean by any given words; these you cannot be ‘wrong’ about. (See my article Epistemological End Game.) But as such, that’s never a truth about anything outside your mind, and thus never a truth about anything in the world, apart from what you are feeling or thinking or experiencing or saying. That we can call “Cartesian” knowledge, of the cogito ergo sum variety. So “I am experiencing x” you can never be wrong about. As long as x is just a plain statement of what you are experiencing, and not an inference about what’s causing it or what it really means. But on everything else, you’re fallible. That means, by definition (as in, it’s a literal synonym of), “there is a nonzero probability I’m wrong about x,” when x isn’t just a present uninterpreted experience, but rather a claim about yourself or the world that goes beyond that. Which means pretty much most claims ever made, and thus the vast majority of human knowledge.
“I am experiencing being in a room right now” you can never be wrong about. That you actually are in a room right now, you could be wrong about. No matter how big an expert you are on rooms.
-:-
Doofus Statement Number Two: “Nope. I reject that. There is absolutely zero probability that the Christian Jesus was really just an undiscovered planet in the Kuiper Belt. I can’t be wrong about that. Because I’m an expert.”
This is a paraphrase, but an accurate representation of what this doofus genuinely argued (complete with the “undiscovered planet in the Kuiper Belt” bit). Of course his conclusion is false. There actually can’t be an absolutely zero probability of that. The probability is certainly absurdly low, so low we needn’t even be bothered to know what that probability is, we can just dismiss it as irrelevant. But it can’t be absolutely zero. Why? Because you could be wrong (see doofus statement number one). And that is synonymous with saying there is a nonzero probability you are wrong—and thus a nonzero probability that Christians did really mean that Jesus was just an undiscovered planet.
Because evidence may yet change your mind about that. That is, if you believe in basing beliefs on evidence. This doofus does not, apparently, believe evidence is what we base our beliefs on (he argued against that repeatedly). But let’s just leave him to be a doofus on that, and assume we actually all agree beliefs can only be based on evidence. Even the evidence of logical demonstration—which we can also be wrong about: per my example in Proving History, p. 25:
This holds even for many claims that are supposedly certain, such as the conclusions of logical or mathematical proofs. For there is always a nonzero probability that there is an error in that proof that we missed. Even if a thousand experts check the proof, there is still a nonzero probability that they all missed the same error. The probability of this is vanishingly small, but still never zero.
I show how we can even calculate that probability, using Condorcet’s Jury Theorem (note 5, p. 297). Thus, even logical truths are really known only through evidence, evidence we can be wrong about, or draw false inferences from. The more so when we are dealing with things in no way as certain as that.
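To illustrate just the independence point (this is not the full Jury Theorem, which concerns majority votes; and the 5% per-expert error rate here is an assumption of mine, purely illustrative), a minimal sketch:

```python
import math

def log10_p_all_miss(p_miss: float, n: int) -> float:
    """log10 of the probability that n independent checkers all miss the same error."""
    return n * math.log10(p_miss)

for n in (1, 10, 100, 1000):
    print(f"n={n}: P(all miss) ~ 10^{log10_p_all_miss(0.05, n):.0f}")
# n=1000 gives ~10^-1301: far beyond any practical concern, yet still not zero.
```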
So think about it. What if we recovered some authenticated first century letters from Christian leaders explaining that everything said about Jesus in the New Testament was just allegory for an undiscovered planet in the region of what we would later dub the Kuiper belt? Well, so much for our being certain they couldn’t have meant that.
This is logically possible. That is, it is logically possible such evidence could turn up. Yes, it’s absurdly unlikely. But that’s not what impossible means in the domain of logic. “That’s logically impossible” is a synonym of “that has a probability of absolutely zero.” And vice versa. Those two sentences are literally saying exactly the same thing. Which means, if something is logically possible, it therefore cannot have a probability of zero. “If P, then Q” entails “not-Q, therefore not-P.” So, since “has a probability of zero” entails “logically impossible,” it follows that “logically possible” (not-Q) entails “does not have a probability of zero” (not-P). This is the basic logical fact called modus tollens.
“But why should we care about things like that?” Mostly we don’t have to. But we are being a total doofus if we go around confusing “logical impossibility” with “absurdly low probability.” Of course the prior probability of the hypothesis “everything written about Jesus was an allegory for an undiscovered planet” is vanishingly small, so small it falls way below our resolution. And there is no evidence for it, so nothing exists to update the probability. So it can therefore be safely ignored. Along with every other logical possibility that’s nevertheless absurdly improbable like “Jesus was really a Martian” or “Jesus was a grapefruit.”
But there are two reasons why we must logically accept these absurd hypotheses don’t have a zero probability.
The first is logical validity. All empirical argument is Bayesian (as I’ll illustrate shortly). But one of several things Bayes’ Theorem teaches us is that hypotheses can only be argued true by arguing against competing hypotheses (see If You Learn Nothing Else about Bayes’ Theorem). And the Law of Excluded Middle entails the sum of any competing set of hypotheses must be complete; that is, if we leave any logically possible hypothesis out, our argument is invalid. It becomes a false dichotomy. Thus, when you conclude h is more probable than not-h, “not-h” must include every logically possible hypothesis that is not h. If it doesn’t, your argument is invalid. Literally logically invalid. And it remains invalid whether you are aware of the Bayesian structure of your argument or not.
So you have to be able to explain why you get to ignore thousands of weird unlikely hypotheses. You can’t just assert you shall ignore them. Because assertion is not a logically valid argument. The question you must answer is: why is it logically valid to ignore them? Bayes’ Theorem answers this question. In cases like the Kuiper-planet Jesus theory: we have vast accumulated evidence that things like that are not what people mean when they write stories and make statements like the stories and statements we have about Jesus. Therefore, the probability that despite all that evidence, nevertheless the Christians did mean that, is extremely small. Not zero. They could well have meant that; there is no logically necessary reason Christians had to conform to extant evidence of what people usually do. But the probability that they nevertheless didn’t is negligible enough that we can ignore it.
And what does that mean? What is “negligible enough” to ignore? Hypotheses for which there is no particular evidence, with prior probabilities smaller than the resolution of our concern.
“No particular evidence” means the prior probability remains the posterior probability. Because that’s what evidence does: update the probability. No evidence, means no update. No change in the prior. No change in the posterior. And that’s why, if there’s no particular evidence for the Kuiper Jesus hypothesis, all that vast background evidence against it remains all there is. The resulting prior probability is then simply the probability.
“Smaller than the resolution of our concern” means smaller than we care about; which in practice means smaller than we plan to round off to. For example, if we are rounding all probabilities to the nearest whole percentile, then theories with probabilities below half a percent will vanish in our rounding. They won’t even show up in our math. We have effectively accepted their irrelevance. We could choose to look at narrower resolutions—for instance if we wanted to explore hypotheses that are only a tenth of a percentile likely—but usually we have no use for that, and it’s just a waste of our time…until we uncover evidence for the hypothesis. Then everything changes. But until then, we usually have no reason to care. And Bayes’ Theorem explains why. It provides the logical validity for our lack of concern.
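A toy illustration of both points (the prior odds here are an arbitrary assumed value):

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' Theorem: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

kuiper_prior_odds = 1e-10                       # assumed, arbitrarily tiny
post = posterior_odds(kuiper_prior_odds, 1.0)   # no evidence: the ratio is 1
prob = post / (1 + post)                        # convert odds to a probability
print(round(prob, 2))                           # to the nearest percentile: 0.0
```

No evidence means a likelihood ratio of 1, so the prior passes through unchanged; and at whole-percentile resolution it rounds away to nothing.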
The second reason you have to account for absurd theories (not take them seriously, mind you, but be able to account for why you can ignore them) is that you have to make room for the possibility of new evidence changing your mind about them. Otherwise you will be irrationally immune to evidence. Because it is logically impossible for any evidence to change your mind about something you’ve concluded has a zero probability of being true. Because that is synonymous with saying no possible evidence could ever change your mind about it. Which means no possible evidence could ever have made up your mind about it now. Which is rejecting evidence-based reasoning. Conversely, if some possible evidence can change your mind, that means some possible evidence can update your probability; but no number multiplied by zero is ever anything but zero. So even infinite evidence proving a theory will never convince you. You’ve decided its probability is zero, therefore you will go on disbelieving a Kuiper belt Jesus even after a vast quantity of evidence is unearthed unequivocally declaring that to be exactly what all early Christians were talking about.
Clearly there is something wrong with you, if no amount of evidence can ever change your mind about anything. So we have two more rules:
- If you declare no amount of evidence could ever change your mind about something, you are a doofus.
- If you declare no amount of evidence could ever change your mind about something because you’re an expert on it, you’re a gold star doofus.
Note I am not saying you are a doofus if you declare it’s absurdly unlikely any evidence will turn up that would convince you. That’s a perfectly reasonable thing to say. And again, Bayes’ Theorem is what makes that a logically valid thing to say. I’m saying you’re a doofus if you insist no possible evidence ever could turn up; that the probability of this is absolutely zero. That amounts to declaring yourself omniscient and infallible. Which reeaally makes you a doofus.
The mathematics is simple. If the prior odds of some weird hypothesis like Kuiper being true are 1/a, where “a” is, let’s say, “some value between a billion and a trillion,” then Bayes’ Theorem tells us the only evidence that can logically convince us Kuiper is true is evidence that is not just more likely on Kuiper than any alternative, but more than a trillion times more likely, and not a trillion times more likely than on just any alternative, but over a trillion times more likely than the most likely alternative. In other words, we need really damned good evidence. Quite literally, an absurdly improbable hypothesis, only becomes probable when we find absurdly improbable evidence.
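A sketch of that arithmetic in the odds form (taking a at its upper bound of a trillion; all numbers assumed, purely for illustration):

```python
a = 10**12                          # assumed prior odds against: 1 in a trillion
for lr in (10**6, 10**12, 10**14):  # candidate likelihood ratios for the evidence
    post_odds = (1 / a) * lr        # posterior odds = prior odds x likelihood ratio
    print(f"likelihood ratio 1e{len(str(lr)) - 1}: posterior odds = {post_odds:g}")
```

Only the last, absurdly improbable grade of evidence pushes the posterior odds above even. That is what “absurdly improbable evidence” means here.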
Like discovering an authenticated cache of Christian letters explaining that the whole Jesus thing they wrote about was indeed just an allegory for a lost planet. The probability of that kind of evidence existing and yet that not being true is itself absurdly small. Thus, this is absurdly improbable evidence. So if it turned up, it would have enough evidential power to convert an absurdly improbable hypothesis, into a probable hypothesis. That’s the power of evidence. Thus Bayes’ Theorem produces the logical validation of the maxim “extraordinary claims require extraordinary evidence” (Proving History, pp. 72, 117, 177, and 253). But notice it is only logically possible for this much evidence to change our minds into accepting the Kuiper hypothesis, if the Kuiper hypothesis did not start with a zero probability. Because a zero probability would mean a is (literally) infinity, and not even an infinite amount of evidence can overcome that.
To make the mathematical point clearer, consider how the Bayesian formula converts to P(h|e) = [P(h) × P(e|h)] / P(e). If P(h), the prior probability of h—the probability of h before we introduce the evidence, e—is 0, then there is literally no amount of evidence that can ever change that probability to anything other than 0. Because the most P(e|h) can ever be is 1, and 1 × 0 is 0. And 0 divided by anything, is still always 0. So there is no number that P(e)—the probability of the evidence regardless of whether h is true or not—can ever be, that would ever change the probability of h to anything other than 0. Thus, you can never have a logically valid argument for changing your mind. Literally no possible evidence will ever warrant a probability other than zero. Even when the evidence is overwhelmingly superb. Even when it’s infinitely good!
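The zero-prior trap is easy to verify directly (a minimal sketch of the formula just given):

```python
def posterior(p_h: float, p_e_given_h: float, p_e: float) -> float:
    """P(h|e) = P(h) * P(e|h) / P(e)."""
    return p_h * p_e_given_h / p_e

# Even maximally favorable evidence (P(e|h) = 1) that is itself wildly
# improbable (a minuscule P(e)) cannot rescue a zero prior:
print(posterior(p_h=0.0, p_e_given_h=1.0, p_e=1e-12))  # 0.0, always
```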
Clearly, then, you can’t claim to be an evidence-based reasoner—someone who will update their beliefs with new evidence—if you believe anything has a zero probability of being true. Even the logically impossible: though technically that’s asserting a zero probability, in reality there is a nonzero probability you have made a mistake and incorrectly concluded it’s logically impossible (see my reference to Proving History above for the demonstration). If some actual expert in logic comes along, like a dozen peer reviewers for a mathematics journal, and they all point out that you made a mistake in your argument and the conclusion is not logically impossible after all, you would be compelled to delusionally reject their conclusion and insist you were right and they are all wrong, because you “can never be wrong”! Not even a hundred experts. A thousand. A billion. You’ll never admit you were wrong. That’s the consequence of accepting a zero probability of being wrong. It’s pretty much the definition of delusional: maintaining a belief even in the face of overwhelming evidence against it.
That that would be insane, is precisely why only a doofus claims anything has an absolutely zero probability. Other than uninterpreted present experience, which asserts nothing about what’s true, other than that you are experiencing what you are experiencing. “I am experiencing being in a room” can have an absolutely zero probability of being false. But “I actually am in a room” never can. Because evidence could always prove you aren’t.
-:-
Doofus Statement Number Three: “That’s absurd. If it’s possible, then that means I have to waste time calculating the probability that Jesus was really just an undiscovered planet in the Kuiper Belt. And that’s stupid.”
I’m paraphrasing the doofus again. But not inaccurately.
What’s stupid is thinking that because Bayes’ Theorem explains why you can disregard theories like that, that therefore “I have to waste time calculating the probability that Jesus was really just an undiscovered planet in the Kuiper Belt.” No. You don’t have to spend any time on it at all. You need no numbers. No calculations. You can put away your calculator and stop freaking out.
All you have to do is bracket out the domain of absurd hypotheses (explicitly or not). Precisely because they are absurd. And Bayes’ Theorem explains why that is logically valid and thus fully justified: it’s justified, because even the sum of the probabilities of all such hypotheses will be below half a percentile. Therefore, you can disregard them. They won’t show up in your math, not if you are rounding to the nearest whole percentile, which is what we tend to do in such matters, because the margins of error in history are very large, even well above a single percentile. So probabilities below a percentile will simply be washed out, buried under our error margin.
Hence I spent a few pages on this in On the Historicity of Jesus. I explained why and how I am bracketing out an endless array of absurd hypotheses (which would by definition include “Jesus was really just an undiscovered planet in the Kuiper Belt”), and why that is logically justified, using the logic of Bayes’ Theorem (pp. 53-55).
A doofus argues that absurd hypotheses must have a zero probability because otherwise we’d have to get out calculators and do math to exclude them and “that’s, like, a total bummer, man.” Which is an embarrassingly illogical argument—the conclusion is a non sequitur; it follows in no logical way from the premise. But also a stupid argument. A non-doofus argues we don’t have to get out calculators and do math to exclude absurd hypotheses, because Bayes’ Theorem already shows us they have probabilities too small to need calculating.
-:-
Doofus Statement Number Four: “If you assume that we know the generic modality of Mother Goose’s ‘Hey, diddle, diddle,’ then yes, the historicity of the ‘cow jumping over the moon’ is absolute zero.”
This is another exact quote of the doofus. The statement itself is not doofusy. What makes them a doofus is not that they said this. But that they then confused saying this, with knowing the generic modality of Mother Goose. They confused analytic with synthetic propositions. And that’s just being a total doofus.
The analytic-synthetic distinction remains important. We now know propositions can combine the two, or be ambiguous as to which they are. But the only way to ever tell if a statement is true is to decide which it is (or break it up into separate analytic and synthetic parts). Because the truth conditions differ for each. An analytic proposition is a statement that’s true simply by virtue of its definitions. It does not describe anything in the world. It just defines a word or phrase or concept we may or may not then use in describing the world. Like “ophthalmologists are doctors.” A synthetic proposition makes assertions about what’s true apart from the meaning of the sentence. Like “ophthalmologists exist.” Or, the example the Stanford Encyclopedia of Philosophy uses, “ophthalmologists are rich.”
But reality cannot be changed by changing what you call it. Thinking you can do that, is a very common mistake among amateur philosophers.
Yes, if we assume that we know the generic modality of a nursery rhyme (and if we assume we know that such a modality never invokes a truth claim), then we can say the probability that the rhyme attests to a ‘cow jumping over the moon’ is “absolutely zero.” Because that’s simply a tautology. It’s an analytical statement. It doesn’t tell us anything about the real world. It just tells us how we will define terms: we will choose to mean by “nursery rhyme” stories that don’t make truth claims. But when we get to the real world, we are in a bind. Because now we could be wrong to categorize a story as a nursery rhyme. In other words, we don’t get to “assume” things are true in the real world. So we never in reality get to “assume” we know the generic modality of a story. We only know it to a certain probability: the probability equal to one minus the probability we are wrong in our assignment of modality.
Moreover, “nursery rhyme” may mean more than just “stories that don’t make truth claims.” For instance, we may say any story told in rhyme with childish-sounding content is a nursery rhyme. But now we have a problem. It’s logically possible for there to be stories told in rhyme with childish-sounding content that are truth claims. We can only then decide to not call them nursery rhymes (we’d have to invent some other word for them), or admit we can’t tell if something is a nursery rhyme merely because it is told in rhyme with childish-sounding content. Those are your only options. Except one…
The only other logically available option for us is to say that “a story told in rhyme with childish-sounding content” is probably not a truth claim, on the evidence that every other instance we know of hasn’t been a truth claim. But we could have missed some (we don’t have access to every story told in rhyme with childish-sounding content), and we could have miscategorized some—counting some as not truth-claims, that actually were; after all, we can’t circularly argue that they all “must” be, so we’d need some additional evidence that they are, besides “being told in rhyme with childish-sounding content.” And we rarely have access to that evidence to tell by. Indeed, one or some could always be a belligerent exception, someone defiantly using the rhyme model to convey truth claims, perhaps counter-culturally, or to maintain secrecy, or just to be creative or funny; one or some can even be a mistake, someone who didn’t realize they weren’t supposed to use the rhyme model to convey truth claims.
All these things are logically possible.
Therefore, all these things have some nonzero probability of being true.
Therefore, the probability we are wrong to assume any “story told in rhyme with childish-sounding content” is not a truth claim, equals the sum of the probabilities of all the other logical possibilities such a story could instead be an instance of.
Therefore, that probability can never be zero.
This is why we have to base our assumptions on evidence. You can’t just “assume” all nursery rhymes are not truth claims. You have to base that conclusion on a large body of examples—evidence—establishing that the observed frequency of contrary cases is zero. But we know observing a frequency of zero does not entail or even imply an actual frequency of zero. This is a logically necessary truth of probability. It could entail that, if we were omniscient and infallible: as then we would know the entire set, all existing things in the set, and everything about them, without error. But no human being has that access to information. Nor ever will. Therefore, an observed frequency of zero itself has a nonzero probability of having been observed as zero for reasons other than the actual frequency being zero. Like our having missed some cases, miscategorized some cases, not knowing of some cases, and so on.
In fact, if we know nothing else about the probability (and often we don’t), the probability that the actual frequency is zero, given an observed frequency of zero, equals Laplace’s Rule of Succession. And as that rule proves, the answer is never zero. Even when we know something about the probability—for example, that some phenomenon observed at a zero frequency requires the existence of an entity that also is not observed, so we have a double frequency of zero—that can get us to a probability of an actual frequency of zero lower than Laplace’s Rule. But it still can never get us to zero. The only way we can ever get to zero, is to be infallibly omniscient. Which, amusingly, may be impossible—even for God. Because a God can never be sure they are not the victim of a Cartesian Demon fooling them into thinking they are infallibly omniscient. Since that has a nonzero probability even for God (because there is no logically valid way to rule out the scenario), even God can never have a zero probability of being wrong!
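For concreteness, Laplace’s Rule of Succession says that after s successes in n observed trials, the probability the next trial is a success is (s + 1)/(n + 2). So with zero exceptions observed in n cases, the probability the next case is an exception is 1/(n + 2): shrinking as evidence accumulates, but never reaching zero. A minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace: P(next trial is a success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Probability the next case is an exception, after observing zero exceptions:
for n in (10, 1000, 10**9):
    print(n, rule_of_succession(0, n))  # 1/12, 1/1002, 1/1000000002 -- never zero
```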
Fortunately, Bayes’ Theorem also explains why we can dismiss Cartesian Demons as absurdly improbable—not absolutely impossible, but, again, so improbable that we can safely assume there isn’t one…even if there is. After all, the paradox of logically valid knowledge is that we can be warranted in believing even things that are false. Because the only justified true belief we ever really have, is in the probability of a thing being true given the information available to us at the time. And we never have “all the information in the universe.” Hence, we can always be wrong. And we must always factor that in. Which is why evidence changes our mind about things: if we didn’t admit we could be wrong, we’d be saying no evidence can ever change our mind. Thus, admitting we can always be wrong is a necessary corollary of basing our beliefs on evidence. You can’t have one without the other.
Which is why claiming no evidence can ever change your mind, or that the probability of your being wrong is zero, is being a doofus. As is making up stupid excuses to say such stupid things, like “but then I’d have to precisely know and calculate the probabilities of absurd things.” No. You never need to do that. It’s full-on stupid to think you need to do that. And that you don’t need to do that, is precisely what we learn from Bayes’ Theorem!
-:-
Doofus Statement Number Five: “To use the mathematical formula of Bayes’ Theorem then ipso facto it requires using specific numerical inputs, not ranges of numbers.”
A different doofus said this. It’s an exact quote.
Once you’ve stopped laughing at him, let me make sure we are all clear on this:
Using Bayes’ Theorem never requires you to use “specific numerical inputs.” At all.
Not only is Bayes’ Theorem frequently used with ranges of numbers (representing margins of error, degrees of uncertainty, and sensitivity tests) in every field of science, commerce, and industry that employs it (thus refuting the doofus), but it can also be used with no numbers at all.
Oooo! Scary! Magic you say!
No. Seriously. You don’t even need numbers.
Thomas Bayes’ paper proving Bayes’ Theorem never once uses a single cardinal number in the proof itself; numbers appear only in an appendix giving some toy examples of its application. No proof of Bayes’ Theorem ever once uses any number.
And many applications of Bayes’ Theorem involve no numbers.
I supply an entire flowchart for using Bayes’ Theorem without numbers on page 288 of Proving History. It shows an entire schema of logical relations entailed by BT. These are logical relations. You cannot violate them, without violating logic itself. And indeed I prove all empirical reasoning is Bayesian, with a deductive syllogism, in Proving History (pp. 106-14). And I show how all standard historical methods are Bayesian (pp. 97-106), including all the methods used in Jesus studies that have any logical validity whatever (pp. 121-205). Because all historical arguments argue from a prior probability and a likelihood ratio. All of them. They just don’t use those words, and don’t realize why what they are saying is logically valid (as entertainingly proved by David Hackett Fischer in Historians’ Fallacies: Toward a Logic of Historical Thought). As philosopher of history Aviezer Tucker concludes: all history, ultimately, is Bayesian. And archaeologists have been at the forefront of demonstrating this.
Whenever historians talk about plausibility, or initial likelihood, or conceivability, they are talking about prior probability. Whenever historians talk about evidence favoring one hypothesis over another, or “strongly” favoring it, or “weakly,” and so on, they are talking about a likelihood ratio. The odds on any claim being true, simply equal the prior odds times the likelihood ratio. That’s it. And both are based on evidence. Historians base their conclusions on what’s plausible on evidence of what’s usual (and what isn’t). If they didn’t, they could never have any logically valid judgment in the matter. Historians base their conclusions on how strongly evidence supports one theory over another on how unlikely that evidence is on the competing theory. Which they in turn, again, base on evidence of what’s usual (and what isn’t); in particular, everything we’ve learned about the likelihood of outcomes in causal theory.
Since all historians do this, and do nothing else worth heeding, all historians can only prove their conclusions are logically valid by showing that they conform to and do not violate Bayes’ Theorem. If it’s anything other than prior odds times likelihood ratio, it’s formally invalid. Literally illogical.
-:-
Doofus Statement Number Six: “Human truth obtains certain objectivity. It only becomes probabilistic when seeking standards of absolute truth.”
The other doofus said this. It’s an exact quote.
Yes. He said that probabilistic knowledge is “absolute truth” and abandoning probabilistic knowledge is the only objective “human truth.”
Wrap your head around that one.
I know. I couldn’t make sense of it either.
For the record: You are the doofus if you are the one in the room denying that human fallibility exists and that knowledge is probabilistic and thus in varying degrees uncertain. Because all humans are fallible. And all knowledge is probabilistic and thus in varying degrees uncertain (other than Cartesian knowledge, as just explained). And that’s so obvious, that anyone who denies it has to be a lunatic.
We measure uncertainty as margins of error around a probability. If you say “I think x is very probable,” you cannot mean the probability of x is 20%, which is actually improbable, nor 60%, as that is probable, but hardly “very” probable; it’s surely not the kind of probability you mean. So we have the right to ask you what you mean: how far would a probability have to drop to make you no longer refer to it as “very” probable? You can tell us. It’s arbitrary; it’s your own words, so you get to say what you mean. But then you have to be consistent. You can’t start throwing up equivocation fallacies, constantly changing what you mean mid-argument. Unless you’re a liar; or actually want to be illogical. And only a doofus wants to be illogical.
We can then ask you, well, if you mean it has to be, perhaps, at least 90% to qualify for you describing it as “very” probable, what’s the most probable you can mean? When would the probability cease being just “very” probable and become, say, “extremely” probable? Same rules. You have to mean something by these terms. Otherwise they literally mean nothing. If you mean the same thing by “merely” and “very” and “extremely,” then those words convey nothing anymore. But the only thing they could ever mean differently, is a different range of probability. There is no escaping this.
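For instance, one could stipulate a calibration like the following (these bands are entirely my own invention, offered only to illustrate that once you say what you mean by each term, the terms become consistent and checkable):

```python
VERBAL_SCALE = {                      # arbitrary example bands
    "improbable":         (0.00, 0.50),
    "probable":           (0.50, 0.90),
    "very probable":      (0.90, 0.99),
    "extremely probable": (0.99, 1.00),
}

def describe(p: float) -> str:
    """Map a probability to a verbal label, per the stipulated scale above."""
    for term, (lo, hi) in VERBAL_SCALE.items():
        if lo <= p < hi:
            return term
    return "extremely probable"       # p == 1.0; which, per the article, never obtains

print(describe(0.60), "|", describe(0.95), "|", describe(0.995))
```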
So when this doofus tries to claim language can operate without any logical coherence in probability theory, he’s just being a full doofus. No. “Human” truth means probabilistic truth. And every statement about what is true or is likely or is plausible, or false or unlikely or implausible, is a statement about probability. And therefore the only way anything you ever say can ever be logically valid, is if it obeys the formal logic of probability. And when you dig around to figure out what that is, you’ll find it’s what Thomas Bayes found when he went digging around to figure out what it is.
Human fallibility and uncertainty is measured in probabilities. The English language is full of assertions about probabilities. We can’t even make any statement about the truth (other than tautologies that don’t tell us anything about the world, but only about our code rules for describing it), without making some statement about a probability. It can be a fuzzy statement—a range of probabilities—but it’s always some probability.
Pretty much everything you say, is a probability judgment. We are often not aware we are doing that because we don’t usually use words like “probability” or “odds” or “frequency” or such technical terms. We instead talk about something being “usual” or “weird” or “unusual” or “exceptional” or “strange” or “commonplace” or “normal” or “bizarre” or “typical” or “expected” or “unexpected.” Likewise “odd,” “somewhat,” “doubtful,” “often,” “hardly,” “generally,” “might,” “maybe,” “perhaps,” “inconceivable,” “could be,” “can’t be.” And on and on. But these are all statements of probability. Once you realize that, you can start to question what the underlying probability assumptions within them are, and whether they are sound. That’s what a responsible reasoner does. The doofus? He flat out refuses.
For example, what does “normal” actually mean? Think about it. What do you mean when you use the word? How frequent must something be (what must its probability be) to count as “normal” in your use of the term? And does the answer vary by subject? For example, do you mean something different by “normal” in different contexts? And do other people who use the word “normal” mean something different than you do? Might that cause confusion? Almost certainly, given that we aren’t programmed at the factory, so each of us won’t be calibrating a word like “normal” to exactly the same frequency—some people would count as “normal” a thing that occurs 9 out of 10 times, while others would require it to be more frequent than that to count as “normal.”
You yourself might count as “normal” a thing that occurs 9 out of 10 times in one context, but require it to be more frequent than that to count as “normal” in another context. And you might hedge from time to time on how low the frequency can be and still count as “normal.” Is 8 out of 10 times enough? What about 6 out of 10? And yet there is an enormous difference between 6 out of 10 and 9 out of 10, or even 99 out of 100 for that matter—or 999 out of 1,000!—yet you or others might at one time or another use the word “normal” for all of those frequencies. That can lead to all manner of logical and communication errors. Especially if you start to assume something that happens 6 out of 10 times is happening 99 out of 100 times because both frequencies are referred to as “normal”—or “usual” or “expected” or “typical” or “common.”
So taking seriously the fact that all our language, all our beliefs, are probabilistic in this sense, is fundamental to not being a doofus. Don’t be a doofus.
-:-
Doofus Statement Number Seven: “We can’t use Bayes’ Theorem to rule out miracles, because there is no data, and Bayes’ Theorem only works with data.”
Yep. A doofus said this. I’m paraphrasing. But it’s what he argued. Multiple times.
Every time anyone pointed out to him that in fact there is a ton of data on this, and that in fact he was reaching his own conclusion that miracles were too improbable to credit on all that data, this doofus just ignored them and kept repeating the argument. “Bayes only works with data. There is no data. So you can’t use Bayes.” Holy balls.
We kept trying to explain to this doofus that we were agreeing with him: miracles are too improbable to credit. What we were adding, was showing how that conclusion is logically valid. He never listened. He still doesn’t get it. Classic doofus.
In actual fact the reason we know, for example, that virgin born humans are impossible (colloquially speaking), is precisely because of the vast quantities of data we have that show humans don’t spontaneously conceive, and that no one exists with the power to effect one, even once. The absence of evidence is data. But more importantly, it’s not just absent evidence. The presence of non-virgin births is positive evidence of how humans conceive. And the vast quantity of fake stories of miraculous births and beings is evidence, too—that such claims tend to be fake. That we have never reliably documented a deviation from these facts, establishes an extremely low probability it can happen (by Laplace’s Rule of Succession). That’s a low prior, entailed by vast evidence (see my logical analysis of this as The Smell Test in Proving History, pp. 114-17; and the Argument from Silence, pp. 117-19).
Similarly, any claim that though humans don’t spontaneously conceive, nevertheless sorcerers can do it (or demons or angels or gods or faeries or extraterrestrials or timelords, or super-intelligent shades of the color blue), meets with the same vast data: everywhere we’d expect to see evidence of any such beings, we see none; but find endless fakes. The prior probability that they nevertheless still exist (so as to effect a singular virgin birth), is therefore, by Laplacean succession, absurdly small.
In other words, the reason the doofus is logically justified in concluding miracles have an extremely low probability, is Bayes’ Theorem. Every opportunity for miraculous powers to have been reliably evinced, turned up negative. Every time that happened throughout history, the prior probability of the miraculous was downgraded. Iterate this thousands of times, and that prior ends up absurdly low. We don’t need to know how low it is; because we already know it’s too low to give any attention to. But we do know what could change our minds, which means we do know something about how low the prior probability is: it’s exactly as low as the probability of the evidence that would convince us turning up even though miracles aren’t real. That’s a logically necessary truth.
Because that’s how evidence works. The only way evidence can change your mind about the probability of a thing, is by updating the probability of the thing. And the only way to update the probability of a thing in a logically valid way, is Bayes. The prior odds, must be multiplied by the likelihood ratio. Any other approach, will give you a logically invalid result. Try as you might. You will never find a logically valid syllogism that gets from the evidence to the conclusion without fallacy. Except by Bayes’ Theorem. Which is why Thomas Bayes was so delighted by his discovery, and his friends so happy to publish it, after he died, in the Philosophical Transactions of the Royal Society of London.
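Here is a minimal sketch of that iterated downgrading (the per-observation likelihood ratio and the number of iterations are my own assumed toy values, chosen only to show the shape of the effect):

```python
odds = 1.0                   # start agnostic: prior odds of 1/1
LR_PER_NULL = 0.99           # assume each null result is ~1% more likely
                             # if miraculous powers don't exist
for _ in range(10_000):      # thousands of failed opportunities in history
    odds *= LR_PER_NULL      # each posterior becomes the next prior
print(odds)                  # ~2e-44: absurdly low, but never zero
```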
Only a doofus will say instead that miracles are “logically impossible” and therefore have a “probability of zero.” Because that’s saying no evidence could ever convince you there are gods or magic. Which is fundamentally irrational. We can say they are “impossible” only in the colloquial sense of absurdly improbable. Which is not a probability of zero. Just a really low probability.
And when we say you are justified in declaring miracles “impossible” in that colloquial sense, because of Bayes’ Theorem, we are not saying “because you pulled out a calculator and punched some numbers in.” No. We mean the argument you reached that conclusion with, is logically validated by Bayes’ Theorem. The doofus argues (correctly): all the evidence is of not-x, and the evidence of not-x is extremely extensive, therefore the probability of x is vanishingly small. Then the doofus argues (illogically): no mathematical formula describes what I just argued, and therefore no mathematical formula explains why what I just said is logically valid. To the contrary, only a mathematical formula ensures what they just said is logically valid; and when Thomas Bayes set out to figure out what that formula was, what he found was Bayes’ Theorem.
What the doofus did there is only logically valid if the conclusion is entailed without fallacy by the premises. Which means this is the only way to correctly model their own argument (a brief sketch in code follows the list):
- The posterior odds of x equals the prior odds times the evidential odds (the “likelihood ratio”);
- When we have a proper binary question (miracles either exist, or they don’t), and no evidence is yet entered into the prior odds, the prior odds are 1/1 and thus simply 1 (because when we have no evidence to believe x is more likely than 50% and no evidence to believe x is less likely than 50%, then we only have evidence to believe x is 50%, due to the principle of indifference);
- When we then enter evidence in to reach a conclusion, we must enter all of it in, with nothing excluded; and with no falsehoods included as true (because “evidence” only means things that are true; falsehoods are also evidence, and thus are still included, but only as falsehoods; and the unknown, as the unknown; and the uncertain, as the uncertain);
- The evidential odds will then be how much more likely all that evidence is if x is false than if x is true (or vice versa; it just happens not to be vice versa);
- So if the evidence is exactly what we expect if not-x but extremely unlikely if x, then the evidential odds lean extremely toward not-x;
- Which means the final odds lean extremely toward not-x;
- Because the prior odds are 1 and anything multiplied by 1 is itself.
- So we end up with an updated prior for x—a posterior—that is “extremely low.”
- That then becomes the prior probability of x, for any new claim of an x.
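As a minimal sketch of the model just listed (the trillion-to-one likelihood ratio is an assumed number, purely for illustration; nothing in the argument depends on it):

```python
prior_odds = 1.0               # 1/1 on the binary question, by indifference
lr_not_x = 1e12                # assume the total evidence is a trillion times
                               # more likely if x (miracles) is false
posterior_odds_x = prior_odds * (1 / lr_not_x)
posterior_prob_x = posterior_odds_x / (1 + posterior_odds_x)
print(posterior_prob_x)        # ~1e-12: "extremely low," and now the prior
                               # probability of x for any new claim of an x
```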
The doofus doesn’t need to know that’s what his brain just did. But it’s what his brain just did. Just like when he learned such basic things about reality as object permanence and causality. And it’s the only logically valid way the doofus’s conclusion can follow without fallacy from his own premises. We update probabilities, as we encounter evidence. Period. Everything else is a violation of logic, and therefore irrational.
Just like that doofus’s update we just modeled: it was effected by evidence. Even the doofus admits that. But why is it logically valid to change the probability of a thing when encountering evidence? And why is it logically valid to change it to x rather than y given a particular piece or body of evidence? If you ever sit down and try to work out an answer to these questions, you’ll end up right where Thomas Bayes did.
And that’s what we learn. In the absence of any evidence, we couldn’t have gotten that result, and our conclusion would be invalid. In the presence of completely different evidence, we would have gotten the opposite result, and we’d have had to conclude differently. And if new evidence comes up that even in conjunction with the past evidence is more unlikely on not-x than on x, and by enough, it will flip the probability of x from “extremely unlikely” to “likely.”
And at no point in this process do we ever need to state a number or use a calculator. The conclusions all follow as a matter of logical necessity, from plain statements in English. Because the theorem is true. And what we just said, conforms to the theorem.
-:-
Doofus Statement Number Eight: “I don’t have to use the definition of miracle used by anyone who claims miracles have happened. I can refute their claims just by using my own definition of miracle and showing that it’s self-contradictory.”
This is a paraphrase, but quite accurate. He argued the point for hours.
Only a doofus thinks he can make-up his own definition of miracle, refute it, and then claim to have thereby refuted everyone else’s miracle claims.
If you want to refute what someone is saying, you have to correctly describe what they are saying. Because otherwise, if a Christian, for example, claims x, and all you do is refute y, you’ve not refuted what the Christian claimed. At all. To refute what Christians say, you have to refute what Christians actually say. Which is x. Not y.
This is fundamental. Make no excuses for failing to do this (like this doofus did for hours on end). No actual serious Christian philosopher defines miracle the way they did. I’ve actually published research on this; and it’s been used by other scholars. I investigated what people appear in practice to mean, and even outright say, when they talk about miracles (and similarly magic and all other claims of the supernatural), and how they are contrasting it with “natural” phenomena. And the answer was not what this doofus made up from the armchair. See Defining the Supernatural (and more in Defining Naturalism, Defining Naturalism II, and beyond).
People who claim miracles happen, are claiming a specific kind of causal entity exists. One for which we actually have no evidence. Yet we should have lots of evidence by now. Unless you resort to some form of Cartesian Demon explanation—to gerrymander away all the missing evidence—but Bayes’ Theorem shows us why that automatically reduces the prior probability, of any theory dependent on it, by absurd amounts (as I noted already above). And once you enter the arena with a theory that has an absurdly low prior and no other evidence to its name, you are automatically a pseudoscientist. A crank in religious clothing.
The doofus kept thinking we were denying this. To the contrary, we were explaining it. As we kept telling him. That he never listened nor heard what we were saying, is why he’s a doofus. All logically valid arguments against miracles, are Bayesian. Bayes’ Theorem explains why we can conclude from a vast array of evidence, that that which isn’t reliably evinced in it, doesn’t happen. Bayes’ Theorem also explains why we could change our minds about that—what kind of evidence would do it.
Of course I have a detailed article on Bayesian counter-apologetics. And a lecture on how we’d prove miracles existed in history, if any actually existed at all, which lecture is also Bayesian (Miracles & Historical Method). And I’ve formally demonstrated the truth of non-supernaturalism in debate. But the bottom line is this: miracles can only exist if miraculous powers exist, but if miraculous powers existed, we would expect much more evidence of them by now, and we have none (other than unverifiable tales); and any attempt to explain this away, to propose why miraculous powers nevertheless exist but we don’t have any good evidence of it, requires extraordinarily convoluted and improbable assumptions for which there is no evidence—basically, some variant of Cartesian Demon. The latter tanks the prior probability into the absurdly low. The former tanks the likelihood into the absurdly low. Either way, you end up with an absurdly low posterior probability. And all absurdly low posterior probabilities can be disregarded without further ado.
No calculator was needed to reach this conclusion. No numbers. No assertions of infallibility or absolute truth. No time wasted on absurd theories. Just evidence. And logic. The logic of Bayes’ Theorem.
-:-
Doofus Statement Number Nine: “Unwin and Swinburne claim to prove ridiculous things with Bayes’ Theorem. That means you can use Bayes’ Theorem to prove anything. Therefore it’s useless.”
A paraphrase, but again accurate. They really kept on arguing this. Repeatedly. After being repeatedly shown that what they were saying is phenomenally boneheaded. Think this through. Honestly…
- William Lane Craig claims to prove ridiculous things with standard syllogistic logic. That means you can use logic to prove anything. Therefore logic is useless.
- Hugh Ross claims to prove ridiculous things with science. That means you can use science to prove anything. Therefore science is useless.
- Arthur Brooks claims to prove ridiculous things with statistical math. That means you can use statistics and math to prove anything. Therefore statistics and math are useless.
No, Mr. Doofus. Any method can be used to prove ridiculous things…if you put bullshit into its premises. That tells you nothing about the validity of the method when used honestly and competently. Unwin and Swinburne’s uses of Bayes’ Theorem are fraudulent. Just like Craig’s use of logic and Ross’s use of science and Brooks’s use of statistics. Criticize the premises in any application of any methodology. Absolutely. But don’t fallaciously confuse that with criticizing the method. That fallacy leads to rejecting all logic and all science and all mathematics and literally any and every method of every conceivable kind in any field of human knowledge ever endeavored.
And that’s stupid.
Conclusion
There is a reason Bayes’ Theorem is widely applied to countless fields—in science, industry, intelligence, communications, and commerce—and grounds entire epistemologies in philosophy. There is a reason philosophers of history have discovered even history is Bayesian. There is a reason my book arguing that passed the peer review of a professor of mathematics. There is a reason archaeologists agree and now even peer reviewed journals show historical reasoning is modeled by Bayes. So stop being afraid of it. Admit it’s beyond you or too much trouble to learn. Or learn how it actually explains and validates your own reasoning, and how to competently critique misuses of it. All just as you already do for science and logic. Become competent at it. Or admit you haven’t time to be. Don’t flail at it with doofus arguments like a terrified child.
These fools react to Bayes’ Theorem the same way climate science deniers react to facts: they are terrified of the policy implications of the truth, so they do everything in their power to rationalize their rejection of the truth. These doofs are terrified that Bayes’ Theorem means they have to accept things or do things they don’t like or are scared of. So they piss their metaphorical pants, and rant against it stinking of fear and drowning in stupidity.
No, Bayesian reasoning does not require numbers or calculators or false precision. It can model any imprecision you want. You just have to do it honestly and competently. See my Bayesian Counter-Apologetics: no numbers, no calculations, no false certainties, just logically inescapable conclusions.
No, admitting miracles and other crazy things have a nonzero probability is not “surrendering” to Christians. It’s being logically honest. It amounts to admitting it’s possible for evidence to convince us—therefore the fact that no evidence has convinced us, is a valid reason not to be convinced.
Otherwise, if we said no evidence could ever convince us—no matter how amazing and massive that evidence was—we are actually saying we have no valid reasons for our beliefs. Because we’d just go on believing them regardless of any evidence. Which means the fact that no evidence has convinced us yet is completely uninformative. If no amount of evidence would ever convince us, how can anyone say there isn’t already enough evidence to convince a reasonable person who does base their beliefs on evidence? We certainly aren’t any example to judge by. We have just said we will reject any and all evidence. So our conclusions are useless. We just admitted we can’t even tell when there would be enough evidence; therefore our opinion that there isn’t yet enough is wholly unreliable.
If that’s what you want Christians to be able to say—that atheists have declared no evidence can ever convince them, therefore atheists have no rational evidence-based beliefs—go ahead. But please don’t include me in your insanity. Or any other rational human being. Be a doofus somewhere else. We don’t need doofuses shilling for atheism. It just makes atheism look irrational. Which actually advances the cause of Christianity. You may as well join them and ask for a paycheck.
-:-

I have a notion that chess teaches you excellent logic about life, and the world… I wonder how Bayesian it is?
Currently, the best chess player in the world is AlphaZero, a machine which uses Bayesian neural networks.
Formally, it involves no randomization other than the opponent’s decisions, which are analytically constrained, and thus (technically) differs from poker, for example, which for its randomization actually resembles life better. This makes chess a more analytical system than synthetic (meaning: there is always a best move that follows with logical necessity from the available premises). However, in reality, it is in practice impossible to compute every possible move (it can be done when one has sufficient time; humans rarely do), so the game emulates randomness, owing to the large complex of combinations of possible opponent moves.
Game theorists have found that the best strategies for handling uncertainty in analytical games like chess are indeed Bayesian (example, example, example). Nate Silver has an interesting discussion of Bayesian chess in The Signal and the Noise (although he doesn’t directly call it that).
(As another commenter here noted, leading computer chess players are Bayesian.)
First, let me state that I am a big fan of you, to the extent of being a patron on Patreon for several months now.
But – you knew there would be a “but” 🙂 – I have two remarks, questions rather than criticisms, about the above text.
First is that you criticize “doofuses” when they say something is “impossible,” i.e. has a probability of zero. My first answer would be that they mean it colloquially, as in fact you did too in the text, without specifying each time that they mean “it’s theoretically not entirely impossible, but the probability of it being true is absurdly low, rendering it practically impossible.” It’s just easier and more practical to say it’s impossible.
My second answer would build on an earlier statement that everything is possible (as a prior probability) except illogical assertions. So if someone claimed a cow jumped over the moon, I would discard it as being illogical: if you calculate the amount of energy that is needed for the cow to jump over the moon, it is biologically impossible for a cow to (a) eat and store the amount of energy needed and (b) develop the bone and muscle structure to make that jump. So this is an illogical and therefore impossible statement. Unless of course you’d claim that we could have mistakes in our calculations… Then indeed everything IS possible. But that would include that Bayes’ Theorem could also be wrong, and then you have no logical basis at all! So, it’s one or the other: we accept some basic logic (and therefore some basic illogic) in which BT can do its job, or we don’t, and then BT flies out the window too!
My second question needs a little more context. You say that any possible alternative hypothesis, e.g. the Kuiper Belt Jesus hypothesis, must be taken into account as being part of ~h. Of course, you continue, the probability of that being true is absurdly small and requires extraordinary evidence in order to make it even likely.
I don’t agree with the statement that every possible hypothesis must be included in ~h, only the ones we know of at the time. Because there are an infinite number of alternative hypotheses, Jesus being part of the Milky Way, or the Andromeda galaxy, to name just two of them… An infinite number of alternative hypotheses, each with a ridiculously small probability, would still add up to a considerable probability of one of them being true…
My suggestion is simple: only alternative hypotheses we know of are included in ~h, and the posterior probability asserted. Alternative hypotheses that are not known (i.e. not formulated in any shape or form) are not part of the “background information” and a fortiori cannot be taken into account. If and when such a hypothesis DOES come to our attention, the posterior probability of our earlier application of BT becomes the prior probability, and it can be evaluated on its own.
Hmmm, reply to my own 2nd remark: of course, ~h should include any hypothesis we don’t know of, because otherwise you would have only h + the known alternative hypotheses (~h) = 100%. That isn’t necessarily true: all of them could be wrong. So, indeed, we must also include all unknown hypotheses, thus an infinite number of them.
In short, Dr. Carrier: disregard my 2nd remark! 🙂
That’s another good way to look at it!
You are noticing the same logical problem I’m talking about. Bayes’ Theorem is only formally valid if P(h) + P(~h) = 1, and thus sums all possible outcomes—including, as you note, “all of the hypotheses we can think of being wrong.”
You can in fact think of “that all of the hypotheses we can think of are wrong” as itself a covering hypothesis (a superset), than which any specific hypothesis we haven’t thought of yet must be even less probable; and which, as you notice, can’t have a probability of zero, as that would be, again, asserting we can’t possibly be wrong (in this case, about what all the options are: yet finding out it’s something we hadn’t thought of, would prove we were wrong, so our having been wrong cannot have had a zero probability).
My example of miracles and the supernatural should illustrate this.
(Although of course I should mention I suspect miracles and the supernatural are logically impossible, even using the definitions of their own proponents; but as it has not been proved to be, we can’t assign them a zero probability; we couldn’t do that even if we did produce a formal proof of the fact, as all that would do is greatly reduce the probability of our being wrong, per my example of the thousand mathematicians in Proving History.)
Perhaps the doofus in question really just means actually impossible in the colloquial sense, not a literally zero probability.
Nope. That’s what I would always assume too. Which is why we actually asked them that; and went out of our way to make sure that isn’t what they meant. But nope. They were adamant. They repeatedly insisted they meant absolutely zero, and not just colloquially “impossible” in the sense of extremely or absurdly improbable. They even repeatedly mocked the idea of “impossible” colloquially just meaning an extreme improbability.
And indeed, my entire article here was inspired by my bewilderment that they maintained this even after hours of trying to be charitable and letting them clarify.
That’s actually not epistemically true, however, because there is a nonzero probability we are wrong to classify an assertion as illogical. Thus, as I point out in this article, yes, declaring an assertion illogical is saying it’s logically impossible (and thus, “if we are right about that,” it has a zero prior); but we don’t get to “assume” things in the real world. So we never get to assume “we are right about that.” We can do that in closed hypothetical systems in our minds. But when it comes to the real world outside our mind, we become fallible again. We can rarely ever know with absolute certainty that something is logically impossible (there may be some exceptions owing to their being simple enough to obtain Cartesian status; but no such examples ever came up).
But note, I was the only one granting exceptions for logical impossibility. The doofuses were consistently talking about empirical questions. And when they claimed logical impossibility, they only ever did so by removing themselves from reality; for example, by defining “miracle” in their own way, so as to prove their own definition incoherent: a fallacy, as no serious proponent of miracles defines “miracle” that way. Hence my discussing that example in this very article. They never listened nor conceded anything here; they instead just rationalized why they get to make up their own definitions and ignore the definitions of actual proponents.
BTW, there is nothing illogical about a cow jumping over the moon. We can speak of its physical impossibility, but only by relying on empirical premises that have a nonzero probability of being false. You seem not to understand the difference between logical impossibility and physical impossibility, or why the latter is only ever probabilistic.
Yep. Just like any other Cartesian Demon can be true, and any other logical proof can be false. Both points I already made in this very article. So I’m a little concerned that you seem not to know what’s in the article you are commenting on here. We rely on BT not because we can’t be wrong about it being a logically necessary truth (a formally proved theorem), but because the evidence establishes the probability that it is not a logically necessary truth (a formally proved theorem) is absurdly small.
Alas, as seductive as that is, it violates formal logic. Pay attention to my demonstration: if you assign a prior of zero, no amount of evidence can ever get any update beyond zero. Not even “discovering a new hypothesis” would be able to, because anything multiplied by zero remains zero. So it doesn’t matter how amazingly improbable it is that the new hypothesis is incoherent and logically impossible; we would have to always conclude any new hypothesis is incoherent and thus logically impossible. Because that’s what it means to have said its probability was previously exactly zero: that no possible evidence could ever change our mind about that. Yet in b (our background knowledge) is the fact that we often don’t know the hypothesis that will turn out to be true; therefore the frequency of that happening (its prior) cannot be zero.
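A minimal sketch of that demonstration in Python (my own illustration, with made-up likelihoods), showing a zero prior survives any evidence whatever:

```python
def bayes_update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior P(h|e) via Bayes' Theorem:
    P(h|e) = P(h)P(e|h) / [P(h)P(e|h) + P(~h)P(e|~h)]"""
    numerator = prior * likelihood_h
    denominator = numerator + (1 - prior) * likelihood_not_h
    return numerator / denominator

# Even overwhelmingly favorable evidence (near-certain on h, nearly
# impossible on ~h) cannot move a zero prior:
print(bayes_update(0.0, 0.999999, 1e-15))    # -> 0.0, forever

# Whereas any nonzero prior, however tiny, can be driven up by evidence:
print(bayes_update(1e-12, 0.999999, 1e-15))  # -> ~0.999
```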
Read any technical literature on Bayesianism or Bayesian epistemology. It all explains why ~h does include and must include every logical possibility whatever, even ones we don’t know about yet; and why failing to solve this logical conundrum is fatal to anyone who attempts to ignore it. The solutions to this requirement vary. Mine (explained in the article you claim to be commenting on but seem not to have read) is that it is improbable (not impossible, just not highly probable; or in the worst cases, e.g. when the principle of indifference prevails, no more likely than 50%) that the truth is something we can’t even imagine possible, until we do conceive of it; then our doing so is itself evidence that reconditions the probability (and that probability cannot have been zero, if we want the evidence of a newly discovered hypothesis to update our probabilities). This is explicitly discussed by Wallach in his treatment of a prominent example in archaeology.
That’s actually not true. An infinite sum can be lower than a tiny finite fraction. Study up on the basics of calculus to understand why; calculus was built on the discovery of this very fact. And it follows not only from what Archimedes observed (that any finite number can be obtained with an infinite sum of infinitesimals), but even from infinite sums of finite numbers (and not just infinitesimals).
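A standard example (mine, using a simple geometric series): infinitely many hypotheses can each get a nonzero prior while their total stays as small as you like:

```latex
% Assign the n-th unknown hypothesis a prior of epsilon / 2^n.
% Then infinitely many nonzero priors sum to just epsilon:
\sum_{n=1}^{\infty} \frac{\epsilon}{2^n}
  = \epsilon \left( \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \right)
  = \epsilon
% So with epsilon = 0.000001, an infinite number of live hypotheses
% still occupies only a millionth of the probability space.
```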
It might be easier to grasp this when you realize the only hypotheses that sum are mutually exclusive hypotheses. Most hypotheses are not mutually exclusive but overlapping or subsumed by other hypotheses.
For example, “Jesus was a planet in the Kuiper Belt” does not add to “Jesus was not even meant to be a historical person”; it is in fact a sub-hypothesis of the latter, and thus already included within it. So whatever the probability of “Jesus was a planet in the Kuiper Belt” is, it not only has to be less than that of “Jesus was not even meant to be a historical person,” it also adds nothing to the probability of the latter, because it’s just a division of the probability space already fully occupied by that covering hypothesis. And since the latter probability is vanishingly small, “Jesus was a planet in the Kuiper Belt” can only be smaller, and likewise adds nothing to the probability of ~h (once we have already included the whole covering hypothesis “Jesus was not even meant to be a historical person” in ~h).
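In symbols (again my own restatement): if hypothesis A is a sub-hypothesis of covering hypothesis B, then:

```latex
% A sub-hypothesis occupies part of the probability space its covering
% hypothesis already fully occupies:
A \subseteq B
  \;\Rightarrow\; P(A) \le P(B)
  \quad\text{and}\quad P(A \cup B) = P(B)
% So adding the Kuiper Belt hypothesis (A) to ~h alongside its covering
% hypothesis (B) adds nothing: the union's probability is still just P(B).
```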
Thank you for your clarifications. Like I said, you could have ignored my 2nd question, but I appreciate the trouble you went through to make it extra-clear. I now understand better what you mean by “logical” vs. empirical impossibility.
Indeed your comment was quite useful. It’s always best to have multiple explanations up, of anything difficult or counter-intuitive; it increases the probability one of them will resonate.
I almost feel like I know who you’re talking about, or at least one of the contributors to the ‘doofus’ statements. I know a person that you have previously collaborated with who has made very…off…statements about Bayesian epistemology on his private blog.
Is the plural of doofus, DOOFI?
Maybe from the Latin, doof-us, second declension, n. “one lacking a brain.”
I think you committed a big mistake when you tried to argue that “logically possible entails not a zero probability.” If you use modus tollens on “logically impossible entails zero probability,” what you get is “not zero probability entails logically possible.” This last sentence is true, but what you affirm is not, as many people who study probability can show you. For example, if you have something like a normal distribution (that is, a “continuous” probability), the probability that you get a particular result is zero, but that doesn’t mean it is impossible. For example, if the normal distribution models experimental error, it is easy to show that the probability that the error is a particular number like five is zero, but that doesn’t make it impossible to get it.
Your English grammar is hard to follow so I am not sure what you think you are trying to say here.
You seem to be committing the fallacy of affirming the consequent (just because all logically impossible things have an objective—not an epistemic—probability of zero does not mean nothing else does) and equivocation (you seem to be conflating objective and epistemic probability here: epistemic probability measures the probability that we are correct to affirm a thing, and not the actual objective probability of the thing, which can almost never be known to such precision).
For example, the objective probability that Hitler is living in a bunker in Brazil right now might well be zero, yet it is not logically impossible. Furthermore, whether that objective probability is zero is precisely what we can never know with certitude, so the epistemic probability that Hitler is living in a bunker in Brazil right now cannot be zero (though we can argue from evidence that it must be quite low). But that is simply a product of the imperfect state of our knowledge. It has nothing to do with what’s “logically” possible or impossible.
Indeed, this follows even for claims of logical possibility and impossibility themselves, which often have a nonzero epistemic probability of being wrong, i.e. there is often some chance (however small) that our belief that something really is logically impossible (or logically possible) is incorrect, and it really isn’t logically impossible (or logically possible).
Objective probability is different, because it’s measuring a different thing.
It is tautologically the case that the logically impossible has a zero objective (not epistemic) probability, because that is literally what the words “logically impossible” mean (a frequency of instantiation that is always zero). But since we never know for sure whether something is logically impossible, it will often retain a nonzero epistemic probability; nor does it follow that if something is logically possible, its objective probability is nonzero. If conditions happen to be such that it can never and will never have occurred (unbeknownst to us), then its objective (not epistemic) probability is zero, even as it remains “logically” possible (e.g. if things had been arranged differently, then it could have occurred).
What any of this has to do with infinitesimal areas in the geometry of normal distributions I cannot fathom. Your closing point seems to be a total non sequitur.
You aren’t really even clearly discussing that (“concrete number like five” is not an infinitesimal area for countable sets, but a finite and measurable area under the curve, as it spans the whole unit from what we would otherwise call 4.5 to ~5.5; e.g. you can’t ever really have 5.2 persons in a room). But even if you more clearly set up such a case, where we have a normal distribution for a continuous variable (like height rather than persons), typically when someone asks the probability of someone being “five feet tall” they do not mean “to infinitesimal precision” but to the measurement precision (like, to the nearest quarter inch), which is again a finite measurable area (the span of half an inch centered around five feet). And this is even apart from the fact that length is not infinitely divisible (quantum mechanics entails measurement discrimination ceases below the Planck length); even if length were infinitely divisible, the point follows.
For example, if (?) you are trying to make some naive Zeno-style argument about the infinitesimals summing to finite ranges, like that “exactly five feet” to “infinite precision” has an infinitesimal and thus “functionally zero” probability and therefore “all heights are logically impossible,” you are evincing ignorance of calculus, which has since the time of Archimedes (and especially since Newton and Leibniz, and even more since Cantor) proved that infinitesimals actually are not identical to zero in transfinite arithmetic; they only reduce to zero in finite arithmetic. The same would follow for any continuous variable—like probability itself.
This is why we can use calculus to sum infinitely many heights (or, as Newton was aiming to do, accelerations) of infinitesimal quantity and get a nonzero number. And it is how statisticians do this very same thing, summing infinitely many probabilities of infinitesimal size into a nonzero probability. That is in fact what normal distribution curves illustrate. If you don’t know how that works, you need to get up to speed. I recommend taking some courses in calculus.
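To make that concrete, here is a short Python sketch using scipy’s normal distribution (the height figures are invented purely for illustration):

```python
from scipy.stats import norm

# Model adult height as (roughly) normal: mean 66 inches, SD 4 inches.
# (These parameters are illustrative, not real survey data.)
heights = norm(loc=66, scale=4)

# The "probability" of being exactly 60.000... inches tall, to infinite
# precision, is infinitesimal: the pdf gives a density, not a probability.
print(heights.pdf(60))  # ~0.032 per inch; a density, not itself a probability

# But "five feet tall" in practice means a finite measurable interval,
# e.g. to the nearest quarter inch; the area under the curve is nonzero:
p = heights.cdf(60.125) - heights.cdf(59.875)
print(p)  # ~0.008: small, but decidedly not zero

# And integrating the density over ALL heights (summing infinitely many
# infinitesimal slices) returns exactly 1, not zero:
print(heights.cdf(float("inf")) - heights.cdf(float("-inf")))  # 1.0
```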