Comments on: How Not to Be a Doofus about Bayes’ Theorem https://www.richardcarrier.info/archives/14186 Announcing appearances, publications, and analysis of questions historical, philosophical, and political by author, philosopher, and historian Richard Carrier.

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-34141 Wed, 23 Feb 2022 18:44:48 +0000 https://www.richardcarrier.info/?p=14186#comment-34141 In reply to Iiiccc.

Your English grammar is hard to follow, so I am not sure what you think you are trying to say here.

You seem to be committing the fallacy of affirming the consequent (just because all logically impossible things have an objective—not an epistemic—probability of zero does not mean nothing else does) and equivocation (you seem to be conflating objective and epistemic probability here: epistemic probability measures the probability that we are correct to affirm a thing, and not the actual objective probability of the thing, which can almost never be known to such precision).

For example, the objective probability that Hitler is living in a bunker in Brazil right now might well be zero, yet it is not logically impossible. Furthermore, whether that objective probability is zero is precisely what we can never know with certitude, so the epistemic probability that Hitler is living in a bunker in Brazil right now cannot be zero (though we can argue from evidence that it must be quite low). But that is simply a product of the imperfect state of our knowledge. It has nothing to do with what’s “logically” possible or impossible.

Indeed, this follows even for claims of logical possibility and impossibility themselves, which often have a nonzero epistemic probability of being wrong, i.e. there is often some chance (however small) that our belief that something really is logically impossible (or logically possible) is incorrect, and it really isn’t logically impossible (or logically possible).

Objective probability is different, because it’s measuring a different thing.

It is tautologically the case that the logically impossible has a zero objective (not epistemic) probability, because that is literally what the words “logically impossible” mean (a frequency of instantiation that is always zero). But since we never know for sure whether something is logically impossible, often it will retain a nonzero epistemic probability; nor does it follow that if something is logically possible, its objective probability is nonzero. If conditions happen to be such that it can never and will never have occurred (unbeknownst to us), then its objective (not epistemic) probability is zero, even as it remains “logically” possible (e.g. if things had been arranged differently, then it could have occurred).

What any of this has to do with infinitesimal areas in the geometry of normal distributions I cannot fathom. Your closing point seems to be a total non sequitur.

You aren’t really even clearly discussing that (“concrete number like five” is not an infinitesimal area for countable sets, but a finite and measurable area under the curve, as it spans the whole unit from what we would otherwise call 4.5 to ~5.5; e.g. you can’t ever really have 5.2 persons in a room). But even if you more clearly set up such a case, where we have a normal distribution for a continuous variable (like height rather than persons), typically when someone asks the probability of someone being “five feet tall” they do not mean “to infinitesimal precision” but to the measurement precision (like, to the nearest quarter inch), which is again a finite measurable area (the span of half an inch centered around five feet). And this is even apart from the fact that length is not infinitely divisible (quantum mechanics entails measurement discrimination ceases below the Planck length); even if length were infinitely divisible, the point follows.
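
To make that concrete, here is a minimal sketch in Python (the 69-inch mean, the 3-inch standard deviation, and the use of scipy are all assumptions made purely for illustration; none of these numbers come from the article):

```python
# A minimal sketch (hypothetical parameters): "five feet tall" measured to the
# nearest quarter inch is a finite area under the normal curve, not a point.
from scipy.stats import norm

height = norm(loc=69.0, scale=3.0)   # assumed mean and SD in inches, illustrative only

p_interval = height.cdf(60.25) - height.cdf(59.75)   # P(5 ft, to the nearest 1/4 inch)
density_at_60 = height.pdf(60.0)                      # a density, not a probability

print(f"P(59.75 <= height <= 60.25) = {p_interval:.8f}")   # small but finite and nonzero
print(f"pdf at exactly 60.0 = {density_at_60:.8f}")
```

Under these assumed numbers the interval probability comes out tiny (60 inches sits three standard deviations below the assumed mean), but it is a finite, measurable quantity, which is the point.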

For example, if (?) you are trying to make some naive Zeno-style argument about the infinitesimals summing to finite ranges, like that “exactly five feet” to “infinite precision” has an infinitesimal and thus “functionally zero” probability and therefore “all heights are logically impossible,” you are evincing ignorance of calculus, which has since the time of Archimedes (and especially since Newton and Leibniz, and even more since Cantor) proved that infinitesimals actually are not identical to zero in transfinite arithmetic; they only reduce to zero in finite arithmetic. The same would follow for any continuous variable—like probability itself.

This is why we can use calculus to sum infinitely many heights (or, as Newton was aiming to do, accelerations) of infinitesimal quantity and get a nonzero number. And it is how statisticians do this very same thing, summing infinitely many probabilities of infinitesimal size into a nonzero probability. That is in fact what normal distribution curves illustrate. If you don’t know how that works, you need to get up to speed. I recommend taking some courses in calculus.
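
And a rough numerical sketch of that summation, reusing the same assumed parameters as above: slicing the interval into ever thinner strips and summing their areas converges on the same finite, nonzero probability rather than on zero.

```python
# Summing ever thinner slices of the density converges on a finite, nonzero
# probability over the interval -- the Riemann-sum idea behind the integral.
import numpy as np
from scipy.stats import norm

height = norm(loc=69.0, scale=3.0)   # same illustrative parameters as above

for n_slices in (10, 1_000, 100_000):
    xs = np.linspace(59.75, 60.25, n_slices + 1)
    mids = (xs[:-1] + xs[1:]) / 2
    widths = np.diff(xs)
    total = float(np.sum(height.pdf(mids) * widths))   # each term shrinks toward zero...
    print(n_slices, total)                             # ...but the sum stays nonzero
```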

By: Iiiccc https://www.richardcarrier.info/archives/14186#comment-34134 Sun, 20 Feb 2022 10:24:20 +0000 https://www.richardcarrier.info/?p=14186#comment-34134 I think you made a big mistake when you tried to argue that «logically possible entails not a zero probability». If you apply modus tollens to «logically impossible entails zero probability», what you get is «not zero probability entails logically possible». This last sentence is true, but what you affirm is not, as many people who study probability can show you. For example, if you have something like a normal distribution (that is, a “continuous” probability), the probability that you get one concrete result is zero, but that doesn’t mean it is impossible. For example, if the normal distribution models the experimental error, it is easy to show that the probability that the error is a concrete number like five is zero, but that doesn’t make it impossible to get it.

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-26186 Tue, 26 Jun 2018 14:03:18 +0000 https://www.richardcarrier.info/?p=14186#comment-26186 In reply to Sebawayh X.

Maybe from the Latin, doof-us, second declension, n. “one lacking a brain.”

By: Sebawayh X https://www.richardcarrier.info/archives/14186#comment-26183 Mon, 25 Jun 2018 15:33:07 +0000 https://www.richardcarrier.info/?p=14186#comment-26183 Is the plural of doofus, DOOFI?

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-26177 Fri, 22 Jun 2018 22:02:09 +0000 https://www.richardcarrier.info/?p=14186#comment-26177 In reply to Art. 25.

Indeed your comment was quite useful. It’s always best to have multiple explanations up, of anything difficult or counter-intuitive; it increases the probability one of them will resonate.

By: Art. 25 https://www.richardcarrier.info/archives/14186#comment-26176 Fri, 22 Jun 2018 21:32:35 +0000 https://www.richardcarrier.info/?p=14186#comment-26176 In reply to Richard Carrier.

Thank you for your clarifications. Like I said, you could have ignored my 2nd question, but I appreciate the trouble you went through to make it extra-clear. I now understand better what you mean by “logical” vs. empirical impossibility.

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-26175 Fri, 22 Jun 2018 21:25:32 +0000 https://www.richardcarrier.info/?p=14186#comment-26175 In reply to Art. 25.

That’s another good way to look at it!

You are noticing the same logical problem I’m talking about. Bayes’ Theorem is only formally valid if P(h) + P(~h) = 1, and thus sums all possible outcomes—including, as you note, “all of the hypotheses we can think of being wrong.”

You can in fact think of “that all of the hypotheses we can think of are wrong” as itself a covering hypothesis (a superset), than which any specific hypothesis we haven’t thought of yet must be even less probable; and which, as you notice, can’t have a probability of zero, as that would be, again, asserting we can’t possibly be wrong (in this case, about what all the options are: yet finding out it’s something we hadn’t thought of would prove we were wrong, so our having been wrong cannot have had a zero probability).
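
A minimal sketch of that bookkeeping, with invented numbers: reserve some nonzero prior for the covering catch-all so the possibility space still sums to 1.

```python
# Toy priors (numbers invented for this sketch): the catch-all "none of the
# hypotheses we have thought of" gets a nonzero share, so the space sums to 1.
priors = {
    "hypothesis A": 0.60,
    "hypothesis B": 0.30,
    "something we have not thought of yet": 0.10,   # the covering catch-all
}
assert abs(sum(priors.values()) - 1.0) < 1e-12

# Any specific hypothesis conceived later must be carved out of that 0.10,
# so setting the catch-all to 0 would leave nothing for it to come from.
```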

My example of miracles and the supernatural should illustrate this.

(Although of course I should mention I suspect miracles and the supernatural are logically impossible, even using the definitions of their own proponents; but as it has not been proved to be, we can’t assign them a zero probability; we couldn’t do that even if we did produce a formal proof of the fact, as all that would do is greatly reduce the probability of our being wrong, per my example of the thousand mathematicians in Proving History.)

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-26174 Fri, 22 Jun 2018 21:11:23 +0000 https://www.richardcarrier.info/?p=14186#comment-26174 In reply to Art. 25.

First is that you criticize “doofuses” when they say something is “impossible”, i.e. have a zero-possibility. My first answer would be that they mean it colloquial…

Nope. That’s what I would always assume too. Which is why we actually asked them that; and went out of our way to make sure that isn’t what they meant. But nope. They were adamant. They repeatedly insisted they meant absolutely zero, and not just colloquially “impossible” in the sense of extremely or absurdly improbable. They even repeatedly mocked the idea of “impossible” colloquially just meaning an extreme improbability.

And indeed, my entire article here was inspired by my bewilderment that they maintained this even after hours of trying to be charitable and letting them clarify.

everything is possible (as a prior probability) except illogical assertions.

That’s actually not epistemically true, however. Because there is a nonzero probability we are wrong to classify an assertion as illogical. Thus, I point out in this article that, yes, declaring an assertion illogical is saying it’s logically impossible—and thus “if we are right about that” has a zero prior—but we don’t get to “assume” things in the real world. So we never get to assume “we are right about that.” We can do that in closed hypothetical systems in our minds. But when it comes to the real world outside our mind, we become fallible again. We can rarely ever know with absolute certainty that something is logically impossible (there may be some exceptions, owing to their being simple enough to attain Cartesian status; but no such examples ever came up).

But note, I was the only one granting exceptions for logical impossibility. The doofuses were consistently talking about empirical questions. And when they claimed logical impossibility, they only ever did so by removing themselves from reality. For example, defining “miracle” in their own way, so as to prove their own definition incoherent; a fallacy, as no serious proponents of miracle define miracle that way. Hence my discussing that example in this very article; they never listened nor conceded anything here, they instead just rationalized why they get to make up their own definitions and ignore the definitions of actual proponents.

BTW, there is nothing illogical about a cow jumping over the moon. We can speak of its physical impossibility, but only by relying on empirical premises that have a nonzero probability of being false. You seem not to understand the difference between logical impossibility and physical impossibility, or why the latter is only ever probabilistic.

“that would include that Bayes’ Theorem could also be wrong, and then you have no logical basis at all”

Yep. Just like any other Cartesian Demon can be true, and any other logical proof can be false. Both points I already made in this very article. So I’m a little concerned that you seem not to know what’s in the article you are commenting on here. We rely on BT not because we can’t be wrong about it being a logically necessary truth (a formally proved theorem), but because the evidence establishes that the probability it is not a logically necessary truth (a formally proved theorem) is absurdly small.

I don’t agree with the statement that every possible hypothesis must be included in ~h, only the ones we know of at the time.

Alas, as seductive as that is, it violates formal logic. Pay attention to my demonstration: if you assign a prior of zero, no amount of evidence can ever get any update beyond zero. Not even “discovering a new hypothesis” would be able to: because anything multiplied by zero remains zero. So it doesn’t matter how amazingly improbable it is that the new hypothesis is incoherent and logically impossible; we would always have to conclude that any new hypothesis is incoherent and thus logically impossible. Because that’s what it means to have said its probability was previously exactly zero: that no possible evidence could ever change our mind about that. Yet in b is the fact that we often don’t know the hypothesis that will turn out to be true; therefore the frequency of that happening (its prior) cannot be zero.
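
The arithmetic of that point is easy to check with a toy update (the likelihood values below are invented; only the structure of the calculation matters):

```python
# A zero prior is unrecoverable: no likelihood, however favorable, can move it.
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' Theorem: P(h|e) = P(h)P(e|h) / [P(h)P(e|h) + P(~h)P(e|~h)]."""
    numerator = prior_h * p_e_given_h
    return numerator / (numerator + (1.0 - prior_h) * p_e_given_not_h)

print(posterior(0.0,   0.999, 0.001))   # 0.0: the evidence cannot budge a zero prior
print(posterior(0.001, 0.999, 0.001))   # ~0.5: even a tiny nonzero prior can be updated
```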

Read any technical literature on Bayesianism or Bayesian epistemology. It all explains why ~h does include and must include every logical possibility whatever, even ones we don’t know about yet; and that not solving this logical conundrum poses a fatal problem to anyone who would attempt to ignore it. The solutions to this requirement vary. Mine (explained in the article you claim to be commenting on but seem not to have read) is that it is improbable (not impossible, just not highly probable—or in the worst cases, e.g. when the principle of indifference prevails, no more likely than 50%) that the truth is something we can’t even imagine possible—until we do conceive of it, then our doing so is itself evidence that reconditions the probability (and that probability cannot have been zero, if we want the evidence of a newly discovered hypothesis to update our probabilities). This is explicitly discussed by Wallach in his treatment of a prominent example in archaeology.

An infinite number of alternative hypotheses, each with a ridiculously small probability, would still add up to a considerable probability of one of them being true.

That’s actually not true. An infinite sum can be lower than a tiny finite fraction. Study up on the basics of calculus to understand why. Calculus was based on the discovery of this very fact. And it follows not only from what Archimedes observed, that any finite number can be obtained with an infinite sum when summing infinitesimals, but even from infinite sums of finite (but ever-diminishing) numbers, and not just infinitesimals.
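
A concrete instance of that, as a toy sum with invented numbers: a geometric series of strictly positive probabilities whose total can be made as small as you like.

```python
# Infinitely many strictly positive terms can still sum to something tiny:
# a geometric series with first term 1e-12 and ratio 1/2 converges to 2e-12.
first_term, ratio = 1e-12, 0.5
exact_limit = first_term / (1 - ratio)                      # closed-form sum of the series

partial = sum(first_term * ratio**n for n in range(200))    # already converged by 200 terms
print(exact_limit)   # 2e-12
print(partial)       # ~2e-12: adding ever more terms never pushes the total past 2e-12
```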

It might be easier to grasp this when you realize the only hypotheses that sum are mutually exclusive hypotheses. Most hypotheses are not mutually exclusive but overlapping or subsumed by other hypotheses.

For example, “Jesus was a planet in the Kuiper belt” does not add to “Jesus was not even meant to be a historical person”; it is in fact a sub-hypothesis of the latter, thus already included within it. So whatever the probability is of “Jesus was a planet in the Kuiper belt” it not only has to be less than “Jesus was not even meant to be a historical person” but it does not add at all to the probability that “Jesus was not even meant to be a historical person.” Because it’s just a division of the probability space already fully occupied by “Jesus was not even meant to be a historical person.” And since the latter probability is vanishingly small, “Jesus was a planet in the Kuiper belt” can only be smaller, and also adds nothing to the probability of ~h (once we have already included the whole covering hypothesis “Jesus was not even meant to be a historical person” in ~h).
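
And a sketch of that arithmetic with invented numbers, just to show that enumerating sub-hypotheses divides a covering hypothesis’s probability rather than adding to it:

```python
# Invented numbers, purely to illustrate subsumption.
p_cover  = 0.0001   # P("Jesus was not even meant to be a historical person")
p_kuiper = 1e-9     # P("Jesus was a planet in the Kuiper belt"): a sub-hypothesis,
                    # so it is already counted inside p_cover and cannot exceed it
p_other_subs = p_cover - p_kuiper   # the rest of the covering hypothesis's space

# Enumerating sub-hypotheses redistributes p_cover; it adds nothing to ~h:
print(p_kuiper + p_other_subs)   # 0.0001 (up to rounding), same total as before
```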

By: Richard Carrier https://www.richardcarrier.info/archives/14186#comment-26171 Fri, 22 Jun 2018 20:20:03 +0000 https://www.richardcarrier.info/?p=14186#comment-26171 In reply to Marc Miller.

Formally, chess involves no randomization other than the opponent’s decisions, which are analytically constrained, and thus (technically) it differs from poker, for example, which, owing to its randomization, actually resembles life better. This makes chess a more analytical system than a synthetic one (meaning: there is always a best move that follows with logical necessity from the available premises). In practice, however, it is effectively impossible to compute every possible move (it can be done given sufficient time; humans rarely have it), so the game emulates randomness, owing to the vast number of combinations of possible opponent moves.

Game theorists have found that the best strategies for handling uncertainty in analytical games like chess are indeed Bayesian (example, example, example). Nate Silver has an interesting discussion of Bayesian chess in The Signal and the Noise (although he doesn’t directly call it that).

(As another commenter here noted, leading computer chess players are Bayesian.)

By: Mattias Davidsson https://www.richardcarrier.info/archives/14186#comment-26170 Fri, 22 Jun 2018 10:13:31 +0000 https://www.richardcarrier.info/?p=14186#comment-26170 In reply to Art. 25.

Actually, the doofus in question really means actually impossible.
