Just this month Bible scholar James McGrath, whose incompetence and dishonesty I have documented several times now (example, example, example, example), posted a really foolish attempt to critique Bayesian history on his blog. In a post titled Jesus Mythicism: Two Truths and a Lie, McGrath aims to expose “the problematic case that” I have “made for incorporating mathematical probability (and more specifically a Bayesian approach) into historical methods.” Here’s why this shows McGrath doesn’t understand logic, math, Bayes’ Theorem, or even history.
Neil Godfrey has already destroyed this over at Vridar. But here’s what I think everyone needs to learn…
History Is Mathematical
McGrath’s opening statement alone is far more problematic than anything I’ve argued. McGrath must somehow think there can be some kind of “probability” that isn’t mathematical. But there isn’t. And that’s not something I am “proposing.” It’s an inescapable truth of logic. All statements of probability are mathematical. Period. Therefore all statements about what probably happened in the past are mathematical. By Hypothetical Syllogism, a standard formula of logic: if A, then B; if B, then C; therefore if A, then C. If all historical statements are stating a probability, and if stating a probability is always mathematical, then all historical statements are mathematical.
You can’t get around this. So how could any case “for” the fact that historical conclusions assert what’s probable be “problematic”? Is McGrath a postmodernist who rejects all historical truth? Is he an irrational dogmatist who believes every claim about history is either absolutely false or absolutely true without any probability of it being otherwise? You might see we have a lazy thinker here already. But once we admit all statements about the past are statements of probability (“unlikely,” “most likely,” “very likely,” “almost certainly,” are all making distinctions of probability), then we cannot avoid concluding that all statements about history are mathematical. You cannot escape this by insisting you are not “using numbers.” You are necessarily implying numbers every time you make such assertions…provided you mean by such statements anything at all.
As I explained already to a bunch of other doofuses:
We measure uncertainty as margins of error around a probability. If you say “I think x is very probable,” you cannot mean the probability of x is 20%, which is actually improbable, nor 60%, as that is probable, but hardly “very” probable; it’s surely not the kind of probability you mean. So we have the right to ask you what you mean: how far would a probability have to drop to make you no longer refer to it as “very” probable? You can tell us. It’s arbitrary; it’s your own words, so you get to say what you mean. But then you have to be consistent. You can’t start throwing up equivocation fallacies, constantly changing what you mean mid-argument. Unless you’re a liar; or actually want to be illogical. And only a doofus wants to be illogical.
We can then ask you, well, if you mean it has to be, perhaps, at least 90% to qualify for you describing it as “very” probable, what’s the most probable you can mean? When would the probability cease being just “very” probable and become, say, “extremely” probable? Same rules. You have to mean something by these terms. Otherwise they literally mean nothing. If you mean the same thing by “merely” and “very” and “extremely,” then those words convey nothing anymore. But the only thing they could ever mean differently, is a different range of probability. There is no escaping this.
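The point can be sketched in a few lines of Python. The cutoff values here are illustrative assumptions, not anyone’s official definitions; the point is only that verbal probability terms must mean *some* distinct numeric ranges or they mean nothing:

```python
# Hypothetical mapping of verbal probability terms to numeric cutoffs.
# These particular thresholds are assumed for illustration only.
TERMS = [
    ("extremely probable", 0.99),
    ("very probable", 0.90),
    ("probable", 0.50),
    ("improbable", 0.00),
]

def describe(p):
    """Return the verbal term for probability p under the assumed cutoffs."""
    for term, cutoff in TERMS:
        if p >= cutoff:
            return term

print(describe(0.95))  # → "very probable"
print(describe(0.60))  # → "probable"
```

Whatever cutoffs you choose, you are committed to them: if “very probable” means at least 0.90 in one argument, it cannot quietly mean 0.60 in the next.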
So when this doofus tries to claim language can operate without any logical coherence in probability theory, he’s just being a full doofus.
So has McGrath joined the doofuses?
Once you admit all probability statements are mathematical, you need then to ascertain what mathematical formula describes what you are doing when you make assertions about probability. What are historians doing when they reach conclusions about what is or isn’t probable, or how probable it is? The answer will always be Bayes’ Theorem. There is no avoiding that. No one has proposed any other. And no other known to me accurately describes any valid historical reasoning—unless it just reduces to Bayes’ Theorem again. This has been independently verified, under peer review, by a philosopher of history; and a philosopher of archaeology (and by several archaeologists as well, cited thereby). So it’s apparently Bayes’ Theorem all the way down. Historians therefore had better start learning this. Otherwise, as David Hackett Fischer has extensively documented, they will be highly prone to logical error in their work.
Indeed, I even demonstrate by syllogistic logic that all historical claims are necessarily Bayesian in Proving History, pp. 106-14 (which, incidentally, was formally peer reviewed by a professor of mathematics and a professor of Biblical studies—as I insisted on in my contract with the publisher). I likewise show there that all standard historical methods usually employed (as loose logical formulas) are in fact actually Bayesian (Ibid., Chapters 4 and 5). Does McGrath interact with or critique any of these demonstrations in Proving History? Nope. He didn’t even do that when he claimed to have read the book. That by itself is already incompetent. Competent academics actually read a book they critique, and actually interact with its arguments. So why won’t McGrath do that? That he doesn’t, entirely discredits his opinion in the matter. Why he doesn’t, we can only guess.
McGrath cannot deny history is mathematical. Because he cannot deny it is probabilistic. And he cannot deny all assertions of probability are mathematical. So if what he wishes to object to is instead only the specific formula, then the burden is on him to present what mathematical formula describes correct historical reasoning. If he cannot, then what remains is what has been proved under peer review: that it’s Bayes’ Theorem. He cannot logically gainsay that, if he cannot even comprehend it. He would do better to just admit he doesn’t understand the logic of probability; and admit this is a serious problem for a guy whose profession is all about making assertions of probability. Otherwise, he has a job to do.
What Bayes’ Theorem Actually Says about Historical Reasoning
McGrath claims that “if one followed Carrier’s logic, each bit of evidence of untruth would diminish the evidence for truth, and each bit of evidence that is compatible with the non-historicity of Jesus diminishes the case for his historicity.” That’s entirely false. Thus demonstrating McGrath does not understand Bayes’ Theorem. Even after supposedly reading a whole book about it and presumably studying it as any competent critic would do.
Neither have I said, nor does Bayes’ Theorem entail, that “each bit of evidence that is compatible with” some theory h being false “diminishes” the probability of h. To the contrary, one of the principal insights of Bayesian reasoning is that such evidence has no effect at all on the probability of h. Merely “being compatible” with a conclusion, only increases the probability of that conclusion if that same evidence is incompatible with the alternative (to some degree). Otherwise, an item of evidence can be equally compatible with both h and ~h, in which case it has no effect on the probability of h. The likelihood ratio is then just 1/1, which is just 1, and anything multiplied by 1 is itself. No change. I discuss this fact repeatedly in Proving History. It is impossible McGrath can have missed it. Unless he never really read the book.
Likewise, “each bit of evidence of untruth would diminish the evidence for truth” is only true if you more carefully reword that to say “each bit of evidence that is more probable on the untruth of h would diminish the probability of the truth of h.” Which is simply a straightforward description of evidence. That’s what evidence does, and what something must do to be called evidence for one conclusion or another. It is the likelihood ratio that determines whether any e is evidence for or against h, and how much for or against h it is. Hence if a made-up legend about someone is just as expected whether they existed or not, then that made-up legend argues for neither conclusion.
It is frequently the case that there is evidence against h that is weak (e.g. O.J. Simpson insists he is innocent), but evidence for h that is strong (e.g. a ton of forensics and the complete lack of any plausible alternative). The conclusion then is h is probable. But it would be even more probable without that contrary evidence (e.g. if O.J. Simpson admitted he was guilty). That h can already be almost certainly true (e.g. 99% likely) and yet could still be even more probable (e.g. 99.99% likely) is a simple mathematical fact that shouldn’t confuse or freak out anyone who even so much as graduated middle school.
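A sketch with invented numbers (not actual forensic data) makes the point: strong evidence for guilt, followed by a weak piece of contrary evidence (a denial), still leaves guilt nearly certain, just slightly less certain than it would otherwise be.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem in its simple two-hypothesis form."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

p = 0.5                                  # neutral starting prior (assumed)
after_forensics = posterior(p, 0.99, 0.0001)  # strong forensic evidence
# The suspect denies guilt: slightly more expected if innocent (assumed
# likelihoods: 0.5 if guilty, 0.9 if innocent). Weak contrary evidence.
after_denial = posterior(after_forensics, 0.5, 0.9)

print(after_forensics)  # near certainty
print(after_denial)     # still near certainty, but slightly lower
```

Guilt stays overwhelmingly probable; the weak contrary evidence only shaves a sliver off the posterior, exactly as described above.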
So once we have those two 6th grade math-facts cleared up, let’s look at the pile of poo McGrath’s confusion has tripped him into:
But in history as historians practice it, each claim, each piece of evidence, stands or falls on its own merits. The non-historicity of the cherry tree incident in no way dilutes the case for the historicity of George Washington. There is no need to go back over the evidence and do a recalculation of the case for historicity. That is not just because the impact of that non-historical story is infinitesimally small in comparison with other evidence. It is that the case for historicity is based on the evidence which supports it, and is not diminished by the fact that all famous people also have non-historical claims made about them.
Here McGrath seems to think that somewhere I have said that simply because there are false things said about Jesus, that this reduces the probability of Jesus. Holy balls. Dude. I said exactly the opposite! And have repeatedly said exactly the opposite. For example, here is the actual conclusion with respect to the Gospels in my peer reviewed case for the non-existence of Jesus (which McGrath has also lied about reading):
The Gospels generally afford us no evidence whatever for discerning a historical Jesus. Because of their extensive use of fabrication and literary invention and their placing of other goals far ahead of what we regard as ‘historical truth’, we cannot know if anything in them has any historical basis … This is equally expected on both minimal historicity and minimal mythicism, however, and therefore (apart from what we’ve already accounted for in determining the prior probability in Chapter 6) the Gospels have no effect on the probability that Jesus existed, neither to raise or lower it. (pp. 506-07)
Neither to raise or lower it. So why does McGrath believe I said each of these false stories “lowers” the probability of history? Because he is incompetent. And a liar. He did not read my books, despite claiming to. He consequently doesn’t know what’s in them. He chooses to make completely false claims about what’s in them instead. Why does historicity have to be defended with incompetence and lies? That’s what more and more people want to know.
Now, I did say “apart from what we’ve already accounted for in determining the prior probability.” So there is some sense in which I claim the evidence of the Gospels argues against the historicity of Jesus. But what sense exactly is that? Is it, as McGrath here falsely implies, that I am adding up false story after false story and concluding each false story reduces the probability of Jesus? Absolutely not. Explicitly not. And any honest person who actually read my work knows this. To the contrary, I have argued on this blog that more evidence of Jesus’s mythification probably couldn’t further reduce the historicity of Jesus. Because once you are that mythical, there is no more mythical you can be. Or to be more precise, there is no more evidence in our background as to the frequency of persons that mythologized, being mythical.
The Gospels only tell us, not on an item-by-item basis, but globally, how mythologized Jesus was. The answer: just as mythologized as a lot of other mythical people. In fact, few historical persons were ever that mythologized. I’ve explained this to McGrath before. If you put Jesus and everyone else as mythologized as he is into a hat, and drew one out at random, the odds you’d draw a historical person are no better than 1 in 3—and possibly as bad as 1 in 15 (OHJ, Ch. 6). Because this cannot be circularly prejudged—so we cannot say in advance whether the person drawn out of that hat is Jesus or not. The odds must be the same. For Jesus as for anyone else in the hat. That can only be changed with evidence—that is, evidence specifically that Jesus is more likely historical than the others in that hat. Notably, exactly what McGrath wants to say.
For instance, if for some reason Julius Caesar were in that hat (he isn’t, but let’s pretend he is), the prior odds he’d be historical would be 1 in 3—the odds upon merely drawing his name from the hat—but the evidence would still be overwhelming that he nevertheless existed, totally crushing that 3 to 1 against into millions to one in favor. So we need evidence. And that’s what we lack for Jesus. Despite McGrath’s hyperbolic assertions to the contrary, we have no evidence for Jesus that can even be called good…compared to every other person we are certain existed. For example, even Hannibal or Spartacus are way better attested.
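The Caesar illustration is a one-line Bayesian update. The 1-in-3 prior is from the discussion above; the million-to-one likelihood ratio is an invented stand-in for “overwhelming evidence”:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem in its simple two-hypothesis form."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 1 / 3   # the "drawn from the hat" prior discussed above
# Assume the surviving evidence is a million times more likely if Caesar
# existed than if he didn't (an illustrative ratio, not a measured one):
p = posterior(prior, 1.0, 1e-6)
print(p)  # the 1-in-3 prior is crushed into near certainty
```

That is what evidence does, and what Jesus lacks: nothing in his case generates a likelihood ratio anywhere near that strong.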
This means, for example, the Argument from Contamination only pertains to the prior, not the posterior…provided there is evidence of a thing other than mythical texts about it; if there isn’t, the prior is all you are left with. But this is why McGrath needs to learn to tell the difference between a prior probability and a posterior probability. He needs to learn how frequencies work within reference classes. He needs to learn how to correctly class a figure like Jesus. And he needs to learn how evidence works—namely, why evidence increases or decreases a probability, and how much. And when it has no effect at all. Otherwise, McGrath can have no real idea of what he is even doing when judging probabilities.
Epilogue
I’ve covered this before. And again. In very easy terms. And I explain it in detail in OHJ (e.g. pp. 506-09, 451-52, 601-06). McGrath has no excuse at this point. History is all about ascertaining probabilities. Learning how probability works is his job. So McGrath needs to do his damned job, and finally, actually, learn how probability works. Until he does, he can’t ever claim to have a logically valid argument for any conclusion. Including whether Jesus existed.
I’m still convinced the plural of “doofus” is “doofi” [du:fai], not “doofuses” 🙂
@ART 25 I think that the plural you are thinking of is “doofpodes”
Or it could be doofs. Or as one commenter jokingly suggested, doofpodes (of course it can’t be that, unless the singular were doofpus). But alas, Merriam-Webster says, doofuses.
I stand corrected
How do you find the energy to even bother constantly correcting this drivel? I see so many people mocking your use of Bayesian reasoning in historical inquiry, and yet Bayesian reasoning just entails using logic and assigning reasonable probability values. Even when historians don’t state their case mathematically, they are still using intuitive mathematics to decide what is probably true/untrue in history. That’s what everyone does without realising it (though in more error-prone ways, as we tend to move automatically with intuitive probabilities).
It’s like they don’t realise that they are literally demanding that logic and probability/mathematics be ignored in favour of God-knows-what alternative…
You’re totally right. And it baffles me for the same reason.
As Tucker, Wallach, and I all show, when historians reason logically, they are already reasoning in a Bayesian fashion (and when they aren’t reasoning in a Bayesian fashion, they aren’t reasoning logically). Even though they don’t know that’s what they are doing.
All we are doing is showing them that it’s what they are already doing. So they should be aware of that, and learn it, to ensure they do it consistently, and thus avoid countless logical traps (such as are documented by Fischer).
This is exactly what Aristotle and other logicians throughout history have done: reveal what the actual underlying assumptions are of all arguments, that make them valid (or invalid). People were already using logical arguments, and fallacies, long before humans even knew what logic or fallacies were. It would be absurd to say, “Well, I don’t structure my arguments in formal syllogisms or use logical symbols, therefore I am not using logic, therefore I don’t need to learn anything about how logic works or ever have to be logical.”
Yet this is exactly what the McGraths of the world are doing.
The only extant interpretation of the probability calculus that can make sense of your procedure is the subjective interpretation of Ramsey and De Finetti. Here probability is something like one’s disposition to accept odds on a proposition. These credences or ‘degrees of belief’ are subject to the (otherwise uninterpreted) probability axioms if you don’t want a Dutch book made against you. (No other, e.g. frequentist, interpretation is possible since you assign probabilities to individual dateable propositions.) The system is nice for statistical inference via the Bayes Rule because the probabilities ‘P(this sample | real distribution X)’ basically write themselves so we can infer ‘P(real distribution X | this sample)’ and figure something out. Moreover, the ‘irrational’ ‘arbitrary’ character of the prior tends to be washed out with the increase in samples – the standard theorems of De Finetti, Savage et al. attempt to prove various forms of this thought.
When we just have a bunch of arbitrary historical propositions and opaque arbitrary conditional probabilities linking them, mostly cherry picked, the procedure is transparently irrational.
Of course credence is measured with a real number like other simple continuous quantities, e.g. temperature. The declaration ‘all probabilities are mathematical’ is thus as fatuous as the declaration ‘all temperatures are mathematical’ or ‘all fertility rates are mathematical’.
That almost reads like a bot wrote it. Of course all temperatures and fertility rates are mathematical! That’s not fatuous. That’s a tautology. To deny it would be absurd.
As to how all subjective probabilities are necessarily estimates of actual (hence objective) frequencies (and increasingly approach the latter with more information), read the latter half of Chapter 6 of Proving History.
All one need do to avoid Dutch Books is verify your distributions always sum to 1 and don’t exceed 1. And in fact that’s precisely how we vet the validity of estimated frequencies. Subjectivism or objectivism don’t even matter for this purpose.
Subjective estimates are just simulated models of objective realities. Our accuracy is then the measure of how close our model is to the actual reality. Since margins of error are to be included, reality need only be somewhere within the margins of the ranges estimated, for reliability to obtain under the stated conditions of consistency.
Which means this test need only hold for the margined estimates (all your minimums must pass the Dutch Book test, as must all your maximums). As long as you do that, your model is coherent. And as long as reality is contained within the resulting margins (to as high a probability as your margins are set to obtain: confidence level entailing confidence interval), you will be making as accurate a statement (about the range of probabilities the truth falls within) as observations allow.
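A minimal sketch of the coherence (“Dutch Book”) check described here, in Python. The procedure of running the check separately on the lower-bound and upper-bound estimates is the one described in the text; the example distributions are invented:

```python
# Coherence check: a probability assignment over mutually exclusive,
# jointly exhaustive alternatives is coherent (not Dutch-bookable) only
# if each value lies in [0, 1] and the values sum to 1.
def is_coherent(dist, tol=1e-9):
    return all(0.0 <= p <= 1.0 for p in dist) and abs(sum(dist) - 1.0) <= tol

print(is_coherent([0.7, 0.3]))   # True: a valid distribution
print(is_coherent([0.7, 0.6]))   # False: sums to 1.3, so a bookie could
                                 # guarantee a profit against these odds
```

Under the stated procedure, one would run this same check on the vector of minimum estimates and again on the vector of maximum estimates.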
In practice humans do this intuitively every second of their lives, and every time they state a confidence in any statement. The point of learning the logic underlying it, is precisely to catch and avoid logical errors in these intuitions, such as imagining distributions that violate the laws of probability (e.g. Dutch Books).
In addition to Dr. Carrier’s reply, I would say that your argument seems to me to have an equivocation fallacy, in that you mix the tools to describe the world (i.e. probability theory) with the world it describes. Your argument seems to say (unless I’m misreading it), that if you have a very improbable historical claim (e.g. Jesus was a zombie) and another one (e.g. Jesus was an alien), then critics would throw the Dutch Book argument at it, because both possibilities cannot add to 1, and therefore probability cannot be used in the evaluation of those claims. But it can, by adding another historical claim: that both those claims are not true. The probabilities of all three claims WOULD then add up to 1.
I see, but if temperature is mathematical, so is the number of teacups in the cupboard.
My remark didn’t read as if written by a bot.
That subjective assignments of confidence in a distribution tend to approach the real distribution by Bayesian updating is an important result, but it is no help to you since propositions like “Jesus didn’t exist” aren’t distributions. ‘Subjectivism versus objectivism doesn’t matter’ if you are dealing with estimates of frequency. “Subjective estimates are just simulated models of objective realities”, is a legitimate simplification — where the ‘realities’ are distributions.
That you are interested in such propositions as ‘Jesus existed’ and ‘Paul wrote XYZ’ is what forces you to employ the subjectivist interpretation of probability. The propositions, p, that you put in a frame like ‘P (p | Jesus didn’t exist)’ aren’t samples, so no one knows how to appraise them; people just have them … or else they amuse themselves arranging them to produce coherence with the axioms.
In the statistical use, we have theorems that show how Bayesian updating of subjective probabilities can be a sort of organon of discovery. Where we are dealing with a mass of dateable individual propositions all we have is a flat weak coherence constraint like the principle of non-contradiction and the other logical principles, which are generally obeyed even in novels.
Yep. The number of teacups in the cupboard is mathematical too. So you can’t reason out anything using that number of cups and ignore mathematical laws there either. There’s no avoiding these things. When you are arguing about quantities and ratios, the logic that governs the validity of anything you say is math. And denying that is absurd. That’s my point.
And no one is claiming “propositions like “Jesus didn’t exist” are distributions.” Distributions refers to the probabilities assigned to propositions like that, not the propositions themselves. Obviously. Thus, to ensure a model is coherent, it must pass the Dutch Book test, or in more specific terms: it must satisfy the axioms of probability.
Ergo, if you set a prior probability on the proposition “Jesus didn’t exist,” the prior probability of its converse (“Jesus did exist”) must be one minus that. I discuss this in Proving History as a required part of sound and valid procedure in historical reasoning. You can no more “colloquially” believe “Jesus did exist” and “Jesus didn’t exist” are both “very probable,” than you can do so in any more formal articulation of the same assertion. The laws apply regardless of whether you admit you are doing math or not, whether you use “numbers” or not. It’s the same logical invalidity. Historians can’t avoid that by pretending they aren’t subject to the laws of probability and don’t need to know how math works.
(Likewise the likelihoods, though there it’s not P(h)+P(~h)=1 that must be satisfied, but P(e|h)+P(~e|h)=1 that must be satisfied, a fact I detail in Proving History as a methodologically useful observation for historians to learn; and that can only partly be avoided if we cancel out coefficients of contingency and thus reduce the likelihoods to their lowest common denominators; as long as our mathematics remains formally consistent, e.g. the removed coefficient is the same in both likelihoods, and P(e|h)+P(~e|h)=1 is still satisfied with the coefficient present. See Proving History, index, “coefficients of contingency.” This is basically what happens when we convert any Bayesian equation from the probability formula to the odds formula, e.g. likelihood ratios 5%/10% and 2%/4%, which satisfy the condition P(e|h)+P(~e|h)=1, both reduce to the same odds of 1/2.)
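The conversion described in that parenthesis can be checked directly. The likelihood pairs 5%/10% and 2%/4% are the ones given in the text; they differ in absolute value but reduce to the same likelihood ratio (odds of 1/2), which is all the odds form of Bayes’ Theorem needs:

```python
from fractions import Fraction

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """Reduce a pair of likelihoods to their exact ratio (odds form)."""
    return (Fraction(p_e_given_h).limit_denominator()
            / Fraction(p_e_given_not_h).limit_denominator())

print(likelihood_ratio(0.05, 0.10))  # 1/2
print(likelihood_ratio(0.02, 0.04))  # 1/2

# Odds form of Bayes' Theorem: posterior odds = prior odds x likelihood ratio.
prior_odds = Fraction(1, 3)          # an assumed prior odds, for illustration
posterior_odds = prior_odds * likelihood_ratio(0.05, 0.10)
print(posterior_odds)                # 1/6
```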
And again, in history we are always and only ever dealing with estimates of frequency. No subjective estimate of probability has any other function or meaning in historical reasoning. I demonstrate this, again, in the second half of Chapter 6 of Proving History.
And it’s not true “no one knows how to appraise them.” I show exactly how they can be appraised, by reference to objective facts (and when there are no objective facts raising or lowering an estimate, the indifference principle obtains by logical necessity by disjunctive syllogism). I do this for every specific estimate I give in On the Historicity of Jesus (describing what it would take to change the estimates; always some objective frequency-affecting fact) and in general terms as methodology in Proving History (in several places throughout, e.g. the closing numerical analysis of the Criterion of Embarrassment; my discussion of using reference classes to develop priors in Chapter 6; etc.).
You claim that objective – by which I will understand classical frequentist probabilities – are the same as subjective probabilities, at least in some ideal limit. There are theorems about the connection, as I had said, due to De Finetti, Savage, and others. But it is ungrammatical to say they come to the same, or come to the same in the limit, or the like. What has a frequentist probability is a repeatable event like “heads” or “spin up”. What has a subjectivist probability is an ordered pair of a person and a proposition – a complete statement with a truth value, not a repeatable. In the unit interval what has a ‘probability’ (a measure) is a measurable subset. And so on for all the interpretations of the calculus. One is not a subjective approximation to the other: they have nothing in common: the things that follow ‘P (…)’ can’t be substituted for each other grammatically.
Only the subjective interpretation has anything at all to do with your discussion. The trouble is that it is incredibly weak without strong unalterable unproblematic conditional ‘probabilities’. The killer example of this is the statistical one where the probabilities are for samples from a hypothesized distribution, which everyone knows how to calculate. With these I can make a genuine inference.
When, as in your case, there is just a mass of individual singular historical propositions, the conditional probabilities are up for grabs like everything else. The calculus as a whole just places a thin coherence constraint on my credences, like propositional logic, and can’t pose as a significant organon of discovery.
I don’t just claim this. I have an extended multi-page peer reviewed argument for it. In Proving History, Ch. 6.
And to be sure we are clear, I don’t say they are “the same,” but as you qualify, that subjective statements about probability are estimates of the objective probabilities (which are never known to a certainty, only themselves to a probability). In the same way that a “model of the atom” is not identical with any actual atom, but attempts to get increasingly closer to what the actual structure of an atom is, and does so in response to evidence (not random whim or arbitrary feelings).
I should also make sure you understand (as is argued throughout both my books on this) that we are usually dealing with a fortiori estimates, not a judicantiori estimates. By analogy, for a statement like “the probability of a meteorite destroying my house tomorrow is x,” the a judicantiori estimate would be the best estimate we can make from actual data (e.g. the documented frequency of meteorites of sufficient power striking the earth per square foot per day, for example, using information maybe acquired from NASA or JPL or wherever), while the a fortiori estimate would be something like “x is less than one in a million.” Where we are not saying x is one in a million, but that it has some value below that (could be many orders of magnitude below it even; doesn’t matter, since we are only seeking an a fortiori conclusion, sufficient to make decisions like whether to stay home tomorrow, and not a precise conclusion, such as a scientist would need to estimate insurance premiums for meteor strikes).
Even a judicantiori estimates can have large margins of error, as when we don’t have data like “frequency of meteorites of sufficient power striking the earth per square foot per day” (e.g. how an actuary in 1800 would decide an insurance premium for meteor strikes, not having any exact data, but only having loose proxy data, like how often he or any journalist has heard of meteorites destroying houses, and roughly but not exactly how many houses there are, and so on). These distinctions might be confusing you, so I mention them in the hopes to forestall that.
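The a fortiori logic of the meteorite example can be sketched numerically. The one-in-a-million daily bound is the figure used above as an illustration, not real NASA data; the decision threshold is likewise invented:

```python
# A fortiori estimation: we need not know the exact strike frequency;
# a defensible upper bound suffices for the decision.
daily_upper_bound = 1e-6   # assumed: "less than one in a million" per day

# Even over a whole year, the cumulative risk stays below 365 in a million:
yearly_risk_bound = 1 - (1 - daily_upper_bound) ** 365
print(yearly_risk_bound)   # just under 365e-6

# If our threshold for worrying were, say, 1 in 1000, the conclusion
# "don't worry" follows a fortiori: it holds at the bound, so it holds
# for any true frequency below the bound.
assert yearly_risk_bound < 1e-3
```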
Your remaining comments show me you have not read Proving History. So you don’t know what you are talking about here. If you want to address how I (not someone else) resolve the matter of the connection between subjective and objective probabilities, and how it avoids things like Dutch Book fallacies, you are just going to have to read the book.
But spoiler alert: there is no such thing as an unrepeatable truth statement. When you say there is an 80% chance you are right about some statement x, you are describing the frequency of how often you are right about statements backed by that kind of evidence (i.e. “in relevantly congruent epistemic situations, I will be right 4 out of 5 times”). You can be wrong about that (e.g. mis-estimate that frequency), but trying to estimate that frequency is still what you are doing. From there it’s all about how we make sure that estimate (of how often wrong we are about any statement that meets conditions y; which will definitely be repeatable, in theory and in practice) is more reliable. And that’s where you find out how actual data constrains our estimates, and thus how more data makes our estimates more accurate, and thus how at the limit, our frequency of being right about a thing, ends up identical to the actual frequency of the thing we are trying to be right about (i.e. the frequency of the thing we are asserting happened becomes, at the ideal limit, identical to the frequency with which we are right in asserting it happened). At any rate, if you want to see why all this works, read the book. That’s why I wrote the book. So I wouldn’t have to repeat dozens of pages of explanation over and over again. Please get informed. Then we can talk.
“My remark didn’t read as if written by a bot”: that is not yours to say, because you cannot know how your writing is perceived by others. You can try to write – and hope – that it doesn’t, but it’s the reader who is the judge of that – and I agree with Dr. Carrier: your first remark DID read a bit “bottish”.
Also, do you seriously question if the NUMBER of teacups in the cupboard is mathematical ? Hint : the word NUMBER should give you a clue.
You being so wrong about two basic points of semantics is typical, in the remainder of your argument, of how you confuse the sum of the various personal estimates of probability of a proposition (which sum does not stand the Dutch book test) with the actual distribution of probabilities of all propositions (which does). I must say, I admire Dr. Carrier’s patience in formulating a coherent answer to your, in my eyes, very incoherent assertions.
“What Bayes’ Theorem Actually Says about Historical Reasoning.” Should have said: What An Interpretation of Bayes’ Theorem Could Imply For People Reasoning About History. Mathematics is the architecture of the most simple and humble truths. It is their simplicity and lack of pretension that gives mathematical systems their usefulness and beauty. I could not read past that.
On a side note, Carrier I’m curious about what you think of the Galileo affair and how that relates to the relationship between science and the Catholic church?
Way too complicated to answer and off topic here.
Hello Richard,
I stumbled over a very recent debate between Dillahunty & Giunta on YouTube and Giunta claims that he can show that his god is very probable using Bayesian inference.
There’s also his project beliefmap dot org that claims to prove god’s existence, Jesus is god, etc.
Maybe you want to take some of this apart in the future?
On Bayesian arguments for God see my discussion here. Christian apologetics likes to propose irrational probabilities. But also, they like to leave evidence out. See what a correct Bayesian test of theism looks like here.
Beliefmap.org does not have any Bayesian structure I can discern. It doesn’t seem coherent at all. Nor do I see a map. It’s just a random collection of garbage arguments with occasional arbitrary probability assignments that ignore most logic and evidence. Like most of the internet.
Every argument in there is already refuted by my parallel work elsewhere. For example, their really awful “Jesus existed” stuff is already refuted in On the Historicity of Jesus. But I’ll make sure everything there gets into CHRESTUS eventually. Thanks for pointing it out.