Tim Hendrix wrote a critical analysis of my book Proving History two years ago, and recently made it available online. Coincidentally I also just discovered a review of the book in College & Research Libraries Reviews, which had been published in June of 2012 (pp. 368-69). That was only one long paragraph, but I was surprised it understood the book and took a positive angle on it, concluding:

The use of a mathematical theorem to establish reliable historical criteria can sound both threatening and misguided. However, Carrier describes and defends the theorem in layman’s terms, demonstrates that historians actually think in terms of probabilities while rarely quantifying them, shows how all other axioms and rules in historical methodology are compatible with the theorem, and then gives it a practical workout on recent studies on the historicity of Jesus … [in which] Carrier shows how the criteria for judging whether or not Jesus was a historical figure (coherence, embarrassment, multiple attestation, contextual plausibility, etc.) are replaceable by Bayes’s Theorem, which “if used correctly and honestly . . . won’t let you prove whatever you want, but only what the facts warrant.”

Hendrix (who has a Ph.D. relating to Bayesian studies) gives the book a much closer look at its technical aspects in applying Bayes’ Theorem. There are some issues of grammar suggesting English might not be Hendrix’s first language (he also uses British spelling conventions), but his writing is good enough to work around that (most of the time).

Overall, Hendrix concurs with a lot. On taking a Bayesian approach to the historicity of Jesus, his conclusion is that, “I think this is an interesting idea, and while I am uncertain we will find Jesus (or not) at the end, I am sure Dr. Carrier can get something interesting out of the endeavour” and “the sections of the book which discuss history are both entertaining and informative.” He also approves of my defeat of certain approaches to history in Jesus studies, such as over-reliance on the Criterion of Embarrassment. But the bulk of his analysis is critical, though only of a few select points, all of which he bizarrely misunderstood. To those I now turn.

Does All Historical Reasoning Reduce to Bayes’ Theorem?

Hendrix starts with a descriptive introduction, both of the book and of Bayesian reasoning. Then he analyzes my formal demonstration that all historical reasoning must reduce to Bayes’ Theorem. The first issue he raises is that by this I implicitly mean only the probability that factual hypotheses are true or false (given whatever starting assumptions we put in, which is a separate issue). But what about, he asks, other kinds of statements?

For instance, suppose we define Jesus as a “highly inspirational prophet”, a great many in my field would say the modifier “highly” is not well analysed in terms of probabilities but requires other tools. More generally, it goes without saying we do not have a general theory for cognition, and I would be very surprised if that theory turned out to reduce to probability theory in the case of history.

I’ll just say, if you can’t define it, then you can’t answer it. So these kinds of unanswerable questions are moot. But even if we do define the terms usefully in a question like this and what we end up with is not a factual statement but an evaluative statement, then we are no longer making a claim about history. We are making a claim about what value people should assign to something. And that’s a different field of inquiry than history. And that is why Proving History does not address those questions.

Meanwhile, alternative interpretations of a question like that are straightforward historical claims. For example, “the teachings of Jesus were widely valued in historical period P” is a hypothesis that will have a probability value derivable from Bayes’ Theorem based on the likelihoods when we collect evidence of people in P saying they value those teachings, and/or acting on that value, and checking for how widespread these things were. This might of course end in valid nuanced outcomes like “we can prove lip service was near universal, but actually following the teachings of Jesus is virtually nonexistent.” That statement is true to a certain probability, and that probability would derive from the two consequent probabilities in the theorem: the probability of the evidence we gathered on that statement being true, and the probability of that same evidence on that statement being false. Prior probabilities might factor in as well, depending on how you model the problem, but the output will be the same (whether evidence affecting the probabilities goes in b or e). Likewise for any other statement like “the figure of Jesus was highly affecting of the culture in period P” [using “highly” in the sense of “widely” or “non-trivially,” for example], which is another historical claim whose probabilities derive from the evidence again.
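The kind of calculation just described can be sketched in code; this is a minimal illustration assuming a simple binary model (h vs. ~h), with every input number invented purely for the example:

```python
# A minimal sketch of the calculation described above, assuming a simple
# binary model (h vs ~h). All input numbers are invented for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(h|e) by Bayes' Theorem for a binary hypothesis h vs ~h."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# h = "the teachings were widely valued in period P" (hypothetical claim)
# Suppose the surviving evidence is very expected if h is true (0.9)
# and fairly unexpected if h is false (0.2), with an agnostic prior.
print(round(posterior(0.5, 0.9, 0.2), 3))  # → 0.818
```

The point is only the structure: the output is fixed by the prior and the two consequent probabilities, however we choose to model the problem.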

So I don’t see any actual problem here. And Hendrix does admit his concern is minor.

His greater concern is that though Bayes’ Theorem does decide outcomes from inputs (so all historical methods do reduce to it at that level of analysis), it doesn’t help us decide the inputs. That’s not entirely true (see the index of Proving History, “iteration, method of”), but it is relevantly true, in that, as you walk the math back, eventually you leave the realm of history and enter the realm of physics and philosophy (with all its Cartesian Demons and Hilbert’s Hotels and Holographic Cows), but more importantly, unlike in, say, particle physics, in history we can’t do the math precisely in the first place. We can only at best reach a fortiori estimates (see index, “a fortiori”). Because historians simply have to “guess” at what the inputs are. Just as intelligence analysts must do when they use Bayes’ Theorem to anticipate the behavior of foreign nations and hostile parties.

This I fully acknowledge in Proving History and provide tools to work with: in fact, the tools historians already routinely use (they just don’t realize they are using them). Because this problem exists regardless of what methods historians use. What BT does is force us to admit it, and to spell out where and when we are doing it, so that our inputs can be identified, analyzed, and critiqued. That is an enormous advantage over every other method historians have attempted to model their craft with (as I demonstrate in Chapter 4). The tools that help us maintain validity for history in the face of the subjective estimating experts must perform include the method of a fortiori reasoning (pp. 85-88; plus, index); avoiding false precision and instead mathematically accounting for uncertainty and margins of error (pp. 66-67); and not confusing subjective estimates with arbitrary estimates (pp. 81-85), instead making debates about the inputs a fundamental part of history as a field, where subjective estimates have to be justified by data and validated by peers (pp. 88-93, 208-14).

In other words, historians can’t get away with saying “x is probable” without explaining what they mean by “probable” (55%? 95%? 99.99%? What?) and why they think it’s that probable and not some other probability—or why they think it’s probable at all, a question that ultimately can’t be answered by any historian without sound Bayesian reasoning (whether they are consciously aware of it or not). The crucial function BT serves here is to settle what inputs we are supposed to be looking for in the first place. Historians can talk probabilities all day long, but often have no idea what probabilities they should be looking for or asking about. BT shows us the role of priors (historians routinely rely on prior probabilities without even being aware of it, and often can’t even tell the difference between a prior probability and a consequent probability) as well as the role of the likelihood ratio (that we must estimate the probability of e on h, and the probability of e on ~h, and the ratio between them is determinative of the output, a complexity most historians are completely oblivious to, as I explain is a problem, for example, in On the Historicity of Jesus, pp. 512-14).
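The role of the likelihood ratio can be sketched with the odds form of Bayes’ Theorem; the numbers here are hypothetical, chosen only to show the mechanics:

```python
# Sketch of the odds form of Bayes' Theorem: posterior odds equal
# prior odds times the likelihood ratio P(e|h) / P(e|~h).
# All numbers are hypothetical, for illustration only.

def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior_odds = 0.25   # prior probability 0.2, i.e. odds of 1 to 4
lr = 8.0            # e.g. P(e|h)=0.8 vs P(e|~h)=0.1: e is 8x likelier on h
post_odds = update_odds(prior_odds, lr)   # 2.0, i.e. odds of 2 to 1
post_prob = post_odds / (1 + post_odds)
print(round(post_prob, 3))  # → 0.667
```

Notice that only the ratio of the two consequent probabilities enters the update, which is exactly why estimating the probability of e on ~h is as indispensable as estimating it on h.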

This is why Hendrix’s concern here is misplaced. He thinks it’s obvious that the posterior probability is entailed by the combining of a prior and a likelihood ratio and that it is obvious that the likelihood ratio consists of relating the probability of e on h against the probability of e on ~h. Therefore, he thinks it’s vacuous to say that all historical methods are Bayesian, because that’s already obvious, so what use is proving it?

Well, guess what. None of this is in fact obvious. In fact, too many historians screw the pooch on every step of this reasoning: they don’t know they are relying on priors, and don’t know how the priors they are relying on should be affecting their conclusions (they also don’t know what a prior actually is or how to validly derive it from data: e.g. see Proving History, pp. 229-56; or how doing so requires demarcating the contents of b from e: index, “demarcation”); they also don’t know that the probability of h is determined by the probability of e if h is true, and are even less aware that it is also determined by the probability of e if h is false. In fact, failing to properly test their theories against alternatives has been commonly pointed out as an error historians are prone to, and many historians who even know they are supposed to do it, don’t know how to.

In other words, Proving History is about explaining to historians exactly what Hendrix is saying: that BT determines the outputs from your inputs. So throw away all other methods of generating an output from your inputs that you’ve been using, and learn this mechanism instead, because it is the only one that is valid. And once you know how it works, you will finally know how to validly derive an output from your inputs, and what inputs you are supposed to be looking for and guessing at and arguing over in the first place.

Historians don’t know these things. Consequently, they don’t believe these things. Some even adamantly deny them. That necessitated my providing a formal proof, one that they can’t weasel out of. And so I did. Hendrix might find it frustrating that we have to do this. I share his pain. But alas. Hendrix agrees with me: knowing whether a claim about history is true requires Bayesian reasoning. He seems only to be annoyed that I had to prove it.

Does Proving History Teach Us How to Apply Bayes’ Theorem?

Hendrix is concerned that I don’t prove any new facts about history by applying the theorem. In fact that wasn’t the function of Proving History. A test application on a serious problem is in the sequel, On the Historicity of Jesus, as is repeatedly stated in PH (although when he wrote this review, the latter had not yet been published, so he couldn’t evaluate it).

What I do in PH is show that all the methods historians already use reduce to BT (Chapter 4) and that when they realize this, they can better understand and apply those methods, and avoid mistakes in using them. And I then use BT (in Chapter 5) to show that the methods used in Jesus studies either violate BT (and thus must be abandoned as illogical) or fail to get the results claimed for them (if you apply them in agreement with BT). I then provide tools for how to build BT arguments and avoid mistakes in doing so (in Chapters 3 and 6).

Throughout, what Proving History is about is not how to do math, but how to understand the logical structure of historical reasoning. Which structure happens to be described by Bayes’ Theorem. But the aim is not to build and run differential equations on plotted graphs. The aim is to understand the structure, and thus understand the logic, and thus understand what probabilities you are supposed to be estimating, and what is then entailed once you’ve estimated them. One doesn’t even have to do math to do that and apply it soundly (PH, pp. 286-89), but even insofar as one uses math, it need only be as crude as sixth grade arithmetic, and that with wide margins of error. Precision is not required. Complex calculations are not required. Historians simply need to learn how to interrogate their own statements like “x is very probable” or “x is somewhat likely” and then understand, once they’ve explained to themselves what they are even saying with such words, what then necessarily follows by the logic of probability. Proving History equips them to do that. They also need to know how such input statements can be justified by the evidence, or how they can be debated within the field once exposed, and PH gives some guidance on that, too.
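As a sketch of what such crude, a fortiori arithmetic can look like in practice: estimate each input as a wide range rather than a point value, and compute the posterior from the extremes least and most favorable to the hypothesis. All the ranges below are invented; the point is only the worst-case bounding, not the numbers:

```python
# A fortiori reasoning as crude interval arithmetic: instead of exact
# inputs, use wide bounds and compute the posterior from the least and
# most favorable combinations. All ranges here are invented.

def posterior(prior, p_e_h, p_e_not_h):
    return (prior * p_e_h) / (prior * p_e_h + (1 - prior) * p_e_not_h)

# Estimated ranges for each input (wide margins of error):
prior_lo, prior_hi = 0.3, 0.7
peh_lo, peh_hi = 0.6, 0.9       # P(e|h) range
penh_lo, penh_hi = 0.1, 0.4     # P(e|~h) range

# Worst case FOR h: lowest prior and P(e|h), highest P(e|~h).
lower = posterior(prior_lo, peh_lo, penh_hi)
# Best case for h: the opposite extremes.
upper = posterior(prior_hi, peh_hi, penh_lo)

print(round(lower, 3), round(upper, 3))  # → 0.391 0.955
```

Since the posterior rises with the prior and P(e|h) and falls with P(e|~h), the two extreme combinations genuinely bound the result: whatever the true inputs within those ranges, the conclusion lies between the two printed values, and no greater precision is needed.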

So I think Hendrix’s concern here is misplaced as well. Proving History does what it aims to do. Nothing more.

It is notable that Hendrix agrees with my applications of BT in these respects. He concurs with how BT, and probability theory generally, collapse applications of the Criterion of Embarrassment by Jesus scholars. Indeed, Hendrix does an excellent job of re-demonstrating one of my points about this with a full application of Bayes’ Theorem, which in the book I kept much simpler (as a discourse about ratios) to illustrate every step of reasoning and not overload the reader with unnecessary modeling. Proving History was written for humanities majors, whose eyes would have completely glazed over at Hendrix’s revision of the argument into BT form, and who would not have understood it at all. I think his revision is excellent, and a good addition to the point. It just wouldn’t have worked well in PH, given its actual target audience.

One thing I do think he does wrong, though, is make the problems of history far more complicated than they need to be. He says at one point that we need “10-20-(100?)” variables in any equation. That simply isn’t true. You can bypass all of that with broader definitions and allowing minor concerns to be washed out by a fortiori reasoning (PH, p. 85).

For example, Hendrix thinks it matters to the probability of preservation whether a Jesus-friendly preserver of an embarrassing story knew that story was true, but that’s not the case. As I wrote (emphasis now added):

[A]ll false stories created by friendly sources have motives sufficient to preserve them (since that same motive is what created them in the first place), whereas this is not the case for true stories that are embarrassing, for few such stories so conveniently come with sufficient motives to preserve them (as the entire logic of the EC argument requires).

So the probability that a Jesus-friendly preserver of an embarrassing story would preserve that story is entirely a function of whether that Jesus-friendly preserver saw enough value in the story to preserve it. It wouldn’t matter at all whether the story was actually true or false for it to have that value. It also wouldn’t matter to the math whether the reason the Jesus-friendly preserver valued it was that they believed it was true. Because the reasons don’t matter at all. That it had enough value to them to preserve it is the only fact we need measure, not why. We don’t need to know why. It would only matter if we could prove the “why” was “that it was actually in fact true,” but (a) we can’t in any of these cases and (b) their believing it was true wouldn’t tell us that, either, even if we could prove they believed it was true (or even cared whether it was true), which (c) we also can’t do. Hendrix is thus needlessly over-complicating the math. Our objective should rather always be to simplify the math as much as possible while still yielding a logically sound conclusion. A fortiori reasoning, and careful defining of measured terms, accomplishes that.

The rest of Hendrix’s critique consists of insisting historians need vastly greater precision and vastly more complex models of history to say anything about history at all. That doesn’t make any sense. To the contrary, they can’t and never have and never will have the kind of precision Hendrix wants. That sucks. But welcome to history. Moreover, more complex models are almost always useless. When historians reason about history—and I mean all claims about history, made by all historians, in all works of history in the last sixty years—they have not used, nor needed, any of the complex models Hendrix wants. The role of understanding BT is not to make history needlessly more complicated. The role of understanding BT is to look under the hood of arguments historians are already making, to use BT to model those arguments, and thus to understand what their inputs actually are, what that historian is basing them on (and thus whether they should be replaced), and whether the output they are getting is consistent with those inputs (in other words, consistent with BT).

This does not require increased complexity. Unless you can demonstrate an argument is invalid or unsound by virtue of its excess simplicity. But in that case you should focus solely on the one point you are making. And in doing so you’d be doing something useful, and thus applying BT to improve historical reasoning. Most cases won’t suffer that problem. Because most complexities can be dissolved within broader terms (e.g. an h that is inclusive of 100 h‘s; carefully constructed binary definitions of h and ~h; etc.) or ignored because their effect is already smaller than the margin of error (the function of a fortiori reasoning). Indeed, we need to be looking for all the ways of doing this: making these complexities become irrelevant, so we can make useful and clear statements about history with the limited data available, which can be analyzed and vetted and productively debated.

Likewise, if (as Hendrix rightly proposes could happen) you think someone’s Bayesian model is wrong, showing that is indeed what is useful about reducing historical arguments to their Bayesian form. Because the wrong model will then have existed in their argument even if they didn’t articulate their argument in Bayesian form. It will exist even if they have no idea what Bayes’ Theorem is! So avoiding BT does not get you out of the problem of incorrectly modeling a historical question. To the contrary, avoiding BT only makes that mistake invisible. That’s worse. Once we compel historians to build Bayesian models of their arguments, then we can more easily see if they are faulty, and then critique and correct them. Progress in historical knowledge is the only result.

So these are not valid criticisms from Hendrix. These are actually agreements with the very things I already say in the book.

Does Proving History Get Wrong How Probability Works?

I argue in PH (pp. 265-82, although crucially building on pp. 257-65) that one major part of the Bayesian-Frequentist dispute dissolves upon analysis, once you correctly model what Bayesians and Frequentists are actually saying about probability as a matter of epistemology. In short, a Bayesian “degree of belief” is in fact an estimated frequency: the frequency with which claims based on such a quality of evidence will turn out to be true; and that frequency reduces to an estimate of, and is thus derivable from and limited by, the same physical frequencies of entities in the world (actual and hypothetical) that Frequentists insist all probabilities must be built on.

I confess I found little on point in what Hendrix attempts to say about this. He goes weird right away by saying that the demarcation of physical and epistemic probabilities is circular because they both contain the word probability. That makes no sense (“mammalian cats” and “robotic cats” is a valid distinction; it does not become circular because the word “cat” is used in both terms). But more importantly, it seems to be ignorant of the fact that I did not invent this demarcation. It has been a standard and famous one in philosophy for over a century, and is fundamental to the field of epistemology. So I have no idea what he is talking about here. Maybe he needs to read up on the subject in the Stanford Encyclopedia of Philosophy.

Hendrix goes on to essentially restate everything I say about probability in PH, only often in a manner far too advanced for the intended readers of PH. But he continues to make confusing statements of disapproval of things that are actually established facts in the philosophy of probability. The most I can fathom is that he thinks that Chapter 6 is badly written. Which is a complaint I am sympathetic to. I’m not satisfied with it myself and already knew it needs improvement. But his arguments against it all simply restate exactly what Chapter 6 argues, so he seems to have confused himself into thinking Chapter 6 says something different. Which I suppose goes in the evidence box for it being badly written. Although then the evidence that he is not entirely facile with the English language might come to be relevant. In any event, he does not offer any useful way to improve this defect. He does exactly the worst thing instead and makes the discussion far too complicated for a humanities readership. What we need is a better written Chapter 6 that will be easily understood by a humanities readership. I welcome anyone producing such a thing!

But some of Hendrix’s complaints miss the point. For example, he objects to my saying that when we roll a die the probability of it rolling, say, a 1, will either be the actual frequency (rolling the actual die a limited number of times and counting them up) or the hypothetical frequency (what we can predict will happen, from the structure of the die and the laws of physics, if the die were rolled forever and counted up). Why does he object to so obviously correct a statement? Because the die might not be perfect (its unknown imperfections will affect its rolls). But that is already covered by my model: those imperfections are part of the physical model of the die, and thus will be included in the hypothetical extension of its results.

What he seems to mean is that there is a third probability to account for: a hypothetical infinite series of rolls of a die whose precise physical structure is not known to us. That is, in fact, what we mean by an epistemic probability. Which I cover later. He is thus ignoring the fact that I do indeed agree with exactly his point, and add it in later as an extension of the subject. Where I discuss the “actual vs. hypothetical” frequency question, I am explicitly discussing physical probability, not epistemic probability. Again, a distinction that is standard and universal in philosophy, and which again he claims to think is circular (even though countless published philosophers have not).

So Hendrix’s complaint here is baffling to me. I get to explaining why epistemic probability will vary from the physical probability (including a hypothetical physical probability) subsequently in the book (I have a whole section on it, pp. 265-80, exactly following the section he is talking about, pp. 257-65). And my explanation is basically the same as his. So in claiming to critique my book, he actually ends up just repeating what it says.

It gets worse when Hendrix even more bafflingly fails to get the entire point of those two closing sections. Even though I carefully explain that an epistemic probability of any s is the probability that any s would be true given the kind and scale of evidence we have, and that as the evidence increases the epistemic probability converges on the true frequency, he confusingly says, “But what is the true frequency of the 8th digit in pi being a 9? Why should we think there is such a thing? How would we set out to prove it exists? What is the true value of the true frequency?” This is just a really strange thing to say. He is asking about a statement of (I presume epistemic) probability: his belief that it is 80% likely that the 8th digit of pi is a nine.

Okay. Let’s walk him through it. What does he mean by “it is 80% likely that the 8th digit of pi is a nine”? He must mean that given the data available to him, he is fully confident (I suppose to near a 99% certainty) that there is an 80% chance of his being right. To be so uncertain that you know you have only a dismal 80% chance of being right about this, I can only imagine some scenario whereby he doesn’t know how to calculate that result, and thus is reliant on, let’s say, a textbook, and apparently this is a post-apocalyptic world where it’s the only surviving textbook that mentions pi, and the text in the textbook is damaged at that point, and damaged in such a way that there is an 80% chance the smudged or worm-eaten symbol on the page is a 9 and a 20% chance it’s a 6.

In that scenario, his 80% would have to be his estimate of the frequency with which characters damaged in just such a way will originally have been a 9 instead of a 6. But “the true frequency of the 8th digit in pi being a 9” would then be the actual (or hypothetical) frequency with which pre-apocalyptic textbooks read a “9” in that position instead of (the correct) 6. And he is badly mis-estimating that true frequency because of the damage to his evidence. True, there is also a “true frequency” of a character being damaged in such a way originally having been 9 and not something else, and his 80% is an approximation to that, such that he could be wrong even about that, but that is supposed to be accounted for by his confidence level and margins of error. What he is actually trying to get at with the 80% is the frequency with which textbooks actually read as such, and not differently.

Alternatively, maybe he doesn’t care what textbooks said, and wants to know what the actual probability is of the 8th digit of pi being a 9 as a matter of mathematical fact. How then could he think that probability is 80%? I guess, perhaps again we are in a post-apocalyptic world, where no knowledge at all has survived, and he is trying to freshly determine this question with some sort of mathematical device, a device he knows from prior use gets the correct answer 80% of the time (even though I can’t imagine what sort of thing that would be), and this device gives him a result of 9 for the answer. In this case the true frequency is simply 0%, but his mathematical device sucks so badly it has fooled him into thinking it’s 80%. Well, yeah. That can happen. Welcome to the conundrums of epistemic probability.

If that is what he meant, though, then Hendrix has chosen a bad example, because he is ignoring the fact that historical questions are not at all like the question of what the eighth digit of pi is (as I explain in PH, pp. 23-26), whose answer does not have a nonzero probability of being false, unlike all claims about history, which do. The epistemic probability that that digit is 9 is rarely these days going to come out 80%. It’s going to come out near zero. Because we have really damned good evidence for this, and therefore our epistemic probability will converge very closely to the true probability, which is zero. But our epistemic probability will still not be zero. Because there is a nonzero probability everyone in earth history to now has done the math wrong (see not only p. 25 with n. 5 on p. 297, but also my epistemological remarks in general in The God Impossible).

It’s all the more baffling then that Hendrix reduces his complaint to “the notion of ‘true frequency’ in” such cases as “the probability Caesar crossed the Rubicon, or a miracle was made up and attributed to a first-century miracle worker” becomes “very hard to define” whereas “if we accept [that this] probability simply refers to our degree of belief,” then “there is no need for such thought experiments.” Uh, yeah. That’s the entire point of my two sections on this! We never do know the true frequencies. So at no point do I ever say we need them to do history. But we do need to be able to get close to them if we are ever to legitimately say anything is likely or unlikely. Knowledge is impossible, if we can never know when our epistemic probability is probably close to the true probability.

Thus all we can do is do our best to get close to the true frequencies. And what epistemic probability is all about, and what the function of evidence is, is to get as close to that truth as we are capable of, given that evidence. And that is what Hendrix is doing with his degrees of belief, which are his stated measures of how likely he thinks it is that he is right. Which is a statement of how frequently he is sure he will be right, on all matters comparably evidenced. That’s simply the fact of the matter. And it doesn’t seem at any point that he understands this.

In other words, I am explaining in PH why epistemic probability (what he calls “degree of belief”) never exactly equals physical probability (the “true” probability) but why we can sometimes trust that it gets close to it, and when. All knowledge consists of nothing more than this: not knowing the true probability of anything (which no one can ever know; I explain this several times in the book), but instead knowing to a high confidence level that it lies between some x and y (our confidence interval). And we get there by accumulating evidence such that it becomes highly improbable that we are wrong about that (but never impossible—not even in the case of the digits of pi).
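That convergence can be illustrated with a toy calculation (a minimal sketch: the flat prior, the grid approximation, and the counts are all invented for the illustration). As hypothetical observations accumulate, all pointing at the same underlying frequency, the interval we can be 95% confident contains the true frequency keeps narrowing:

```python
# Toy sketch: a 95% credible interval for an unknown frequency narrows
# as evidence accumulates. Flat prior over a discrete grid of candidate
# frequencies; the observation counts are invented for illustration.
import math

def credible_interval(k, n, mass=0.95):
    """Central credible interval for a frequency, given k successes in
    n trials, computed over a discrete grid with a flat prior."""
    grid = [i / 1000 for i in range(1, 1000)]
    # Log-weights (binomial likelihood, constant factor dropped),
    # shifted by the max to avoid floating-point underflow.
    logw = [k * math.log(p) + (n - k) * math.log(1 - p) for p in grid]
    m = max(logw)
    w = [math.exp(x - m) for x in logw]
    total = sum(w)
    cum, lo, hi = 0.0, None, None
    for p, wi in zip(grid, w):
        cum += wi / total
        if lo is None and cum >= (1 - mass) / 2:
            lo = p
        if hi is None and cum >= 1 - (1 - mass) / 2:
            hi = p
    return lo, hi

# Same observed rate (70%), ever more evidence: the interval shrinks.
for k, n in ((7, 10), (70, 100), (700, 1000)):
    lo, hi = credible_interval(k, n)
    print(n, round(lo, 2), round(hi, 2))
```

The true frequency itself is never observed; all the evidence licenses is an ever-tighter confidence that it lies between the two bounds, which is exactly the epistemic situation described above.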

At no point does Hendrix ever appear to understand this. And not understanding it, his objections to it either make no sense, or actually affirm exactly what my book says.

Finally, Hendrix spends a lot of words trying to deny that when you say you are 80% sure of something, you are saying 1 out of 5 times you will be wrong. But that is literally what you are saying. At no point does Hendrix appear to understand this. At all. And none of his attempts to deny it make any mathematical sense. In fact, Hendrix doesn’t even seem to grasp at any point what it is he is denying. This I can only count as an epic fail in the domain of semantics. In any event, he does not confront any of my explanations or demonstrations of the fact. He instead just confuses physical with epistemic probabilities again. So there is nothing further to discuss.
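What that claim means can be sketched with a short calibration simulation; the simulated judgments are invented, and the point is only the frequency reading of "being 80% sure":

```python
# Sketch of the frequency reading of "I am 80% sure": if such judgments
# are well calibrated, then across many of them, about 1 in 5 turn out
# wrong. The simulated "world" here is invented for illustration.
import random

random.seed(42)

TRIALS = 100_000
# Each trial: a claim you were "80% sure" of, which in fact comes out
# true 80% of the time.
correct = sum(random.random() < 0.8 for _ in range(TRIALS))
rate = correct / TRIALS
print(round(rate, 2))  # close to 0.8, i.e. wrong about 1 time in 5
```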

Nor will I bother with his silly attempt to insist we need to account for infinities and irrational fractions in probability theory. Nope. A fortiori reasoning does away with any such need. And his discussion of my libraries example is too unintelligible to engage with. As best I can tell, he seems not to be aware that that is an exercise in a fortiori estimation of the margins of error. He seems to think it’s some sort of attempt to do particle physics with uncertain data, which indeed would warrant his complaints. But since it’s not, it doesn’t.

Conclusion

In the end Hendrix thinks the subjectivity of the inputs will make progress in Jesus studies impossible. I disagree. If the method I propose is followed, all disputes will be analyzable in a productive way. Even disputes about input. And once we bracket away Christian apologists whose opinions are of no merit in this matter owing to their insurmountable bias (a bias that is to them literally existential), secular scholars can then have productive debates that will end on a common range of conclusions that everyone in that group will agree is most likely ballpark correct (a scarce few fringe nuts aside). They just have to actually do this. Right now they are just all publishing disparate armchair opinions based on unanalyzable intuitions whose soundness or even logical structure they have no idea of and thus cannot even in principle validate.

Hendrix also thinks we need hyper-granularity of language and hyper-complex models. This has never been true in history. And yet all history reduces to BT already. So admitting what has always been true is not going to suddenly make all of historical reasoning vastly more complex. To the contrary, it will allow us to explain why broad and simple models have always worked, and how to keep doing that even better than we already have been. And that will be owing to the tools of careful definition and a fortiori reasoning. Most complexities simply don’t matter. They either are too trivial to have any visible impact on our math at the resolution we are actually working at, or they are too irrelevant to prevent their being subsumed and thus dissolved under more broadly defined hypotheses and descriptions of evidence.

Hendrix also thinks that “to convincingly make [a] case [that] Bayes theorem can advance history one needs lots and lots of worked-out examples.” That is simply not true. Indeed, it was already proved untrue before me, by Aviezer Tucker in 2009. As both he and I show, in different ways converging on proving the same fact, all historical reasoning is already being advanced by Bayes’ Theorem, and has been for half a century at least. Historians just didn’t know that. And consequently, they haven’t been able to productively tell when it’s being done well or poorly. Proving History gives us a lot of tools for finally doing that. Thus the argument of PH is that historical methods currently being used are already Bayesian (Chapter 4) and are only valid when they are (Chapter 5). And that we can tell the difference between valid and invalid applications of a method by understanding how it operates on Bayesian logic (Chapters 4 and 5).

Hendrix also thinks historians can’t use Bayes’ Theorem unless they can do transfinite mathematics or solve irrational fractions, but that’s not only false, it’s silly. It requires no further comment. Likewise his unrealistic requirement that the book should be twice its current length by thoroughly explaining fundamental phrases and terms that a reader who doesn’t already know them can already ascertain through a judicious use of Google.

Finally, when Hendrix says “the proof [in Chapter 4 that] historical methods reduce to the application of Bayes theorem is either false or not demonstrating anything which one would not already accept as true if a Bayesian view of probabilities is accepted as true” he isn’t saying anything useful about the book. What he means by “is either false” is simply that the book does not address how to answer evaluative claims about history, but since the book isn’t about how to make value decisions but how to determine what we should think the probability is of “what happened and why” (Chapters 2 and 3), he isn’t saying anything relevant to the book’s function. Meanwhile his disjunctive alternative, that Proving History does not demonstrate “anything which one would not already accept as true if a Bayesian view of probabilities is accepted as true,” is wholly circular: that historians who already accept that their conclusions should follow a Bayesian model do not need it proved to them. That’s true…as a conditional statement floating around in Plato’s realm of ideas. But it’s irrelevant. Because historians have yet to be convinced that their conclusions should follow a Bayesian model. So they do need it to be proved to them. That’s why I wrote the book!

So in all, Hendrix doesn’t have any relevant criticisms of Proving History. By not understanding the points he aims to rebut, his rebuttals either don’t respond to anything the book actually argues, or end up verifying as correct what the book actually argues.

§
