This week I am doing a series on early reviews of my book On the Historicity of Jesus. If you know of reviews I don’t cover by the end of the first week of July, post them in comments (though please also remark on your own estimation of their merits).
-:-
One of those early reviews posted is by Loren Rosson III (at The Busybody), a notable librarian who is well informed and deeply involved in the biblical studies community. Interestingly, he compares my book to another that I have on my shelf (literally right behind me as I type) but have not yet read, one arguing that Mohammed was also mythical: Robert Spencer’s Did Muhammad Exist? (which I’ve been told by all accounts is the best book on the subject; but don’t ask me my opinion on that topic, as I have not examined it).
Rosson’s review is thoughtful and well stated. He is fair and accurate even when critical. That’s a good sign that one is not engaging in motivated reasoning or wearing emotional or ideological blinders. His review is also positive overall, and focuses on his disagreements only because they are more interesting to him (as one should expect).
I should note, however, that though Rosson makes a good point about non-specialists being able to have good arguments (I’ve said as much of Doherty, for example), I am not comfortable trusting Spencer’s master’s degree in religious studies when one should need a Ph.D. in medieval Arabic studies to confidently make the kinds of claims Rosson describes Spencer making. By contrast, I am writing about a historical claim in Greco-Roman antiquity, and I have a Ph.D. in Greco-Roman history. Consequently, I would want to hear a Ph.D. in medieval Arabic studies comment on Spencer’s claims before being confident in them. Whereas a Ph.D. in New Testament Studies is arguably less qualified than I am to discuss the historicity of a person in an ancient religion, since their primary field is not history and historical methods, its focus is obsessively scripture-based and rooted in a long history of Christian faith assumptions, and it attends far less to the background evidence pervading the period. The result is often disastrous: note the evidence even a superb scholar like Mark Goodacre thought existed (yet doesn’t), because the “consensus” in his field actually derives from unexamined Christian faith assumptions. If that can happen to Goodacre, it’s even more likely to happen to everyone else in his field. For more of my take on this issue (and the kinds of eye-rolling arguments Rosson is rightly criticizing) see summaries here and here and here, and more detail here.
I should also note that Rosson is (only slightly) incorrect in saying I don’t find any evidence for mythicism in the Gospels, since technically I did in the Rank-Raglan data, as I point out at the bottom of OHJ, p. 395. Due to the requirements of Bayesian logic, that data had to be removed from consideration when evaluating the rest of the Gospels as evidence, and it is then that I find nothing further that tips the scales either way. This doesn’t affect anything Rosson says, but I want to make sure people are clear on this point. Indeed, more elements of the Gospels could even be pulled into the Rank-Raglan scheme (I cover some examples in OHJ, p. 230), but the effect would not be statistically significant. That is, it would not significantly lower the probability that a historical person would have those details added on. Once a historical person is already scoring near the top of the Rank-Raglan scale (and I allow that between 1 and 4 who do score that high were indeed historical: see OHJ, ch. 6), the addition of yet more mythical details does not much change things (basically, we are talking about the law of diminishing returns, applied to probability).
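To illustrate that diminishing-returns point with a toy calculation (the numbers below are purely hypothetical, not the figures argued in OHJ), consider what happens to the odds when a few more near-neutral mythotypical details are folded into an already-set prior:

```python
# A toy sketch (hypothetical numbers, not OHJ's actual figures) of diminishing
# returns: once the prior odds are fixed by the Rank-Raglan reference class,
# extra mythotypical details with likelihood ratios near 1 barely move them.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each P(detail | myth) / P(detail | history)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 2.0                     # hypothetical: 2-to-1 odds favoring myth
extra_details = [1.1, 1.05, 1.2]     # hypothetical, nearly neutral ratios

posterior_odds = update_odds(prior_odds, extra_details)
print(prior_odds / (1 + prior_odds))          # ~0.67 probability before
print(posterior_odds / (1 + posterior_odds))  # ~0.73 after: not a significant shift
```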
But on to Rosson’s few actual criticisms of my arguments:
- First, when Rosson says, “There is nothing improbable about an apostle who never knew Jesus, and was at loggerheads with those who did, and who wanted to avoid any reference to his earthly business.”
I actually already refute that argument in OHJ, pp. 525-28. It is, of course, already an unproven assumption, and an improbable one at that, that no references to Jesus’s earthly business were ever in any way supportive of anything Paul taught, or related to any disputes Paul had to answer or that he or his congregations were ever curious about (OHJ, pp. 510-28, where I document that pretty much every expert who examines the question in detail admits to scratching their head over this: it is contrary to every expectation). But even if we granted that improbable assumption (which, even as a 50/50 assumption, cuts the prior probability of historicity in half), it still makes no sense that Paul would never have to address the most powerful and obvious argument against the very things Rosson speaks of. Paul couldn’t win an argument by pretending it didn’t exist.
- Second, when Rosson says, “When you weigh all of Paul’s Jesus-death metaphors, the scales tip in favor of minimal historicity.”
Rosson does not put numbers to this. My method calls for critics to do that (OHJ, pp. 601-06, 616-18). I can’t properly evaluate the merits of this statement, or even know what he means by it, without his stating which number it changes, how much, and why. I am left with having to guess. From what he says, he seems to mean that it would be less probable for a celestial self-sacrifice to be described as a martyrdom than for an earthly one. Given that the theory is that the first Christians genuinely believed the celestial self-sacrifice happened, and was indeed voluntary (as Paul’s use of the “Philippians Hymn” makes clear, in Phil. 2:5-10), I do not see how one can argue this. Why would they not see that as a martyrdom as much as any other martyrdom? Rosson seems to be projecting his own (?) anti-supernatural bias onto persons like Paul. But Paul fully embraced the supernatural realm as a real place. He even claims to have visited it (2 Cor. 12). It doesn’t matter at what elevation you are standing when you are martyred. It’s not as if being martyred on Mount Olympus suddenly no longer counts as a martyrdom, because it’s “too high up.” It’s possible I am misreading the logic Rosson intends. But that’s why the method needs to be followed.
- Third, when Rosson says, “Paul would have little reason to bring up a lesser non-apostolic James in the context Gal 1-2, as such a figure would be beneath mentioning.”
I actually already refute that argument in OHJ, pp. 588-91. I need say no more. Rosson also already grants that in my a fortiori estimates I count this as evidence for historicity just as he does. So I already accounted for views such as his, even though I find they don’t hold water.
- Fourth, although this is minor, when Rosson says, “a genuine case from the gospels [of an embarrassing detail someone couldn’t omit] would be Jesus’ mistaken prophecy about the apocalypse.”
On the methodological point, Rosson mischaracterizes my argument (this time in Proving History) as being that authors could always omit whatever they wanted, when in fact I actually agreed that was sometimes not the case, and rather this is something one needs to take more seriously than those using the criterion of embarrassment have. So I fully concurred with what Rosson says here, even explicitly (e.g. PH, pp. 136, 159, 166-67). I suspect Rosson is here succumbing to a common mistake in understanding probability and the language of probability: when someone says x is unlikely, it is not an apt rebuttal to say that x sometimes happens. “Unlikely” already entails x sometimes happens, so the person who said x is unlikely is already conceding the point. Once you make that mistake, it is then easy to forget about or overlook even explicit statements of that point (such as the three I just cited from PH). But hopefully Rosson now stands corrected.
On the factual point, I fully agree with Rosson’s example, and even (presciently!) explained why he is right in PH, pp. 148-49. Jesus’s failed prediction of the end times was indeed embarrassing, and not merely so, but a problem Mark had to address. The problem is that we know where Paul got Jesus’s predictions of the imminent end: revelation. That does not support historicity–any more than the Book of Revelation does. That was probably a fabrication, not even an honest vision. But even as an honest vision, it is not a historical Jesus who is saying anything in it. So, too, for Paul’s source of Jesus’s teachings about an imminent end. So even here, one of the rare cases where it works at all, the Argument from Embarrassment does not get you to a historical Jesus.
-:-
In the end, Rosson admits that, even with his objections voiced (which I enumerated above), the evidence that remains is “not so as to leave me supremely confident” in the historicity of Jesus. In this respect his position now resembles that of Philip Davies: though still clinging to historicity, nevertheless granting that it’s respectably possible Jesus didn’t exist. And that tenuous thread hangs on the four objections above, whose flaws I noted. But I am impressed with how well considered Rosson’s pro-con analysis of OHJ is, and would love to see more reviews like it. I am also impressed that, even despite his objections, he has added my book to his list of recommended readings for the study of Jesus. We are moving the debate forward. That’s a good thing.
-:-
For a complete list of my responses to critiques of OHJ, see the last section of my List of Responses to Defenders of the Historicity of Jesus.
Man… you guys are so hung up on this “you need a PhD in this and that”; I disagree with all that.
Frank Lloyd Wright only had roughly two years (arguably) of college, and it’s believed he didn’t even finish high school, yet he is said to be the greatest American architect and designer ever.
What happened to self-study? If the colleges keep a monopoly, and ignore the alternative staring them right in the face, then we wind up with the terrible college tuition problem we have now, and a lock on knowledge and limited access to upward mobility, controlled by a few academics who always just spout the established line. Hmmm, does any of this sound like… NOW?
This is not mathematics, nor is it truly hard science, not even craftsmanship like carpentry, pipe-fitting, or plumbing, where if one is a tad off in their measurements, then nothing will fit; physically won’t fit.
Someone who studies long enough can be self-taught, and have something more valuable to say than any establishment academic hack, or the politically correct inquisition crowd currently in control of the helm of modern American academia. I used to not believe that, but now I know it.
Nevertheless, I may buy your book, and will register for your historical method lecture if it is not too late, because I read and follow everything relating to this period in history, and it would be immensely beneficial to learn from one of your know-how and stature, even though I disagree with your overall conclusions.
Hi Richard,
Thanks for the feedback. With regards to the criterion of embarrassment, I won’t quibble anymore since I’m more in agreement with you anyway, and yes, you did add those qualifiers in Proving History.
What I’m saying is that I don’t think the abundance of Paul’s noble-death metaphors is the evidence we would expect to see on the mythicist assumption. On that assumption, I’d expect either only atonement and passover metaphors, or at least that those would dominate. Unless there are precedents for celestial beings dying the noble death in the supernatural realm — by which I mean the deity dies “for others” expressly so that those others may copy the martyr’s example (as in Rom 6; as with the Maccabean Jewish martyrs, pagan philosophers, etc.). Maybe there are, and I’m unaware of them. You’re right, I haven’t put numbers to this yet; for now I’m just noting how the estimates might be impacted.
Also BTW, I have no “anti-supernatural bias” in assessing Paul. I recognize his high Christology, and agree with you entirely about the Philippians 2 hymn. (Here I respectfully disagree with Crook.) But that’s not the point.
Regarding Spencer, he engaged Antonio Jerez in comments under my review, which is worth checking out. Your reservations are reasonable, and trust me, at first I was wary too. There are a couple of obstacles keeping me from his conclusion (one of which is touched on by Antonio), but on the whole he’s made a decent case.
Yes, and I’ve already inflicted your book on a colleague. We’ll see how many more drink the Kool-Aid. 😉
Loren
It’s still unclear how this kind of thinking gets from A to B.
On Paul’s use of noble death metaphors, and their actual connection to atonement and passover theology, see the works I cited, and the discussion, at OHJ, pp. 76 and 209-14 (esp. pp. 212-14).
In particular:
Jarvis Williams, Maccabean Martyr Traditions in Paul’s Theology of Atonement: Did Martyr Theology Shape Paul’s Conception of Jesus’s Death? (2010), pp. 53-63, 72-84.
Jon Levenson, The Death and Resurrection of the Beloved Son: The Transformation of Child Sacrifice in Judaism and Christianity, esp. pp. 176-77 and 210-19.
It seems strange to suggest the voluntary submission and sacrifice of a celestial being (explicitly in Phil. 2) was not imagined or used as a model to follow–Paul frequently uses it just so (Christians are to emulate Christ in everything). Yet as Phil. 2 says, he believed this was a celestial being (who chose to descend and assume flesh and refuse honors and be killed and then rewarded). I don’t see why we need precedents to accept what Paul already plainly says.
This is like saying there can’t be ice in heaven because we have no specific prior mention of ice in heaven, although we have abundant references to everything on earth having versions in heaven and many objects given as examples (like water, scrolls, roads, plants). The latter should be sufficient to accept that people may well have imagined ice in heaven, too, so that if Paul said there is ice in heaven, we should not require precedents to believe he meant there was ice in heaven. Analogously, when Paul says a celestial being came down to sacrifice himself for the greater good and we should emulate him in that, we should not require precedents to believe he meant a celestial being did that, and that, therefore, we should indeed emulate a celestial being who does that.
This is true regardless of whether Paul believed that this celestial being went all the way down to earth (and thus “became” an actual Jesus of Galilee) to die or only to the sphere of corruption to die (a la the earliest reconstructed Ascension of Isaiah). It’s the same either way for him: we are to emulate a celestial being’s self-sacrifice for the greater good. And if so easily (i.e., without any defense or apology) for him, so for Peter or any other original Christian.
I have read the Spencer book. It is good. But may I recommend the book by Nevo and Koren ‘Crossroads to Islam’? That is a masterpiece.
I must add that Spencer is basically summarizing research done by others. As far as I can remember (I read the book more than a year ago), he makes no personal contributions. That is not the case with the Nevo book, which is in my opinion truly devastating, particularly if we read it alongside the Luxenberg interpretation of the Koran.
For the benefit of other readers here, José is referring to Nevo and Koren’s Crossroads to Islam, which does not argue Mohammed didn’t exist, but does argue for many other conclusions that Spencer leans on.
Nevo is a published archaeologist who specializes in early Arabic, but not very advanced in his training (he has no known graduate degrees), and he died before Spencer published so we can’t get his take on that. Although he may have been sympathetic (see his Wikipedia entry). Koren is only a library science expert, with some specialization in the languages.
Here is a peer-reviewed literature review on the subject by a bona fide expert, Jeremy Johns (although Spencer is not discussed).
Here is an academically published review of their book by a Classicist who has problems with their argument and qualifications.
Here is a Muslim apologist (of dubious qualifications) citing (presumably more qualified) critics to attack the Nevo-Koren work.
I have not vetted any of this. I just provide it for those who want to explore further.
Robert Spencer is a Christian apologist who presents a distorted and biased view of Islam. His works show hardly any trace of academic honesty, and instead rest on a wholesale portrayal of Islam and Muslims as absolute evil. As regards his academic qualifications, he has an MA in Catholic Studies, and probably doesn’t know Arabic (as derived from some analysis of his claims here: http://www.loonwatch.com/2012/01/more-proof-why-you-really-shouldnt-trust-robert-spencers-scholarship/), although the site is a little biased.
Tom Holland makes a pretty strong case for the revisionist approach to Islamic origins in his book ‘In the Shadow of the Sword’ (he doesn’t know Arabic either, but does a pretty good job of surveying current scholarship; his conclusions are not widely accepted in Islamic Studies, although their proponents are highly respected there), as does Shoemaker in ‘The Death of a Prophet’, but neither of them claims that Muhammad didn’t exist. Robert Hoyland, in his masterpiece ‘Seeing Islam as Others Saw It’, lists all the references to Islam and Muhammad in other cultures, and there is even a reference to him dated to 634 AD.
Basically, you probably know all of this.. what irked me was that you regarded the work of such a man as POSSIBLY the best work on the subject out there. Probably you haven’t heard of him, or read his works before, but please do not mention his works as respectable. If ever there was a person to whom Islamophobia can be justifiably applied, then this would be the man.
I understand that you haven’t actually said anything in his favour, and so, this entire diatribe is not entirely justified.. (and not entirely logical either).. Just had to write this.. big fan of your work though 🙂
These cautions are worth stating. I have not investigated any of them. Just to be clear, I only said everyone has told me Spencer’s is the best book arguing Mohammed didn’t exist. Even you did not propose a better one. And I said I could not vouch for what I was told anyway. Or for whether Spencer is at all successful (sometimes the best argument for a thing also sucks–especially when that thing is false).
BTW, Spencer responded to Rosson’s review. His comment is here. It bears on some of the points you are making. Again, I have not vetted any of this. He continues to engage in comments after that.
Hi Richard,
A couple of minor technical points, and then some personal thoughts about letters:
(1) I was confused by this remark,
Either it is evidence, or it isn’t. There may be valid reasons why you didn’t include it in your final accounting, but if it is evidence, then it is always valid to include it in the Bayesian logic. How else could you have known that it is evidence?
(2) You are, of course, correct when you say
but your next comment is problematic:
Technically, this is the mind projection fallacy. Probabilities are not frequencies. It is unlikely that the big bang happened 300 years ago.
(3) At the conferral ceremony for my Ph.D., my supervisor surprised me, pleasantly, during his traditional speech. He started in the normal way: “At this point it is traditional that I be the first to address you by the title ‘Doctor’,” then continued, “but I’m not going to do that. Because I know the title means nothing to you.”
He was right. It is the process of acquiring knowledge, skills, and expertise that matters. (And he knew me well enough to know he was paying me a compliment.)
When you refer to the expertise needed to address some issue, I think it is incorrect to assert that someone must have a PhD in (whatever). I think it is absolutely right that highly specific training and knowledge are sometimes required, but I know many without higher degrees who possess such expertise, and several with PhDs who don’t know their figurative arse from their figurative elbow. Almost certainly, most of the experts you are thinking of have PhDs, but it feels wrong to automatically disqualify those who don’t. I’m sure you don’t mean to imply such a black and white requirement, but your wording appeared to imply one.
It all gets counted. I think you are confused because this is a technical decision of where to put the evidence in the equation, in b or in e. Apart from mere utility, it actually doesn’t matter which (mathematically it all ends up the same), as long as b + e = everything there is (and you do all the math correctly). See “demarcation” in the index of both Proving History and OHJ.
In fact, you can run an equation by starting with nothing in b, adding one item in e, and getting a result (a posterior probability) which you then use as your prior probability when adding the next item in e, and so on until e is exhausted. Thus, what started in e, ends up in b. See “iteration, method of” in Proving History.
What I did in OHJ (and in ch. 6 I show how it works in mathematical detail, too) is take some usable data out of e and put it into b to get an iterated prior. I actually show how we get there from starting without that data in b, and putting it in e and treating it alone, to get a posterior, which then is the prior when checking the remaining evidence in e. The advantage of doing that is that it singles out exactly what I am starting with, and makes starkly clear how useless the Gospels are otherwise. It’s handy to keep the two features distinct, for easy discussion and consideration.
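To make the iteration method concrete, here is a minimal sketch with made-up likelihoods (nothing here reproduces OHJ’s actual tables): each item of evidence is run through Bayes’ Theorem on its own, and the posterior it produces serves as the prior for the next item, so what began in e ends up functioning as b.

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: returns P(h | e)."""
    top = prior * p_e_given_h
    return top / (top + (1 - prior) * p_e_given_not_h)

# Hypothetical (P(e|h), P(e|~h)) pairs for three items of evidence.
evidence = [(0.5, 0.9), (0.8, 0.6), (0.3, 0.4)]

p = 0.5                            # start with an uninformative prior
for p_e_h, p_e_nh in evidence:
    p = bayes(p, p_e_h, p_e_nh)    # this posterior becomes the next prior
    print(round(p, 3))
```

Because each update just multiplies the odds, it makes no mathematical difference whether an item is processed first (and so ends up functioning as b) or last (remaining in e); only the bookkeeping changes.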
Yes, probabilities are frequencies (they are meaningless otherwise). See Proving History, pp. 257-80 (where I even discuss examples exactly like the Big Bang, indeed even examples of events that have never happened and probably never will, as nevertheless still having frequencies).
The Big Bang actually has a calculable frequency from Quantum Mechanics: roughly once every 10 to the power of (10 to the power of 56) years. Like all frequencies, this does not mean it only ever happens every exactly that many years, but that if iterated to infinity that would be the average distance in time between occurrences. That iterating is hypothetical, but still valid (I show why in PH).
Likewise, epistemic probabilities, which is what I think you are stating here, are also frequencies, but of different things. “It is unlikely that the big bang happened 300 years ago” is not a reference to the frequency of Big Bangs but to the frequency with which evidence of the kind we have would indicate the conclusion, i.e., how often evidence of the kind we have would exist (call that e) and it still be the case that the Big Bang happened 300 years ago (call that h). Bayes’ Theorem gets you to an answer. But that is still just a frequency: the frequency with which a conclusion reached that way will be wrong. Yes, that frequency will be amazingly low, but that’s not the same thing as impossible (see my example of mathematical proofs being likewise uncertain in PH, pp. 25 and 297 n. 5).
And as it happens, the two probabilities entail each other (epistemic probabilities are just approximations of physical probabilities–in fact, they are physical probabilities adjusted for error, and error also has a frequency), which is why we can use one to get the other. See the last section of ch. 6 in PH (included in the cited pages above).
Yes, I concur. I would equally trust someone who had the same amount of training and expertise in medieval Arabic languages and history as a Ph.D. It’s just that I have almost no way of knowing someone has the same amount of training and expertise in medieval Arabic languages and history as a Ph.D., if they don’t have a Ph.D. That’s what Ph.D.’s are for: to independently certify that fact. Analogously, someone can be as good as a surgeon without a medical degree. But if someone doesn’t have an M.D., how can you know they are as good as one? It’s not impossible, but it’s often exceedingly hard, to know that. Which is precisely why the M.D. exists as a thing.
Regarding my point (1), I believe I misunderstood. I think the word I paid insufficient attention to was “rest,” as in “rest of the Gospels.” My apologies.
Regarding my point (2), it seems to me you are mistaken. Your distinction between epistemic and physical probabilities is fallacious. Probabilities are not frequencies, but expected frequencies.
A probability might be numerically equal to a physical frequency, but there is no guarantee of this, due to the theory-laden nature of inference. I cover this in a couple of short glossary articles, theory ladenness and frequency interpretation. I cover theory ladenness in greater depth in a recent post, The Calibration Problem: Why Science Is Not Deductive.
I also discuss the same matter in Extreme values: P = 1 and P = 0, where I dispel a related myth about probabilities.
All probabilities are epistemic, that is just what they are.
The phrase “indicate the conclusion” is a bit vague – I’m not sure what you mean. But you cannot say that of all propositions that have been assigned a probability of 0.9, 90% of them are true. Every single one of them could be false. You cannot divorce the probability assignment from the probability model. If I change my model, the probability will change as well. If two people apply different models to the same proposition with the same data, arriving at different probability assignments, which of them is the correct frequency? The best you can say is that you expect 90% of those propositions to be true, which is fine, and perfectly rational, but is not the same as legitimately identifying a probability with a frequency.
Irrespective of all this, “unlikely” cannot entail “sometimes happens,” as you claimed. The phrase “It is unlikely that the big bang happened 300 years ago” is neither of the things you mention, but is very trivially a statement about my degree of belief that this happened. It is a true statement – I find it unlikely, though I am not willing to assign it a probability of zero. This does not entail that it happened. Frankly, it’s difficult to see how you could think otherwise. (We can make the same argument about possible non-unique events: it is unlikely that pixies dance around my kitchen on summer nights.)
If you had read what I pointed you to, you would know that was precisely my point. (Even more correctly, they are, as I said, “hypothetical” frequencies, since they are based on modeling infinite runs.)
That is basically the same thing I said: epistemic probabilities are physical probabilities adjusted for error. You say “theory-laden nature of inference”; I condense that into just “error” (since theory ladenness is not the only source of error in attempting to estimate the actual probabilities of things). Tomayto, tomahto.
Uh, no.
See the pages in PH I directed you to. Epistemic probability normally means the probability that you are right or wrong. Physical probability is the probability you are right or wrong about. They measure different things. You can also have an epistemic probability about an event, which is an attempt at estimating the physical (i.e. true) probability of an event, but as such will rarely be the same probability (because you are not omniscient).
That’s why there is a true frequency of red marbles in a jar of red and white marbles (whether we know what that frequency is or not)–which entails the physical probability of drawing a red–and at the same time there is an epistemic probability that you are right about what that frequency of red marbles is–which is not the same as the physical probability of drawing a red (because it is measuring not the frequency of marbles, but the frequency of being wrong in your claim given the kind of evidence you have about the jar’s contents).
You can also talk about a different kind of epistemic probability, that you are right about a single event, that the next marble drawn will be red–and that probability will still deviate from the true probability the more you are wrong, and that will be a function of how little you know about the contents of the jar at the time of your prediction. In that case you are at least attempting to estimate the physical probability, and when you have total knowledge of the jar, your epistemic probability equals the physical probability, but otherwise it does not (except by accident, but by most preferred definitions accidental knowledge is not knowledge).
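A small simulation may help fix the distinction (the jar composition and sample size here are my own, purely illustrative): the jar has one true, physical frequency of red; an estimate of it from a limited peek is a different thing, and the rate at which estimates of that kind land near the truth is itself a frequency, just of a different event.

```python
import random

random.seed(0)
TRUE_RED = 0.30          # the jar's actual (physical) frequency of red marbles
trials = 2000
hits = 0

for _ in range(trials):
    sample = [random.random() < TRUE_RED for _ in range(50)]  # peek at 50 draws
    estimate = sum(sample) / 50                               # epistemic estimate
    if abs(estimate - TRUE_RED) <= 0.10:                      # claim: truth within 0.10
        hits += 1

# The physical frequency is 0.30; this prints the (different) frequency with
# which claims of that kind about the jar turn out to be right.
print(hits / trials)
```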
Certainly–of all propositions. But it is logically impossible for all propositions to be false. So this is a moot point.
You can have 100 propositions, and a 90% chance each is wrong, and still have all 100 propositions be false. Because there are infinitely many propositions, not just 100 of them.
If you have done your math right and collected your data right, then if you have 100 propositions at a 90% chance each is wrong, the probability all are wrong is vanishingly small (though not zero). This is what the science of statistics is about. How likely is it, given the kind of data you have acquired, that the frequency the acquired data shows you is not the true frequency of the thing sampled? We can determine that probability precisely, as long as the data have been collected in a proper way (e.g. random sampling; although even non-random sampling can be mathematically adjusted to account for the bias).
This holds true all the way down the epistemic chain (e.g. our acts of perception are just another occasion of data sampling, etc.).
We thus get a confidence level and a confidence interval. I discuss both in PH. The confidence interval is what we believe the probability of something is; the confidence level is how probable it is that we are right about that. Both are frequencies, just frequencies of different things.
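To put rough numbers on the last two points (assuming independence, and with an illustrative sample of my own): the chance that every one of 100 such propositions is wrong, and a standard confidence interval from a random sample, are each just computed frequencies.

```python
import math

# If each of 100 independent propositions has a 90% chance of being wrong,
# the chance that all 100 are wrong is still vanishingly small (not zero).
print(0.9 ** 100)   # about 2.7e-05

# A 95% confidence interval for the red-marble frequency after drawing 50
# marbles and seeing 18 red (normal approximation). The interval is where we
# think the true frequency lies; the 95% is the frequency with which intervals
# built this way capture the truth.
reds, n = 18, 50
p_hat = reds / n
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half_width, p_hat + half_width)   # roughly 0.23 to 0.49
```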
That can’t be answered without more information about what differs between the observers. I discuss the causes of disagreement in PH. I have several sections on it. It’s even in the index: “disagreement.”
Yes, it does. In fact, it logically must. If it does not entail “sometimes happens,” then it must be compatible with “never happens,” but if something never happens, it is not unlikely–it is impossible. “Unlikely” does not include as a meaning “has a probability of zero.” That’s why we use the word “unlikely,” specifically so as to make clear that we don’t mean “impossible” or “never” and so on.
Which is a frequency: the frequency at which you believe you can be wrong about such a claim, given the kind of evidence you have. Different evidence would entail a different rate of error and thus you would state a different degree of belief. I extensively demonstrate in the last section of PH that this is the case. Degrees of belief are simply synonymous with frequencies of error. If they were not, they would be meaningless statements.
Unless you are saying it is impossible that you are “sometimes wrong,” you are not even addressing what I said.
Obviously when you say “I find x unlikely, though I am not willing to assign it a probability of zero” this entails that sometimes you will be wrong about x (“I am not willing to assign it a probability of zero”). In other words, you are admitting “sometimes” you are wrong. Which is exactly the same thing I am saying: “unlikely that x” entails “sometimes x.” It does not matter that “sometimes x” means (in this case) “only once in a million years would I be wrong about something like that”; “once in a million years” is still sometimes.
More to the point, if you assigned a degree of belief to x in this case, and assigned the same degree of belief to y, you would laugh if I tried to argue you must be wrong to say x is true, because I can prove y is false. Because that you will be wrong about y just as often as you will be wrong about x does not mean that if y is false, x is false.
BTW, in your example, the frequency of being wrong should be so low that it would be startling if I proved y false, so much so that you should doubt your low assignment to x being false. But in the case I was discussing, we were not talking about such low probabilities, in fact I actually discuss actual examples of x being false and thus plainly stated in PH that sometimes x is false–and not once in a million years, but far more often than that. So the analogy breaks down at that point. But what I said remains true: “x is unlikely” entails “sometimes x“; that does not mean “often x.” And you seem to have confused the one for the other.
Richard, your reply was quite long, so I’m afraid mine will be also. I’ll defend my points about the nature of probability, as I feel these are crucial to a proper understanding of epistemology.
But your response indicates that you continue to fail to appreciate the difference between frequencies and expected frequencies. If you had read, and attempted to understand, the material I pointed you to, you would most probably get this.
A probability assignment can only be an error if the arithmetic has been done incorrectly. A probability is the degree of belief that a hypothetical rational agent would possess if it was certain of the truth of the background information, including the probability model.
Margins of error account for the “uncertainty” here. So I’m not sure what your point is now. My point is that “the degree of belief” is a frequency, even in your own words, an expected frequency: the expected frequency with which you will be wrong (given the kind of evidence you are basing your judgment on).
I don’t know what you are going on about. But you seem to have lost track of the point. I simply said all probabilities are frequencies. You haven’t contradicted that. Your “degrees of belief” are still just frequencies: the frequency at which you estimate you would be wrong when standing on whichever kind of evidence gives you that amount of certainty.
And since any frequency of being wrong that is not zero logically entails that sometimes you will be wrong, and “unlikely” is not a synonym of “impossible” or “never”, it follows that “I am unlikely to be wrong” entails “sometimes I am wrong.” Always.
There is no other point at issue here.
On the Big Bang claim example, you seem to be thinking “I will be wrong about x only once in a gazillion years” does not mean the same thing as “sometimes I will be wrong about x.” Um, sorry, but yes, it does. They mean the same thing. The frequency with which you are sometimes wrong is irrelevant to the fact that sometimes you are wrong. Even if that’s rarely. And this may be one of those times. And you have admitted exactly that.
So what is your point?
This is not a frequency, this is an expected frequency, contingent upon some probability model. Until you get this difference, you do not understand probability. I might legitimately expect to be wrong 10% of the time, but actually be wrong 90% of the time.
There is no guarantee otherwise, even if I magically conjure up some infinite set of probability assignments. The devil might be conspiring to keep some crucial detail forever hidden from me, which would forever bias my modeling efforts. Or consider Bostrom’s simulation hypothesis. There are many ways that the laws of physics could be totally different to what we believe, without us ever suspecting it.
Here, you make it extremely clear why the difference between frequency and expected frequency is so important. The crux of the issue is: we don’t know the actual frequency of error – we can make no factual claim about it. We can only assign a probability distribution to it. Your above statement is correct, but you have bait-and-switched yourself – the word “unlikely” doesn’t guarantee anything about this frequency.
I urge you to consider this a bit more, before making further pronouncements. Since I am not omniscient, I cannot know for certain that all the things I assert as true are in fact true. This does not entail that I am ever wrong about anything that I assert to be true. If I was a random proposition asserter, by extraordinary luck, all my assertions might be true. Even if I asserted my expected error frequency there would still be no guarantee that I asserted something false – since this is merely a statement about my state of mind! But if I assert my actual error frequency to be any non-zero x, then I’m guaranteed to be wrong about something.
Even if my lack of complete confidence did somehow entail that I am sometimes wrong, it certainly would not entail anything about the truth value of any particular proposition that I asserted, which is what the argument was originally about.
Very simply:
set x = “pixies dance around my kitchen”
set y = “x has non-zero probability”
set z = “x is sometimes true”
y != z
Thus, your original statement,
is false.
An expected frequency is a frequency. Or do you not know how adjectives or set theory work?
Therefore there is an expected frequency with which things you assert will be false.
QED.
That’s my point.
I don’t know what your point is. But you haven’t been arguing against my point for some time now.
Richard,
You aren’t reading my comments. You might be looking at the words, but that is all.
No. A frequency is the relative number of occasions on which a thing actually happens. An expected frequency is somebody’s best guess for what a frequency is.
Expected rain is not the same thing as rain.
Yes. But this guarantees nothing about the actual frequency being non-zero. This is sufficient to prove my point.
Do you deny that statements y and z at the end of my previous comment mean different things?
So now you are repeating what I said? You just here made a distinction between a physical frequency and a guess at a physical frequency. Both are a frequency. Just one of them is adjusted for error.
It doesn’t matter if you “guess” the frequency is x. That still means you are saying the frequency is x. Which entails sometimes ~x. QED.
Unless you are asserting the frequency is zero.
Which is not at issue here (neither I nor you have discussed those cases).
One more time, there are a few subtleties you seem not to grasp. I appreciate your patience.
(1) A thing is not the same as a person’s concept of the thing. A statement of probability guarantees nothing about any physical frequency. I believe you get this, though your wording isn’t clear enough to demonstrate it, and in fact indicates the opposite.
(2) An estimate of a frequency is not the same as the frequency with which I expect such an estimate to be true. My best evidence might indicate that a coin is heavily biased, though I may not be very confident of this. (A coin might be randomly selected from a box containing 51 coins that always show heads, and 49 coins that show heads and tails with equal frequency.)
(3) A statement like “x is unlikely” serves to summarize the properties of some probability distribution. If x is a statement about a unique event, it says: “The probability mass associated with the frequency that I expect to be wrong about problems such as that relating to x, if I assert that x is false, is somehow predominantly concentrated in the region of low frequencies.” It does not say that I am confident that x occurs with non-zero frequency – as illustrated by my earlier proposition about pixies. Also, and very importantly, it does not say that I am confident that the error frequency associated with statements such as “x is false” is non-zero. It is not a definitive statement about that frequency, but a summary of the main properties of the probability distribution that I associate with this error frequency.
Taking all of this into account,
(i) the statement that event x occurs with low frequency does not entail that x sometimes happens (contrary to what was suggested by your wording)
(ii) the statement that event x occurs with low frequency does not imply that I believe that x sometimes happens (contrary to what you perhaps meant). If my probability distribution over f overlaps zero, then I still believe f = 0 is possible.
That’s irrelevant here. If you think a frequency is x, you are not saying it is x in concept, you are saying it is x in fact but that you might be wrong. So the reality-concept distinction is not at issue here. Saying a frequency is x is still making a claim about an actual frequency. Thus claiming the frequency of y is x, still entails claiming that maybe sometimes ~y occurs, and indeed by definition that it occurs at a frequency of 1 – x with the exact same confidence that you have that y occurs at frequency x, as the one statement entails the other (unless you are saying x is zero, and not only zero, but zero with no margin for error, which neither you nor I are talking about).
Moreover, there will also be a frequency with which you are wrong when you say the frequency is x. That is a different frequency than x. It is measuring a different thing–not the frequency of the thing you are talking about, but the frequency of your being wrong about things like that.
But that is still a frequency. And it is an actual frequency. It is not a mere concept of a frequency, but a guess at an actual one.
And if you keep wide enough margins of error, your guess (that the frequency falls “somewhere” within your declared margins) will be correct to a very high frequency. Which is the object of all human knowledge.
I don’t see why you are having trouble understanding this. But maybe you are misunderstanding that this is even what we’ve been talking about…
I have repeatedly been saying exactly the same thing.
I don’t understand why you haven’t noticed this.
When you say “I may not be very confident of this” you are saying “the frequency with which I am right about claims like this might not be very high.” Ergo, this is a claim about a frequency. It’s frequencies all the way down. It also entails the converse frequency of being wrong (“I am highly confident that x entails I will sometimes be wrong about x,” which does not entail you are wrong on this specific occasion, only that you could be, to the converse of the stated frequency of your confidence).
See my discussion of your own example (I just use dice instead of coins) in Proving History, pp. 265-80.
Here is where you have gotten confused. We were not talking about vanishingly small probabilities. We were talking about merely low ones, and in which there is no dispute that the thing claimed to be frequent sometimes nevertheless doesn’t happen.
What you are trying to argue is that such claims amount to something like “if the frequency of x is no greater than 1 in 100, then the frequency of ~x is somewhere between 0 and 1 in 100,” therefore it is possible x never occurs.
But we weren’t talking about whether it was logically possible for x never to occur. We were talking about everything we are allowing to be possible. In other words, we were talking about exactly the opposite thing: that agreeing that x only occurs at a certain rate, we are agreeing it is possible for x not to occur.
We can agree that the frequency of x might indeed be zero. That makes no difference to the fact that we are also admitting that it might not be zero, and indeed might even be respectably high–as in my case, explicitly allowing a frequency of historicity for high-scoring Rank-Raglan heroes that isn’t zero…nor even vanishingly small. You can’t claim that when I say the frequency of x is 99 in 100, that therefore I am saying ~x never occurs. Nor can you adduce a single example of ~x and thus argue that when I said the frequency of x is 99 in 100, I was wrong.
Those were the errors I was calling out.
You tried defending the errors I was calling out. For some reason.
Now you have abandoned that (rightly) and descended into defending claims wholly irrelevant to anything I was talking about in the first place.
You have severely mischaracterized my statements, so I feel compelled to continue.
If your current workload prevents you from responding properly, then just say so.
I never mentioned vanishingly small probabilities, nor relied upon them for any of my arguments.
The claim was actually about something being unlikely, not frequent, or even possessing a small non-zero frequency. Contrary to your claim, there is nothing about the word unlikely that guarantees (a) that a thing sometimes happens, or even, acknowledging the difference between frequency and expected frequency, (b) that I believe that a thing sometimes happens.
Example: It is unlikely that Richard Carrier commits murders. Is the truth of this statement sufficient to have you locked up? Or for you to sue me for libel?
(In the above, (a) and (b) are different things, despite your denial. See for example comment 5.1: “Bayes’ Theorem gets you to an answer. But that is still just a frequency: the frequency with which a conclusion reached that way will be wrong.” There is nothing to guarantee this frequency, as I have explained multiple times. And no, this does not render probabilities meaningless, as you claimed in the same comment – probability is the amount of belief a modeled rational agent would possess if the prior information and the probability model were guaranteed correct, a situation that cannot ever arise, but an approximation that allows rational decision making to any arbitrary degree of sophistication.)
Can you please point out where I made those errors?
I did not claim you did. I claim Rosson did. Then you leapt to the defense of Rosson. For some reason I can’t fathom.
If that is not what you thought you were doing, then you clearly didn’t understand what I was saying about Rosson. I was being charitable in assuming you did not go off the rails that badly right from the start, and that you did understand which errors I was calling Rosson out for (or theorizing he was falling victim to, since I am not sure that’s what happened, his statements only imply it), and therefore you were defending Rosson.
Otherwise, what on earth were you arguing?
Rosson acts like when I said “likely x” I was saying “never ~x.” But “likely x” does not mean “never ~x.” And the converse of never is sometimes. So if I did not mean “never ~x” I must mean “sometimes ~x.” QED.
It is not a legitimate objection that “I could have thought never ~x,” because had I thought so, I would have said so. That “never ~x” is logically possible is wholly irrelevant to what I was saying about Rosson’s erroneous inference. Indeed, we only get “never ~x” when we reframe the discussion with margins of error. But Rosson wasn’t confused about there being margins of error. His mistake was not that of reading me as having said “the probability of x is between 67% and 100%” and then interpreting that as my having said “the probability of ~x is 0%.”
This is a side issue from the above, but this also suggests you are not listening.
If you say “It is unlikely that Richard Carrier commits murders” you actually mean “it is unlikely that I am wrong that Richard commits no murders.”
The first thing being frequency-counted here is not my acts of murder, but your acts of being right or wrong in your epistemic judgments.
Thus what is duly entailed by “it is unlikely that Richard Carrier commits murders” is “sometimes I am wrong about things like that,” not “sometimes Richard Carrier commits murders.”
Nevertheless, if you genuinely thought that I spend 10% of my time committing murders (if that is what you actually meant by “It is unlikely that Richard Carrier commits murders”), then indeed “sometimes Richard Carrier commits murders.”
You are thus confusing an epistemic statement about my having ever done x, with a statement about the frequency with which I do x.
Certainly, if you thought that latter frequency was so high that odds are I have murdered at least one person by now, then you would indeed be thinking “sometimes Richard Carrier commits murder” (to whatever same probability of being correct you assign to your estimate of that frequency). But usually when you say it is unlikely that I have ever committed murder, you are saying the frequency of it is so low that odds are I have not murdered at least one person by now, and that you are (let’s say) 90% certain the frequency is that or lower. The “unlikely” is then referring to how likely you are to be wrong, not to how often I commit murder (the latter frequency would be something far lower than that, in order for it to be unlikely that I have ever committed murder).
It is true that this then entails “sometimes Richard Carrier commits murder” if “Richard Carrier” lived to infinity and thus your estimate of the odds of my ever doing so approach 100% as t approaches infinity, but t doesn’t approach infinity for human lives. Indeed, this can be used to calibrate your estimation of how likely it is that I will commit murder: what frequency (in murders/year) do you think has a better than 50% chance of being true given that I will live for 1,000,000 years? You could say it would even then be below 1 (and since you can’t have half a murder, it would then be false to say “sometimes” I commit murder–but only because now we are dealing with vanishingly small probabilities, where I am expected to die before even one event occurs). But if you’d say it would be 1 or above, then you are indeed saying that sometimes, per million years, I will commit murder (more probably than not). It’s just that I’m not likely to live that long, so this is kind of a moot point.
Hence my point about vanishingly small probabilities not being relevant to what I said about Rosson.
In reality you would not say it’s 10% likely I have committed murder. You would know that the actual prior probability that any random American will ever commit murder is well below 1 in 300, or 0.3%. And if you allowed any evidence to reduce that in my case (e.g. I am not an active member of a drug gang, nor have any other markers indicative of being in the murderer class), it would drop even further. But let’s stick with that, as per 85 years (average lifespan). So it would be incorrect to say “it is unlikely [= 0.3%] RC will commit murder” therefore “sometimes [less than 1 per 85 years] RC will commit murder” only because in that condition odds are I will die before even 1 murder occurs. If I lived a million years, you can legitimately expect me to commit murder sometime, unless you can adduce evidence that reduces its frequency in my case to below once in a million years. And so on, as life is extended.
Your confidence level would then correspond to a confidence interval (the maximum and minimum frequency with which RC commits murder, at that confidence level, let’s say 90%), which could include zero at the low end. But that would not mean you believed it was zero, much less that you were saying it was zero. That’s my point.
Moreover, that your confidence in that interval is 90% does not mean you think there is a 10% chance I have committed murder by now. It only means there is a 10% chance the interval is larger than you estimated. What that entails regarding what you would be compelled to believe about the actual probability I’ve murdered someone would depend on how much the interval is widened, how much of that widened interval crosses into at least 1 murder per 50 or so years, and what the probability is of the true frequency falling in that section of the resulting bell curve (formed by each frequency being assigned a probability of being the true frequency, all of which probabilities sum to 1).
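To make that calibration concrete, here is the same arithmetic as a sketch (using the hypothetical 0.3% lifetime figure from above and a simple constant-rate model of my own choosing):

```python
import math

# Hypothetical, from the example above: a ~0.3% chance of ever committing
# murder over an ~85-year lifespan, modeled as a constant-rate process.
p_lifetime = 0.003
lifespan = 85

rate_per_year = -math.log(1 - p_lifetime) / lifespan
print(rate_per_year)                 # about 3.5e-05 murders per year

# Over one lifetime the expected count stays far below 1, so "sometimes
# commits murder" is the wrong description even though the rate is non-zero.
print(rate_per_year * lifespan)      # about 0.003

# Stretched over a million years the expected count rises well above 1, which
# is the sense in which the frequency claim would then entail "sometimes."
print(rate_per_year * 1_000_000)     # about 35
```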
I did not. The very first thing I said about this was that you were correct in identifying Rosson’s argument as an error.
That you don’t see the difference between what I said and a defense of Rosson’s argument is ample evidence that you don’t comprehend what I am talking about.
This is my exact point, and contradicts your original statement, “[x is] unlikely entails sometimes x,” along with several you have made since.
Ah, then I mistook what you were trying to argue. I don’t think then that we actually disagree about anything.
Note that as I have explained, there is in fact a relationship between your rate of error claiming x and the rate of ~x. It is only when the rate of ~x is vanishingly small (or relevantly small, e.g. smaller than can be expected to occur even once in a human lifespan in the murder case) that it becomes inaccurate to say ~x sometimes occurs when you say x is likely. Otherwise, the entailment does obtain. That’s what I’ve been saying.
But, you are using the wrong analogy. Rosson is talking about my statement “it’s inherently unlikely that any Christian author would include anything embarrassing in his Gospel account, since he could choose to include or omit whatever he wanted,” p. 134 PH, which is not a statement of my rate of error, but an actual statement about the frequency of x occurring (my confidence in that statement would then be close to 100%; your Big Bang example was using the confidence level and mistakenly equating that with the frequency we are claiming to have confidence in). Indeed, in the Rosson case we are talking about my even being explicit about this, e.g. “The exceptions are very few and hard to establish in particular cases,” p. 136 PH, could hardly be clearer. I even discuss the mathematical rate in detail, with equations (pp. 163-66).
I have been assuming you knew the original sentences in PH that we were talking about. I see now that you must not have. Which is why we’ve been talking past each other.
Hi Richard,
Glad we got one thing cleared up, at least.
As it happens, I have read those sentences, but they are irrelevant to my point. Your assertion was
This statement is abstract and general, and is thus independent of any context. Perhaps you feel I’m nitpicking, but it started as a minor point, and then escalated when you defended it. Note that the wrongness of your statement absolutely does not depend on any probability or rate being vanishingly small.
I’m very glad we understand each other a little better, but I’m afraid there are still some issues on which we disagree. I’m not sure, however, how much miscommunication, resulting from your disorientation regarding the thrust of my initial comment, remains to be corrected.
I very much doubt that any of your probability calculations have been adversely affected, but the problem I perceive is with the way the theory is understood. If you are interested in understanding my objections better, I’m happy to continue the discussion here, or offline if you prefer a more relaxed environment – either by email, or Skype, or whatnot (I believe you can find my email address). I’m pretty busy at the moment, though, as I expect to travel to Europe within a few days.
For now, however, I’d like to express why I feel these issues I perceive are important. I appreciate your indulgence.
When people start to learn probability theory, if they have enquiring minds, the first niggling doubt usually comes when they ask themselves, “how do I know my priors are correct?” In fact, as you know, this problem is trivial – the priors are entailed by the pre-existing data and the probability model – but many never get past it (it’s a standard frequentist objection that I still hear often, and, weirdly, one that some Bayesians also subscribe to). Those who do get past it face another source of doubt and confusion: “how do I know that my probability model is correct?”
Whatever objections people might be stuck on, their confusion undermines their faith in probability theory, and many have struggled to find alternative theories of inference, several of which remain quite popular. As none of these alternatives is probability theory, however, they are all necessarily bullshit, which leads people, including many active scientists, into all kinds of trouble regarding the relevance and meaning of their results, and sometimes causes people to reject useful and valid methodologies, or to directly employ inappropriate mathematical procedures.
Now, the great worry, “how do I know that my probability model is correct?” has been the particular anguish of many. The failure of probability theory to answer this question has led many to abandon it in favor of low-grade alternatives. But this question is not something probability theory claims to answer, nor can it. Nor, in fact, can any possible alternative. What probability theory does provide is a mechanism to proceed if we have particular concerns about some model: we apply the theory recursively, using the method of model comparison. We assign our current model to a point in some higher-level parameter space, and crank the Bayesian handle again. This can be continued, as I mentioned earlier, to any arbitrary degree of sophistication. This is the best that any theory of inference can deliver, and is not a bug, but a marvelous feature.
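For readers who have not seen this done, here is a minimal sketch of “cranking the handle again” at the level of models, using a coin example of my own (nothing here is drawn from PH or OHJ): two candidate probability models are themselves treated as hypotheses and weighed by their marginal likelihoods.

```python
from math import comb

# Two candidate probability models for a coin, compared as hypotheses themselves.
heads, tosses = 522, 1000

# Model A: the coin is fair (p = 0.5). Likelihood of the observed count:
like_A = comb(tosses, heads) * 0.5**heads * 0.5**(tosses - heads)

# Model B: p unknown, uniform prior on [0, 1]. The marginal likelihood of a
# Binomial count under a uniform prior works out to 1 / (tosses + 1).
like_B = 1 / (tosses + 1)

prior_A = prior_B = 0.5                        # equal prior weight on each model
post_A = prior_A * like_A / (prior_A * like_A + prior_B * like_B)
print(f"Bayes factor (A over B): {like_A / like_B:.2f}")
print(f"P(Model A | data):       {post_A:.2f}")
```

The same move can be repeated at a higher level if the space of models is itself in doubt, which is what I mean by applying the theory recursively.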
The problem with your statement that probabilities are frequencies is that it demands that probability theory can guarantee the correctness of one’s probability model.
This objection survives whether the frequency you are identifying relates directly to the phenomenon under investigation (e.g. how biased is that coin?), or if it’s the frequency with which one will be right about similar problems of inference. This, I’ve tried to explain in the preceding comments. I understand, though, that my ability to explain is imperfect, and often assumes the triviality of things others may not have spent time thinking about.
Please note that a frequency (as opposed to an estimate of a frequency, somebody’s idea of a frequency) is a real property of some system, not some hypothetical construct. If I toss a newly minted coin 1000 times, getting 522 heads, then destroy the coin, the frequency of heads for that coin was, and remains forever, exactly 0.522. (If a person lives and dies having committed not a single murder, then their murder rate is and forever remains exactly 0.) Note also that had I not destroyed the coin, but kept it and declared its frequency of showing heads to be 0.522, or 0.5 (different numbers, corresponding to two possible probability models), for all future tosses, I would have been notably wrong had somebody remotely activated a mechanism inside the coin, making it show heads on every subsequent occasion.
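A toy restatement of that distinction in code (again my own illustration, with made-up tosses): the realized frequency of a finished run is a fixed fact about that run, while a declared frequency for future tosses is a model the world can falsify.

```python
# A finished run has a fixed realized frequency: a fact about that run forever.
completed_run = [1] * 522 + [0] * 478          # 522 heads in 1000 tosses, as above
realized_freq = sum(completed_run) / len(completed_run)
print(f"realized frequency of the (now destroyed) coin: {realized_freq}")   # 0.522

# A declared frequency for future tosses is a probability model, not a fact,
# and it can be wrong -- e.g. if someone rigs the coin to land heads every time.
declared_freq = 0.5
rigged_future = [1] * 1000
print(f"declared {declared_freq}, but the next {len(rigged_future)} tosses "
      f"come up heads at frequency {sum(rigged_future) / len(rigged_future)}")
```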
Once again, if you’d like to discuss this further, I’ll be happy to oblige.
With genuine appreciation for your work promoting probability theory and rationality,
Tom
(PS: every time I see “OHJ,” my mischievous brain sends it to my inner voice as “OMG”)
Thank you. But you are no longer talking about anything relevant to my comment on Rosson. On why you are not entirely correct that “a frequency is a real property of some system, not some hypothetical construct,” because we often have to use hypothetical and not actual frequencies to establish a sound logical argument in the absence of omniscience, see PH, pp. 257-65. Which actually addresses the very example you are using (albeit with dice instead of coins).
If we were omniscient, and thus knew the actual frequencies of things, then your entire earlier case would be even more undermined, because then when I said “likely x” it would always literally mean “sometimes ~x,” rather than that ~x is impossible. It can only ever be allowed to include “never ~x” when we aren’t speaking about the actual frequency (because that can never be known to us). Otherwise, if we knew the actual f, we would know whether f = 0 or not.
But even when we don’t know, and therefore “never ~x” is possible, my comment about Rosson was simply that it is incorrect to conclude that when I say “likely x” I mean “never ~x.” That was my original point, and it remains correct.
I was wondering about Rank-Raglan as the reference class. I agree in principle that Jesus belongs in it, and it’s probably the only peer-reviewed methodology, but does it include all Mediterranean religious figures from the time? From what I can tell it has Zeus, Moses and Romulus but I’m not sure if figures like Bendis, Inanna, Zalmoxis etc. are included.
Not that any of them were historical, but it seems like the relevant place/time is the Mediterranean from, say, the Axial Age to the Council of Nicaea, and critics might try to dismiss it for including other mythical characters (like King Arthur or Robin Hood). Just a thought.
Not enough data exists for the deities you name. So they have to be excluded (they can’t be shown to score above 10).
I also do not use King Arthur or Robin Hood or anyone outside the applicable historical-cultural context in my frequency data.
After reading this post, I am more and more sympathetic to the idea that the implication:
power struggle between Paul and the Pillars -> historicity
is a false implication. But you already explained your reasons in this discussion with Vince Hart, when you wrote:
Your entire argument was that historicity had to be true because there was something more to the tension than just doctrinal differences. Otherwise, if we know of nothing more to the tension than just doctrinal differences, then we cannot say “there was something more, therefore there must have been a Jesus,” and if we cannot say there was something more, then all we know is that it was the dispute Paul records, which in no way entails historicity (and by its nature even argues against it), therefore “all we know is something which in no way entails historicity (and by its nature even argues against it).”
I have now learned not to trust those scholars who exaggerate the real proportions of the conflict between Paul and the Pillars.
However, from Loren Rosson’s view I would like to salvage, under the Jesus Myth theory, the suggestive scenario described in such admirable terms in his post.
Is that possible? Does it represent the maximum of possible tension that we can infer from Galatians? And with which corrections, if any are required?
Many thanks,
Giuseppe
That is all beside the point. What is relevant in Gal. 1-2 is not what the doctrinal particulars were (that is an interesting side question, but not helpful for the historicity debate), but the fact that Paul clearly believes (and assumes the Galatians believed) that human testimony was not authoritative, and that only a revelation counted as trustworthy contact with Jesus (it is the only thing that makes one an apostle). Hence Paul insists up and down that he never got it from the apostles before him, that he never in fact even met them until long after he’d been evangelizing.
What is conspicuously missing there is Paul having to justify this to the Galatians. He never once has to argue that revelation is sufficient to equal direct human discipling by Jesus; that the apostles before him were taught and chosen directly by Jesus in life (“and wouldn’t he have told us about these new doctrinal changes then?”) is never the argument he has to confront. He is never even aware of anyone making that argument. To the contrary, it is already assumed, and requires no argument, throughout Gal. 1-2 that revelation is definitive, and there is no other access to Jesus Paul is aware of, or that the Galatians trust.
This makes no sense. Unless, in fact, revelation was definitive, and there was no other access to Jesus to be had, and thus no other that could be held over Paul as an argument against him. Otherwise, it would be the first argument he would have to confront and refute.