Is everyone who lacks belief in a god an atheist? Or is there always some middle category—we’ll call it “agnostic”—such that (a) they don’t believe any gods exist but also, at the same time, (b) they can never be called an atheist? Those in the know will recognize I am of course talking about the endless, frustrating debates everyone has with Steve McRae, whom, despite a lot of elaborate effort, I couldn’t pin down to any coherent position on the matter. Here is an outcome report on that attempt, and on what facts and reality actually dictate in the matter.
Backstory
My involvement all began with my article on Misunderstanding the Burden of Proof, in which I agreed with McRae that everyone bears a burden of proof to assert any belief or disbelief. Thus even atheists bear a burden of proof to warrant any belief that god probably doesn’t exist. But I disagreed with McRae by pointing out that anyone who is already calling themselves an atheist has already met that burden. So they don’t need to meet it “again.” The burden is now back on theists to prove atheists are wrong about that.
This does mean an atheist should be able to give an accounting of why they are an atheist. They don’t “have” to in the sense that they are under no obligation to every theist to waste their time doing this. We all have better things to do. But in principle they should be able to, otherwise their atheism is not warranted. Even atheists who are highly uncertain about their atheism—atheists who conclude the probability of there being any god is close to 50% (many of whom thus prefer to call themselves “agnostics”)—must be able to justify their uncertainty. If they cannot give an account of why they think the probability of any god’s existence on presently available evidence is even so high as 50%, then their believing that is without justification, and thus not rationally founded.
In short, every probability you could ever possibly assign to “some god exists” is a positive belief that requires justification. There is no such thing as a truly “negative” belief within any coherent logic of epistemology. Every so-called negative belief is in fact a positive assertion of some probability or range of probabilities of the truth of the thing not believed. Even “I’m not sure whether x is true or false” or “I don’t know whether x is true or false” logically entails asserting that the probability of x being true so far as you know is or is close to 50%. Which is a positive assertion about x. You therefore must still have a justifiable reason to believe that assertion.
That justification can be fairly simple, however, e.g. “I’ve seen no evidence for or against x being likely” or “I have seen equally balanced arguments for and against x being likely,” but that’s still a justification, it’s still a listing of evidential reasons warranting your belief (in this case, that the probability of x is close to 50/50). And that still entails positive truth claims about the world and your experience of it (e.g. have you really seen no such evidence? have you really seen only balanced arguments for and against?).
But second to this epistemological point is a separate semantic point: that what people in common discourse mean by the word “atheist” is, in actual empirical fact, most commonly just anyone who lacks belief in the existence of any god. McRae wants to change the English language so that this would be somehow semantically disallowed, because he has some sort of weird phobia against being called an atheist himself, and, to satisfy that strange emotional need, he’d rather pave the earth with leather than just wear shoes.
Consequently McRae pisses off a lot of people who prefer the ordinary meaning of English words, and who need that meaning to deal with Christian propagandists trying to distort ordinary language to push their false narrative of the world (such as with the “you aren’t really an atheist” rigmarole or the “you have to waste your time arguing with me or else your atheism isn’t justified” game). But, being insensitive to that socio-political reality, McRae simply won’t let ordinary people call themselves atheists for merely lacking belief, because he himself doesn’t want to be called an atheist for merely lacking belief. Which evinces a completely incorrect understanding of how language works. And McRae really doesn’t like being told that.
That’s the backstory. Which led to the following.
Linguistic Imperialism as Fool’s Errand
Starting back on the 25th of February in 2019, Steve McRae and I agreed to discuss the issue of the epistemic and semantic status of atheism and nontheism in a special Facebook thread I hosted on my account.
Steve asked whether I “think atheists would have a stronger position if they adopted the more philosophical understanding of it in a positive case?” But no answer to this question can be intelligible without working through two subordinate questions, which relate to what it even means to “have a stronger position” or why definitions of words have anything to do with that. So we first had to work through those two things to see if we got anywhere. Then, and only then, could we repeat this process for the original question. Here I shall get to the bottom of only those preliminary questions. (As to the concluding question, I already answered it in my Burden article.)
Steve of course meant the most common definition of “atheism” specifically in use in philosophy (which is not the definition most widely encoded in the brains of the wider public), that being a positive belief that there is no god; as opposed to merely lacking belief in any gods, which he wishes we could refer to with a term like nontheist instead (even though these are etymologically identical, a– simply being the Greek counterpart to the Latin non-). But is it more useful to only call “atheism” the positive assertion of the belief that there is no god?
The difference is, again, just a matter of what probability one asserts for the existence of a god: “positive atheism” (also known as “hard” or “strong” atheism) simply asserts that that probability is low enough to be reasonably certain there are no gods; while “negative atheism” (also known as “soft” or “weak” atheism) simply asserts that that probability is not that low, but is also not high enough to suspect there is a god either. Which, again, entails negative atheists are asserting the probability of there being any god is too close to 50% to be confident either way.
That latter position of course can be most easily communicated in regular discourse by calling oneself an agnostic. And generally we’ll all know what you thus mean by that. But that again means in the colloquial, now commonly-familiar sense of agnostic (as someone unsure, undecided, less confident about the status of god); not the rare philosophical sense, having to do with a supposed impossibility of having any knowledge of god, which is now an obscure and antiquated epistemological position almost no one you meet on the street will ever have heard of. But by the most commonly used definitions of all these words, all agnostics in either sense are also atheists. And though they can still dictate how they prefer people refer to them (after all, everyone gets to name themselves), they cannot thereby command how language works generally. If they are atheists by common definition, they are atheists by common definition. Denying this is denying reality. It’s to confuse personal preferences with linguistic facts.
McRae wants instead for common English usage today to “change” to align with hyper-technical usage only widely known in academic philosophy. But, first, that’s not how linguistic change happens. It’s a vain dream to think your wishing or raging at a vast linguistic community will affect it. It won’t. Language will go where language goes; and until then language is what it is. All the railing against it you might conjure will not change either fact. But also, second, even McRae’s claim about academic philosophy is too presumptive; in actual fact philosophers have long employed multiple different definitions of the word “atheist” and there really isn’t an official definition even in that field.
As the Stanford Encyclopedia of Philosophy puts it:
While identifying atheism with the metaphysical claim that there is no God (or that there are no gods) is particularly useful for doing philosophy, it is important to recognize that the term “atheism” is polysemous—i.e., it has more than one related meaning—even within philosophy.
It then gives examples of even more restrictive definitions (such as atheist meaning specifically a philosophical naturalist) and broader definitions (such as atheist meaning anyone lacking belief), which latter the Encyclopedia admits is “a legitimate definition in the sense that it reports how a significant number of people use the term.” Exactly.
Nevertheless, as that same entry points out, a greater utility (for certain limited purposes) of the middle definition tends to be the more common view in the ivory tower. Particularly as philosophers tend to want to talk about propositions (assertions) rather than “psychological states.” Although I believe there remains confusion as to whether these are in fact different things. Once you frame all assertions as assertions about a probability—as all assertions of fact actually are—the distinction dissolves. Because every psychological state with respect to the truth of x corresponds to a positive proposition about the probability of x.
But that rare definition only holds sway in the rarefied community of the ivory tower. Not the wider public discourse. And only the latter matters outside the narrow confines of an obscure community. Anyone who grounds their beliefs and attitudes in reality accepts this and makes their peace with it. They don’t get pointlessly outraged by it. Language is what language is. And we should work with how things are, not how we wish they were.
Framing the Discussion in Correct Terms
I then broke this debate down into two propositions:
- (1) I have said there is rarely any point in arguing over definitions. The only thing that matters is whether you understand how someone else is defining a term, and that they understand how you are defining that term. Anything else results in failed communication, defeating the purpose of words altogether.
Outside that single statement, there is no unique sense in which one definition is “right.” One can talk about which definitions are most commonly understood (so that if you use a word without defining it, what can you expect your audience to have “heard” you say?), but that’s an empirical question, not an analytical or logical one. Likewise which definitions are most “useful” can vary enormously. Because people can disagree as to which outcome measures are the more important (a word can be “useful” in many different ways; which of those ways are we to prefer and why?); and even when acknowledging each other’s chosen utility function, the question of whether a certain definition actually is useful in that sense is again an empirical question, not an analytical or logical one.
And this distinction is important. Empirical questions can only be resolved by reference to facts, not logic. So is there anything really to discuss here, as to how “atheist” is defined? For example, if most people simply understand it to mean one who isn’t convinced theism is true, should we go around acting like that’s not what most people understand it to mean? Or do most people understand it that way? These are questions in communication theory.
And then:
- (2) I have said that when we treat epistemic assertions coherently, they are always probabilistic; and when you reduce every assertion to its assertion about probability, the question of who is an atheist dissolves as moot.
I will set aside the rare exceptions (e.g. certain propositions about “uninterpreted present experience” can be, epistemically, absolutely true or false; and incoherent statements have no content capable of being true or false), because they will not be relevant to this issue, which is a question about intelligible assertions regarding external reality, which always have a nonzero probability of being false. Because all such statements have a nonzero probability of being false, all such statements reduce to the assertion of a probability (or range of probabilities). Always.
Once we recognize that, the question “Do you believe a god exists?” can only ever really be answered with a declaration (however couched in coded words) of what you believe to be the probability that a god exists. Even claiming you do not know the probability entails the declaration of an epistemic probability: that the affirmation of the fact in question is no more or less likely than the denial. Because if you did not believe that, then it is logically necessarily the case that you believe the affirmation of the fact in question is more or less likely than the denial. This is a necessary consequence of the laws of logic. Either q (affirmation is as likely as denial) or ~q (affirmation is not as likely as denial). There is no third possibility available. So what epistemic probability do you assign to the existence of god? That’s all that matters. What you “call” that position is irrelevant. Except with regard to communication theory. Otherwise, honestly—is there any position you can take that does not entail an epistemic probability for god? These are questions in epistemology, not communications.
I am of course here stating what I’ve argued. Whether or in what ways McRae agrees or disagrees with either is what we wanted to discuss. I shall note again that there are two separate and distinct issues here: one is a question in communications theory (e.g. how do we successfully communicate with the public); the other a question in epistemology (e.g. what positions with respect to the existence of god are even possible and how can we logically characterize those distinctions). I had to call attention to that distinction again and again as McRae kept blurring them. We must stop blurring them.
Outcome Number One
In regard to communications theory, if we call ordinary language the Common Tongue and the peculiar dialect employed by philosophers Elvish, my point is that if you are talking to people who don’t speak Elvish but only Common, you need to speak Common; else they won’t understand you. And if they are speaking Common to you, you can’t criticize them for making errors in Elvish. Because they aren’t speaking Elvish.
And as the Stanford Encyclopedia of Philosophy says: the lack-of-belief (i.e. the broader) definition of atheist “is a legitimate definition in the sense that it reports how a significant number of people use the term.” Indeed, when you ask bunches of people from all walks of life, you will discover this is the definition most assume to be normative—in Common. Not in Elvish. But we weren’t speaking Elvish. And most people don’t. Communication will always fail if you interpret statements made in Common as if they were made in Elvish, or if you keep speaking Elvish to people who only speak Common.
And there are certainly philosophers who have agreed. Antony Flew, for example, specifically argued in his paper “The Presumption of Atheism” that the colloquial meaning should be adopted even by philosophers, in order to align philosophy with ordinary language:
I want the originally Greek prefix ‘a’ to be read in the same way in ‘atheist’ as it customarily is read in such other Greco-English words as ‘amoral’, ‘atypical’, and ‘asymmetrical’. In this interpretation an atheist becomes: not someone who positively asserts the non-existence of God; but someone who is simply not a theist. Let us, for future ready reference, introduce the labels ‘positive atheist’ for the former and ‘negative atheist’ for the latter.
In truth, if you go around in public saying “atheist,” most people will understand the term to be inclusive of both positive and negative atheism. Such that when someone says they are an “atheist,” no one will by that fact alone know whether they mean positively or negatively, requiring further explanation or inquiry. This is simply how the language is. So if you want to communicate with the public, you have to assume this is the case. Because it is. That’s just the way Common works. And you don’t have a magical nanorobotic cloud that can envelop the earth and rewire the neurons in the brains of billions of people to change that. You’re stuck with how they are wired. And to communicate successfully, you have to listen and to speak as their brains are programmed to understand. Period.
It’s not even particularly better in philosophy, as there are enough philosophers using “atheist” in both senses that one will not know which sense any specific philosopher is using unless they specifically tell you. And since many do indeed use the fully inclusive sense shared by the wider public—even if most use the hyper-technical sense that excludes negative atheism as a third and separate category from “atheist” and “theist,” however labeled—you still won’t know which philosopher is which unless they tell you. So in actual practice there is no utility in specializing the definition like that. Nothing is gained in efficiency or effectiveness of communication by it even within philosophy. Even in philosophy, one always has to stipulate which sense one means. And that’s that.
I couldn’t get McRae to acknowledge any of this or explain why he refused to acknowledge it. He wouldn’t even recognize that most people arguing with him were taking him as attempting to dictate to them how Common works (in which event he was totally wrong), not how Elvish works (which none of them actually care about). So I gave up. As far as I could tell, he seems resolutely uninterested in adopting a sound theory of communication, particularly with the public.
Outcome Number Two
I also think McRae kept confusing ontological with epistemic probability in our discussion. It seemed he kept falsely assuming propositions can be assigned an ontological probability—which, outside purely analytical contexts divorced from the impact of evidence, is never the case.
The ontological dichotomy between “a god exists” and “no god exists” (hereafter G and ~G) is represented in the logic of epistemic probability (or P) by the rule that P(G) + P(~G) must always equal 1. If P(G) + P(~G) < 1 then you have violated the Law of Excluded Middle (you’ve left out an option, whatever fills the remaining probability space). If P(G) + P(~G) > 1 then you have violated the Law of Non-Contradiction (you’ve included a possibility that is both G and ~G). But once you correctly ramify P(G) and P(~G), you have a continuum, not a binary.
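That complement rule can be checked mechanically. Here is a purely illustrative sketch (the function name and messages are my own, not from any library):

```python
# Purely illustrative sketch: checking an assignment of epistemic
# probabilities against the rule that P(G) + P(~G) must equal 1.

def check_dichotomy(p_g: float, p_not_g: float) -> str:
    """Classify a pair of probability assignments for G and ~G."""
    total = p_g + p_not_g
    if abs(total - 1.0) < 1e-9:  # tolerance for floating-point arithmetic
        return "coherent: P(G) + P(~G) = 1"
    if total < 1.0:
        return "violates the Law of Excluded Middle (an option was left out)"
    return "violates the Law of Non-Contradiction (possibilities overlap)"

print(check_dichotomy(0.3, 0.7))  # coherent
print(check_dichotomy(0.3, 0.5))  # something must fill the missing 0.2
print(check_dichotomy(0.6, 0.7))  # double-counts part of the space
```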
We can never simply “know” whether “God exists” is true or false; we can only ever know how probable it is. And we can only know how probable it is given what we know at any given time. Thus as our knowledge changes, so might that probability. So when making verbal statements about God, only the logic of epistemic probability applies. And this follows because knowledge is “justified true belief,” yet it is logically impossible to ever have absolute knowledge that something external to us is or is not “true,” because we can only have knowledge that it is probable (or improbable—or neither probable nor improbable, which entails it’s at or near 50/50, still a probability).
This is why knowledge only exists when stated as a probability. We can hide this with all manner of couched language or elided assumptions about how “certain” we are of what we are saying at any given time. But we cannot avoid the fact that there is always a nonzero probability we are wrong in any statement we make, in any belief we have (see How Not to Be a Doofus). Which entails we are only ever really affirming the converse probability to that, which is then the probability that we are right. For instance, if we believe there is a 5% chance we are wrong about something, we are ipso facto declaring a 95% chance we are right about it. Thus, all knowledge, and thus all belief, is probabilistic.
You can “believe God does not [or does] exist or hold no belief either way (Suspend judgment),” but that is simply couched language hiding what you really mean, which is that you can believe the probability that God exists is low enough to be confident He doesn’t, or high enough to be confident He does, or neither high nor low enough to be confident either way. And those three options then exhaust all possibilities. But you cannot pick a third option when the dichotomy is “affirmation is as likely as denial, or affirmation is not as likely as denial.” Because then as defined those two options exhaust all logical possibilities. Thus you can’t escape the conclusion that you must be affirming some range of values, however wide or vague, for P(G). And that entails 1 – P(G) = P(~G). Thus every probability you assign to “God exists” entails an equally positive assertion of the probability “God does not exist.” And vice versa. There is no way to avoid this with semantics about “suspending judgment.” That’s just code for “I don’t know that P(G) is < or > P(~G)” which logically entails “for all I know P(G) = 0.5 or near enough to be uncertain whether or not G exists.”
And note here I am not talking about what we will call each probability assignment, i.e. who counts as an atheist or agnostic; I am here only talking about what is true regardless of what words we assign to these possibilities, which is that all positions regarding G (no matter how you demarcate them) entail claiming some range for P(G), and this is the only useful information to have. “What that position is called” is not useful information, except with respect to communication theory. Which I already discussed above. I feel like McRae had a hard time grasping this.
I think epistemically McRae wants to assert for himself “P(G) is near enough to 0.5 for me to lack confidence in either G or ~G.” Which is fine. As long as it’s rationally founded. That is, as long as “close to 0.5” actually is where the evidence you’ve seen rationally leaves it—one can question that (and to be honest, IMO, McRae’s probability assignment here is not even remotely plausible), but one would then be making a different argument, one to do neither with the epistemic options available nor how to name them. Otherwise, the epistemic options can be as many as you want. You can split the probability space in two, three, five, a million pieces, and name them all if you want. Then the question becomes whether people who affirm one or more of those positions “should” or “should not” be called atheists, or when they should or shouldn’t. Which then is only a matter of effective communication: what the people you are communicating with already understand the relevant words to mean.
McRae wanted to insist on dividing the possibility space into three (high P(G); low P(G); and middling P(G)), and incorrectly assumed doing that requires separate words for each division of that space (“theism”; “atheism”; and then “agnosticism” or “nontheism,” or something). Neither is true. Dividing the space into three rather than two is simply an arbitrary decision. We could divide it into two, three, or ten if we wanted to. It would dictate nothing. How people’s brains have learned to label those demarcations will be what it is, regardless of how you or any fringe linguistic community divides the space. And even if we do divide it into three, in that very fashion, it does not follow that we cannot name supersets of them, e.g. the last two options can together have one label inclusive of both (which is what “atheism” has evolved in common use to mean), while also at the same time having specific separate labels for each (such as “agnosticism” for the third option and some phrase like “strong atheism” for the second). There is no logical necessity that it be otherwise. And in actual linguistic fact, it simply is not otherwise.
McRae had a hard time being clear as to what he even means by “believing” something as opposed to “suspending judgment.” But these are again just code words for confidence levels. “Believing x” simply means “Confident that x is true.” “Disbelieving x” simply means “Confident that x is false.” And “suspending judgment” simply means “Not confident that x is true or false.” Which is itself a belief. Because any statement about your level of “confidence” is simply a disguised assertion of probabilities, an assertion you thus believe to be true.
Everyone’s threshold may differ from everyone else’s, but each individual is only reasoning coherently if their own thresholds are consistent. At some probability of x, you will “be confident that x” and therefore “believe x“; or “be confident that ~x” and therefore “disbelieve x.” And at every other probability, you will “lack confidence or belief that x or ~x,” a.k.a. “suspend judgment.” In which case you believe the probability is too close to call. And in practice, people usually demarcate more than just those three options, describing instead different and varying degrees of confidence or belief. We also have to distinguish this from the fact that people will also make decisions not just on their degree of belief, but also on their calculated risk of being wrong—such that when it costs more to be wrong, we usually require a higher confidence that something is true (or false) before acting on it.
One must not confuse these things. Risk tolerance is different from degrees of belief; which is different from what you choose to call each degree of belief; which might then even be different from what you need to call each degree of belief in order to be understood when communicating with other people (or to understand them in turn).
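The threshold picture just described can be put in concrete terms. A minimal sketch (the function name and the 0.9 threshold are my own illustrative assumptions, not anyone’s actual threshold):

```python
# Minimal sketch: mapping an epistemic probability for x onto the
# everyday labels "believe," "disbelieve," and "suspend judgment."
# The 0.9 default is an arbitrary illustrative choice.

def doxastic_label(p: float, confident_at: float = 0.9) -> str:
    """Label a probability assignment for x by one person's thresholds."""
    if p >= confident_at:
        return "believe x"             # confident x is true
    if p <= 1.0 - confident_at:
        return "disbelieve x"          # confident x is false
    return "suspend judgment on x"     # too close to call, for this person

print(doxastic_label(0.95))  # believe x
print(doxastic_label(0.05))  # disbelieve x
print(doxastic_label(0.60))  # suspend judgment on x
```

A different reasoner might set `confident_at` to 0.99; coherence only requires that each person apply their own threshold consistently.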
Instead, McRae insisted his tripartite division was “not arbitrary as it is a function of logic.” Wrong. He was choosing to divide the probability space into three areas: high-P, low-P, and middling-P. There is no objective reason why anyone has to do that. We could have divided the total area into six, or ten, simply by adding more labels to describe each division (such as “high confidence,” “very high confidence,” and so on). And in practice people often do exactly that. People can also divide the space simply into two if they want: high-P and low-P, being merely whether someone thinks P(G) is above or equal-or-below 50%. Nothing in logic prohibits their doing that. And even if we divide it into three, we still have to arbitrarily choose how much lower than 50% will count as “low-P” and not “middling-P” (similarly for “high-P”). Which is always just arbitrary. There is no demarcation dictated by logic.
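The arbitrariness of any such partition can be made vivid with a sketch (all bin edges and labels here are my own inventions; nothing in logic privileges any of them):

```python
# Illustrative sketch: the same probability space carved into two,
# three, or six divisions. All edges and labels are arbitrary choices.
import bisect

def partition(p, edges, labels):
    """Return the label of the division containing probability p."""
    assert len(labels) == len(edges) + 1
    return labels[bisect.bisect_right(edges, p)]

binary  = ([0.5],        ["low-P", "high-P"])
trinary = ([0.4, 0.6],   ["low-P", "middling-P", "high-P"])
senary  = ([0.05, 0.25, 0.5, 0.75, 0.95],
           ["very low", "low", "leans low", "leans high", "high", "very high"])

for edges, labels in (binary, trinary, senary):
    print(partition(0.45, edges, labels))
# The same assignment of P(G) = 0.45 lands in "low-P", "middling-P",
# or "leans low" depending on which partition we happened to choose.
```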
So McRae can’t “force” people to change how they use words by resorting to logic. Every system of demarcating words suggested here is perfectly logical. Overlapping terms is perfectly logical. Binary terminology is perfectly logical. Trinary terminology is perfectly logical. It’s all perfectly logical. So the only consideration that remains is what terminology people actually use, and thus will actually be understood. Thus it just goes back to communication theory.
Bad Analogies
McRae tried to argue such things as that Christian misuse of the word “evolution” to argue against evolution theory is like the Common-tongue “misuse” of the Elvish term “atheist.” But there is no analogy here. The word “evolution” is used in all sorts of ways in Common (“language evolves”; “I evolved as a person”; “evolution is any change over time”; and so on). That in Elvish it might specifically refer to speciation by natural selection is irrelevant to this fact, and is never an argument for changing common usage. If Christians specifically mis-define “evolution” when arguing about the scientific theory, then they are the ones violating the principles of language; but people using colloquial definitions of “atheist” in common discourse aren’t doing that—because that’s correct usage in Common. Only if they tried to insist the Common definition was the “only” one used in Elvish would they be breaking the rules of language. But they aren’t wrong even to say that the Common definition is sometimes still used in Elvish. Because it is.
Even McRae’s insisting God’s existence is “either true or false” gets us nowhere. Because all we have access to is the probability of its being true or false. So when people refer to what they believe (what they do or don’t have confidence in), they are only ever talking about that probability. Which spans a continuum. It is not binary. So there is no route from here to what McRae wants either.
Similarly, McRae tried arguing that he would treat differently his conclusion that there was a 70% chance my claim to have bought a cat was true and his conclusion that there was a 70% chance my claim to have been abducted by aliens was true. But that’s incoherent. If he actually believes there is a near 1 in 3 chance I didn’t actually buy a cat, then he would have to admit he’s not that confident that I did, but will only accept it on balance of probability. Because that’s literally what “there is a 70% chance you really did buy a cat” means. Likewise, if (remarkably!) I could present enough evidence to get McRae’s confidence as high as 70% that I was abducted by aliens, then he is literally conceding there really is a near 2 in 3 chance I really was abducted by aliens! He should act accordingly. Which means, he should act as if there is only a near 1 in 3 chance I wasn’t abducted by aliens. He should not be treating the two cases differently with respect to belief—only if he assigned them a different probability of being true would that make sense.
Of course it will be different if we are talking about the cost of trusting me in either case and being wrong, which might be lower for the cat thing than the aliens thing, but that’s not the same issue—that’s a matter for decision theory, not epistemology: what we should do about our degree of belief, not what our degree of belief should be.
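That distinction between epistemology and decision theory can be sketched in a few lines (all numbers here are hypothetical cost figures of my own, purely for illustration):

```python
# Hypothetical numbers, purely illustrative: the same 70% credence can
# warrant different actions once the cost of being wrong differs.

def expected_cost_of_trusting(p_true: float, cost_if_wrong: float) -> float:
    """Expected cost of acting on a claim believed true with probability p_true."""
    return (1.0 - p_true) * cost_if_wrong

# Identical degrees of belief (epistemology)...
cat_claim   = expected_cost_of_trusting(0.7, cost_if_wrong=1.0)
alien_claim = expected_cost_of_trusting(0.7, cost_if_wrong=1000.0)

# ...but very different expected costs of being wrong (decision theory):
print(round(cat_claim, 2))    # 0.3
print(round(alien_claim, 2))  # 300.0
```

The rational response to the higher-stakes claim is to demand more evidence before acting, not to quietly assign it a different probability than the evidence warrants.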
Conclusion
McRae of course complains about confusions people fall into who aren’t philosophers and haven’t really thought much about this (or even about semantics or neurolinguistics in general), such as when they start talking about babies or even rocks being atheists. But this is no different than the confusions that result from people arguing over whether a taco is a sandwich. They don’t know how their brain tells the difference between them, so they struggle to formulate an analytical explanation for it. As a result, they often come up with a false definition of taco and sandwich—and here I mean false in the sense that their vocalized definition does not correctly describe what their brain is actually doing when distinguishing tacos from sandwiches.
Any more careful thought would resolve this error. Our brains are telling us, quite simply, that a sandwich is any food between bread that is not a taco. Our brain has learned to exclude special conditions in order to recognize the difference between a sandwich and a taco. It is thus simply defining a sandwich as not a taco. There is nothing philosophically systematic about it. It’s just how language works. Similarly, rocks are incapable of having cognitive belief states, and “atheist” is only a label for a cognitive belief state. Rocks can therefore no more be atheists than they can be Republicans.
Similarly, babies cannot comprehend the question, “Does a god exist?” and therefore can have no epistemic state regarding the answer. They are therefore no more capable of being atheists than theists. But that’s only true with a highly abstract definition of “God.” If instead we define theism as any belief in the existence of anything resembling a god, then it’s actually more likely that all babies are theists, contrary to the usual line atheists maintain.
Child psychologists have long suspected that babies begin life essentially solipsistic in their cognitive disposition toward the world—which means, in this broad sense of “theism” just proposed, they are born believing they are god. But they very soon move toward displacing that belief with the belief that their parents are gods—as in, having absolute power over the world, e.g. babies readily think their parents are responsible for the rain, gravity, and everything else that happens in the world, or even in their own bodies, which is why infants get so angry that their parents don’t fix their every physical discomfort. Agency overdetection (a neurologically innate cognitive bias) then leads them to start seeing gods everywhere (in trees, dolls, the weather). Then, usually, they are subsequently, seamlessly encouraged to displace those beliefs, in their parents as gods or in a world full of gods, with the belief in an abstract celestial god—unless for some reason they find such attempts unconvincing and thus abandon their belief in gods altogether. But by that point they are no longer babies and are indeed properly described as atheists.
But the fact that average people don’t know anything about child psychology or don’t notice that “atheist” is a descriptor for a cognitive state, and thus trip up over such things, doesn’t tell us anything about what they really understand the meaning of “atheist” to be in practice, which is actually physically encoded in their brains wholly regardless of what they erroneously think that encoding to be. No matter how wrong they are about whether a taco is a sandwich when asked to reason it out, their brains already know the difference and have no difficulty with it. They tell tacos from sandwiches with 100% consistency without even having to think how. Likewise “atheism” and “atheist.”
In the end, when it comes to epistemology, the only thing that matters is what probability you assign to “God exists.” Labels are irrelevant. And when it comes to communication (understanding what other people are saying, and being understood in turn), the only thing that matters is what someone will understand you to mean when you say you are an atheist or that someone else is an atheist. And the fact is, most people won’t know whether that means a positive or a negative atheist. And that tells us their brains have been enculturated to recognize no difference between them by that word. Therefore, if we want to be understood, and if we want to understand other people, we have to use the word that way. If we want someone to know more specifically what kind of atheist we are or that someone else is, we simply have to tell them. Because neurolinguistically, that’s just how it is. This is why phrases like “positive” or “strong” atheist and “negative” or “weak” atheist had to be formulated. That fact alone is evidence of the conclusion.
Hence that being the case in no way prevents you from using more specific descriptors if you want to, like “agnostic” or “weak atheist” or whatever you want, including simply stating outright the ranges of probabilities you conclude for the existence of a god or speaking in alien languages (like Elvish) among fringe linguistic communities who understand those languages—as long as the people you are talking to have brains suitably programmed to know what you mean by those utterances. And I have to caution here, in the case of “nontheist,” no such programming exists. Not even in the brains of most academic philosophers. So no one will recognize that as meaning anything other than atheist, which means no one will be able to distinguish merely by that word between either type of atheism. It therefore has no special utility in communication.
The linguistic facts here are already settled. And the logical possibilities are arbitrary and limitless. You should speak so as to be understood, and learn how people actually use words to understand them. Everything else is fringe semantic gameplay. Warring over the meaning of “atheist” or “nontheist” is therefore a monumental waste of time. You can surely find better things to do with your limited resources in life.
Percy Bysshe Shelley wrote:
“God is an hypothesis, and, as such, stands in need of proof: the onus probandi rests on the theist. Sir Isaac Newton says: Hypotheses non fingo, quicquid enim ex phaenomenis non deducitur hypothesis, vocanda est, et hypothesis vel metaphysicae, vel physicae, vel qualitatum occultarum, seu mechanicae, in philosophia locum non habent. To all proofs of the existence of a creative God apply this valuable rule.”
Shelley’s belief the “onus probandi rests on the theist” is another probabilistic argument, based on Newton’s scientific epistemology. What Newton’s statement “Hypotheses non fingo” means is controversial and has been variously interpreted, but I believe it means that because observation and experiment are the only ways to acquire justified true beliefs about the external world, we can exclude all hypotheses that cannot be derived from inductive reasoning. Since we cannot observe god, nor make predictions about god and then test them via experiment, the atheist can safely dismiss god as unfounded conjecture or idle speculation. This means that if the theist wants to convince others god exists, he would have to observe god acting on the physical properties of material objects, then develop a god hypothesis based on these observations, generate predictions, then test them experimentally. Theists who believe in non-anthropomorphic deities would face the additional hurdle of having to show how it is possible to reason inductively about a metaphysical substance, which is what a non-anthropomorphic god is, a logical impossibility, for how can a material substance interact with a non-material one? Theists shoulder the burden for epistemological reasons.
Yes that’s all true but you are missing the point. You are arguing here from a false dichotomy. Please pay more attention to what I write.
First, it is not simply that one side or the other bears the burden of evidence. Both bear the burden of evidence. The only issue is who has already met that burden. Then, and only then, does the burden shift to the other party. Even someone who wants to sit in the middle, and say “it’s 50/50 whether God exists,” bears a burden of evidence of justifying why they think the probability of God is so high (as high as 50%!) or so low (as low as 50%!).
Notice how you list a bunch of evidence (such as about what kinds of forces and agents we usually see operating in the world) without noticing you are thereby meeting a burden of evidence by just saying that. Thus, you actually took up the burden of evidence even while trying to argue you aren’t!
If our past experience had been different, e.g. if supernatural agency was commonly observed (e.g. if you lived through the Exodus or Genesis narratives, or went to school at Hogwarts, or anything akin), you could not say the theist bears a burden of evidence to show such agents can exist. So what you are actually doing here is showing how atheists have already met their burden of evidence (by showing there haven’t been any real acts of supernatural agency, and all claims to such have been false or mythical or falsely interpreted etc.).
Thus, the reason theists bear the burden of evidence is not for the reasons you state. But for the reasons I stated: that the requisite burden to disbelieve in God has already been met; so if theists want to change any atheist’s mind, they now bear the burden of evidence. Not because they are the ones making a positive claim (both are making positive claims: about the probability of there being a God), but because one side has already met their requisite burden of evidence, which now shifts the burden onto whoever would want the result to change.
Yes, I know, you’re just restating Bayes’ theorem in plain language. The probability of a hypothesis based on the evidence is determined by its prior probability, or P(h|e * k). The problem with the Bayesian approach is that priors and posteriors are not selected on objective criteria. How do we even assess which priors and posteriors are probable or know which ones to choose anyway? There’s just no objective way of going about this. How does one objectively decide what the state of knowledge is? Is there some kind of formula? It sounds simple, but doesn’t really work in more complex situations.
I think you misunderstand Shelley’s approach to showing the onus probandi is borne by the theist. What he’s trying to say is that the only really effective way of acquiring knowledge about the external world is inductively. So far, there’s no false dichotomy here. If we can’t develop a hypothesis based on observation of physical properties and then test to see if it’s true using experimental methods, the chances of that hypothesis being correct are nil. This is especially applicable to God/gods, since these are assumed by theists to interact with the physical universe. Science has already shown that we can’t test for metaphysical entities anyway, so it’s up to the theist to show why this is false, hence Shelley’s statement: “the onus probandi rests on the theist.” It’s more of a methodological, than an epistemological argument, but it does remove the element of subjectivity found in all Bayesian analyses.
P(h|e*k) is the posterior probability, not the prior probability.
And all human beliefs work this way. We almost never rely on methods to estimate probabilities that are any more objective than we are doing in history. Most human knowledge doesn’t come from science. It usually is simply our personal estimate of how likely something is given all the information we have about the world. Since “we are stuck with only subjective access to reality” is true of all epistemologies, it therefore can be a criticism of none of them.
And yes, there is a formula. It’s called Bayes’ Theorem. It logically entails that you cannot believe the posterior probability of a thing is X unless you also believe in a prior and likelihoods that entail X. And once we realize this, we can check the validity of all our beliefs, no matter how subjective they are, by testing whether we have coherent beliefs, which means asking:
(1) Does our assumption of the posterior match our assumption of the priors and likelihoods?
And:
(2) Do our priors and likelihoods match our information about the world?
All human knowledge operates this way. There is no way to escape it. Bayes’ Theorem simply is a description of what we are all already doing all the time, in all domains of knowledge.
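To make the coherence tests above concrete, here is a minimal Python sketch with entirely hypothetical numbers: Bayes’ Theorem fixes what posterior a given prior and set of likelihoods entail, so a claimed posterior that doesn’t match those inputs fails check (1).

```python
# A minimal sketch of the coherence check described above, using made-up
# numbers: any posterior you claim must follow from the prior and
# likelihoods you also claim to believe.

def posterior(prior: float, lh_if_true: float, lh_if_false: float) -> float:
    """P(h|e) by Bayes' Theorem for a binary hypothesis h vs. ~h."""
    joint_true = prior * lh_if_true          # P(h) * P(e|h)
    joint_false = (1 - prior) * lh_if_false  # P(~h) * P(e|~h)
    return joint_true / (joint_true + joint_false)

# Hypothetical degrees of belief (assumptions for illustration only):
prior = 0.5        # uninformed starting point
lh_if_true = 0.1   # how expected the evidence is if h is true
lh_if_false = 0.9  # how expected the evidence is if h is false

p = posterior(prior, lh_if_true, lh_if_false)
print(round(p, 2))  # 0.1

# Coherence test (1): does a claimed posterior match these inputs?
claimed = 0.5
print(abs(claimed - p) < 0.01)  # False: believing 0.5 here is incoherent
```

The point of the sketch is only structural: whatever numbers you plug in, the posterior is entailed by them, so believing a different posterior while keeping those inputs fails the coherence test.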
It’s clear you have not read Proving History. If you want to understand how Bayesian reasoning describes all human reasoning, you have to read this. Otherwise you are just wasting everyone’s time here.
But to get back to your mistake: you seem not to know that k is previous e. In all Bayesian formulas, anything we put in k (background information) was previously e (evidence), and all prior probabilities are the posterior probabilities of previous runs of the equation with k as e. This is what I am trying to explain to you. Any probability you have reached about P(God) is based on k, which is your previous e, and therefore it’s always based on evidence.
There is therefore no sense in which you bear no burden of evidence. You absolutely do bear it: you must be able to prove that what you are saying is in k is actually in k. You have, presumably, already met that burden: you’ve already surveyed the evidence, run the posterior probability estimate from it, and dutifully moved it from e to k. That’s how you get to any probability that God exists, whether low or high or middling.
Now it is the theist’s burden to present some new e, some e you never found, that will move the posterior probability you derived, to one the theist believes correct. That’s why the burden is now on them. It’s not because they are the ones making a positive assertion. You both are making positive assertions: you of a low P(God), they of a high P(God). So you are wrong. The reason the burden of evidence is on them now is that you have already met your burden of evidence: all that stuff you put in k to get your P(God) from.
There is no logical way to escape this. It’s simply what has factually happened. And denying it’s what happened is denying reality.
You’re assuming that there’s only one kind of statistical inference, Bayesianism, and that all human knowledge operates according to the postulates of Bayes’ theorem, which is false. You do realize there are different approaches, like the frequentist approach to probability? Which uses the likelihood P(E|~H), leading to rejection or acceptance of the null hypothesis?
It’s simply not true that all epistemologies are subjective. In Bayesian statistics, choice of priors is always subjective since there’s no objective criteria allowing you to choose the priors you want to use. Contrast that with frequentism, which is actually objective because 1. it does not rely on priors 2. it determines probability based on a large number of repeated measurements 3. no statistician/experimenter will ever disagree with your p-values 4. anyone can replicate the series of repeated measurements to verify your p-values, and 5. it’s the dominant kind of statistical inference in science for a reason. This is why your findings in OHJ will never be accepted by the mainstream because outcomes depend on priors, which are subjective. This is not the case with things like Archimedes’ or Boyle’s law, which we can repeat over and over again and still end up with the same results.
Of course, not all knowledge is scientific, but our best way of knowing anything is through empirical observation, even with things like history, which also includes scientific disciplines like archaeology, DNA analysis etc. The utility of an endeavor is based on the extent it can incorporate induction into its methodology. This is why rejection of god based on methodological considerations is such an effective one. God is always supposed to interact with the natural world in some way, which makes it falsifiable. So far, no god’s activity has ever been detected and the probability a god will be detected in the future is basically 0%. Furthermore, god’s existence cannot be proven inductively because it is a metaphysical substance and these are logically impossible. So unless the theist can show otherwise, using induction and overcoming the logical barrier, god does not exist.
No, I am not assuming there is only one kind of statistical inference.
You really, really need to read Proving History, where I even discuss other forms of statistical reasoning and how they all reduce to Bayesian inference in empirical reasoning.
In other words, we are only talking about the logic of how we reach conclusions about epistemic probability. Nothing else is relevant here. Just what makes a conclusion about what’s likely to be true logically valid. And I demonstrate, with a formal deductive syllogism, that this is always Bayesian. All other techniques merely serve that end.
I also demonstrate that frequentism reduces to Bayesianism precisely at this point: when we convert data into a conclusion about what’s probably true. I have a whole section on that in Chapter 6 of PH. Read it. Please. And don’t keep arguing with me until you have. Because you are wasting everyone’s time here if you won’t even respond to what I’ve already demonstrated and explained about this.
This is especially true with null hypothesis reasoning, which is not an epistemic probability. This is actually a fundamental defect of frequentist statistics: all it can ever do is tell you how probable it is that some data arose by chance. It can never tell you how probable any other hypothesis is. Countless mathematicians have pointed this out. It therefore can’t be used to reach any conclusion about God, for example. If you don’t understand what I just said, read my book. Then hopefully you will understand what I’m talking about. Because right now it’s clear you do not know what I’m talking about.
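The distinction just drawn can be illustrated with a short Python sketch (all numbers hypothetical): the same low “probability of the data by chance” yields very different epistemic probabilities for the hypothesis depending on the prior, which the frequentist calculation alone never supplies.

```python
# Illustrative sketch (hypothetical numbers): a frequentist tail probability
# P(data | chance) is not the epistemic probability P(hypothesis | data).

def posterior(prior: float, p_data_if_h: float, p_data_if_chance: float) -> float:
    """P(h|data) for a binary choice between h and 'chance'."""
    num = prior * p_data_if_h
    return num / (num + (1 - prior) * p_data_if_chance)

p_data_if_chance = 0.05  # data arises "by chance" 5% of the time (p-value analogue)
p_data_if_h = 0.80       # how expected the data is if h is true

# Same data, different priors, very different epistemic conclusions:
p_skeptical = posterior(0.01, p_data_if_h, p_data_if_chance)
p_generous = posterior(0.50, p_data_if_h, p_data_if_chance)

print(round(p_skeptical, 2))  # 0.14: h still improbable despite the low p-value
print(round(p_generous, 2))   # 0.94: the same data now makes h highly probable
```

The “significant” tail probability is identical in both runs; only the prior (which frequentism supplies no value for) changes the epistemic verdict on h.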
The bottom line is, your belief regarding the probability that God exists, no matter what you believe that probability to be, is based on evidence. It is therefore based on having met an evidential burden. No attempt to deny this will ever be sound or valid. It is therefore false to claim that the burden of evidence only falls on theists. It falls on you both. It only now falls on the theist because you have met your burden already—including, for example, all your observations of “non intervention,” so you keep proving my own point with your every assertion here.
If you do not understand this, you will never be able to grasp what “atheism” or “theism” even means as a word in actual linguistic practice, much less what warrants asserting either.
Meanwhile, as for the completely unrelated question of the historicity of Jesus (why you suddenly started talking about that in an article about the definition of atheism and the burden of evidence escapes me), you are also wrong about everything again. For example, the claim that priors are merely “subjective” is completely false. I demonstrate this in Proving History. I demonstrate this in OHJ. If you don’t understand why it’s false, you need to figure it out. Because until you respond to my arguments for it being false, you are not addressing anything I have ever said or argued about this. So why bother?
If you are too lazy to read peer reviewed books, you can start with my article here. And if after reading that you still don’t get it, please follow up with this. Or even this. Hopefully by then you’ll know what you are talking about and thus be able to produce a relevant and useful criticism here, instead of all this uninformed and irrelevant stuff you’ve been armchairing at me. Get informed. Then criticize. Not the other way around.
Genuinely please do that.
Given the subjectivity of priors, Bayesian statistical inference isn’t warranted when it comes to establishing burden of proof. If we want to be rigorous, thereby establishing the burden beyond a reasonable doubt, we should rely on a frequentist approach. Scientific research has always relied on frequentism for a reason.
We know that, according to Christians or any other group of religious believers, god interacts with the physical universe. This makes god an hypothesis that can be empirically verified. Well you know what? The literature on things like prayer, faith healing and other supposed interactions with “the divine” is voluminous. There have been numerous studies and meta-analyses of prayer, but none have found any conclusive evidence of prayer’s efficacy; hundreds of cases of faith healing have been scientifically examined, but not one has ever been verified. If we conduct studies of prayer and faith healing over and over again, we will always end up with the same p-values, which can be verified by others. Therefore, we can conclude there is a near-zero probability that prayer, faith healing etc. works. This means our burden of proof has been met, putting the ball, the onus probandi, in the theist’s court.
What the theist has to do now is produce studies showing that god really answers prayers and heals the sick, to get from a p-value of about 0 to a p-value of about 1, an insurmountable task given the massive failure of Christianity and all other religions to document their claims.
Priors aren’t subjective in the sense you are talking about. You clearly don’t know how Bayesian epistemology or even probability theory works. Read the articles I directed you to. Until you know what you are talking about, you aren’t saying anything relevant or correct here and are just wasting everyone’s time.
Meanwhile, at the stage of uninformed priors, theism always starts at 0.5, not “near 0.” We only get it to zero by adding tons of evidence (of science and history; of what hasn’t been found or observed; and so on). Thus atheists bear the same burden to lower P(God) from its initial 0.5 as theists bear to raise it from there. But atheists have usually already well met that burden (we can cite vast arrays of evidence lowering P(God) to near zero). Therefore theists only bear the burden now because of that.
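A sketch of that updating in Python (with made-up likelihoods, purely for illustration): start P(God) at the uninformed 0.5, then fold in successive bodies of evidence, each run’s posterior becoming the next run’s prior as e moves into k.

```python
# Sketch of sequential Bayesian updating (all likelihoods hypothetical):
# each run's posterior becomes the next run's prior, i.e. e moves into k.

def update(prior: float, lh_if_true: float, lh_if_false: float) -> float:
    """One Bayesian update for a binary hypothesis."""
    num = prior * lh_if_true
    return num / (num + (1 - prior) * lh_if_false)

p = 0.5  # uninformed starting prior for P(God)

# Each pair: (P(e|God), P(e|~God)) for a hypothetical line of evidence.
evidence = [
    (0.2, 0.9),  # e.g. absence of any detectable divine intervention
    (0.3, 0.8),  # e.g. failure of all tested prayer and healing claims
    (0.1, 0.7),  # e.g. natural origins of religious belief itself
]

for lh_true, lh_false in evidence:
    p = update(p, lh_true, lh_false)  # posterior becomes the new prior

print(round(p, 3))  # 0.012: driven well below the initial 0.5
```

This is the burden-meeting process in miniature: the final low P(God) is not assumed, it is the output of evidence already processed, which is exactly why the burden then shifts to anyone who wants to move it back up.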
You seem stalwartly insistent on never acknowledging or understanding this. Why?
In case you lost track, here are the articles you clearly need to read:
If you are too lazy to read peer reviewed books, you can start with my article here. And if after reading that you still don’t get it, please follow up with this. Or even this. Hopefully by then you’ll know what you are talking about and thus be able to produce a relevant and useful criticism here, instead of all this uninformed and irrelevant stuff you’ve been armchairing at me. Get informed. Then criticize. Not the other way around.
Genuinely please do that.
What do you think I mean by subjective? I’m not saying you’re pulling all of your priors out of thin air; obviously priors are constrained by background knowledge. Subjective =/= arbitrary, if that’s what you think.
Bayes’ Theorem works well when we have hard data, such as in fields like meteorology, where we can forecast rain, snow or hail with a high degree of accuracy. We can look at the times it’s rained, snowed etc. to calculate probability it will rain or snow on a certain day.
But in fields like history, where there’s very little hard data to come by, how on earth are you calculating probabilities? How are you assigning values to P(B|A), P(A) and P(B) to get P(A|B)? Where are they coming from? This is what I am asking. Since none of your probabilities are calculated on the basis of hard data, like objectively measured, empirically verifiable phenomena, they are obviously subjectively chosen. Which means that it’s just your opinion, making your analysis of Jesus’s historicity just another long exercise in confirmation bias.
Your assessment of Jesus’s historicity totally rejects mainstream consensus, which acknowledges that Jesus really existed. So your assignment of probabilities is based on your own findings in OHJ, which makes it not only more subjective, but viciously circular. In order to accept the probabilities you assign your parameters, you would have to assume that your findings are more probable than the mainstream consensus, established after 300 years of biblical scholarship. But how probable is that?
Mario, all human empirical reasoning is Bayesian. Even when we have poor data. There is no way to get hard results from poor data. So avoiding Bayesian structure won’t help you. Only Bayesian structure can help you validly reach conclusions under conditions of uncertainty. We compensate for poor data with large margins of error. This is explained in mathematical detail in Proving History and On the Historicity of Jesus. Please read them.
On consensus, I demonstrate the consensus is invalid in Proving History. I cite and quote there many peer reviewed scholars who agree. Please read it.
How is frequentism reducible to Bayesianism when the two approaches to probability are derived from very different sets of axioms? This has to be explained if “all human empirical reasoning is Bayesian,” unless you’re saying you have developed a new theory of probability supplanting Kolmogorov’s axioms and Cox’s Theorem. And I take it this synthesis is to be found in OHJ? Why haven’t you submitted your groundbreaking discovery to some prestigious mathematical journal, like the American Journal of Mathematics?
If you really want to know how frequentism reduces to Bayesianism, read my whole concluding section on that in Proving History, Chapter 6. Which was peer reviewed by a mathematics professor.
You have repeatedly showed me you are too lazy and disrespectful to bother reading anything to become informed. So either stop asking me questions like this or actually read what you are told and ask me informed questions about what you read.
I will start blocking you if you don’t start acting responsibly and do this.
That’s not true, I have never trolled, spammed or name-called on this blog. I am always respectful. But if it makes you happy, I will spend the $42 needed to purchase your book and I won’t ask you any more questions about Bayesianism until I’m finished reading it. Then we will discuss Bayes’ Theorem again, but with actual quotations from the book.
Huzzah. I have tried to have this discussion with both McRae and Kate (BionicDance) who, while defending the claim that rocks and babies are atheists, readily acknowledges that adjectives are the proper way to refine that concept. (Hi, Steve.)
Binary distinction is fine for illustration purposes, but doesn’t hold much water by itself. McRae’s suspension of judgment, by contrast, is to me just a cop-out. I agree that it seems to emanate from a psychological or emotional attachment to some aspect of theism that he has but, in cognitive dissonance, cannot square with his real-world experience of a naturalistic existence. It is extremely tiresome, and I compliment you on putting up with it. I gave up long ago.
Personally, I like to deal with the matter by reverting to the definition of theism, which I take to mean the belief in a supernatural entity that has attributes that can be known and that interacts with the natural world.
Thus, as you clearly state, any reference to an entity that is incapable of holding such a mental state is irrelevant to the conversation. And any entity that can evoke such a mental state can (and should) simply state that belief in a probabilistic manner. Anyone holding a true 50/50 position is either uninterested in the matter or is very confused, period. I imagine such a person to be on a high-wire, leaning into the wind as it blows from alternating directions. And I don’t put much stock in beliefs that are that winsome.
As an aside, I just watched Steven Pinker saying exactly what you said above about language and how it evolves – organically, not top-down. It seems to me that Steve is trying to be a one-man crusader in the way that second-wave feminists were successful in coining a new word for a woman’s title that did not connote marital status. But even that is not the same (or as difficult) as trying to mandate an obscure definition of an already commonly used term.
I myself have had two long “debates” with Steve over this issue. I also repeated over and over that it didn’t really matter what descriptors people used, what matters is the actual belief states people have. Yet I also argued, as you did, that different belief states can fall under a single set (such that “agnosticism” is a category in the set “atheism”). I also had to reference different philosophers who explicitly define atheism as “the lack of belief in a god”, including in the Oxford Handbook of Atheism, to counter his argument that there is only one “official” philosophical definition. And I wasn’t the only one to say these things to him over and over. Even as a psychologist, I can’t quite fathom why Steve is so obsessed with this issue or why he cannot accept plain facts that make his position nonsensical.
Practically speaking whether you are talking about the existence of intelligent life on other planets or the possible existence of a Deity being responsible for the creation of this universe wouldn’t it make sense to have a category for the unsure or undecided to separate them from the certain (pro or con)?
Think of it this way. When people are polled in this country about their position on a certain issue you normally see 3 options: For, Against, or undecided.
Why? Because it would be unfair and irresponsible to lump the undecided into one of the 2 other categories.
So we call those people the “undecided”.
So why shouldn’t there be such a category when speaking of people’s beliefs with respect to the existence of God? We have to call them something so why not call them agnostic? Once again they lack the conviction needed to call them a Theist or Atheist.
That’s just not how our language works though. No argument from “shouldn’t it work this way” is relevant, because language only works the way it actually works. So you might “want” it to be a clean and neat trinity designator (atheist, agnostic, theist), but it just isn’t.
Atheist too routinely includes agnostic. It’s a binary term: atheist and theist. In common parlance one simply has to get more specific to communicate or understand anything more specific, e.g. an “agnostic atheist” is one who is undecided but leans on the “no” side of the binary; “agnostic theist” is one who is undecided but leans on the “yes” side of the binary. If one simply says “agnostic” they usually communicate agnostic atheism well enough, but not always (you still have to ask to get clarification on what exactly they mean), and in common parlance they are still also therefore an atheist: one who lacks belief in God. That’s just the way the language has developed to operate.
I will admit to laughing out loud when the conversation turned towards tacos. Mostly because I was reminded of this Existential Comic making the same point, just with hot dogs.
https://existentialcomics.com/comic/268
Indeed. There was a whole viral meme about the tacos-sandwiches dilemma that went all over the internet and various conferences a few years ago. I was assuming readers would already know that’s what I was referencing. Although maybe it wasn’t as widely known as I thought.
Thank you for the very fine analysis. At some point I hope you cover the various schools of atheism. One school is the “positivist” school which says a “supernatural” God is neither analytic nor empirical hence meaningless. Since meaningless beings or concepts cannot exist then God cannot exist. The actual proof for this is somewhat more complex than I have laid it out but it is certainly worth exploring for your readers.
What you are referring to is called noncognitivism. It’s not a valid position to hold generally, as it only applies to some and not all (in fact not even most-in-use) definitions of God.
You can validly be a noncognitivist with certain definitions of God (the unfalsifiable-even-in-theory whack-a-mole kind). But most believers actually embrace a fully cognitivist understanding of God, one that actually can be falsified and thus is meaningful; they just refuse to accept the actual state of the evidence or to reason coherently from it. Thus, you must always be either a theist—or an atheist in the fully conventional sense. You cannot be “just” an atheist with respect only to meaningless definitions of god; as that leaves all the more widely prevalent definitions of god, which you must either reject on cognitive grounds (and thus, you are not simply a noncognitivist), or accept, which makes you a theist (and I mean “reject” here in the broadest sense of simply “not accept the existence of”).
This is true even if you want to try, in emulation of McRae, to deny the prevalence of cognitive beliefs in gods worldwide, and try to insist every or most every theist is a “noncognitive” theist. That would be contrary to fact, IMO. But its being so doesn’t matter, because I myself can formulate a cognitively meaningful definition of God that you must still either accept or not accept the existence of. So there is no way to remain an atheist solely on noncognitivist grounds. It’s logically impossible to do that.
Help me with this. Doesn’t atheism depend on how one defines “god”? It seems to me that the Common definition of “theism” specifies that one accept the theist’s definitions of god. In my case, I believe that gods exist in the important sense that a god is a mental, social and physically-based construct with social effects in the material world. Jesus in this sense is real, AND Jesus is mythical. The Catholic Church is an example of how (fictional) Jesus has resulted in artifacts and effects in the material world. My objection to Christianity is not the existence of Jesus. My objection is that Christian dogma has Jesus all wrong. Jesus is real and perhaps important because Jesus is a mythical construct (or rather an evolving family of constructs) who live in the realm of individual and social consciousness, and who can effect consciousness for better and for worse. In the past humanity arguably had more need for this kind of scaffolding of consciousness. Now we have other better ways to do that. But gods that effect consciousness are real even if based in myth. Does this make sense to you?
Thus I idiosyncratically self-define as gnostic (even with the baggage of that word). I can know and even love Jesus as a real, mythical creature (or family of creatures). I find joy and fulfillment in connecting with the menagerie of mythical and fictional beings living in me and in society. These beings include those living in fictional stories. Consciousness, I sense, IS this menagerie, talking to itself.
Fiction is “an imaginary pond with real frogs.” I forget who said that.
Thanks for listening!
Chuck
You can call yourself whatever. But if you don’t believe any actual superhuman entity governs in the world, you are an atheist in common parlance.
I think common parlance varies a great deal. The commons in this case is fragmented according to underlying worldviews. The Pew 2014 Religious Landscape Study simply asks “Do you believe in God or a universal spirit?” followed by “How certain are you about this belief?” I respond very affirmatively to these, while being uneasy because “believe in God” is typically used to denote “religious faith” or “trust in God,” which I am very low on. I think there are a lot of people like me who understand and seek to know God as a form of consciousness or as a “spirit” or as a kind of fact of human culture (but not as the creator of all material things). The only people I have met who think I am technically an atheist are fundamentalist Christians, and you and those who share your common parlance about atheists. It is possible to be a mythicist (and I am, thanks to you) while also being engaged by myths and the sources of those myths in human consciousness and history. Is it important to you to label me an atheist?
This is just begging the question. Shouldn’t you have tangible evidence of a god first, before assuming you can know him? Where’s the evidence? As an adult, you should have reasons for why you believe what you do, otherwise you’re not behaving rationally or thinking clearly.
That said, you can’t call yourself an atheist unless you’re willing to dispassionately look at all the evidence and realize that none of it supports belief in supernatural entities.
It’s not clear what you think a “God” is so I can’t answer that question. You were too vague above.
Most people assume a God is not a metaphor but an actual, individual, conscious superhuman person. We can test this by seeing how easily people can be tripped up by switching definitions on them, e.g. claiming you believe in God, then telling them you just mean a metaphor. Their surprise and confusion demonstrates they did not take the word “God” normally to mean that. A further test comes from looking at who hides this fact: many scholars and pastors who believe God is only a metaphor won’t say so publicly, specifically for fear of being condemned as an atheist and fired. Which tells us what the public really thinks an atheist is.
The fact that the meaning is variable is precisely why the meaning can never be assumed exclusive rather than inclusive. When someone says they are an atheist, you cannot from that fact alone know whether they mean uncertainty or certainty there is no god. Likewise, most people consider the idea of relegating God to a metaphor to be an unusual and suspiciously coy way to claim to be an atheist without actually saying so. Unlike “atheist,” “theist” in practice almost always means the supernatural kind, not the metaphorical kind.
Which means the proper question here is: Why does your being called an atheist bother you?
It can’t be for semantic reasons, if you don’t believe in the thing that the word “God” most commonly means among those who speak and hear the word. Because then it’s simply accurate labeling according to the most frequent usage. So either you do believe in something supernatural as a God (and not just “God” as a metaphor for god-free social systems or something) and are a theist in common parlance, or you do not, and you are an atheist in common parlance.
Which is why the only question left is why this bothers you.
Hello Mario, thanks for your reply. Because of the way the page is set up I am not sure I am replying directly to your note of Dec 5 but that is my intent.
Your reply is puzzling to me so I send it back to you for clarification.
You apparently did not read my original statement in which I outline my beliefs as a self-described gnostic who generally agrees with the RC description of a mythical Jesus.
Re your 3rd paragraph, I do not call myself an atheist. I am not an atheist but people confuse me as one. That is my whole point. I repeat my question to RC to you: Is it important for you to label me an atheist?
I don’t believe in supernatural beings as commonly understood. I do believe in (fallible, limited) mythical beings called gods that often inhabit and transform human consciousness as described empirically in the historical record.
What do you believe?
There’s no excuse for religious superstition in this day and age, since information is easily accessible. This isn’t the Dark Ages. Do you not think that as an adult, you should at least have a solid basis for your claims in scientific fact and logic before believing anything? Serious question, it’s not meant to be condescending.
Well, now that I’ve looked at your first post, you’re saying that god and Jesus are like imaginary friends, and you enjoy playing with your imaginary friends, but realize they’re only figments of your imagination based on Hebrew mythology, not real supernatural entities. This is not unlike a child – or a schizophrenic – who has a rich fantasy life with his imaginary friends, the difference being that you can apparently tell the difference between fantasy and reality.
Well if you’re just pretending god and Jesus exist, but don’t really believe they exist, you’re an atheist imo.
Dear Mario, since unintentional condescension from an adult pedestal appears to be your specialty, we may be about done here. Let me give it one more try.
Can I use an analogy or is that also too much of an imaginary act? I am attracted to and instructed by fiction and poetry, not only by the substance, also by the fact that these art forms are ubiquitous in human experience. Art and story telling are a big part of what makes us human (a scientific statement for you). And yes, bad art and bad stories can make us bad humans. I am with you on that one Mario, believe me. And get this: all of art is both imaginary and material. It exists in our imaginations and it gets crafted as objects.
For example I like (ok love) Bob Dylan. Bob is largely a fictional character who inhabits our imaginations and play lists. Bob sings beautifully to Sara, an imaginary character– or is she his wife? It does not matter to me. Likewise Shakespeare has shaped the way we use language and the way we think. Shakespeare’s characters are imaginary. Shakespeare himself is largely an imaginary character and may be the ghost of some other authors. Mythology is similar to me. Much of it is opaque or even offensive to me, but some of it sings. I “believe” that Dylan and Shakespeare as well as their characters are real (because of their material artifacts and influence on actual people) as well as imaginary. Would you also call Shakespeare and his characters, and Bob Dylan and his characters my imaginary friends? I can live with that. Bye, and have a nice adulthood.
Well, there’s nothing wrong with having imaginary friends you can play with and who you think are “real,” just don’t socialize with them or talk out loud to them in public and you’re good to go.
Maybe you are joking, but there can be something wrong with “having imaginary friends you think are real,” alas. See my article What’s the Harm.
Chuck, to be fair, I don’t think Mario meant “imaginary friend is real” in the same sense you mean “characters in Shakespeare are real.” To the contrary, I’m certain he meant it in exactly the opposite sense.
I believe what Chuck is saying is Jesus and god are “real” in the world of make-believe he has created for himself to play in, but outside this world of make-believe, he knows Jesus isn’t “real.” This isn’t problematic, as long as he can still distinguish between fantasy and reality.
You used to be so amused
At Napoleon in rags and the language that he used
Go to him he calls you, you can’t refuse
When you ain’t got nothing, you got nothing to lose
You’re invisible now, you’ve got no secrets to conceal.
@Chuck Palus, I’m trying to understand what the issue is. Please help.
1) You describe yourself as gnostic.
2) People call you atheist (though you do not yourself).
3) You do not believe in supernatural being as commonly understood (you see them as mythical, and the stories about them having changed the way humans think and act throughout history).
It sounds like this is directly tied to the section “Outcome Number One.” That is, you may be using language in a perfectly valid way, but you are trying to speak “Elvish” to people speaking “Common”. When someone calls you an atheist (in Common), they are correct based on the Common definition. When you say that you are not an atheist, you may well be correct when speaking in Elvish.
No one is saying you CAN’T disagree. No one is saying that you must identify as an atheist. But if your position conforms to the definition of atheist in Common, then your disagreement is in communication. It may well be incorrect for someone to call you an atheist in Elvish, but it would be equally incorrect to say someone speaking Common is “wrong” to call you an atheist (since you do in fact fit that definition).
Dr. Carrier:
It is possible that someone could hold an atheist position with respect to theology and theologically based Gods, but still be agnostic with respect to the possible existence of a Deity that is responsible for the creation of our Universe.
That person might be so concerned about the possibility of the unknown (what they don’t know) with respect to that question that they are honestly left in a perpetual state of uncertainty about it. So they can’t honestly take a position of certainty on it.
I do think that there is a need to separately categorize them from the outright believers and non-believers.
We need to call them something, so why not call them agnostics? Do you not believe that it is possible for anyone to hold an “agnostic” position (justified or not) on any belief or topic? If so, then why not this one?
I realize that from your point of view, with what we now know about our universe, combined with proper logic and reasoning, there is no good reason anyone should believe a God (of any kind) exists. But keep in mind that when it comes to a question of one’s own belief/disbelief, what we are talking about is one’s own disposition (whether it be a sound disposition or not). So as an atheist, if you can at least conceive of (acknowledge) someone holding the position of theist, then why is it so hard to conceive of (acknowledge) someone holding the position of agnostic?
The agnostic simply feels an uncertainty about knowing all there is to know to hold a firm belief or take such a certain stance. Even if they are wrong about that, it doesn’t matter, because once again we are talking about one’s disposition, not whether it is a rational disposition to hold. That is a completely separate question.
Even your 50/50 probability argument might not sway them if they are concerned enough about the possibility (or, in their minds, perhaps the probability) of not knowing what they don’t know.
If someone says that they are uncertain enough about something to warrant their holding a position of being agnostic on the topic, we should give them the benefit of the doubt. Because once again, we are talking about their disposition, regardless of how sound or logical it might be.
An agnostic is an atheist in common parlance. It doesn’t matter what god ideas they are agnostic about.
In actual fact everyone, literally everyone, is both an atheist with respect to some gods and an agnostic with respect to some other gods. The only thing that makes a difference is believing in at least one god, so only a theist can be a distinct term. Otherwise there is no such thing as an agnostic who isn’t an atheist, or an atheist who isn’t an agnostic. It’s a logical impossibility. I demonstrate this in my old original article on this (see my discussion there of Bumpypoo and Monkeybutt).
How would other historical figures fare if Carrier were to use the same Bayesian methods on them? I’m talking about someone like Herodotus or Diodorus Siculus, who are not mentioned by any contemporary sources at all, yet are regarded as having genuinely existed by scholars. Would these figures still exist after their probabilities were calculated by the Bayesian method? How can we even be sure of the validity of Carrier’s methods if we don’t know whether it would work on other characters commonly assumed to have existed? You could probably dismiss a large number of historical figures as non-existent using a subjectivist method of statistical inference like Bayesianism.
Here’s just one example from Dr. Carrier’s own blog (not that long ago): https://www.richardcarrier.info/archives/15665
I think the implication you might be making here turns out not to be accurate. When historical figures have as little evidence as Jesus (or other religious figures), historians generally DON’T hold their existence in high esteem. That could mean historians really don’t have as high a confidence as lay people might think (in the existence/words/actions of that figure), OR it could mean that the historical figure is so unimportant that their existence/non-existence has no real bearing on understanding the rest of history.
I would say in MOST cases where theists attempt to make comparisons between the historical analysis of their particular religious figures and other non-religious figures in history, they are almost always unaware of the actual evidence. They tend to inflate the evidence of their own religious figures because they are inundated with information about them (often conflating theology with history). They also tend to be unaware of the (sometimes massive) evidence that exists for other figures simply because they have never studied it.
But even Carrier’s analysis of the evidence for Spartacus isn’t a Bayesian one, unlike the one for Jesus. Doesn’t it strike you as odd that of all historical figures, only Jesus is singled out for ahistoricity?
It’s true we don’t have eyewitness accounts or physical evidence of Jesus, and that compared to other figures, the evidence is weak. However, eyewitness accounts and physical evidence are what you’d expect for famous rulers and generals, like Alexander, Hannibal and Caesar. If we were to apply that same standard to minor figures, then historical studies would collapse into an ocean of uncertainty. Many figures who could easily be doubted are accepted as historical: Herodotus, Diodorus, Pythagoras and the other presocratics, Gilgamesh and so on. The bar isn’t as high as you think it is.
Although it’s impossible to know what Jesus said or did with any high degree of certainty, he is mentioned by a contemporary and there are biographies of Jesus situating him in an exact historical time and place, plus he’s mentioned in a near-contemporary source, Josephus (which is accepted as authentic by the vast majority of scholars, despite what Carrier says). That’s more than enough to meet the bare threshold of actual historicity.
Yes, it’s Bayesian, Mario. I discuss priors and likelihood ratios. In plain English. If you don’t understand how English is mathematical, read Proving History, where I not only show that it is, but even provide a nonnumerical flow chart for Bayesian arguments, which is exactly what I use in my Spartacus article and every other.
Please. Read. My. Books.
Then comment.
Not the other way around.
Same for your uninformed claims about Josephus. Read what I’ve already said about that. Respond to what I’ve already said about that. Stop wasting our time by not doing that and instead saying things already refuted a dozen times by now. Even in the articles on Spartacus etc. I discuss the difference between their sources and Josephus for Jesus. So you didn’t even read the articles you were told to read. Stop behaving like this.
Putting aside that Richard has in fact argued against this whack-a-mole many a time, from Tiberius to Spartacus, you’re missing the point. Let’s say you found someone, say Herodotus for the sake of discussion, that really did have limited evidence for his existence (and, what you leave out for a strawman argument, an actually-supported contrary theory of who the person was and how stories about them emerged). Well, guess what? You would have shown that historians are being irrational assuming that person existed.
This isn’t new. Whether Homer existed or not, Troy existed or not, Moses existed or not, etc. has been debated. Opinions have changed as better data and methodologies were created.
You’re making an irrational argument from hypocrisy. It’s a fallacy.
What’s particularly hilarious about your focus on attacking Bayesianism is that it’s superbly irrelevant. Remove Bayesianism from OHJ, somehow, I guess. So what? Richard has made an argument about two possible theories about a person, and then argued for them. Can you actually specify what would be different about this approach in Bayesianism?
So let’s consider Herodotus for a second. Why is there some certainty he existed? Because we have some copies of his books. Usually, fictive people don’t write books. That’s not like Jesus. So… do you concede that that’s a relevant difference in terms of assessing if one is mythical and one isn’t?
If you go to the Wikipedia on Herodotus, you’ll see that it is conceded that the modern account of Herodotus is based on his own writing and on much later sources that are presumed to be drawing on a prior oral or scholarly tradition. So in fact the scholars of Herodotus are fully aware that there’s a margin of error around Herodotus. But it’s perfectly reasonable to say that people who write books and include information about themselves usually exist. One would need special evidence otherwise which isn’t there.
What baffles me is figuring out how you would do it. Do you admit it’s possible that Herodotus didn’t exist? Is it possible that scholars are just assuming he did because of inherited wisdom?
Bayesianism is important for Carrier presumably because you cannot have a logically valid conclusion about the probability of historical uncertainties without it. If Carrier did not rely on the Bayesian method, his argument for Jesus’s ahistoricity would be even less valid than it already is. Bayes’ Theorem is clearly not “irrelevant” as you seem to think. However, it’s not applicable to historical scholarship for a number of reasons:
We don’t have the hard data to accurately estimate probability using Bayes’ Theorem, given our fragmentary knowledge of the past.
The accuracy of our probability estimate depends on how accurately our parameters approximate the real world. Estimating the probability of these parameters is an entirely subjective process. Basically Carrier is just arbitrarily assigning numbers to parameters based on his personal estimate of their likelihood, nothing more. Until he can show that his priors have been objectively chosen, he’s basically using Bayes’ Theorem to prove what he already believes is true, otherwise known as confirmation bias.
If we wanted to, we could easily make up arguments postulating the nonexistence of Herodotus or Diodorus, the same as we can for Jesus. Just because we have writings attributed to a single individual does not mean they necessarily wrote it or that they existed. We have many examples of this, such as Hermes Trismegistus, Orpheus and Homer, who have had large bodies of writings attributed to them, even though their existence is highly uncertain.
There is no other valid way to argue to an empirical conclusion about anything without using Bayes’ Theorem, whether covertly or overtly, Mario. If you don’t understand that, read Proving History, in which I prove this, both formally and informally.
And no, no Sound Bayesian argument can produce the conclusion that Herodotus or Diodorus didn’t exist. The evidence for them is entirely superior to that for Jesus, Homer, Orpheus, etc.
And no, the writings of Diodorus are not relevantly like the writings of Homer, Orpheus, etc. So the same conclusion cannot be reached. The latter are achronical (they have no contemporary anchors; they even span multiple centuries in style and content, thus ruling out any single author) and impersonal (they never self-reference, e.g. “Homer” never talks about himself or his own times or judgments or interactions with the world; likewise Orpheus). We have no examples of any writing like Diodorus’s in which the author didn’t exist. In all examples like his that we have, the authors existed. Thus the prior probability is high that he existed. This is exactly the opposite of Homer, Jesus, and so on.
Please learn how to use Bayesian reasoning before attempting to criticize it. Please read the articles and books you’ve been directed to. They show how to correctly reason about priors from reference classes. Learn.
In the Argonautica Orphica, Orpheus identifies himself by name and does indeed talk about himself, his times and his interactions with others.
My point is that the Bayesian method, given its subjective priors, can be used to prove just about anything when not dealing with hard data. Even Richard Swinburne is able to “prove” Jesus is the resurrected god incarnate using the Bayesian method. His argument is just as sound as yours, but no less subjective.
The Argonautica Orphica was written in the early middle ages. So no historian regards it as evidence for Orpheus being historical.
Please learn how to use evidence correctly.
It was written during late classical antiquity. Of course it doesn’t prove the historicity of Orpheus, I cited it as a counterexample to your statement that pseudonymous works are always impersonal. If it helps, Hermes Trismegistus appears in his own dialogues.
As someone just pointed out to you, I’ve done that. Repeatedly. See a full list in the opening paragraph of my article on Spartacus, with links or page numbers. For an example of my method being used to model a different historicity question in Biblical studies, see Wallach’s peer reviewed example. For a logical analysis of why the results aren’t what you suspect, see my article on Hannibal.
I have to say, again and again, you seem weirdly ignorant of what I’ve written, on any subject you comment on. Can you please try harder to actually check what I’ve written on these subjects and actually read them before commenting?
The results would be the same for Herodotus, BTW, for whom we have much more evidence than for Jesus (including contemporaries, contrary to your not knowing that), so the likelihood ratio strongly favors his historicity, and who isn’t in any myth-heavy reference classes (he didn’t begin as a worshipped celestial superman, nor get placed for the first time on earth in patent mythologies), so he doesn’t start with any low prior either.
Likewise Diodorus, who is more poorly attested, but does not belong to any reference class that renders the base rate of historicity low, and who gives us an enormous collection of evidence for his existence and time of living: his own extensive writings, which greatly weight the likelihood ratio in favor of historicity. Precisely the sort of thing we don’t have for Jesus!
My point is this: how do you know your Bayesian analysis of Jesus’s historicity is valid if you haven’t conducted similar Bayesian analyses of other historical figures yourself? Without anything to compare it with, how can you assert with any degree of confidence that your analysis of Jesus’s historicity is correct? So far, you haven’t shown me anything.
The evidence we have for Herodotus and Diodorus and for Jesus is the same. For example, we have Paul, a contemporary of Jesus, writing letters mentioning Jesus 20 years after his crucifixion. Paul never denies having met Jesus before his vision on the Damascus road, so there’s no way of knowing for sure. We also have a gospel, Mark, dating from the 60s, but based on a much earlier oral tradition about Jesus dating from the 30s.
Regardless of whether you think Paul’s Jesus is some kind of archangel or whatever, you cannot deny that all of this literature situates Jesus within an actual historical time and place, that of 1st century Roman Palestine, in the province of Judea, wandering around Galilee and Jerusalem, in the years 30-5 AD, under the Julio-Claudian dynasty. We don’t see any of this for obviously mythical characters. Why is that?
Further, the bar for historicity is set quite low. Most scholars will accept someone as historical as long as there is some evidence that may be construed as indicating actual existence. For example, there’s a dearth of contemporary evidence for Gilgamesh, but most scholars will accept him as historical because he’s mentioned in Sumerian king lists, albeit centuries after his purported reign. If Gilgamesh is considered historical, a person for whom we have very little evidence, why not Jesus?
I have conducted similar Bayesian analyses of all those other persons I linked you to. If you don’t understand how my English words convey mathematical premises and conclusions, read Proving History. It will explain.
As for the rest, you clearly have never read On the Historicity of Jesus. Never criticize it here again until you do read it and respond to what it actually says.
Forget OHJ.
Most historians believe Gilgamesh really existed and was really a Sumerian king, even though he’s shrouded in myth and there are no contemporary documents attesting to his existence. Do you believe Gilgamesh existed? Why or why not? Why reject Jesus but accept Gilgamesh as historical? Are you aware there’s even less evidence for Gilgamesh than there is for Jesus?
I am not aware of any living expert who believes Gilgamesh was a real person.
Please cite who you are talking about.
Here’s my source on the historicity of Gilgamesh:
Most historians generally agree that Gilgamesh was a historical king of the Sumerian city-state of Uruk,[6][7][8][9] who probably ruled sometime during the early part of the Early Dynastic Period (c. 2900 – 2350 BC).[6][7] Stephanie Dalley, a scholar of the ancient Near East, states that “precise dates cannot be given for the lifetime of Gilgamesh, but they are generally agreed to lie between 2800 and 2500 BC.”[7] No contemporary mention of Gilgamesh has yet been discovered,[8] but the 1955 discovery of the Tummal Inscription, a thirty-four-line historiographic text written during the reign of Ishbi-Erra (c. 1953 – c. 1920 BC), has cast considerable light on his reign.[8]
https://en.wikipedia.org/wiki/Gilgamesh
Ah. I didn’t know that. I see they have evidence supporting the historicity of Gilgamesh that we don’t have for Jesus, such as government records and king lists preserving information from his time (that’s how they can date him). For example:
Yes, but none of these references to Gilgamesh are found in contemporary sources. They were written centuries after his reign (circa 2800 to 2500 BC), which is exactly what we don’t have for Jesus.
That’s not quite true. Follow the link. Near-contemporary and credible sources for Gilgamesh are listed (remember, texts typically long predate the latest extant manuscripts of them, e.g. the oldest manuscript of Tacitus dates to the 10th century but preserves a source dating to the 2nd century; ditto Sumerian texts vs. “manuscripts”).
Example. Example. Example.
We don’t have anything like this for Jesus.
The matter is not simply relative dating of manuscripts. It’s of reference class vs. likelihood ratios, i.e. how typically are government records of rulers faked, particularly records naming kings confirmed real (reference class) and how likely is it that all the evidence we have for Gilgamesh would exist if Gilgamesh didn’t (likelihood). When we add those up we get a probability favoring historicity (albeit not strong, which is why you’ll note historians will still express hesitation in their certitude of Gilgamesh’s existence). This doesn’t happen for Jesus. We only have mythical sources, and contemporary letters that don’t clearly place him on earth. No state documents. No ancillary corroboration.
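The “add those up” step being described here is the odds form of Bayes’ Theorem: posterior odds = prior odds × likelihood ratio. A minimal sketch of that arithmetic, with purely hypothetical numbers chosen for illustration (not Carrier’s actual estimates for Gilgamesh or Jesus):

```python
def posterior_probability(prior, likelihood_ratio):
    """Odds-form Bayes: convert a prior probability to odds,
    multiply by the likelihood ratio, convert back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical illustration: suppose kings named in government records
# are historical, say, 95% of the time (the reference-class prior), and
# the surviving evidence is, say, twice as likely if Gilgamesh existed
# than if he didn't (the likelihood ratio).
p = posterior_probability(0.95, 2.0)
print(round(p, 3))  # prints 0.974
```

A likelihood ratio of 1 leaves the prior unchanged, which is why evidence that is equally expected on both hypotheses (like a late mythical source) moves the probability nowhere.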
But couldn’t we make a similar argument for the historicity of Jesus using the same kind of evidence for the historicity of Gilgamesh?
For example, the historicity of Gilgamesh is based on his association with Enmebaragesi in the SKL, the Tummal inscription and the epic of Gilgamesh, whose existence happens to be supported by archaeological evidence (vase fragments). We could make the same argument for Jesus. In all of our primary sources, Jesus is always associated with Pontius Pilate, whose existence is also supported by archaeological evidence (the Pilate stone). This archaeological evidence verifying Pilate’s historicity would enhance the notion that Jesus is also historical, just as it does for Gilgamesh.
In fact, using this method would strengthen the case for Jesus’s historicity even more than for Gilgamesh. The primary sources and archaeological evidence either date from the lifetime of Jesus or not long after it, whereas half a millennium separates the archaeological evidence for Enmebaragesi and the primary sources for Gilgamesh. Sources composed within living memory will be more accurate and reliable than those composed centuries later, which means Jesus has a much firmer historical basis than Gilgamesh.
There is no archaeology that links to Jesus. That Pilate existed is not evidence Jesus did. They aren’t even related by family much less succession of reign. So there is no analogy here.
Your reasoning is consistently always crap. And consistently always ignores everything I actually argue. It is for the latter reason, after multiple warnings, that you are hereby banned from commenting on my blog ever again.
I do not want to hear from you.
I actually do have to dispute that most people hear “atheist” and think “someone who doesn’t believe in god irrespective of certainty.” That may be more true than it used to be due to popularization, but pretty much every YouTube atheist I’ve seen has had to respond to some portion of people saying they’re not really an atheist by explaining their degree of certainty on the topic. Some of these are bad-faith arguments being made by apologists, but I do think there are quite a lot of religious people who still subscribe to the idea that atheism means hard atheism and agnosticism means soft atheism. Bart Ehrman himself explained that he called himself an agnostic for some time before realizing that he feels he can best be called an agnostic atheist.
Of course, the people who matter (atheists themselves) have tended toward defining the term as such, and that should therefore, barring some compelling reason to the contrary, be what those folks are called (at least when speaking to them).
You seem to be contradicting yourself. If they all “had to respond…by explaining their degree of certainty” that proves my point: the word alone is insufficient to communicate that.
I was just responding to you noting that “And as the Stanford Encyclopedia of Philosophy says: the lack-of-belief (i.e. the broader) definition of atheist “is a legitimate definition in the sense that it reports how a significant number of people use the term”. Many people (overwhelmingly non-atheist) use the term to mean hard atheism, or a great degree of certainty. I think this is what motivates people like Steve, and you’ve noted this yourself before: They want to duck out of the entire debate by just using a different word. The problem, of course, is that not only is one then conceding ground to disingenuous people, but as you correctly note, everyone still has to explain, on a long enough timescale, their personal degree of certainty and where it comes from. It doesn’t matter if one calls oneself an agnostic, atheist, religious skeptic, non-cognitivist, etc.: no matter what, it will come down to explaining one’s read of the evidence.
I find it baffling that Steve wants to fight this one. As you correctly note, whether one is in an academic context or not, one always has to explain what one means anyway, so trying to be pedantic over a definition is never helpful, because it just shifts the conversation back. “I’m not an atheist, I’m an agnostic” and “I’m an atheist, but I just mean atheist as someone who lacks a belief in God for whatever reason” both require redressing the other person’s belief, deliberately mistaken or not, that what is being claimed is a philosophical, logically deductive degree of certainty. Religious people will still strawman the agnostic or the atheist for not being a hard atheist. Moreover, the “I’m an agnostic” phrasing is, I think, actually objectively misleading for a lot of people who really think that God so clearly doesn’t exist that they would be willing to give it billions-to-one odds.
Your position on when to abide by commonly held definitions is not clear to me. When readers objected elsewhere to your definition of feminism, you replied that one can “define one’s own feminism” in the same way that one can “define one’s own atheism.” That defense fails by the very reasoning that you follow in this article. Those objecting were proposing what to them was the “common” definition of the term, which, for most people today, signals a commitment not only to the equal treatment of women but also to a form of social and political leftism.
You might counter that people only have this impression because of the success of dishonest anti-feminist activism, but that would merely expose another flaw in your insistence that we all “speak Common”—which is that common definitions are prone to be slippery and vulnerable to manipulation, because they are often settled on in ignorance and without proper analysis. In my view, we are perfectly justified—even obliged—to defend our own definitions of terms, especially if we think that we jeopardize something important by yielding those definitions to flawed public understanding. You, I assume, would still insist on your own definition of feminism, no matter what you thought the term meant to the public.
Isn’t this an inconsistency in your thinking? If not, could you please explain where you think the right to “define one’s own atheism” ends and the need to “respect common parlance” begins? And why do you seem to have a different standard in other areas of culture and politics?
Read the article you are commenting on. I explicitly say individuals get to name themselves (no one has to call themselves an atheist if they don’t want to, because anyone can tell you what label they prefer). The only thing they can’t do is make false claims about linguistic convention (they can’t say “even according to common parlance I am not an atheist” when that’s empirically false). Those are two different things, and they relate to the question of communication theory, as explained in my article above.
So let’s assume you meant to say something about communication theory and not self-labeling.
When it comes to that, I have not said feminists can define themselves any way they want. There are conventional limits on who can still call themselves a feminist and still be using the English language. But those limits are much broader than linguistic imperialists always assume or insist. Just as with the word atheist.
“Feminist,” like “atheist,” is inclusive of many more variant ideas than any singular one, and thus no one can “assume” someone who identifies as a feminist holds to only one version of feminism. Just as no one can assume an atheist is a positive atheist rather than a negative atheist, or an antitheist, and so on, just by hearing that they are an “atheist.” Because the latter term is in actual linguistic convention too broad to allow such specific inferences. Just like the word “feminist.”
There’s another reason to reject Jesus mythicism, although it doesn’t necessarily disprove it: Occam’s razor, or parsimony. This is the principle that the likeliest explanation is the one based on the fewest assumptions. For instance, Carrier’s theory of Christian origins is a very convoluted one. He wants us to believe that Jesus started out as a divine being manufactured from divine sperm, who lived and died in outer space. This being first appeared in visions and hallucinations to Paul, who then spread Jesus’s message to others. Jesus was then gradually historicized, even though euhemerization only applies to men becoming gods, not vice versa. This historicization is attributed to some conspiracy within the church, although we have no documentary evidence there was ever such a conspiracy, nor have there ever been any Christian sects that denied the historicity of Jesus.
Now contrast this highly convoluted explanation, with its many assumptions, with the mainstream consensus: Jesus was a historical figure who founded Christianity. Based on Occam’s razor, this is the likeliest explanation, since it’s the most straightforward.
Actually, historicity is also convoluted in its required ad hoc assumptions. In fact it is more so, as the assumptions I employ for mythicism have evidence and thus aren’t ad hoc.
I demonstrate this in OHJ, then summarize it in Chapter 12. Please read the book.
I don’t disagree with anything in the article. However, my experience so far with common parlance is that people are quite aware of the distinction between atheist and agnostic. This holds true even for Arabic speakers, with the two corresponding words ملحد and لا أدري respectively.
I am open, though, to the possibility that my experience is unusual, as the Arabic-speaking community who even cares about these things is very small and usually cares due to personal beliefs, academic specialization, or being Islamic or Christian apologists, so a huge selection bias is at work.
My everyday experience in the UK is also heavily influenced by the fact that I live in Cambridge (a small city with, on average, a very liberal and highly educated population) and work in a molecular biology and bioinformatics institute, so my colleagues are mostly biologists and software engineers, among whom these distinctions are widely understood, even by the religious.