Innumeracy is more of a threat than scientific illiteracy. And I want to illustrate this today.

A Problem Atheists Should Care About

Ordinary literacy is not so much a problem online, since participation on the internet requires basic literacy—although of course, insofar as ordinary illiteracy cuts people off from participating in the online community, that remains a problem all its own. But here I’m concerned with the damage that can be done by net users who are innumerate, just as much as by net users who are scientifically illiterate (look at arch-conservative and creationist online communities to see what I mean by the latter).

I say innumeracy is “more” of a threat because atheists, currently the fastest rising native demographic in the U.S., are almost as likely to be innumerate as the general public is to be scientifically illiterate. Atheists, at least in liberal democracies, tend to be pretty good on science literacy. In my personal experience, the more outside the comfort zone of “liberal democracy” you get, the more science literacy drops among the atheists you’ll still find, but even then they will tend to be better on science literacy relative to their own local populations.

I think basic literacy + internet access + atheism/skepticism/humanism generally and gradually cures most science illiteracy. It only fails at that on the edges of bigotry and tribalism, where, even among atheists, science that does not agree with their preferred political or cultural narrative is rejected or ignored. The values of skepticism are supposed to cure that, yet lately seem to have been insulating people from it instead, as others have commented on before, e.g. this and its followups, or this, or this, or this, or this, or the many examples of members of the skepticism community engaging in climate science denialism, a phenomenon Skeptical Inquirer had to fence with recently, or gullibly yet fanatically promoting this or that political or economic theory on no empirical evidence.

But apart from that, atheists generally do well on science literacy. They are not as good at mathematical literacy (certainly some are, but I’m speaking as to the whole). And that can create all manner of problems, from falling for Christian arguments or not understanding scientific arguments to building or reinforcing false beliefs about political, cultural, racial, religious, or economic facts. It can also lead to instances of false skepticism—being “skeptical” of facts that are actually well-founded, due to some fallacy or other, the most common being the self-made straw man, where a skeptic finds a fact stated incorrectly and then assumes the fact is false (because they want it to be), even though the evidence shows that when the errors or misstatements are removed, the fact remains.

Thinking mathematically is important. It catches and corrects many mistakes. It causes the right questions to be asked. And it helps get the right answers. Experts have been saying this for years. Here are some examples of what I mean.

Habermas and the Devious Trick of Excluding the Middle

I’ll ease you into my point by picking a “rah-rah” example that atheists easily get behind and usually already are suspicious of: Gary Habermas’s frequent use of “statistics” to make an argument for the resurrection of Jesus from expert consensus, an argument that is then borrowed and regurgitated by mathematically gullible Christian apologists everywhere (up to and including William Lane Craig, when he isn’t lying about whose argument he is using). Atheists are usually already deeply suspicious here, but not usually for the most mathematical reason. So it’s a good, “safe” example.

Habermas claims to have cataloged thousands of articles on the resurrection of Jesus (the number of thousands always changes, presumably because he keeps expanding his database) and found that (roughly; the exact number varies depending on which article you read) 25% of “writers” on the subject of the resurrection of Jesus sided against an empty tomb and 75% “for.” In debates this gets translated into “75% of experts agree there was an empty tomb.” Which is false. And it’s false because of a mathematical mistake in the translation from what he actually said to what gets claimed publicly…a mistake, to my knowledge, Habermas makes no effort to correct, and which I suspect he is happy to encourage (and that’s if we charitably assume he is numerate enough to know it’s a mistake).

The latest article on this that I’ve read (I don’t know if he has published anything more recently on it) is Gary Habermas, “Experiences of the Risen Jesus: The Foundational Historical Issue in the Early Proclamation of the Resurrection,” in Dialog: A Journal of Theology 45.3 (Fall 2006): 288-97. Notably in that article he no longer says 75%, but from his sample of “2200” articles (which has increased since the 1400 he claimed in an earlier article) he now waffles by saying “70-75%.” But he doesn’t tell us how he calculated that—he doesn’t give any numbers, or name or date any of the items in his sample that are being set in ratio to each other.

Which is usually where atheist critics pounce: Habermas doesn’t release his data (still to this day; even after repeated requests, as some of those requesting it have told me), so his result can’t be evaluated. That makes his claim uncheckable. Which is a perversion of the peer review process. That basically makes this bogus number propaganda, not the outcome of any genuine research methodology. The closest I have ever seen him come to exposing how he gets this result was in his article “Resurrection Research from 1975 to the Present: What Are the Critical Scholars Saying?” in the Journal for the Study of the Historical Jesus (June 2005): 135-53.

There it is revealed that it is not 75% “of scholars,” but 75% of writers (regardless of qualifications) who have published articles arguing specifically for or against the empty tomb (he never gives an actual count that I know of). But those who publish on a specific issue do not represent a random sample, but could very well represent a biased sample (the more so when you include authors with no relevant qualifications), and so there is no way to assess the actual percentage of relevant scholars in the field who share those published conclusions. You would need a scientifically controlled randomized poll of verified experts. He hasn’t done that. And he shows no interest in ever doing it (despite having plenty of well-funded Christian institutes and universities he could appeal to for financing such a relatively simple project).

But my interest here is to get you to think of this mathematically. So even for that common rebuttal, look at it as a mathematician.

Suppose 75% of qualified experts in relevant fields (e.g. biblical studies, ancient history) actually reject the historicity of the empty tomb, and then those eager to oppose what was actually a general consensus against an empty tomb feel more motivated to submit those defenses for publication—especially given the readiness with which such defenses would be accepted by the plethora of religious journals. The result would be Habermas’s observed ratio of 75% in favor, yet it would be exactly the opposite of the actual consensus on the issue (which would be 75% against).
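
To make the effect concrete, here is a minimal sketch in Python, using invented publication rates chosen purely for illustration (none of these numbers are real data), of how a literature count can show 75% “for” even when 75% of experts are against:

```python
# Illustrative only: invented rates showing how publication bias can
# invert an observed literature ratio relative to the true consensus.

experts = 100              # hypothetical pool of qualified experts
pro = 25                   # suppose only 25% accept the empty tomb...
con = 75                   # ...and 75% reject it

pro_publish_rate = 0.36    # dissenters from the majority, eager to publish
con_publish_rate = 0.04    # majority sees little need to keep rebutting

pro_papers = pro * pro_publish_rate    # 9 papers arguing for
con_papers = con * con_publish_rate    # 3 papers arguing against

observed = pro_papers / (pro_papers + con_papers)
print(f"Observed 'consensus' for, from the literature: {observed:.0%}")  # 75%
print(f"Actual expert consensus against: {con / experts:.0%}")           # 75%
```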

Compare someone who wanted to defend the existence of Atlantis: they would have a much harder time finding a kind reception—there are not dozens of pro-Atlantean journals out there, much less hundreds of Atlantis believers in the ranks of the academic elite—and yet even then I would not be surprised to find there were more articles in print defending Atlantis than attacking it, simply because those who don’t believe in it don’t think it worth their time to debunk, or regard one or two good debunking articles as sufficient to close the case. Only ardently denialist believers see the continual writing of such papers as worthwhile, precisely because the majority remains set against them no matter how many papers keep getting written, so they keep writing them—in frustration.

That’s just one way the sample could be biased. There are many others.

Therefore, even just on this fact alone, Habermas’s 75% statistic is completely useless. It tells us nothing about what a polled consensus of qualified experts actually is on the historicity of the empty tomb. Nothing. The rule to take away here is to always ask what is actually being counted in any statistic.

It’s only worse that Habermas counted even non-experts in his paper survey. Some of the names he does reveal in the JSHJ article as among those he counted are not qualified to derive reliable independent conclusions on a question of ancient history, like Richard Swinburne (who has zero qualifications in the study of ancient history and is only trained in modern philosophy and theology, and even then only fifty years ago). Hence Habermas’s “study” did not distinguish, say, professors of ancient history from professors of philosophy who can’t even read Greek—or even, so far as we know, distinguish them from entirely unaccredited Christian apologists (since Habermas does not release his data, it cannot be ascertained how many of the thousands of articles he is including were written by completely unqualified Christian missionaries and the like). In short, he used no evident standard of qualification: every author was counted as equal to every other. That obviously biases the sample heavily toward Christian believers, and not objective, well-trained experts.

Another common objection atheists will raise is that even his own numbers destroy Habermas’s argument. If we grant him the benefit of the doubt (even though we now know we shouldn’t) and assume he is only counting qualified experts, his own math tells us that 25-30% of qualified experts reject the historicity of the empty tomb. That is by definition the absence of a consensus. It shows quite clearly a huge divide in the expert community, one that is suspiciously close to the ratio between professed Christians and non-Christians in that same community (a big red flag for ideological bias). So we cannot say the expert consensus supports the “fact” of an empty tomb. Even using Habermas’s own math.

This is an important point, because this is another common mathematical error: ignoring the outliers. Jumping from “70-75%” to “most” is a trick designed to make you think that “most” means 95% or something, when really a huge number (from a quarter to almost a third) disagree. We want to know why. And thinking about the math compels you to ask why. And how many. Hence always ask about the outliers in any statistic: how many people are not in agreement with what is being claimed to be the “majority” or the “norm,” and why.

A third point, one a bit rarer to hear because atheists debating this point often don’t check, is to look at not just what and who is being counted, but its relative value. Some random Christian hack arguing for the empty tomb with arguments even Habermas agrees are bogus should not be allowed to count at all. Yet Habermas makes no distinction for quality or merit of argumentation. Which destroys the whole point of trying to ascertain (much less argue from) an expert consensus. If the consensus you are polling is based on obviously false claims and invalid methodologies, then that consensus is not worth anything. At all. (I show this is pretty much the case for the historicity of Jesus in chapter one of Proving History.)

So it is very telling that Habermas says (in his article for JSHJ) that “most” of the scholars he counted on the pro side of the empty tomb debate “hold that the Gospels probably would not have dubbed [women] as the chief witnesses unless they actually did attest to this event.” That claim is incorrect in the first place: not a single Gospel identifies any woman as its source, much less “chief” source—we merely “presume” this because the story puts them there, which becomes a circular argument, as all the Gospels after Mark have men verify the fact, while Mark says no one even reported the fact, not even the women! It is also based on incorrect claims about the ancient world: women’s testimony was fully trusted—as even Habermas admits in the very same paragraph! (See my complete discussion of this point in chapter eleven of Not the Impossible Faith.) If even Habermas admits “most” of his 75% are relying on an invalid argument, then even if the count really were 75%, it’s still wholly invalid, because “most” of those scholars are thereby in fact wrong—by Habermas’s own inadvertent admission, no less. Thus he shouldn’t be counting them in defense of the fact. Yet he does! And every other Christian apologist aping him just does the same, not even realizing what a total cock-up this is.

And yet none of that is even the most damning.

Here is where my point about numeracy really kicks in. More egregious than all the other faults I’ve already mentioned is this: Habermas’s study only counted people who specifically wrote articles on the empty tomb, pro or con. You might not immediately see what’s mathematically wrong with that. But when you start seeing everything mathematically, you will see the mistake right away. It sticks out like a sore thumb.

In any poll counting opinions or conclusions, those who say yea and those who say nay almost never constitute the entire sample polled. In fact, quite frequently, the majority—sometimes even the vast majority—say neither. That’s right. Habermas’s “study” did not count agnostics, people who believe the evidence currently leaves the question undecided, or who haven’t exhaustively checked both sides of the debate and thus personally admit they don’t know, or those who even claim it can’t be known on present evidence whether there really was an empty tomb. And yet in my personal experience these three categories actually define most scholars with Ph.D.’s in relevant fields. For every article “against” an empty tomb counted in Habermas’s study I’m certain we can find at least two agnostics (and probably more), so even from Habermas’s own math I can be confident at least 50% of qualified experts do not believe there was an empty tomb—because they either believe there wasn’t one or do not believe it to be known one way or the other.

Just do the math. If Habermas counts 3 writers pro and 1 writer con (for 75% against 25%, his most favorable statistic, which he waffled on later), and if for every writer con we can find at least 2 qualified experts who have never and would never publish on the matter because they deem it an unknowable question, then the ratio will be 3 pro and 3 non-pro (1 + 2 = 3). Notice how non-pro is a very different thing from counting just those arguing con. Yet an honest count requires us to include all the people in that middle category (the “I don’t knows”). Habermas tries to hide them by only counting the tail ends of the spectrum (the ones arguing pro and the ones arguing con), hoping you don’t notice the huge number of experts he just excluded by that false dichotomy. But when we bring them back in, we start to see the expert community might actually be at least evenly divided on the question (because with agnostics estimated in, it’s more likely going to be closer to 50/50), and given the previous problems already noted (which entail his 75% is probably already hugely inflated), it starts to look like the majority consensus of experts is not in favor of the historicity of the empty tomb.

Just do the math again: if really the expert community has as many experts in each category as in every other (as many who argue pro as who argue con, and just as many who conclude it can’t be known either way on present evidence), then only 33% of the expert community believes the empty tomb is a fact (1:1:1 = 33% pro : 33% con : 33% agnostic). And I’ll bet the number is even lower than that. Because I’m personally fairly certain the agnostics are a far larger proportion of the expert community than either the pros or the cons (certainly when we limit our polling only to experts with relevant qualifications, as we should). Given that I am certain we can find at least two agnostics for every expert who argues con, I expect it’s really closer to 1:1:2, which is 25% pro : 25% con : 50% agnostic. Which would mean 75% don’t conclude there was an empty tomb, the exact opposite of Habermas’s claim. And this, even from his own numbers, and obvious facts he omits, and despite the fact that even experts in this area are majority Christian!
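
A minimal sketch of that arithmetic, for anyone who wants to verify it (the pro:con:agnostic ratios are the hypothetical ones discussed above, not polling data):

```python
# Compute each camp's share of the expert community for several
# hypothetical pro:con:agnostic ratios discussed in the text.

def shares(pro, con, agnostic):
    """Return each camp's share of the whole expert community."""
    total = pro + con + agnostic
    return pro / total, con / total, agnostic / total

scenarios = [
    ("Habermas's count (agnostics excluded)", (3, 1, 0)),  # 75% pro
    ("Same count, two agnostics per con",     (3, 1, 2)),  # 50% pro
    ("Even three-way split",                  (1, 1, 1)),  # 33% pro
    ("Twice as many agnostics",               (1, 1, 2)),  # 25% pro
]
for label, ratio in scenarios:
    p, c, a = shares(*ratio)
    print(f"{label}: {p:.0%} pro, {c + a:.0%} not pro")
```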

But it gets worse. Because Habermas admits “most” of the arguers pro rely on what even he agrees is an invalid argument, that means more than 50% of those counted in the pro column (“most”) should be eliminated from it (because “most” entails “more than half”). So if we had 3 pro and 3 con and 3 agnostic, now we have to subtract 1.5 from the pro count. Possibly we’d have to reduce the other columns too, but we can’t know from anything Habermas has said, and he hasn’t released his data, so all we know for certain is that more than half of those arguing pro must be discounted, by his own inadvertent admission. And that leaves us with 1.5:3:3, which is the same proportion as 3:6:6, which divides the total community into fifteen portions (3 + 6 + 6 = 15), of which less than 20% is occupied by experts who believe there was an empty tomb (1/15 = 0.0667; 0.0667 x 3 = 0.20). Or if we use that more realistic proportion of 1:1:2 I just showed is not unlikely (twice as many agnostics as argue either pro or con), and then cut the first column in half, we get 0.5:1:2, which is the same as 2:4:8, which divides the sample into fourteen portions (2 + 4 + 8 = 14), of which less than 15% is occupied by experts who believe there was an empty tomb (1/14 = 0.0714; 0.0714 x 2 = 0.143). And that’s still before we remove non-expert opinions from his totals, which very likely would drop that percentage even further. But even without that, a mere 15% approval of the empty tomb’s historicity is already getting suspiciously near the proportion of hard-core Biblical inerrantists in the expert community. Now we’re staring at a huge red flag for bias. In any event, it is a definite minority: by these estimates, four times as many experts do not believe the empty tomb is an established fact as do…or even six times as many (if the pros make up 15% instead of 20%).
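
Here is that discounting step laid out, assuming (conservatively) that exactly half the pro column must be removed, though “most” entails even more:

```python
# Discount half the pro column per Habermas's own admission that "most"
# (i.e., more than half) of it rests on an invalid argument.
# Ratios are hypothetical, as above.

def pro_share(pro, con, agnostic):
    return pro / (pro + con + agnostic)

# 3:3:3 with half the pro column removed -> 1.5:3:3 (i.e. 3:6:6)
print(f"{pro_share(1.5, 3, 3):.1%}")   # 20.0%, and "most" means even less

# 1:1:2 with half the pro column removed -> 0.5:1:2 (i.e. 2:4:8)
print(f"{pro_share(0.5, 1, 2):.1%}")   # 14.3%
```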

It’s not looking good for the empty tomb. And all because we can do a little math.

And that’s using nothing more than Habermas’s own numbers, which are already inflated and bogus, plus just a few undeniable likelihoods he tries to conceal from his own arithmetic.

So really, even based on the bogus data Habermas himself presents in defense of the claim that “most experts believe there was an empty tomb,” we can conclude it’s far more probable that most experts do not believe there was an empty tomb—either being certain there wasn’t, or admitting they don’t know. And, lo and behold, now we know that’s the case, because Habermas has since dropped this claim about the empty tomb. It’s no longer something he insists most scholars believe. So much for that.

Race-Baiting and Poverty

Now it’s time to see this mathematical thinking do something a little bit scarier for privileged white skeptics. Last year Robert Ross wrote “Poverty More than a Matter of Black and White” for Inequality.org, whose description aptly evokes the point: “Far too many Americans still see poverty and poor people through a racial prism that distorts demographic realities—and undermines efforts to narrow income inequality.” He shows mathematically how racism is used to manipulate the white population into opposing policies that would benefit them, by fostering the myth that poor = black (or Mexican or [insert feared race here]); anything seen as aiding black people is then more readily opposed by white voters (in far greater proportion than those same white voters would admit, or are even consciously aware of). Even liberals often buy into the myth, because they are all too often innumerate, and don’t think about what’s being said when poverty statistics are discussed.

We all know poverty exists at higher rates among blacks. In America the rate of poverty in the black population is about twice that of the white population. But it is then all too often assumed that poverty is a black problem, and that most black people are poor. Neither of which is true. And yet even if you never explicitly thought about it, you probably have spoken or acted as if you believed one or the other at some point. By exploiting this numerical illusion (twice the risk erroneously becoming “twice as many black people are poor as white people are”), propagandists can sell the idea that programs in aid of the poor are really programs that disproportionately aid black people. Hence the conservative “welfare mom” trope, which is almost always racialized. It’s never a poor white Christian girl on welfare, even though in fact that describes more mothers on welfare.

That’s right. More “welfare moms” are Jesus-loving white girls than fall into any other category. “White people make up 42 percent of America’s poor, black people [only] about 28 percent” (and rates of religious belief in both racial groups are higher among the poor than the population average). In practice, about 61% of those on any kind of public assistance are white; only 33% black. Broken down by type of assistance, the ratios vary, but regardless, in any given category there are either more white people taking benefits (e.g. food stamps, Medicaid, or Social Security) or as many white people taking benefits as black people (e.g. Aid to Families with Dependent Children, the latter due to a racial disproportion in broken homes).

So the race-baiting used by FOX News and other conservative information sources is really using the racism of white people to dupe them into not caring about the poor. All because most people can’t do math. Black people are twice as likely to be poor as white people are; but black people make up only a small proportion of the overall population. So in raw numbers, in America, those living in poverty, even those living in extreme poverty, are predominantly white. This is a form of the base rate fallacy: confusing “twice as many” in terms of proportion with “twice as many” in terms of actual numbers.
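
To see the base rate fallacy in action, here is a quick sketch using round, hypothetical numbers merely in the ballpark of U.S. census figures (the population shares and poverty rates are illustrative assumptions, not exact data):

```python
# Illustrative base-rate arithmetic: a group can have twice the poverty
# RATE yet far fewer poor people in raw NUMBERS, because it is a much
# smaller share of the population. Figures are rough assumptions.

population = 320_000_000
white_share, black_share = 0.62, 0.13                 # rough shares
white_poverty_rate, black_poverty_rate = 0.10, 0.20   # "twice as likely"

poor_whites = population * white_share * white_poverty_rate
poor_blacks = population * black_share * black_poverty_rate

print(f"Poor whites: {poor_whites / 1e6:.1f} million")  # ~19.8 million
print(f"Poor blacks: {poor_blacks / 1e6:.1f} million")  # ~8.3 million
```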

It’s the significance of this that Ross wants to call attention to:

Many white people who don’t live anywhere near poverty, even many who consider themselves liberal, think blacks compose most of the poor. Large numbers of these white Americans feel no emotional connection to the problems poor people face. They perceive poverty as a problem of some other community, not their own. If those white Americans who felt this way actually had to confront the demographic reality of poverty, if they came to understand that white people make up the single largest group of the poor, how white America thinks about poverty and policy might start changing. Well-meaning white Americans have for decades been aware that black people face the risk of poverty [more] than whites. But “poverty,” we all need to understand, is more and different than “race.”

Certainly, it shouldn’t matter. We should be as alarmed about black poverty as white poverty and just as keen to take action to solve it. We should see it as a failure of America and rise up as Americans to help our fellow Americans who are being let down. Race shouldn’t make a difference. But in actual practice, emotionally, subconsciously, all too often, it does. And innumeracy can be used by conservatives who want to exploit that latent racism in the population to oppose any action benefiting the poor, by tricking people into thinking poverty is a race thing, and therefore “not white people’s problem.” And they do this so well, in fact, that even liberals sometimes fall for it.

Those Troublesome Rape Statistics

Innumeracy often leads to the misuse of rape statistics, even by those who mean well. This then results in misogynists assuming all rape statistics are bullshit, because some random person misstated what they actually say (black-and-white thinking typifying the authoritarian personality type that I find common among misogynists). When you hear a rape statistic, or want to use one to make an argument or even a casual statement, you do need to make sure you know exactly what the statistic you are using is measuring.

For example (and here we’ll just assume we’re talking about statistics for women; often the inclusion of male rape victims is overlooked or not inquired about):

  • Is it counting only rape, or both rape and attempted rape (the latter being bad, and indicative of rape culture, but still not exactly the same thing as being raped), or rape and all other forms of sexual assault (e.g. groping, pinching)?

This matters for what you are claiming. If you are making a point that requires you to be stating the incidence of actual rape, you need to make sure you get that number, and not mistakenly substitute some other number for it (like sexual assault incidence). If you are instead citing the rate of sexual assault, say you are citing the rate of sexual assault. That will clear your baffles of most misogynistic nitpicking.

  • Is it counting the incidence rate for college women when in college or service women when on duty (etc.) or is it counting overall lifetime incidence?

This is a very crucial mathematical distinction. Rates of rape were higher in the past than they are now, so a lifetime incidence rate for the whole population will not correspond to the future lifetime risk of a woman entering adulthood now. So if you are trying to make a point about what women entering adulthood will face today, or what the rate is today, or what the risk is today, you can’t use a statistic that includes rates from the past. You should word any use of such a statistic accordingly.

Likewise for women in service: some of the stats often quoted are of lifetime incidence regardless of where or when a rape occurred, which people then mistakenly assume is the rate in service (or perpetrated by fellow servicemen), which it is not. Some of those stats also include women even as far back as the Vietnam war, yet rates of rape in service (or indeed anywhere) for women then were higher than the rate a woman entering the service now can expect—so you can’t use that statistic as the latter woman’s expected risk. If you want to talk about a servicewoman’s expected risk today, you can’t use rates from the past—unless you are assuming the rates have never changed, which may be a bad assumption, especially when all trend markers show decline.

And yet, the fact that these errors are easy to make, and serve up fuel for the MRA crowd to engage in more denialism, is precisely why we need to be more numerate, so we understand how to find, read, and understand a statistic and what it actually says, so we can report it and use it correctly. Because even when used and stated correctly, rape statistics are horrific enough. You don’t need to gin up the numbers by an injudicious slip of innumeracy.

For a good example of how to use rape statistics correctly (and thereby allow yourself to be optimistic in the face of evident trends, rather than remaining solely in alarmist mode), see Ally Fogg’s article College Rape and the Importance of Measuring Success. But for a really good take-down of MRA faux-skepticism of rape statistics, which shows how to analyze the math correctly, you simply must watch this fact-filled video by C0nc0rdance (e.g. MRAs who balk at the “1 in 3 women experience rape” stat will have to face the fact that it’s at least 1 in 5, lifetime incidence, for college-age women…terrible enough…but think like a mathematician: now you have to add an average of sixty years more of lifetime risk exposure…so a total lifetime incidence of 1 in 3 doesn’t sound so implausible). That video is a good example of how to look at the data and ask the right questions mathematically [It also illustrates why the UK study finding a population lifetime incidence rate there of 1 in 10 might be an undercount…although 1 in 10 is still pretty bad]. This rape prevention website also shows careful wording and citation of sources in its use of rape statistics, making it another good example of doing it right, and a handy resource to have on hand. (You should also take note of the clever way researchers got men to admit to committing rape…and found alarming numbers of rapists in the male population.)
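
Here is a back-of-the-envelope version of that “add sixty more years” reasoning, where the residual annual risk is an assumed figure chosen purely for illustration, not a measured rate:

```python
# Rough illustration of how lifetime incidence accumulates beyond the
# college-age figure. The annual risk after college age is an ASSUMED
# number for demonstration only; everything else is from the text above.

risk_by_college_age = 0.20   # "at least 1 in 5" by college age
annual_risk_after = 0.003    # hypothetical small residual annual risk
remaining_years = 60         # "an average of sixty years more"

p_never = (1 - risk_by_college_age) * (1 - annual_risk_after) ** remaining_years
lifetime_incidence = 1 - p_never
print(f"Lifetime incidence: {lifetime_incidence:.0%}")  # ~33%, i.e. ~1 in 3
```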

If you want to campaign against rape and rape culture (and combat the faux-skepticism standing in your way), numeracy is your friend.

And So On

Ally Fogg documents another example of deliberate misuse of statistics (regarding domestic abuse), which can all too easily play into liberal verification bias if you aren’t mathematically literate, and thus aren’t immediately led to check or question the claim by the unusually convenient nature of the claimed result and the lack of an obvious causal model, two red flags anyone who thinks mathematically will catch right away. I found similar mathematical chicanery in Michael Shermer’s inept defense of libertarian theology (I say theology sarcastically, because the Market is their God, in the same absurd way the State is for Marxists).

I have even caught Thunderf00t duping his minions by assuming their mathematical illiteracy was so appalling that they wouldn’t notice the scam he was pulling to manipulate them into sharing his racism (evinced by his dismissal and disregard of minorities and their concerns, and his explicit opposition to our building a racially integrated atheism movement)—see Thunderf00t Against Any Kind of Methodological Honesty or Common Sense Whatever to see that exposed. Any fan of his who was not outraged by his abuse of math there is clearly not an actual skeptic but a blind tribalist. Here we have thousands of atheists being duped…because they are mathematically illiterate—or don’t give a shit about mathematical truth.

Please. Give a shit about mathematical truth.

Curing Your Innumeracy

Start with the resources I already laid out regarding mathematical literacy in my resource for Critical Thinking in the 21st Century. There under Bayesian Reasoning I list several fundamental starting points, the most important being the books in the second half of that paragraph, which deal with statistical and mathematical skills and skepticism generally, and not just Bayesian epistemology. The latter is also important, but for a different reason than the general thrust of the present article. In fact, doing Bayesian epistemology well requires the sort of mathematical literacy those other books will give you. (For something a little more advanced, but much more thorough, on how statistics gets abused even in science, see Statistics Done Wrong, which eventually you should read, IMO.)

But ultimately, the most important thing is to think mathematically. When someone makes a claim or argument, immediately model it mathematically in your head. That will often expose something odd about it that you’ll want to question. “Wait. What exactly are you counting? Or what did that study you are citing actually count?” “What about the outliers…how large a proportion of the sample group didn’t show the trend you are claiming is normal, and why are they different? And how can you claim the result is normal when all those others exist who don’t conform to the claimed trend?” “Wait. Are the two things you are numerically comparing the things you should actually be comparing?” “Hmm. That number sounds suspiciously high/low/contrary to common knowledge. Where is your data? I want to run the numbers myself, to check your math.” “Why does your graph end there?” “Why does your graph not match what you are saying?” And so on. These are the kinds of questions you should not only be inspired to ask, but have some idea of how to reach the correct answers to (or at least vet the answers you are given to see whether they are credible or fishy).

Mathematical literacy is important.

Please take it as seriously as science literacy or even English literacy. Being a good skeptic requires it. And being a good skeptic is something we should all want to be.

For much more on this same point, and a closing list of books of value on the subject well worth reading, see my article Critical Thinking as a Function of Math Literacy.

§
