Innumeracy is more of a threat than scientific illiteracy. And I want to illustrate this today.
A Problem Atheists Should Care About
Ordinary literacy is not so much a problem online, since participation on the internet requires basic literacy—although of course, insofar as ordinary illiteracy cuts people off from participating in the online community, that remains a problem all its own. But here I’m concerned with the damage that can be done by net users who are innumerate, just as much as net users who are science illiterate (look at arch-conservative and creationist online communities to see what I mean for the latter).
I say innumeracy is “more” of a threat because atheists, currently the fastest rising native demographic in the U.S., are almost as likely to be innumerate as the general public is to be scientifically illiterate. Atheists, at least in liberal democracies, tend to be pretty good on science literacy. In my personal experience, the more outside the comfort zone of “liberal democracy” you get, the more science literacy drops among the atheists you’ll still find, but even then they will tend to be better on science literacy relative to their own local populations.
I think basic literacy + internet access + atheism/skepticism/humanism generally, gradually, cures most science illiteracy. It only fails at that on the edges of bigotry and tribalism, where, even among atheists, science is rejected or ignored that does not agree with their preferred political or cultural narrative. The values of skepticism are supposed to cure that, yet lately seem to have been insulating people from it instead, as others have commented on before, e.g. this and its followups, or this, or this, or this, or this, or the many examples of members of the skepticism community engaging in climate science denialism, a phenomenon Skeptical Inquirer had to fence with recently, or gullibly yet fanatically promoting this or that political or economic theory on no empirical evidence.
But apart from that, atheists generally do well on science literacy. They are not as good at mathematical literacy (certainly some are, but I’m speaking as to the whole). And that can create all manner of problems, from falling for Christian arguments or not understanding scientific arguments to building or reinforcing false beliefs about political, cultural, racial, religious, or economic facts. It can also lead to instances of false skepticism—being “skeptical” of facts that are actually well-founded, due to some fallacy or other, the most common being the self-made straw man, where a skeptic finds a fact stated incorrectly and then assumes the fact is false (because they want it to be), even though the evidence shows that when the errors or misstatements are removed, the fact remains.
Thinking mathematically is important. It catches and corrects many mistakes. It causes the right questions to be asked. And it helps get the right answers. Experts have been saying this for years. Here are some examples of what I mean.
Habermas and the Devious Trick of Excluding the Middle
I’ll ease you into my point by picking a “rah-rah” example that atheists easily get behind and usually already are suspicious of: Gary Habermas’s frequent use of “statistics” to make an argument for the resurrection of Jesus from expert consensus, an argument that is then borrowed and regurgitated by mathematically gullible Christian apologists everywhere (up to and including William Lane Craig, when he isn’t lying about whose argument he is using). Atheists are usually already deeply suspicious here, but not usually for the most mathematical reason. So it’s a good, “safe” example.
Habermas claims to have cataloged thousands of articles on the resurrection of Jesus (the number of thousands always changes, presumably because he keeps expanding his database) and found that (roughly; the exact number varies depending on which article you read) 25% of “writers” on the subject of the resurrection of Jesus sided against an empty tomb and 75% “for.” In debates this gets translated into “75% of experts agree there was an empty tomb.” Which is false. And it’s false because of a mathematical mistake in the translation from what he actually said to what gets claimed publicly…a mistake, to my knowledge, Habermas makes no effort to correct, and which I suspect he is happy to encourage (and that’s if we charitably assume he is numerate enough to know it’s a mistake).
The latest article on this that I’ve read (I don’t know if he has published anything more recently on it) is Gary Habermas, “Experiences of the Risen Jesus: The Foundational Historical Issue in the Early Proclamation of the Resurrection,” in Dialog: A Journal of Theology 45.3 (Fall 2006): 288-97. Notably in that article he no longer says 75%, but from his sample of “2200” articles (which has increased since the 1400 he claimed in an earlier article) he now waffles by saying “70-75%.” But he doesn’t tell us how he calculated that—he doesn’t give any numbers, or name or date any of the items in his sample that are being set in ratio to each other.
Which is usually where atheist critics pounce: Habermas doesn’t release his data (still to this day; even after repeated requests, as some of those requesting it have told me), so his result can’t be evaluated. That makes his claim uncheckable. Which is a perversion of the peer review process. That basically makes this bogus number propaganda, not the outcome of any genuine research methodology. The closest I have ever seen him come to exposing how he gets this result was in his article “Resurrection Research from 1975 to the Present: What Are the Critical Scholars Saying?” in the Journal for the Study of the Historical Jesus (June 2005): 135-53.
There it is revealed that it is not 75% “of scholars,” but 75% of writers (regardless of qualifications) who have published articles arguing specifically for or against the empty tomb (he never gives an actual count that I know of). But those who publish on a specific issue do not represent a random sample, but could very well represent a biased sample (the more so when you include authors with no relevant qualifications), and so there is no way to assess the actual percentage of relevant scholars in the field who share those published conclusions. You would need a scientifically controlled randomized poll of verified experts. He hasn’t done that. And he shows no interest in ever doing it (despite having plenty of well-funded Christian institutes and universities he could appeal to for financing such a relatively simple project).
But my interest here is to get you to think of this mathematically. So even for that common rebuttal, look at it as a mathematician.
Suppose 75% of qualified experts in relevant fields (e.g. biblical studies, ancient history) actually reject the historicity of the empty tomb, and then those eager to oppose what was actually a general consensus against an empty tomb feel more motivated to submit those defenses for publication—especially given the readiness with which such defenses would be accepted by the plethora of religious journals. The result would be Habermas’s observed ratio of 75% in favor, yet it would be exactly the opposite of the actual consensus on the issue (which would be 75% against).
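To see how easily that inversion can happen, here is a minimal sketch in Python. All the numbers in it are made up for illustration (the hypothetical 25/75 expert split from the paragraph above, plus two assumed publication rates); the point is only the mechanism:

```python
# Hypothetical: the actual consensus is 75% AGAINST the empty tomb,
# but the minority is far more motivated (and welcomed by sympathetic
# journals), so it publishes at a much higher rate.
experts_pro, experts_con = 25, 75      # assumed real consensus: 75% con

pub_rate_pro = 0.60                    # assumed: most pro experts publish a defense
pub_rate_con = 0.07                    # assumed: few con experts bother to rebut

articles_pro = experts_pro * pub_rate_pro   # 15.0 articles for
articles_con = experts_con * pub_rate_con   # 5.25 articles against

share_pro = articles_pro / (articles_pro + articles_con)
print(f"share of published articles arguing pro: {share_pro:.0%}")  # ~74%
```

A 75%-against consensus comes out looking like a roughly 75%-for consensus, purely from who bothers to publish.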
Compare someone who wanted to defend the existence of Atlantis: they would have a much harder time finding a kind reception—there are not dozens of pro-Atlantean journals out there, much less hundreds of Atlantis believers in the ranks of the academic elite—and yet even then I would not be surprised to find there were more articles in print defending Atlantis than attacking it, simply because those who don’t believe in it don’t think it worth their time to debunk, or regard one or two good debunking articles as sufficient to close the case. Only ardently denialist believers see the continual writing of such papers as worthwhile, precisely because the majority remains set against them no matter how many papers keep getting written, so they keep writing them—in frustration.
That’s just one way the sample could be biased. There are many others.
Therefore, even just on this fact alone, Habermas’s 75% statistic is completely useless. It tells us nothing about what a polled consensus of qualified experts actually is on the historicity of the empty tomb. Nothing. The rule to take away here is to always ask what is actually being counted in any statistic.
It’s only worse that Habermas counted even non-experts in his paper survey. Some of the names he does reveal in the JSHJ article as among those he counted are not qualified to derive reliable independent conclusions on a question of ancient history, like Richard Swinburne (who has zero qualifications in the study of ancient history and is only trained in modern philosophy and theology, and even then only fifty years ago). Hence Habermas’s “study” did not distinguish, say, professors of ancient history from professors of philosophy who can’t even read Greek—or even, so far as we know, from entirely unaccredited Christian apologists (since Habermas does not release his data, it cannot be ascertained how many of the thousands of articles he is including were written by completely unqualified Christian missionaries and the like). In short, he used no evident standard of qualification: every author was counted as equal to every other. That obviously biases the sample heavily toward Christian believers rather than objective, well-trained experts.
Another common objection atheists will raise is that even his own numbers destroy Habermas’s argument. If we granted him the benefit of the doubt (even though we now know we shouldn’t) and assume he is only counting qualified experts, his own math tells us that 25-30% of qualified experts reject the historicity of the empty tomb. That is by definition the absence of a consensus. That shows quite clearly a huge divide in the expert community, one that is suspiciously close to the ratio between professed Christians and non-Christians in that same community (a big red flag for ideological bias). So we cannot say the expert consensus supports the “fact” of an empty tomb. Even using Habermas’s own math.
This is an important point, because this is another common mathematical error: ignoring the outliers. Jumping from “70-75%” to “most” is a trick designed to make you think that “most” means 95% or something, when really a huge number (from a quarter to almost a third) disagree. We want to know why. And thinking about the math compels you to ask why. And how many. Hence always ask about the outliers in any statistic: how many people are not in agreement with what is being claimed to be the “majority” or the “norm,” and why.
A third point, one a bit rarer to hear because atheists debating this point often don’t check, is to look at not just what and who is being counted, but its relative value. Some random Christian hack arguing for the empty tomb with arguments even Habermas agrees are bogus, should not be allowed to count at all. Yet Habermas makes no distinction for quality or merit of argumentation. Which destroys the whole point of trying to ascertain (much less argue from) an expert consensus. If the consensus you are polling is based on obviously false claims and invalid methodologies, then that consensus is not worth anything. At all. (I show this is pretty much the case for the historicity of Jesus in chapter one of Proving History.)
So it is very telling that Habermas says (in his article for JSHJ) that “most” of the scholars he counted on the pro side of the empty tomb debate “hold that the Gospels probably would not have dubbed [women] as the chief witnesses unless they actually did attest to this event.” That claim is simply incorrect: not a single Gospel identifies any woman as its source, much less “chief” source. We merely “presume” this because the story puts them there, but that becomes a circular argument, since all the Gospels after Mark have men verify the fact, while Mark says no one even reported the fact, not even the women! It is also based on incorrect claims about the ancient world: women’s testimony was fully trusted—as even Habermas admits in the very same paragraph! (See my complete discussion of this point in chapter eleven of Not the Impossible Faith.) If even Habermas admits “most” of his 75% are relying on an invalid argument, then even if the count really were 75%, it’s still wholly invalid, because “most” of those scholars are thereby in fact wrong—by Habermas’s own inadvertent admission, no less. Thus he shouldn’t be counting them in defense of the fact. Yet he does! And every other Christian apologist aping him just does the same, not even realizing what a total cock-up this is.
And yet none of that is even the most damning.
Here is where my point about numeracy really kicks in. More egregious than all those other faults I’ve already mentioned, Habermas’s study only counted people who specifically wrote articles on the empty tomb pro or con. You might not immediately see what’s mathematically wrong with that. But when you start seeing everything mathematically, you will see the mistake right away. It sticks out like a sore thumb.
In any poll counting opinions or conclusions, those who say yay and those who say nay almost never constitute the entire sample polled. In fact, quite frequently, the majority—sometimes even the vast majority—say neither. That’s right. Habermas’s “study” did not count agnostics: people who believe the evidence currently leaves the question undecided, or who haven’t exhaustively checked both sides of the debate and thus personally admit they don’t know, or who even claim it can’t be known on present evidence whether there really was an empty tomb. And yet in my personal experience these three categories actually define most scholars with Ph.D.’s in relevant fields. For every article “against” an empty tomb counted in Habermas’s study I’m certain we can find at least two agnostics (and probably more), so even from Habermas’s own math I can be confident at least 50% of qualified experts do not believe there was an empty tomb—because they either believe there wasn’t one or do not believe it to be known one way or the other.
Just do the math. If Habermas counts 3 writers pro and 1 writer con (for 75% against 25%, his most favorable statistic, which he waffled on later), and if for every writer con we can find at least 2 qualified experts who have never and would never publish on the matter because they deem it an unknowable question, then the ratio will be 3 pro and 3 non-pro (1 + 2 = 3). Notice how non-pro is a very different thing from counting just those arguing con. Yet an honest count cannot exclude the middle: we have to count all the people in that middle category (the “I don’t knows”). Habermas tries to hide them by only counting the tail ends of the spectrum (the ones arguing pro and the ones arguing con), hoping you don’t notice the huge number of experts he just excluded by that false dichotomy. But when we bring them back in, we start to see the expert community might actually be at least evenly divided on the question (because with agnostics estimated in, it’s more likely going to be closer to 50/50), and given the previous problems already noted (which entail his 75% is probably already hugely inflated), it starts to look like the majority consensus of experts is not in favor of the historicity of the empty tomb.
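If you want to run that count yourself, here it is as a trivial Python sketch (the figures are the hypothetical ones from the paragraph above, not data):

```python
# Habermas's implied 75%/25% split, with the excluded middle put back in.
pro, con = 3, 1
agnostics = 2 * con          # assumed: at least two agnostics per published "con"

non_pro = con + agnostics    # everyone who does NOT affirm an empty tomb
total = pro + non_pro

print(f"pro: {pro/total:.0%}, non-pro: {non_pro/total:.0%}")  # 50% vs. 50%
```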
Just do the math again: if really the expert community has as many experts in each category as in every other (as many who argue pro as who argue con, and just as many who conclude it can’t be known either way on present evidence), then only 33% of the expert community believes the empty tomb is a fact (1:1:1 = 33% pro : 33% con : 33% agnostic). And I’ll bet the number is even lower than that. Because I’m personally fairly certain the agnostics are a far larger proportion of the expert community than either the pros or the cons (certainly when we limit our polling only to experts with relevant qualifications, as we should). Given that I am certain we can find at least two agnostics for every expert who argues con, I expect it’s really closer to 1:1:2, which is 25% pro : 25% con : 50% agnostic. Which would mean 75% don’t conclude there was an empty tomb, the exact opposite of Habermas’s claim. And this, even from his own numbers, and obvious facts he omits, and despite the fact that even experts in this area are majority Christian!
But it gets worse. Because Habermas admits “most” of the arguers pro rely on what even he agrees is an invalid argument, that means more than 50% of those counted in the pro column (“most”) should be eliminated from it (because “most” entails “more than half”). So if we had 3 pro and 3 con and 3 agnostic, now we have to subtract 1.5 from the pro count. Possibly we’d have to reduce the other columns, but we can’t know from anything Habermas has said, and he hasn’t released his data, so all we know for certain is that more than half of those arguing pro must be discounted, by his own inadvertent admission. And that leaves us with 1.5:3:3, which is the same proportion as 3:6:6, which divides the total community into fifteen portions (3 + 6 + 6 = 15), of which less than 20% is occupied by experts who believe there was an empty tomb (1/15 = 0.0667; 0.0667 x 3 = 0.20). Or if we use that more realistic proportion of 1:1:2 I just showed is not unlikely (twice as many agnostics as argue either pro or con), and then cut the first in half, we get 0.5:1:2, which is the same as 2:4:8, which divides the sample into fourteen (2 + 4 + 8 = 14), of which less than 15% is occupied by experts who believe there was an empty tomb (1/14 = 0.0714; 0.0714 x 2 = 0.143). And that’s still before we remove non-expert opinions from his totals, which very likely will drop that percentage even further. But even without that, already a mere 15% approval of the empty tomb’s historicity is getting suspiciously near the proportion of hard-core Biblical inerrantists in the expert community. Now we’re staring at a huge red flag for bias. In any event, a definite minority; because by these estimates, four times as many experts do not believe the empty tomb is an established fact as believe it…or even six times as many (if the pros make up 15% instead of 20%).
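All of those scenarios reduce to one tiny calculation, sketched here in Python (again, the ratios are the hypothetical ones from the last two paragraphs, not measured data):

```python
def pro_share(pro, con, agnostic):
    """Fraction of the whole expert pool that affirms the empty tomb."""
    return pro / (pro + con + agnostic)

print(f"{pro_share(1, 1, 1):.0%}")    # even three-way split: 33% pro
print(f"{pro_share(1, 1, 2):.0%}")    # twice as many agnostics: 25% pro

# Now discount the "most" (more than half) of pro arguers whom Habermas
# concedes rely on an invalid argument:
print(f"{pro_share(1.5, 3, 3):.1%}")  # 1.5:3:3 = 3:6:6 -> 20.0%
print(f"{pro_share(0.5, 1, 2):.1%}")  # 0.5:1:2 = 2:4:8 -> 14.3%
```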
It’s not looking good for the empty tomb. And all because we can do a little math.
And that’s using nothing more than Habermas’s own numbers, which are already inflated and bogus, plus just a few undeniable likelihoods he tries to conceal from his own arithmetic.
So really, even based on the bogus data Habermas himself presents in defense of the claim that “most experts believe there was an empty tomb,” we can conclude it’s far more probable that most experts do not believe there was an empty tomb—either being certain there wasn’t, or admitting they don’t know. And, lo and behold, now we know that’s the case, because Habermas has since dropped this claim about the empty tomb. It’s no longer something he insists most scholars believe. So much for that.
Race-Baiting and Poverty
Now it’s time to see this mathematical thinking do something a little bit scarier for privileged white skeptics. Last year Robert Ross wrote “Poverty More than a Matter of Black and White” for Inequality.org, whose description aptly evokes the point: “Far too many Americans still see poverty and poor people through a racial prism that distorts demographic realities—and undermines efforts to narrow income inequality.” He shows mathematically how racism is used to manipulate the white population into opposing policies that would benefit them, by fostering the myth that poor = black (or Mexican or [insert feared race here]): anything perceived as aiding black people is then more readily opposed by white voters (in far greater proportion than those same white voters would admit, or are even consciously aware of). Even liberals often buy into the myth, because they are all too often innumerate, and don’t think about what’s being said when poverty statistics are discussed.
We all know poverty exists at higher rates among blacks. In America the rate of poverty in the black population is about twice that of the white population. But it is then all too often assumed that poverty is a black problem, and that most black people are poor. Neither of which is true. And yet even if you never explicitly thought about it, you probably have spoken or acted as if you believed either at some point or other. By exploiting this numerical illusion (twice the risk erroneously becoming “twice as many black people are poor as white people are”), propagandists can sell the idea that programs in aid of the poor are really programs that disproportionately aid black people. Hence the conservative “welfare mom” trope, which is almost always racialized. It’s never a poor white Christian girl on welfare, even though in fact that describes more mothers on welfare than any other profile.
That’s right. More “welfare moms” are Jesus-loving white girls than fall into any other category. “White people make up 42 percent of America’s poor, black people [only] about 28 percent” (and rates of religious belief in both racial groups are higher among the poor than the population average). In practice, about 61% of those on any kind of public assistance are white; only 33% black. Broken down by type of assistance the ratios vary, but in any given category either more white people are taking benefits (e.g. food stamps, Medicaid, Social Security) or as many white people are taking benefits as black people (e.g. Aid to Families with Dependent Children, the latter due to a racial disproportion in broken homes).
So the race-baiting used by FOX News and other conservative information sources is really using the racism of white people to dupe them into not caring about the poor. All because most people can’t do math. Black people are twice as likely to be poor as white people are; but black people make up only a small proportion of the overall population. So in raw numbers, in America, those living in poverty, even those living in extreme poverty, are predominantly white. This is a form of the base rate fallacy: confusing “twice as many” in terms of proportion with “twice as many” in terms of actual numbers.
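You can watch the base rate fallacy dissolve in one calculation. The figures below are round numbers of my own choosing (in the ballpark of U.S. demographics, but treat them as assumptions, not citations), and they compare only the two groups in question:

```python
# Twice the poverty RATE does not mean more poor people in raw NUMBERS,
# if the group is much smaller to begin with.
population = 300_000_000               # assumed round total
white_share, black_share = 0.63, 0.13  # assumed population shares
white_poverty_rate = 0.10              # assumed
black_poverty_rate = 2 * white_poverty_rate  # "twice the risk"

poor_whites = population * white_share * white_poverty_rate  # ~18.9 million
poor_blacks = population * black_share * black_poverty_rate  # ~ 7.8 million

print(f"poor whites: {poor_whites/1e6:.1f}M, poor blacks: {poor_blacks/1e6:.1f}M")
```

Even at half the rate, the white poor outnumber the black poor by more than two to one in this toy model—the same arithmetic behind the real 42%-vs-28% figures quoted above.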
It’s the significance of this that Ross wants to call attention to:
Many white people who don’t live anywhere near poverty, even many who consider themselves liberal, think blacks compose most of the poor. Large numbers of these white Americans feel no emotional connection to the problems poor people face. They perceive poverty as a problem of some other community, not their own. If those white Americans who felt this way actually had to confront the demographic reality of poverty, if they came to understand that white people make up the single largest group of the poor, how white America thinks about poverty and policy might start changing. Well-meaning white Americans have for decades been aware that black people face the risk of poverty [more] than whites. But “poverty,” we all need to understand, is more and different than “race.”
Certainly, it shouldn’t matter. We should be as alarmed about black poverty as white poverty and just as keen to take action to solve it. We should see it as a failure of America and rise up as Americans to help our fellow Americans who are being let down. Race shouldn’t make a difference. But in actual practice, emotionally, subconsciously, all too often, it does. And innumeracy can be used by conservatives who want to exploit that latent racism in the population to oppose any action benefiting the poor, by tricking people into thinking poverty is a race thing, and therefore “not white people’s problem.” And they do this so well, in fact, even liberals sometimes fall for it.
Those Troublesome Rape Statistics
Innumeracy often leads to the misuse of rape statistics, even by those who mean well. This then results in misogynists assuming all rape statistics are bullshit, because some random person misstated what they actually say (black-and-white thinking typifying the authoritarian personality type that I find common among misogynists). When you hear a rape statistic, or want to use one to make an argument or even a casual statement, you need to make sure you know exactly what the statistic you are using is measuring.
For example (and here we’ll just assume we’re talking about statistics for women; often the inclusion of male rape victims is overlooked or not inquired about):
- Is it counting only rape, or both rape and attempted rape (the latter being bad, and indicative of rape culture, but still not exactly the same thing as being raped), or rape and all other forms of sexual assault (e.g. groping, pinching)?
This matters for what you are claiming. If you are making a point that requires you to be stating the incidence of actual rape, you need to make sure you get that number, and not mistakenly substitute some other number for it (like sexual assault incidence). If you are instead citing the rate of sexual assault, say you are citing the rate of sexual assault. That will clear your baffles of most misogynistic torpedoing.
- Is it counting the incidence rate for college women when in college or service women when on duty (etc.) or is it counting overall lifetime incidence?
This is a very crucial mathematical distinction. Rates of rape were higher in the past than they are now, so a lifetime incidence rate for the whole population will not correspond to the future lifetime risk of a woman entering adulthood now. So if you are trying to make a point about what women entering adulthood will face today, or what the rate is today, or what the risk is today, you can’t use a statistic that includes rates in the past. You should word any use of such a statistic accordingly.
Likewise for women in service: some of the stats often quoted are of lifetime incidence regardless of where or when a rape occurred, which people then mistakenly assume is the rate in service (or perpetrated by fellow servicemen), which it is not. Some of those stats also include women even as far back as the Vietnam war, yet rates of rape in service (or indeed anywhere) were higher for women then than the rate a woman entering the service now can expect—so you can’t use that statistic as the latter woman’s expected risk. If you want to talk about a servicewoman’s expected risk today, you can’t use rates in the past—unless you are assuming the rates have never changed, which may be a bad assumption, especially when all trend markers show decline.
And yet, the fact that these errors are easy to make, and serve up fuel for the MRA crowd to engage in more denialism, is precisely why we need to be more numerate, so we understand how to find, read, and understand a statistic and what it actually says, so we can report it and use it correctly. Because even when used and stated correctly, rape statistics are horrific enough. You don’t need to gin up the numbers by an injudicious slip of innumeracy.
For a good example of how to use rape statistics correctly (and thereby allow yourself to be optimistic in the face of evident trends, rather than remaining solely in alarmist mode), see Ally Fogg’s article College Rape and the Importance of Measuring Success. But for a really good take-down of MRA faux-skepticism of rape statistics, which shows how to analyze the math correctly, you simply must watch this fact-filled video by C0nc0rdance (e.g. MRAs who balk at the “1 in 3 women experience rape” stat will have to face the fact that it’s at least 1 in 5, lifetime incidence, for college-age women…terrible enough…but think like a mathematician: now you have to add an average of sixty years more of lifetime risk exposure…so a total lifetime incidence of 1 in 3 doesn’t sound so implausible). That video is a good example of how to look at the data and ask the right questions mathematically [It also illustrates why the UK study finding a population lifetime incidence rate there of 1 in 10 might be an undercount…although 1 in 10 is still pretty bad]. This rape prevention website also shows careful wording and citation of sources in its use of rape statistics, making it another good example of doing it right, and a handy resource to have on hand. (You should also take note of the clever way researchers got men to admit to committing rape…and found alarming numbers of rapists in the male population.)
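As a back-of-the-envelope check on that last bit of mathematical thinking, here is a sketch of why a 1-in-5 incidence by the end of the college years is arithmetically consistent with a 1-in-3 lifetime incidence. Every rate in it is an assumption chosen purely to illustrate how risk accumulates over time, not a measured value:

```python
# If lifetime risk is already 1 in 5 by age ~22, then even a MUCH lower
# annual risk over the remaining ~60 years of exposure pushes total
# lifetime incidence past 1 in 3. All rates below are assumptions.
p_by_22 = 0.20                      # 1 in 5 by end of college age
college_years = 4
annual_college = 1 - (1 - p_by_22) ** (1 / college_years)   # ~5.4% per year

annual_later = annual_college / 10  # assume post-college risk is 10x lower
later_years = 60

p_lifetime = 1 - (1 - p_by_22) * (1 - annual_later) ** later_years
print(f"implied lifetime incidence: {p_lifetime:.0%}")      # ~42%, i.e. >1 in 3
```

So the “1 in 3” figure requires no alarmism to be plausible; it falls out of ordinary compounding, even assuming risk drops by an order of magnitude after the college years.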
If you want to campaign against rape and rape culture (and combat the faux-skepticism standing in your way), numeracy is your friend.
And So On
Ally Fogg documents another example of deliberate misuse of statistics (regarding domestic abuse), which can all too easily play into liberal verification bias if you aren’t mathematically literate, and thus aren’t immediately led to check or question the claim by its unusually convenient nature and its lack of any obvious causal model—two red flags anyone who thinks more mathematically will catch right away. I found similar mathematical chicanery in Michael Shermer’s inept defense of libertarian theology (I say theology sarcastically, because the Market is their God, in the same absurd way the State is for Marxists).
I have even caught Thunderf00t duping his minions by assuming their mathematical illiteracy was so appalling that they wouldn’t notice the scam he was pulling to manipulate them into sharing his racism (evinced by his dismissal and disregard of minorities and their concerns, and his explicit opposition to our building a racially integrated atheism movement)—see Thunderf00t Against Any Kind of Methodological Honesty or Common Sense Whatever to see that exposed. Any fan of his who was not outraged by his abuse of math there is clearly not an actual skeptic but a blind tribalist. Here we have thousands of atheists being duped…because they are mathematically illiterate—or don’t give a shit about mathematical truth.
Please. Give a shit about mathematical truth.
Curing Your Innumeracy
Start with the resources I already laid out regarding mathematical literacy in my resource for Critical Thinking in the 21st Century. There under Bayesian Reasoning I list several fundamental starting points, the most important being the books in the second half of that paragraph, which deal with statistical and mathematical skills and skepticism generally, and not just Bayesian epistemology. The latter is also important, but for a different reason than the general thrust of the present article. In fact, doing Bayesian epistemology well requires the sort of mathematical literacy those other books will give you. (For something a little more advanced, but much more thorough, on how statistics gets abused even in science, see Statistics Done Wrong, which eventually you should read, IMO.)
But ultimately, the most important thing is to think mathematically. When someone makes a claim or argument, immediately model it mathematically in your head. That will often expose something odd about it that you’ll want to question. “Wait. What exactly are you counting? Or what did that study you are citing actually count?” “What about the outliers…how large a proportion of the sample group didn’t show the trend you are claiming is normal, and why are they different? And how can you claim the result is normal when all those others exist who don’t conform to the claimed trend?” “Wait. Are the two things you are numerically comparing the things you should actually be comparing?” “Hmm. That number sounds suspiciously high/low/contrary to common knowledge. Where is your data? I want to run the numbers myself, to check your math.” “Why does your graph end there?” “Why does your graph not match what you are saying?” And so on. These are the kinds of questions you should not only be inspired to ask, but have some idea of how to reach the correct answers to (or at least vet the answers you are given to see whether they are credible or fishy).
Mathematical literacy is important.
Please take it as seriously as science literacy or even English literacy. Being a good skeptic requires it. And being a good skeptic is something we should all want to be.
For much more on this same point, and a closing list of books of value on the subject well worth reading, see my article Critical Thinking as a Function of Math Literacy.
Don’t know about the empty tomb, but 75% of writers think the stone was rolled back (only Matthew disagrees)
🙂
Oh, no, Matthew agrees. An angel did it!
The term “innumeracy” suggests that the problem is a lack of mathematical ability. However, even professional mathematicians are not immune to the trouble you describe, as knowing the myriad ways statistics can be used to bullshit you (and being on the lookout for them) requires a somewhat different set of skills than, say, algebraic geometry.
That’s a good point. Indeed, it’s a broader problem even. I recall something like that was demonstrated recently when the Monty Hall Problem was presented to a large number of mathematicians and a majority actually got it wrong. I can’t recall where that study was published, though. Analogously, scientists often get science wrong—when talking about a field or specialty outside their own.
Excellent points all, Richard. Taking it a step downward from more sophisticated analysis of these kinds of issues, numeracy is important in the modern media-soaked world for even more basic reasons. We get so much blasted at us through various forms of media that many of our intuitive ideas about the rate of occurrence of many phenomena are seriously off base. The availability heuristic pretty much guarantees this. Nobody I know has ever made known to me they have been raped, so it must be rare. The vast majority of criminal suspects I see on local news are minority, so most all criminals must be minority. Every plane crash in the world is broadcast almost immediately to me, so flying must be dangerous. Every school, mall, and workplace shooting is shoved in my face many times over, so these places must be dangerous. On the other hand, the masses of people dying or injured from backyard pools, boats, automobiles, falling down in their own homes, etc., aren’t newsworthy, so all these activities must be relatively safe. We end up spending resources on trying to prevent rare events instead of focusing on the much more common events where we could actually make a big difference (e.g. terrorism vs. preventative medical practices). You should be a lot more worried about type II diabetes than Al Qaeda. In any case, very nice article.
That’s a good point, and being aware of cognitive biases like that is another point I make (and provide a helpful bibliography on) in the same Critical Thinking resource I pointed to at the end. I highly recommend all of that, too. And you illustrate one reason why.
I’m afraid that whenever you say “mathematically” you mean “statistically” in the whole of this article. This may sound pedantic, but “mathematics” does not tell us anything about what to do with outliers or how to interpret phrases like “nine out of 10 experts claim this or that”. In fact one can have a degree in math, computer science or engineering and still suck at this stuff (yes, one would have taken courses on probability theory, but that’s not the same as taking statistics), and you wouldn’t call such a person “innumerate” per se.
But, I do agree with you that learning statistics should matter to everybody because we’re constantly bombarded with stats on many everyday subjects, and statistics & probability theory is at times tricky enough to cause even experts to confuse themselves, or to hide fraud, etc.
Only if you mean by “statistically” any discussion of ratios and quantities (since that is all I do here; I never even got to discussing actual statistical science like p-values and confidence intervals).
But that is also geometrical (ratios and thinking about them originated within geometry and reduces to geometry in concept-space).
There is very little math left that isn’t adding, subtracting, multiplying, dividing, and analyzing the geometric properties of bodies of data (e.g. bell curves).
So, yes, your point is pedantic.
I don’t think you really know what gets taught in maths and computer science. (And engineering isn’t what I’m talking about, although even engineers must use and understand statistics, as most materials and manufacturing studies depend on them.)
Having conversed quite a lot with majors in compsci, I think you have a badly misinformed notion of what they are taught. The problems that have to be solved in that field are far more on point than you seem aware. And they often are the first to notice them, because failures of the kind I warn against result in real-world failures obvious upon implementation.
Nevertheless, your general point remains valid, as I already noted upthread. Just as scientists can suck at science, so can mathematicians suck at math. And learning math divorced from application is itself a form of innumeracy. If you don’t know how to use math properly in the real world, you don’t really know how to use math.
Well, I don’t know who you’ve been talking to, but I am actually a software engineer, so it’s highly unlikely that I am badly misinformed about what compsci and engineering majors are taught.
I’m sorry, but you don’t seem to get it. Mathematicians cannot suck at math. Statistics and statistical reasoning is not the “real world application of math”. You seem to be redefining mathematics. But, ignore me. Please talk to an actual mathematician instead.
In what way is statistics and statistical reasoning not a “real world application of math”?
But never mind that. It’s a distraction. Bizarre. But a distraction.
You skipped my point that I’m not talking about “statistics and statistical reasoning.” I’m talking about doing counts and ratios. (Look at all my examples…nary a mention of p-values or confidence intervals or anything to do with statistics as a specific branch of mathematics…although I agree that’s something we should all have some familiarity with, and it’s a part of numeracy, and I agreed with you that some mathematicians aren’t fully numerate that way).
Surely if you got a compsci degree you learned how to count things and develop and question and test ratios, and about the folly of building routines on false dichotomies or counting the wrong things, and so on. You know, the sorts of things I actually talked about.
At least, good gods, I hope they taught you that stuff!
Now you are contradicting yourself. You opened by saying many mathematicians don’t know the particulars of statistical reasoning and thus suck at it (by which you mean the advanced tools used by statisticians, since that’s a particular branch of mathematics that not all mathematicians study). Now you are saying that’s impossible.
Huh?
I sincerely hope that you never make the mistake of asking the mathematician at your dinner table to calculate the tip. Then you will find out how relevant numbers are to mathematicians. As a rough guide, if you—a non-mathematically trained person—understand anything that’s written, see any numbers greater than 5, decimals or (*shudder*) percentages, or don’t see a tombstone or QED mark at the end of each segment, it’s not mathematics.
I find your comment unintelligible, Nepenthe. Sorry. No idea what your point is.
The point is that Koray is correct and what you refer to in the article as “mathematics” would be better considered “statistics and arithmetic”. Mathematics is about abstraction and proof, not data.
That’s a weird definition of a common English word. “Arithmetic isn’t mathematics.” Right. Wait. What?
It’s just silly to say that mathematicians don’t work with data, even more so to say that when they do, they aren’t doing mathematics.
Dr. Carrier, thank you for writing this post. Combating innumeracy is very important, especially when it comes to social justice issues (where many opponents of social justice have few scruples about abusing statistics for their ideological goals).
However, I was a bit confused by the following. Dr. Carrier wrote:
I read the blog post you linked and it seems that the two studies looked at a sample of college men and young men enlisted in the Navy, respectively. However, you seem to suggest that their conclusions hold for the male population overall.
I am by no means an expert in statistics (and I can be completely wrong here), but isn’t it sometimes a bit problematic to generalize the results from psychological research carried out on college students to the general population? Also, what about young men in the Navy? Do we expect them to be representative of the male population as a whole?
It is possible that I have misunderstood what you wrote (e. g. maybe you meant that the proportions found in these samples by themselves correspond to an “alarming [absolute] number” of rapists?). If so, a clarification is greatly appreciated.
I didn’t specify. But it’s always worthwhile to point this out. This is exactly the sort of question you should ask and think about.
But indeed, do think about it mathematically. One would have to suppose that men who don’t go to college are significantly less likely to commit rape for that sample bias to matter—which I doubt (otherwise, if it’s more likely, then that’s even scarier; while the only option left is that it’s the same). But one certainly could test that someday. To wit…
Navy personnel and college students are samples so divergent demographically (insofar as American homogeneity diverges at all) that the discovery that they show similar patterns suggests they are indeed representative of the male population as a whole. Note that a recent study of men in Asia (and one drawn from the general population) got substantially the same results. Yet Asia and America should surely be even more divergent populations. So the evidence does support a broad trend that appears to cross even cultures, much less sub-cultures.
Nevertheless, you are thinking like a good mathematician when you ask the question: is there a study of the US population as a whole that is done the way the one in Asia was (or even better)? We would certainly like to see that. It would generate much more certainty than inferring to the whole population from two common but divergent subgroups (and one cross-cultural sample).
Great post! Interestingly, on March 28, 2012, I emailed Habermas and asked, “what are the most recent numbers on critical scholars who accept/deny the empty tomb as one of the “minimal” facts? Were agnostics surveyed?”
He replied, “approximately 2/3 to 3/4 of critical scholars who work with this material favor the empty tomb. Yes, agnostics were surveyed, as part of the entire spectrum of views.”
I was giving him the benefit of the doubt, and just wondered if the number had changed by his count. So he was willing to go as low as 66% then. It would be very intriguing to have his actual data/methods.
How can he count agnostics, when they rarely write articles defending agnosticism, he never names any, and he is only putting into ratio those arguing pro and con?
Either he was snowing you, or he didn’t understand the question.
That he doesn’t even acknowledge that counting articles in no way conveys anything about the actual field consensus is enough to suspect he’s hopelessly innumerate, or knows he’s full of shit.
But it is amusing to see his numbers continually dropping. Now it’s 67%!
I don’t usually have time to read your blogs (75% of writers say I read them 20% of the time). I read this through, using my dodgy speed reading skills, and a clear head.
This article should win an award. It’s (that) important, and extremely well written.
Thank you.
Hey, I’ve got an example of that one – hunting.
http://www.timeslive.co.za/thetimes/2013/11/21/hunting-is-a-tale-of-two-viewpoints
“According to information given by the Professional Hunters Association of SA, income generated from hunting decreased from R901-million in 2011 to R811-million in 2012.”
And
“The president of the SA Hunters and Game Conservation Association , Fred Camphor, said that hunting boosted conservation.
He said it contributed more than R9-billion a year to the economy, while 16million head of game existed on private farms – three times the amount found in national parks and reserves.”
Both in the same article.
Okay it is different hunting lobbies – but I have never yet come across two hunting groups whose figures even slightly make sense when put together like that. My sneaking suspicion is that hunters basically pull their figures out of their voluminous backsides.
I recall Habermas proclaimed he wanted to publish a bibliography on the resurrection a few years back on his website. I asked him personally about it as I was quite excited about the idea. He exclaimed that his website would have to suffice and that no published bibliography would be forthcoming due to lack of time and mass of sources.
Apart from that:
In the footnotes of his article “The Minimal Facts Approach to the Resurrection of Jesus: The Role of Methodology as a Crucial Component in Establishing Historicity” published in the Southeastern Theological Review 3/1 (Summer 2012) 15–26, Habermas states
“My bibliography is presently at about 3400 sources and counting, published originally in French, German, or English. Initially I read and catalogued the majority of these publications, charting the representative authors, positions, topics, and so on, concentrating on both well-known and obscure writers alike, across the entire skeptical to liberal to conservative spectrum. As the number of sources grew, I moved more broadly into this research, trying to keep up with the current state of resurrection research.”
And in footnote 9 he adds: “Strangely enough, in spite of ‘bending over backwards’ to include radical writers who did not possess scholarly credentials, I have frequently received letters, emails, and comments over the years, complaining that I no doubt neglected many of the radical skeptics simply in order to make my numbers look better! Such responses seemed to border on a conspiracy theory of sorts. I confess that these often-emotional responses often made me want to drop the entire non-credentialed group from my study! It is not my fault that, even after counting them, the research still did not favor these writers or their theories!”
So, a considered “non-credentialed group” in this study of his does exist, although this seems to mean people like Doherty, Wells, etc.
p.s. Thanks for the excellent article and advice.
Within a political economy, if two unambiguous groups have seriously different poverty rates, the fact that the larger group will have more total poor members is irrelevant. The larger group will have more fat people, slim people, old people, young people, smart people, dumb people, rich people, poor people etc. simply because that group is larger.
But seriously different RATES within the same economy implies something other than a simple statistical distribution. Statistically, it guarantees that there is some different underlying cause to the poverty. More directly, it implies different paths both to enter and to escape poverty. In such a situation, it makes sense to regard black poverty and white poverty as two different conditions, very likely calling for two different remedies.
Unless the number of them is relevant. Which is precisely Ross’s point. And mine. A point you seem to have missed.
FOX News could pull the same trick with obesity: if twice as many blacks as whites were obese, convincing us (wrongly) that this meant most obese people were black, therefore obesity is a black problem, therefore white people can ignore it.
That’s not irrelevant. It’s how people in the real world are being manipulated. Which is as relevant as relevance gets.
That is a non sequitur. It makes sense to ask and find out if there are different causes. It does not make sense to assume there are other causes.
And IMO, I suspect you would find they don’t have different causes, other than that racism suppresses (and has suppressed) black advancement significantly more than white advancement. In other words, racism is the problem, which is precisely Ross’s point. How racism has that effect is multivariate, but it all comes back down to racism somewhere in the system, either deliberate, unconscious, or institutional. That would certainly be the first hypothesis I would test…it having so much evidential support already (e.g. studies showing significant bias in hiring and promotion based on black vs. white names on resumes), compared to any alternative I know. But as a good empiricist, I’m open to evidence to the contrary.
A third point, one a bit rarer to hear because atheists debating this point often don’t check, is to look at not just what and who is being counted, but it’s relative value.
Ahem, Richard! “IT IS relative value”…?
Fixed. Thanks!
Innumeracy is a problem among sceptics.
Alex Gabriel wrote recently ‘Almost no British Muslims – one or two percent – support execution for homosexuality.’
Some ‘sceptics’ must not understand how small a percentage one or two percent is.
Somehow this tiny percentage gets inflated by some ‘sceptics’ into a claim that Muslims are hostile to gays.
Reality check! Only 1 in 100 British Muslims want homosexuals to be killed, according to Alex’s figures, which he carefully compiled.
But people don’t understand risk when it is expressed as a numerical term.
That’s an apt point in many cases (certainly in Condell’s, which was Gabriel’s point), but not every negative reaction to such a statistic is innumerate. 1% of six hundred thousand people (the number of Muslims in the city of London) is 6,000 people. That’s a very large number of would-be murderers to have inhabiting a city. The error is not being alarmed that so many people in your city want to kill you, but in thinking that because 6,000 Muslims want to kill you, that therefore all 600,000 Muslims want to kill you, or that we should respond to the 6,000 by punishing all 600,000 as if they agreed with the 6,000 (the “kill ’em all and let God sort ’em out” strategy). It’s also erroneous to infer that because 6,000 people agree gays should be killed, that therefore those 6,000 people are actively killing gay people.