The fundamental goal of legitimate critical thinking (as opposed to the fraudulent kind) is to ascertain what is true, about yourself and the world. So the tools that constitute critical thinking must be tools for finding the truth. And that means tools for discovering which beliefs are false. And that means continually admitting to yourself that you might be wrong about something. If that scares you, then the simple fact is: you won’t be a reliable thinker.
Of course, the only way to know a belief is false is to find out that some other explanation of the evidence for that belief is as or more likely—and thus, discovering a false belief always produces one or another true belief. But the only way to know a belief is true is to sincerely and effectively try to prove it false and fail. So it’s always about finding the false beliefs. What’s left is true. But too many people call “critical thinking” what is instead a toolbox for defending false beliefs (or avoiding true ones) rather than discovering false beliefs (and thus finding true ones). The key moment that will make you someone who abandons rather than defends delusions and errors is the very moment when you realize this—and change tack from an illegitimate to a legitimate critical-thinking toolbox.
All beliefs are motivated—you won’t ever hold strongly to any belief you are not in some way dependent on or emotionally invested in—so you need a set of tools for managing this, to ensure your investment, your emotional motivations, don’t blind you to the truth. The only way this can be reliably done is to place your top emotional investment in the very idea of legitimate critical thinking itself. You have to be so emotionally invested in discovering false beliefs that this emotional drive overwhelms any other. That way, you can actually come to abandon beliefs you emotionally or otherwise needed or were invested in, because you are more emotionally committed to knowing what’s true. If you lack that emotional commitment, you will remain consumed by false beliefs. You will then never be a real critical thinker. You will only ever be a sham thinker (or not even a thinker at all).
For example, whenever faced with any new claim or challenge to your beliefs, and indeed even when surveying all your own beliefs that haven’t faced any external challenge yet, what you want to know is “Is it true?”, not “Can I come up with a reason to reject it?” (much less “Can I come up with a reason to accept it?”). One can always rationalize an acceptance or rejection of anything, be it true or false. So the latter approach is actually a tool for defending false beliefs (usually beliefs you don’t want to change in the face of any challenge to their being true, potential or actual). But the other approach is a tool for discovering false beliefs (“yours” or “theirs”). Then, of any claim or belief, you won’t just ask “Can I come up with a reason to accept or reject it?” You will instead ask, “Do I have a genuinely adequate reason to accept or reject it?” Which requires working out what it takes for a reason to be adequate.
So answering that latter question requires more than just coming up with reasons. Many people will claim to be critical thinkers because they are “critical” of any belief or claim they don’t like—but not similarly critical of any claim or belief they do like. This hypocrisy is extremely common even among atheists, but to see it explored among the religious, I highly recommend John Loftus’s The Outsider Test for Faith, whose analysis is convertible to any other belief-divide, not just religion. Critical thinking does not mean just being “critical” (much less only selectively critical). Just as being a skeptic does not mean just doubting everything. Doubting something there is overwhelming evidence for is adopting a false belief: a false belief in the doubtfulness of what is in fact well-evidenced. So that kind of skepticism is a false-belief generator. It is built to defend, rather than discover, false beliefs. That is anti-critical thinking.
Loftus’s book focuses on faith-based epistemology, which is usually associated with religion (and indeed, for its degrading effect on people’s ability to think well in that domain, see my series on Justin Brierley). But that’s only because religion will put the faith element forward and try to defend it as an actual epistemology (anti-critical thinking in its purest form). Faith is literally the ultimate tool for defending false beliefs (and even the most liberal of religions suffer from this: see What’s the Harm? Why Religious Belief Is Always Bad). But the nonreligious also rely on faith. They just don’t call it that, to avoid admitting the significance of what they are really doing. Any false belief you actively defend out of passion or need is a faith-based belief. Motivated reasoning is faith-based reasoning. And faith-based reasoning is motivated reasoning. They are identical.
Attacking or resisting a claim you don’t want to believe, or are otherwise emotionally invested against, is exactly the same folly. To be confident something is false when there is adequate evidence to suspect it could be true is a false belief. To be confident something is false when there is overwhelming evidence it’s true is also, obviously, a false belief. But any incorrect belief about the relation between a claim’s truth-value and its evidence is a false belief. If you believe “There is no way that can be true,” when there are actually, demonstrably, many plausible ways it could be true, then your belief is false. It would not be false to say, perhaps, “There is inadequate evidence to be sure either way,” if that is in fact the case (and not just some claim you are making that is incongruous with the actual state of the evidence). But if that is in fact the case and yet you are denying it (by insisting it “can’t be” or “has to be” one way or the other), you are defending a false belief rather than discovering (and appropriately abandoning) one. Having true beliefs includes beliefs that something is ambiguous or in an appreciable zone of uncertainty—when, indeed, something is ambiguous or in an appreciable zone of uncertainty. But when you insist something is ambiguous or in an appreciable zone of uncertainty when it is not, that is a false belief.
So a commitment to getting at the truth requires rooting out false beliefs—and all beliefs that are incongruous with the state of the evidence are false beliefs. False beliefs include beliefs held in ignorance (you literally have not run into or been told or given access to the evidence that they are false). Those are still false beliefs. You should want to root them out. And that’s easy enough: just explore the evidence. For instance, while I was writing my article on Gun Control That’s Science-Based & Constitutional I had thought that sentencing enhancements for crimes committed with guns would help deter gun violence. Sounds logical enough. Surely criminals will use guns less if they know they’ll do more time if they get caught than if they effect their crimes with less lethal means. Right? But I am a committed critical thinker. I did not assume I was right. I checked first. Because maybe I was wrong. Maybe the obvious intuitions supporting this idea are bollocks.
Well, guess what? They are. I believe in evidence-based politics, as opposed to ideology-based politics. And that means, in this case, evidence-based policing. And it turns out (as you can find with links in my article) sentencing enhancements have proven to have no deterrence value (and can end up producing a number of unexpected injustices as well; in fact, longer sentences can even increase crime rates, by manufacturing more hardened and desperate criminals). This holds for sentencing enhancements generally, but particularly with respect to firearms. Criminals, just by choosing to be criminals, have already proven themselves too irrational and lacking in judgment or foresight to be affected by such abstract principles of deterrence. They are far more likely to be deterred by an increase in the likelihood of being caught (an emotionally immediate worry) than by any increase in the punishment for it.
Well, heck. The more you know. But this illustrates one tool in a genuine critical thinking toolbox: don’t assume you’re right. Check. And have a sound standard of evidence—know what it will take to convince you that you are wrong; and make sure it’s reasonable and not absurd. Because if you are setting unreasonable standards for changing your mind, that is by its very nature a tool for defending false beliefs, and thus of anti-critical thinking. Wrong toolbox. There are many examples one could give like this. For instance, did you know the Dunning-Kruger Effect actually doesn’t exist? Neither does the Backfire Effect. Many things we think are intuitively true, and that may even have had scientific research supporting them, turn out on further examination to be false. That’s why we need that further examination. Both societally (all the science that followed these effects up and found them wanting) and individually (we ourselves should check the state of these things when we can, before simply repeating “what we’ve heard,” as I show for example in Dumb Vegan Propaganda: A Lesson in Critical Thinking).
Another example of such false-belief-defending tools is of course the most obvious: recourse to fallacious reasoning. Understanding logic, and thus how to detect fallacies in your own reasoning (and not just in others’ reasoning), is key to critical thinking. Resorting to fallacies, ignoring when you are using them, that’s anti-critical thinking. You would only do that to defend false beliefs against challenge or doubt; it can never help you root them out. It’s doing the opposite: it is preventing you from rooting them out. Likewise when you selectively try to spy out fallacies in others’ reasoning but not your own: you only do that to get rid of threats to your existing doubts and beliefs; that is a method of defending and maintaining false belief. It’s even worse when you are claiming to find fallacies in others’ arguments that don’t even exist. That is literally delusional: you are then generating more false beliefs to protect your other false beliefs! But even when you are competently and correctly rooting out fallacies in others’ arguments but not your own, you are still engaged in anti-critical thinking.
In my monthly course on Critical Thinking for the 21st Century (one of ten classes I teach, all of which expand on it with even more critical thinking skills in specialist topics) I focus on three domains of mastery that are required to be “a reliable avoider and purger” of false beliefs: standard logics (fallacy detection and avoidance); cognitive science (knowing all the ways your brain routinely generates false beliefs, and thus how to catch, control, and correct for them); and probability theory (which requires understanding Bayesian epistemology; because all beliefs are probabilistic, which means if you don’t know how probability works, you won’t reliably assign probabilities to your beliefs either: see my Advice on Probabilistic Reasoning). In each case, the difference between the skills I teach, and what unreliable thinkers instead “call” critical thinking, lies in whether the tools help discover false beliefs—or help protect false beliefs.
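As a minimal sketch of what “all beliefs are probabilistic” cashes out to in Bayesian terms (the numbers here are hypothetical and purely illustrative, not from the course materials):

```latex
% Bayes' theorem: confidence in a hypothesis H after seeing evidence E
\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
\]
% Illustrative numbers: a 50/50 prior, and evidence ten times likelier if H is true than if it is false
\[
P(H \mid E) = \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.09 \times 0.5} \approx 0.91
\]
```

The point being that how much any piece of evidence should move you depends on how likely that evidence would be if your belief were false, and that is exactly the quantity motivated reasoning ignores.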
It is thus not enough to know and learn the tools. You have to consistently employ them, all the time. If you “turn them off” sometimes, that is a blatant tool for protecting false beliefs. Because protecting false beliefs is the only reason to ever do that. You can’t just know all about logical fallacies—but then only selectively look for them (like, for example, not burn-testing your own reasoning to catch when you are using them). Likewise, you can’t just know all about cognitive biases—and then do nothing about the hundreds of cognitive biases you know your brain is relying on every day. And you can’t scoff at probability theory, making excuses for why you don’t have to put in the effort to understand it. That is inherently a false-belief defense strategy: you are essentially thereby admitting you want to be less reliable at belief-testing. Which means: you are admitting you want false beliefs; that you don’t want to discover them. Which is a tool of defending false beliefs. That’s anti-critical thinking.
I’ve provided extensive examples of this before, where I survey not just “how” someone is wrong, but what they did, methodologically, that ensured what they ended up believing or claiming was wrong. In other words, I haven’t just continued debunking false claims. Often when I do that now I take the trouble to highlight not just that they are wrong, but how they ended up being wrong: what methods they used to end up with false conclusions, and to continue defending those false conclusions—thus defending rather than detecting false beliefs. Those methods illustrate anti-critical thinking. Which means to do real critical thinking, often you just have to do the opposite of what those people are doing. Indeed, even the mere act of avoiding their methods of defending false beliefs can leave you with methods that detect them instead.
I’ve given many examples of this point before. For example, in:
- The Curious Case of Gnostic Informant: Reaction vs. Research.
- An Anatomy of Contemporary Right-Wing Delusions.
- Epistemology Test: Anthony Fauci Edition.
- Is Society Going to Collapse in 20 Years?
- There Are No Muslim “No Go” Zones.
In these I show how certain people not only ended up with false beliefs, but even elaborately defended those false beliefs. If you avoid their techniques for doing that—if you check actual peer-reviewed scholarship first instead of ignoring that step, if you check what the evidence actually is and what your sources actually say before relying on them, if you actually fact-test and logic-test your own argument before resting your reputation on it, if you try really hard to refute yourself before trying to defend yourself, then the mere act of avoiding such methods of defending false beliefs will land you in methods of detecting them. Which is exactly where you want to land—if true beliefs are what you want to have.
I more systematically discuss these toolsets—the tools for defending false beliefs vs. the tools for detecting them—in:
- A Vital Primer on Media Literacy.
- Three Common Tactics of Cranks, Liars, and Trolls.
- Shaun Skills: How to Learn from Exemplary Cases.
- Disarming the Motte and Bailey in Cultural Discourse.
For example, in Media Literacy I show how selectively choosing what sources to trust based on an invalid criterion of trustworthiness or reliability is a technique for defending rather than detecting false beliefs. To critically think for real you have to not do that. But the only way to “not do that” is to do the opposite: to establish for yourself some reasonable and realistic criteria for what sources to trust and how much to trust them—and how to test and vet their claims to ensure that stays the case (rather than simply reject their claims when you don’t like what they are, or who is telling you what they are).
In Three Tactics I show how common methods used by people who are desperately committed to false beliefs—JAQing Off, Whataboutism, and Infinite Goal Posts—are all designed to avoid confronting uncomfortable facts and arguments, to try and make them go away, rather than acknowledging their implications. This is a method of defending false beliefs—by elaborately avoiding any attempt to detect them. All three techniques aim to distract from the facts or change the subject, thus avoiding any confrontation with reality. A fourth such technique is Motte and Bailey, where someone will defend a false position but when caught doing this will retreat to a more defensible position and claim that was their position all along; until the pressure is off, then they go right back to defending the false position again. This technique is designed to help them avoid confronting any evidence that they are wrong.
That’s all anti-critical thinking. I give many more examples in Shaun Skills, which illustrate not only how to think critically about YouTube media properly, but also, in the process, how pundits on YouTube defend false beliefs rather than detect them. Those pundits should have done what Shaun did before publishing their arguments. And this is the key lesson for today. It is as important to pay attention to how people are defending false beliefs—the methods they are using to do that (and thereby to avoid discovering or admitting their beliefs are false)—as it is to the methods used to catch their mistakes and debunk them. Those methods, if the pundits had used them on themselves before publishing claims and asserting beliefs, would have uncovered their false beliefs and thus left them with true beliefs instead. But their methods, the methods they used to defend those false beliefs instead, are also methods you need to make sure you are avoiding. Don’t be like them. Be like Shaun. That is the summation of real critical thinking. To learn it thus requires observing examples of behavior to replicate, and behavior to shun, so you know how to purge bad thinking and generate good thinking in its place. From there you can build lists of error-protecting techniques and how to avoid them, as I did, for example, in How to Successfully Argue Jesus Existed (or Anything Else in the World).
There are a lot of good resources online for mastering critical thinking tools. For example:
- Buster Benson’s Cognitive Bias Cheat Sheet.
- The Farnam Street Mental Models List.
- The Gapminder Resource (start at its primer page on Factfulness).
- Javier Hidalgo’s Mastering Critical Thinking.
You can go even deeper with:
- The 25 Cognitive Biases: Uncovering The Myth Of Rational Thinking (a 33-Page E-Book).
- 200 Cognitive Biases Rule Our Everyday Thinking.
- The Fallacy Files Taxonomy of Logical Fallacies.
- Bo Bennett’s Logically Fallacious.
But in every case, what you need to focus on—so as to learn the correct use, and importance, and application of every tool—is how that tool roots out rather than defends or protects false beliefs. This is different from focusing on how they might help you find and form true beliefs. They nevertheless do that, too; in fact, they are the only things that do. But you need to focus on the flip-side of that. Because if you focus on “finding the truth,” you are still in danger of missing the real lesson, which is that you need to “find what is false” to get at the truth. Just looking for what’s true can activate all sorts of cognitive biases and fallacies and probability errors—from verification bias to gullibly believing what you read because of mistaken criteria regarding who to trust, and many other common mistakes. But if you are always thinking in terms of “How does this tool help me root out any false beliefs I might have?” then you will be poised to detect rather than fall prey to error and bias.
To locate what’s true requires trying to prove each possible thing false (each claim, each belief), and failing. Because it is only that failure which guarantees a claim’s probability is high. Because when it is improbable that a false belief will have survived a test, then it becomes more probable that that belief is true and not false after all. And because all claims and beliefs make assertions about what “the explanation is” for some body of evidence, and an explanation’s probability can only be known relative to competing explanations, it is only by proving alternatives improbable that you can ever establish anything is probable. Whereas if you fail to prove your belief false because you used weak tests or rigged tests, tests designed to make it easy to fail, all so you can “declare” that you “tried to prove it false and failed,” then you are engaged in anti-critical thinking; you are then using a tool—disingenuous falsification testing—to defend rather than discover false beliefs. But if you apply genuine burn-tests—tests that really will discover your claim or belief is false if in fact it is—and then fail, you will have verified that claim or belief is probably true. That is, in fact, the only way to soundly verify a claim or belief is true (see Advice on Probabilistic Reasoning).
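To make that last contrast concrete in Bayesian terms (a sketch with hypothetical numbers, following the logic of Advice on Probabilistic Reasoning rather than quoting it): surviving a test only raises a claim’s probability to the extent that a false claim would have been unlikely to survive it.

```latex
% Odds form of Bayes' theorem: how much "passing a test" should update belief in H
\[
\frac{P(H \mid \text{pass})}{P(\lnot H \mid \text{pass})}
  = \frac{P(\text{pass} \mid H)}{P(\text{pass} \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)}
\]
% A genuine burn-test: a false belief rarely survives, so surviving is strong evidence
\[
\frac{P(\text{pass} \mid H)}{P(\text{pass} \mid \lnot H)} = \frac{0.95}{0.05} = 19
\]
% A rigged test: a false belief survives almost as easily as a true one, so surviving means little
\[
\frac{P(\text{pass} \mid H)}{P(\text{pass} \mid \lnot H)} = \frac{0.99}{0.95} \approx 1.04
\]
```

That is the arithmetic behind calling disingenuous falsification testing a tool for defending false beliefs: a test almost anything would pass cannot tell you anything.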
This means the questions a critical thinker must become comfortable with, and ask of their own beliefs as often as they can, are:
- “Is that actually true?”
- “Why do I believe that?”
- “How would I know if I was wrong?”
We can’t of course completely vet every belief we have. We have very finite resources—even just in time, much less money and other means. But we should apportion resources by the importance of the belief. And the importance of a belief really is best measured by what it will cost if you are wrong. Not just cost “in money,” but in everything else, from time to embarrassment, to public or personal harm. The greater the risks you are taking affirming belief in something, the more you ought to take special care to make sure you are right. Whereas things it would be no big deal to be wrong about you can let slide if you must, until they become important, or pressing, or you just happen to encounter challenges to them and want to explore whether they hold up. Indeed you should aim for a strong inverse correlation between the number of false beliefs you have and the cost of being wrong about them. More of your trivial or unimportant beliefs should be false than fundamental and important beliefs. Because the latter should get the lion’s share of your critical thinking efforts to check, test, and vet. But even the trivial deserves a little attention. Even just knowing that we should always ask whether what we are being told is accurate makes a big difference, because it puts a more cautious flag on our memory of it.
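One way to gloss that apportionment rule (my own illustrative formalization, not a formula from the article) is to treat the vetting effort a belief deserves as roughly proportional to the expected cost of being wrong about it:

```latex
% An expected-cost heuristic for how much vetting a belief B deserves (illustrative only)
\[
\text{Priority}(B) \;\propto\; P(\text{wrong about } B) \times \text{Cost}(\text{acting on } B \text{ while wrong})
\]
```

On that heuristic, a belief you are fairly confident of but which would be catastrophic to act on wrongly can still outrank a shakier belief whose failure would cost you nothing.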
What’s scary about all this is that to have the required passion to know which of your beliefs are false, so as to be in any way assured of having as few false beliefs as possible (rather than a plethora of them), you have to admit that you have false beliefs—that you are, and have been, wrong about something. You may even be harboring an enormous number of false beliefs. You could even be the very sort of person you despise: an irrational rationalizer of your belief-commitments, a motivated reasoner, an anti-critical thinker, and you have just been telling yourself the opposite, avoiding anything that would expose that belief to be false.
This can be scary. But it’s an easy fix. You can just admit this of yourself—and change. You can start abandoning false beliefs by admitting you had been incorrectly defending or protecting them rather than rooting them out; but now you are committed to rooting them out. The past is past. You will now be the sort of person you like and can count on, instead of the unreliable person you had been; and you will emotionally invest in being this sort of person above all else. Truth first. Everything else, secondary. Only then will you no longer be vulnerable to emotional investment in false beliefs. And only then will all your beliefs start to hew more reliably towards the truth. And this is why excuses (like, “I need certain false beliefs”) don’t logically work; because you can’t “gerrymander” a false-belief defending strategy to only defend harmless or necessary false beliefs (even if such beliefs exist). You simply either have a reliable epistemology—or you don’t (on this point see, again, What’s the Harm?).
What I would say is less that the most fundamental belief needs to be “I will care about the truth” and more that it needs to be “I will care about integrity” in some way.
In an Aristotelian sense, reasoning from actual beneficence leads immediately to a concern for truth. We can tell that lying to ourselves and to others makes us worse as people. We can tell that lies hurt others. We can tell that lies produce negative externalities. Beyond outright dishonesty, we can see that caring about what is true has to be central (a second-order value) because only understanding what is actually real and true leads us to making decisions that are useful.
A minor correction, but the key point is that we’re not tally machines, we’re people. We want to know about truths (and prioritize which truths we discover and verify and triple-check) because we act in a world and want to maximize what is good in that world. A concern for truth emerges immediately from that, in addition to the pragmatism of wanting to actually be able to operate in whatever space we happen to be in.
Certainly virtues extend beyond the merely epistemic. Here I am only focusing on epistemic virtue though. Someone who rejects all the others can still agree with that. So it is “the least” one can do.
To explore “the most” one can do, which spans way beyond epistemic virtues of course, readers should see Your Own Moral Reasoning: Some Things to Consider.
Agreed, and agreed that functional epistemologies can’t tolerate having beliefs that are excepted from the epistemology (because there’s no way to be sure that the belief that one is excepting is benign except by analyzing it). I just have noticed that some start with epistemic virtue as if it is the only virtue, as if being a hard-nosed truthseeker is in and of itself enough to be an admirable and useful person, and thus wanted to note the context.
Although technically that would be true (as shown from The Objective Value Cascade to The Real Basis of a Moral World to Your Own Moral Reasoning), so really you are talking about a rhetorical stance and not an actual position. If you started out by pursuing that as your only virtue, you would end up with all the other virtues.
So what you really mean are people who falsely claim that is their primary virtue; but by failing to discover and thus embody any other virtues, they prove they were not telling the truth about that: they were not “doggedly pursuing epistemic virtue” after all. That’s just a rationalization, a ruse, that they employ to rhetorically justify their lack of virtue. It’s a stance. Not an outcome.
Yes, I think these people also end up being deeply intellectually shallow, because their emotional intelligence suffers and they don’t really follow the ideas; but reason on its own won’t be enough to get them there. They’ll have to actually care. They can learn that they need to, but that process of actually starting is different. So, yes, I agree that this is a rhetorical stance, not reality.
Also, on the connection between motivation and the rational need to prioritize truth, see:
Epistemological End Game
And:
What’s the Harm? Why Religious Belief Is Always Bad
End Game proves that all decisions about what to prioritize epistemically end up in normative propositions regarding self-interest. And Harm demonstrates that there is no available strategy for serving that self-interest that allows generating or protecting false beliefs (e.g. it is not logically possible to gerrymander an epistemology that will limit false beliefs to “safe” ones).
What about your critical thinking on Hinduism?
Real scholars like James Mallinson would laugh at Meera Nanda’s historical claims.
Cite a single example of James Mallinson rebutting or contradicting any claim made by Meera Nanda.
If you cannot do this, you just demonstrated you are an anti-critical thinker. You are using a tool of rhetoric to avoid discovering false beliefs. So hopefully you can show you actually had some reason to believe the thing you just said before you said it.
Let’s find out.
Sam: As always, I find myself wondering, “What the hell is your woobie?”
Mallinson is an Indologist. His work is not on the history of science. Nanda’s is. The idea that Nanda would make so drastic a mistake and that Mallinson would be able to catch it despite having a different intellectual focus is silly. But apparently you think of Mallinson as the nice open-minded guy and Nanda as the meanie unbeliever, so who cares about engaging with either of them with nuance, right?
What I find so funny is that Nanda, while anti-Hindutva, is not excessively strident. In her work on ancient Hindu science, she is skeptical but characterizes most of the claims she encounters as well-meaning but mistaken. That is an incredible act of intellectual charity given that she knows that this “We had all the science with our yoga!” is the Hindu version of “Our religion predicted science!” that all religious extremists and apologists like to trot out if they think they can get away with it. She knows that this kind of rhetoric supports dangerous Hindu nationalism, and yet she engages with it charitably. And her God Market as a book is hardly excessive.
When people react with hysterics at nuance, just like Bill and his ilk with CRT and everything else, it’s not because they’re reacting to some kind of extremist irrationality. It’s because they had a thoughtless belief (or many thoughtless beliefs) and someone bothered them by thinking about it.
Virtually everyone can learn basic critical thinking; the problem is: why should I care more about the truth when it would get me ostracized, keep me poor, etc., as opposed to accepting false beliefs (by avoiding critical thinking), which would make me wealthier and grant me social approval? In other words, why should I be more emotionally invested in the truth when it would probably harm me, as opposed to holding on to my false beliefs, which would probably benefit me? This is the heart of the problem.
The solution to that problem would be to create a society that values the truth over false beliefs, but we seem to be heading in the opposite direction.
At the end of the day it all comes down to personal gain.
A better way of putting it is, we would need to create a society where having correct beliefs benefits you more than having false beliefs.
That isn’t exactly what happens though.
People who use that method end up disproportionately miserable and screwed. Not only at the front end (you are less likely to get there with that method; that some do is then used as fallacious evidence that it is a good route there, when in fact it is not; people who disregard the concept of differential risk thus reason “there are rich people who think x; therefore people who think x are more likely to become rich,” a fallacy of affirming the consequent) but also on the back end (the few who do still get there that way disproportionately end up paranoid, angry, and perpetually dissatisfied, the opposite of “happy” or “contented” or “fulfilled,” another differential risk irrational people fail to take into account).
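To put hypothetical numbers on that base-rate error (purely illustrative, not data from the article): even if most rich people endorse x, endorsing x need not make riches likely.

```latex
% Even if most rich people endorse x, endorsing x need not make riches likely
\[
P(x \mid \text{rich}) = 0.8, \qquad P(\text{rich}) = 0.01, \qquad P(x) = 0.4
\]
\[
P(\text{rich} \mid x) = \frac{P(x \mid \text{rich})\,P(\text{rich})}{P(x)} = \frac{0.8 \times 0.01}{0.4} = 0.02
\]
```

So “80% of the rich believe x” is fully compatible with only 2% of believers in x ever getting rich; inferring the second from the first is the differential-risk mistake being described.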
Most aspiring despots and greedsters fail, and become hopeless losers; yet they keep using the few who were instead lucky as evidence their strategy is a good one. The opposite is the case. And most successful despots and greedsters don’t have anything actually to show for it—they are not happier, and as one can track by all their fears and complaints, they appear to have ended up as miserable as they started, or more so. The failure rate at achieving happiness, even for those who do acquire such resources in those ways, is not indicative of that being a good strategy, quite apart from the fact that the strategy performs poorly even at acquiring those resources in the first place.
See What’s the Harm? Why Religious Belief Is Always Bad; Money Buys Happiness? Not After You Hit Six Figures; That Luck Matters More Than Talent; and The Real Basis of a Moral World.
But yes, everyone would benefit from organizing society to reward truth-based approaches more directly and consistently and to make fraud and delusion less prevalent pathways to success. This is why a UBI system would result in more people achieving success who actually deserve it on their merits, whereas without it, who ends up at the top is largely determined by chance accident. Not entirely, of course. But to a degree we should all prefer not to be the case, and that we would all benefit from not being the case (because a society with a greater percentage of successfully empowered talent benefits everyone in that society, not just the talented).
It’s basic Game Theory: idiots who tear down society to get ahead end up in a shittier society for it, and thus forever complain about the effects of their own shitting where they eat. That is simply not an effective or rational way to a satisfying life. Cooperating toward building and sharing a non-shitty society and its benefits always is more effective. It’s only the irrationally selfish, who refuse to comprehend that, who claim their way is better, when all evidence proves it’s not. Yes, it grinds down even the rational, because they pull everyone down with them. But if they instead cooperated rationally, not only would they be better off, so would everyone else, and indeed, this would be partly causal: they would be better off because everyone else would be too.
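A minimal sketch of that game-theoretic point, with hypothetical payoffs in units of overall well-being (my illustration, not from the original comment):

```latex
% A standard Prisoner's-Dilemma-style payoff table (your payoff listed first, others' second)
\[
\begin{array}{c|cc}
 & \text{others cooperate} & \text{others defect} \\
\hline
\text{you cooperate} & (3,\,3) & (0,\,5) \\
\text{you defect}    & (5,\,0) & (1,\,1)
\end{array}
\]
```

In a one-shot game defecting looks tempting, but a society is an iterated game: defection invites defection, everyone slides toward the (1, 1) outcome (the shittier society), and sustained cooperation keeps everyone at (3, 3).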
So you think we are already living in a society that rewards truth-based approaches, but not directly. I don’t know… maybe you’re right; I’ll check out the articles you cited.
The first one you cited says that religious belief is always bad, which sounds absolute to me. Some people are afraid of death and can’t bear the thought that they’ll never see their loved ones again. If they forgo critical thinking only when it comes to the idea that there’s a god that loves them and that they’ll see their loved ones again, without subscribing to any ridiculous or bigoted ideas very often found in these religious books, I don’t see how that’s a bad thing. But I’ll read the article to gain a better understanding of where you’re coming from.
If by “rewards” you are including all outcomes (not just, say, “currency income” but also “personal contentment,” “life satisfaction,” etc.) and if by “not directly” you mean statistically but not perfectly (the correspondence is not 100% but, say, 60% or 80%), then yes.
Available improvements therefore would include: aligning all outcome markers (like “currency income”), not just some; and improving the correlation (like from 60% to 95%).
As to “religion is always bad,” yes, please actually read the article. It is annoying when you write a whole paragraph that is already refuted in the article, in extensive detail, and in fact refuting which is the entire point of the article. Wait next time.
Carlo:
Let’s address exactly that fear of death bit.
For one, notice that this delusional unwillingness to admit even that they might not live forever and might not see their friends and family again in the afterlife means they actually aren’t mourning the dead. They don’t have to do the personal growth to face those tough facts. In reality, from personal experience I think a lot of religious believers do end up doing that growth, but they do so because they compartmentalize the beliefs. They actually are facing that there’s a chance that they’re wrong in practice.
But the problem is, Carlo, that you can’t just forgo critical thinking on beliefs like that. Because the idea that there is an afterlife, that there is a loving God, etc. has implications. What if, as Richard pointed out recently, you need to believe in a soul for all that, and then the government keeping abortion legal seems to go against your idea of a soul? Well, now you are advocating for a devastating policy that harms men and women alike because you need your woobie. What if that God that makes you see your friends in the afterlife also wants you to wage some kind of holy war? Heck, what if you just want that God to also punish people you don’t like? Now instead of facing any of your own cruelties, biases, and bad behavior, you have an invincible Id sociopath of your own making to make you feel warm. You have an abusive and absent Daddy who you have to keep making excuses for but who really does love you!
It’s true that you can avoid believing in the bigoted nonsense by just changing your idea of God. But then you don’t have a community that can smack a book on a table and go into convulsions and have that dogmatic certainty. Religions that are not package deals of bundled propositions are vulnerable to being picked off piece by piece when any of them are challenged. And, more importantly, you are also guaranteed to be immune to any biased appeal to you based on a God if you don’t accept one blindly for other reasons.
Every unexamined motive you have is some motive someone will exploit. And the world is full of exploiters, marketers and con artists and demagogues and prosperity gospel preachers. Is it worth your life savings to not have to face death? No.
Yeah, I have to add onto Richard’s point here.
You are correct (and I suspect you and I will disagree as to the details of this, but I think we’ll both agree that it’s generally true) that societies in general and ours in particular often have a sense of what is good and true that is based in part on deeply flawed mythology. There is then a massive reason to lie: smile and pretend that you’re a happy housewife, nod and act as if you don’t remember that we were at war with Eastasia, etc.
But even if resistance is impossible or tactically unwise in those contexts, it is actually not in your interest to swallow those myths. I would argue that a huge amount of the very sincerely felt rage and resentment of modern conservatives in America, for example, is because they swallowed mythology that was never true. This is the land of opportunity, this is a white Christian nation, we were supposed to be on top, we were the greatest and most powerful force for liberty in history and no little upstart like China could challenge us, etc. etc. But that mythology is dysfunctional, and not just on a national level. It’s also interpersonally dysfunctional. It’s great to pat yourself on the back when you’re winning (though not actually good if you want to keep winning, let alone if you want to be a good person). But it’s much harder when you start losing.
Even when publicly assenting to a lie is essential to survival, so is knowing that it’s a lie. When you don’t, you are immensely vulnerable to whatever those with power want to do to you. They can conscript you, get you to support wars that they will lose at great cost to you, embark upon dangerous economic schemes, and so on.
It’s even worse in most relatively free societies today, where there are definitely social consequences for resistance but they are rarely lethal. Heck, they don’t even have to be that isolating. Both the extreme left and the extreme right in America have communities thanks to the Internet and prior organizing. And we actually see that a lot of the extreme right’s position comes from being isolated in the first place, people who already feel lost or confused or angry and have that used for recruitment. True, ultimately they also become more isolated from huge sections of society over time and find that their community is toxic, but that’s because it’s a community built on hate. I routinely say very controversial things, and while there have been consequences I remain relatively happy.
So being invested in holding onto lies, or even just not being invested in truth, just costs so much. It means you are fundamentally delusional. And delusions are essentially never benign. That robs you of any ability to do serious self-criticism, to determine if you are in fact the kind of person you want to be. To succeed in that kind of environment, one essentially has to be a Trump, someone who could have done nothing with his life and still lived in the lap of luxury. And his behavior is still demonstrably maladaptive even for him: He is clearly routinely miserable, doing things that it’s clear he doesn’t actually want to do, because he doesn’t have the sense of self to realize something like “Being a President sounds fun but it’s actually a ton of work and responsibility even if I try to duck the vast majority of it”. If anything goes wrong in your life and you don’t have the resources of a billionaire (or even if you do, as Elon Musk is hopefully learning; those are resources anyone could be secure with, so it’s a trivial case to note), you are at immense mental health risk by not being able to value truth.
This is all to say nothing of the fact that being a good person isn’t about getting some reward. Ironically, the rewards of being a good person come precisely from the fact that one doesn’t want the reward, which then means one appreciates it when it comes.
I know that this comment is late, but what is your knowledge of the crime rate in ancient Rome, and how does it compare to today? I’m thinking around turn-of-the-century Rome (0 BC), or whenever Rome’s height of progress took place.
I’ve recently reread Steven Pinker’s book The Better Angels of Our Nature, which argues that violence as a whole has decreased since Paleolithic times. I know you are mainly knowledgeable of the 1st century, but I’d be interested in your thoughts on it as a whole.
We don’t have any data on that.
I think Pinker only addresses two other measures of large-scale violence when comparing that era: Wars and Categorical Violence.
There are problems with the war data (numbers tended to be exaggerated in histories then). But I don’t think his trend-line suffers overmuch even when adjusting for that.
Categorical Violence means the kinds of violence we allow or consider normal. E.g. we can’t count how many slaves were beaten and murdered in the ancient Roman Empire, but we can establish that a lot of them were, which is more than zero, and so that still shows a trend-line to the present. Likewise when comparing the ubiquity of the death penalty or of casual public violence (from legal protections for men who beat women to gladiatorial games).
But I don’t think Pinker makes claims about the ancient “crime rate.” When he does crime-rate trend-lines, he only uses the data that exist. And he instead uses the categorical evidence as a proxy for that.