Here I shall assemble some advice I now realize I have always taken for granted, but that even well-meaning people sometimes don’t know, and will definitely benefit from.
The idiom “Doing Your Own Research” has become a joke largely because the phrase usually comes from people who are shockingly bad at that, but who want to claim the prestige and authority of having actually done it. Which is pretty much all cranks, and everyone lost in some delusion or other, conditions which today infect millions of people.
Others have already done a good job of writing up the problem:
- “Do Your Own Research!” by Neil Levy in Synthese (2022)
- “Doing Your Own Research Is a Good Way to End Up Being Wrong” by Philip Bump in The Washington Post (2024)
- “Why ‘Doing Your Own Research’ May Make You Believe Fake News” by David Vetter in Forbes (2023)
- “Support for ‘Doing Your Own Research’” by Sedona Chinn and Ariel Hasell in Misinformation Review (2023)
- “The Problem with ‘Doing Your Own Research’” by Melanie Trecek-King at Thinking Is Power! (2021)
- “How to Do Your Own Research” by Melanie Trecek-King at Thinking Is Power! (2021)
- “Doing Your Own Research a Little Bit Better” by Jonathan Jarry for the McGill Office for Science and Society (2022)
These are all valuable reads. But they all converge on the same result: the principal differences between “Doing Your Own Research” competently (like a critical thinker) and incompetently (like a crank) can be summarized as:
- Only reading sources that suit or agree with your preconceived notion vs. reading the best of both sides. It’s the difference between steel man vs. straw man, and trying to disprove the claims in a source you agree with rather than trying to validate them, before believing them (on which point, see The Scary Truth about Critical Thinking).
- Treating sources with a biased rather than informed assessment of their reliability. Again, cranks will distrust sources merely because they disagree with them (and trust sources merely because they agree with them), and defend this with fallacious rather than valid appeals to evidence; whereas critical thinkers will trust sources based on actual evidence of their past performance and methodologies and standards (on which point, see A Vital Primer on Media Literacy).
- Just “armchairing” reasons to reject what experts say and what evidence and logic it is based on (destructive skepticism) rather than legitimately evaluating its strengths and weaknesses (productive skepticism).
- Not fact-checking or logic-checking yourself vs. burn-testing your own facts and logic before accepting your own conclusions. A crank convinces themself that anything they believe or think is sound. A critical thinker distrusts themselves, and thus makes sure their facts are correct and their reasoning nonfallacious, and is continuously asking themselves, “Wait, is that true?” and “How would I know it if I was wrong?”
So, don’t do it like a crank. Do it like a critical thinker.
Chinn and Hasell found that “support for ‘doing your own research’ may be an expression of anti-expert attitudes rather than reflecting beliefs about the importance of cautious information consumption,” and as such, the phrase is often disingenuous. But not always. As Kevin Aslett et al. found, in “Online Searches to Evaluate Misinformation Can Increase Its Perceived Veracity,” Nature (2024), “When individuals search online about misinformation, they are more likely to be exposed to lower-quality information than when individuals search about true news,” and “those who are exposed to low-quality information are more likely to believe false/misleading news stories to be true relative to those who are not.” In other words, “Doing Your Own Research” usually means just gullibly believing whatever you read on the internet; so if you endeavor to read a lot of things on the internet, what you will end up believing is mostly going to be false (because so much of what’s on the internet is false).
The process basically goes like this: you’re told something (in a mere matter-of-fact way, and maybe by an authority you have already been primed to dislike or distrust); you then find lots of people claiming that’s false (and employing a lot of rhetorical devices of persuasion and manipulation that you have no defenses against); and you don’t find equivalently-artful rebuttals (because those take work to locate, and are already coming from authorities you were primed to distrust or dislike, while most rebuttals you encounter will not be wholly competent); so you think the argument ends there, and side with the contrarians. This is irrational and uncritical. But it’s what most people do.
In reality, we confuse large numbers (“ten critics”) with large frequencies (ten voices among a thousand is only a consensus rate of 1%); we confuse artful rhetoric with reliability; and we overrate the trustworthiness of groupthink peers and underrate the trustworthiness of outsiders (like “scientists” and “politicians” and “bureaucrats,” who occupy social universes either alien to you or that you have cultivated a disdain for). The result is that “Doing Your Own Research” will cause your beliefs to deviate more from reality—unless you learn how to do it critically, which means consciously working to compensate for all these biases. Which requires being alert for them and taking steps to bypass the ways they disinform you—such as by taking outsiders more seriously and insiders more skeptically, being aware of and actively seeking to tell the difference between rhetoric and logic, and between assertions and evidence, and not confusing numbers with frequencies, amateurs with experts, or gut feelings with careful reasoning.
“Doing Your Own Research” only works if you adopt the perspective of the very people who invented the science of rhetoric specifically to solve this problem over two thousand years ago. The ancient Greeks realized that in a democracy, where persuading large numbers of people was the only way to effect policy, a systemic problem arose. Two sides will assert competing facts, and make competing arguments. Whose argument do you trust? Whose facts do you trust? How do you navigate this? Worse, studying this question so as to answer it will also make someone better able to manipulate it (one of the trial accusations against Socrates was “teaching students how to make the worse argument sound the better,” an outrage that, true or not, contributed to his criminal execution). So now you get an arms race, between rhetors skilled at manipulating you into believing they have the better argument, and those actually trying to promote the truth. The only way to rationally cope with this is to allow no bias (do not favor sides based on emotion, desire, or in-group/out-group considerations) and simply analyze the logic and check the facts yourself (which is why the answer to the question Will AI Be Our Moses? will always be no).
So, just because you find ten people on the internet craftily arguing against some establishment claim or other, that is not grounds for doubt. You have to compare the quality of argument on both sides: who is lying, who is concealing pertinent facts, who is making mistakes, who is relying on fallacies. Not who you “assume” is doing these things, but whom you can prove with evidence is doing these things. This requires skills (which I teach in my monthly course on Critical Thinking for the 21st Century).
How to Vet a Wikipedia Article
Consider Wikipedia as an example. It has been demonstrated to be about as reliable as any other encyclopedia—which means, it’s not great, but it’s better than asking Uncle Joe. The principal value Wikipedia has is that it tries (even if not always succeeding) to enforce the notion that every claim in it should have a cited source—and it is that source-list that is of value. You don’t have to trust Wikipedia; you can instead use it to run down a focused set of sources of information and vet those. This is in fact how real scholars use reference books (like subject dictionaries and encyclopedias and handbooks): not as authorities in themselves, but as summaries with a bibliography that can start a breadcrumb in investigating a subject. When an academic cites an encyclopedia entry, they do not mean they trust (or even that you should trust) everything it says, but rather, they mean that you can get started there in “doing your own research,” because they vetted the summary (as experts themselves) as at least “okay” and (more importantly) they find its source list to be good enough to get started with.
It helps to have experience enough to know how Wikipedia can be distorted (which it even openly explains to you) and which subjects there are more prone to colonization by cranks and ideologues (like religion and the soft sciences, and any controversial subject), and which are more likely to be reliably policed by real experts (like math and hard sciences, and uncontroversial subjects). But I have written up some guidance on how to build that experience already in A Vital Primer on Media Literacy. I have also already covered the general skills of critical thinking across my whole critical thinking category, but a good place to start is, again, The Scary Truth about Critical Thinking. I’ve also run an example of Wikipedia reliability before, in How Not to Act Like a Crank: On Evaluating Pliny’s Alleged Mention of Nazareth, which also gives several examples of how to “Do Your Own Research” for real (and not like a crank).
The basic procedure you should always follow is:
- Chase down the sources. A cited source might in turn just summarize and cite yet another source, and so on down the line. Ideally you want to follow the breadcrumb all the way to the primary source(s). Which usually means some original study or text—essentially, where the trail ends in print, and on which all subsequent sources for the claim depend.
- Vet each level of that breadcrumb. Does each source correctly describe what is in its sources? Or is there a telephone game of distortion going on? Make any necessary adjustments to revise the claim in light of what you find. Then vet the quality of argument and evidence offered for the claim in the most primary source you could get to.
- Try to disprove it. Can you independently find expert sources that correct the claim or call it into question? Are they any good? How strong a case do they make? Which side of any dispute has the better case? Has a consensus of experts fallen on one side or the other, or is it still substantially debated even in the field? Is one side concealing important evidence or lying about it? Or arguing fallaciously?
All of this controls for framing bias (what the editors of Wikipedia chose to include or exclude, what they chose to emphasize or deemphasize, and how they chose to describe everything). Someone can easily manipulate you with framing. So don’t fall for it. Step outside the frame.
Hence “trying to disprove it” means that, for any given claim you need to rely on, you check on your own to see if there are any legitimate refutations or challenges to it, or any well-supported evidence against it. You must look for high-quality challenges (which generally means, from experts; amateurs can only breadcrumb you to experts). And you must assess the merits of their challenge. That may mean looking for high-quality defenses (which means, again, from experts; amateurs can only breadcrumb you to the experts) and comparing their merits.
For this, a very useful tool now is Google Scholar. Ordinary search engines (like just “Google”) can help you locate critiques or contrary claims you can check, vet, and compare (and sometimes entire sites are set up to help with this, like RationalWiki and TalkOrigins); or that can breadcrumb you to the highest quality sources, which you can then check, vet, and compare. Always practice using different keyword combinations until you find the best sources or are sure there aren’t any (or none worth the bother). But Google Scholar gives you an automatic filter here. It can more quickly zero you in on the professional publications (articles and books) of scientists and academics (it’s not as good as professional databases, but it has the advantage of being free).
You might not find direct critiques of a claim this way. But you can likely find articles and books on the same subject, in which you can find what experts actually are claiming about the matter, and then compare that with the claim you are trying to vet the merit of. You will, as always, find lots of garbage and rhetoric and apologetics and propaganda. But once you learn how to tell the difference between that and legitimate research and argument, you can start to tell if there is any actual disproof of a claim—or any important qualifications that need to be made to it.
Pause on First-Person Sources
I mentioned the goal as to find and vet the most primary sources in print (online or not). But it is worth adding that if a primary source’s authors or researchers or witnesses are still alive, you can go even further and try talking to them—but only if for some reason you actually need to. My advice there is, don’t contact private persons at all (unless you are literally a professional journalist); and when you try to contact public experts or personalities, make sure you do four things: (1) use an appropriate channel of communication (e.g. don’t call them at home); (2) be professional and polite; (3) keep your query very brief and clear (long messages will be ignored); and (4) prove to them you’ve already done some of the work and thus aren’t just lazily trying to get them to do the work for you (and for no pay at that). Showing that will ensure they feel you’ve put in enough labor to deserve some labor from them in return. It also shows you know what you are talking about, and thus explaining things to you won’t be too much of a chore.
So, when doing that, without being verbose, do mention what you have already checked or found and why you need to consult them now. For example, give some evidence that you actually read their study, and actually understood what it said, and that your question is indeed something it doesn’t already answer. Which means you have to have actually done that—pretending to will guarantee you will be ignored, because they will see at once that what you are asking is already answered there, and thus you didn’t actually read it. In short, don’t come across as lazy or dishonest or disingenuous. Come across as someone who has done the work and earned an answer. And do all that as politely and in as few words as possible. Even if they respond but can’t answer your question, you can just as politely ask them if they know anyone who can answer it, so you can redirect your query—and/or ask for a resource or two that might help you (like a book or paper you can start another breadcrumb with).
Pause on Peer Review
Peer review is not magic; but it does matter. A paper or book that has been vetted by expert peers and released by a serious (reputable, non-crank, non-mill) publisher counts for more than those that have not. You can still find lies, errors, and garbage in peer reviewed work. And useful work exists outside peer review (especially in fields awash with biased gate-keeping). But you know peer reviewed work has at least passed one level of expert review, whereas any claim that has never survived that review is more often not going to be worth the trouble of even reading, much less citing. There are exceptions, but you need to evaluate that yourself by seeing whether an author is applying real “Do Your Own Research” and “Critical Thinking” skills, such as I am training you in here and in the other articles linked above.
In effect, everything outside peer review is just like Wikipedia: potentially useful for having provided you a handy summary and source-list you can run down so as to evaluate the claims it makes. Everything outside peer review requires far more vetting from you, and thus a lot more work, and thus a lot more selectivity in which books and articles in this category are even worth your time. By contrast, peer reviewed work should be easier to vet: it will (more often) be better organized, its source list will (more often) be on point, and it will (usually) explicitly describe its methods and premises, and how its conclusions derive from them, making it easier for you to spot “bad science” (and history and philosophy also count as “sciences” for this point) and easier for you to zero in on the essential points (as superfluous material and handwaving will usually be at a minimum).
You can actually tell when peer review fails at all this. And when you look for the same evidence that would tell you that, works outside peer review will likewise advertise how unreliable they are. The same skills, meanwhile, will help you detect the opposite: when a non-reviewed work is not handwaving and meandering but cutting straight to the point, and when it is clearly explaining its premises and citing good sources that actually do establish them, and reaching conclusions from those premises with valid logic. And indeed, sometimes a case is so easy to prove, experts aren’t even needed: because even among amateurs, good critics will point you to easily checkable facts and use clear logic that settles the matter (for examples of what that looks like, see Shaun Skills: How to Learn from Exemplary Cases).
Pause on Consensus
Always get a sense of the actual expert consensus, and what it is actually based on. But to do that, you can’t just count books or articles (since cranks and ideologues can swamp a field with those). You have to look at what views are most widely held among mainstream academics (and mainstream means legitimately credentialed, and not devout—much less paid—evangelists). You may still have to vet that consensus, assessing on what evidence and logic it is actually based and thus coming to see how strong or weak it really is (see On Evaluating Arguments from Consensus and The Korean “Comfort Women” Dust-Up and the Function of Peer Review in History). And a consensus can be fragmented, in flux, or difficult to suss out (see, for example, Galatians 1:19, Ancient Grammar, and How to Evaluate Expert Testimony and Imperial Roman Economics as an Example of an Overthrown Consensus), or sometimes even pretended or fabricated (see, for example, Is 90% of All EvoPsych False?). But being able to discern and summarize a current consensus and its reasoning will help you not get taken in by cranks, who might lie about what the consensus is or on what evidence or logic it is based.
Shifting the burden of evidence onto someone arguing against a consensus is legitimate. That’s literally the point of an expert consensus. So you need to know what the consensus actually is, and thus whether the burden of evidence against it is being met. Above all, one of the most useful outcomes of getting at a consensus is that you will also start to get a sense of what debates and disputes remain even within that mainstream consensus.
You also must learn to distinguish (and, if necessary, to correct) asserted confidence levels. “Possibly” and “might be” do not mean “probably” or “is.” Even “probably” can be variable. If something is probable enough, you can say it is the case (it is then a “fact”). But sometimes something is more probable than not, but still not so probable as to be sure. Describe these conditions correctly, and don’t succumb to equivocation fallacies, where on one page of your own or someone else’s analysis you are talking about something possibly being the case, and then on the next page this has transformed, as if by magic, into talking about it probably being the case. Stay consistent; and spot when others aren’t. It matters.
The same goes for all similar wording, like “plausible,” which means “not more probable than not, and maybe even not very probable at all, but nevertheless still probable enough to take seriously as a real possibility,” as opposed to the implausible, which is too improbable to take seriously at all (until enough evidence arises to make it plausible). “Plausible” thus falls in the realm of “suspicion” and “probable cause” but not in the realm of “proved” or “surely likely.” Maintain that sense. And be on the lookout for proponents of conclusions who aren’t doing this.
A consensus, for example, can exist merely as to whether something is plausible, and it might change the more narrowly you define the specialty. Don’t confuse a consensus as to plausibility with a consensus on probability (much less of fact). And definitely note when specialists have a different consensus position than nonspecialists. For example, you might get the impression that a consensus exists among “bible scholars” that the Gospel of John was written independently of the Synoptic Gospels, but when you check what the consensus is among actual published specialists in the Gospel of John, you find the consensus there is exactly the opposite. It is clear which consensus should prevail there.
Reasonable vs. Irrational Conspiracy Thinking
Finally, there is one other feature of crankish “Doing Your Own Research” also to be on your guard against, which is specifically conspiracy thinking.
What distinguishes conspiracists from those who expose real conspiracies is that the latter actually base their conclusion (that there is a conspiracy) on logically valid results from actual evidence. In other words, they make a logically sound case from real evidence that a conspiracy exists—as was recently accomplished, for example, against both oil and tobacco companies. By contrast, crank conspiracists simply declare a conspiracy exists. At most they might marshal a congeries of facts that they purport to be evidence of it, but that logically isn’t. Sometimes they will even conjure lies to this same end (as for example when Jesus historicists claim that mythicists are concealing information that in fact they explicitly discuss, or are misusing sources when in fact they are not). Either way, it’s the same tactic: to eliminate evidence with accusations of a conspiracy producing it.
The significance of this for “Doing Your Own Research” is that the cranks will dismiss evidence against them with an assumption that a conspiracy exists among experts to hide information or lie about the facts—just as really happened with those oil and tobacco companies. That this thus really does happen will even be used as evidence of its plausibility in any other case, disregarding the scale and quality of evidence that proved those conspiracies real. So the sequence of nonsequiturs goes like this: a significant (sometimes even vast) body of evidence disproves what the conspiracist wants to believe (like, say, evidence proving We Do Need to Do Something about Global Warming or That the Earth Is Spherical); so the conspiracist declares a conspiracy exists to hide or lie about the evidence (as that is the only way all that evidence could be wrong); and then concludes all of that evidence can “therefore” be dismissed as fake or misleading; and thus what remains is just evidence of what they want to believe (like, say, that there is no global warming or that human behaviors aren’t responsible for it—or that the Earth is magically flat).
“Doing Your Own Research” then comes to mean “reading a bunch of conspiracy thinking bullshit that supports this nonsequitur-chain” and then denouncing critics as rubes who “didn’t” do this and thus “don’t know what they are talking about.” That no step of this reasoning is rational is what has justifiably led to widespread disrespect for the phrase “Doing Your Own Research.” But that cranks have abused the phrase this way should not cause anyone to disrespect the concept of “Doing Your Own Research” in a real, not crank, way. Which means, actually investigating the evidence critically (not gullibly or emotively), so as to understand why experts believe what they do (and not just what they report their conclusions to be). As Levy and others explain in their discussions, real “Doing Your Own Research” can and should lead to greater understanding, if it is conducted competently. The conspiracist plays on a truth—that we should not just gullibly trust what experts say, but vet whether they are telling us the truth or even reliably discerning it in the first place—to push a falsehood: that we should never trust what experts say.
As even Aristotle would have explained, extremes are always bad, and the ideal lies in the balance between extremes, his “golden mean.” Total skepticism is just as bad as total gullibility. Rational skepticism finds the equilibrium, the “mean,” between those two extremes, where you are as trusting and as skeptical as is reasonable to be. The goal is to look for what level of evidence is sufficient to warrant trust, and then remain consistent in your application of this standard. The crank will do neither. For neither will they have a reasonable standard by which evidence can change their mind, nor will they apply any standard consistently, setting completely unreasonable standards for anyone who says what they don’t like, and wildly gullible standards for anyone who says what they do like.
Don’t do this.
Summary
Real “Doing Your Own Research” means being reasonably critical, looking for the best case on both sides of an issue, and comparing their merits by valid metrics—which means, not your emotions or biases or assumptions, but by their actual cited evidence and actually articulated logic. A real researcher seeks to be informed; to actually understand an issue. A real researcher traces claims to their primary source, and looks for any evidence against it of comparable quality before relying on it. A real researcher weighs sources by their objectively-evidenced track-record of factual reliability, and not by their politics or position. A real researcher is neither selectively gullible nor selectively skeptical. They apply the same standards—the same warrant for belief—in every direction. And that all means that a real researcher doubts their own beliefs and findings, until they can be sure. Before running off with a premise, they ask themselves, “Wait, is that true?” And they check first. Before running off with a conclusion, they ask themselves, “Wait, how would I know it if that conclusion were false?” And they check first.
Please do all that. Otherwise, please don’t “Do Your Own Research” at all. Because science has proved that that will guarantee your beliefs will be increasingly false, not increasingly correct. To reverse the polarity on that outcome, you have to do it right.
Wonderful! This puts together so much that should be taught to and be understood and internalized by everyone. Intellectually mature critical thinking skills are the antidote to so many of the challenges we face, and also to challenges that are not well recognized.
It’s critical to point out that the people “doing their own research” fallaciously never, ever, engage in total skepticism.
They act like they do, but in reality they use all sorts of horrible heuristics: in-group biases, accepting anecdote, overtrusting their own reasoning and senses and those of other people, etc. They gullibly accept many who pose as non-authorities, not realizing the hilarious irony that, to them, those people therefore are authorities. And they won’t even do that consistently, as one can realize the moment one sees conspiracy theorists of different stripes and ideological backgrounds debating and throwing word walls at each other, while both loudly insist they are independent thinkers. They would be better off being total skeptics. Yeah, maybe they wouldn’t leave the house out of fear the car would explode, but they wouldn’t be getting absorbed into cults.
Insofar as progress can be possible with some of these people, starting there, and pointing out that they have non-critically swallowed huge swaths of data from people motivated to sell them something (literally and/or figuratively), is a tactic. It quickly prevents them from constantly pivoting to the “YOU DON’T BELIEVE EVERYTHING YOU’RE TOLD, SHEEPLE” virtue signaling. It makes them justify their epistemology, which some can quickly realize is full of holes. And it forces them to recognize that, in most situations in the real world, “It’s not the ‘official story’” goes not a whit toward telling you what “it” is, because there are countless mutually exclusive alternative scenarios.
A point I like to start with conspiracy theories is something that Chomsky inoculated me with: Institutional analysis.
“Okay, let’s say your conspiracy was true,” I’ll say. “So why did that happen? What vulnerabilities were exploited? Why did those people go that bad?”
As an anarchist, telling me that the government or a corporation or some other official source could have done something bad is no news. It’s just another bit on the pile. I pointed this out frequently to 9/11 truthers: Why did they care so much about whether Bush did 9/11 when Bush was committing a hundred 9/11s on the planet, in plain sight?
Any honest institutional analysis pretty quickly dispels most conspiracy theories. One quickly realizes that the proposed theories depend on capabilities that are actually unavailable to those institutions, acting in ways that go against their incentives, depending on a unity of interests that never actually accrues, ignoring how individuals could defect in the prisoner’s dilemma to immense advantage. For example: All moon landing conspiracy theories fail immediately when you just ask, “Why didn’t the Russians say it didn’t happen? They could prove it beyond a shadow of a doubt with radar data and their own astronomers. They would have had an immense interest in disproving the US government’s claim to have gone to the moon.” Responding to this requires irrational epicycles about collusion between every major world power for no gain.
An important thing to do with conspiracy theorists is to get them off their script. These are often actually people of at least moderate intelligence who went down a rabbit hole and had a series of facts given to them in conjunction that seems to make sense. They will keep going back to those facts unless they are off balance. In this case, even the flat Earther who is so brazen as to insist on a global conspiracy that makes the fake moon landing conspiracy seem to make sense doesn’t actually know their history all that well. So you can pressure them on all the other times that the Soviets visibly didn’t cooperate with the Americans. “Okay, so you’re saying these global elites collaborate on lying about landing on the moon—something they didn’t even need to do, because you’re saying that the landing on the moon is just to sell the flat Earth, which you also insist they already did—but they couldn’t collaborate to not have a Cuban missile crisis, or a Vietnam war, or the Soviets’ invasion of Afghanistan, which then helped produce al Qaeda?” They’ll usually insist that all the history is lies at that point. “Okay, prove it. Where’s your evidence?”
(Of course, that can activate an irrational fear response in them based on their ignorance and an annoyance that you “keep changing the subject”, a great irony as they will do that instantaneously if cornered, but that can at least be a bracing jolt that may get them actually thinking once they calm down).
More apropos to the core point of this article, doing those kinds of tests on yourself also helps. Use another area of expertise that you have and ask, “Does this make any sense?” HBomberguy’s flat Earth video does this really well: he instantly knew what JPEG compression was, so he could instantly see, and then effectively demonstrate, a flaw in an argument. A ton of conspiracy theory people and sellers of bad information actually quickly lose critical sectors of their audience when they pose as if they have knowledge that they don’t in a field the audience knows about. We’ve seen that with Elon Musk recently, with his posturing about gaming knowledge that led even alt-righters and alt-lighters like Asmongold to go after him, and with Illuminaughti, who kept on losing audience after audience when she made lazily researched videos on niche topics and the people in those enthusiast communities immediately saw she did incredibly poor research. Using one’s own areas of expertise to sanity-check particular claims can be very useful.
Lacking that kind of foresight (the foresight to forestall audience collapse with responsible research rather than clickbait) is why these people are doing this in the first place, though; so it is a death-loop.
If they were just grifters, they would either not depend on audience bases but just clickbait new eyes constantly for sales conversions, or they would stick to subjects they can maintain the appearance of authority in (and know how to couch everything they say to avoid undeniable falsification). But rather, people like Musk actually think they know everything and are good at this; so they cannot foresee that they will fail, and thus certainly cannot foresee the consequences of failing. And those who are untouchable (like Musk or Trump who can never become “not rich” no matter what disastrous decisions they make or outright crimes they commit) will never learn, because they can just burn a base they have lost and walk away, and not care.
Meanwhile, the things you and I are talking about, regarding their strategically inconsistent skepticism and claims of doing their own research, can be umbrella’d under a common strategy: cranks and the delusional (as well as grifters emulating them for cash, which is why these are often hard to tell apart) will trade on “respectable” modes of argument (to gain the prestige that attaches to them), fuck up the argument (use it fraudulently or incorrectly), but convince themselves or others that simply because they used a respectable argument, they have therefore successfully defended whatever claim.
For example, ad hominem.
Accusing someone of ad hominem can be a valid critique, and has prestige as such (even if they don’t know what it’s called, people recognize that ad hominem is a fallacy and why it is a fallacy); so cranks will misuse the accusation, to claim the prestige of the argument without having actually used a valid form of it.
For example, they will be caught being incompetent and lying, and then accuse their critic of an “ad hominem” fallacy for having “disparaged” them as liars and incompetents. But that is not the fallacy (catching someone relevantly lying or not knowing what they are talking about is a pertinent critique). Yet because it “looks” like the fallacy, they can falsely claim it, and thus convince themselves (or any ready dupes) that they have made a valid point and thus defended themselves.
The same thing happens when a target is a woman and any criticism of what she says is labeled sexist for “attacking her because she’s a woman” (without any evidence that that is why she was critiqued at all). The prestige of “that was just sexism” or “her critic is just a sexist” still attaches, yet without having been earned.
Likewise with “doing your own research,” which is a valid criticism (e.g. people who blithely remain ignorant of corporate or political or media manipulation need to “do their own research,” which is the whole basis of the actual concept of “woke” as waking up to what’s really going on), except when it isn’t (when the “do your own research” requested or completed is bogus). But the prestige of the argument still attaches even when it isn’t earned. This is why cranks resort to arguments like this.
This is also why they are inconsistent: they switch to whatever position they need to take to defend themselves, and as long as the resulting contradiction is more than two steps of reasoning away, they are immune to noticing and thus acknowledging it. If it’s ever called out, all they have to do is change the subject or move the goalposts or gaslight. Because it takes too many steps of reasoning to make it “plain in one go,” many “exit strategies” exist to rescue them psychologically. That none of them rescue them logically is irrelevant to maintaining the delusion or grift.
Notice that all crank “red pill” style language falls under the same umbrella. Literally in the manosphere (whose members actually say “red pill”), but every conspiracy and crankery has some equivalent notion of “going woke,” and thus of how everyone else is asleep and thus ignorant of “what’s really going on.” Hence every crank, even misogynists, invents their own version of “woke” (and doesn’t get the irony).
They do this because it is a respectable argument. The actual original woke folk were right, and as such, their idea spread across entire populations, to the point of becoming a threat to the cranks, grifters, and delusionals, who thus had to attack “the undesirable” woke while defending their own—which they can do by calling it something else (like becoming “red pilled” instead of “woke,” another example of hiding a fallacy behind more than two layers of reasoning so the average delusional won’t find it).
In this way cranks and delusionals (and the grifters mimicking them) can trade on the prestige of the “idea” of “going woke” without actually earning it (because real woke is based on a critical investigation of reality, whereas theirs is not; it is only constructed to look like it is).
Ok, great, we talked about being enthralled with the IDEA of doing one’s own research rather than being committed to the PRACTICE of same. We talked about getting the ego (which can include “emotion, desire, or in-group/out-group considerations”) out of the way by nailing it to a piece of wood (because we can’t see past it otherwise). But when we talk about rationality versus irrationality, we’re unavoidably talking about the conscious versus the subconscious, and it’s like Jung said (paraphrasing): if you’re not conscious of the subconscious, you’ll think it’s fate, and you won’t even recognize that you have free will (which is what I was going through last year; that’s why I didn’t recognize that I had freedom of will!). Aside from their egos, people often can’t see past their own phobias. Every time I examine anyone’s bigotry, I’ve discovered that it traces back to phobia, which I’ve noticed always traces back to unhealed trauma. People don’t like to admit to themselves that they’ve been traumatized, because the ruling class, which maintains its power by traumatizing us, subconsciously instilled in us the victim-blaming mentality… through traumatizing us… through programming us into seeking external validation. Through the carrot-and-stick approach (when the alternative to external validation is punishment, it’s no surprise how sought the former is!). You said yourself that sophistry has “prestige” incentives through its deceptive appearance!
As they say, abuse begets abuse. The same abusive way Christians are indoctrinated tends to be exactly the way they epistemically abuse everyone else now. The insults, the veiled threats, the browbeating, the gaslighting, the exaggerated displays of confidence, and the misappropriation of prestige, thereby misusing arguments to sound well-reasoned, then attacking anyone who would call that out.
But as with anyone mentally ill, there is no path forward, no healing, without the victim first admitting they are a victim and need help. We can’t cause that to happen, except by continually rousing their cognitive dissonance until they realize it themselves (as every ex-Christian will attest is the only way they ever got out). And by the time they get to that point, they will get themselves out. So they won’t need any particular help from us beyond what we’ve already done to make truth, reason, and information available to them.
I think this is actually too reductive, Mario. And I think it’s trading on a far too vague definition of trauma, resulting in more of a deepity than a useful tool in combating these issues.
In the sense that trauma is literally any bad thing that happened to someone (even an event the person doesn’t recognize as bad), sure, this could be true. We often cannot pinpoint our own core reasons for the way we are, and negative experiences are just as likely to build our character as positive ones. But using this definition, it doesn’t help anything. There’s no way to meaningfully group and address this level of negative experience, and there’s also likely no way an outside person (perhaps not even the person themselves) could ever even discover all the events that contributed here to “heal”. It also doesn’t follow that healing any particular “trauma” here will change one’s ideology (which is made of an entire web of beliefs and fears).
In the scientific definition from psychology, this is likely false. People do not require a real or perceived threat of harm or death to come to poor conclusions. They need not have a pathological fear (phobia) of something to be opposed to it. Now sure, certain segments of our political system tend to trade in fear in order to organize their base, but this is not necessarily pathological in nature. More often it’s simply misinformation (or lack of information). A midwestern dad who is voting to close the southern border likely doesn’t perceive a Mexican immigrant as an immediate threat to himself or his family, probably isn’t afraid of them, and if actually asked probably doesn’t have any particular problem with them. If he believes immigration is negatively impacting the economy and political landscape of the nation (misinformation), then it is a rational action to reduce or eliminate that immigration.
I think the narrative you are proposing actually provides value, but it covers only half the actual battle. Using that way of thinking should help you to empathize with opponents and look for places to meet them where they are. It should give you more sympathy for opponents, which is much more likely to lower their emotional defenses. But if you follow that up by insisting they have been hurt and need to heal before they can see the light, it’s going to backfire. If instead you started with empathy, then moved on to education, you would probably change more minds. You need both parts to actually change minds.
Oh dear… When I speak of phobia, I speak of fear that has roots in the subconscious. The irrationality comes from the need for the ego to rationalize, to abstract the subconscious, irrational fear. It’s similar to how “judenhass” became “antisemitism,” or Lee Atwater’s famous spiel about how in the 50s white supremacist political activists were culturally allowed to display their hatred for black people by openly using slurs against them, but then, in the 70s, they had to switch to talking about things like “forced busing.” In each case, an abstraction is employed for the purpose of papering over hatred. “Antisemitism” is merely an abstraction of judenhass that has taken on a life of its own! Just like Atwater’s abstractions for hatred took on lives of their own. You can point to any position on the tree, but the roots are planted in hatred. How the roots express themselves in the ego of the individual is less a concern than the subconscious roots themselves. The so-called “hero’s journey” is merely for the ego to plumb the depths of the subconscious and flip the switch on their inner light and step through their shadow and all that sort of thing! When you crucify the ego, it resurrects in a new form! Nietzsche’s abyss is only staring back at you because it is the subconscious, and the subconscious is on some level a distorted mirror of the conscious self, and vice versa! How can we get bigots to overcome their bigotry if they themselves, their egos, are not willing to plumb the depths of their subconscious, because that’s how deep the roots of bigotry lie! It’s like how I didn’t notice I was seeking external validation until I noticed that I had self-image issues because I was seeking validation from a completely external perspective that I had internalized! It took an extraordinary amount of focus for me to see past my own ego in a specific moment to see that that particular external perspective was absolutely worthless!
And I might as well shake it off like a camel shakes off dust! That’s how I stepped through my shadow! Sure, I could’ve figured this out in therapy, but now it’s a story! 🤪
I wonder what the difference is between pseudo-scholarship and scholarship that is real but wrong. Both can be wrong, though it could also happen that pseudo-scholarship is actually right sometimes.
Calling research crank research is interesting. I have seen a lot of that, but I think the New Testament was written by crackpots, so you have people who get PhDs and are experts in the writings of ancient crackpots. When modern crackpots discuss those writings, the scholars dislike their crackpot opinions, but ironically they themselves are experts in the writings of crackpots. In 2,000 years, there may be people who are experts in the writings of today’s crackpots. You could reject their work today, but in the future there might be people like you who are experts in their work.
There are experts in crankery in other fields (historians of WW2 can specialize in studying Holocaust Denial; historians of modern religions can specialize in UFO cults; historians of environmental history can specialize in climate denialism; historians of anti-semitism can specialize in lizard theory; there are secular historians of Mormonism; etc.). So that’s not odd.
As to the question, what demarcates pseudo- from real sciences, that’s a huge subfield in philosophy (see Oxford, Stanford, IEP, even Wikipedia).
I have proposed answers in various places: see my discussion of what demarcates them in Is 90% of All EvoPsych False? and (in respect to philosophy itself) You Know They’re a [Good|Lousy] Philosopher If and in respect to history in my subsection on Infinite Goalposts.
From the latter I find the common theme to be that a typical crank “never has any defensible examples, rarely knows what he is talking about, gets a lot wrong, makes stuff up, never admits an error, and is generally” a “frustrating delusional fanatic.”
In more formal terms:
By that definition, all apologetics is pseudoscholarship. This does not depend on it being deliberately dishonest (a pseudoscholar might delusionally believe they are doing everything correctly), but it can be deliberately dishonest (and sometimes you can tell, but not always).
Also, though a pseudoscholar will by definition be someone who tends to produce (or only produces) pseudoscholarship, it is also possible for a real scholar to do so. In that case, they have and know all the correct skills and standards and rely on them most of the time, but resort to pseudoscholarship only occasionally (either out of laziness, or a dishonest agenda, or a particular delusion).
I provide another checklist for spotting pseudoscholarship in my Evo-Psych article, which I here have rewritten to apply to history:
In short, pseudoscholars “don’t check” and “make stuff up” and “avoid logic-and-falsification testing.” And they do this repeatedly toward a focused objective (crankery), not on scattered occasions (error).
P.S. And as I point out in the present article and elsewhere (e.g. On Evaluating Arguments from Consensus and in this other comment here), crankery/apologetics relies on stolen prestige, so it will try to accuse real scholarship of being crank on these same criteria.
For example, on the third criterion above (infinite goalposting), they will strip away any question of whether and whose arguments are fallacious, and instead claim that any theory that has a defense against every attack is therefore crank, when the actual distinguishing feature is whether the defense is fallacious, and thus whether the critique it is facing is fallacious. So that climate scientists “have an answer for everything” becomes evidence of infinite goalposts, when it’s not. Likewise, non-crank experts will be accused of relying on fallacies when they are not, or making stuff up when they are not.
Cranks/apologists need to trade on the “prestige” that these arguments carry, without “having the goods” but only pretending to. So they will accuse people of making ad hominem arguments when in fact they are not, of telling just-so stories when they are not, of making fallacious appeals to authority when they are not.
This is all facilitated by the fact that there are real versions of these arguments. For example, revealing incompetence and dishonesty in making an argument is not formally ad hominem but “looks” like it, because the scholar is made to look bad, so they trade on this insulting them to claim that is all that has been done, when in fact a relevant and substantive point has been made against them. Likewise, documenting the scale of a consensus and what it is based on is not an argument from authority but “looks” like it, because it does appeal to authorities, but it is actually appealing to the quality and corroboration of their epistemic findings, which is directly pertinent to determining what is true.
So you have to be on your guard against pseudoscholars abusing the very definitions of pseudoscholarship to hide their status as pseudoscholars by trying to falsely make all their critics out to be pseudoscholars instead. Which means we have to always be critical thinkers and weigh who is making these critiques legitimately, and who is not.
When people analyze Biblical poetry or prophecies, I don’t think there is really such a thing as facts. I think everyone, scholar or not, is kind of guessing. Even if everyone agrees, they can all be wrong, unlike subjects where you are analyzing things you can see or measure. Poetry is about opinions a lot of times, so I am not sure the rules of scholarship fit Bible scholarship as much as other subjects.
You discuss quotes like Zechariah 6:13 in your books and articles, and you cite scholars and, I assume, different opinions, but they can all be totally wrong. They are not being pseudo-scholars. It is just that they are guessing what the verse was originally meant to mean, because poetry and prophecies are vague and ambiguous. I think people should not assume that scholars are right when it comes to the Bible.
I am not sure what you mean.
If you mean, everyone trying to discern God’s intentions behind prophecies and songs etc. is arguing over a nonfact (there simply is no fact of the matter, because God has nothing to do with these prophecies), then indeed, that’s exactly right.
But once historians accept that (and all mainstream historians do), there remain facts of the matter regarding what each author meant (why they wrote what they did, what they thought it meant, and wanted others to understand it to mean), even lyricists; and standard methods of literary and historical analysis can indeed arrive at correct conclusions about that (indeed most of what history as a knowledge field consists of is exactly this: discerning, empirically, what authors meant).
There are some indiscernible cases (where we have lost the data needed to discern what all an author meant in this or that passage, in which case, that will be the empirical consensus: that the sense is obscure and no confident conclusion can be asserted against any other). But a lot are quite discernible. And there is a lot of good academic literature about this.
Zechariah 6 is a good example: actual experts on it all agree on what the author of it originally meant, as well as on what later Jewish interpreters like Philo re-thought it meant, and these are not arbitrary opinions but strong empirical arguments, based in evidence and sound logic. They are also correctly weighted (scholars admit to what we can be more certain of and what less, what options fit within a credible range, and so on).
Christian apologists and lazy polemicists are then doing pseudo-scholarship when they deny this or pretend it isn’t the case.
For what I mean, see: Kipp Davis Didn’t Check The Literature.
You can check the bibliography there and see extensive empirical (evidence-based) arguments across all scholarship that agrees with me as to what Zechariah originally meant and how that meaning was changed by later interpreters by the time of Philo.
Davis is the one engaging in pseudo-scholarship here by not even checking so as to know what the mainstream consensus is on this or on what evidence it is based (and it is soundly based on that evidence, which is why no real scholar disagrees with me on this point). So I am not disagreeing with any real scholars; I am simply repeating their consensus position. Davis is the one ignoring everything we are saying and falsely pretending to know better, and misleading the public on what it even is we said, much less its merits.
Davis trends pseudo-scholar on this subject a lot. See And So Kipp Davis Conclusively Demonstrates His Incompetence as a Scholar and Then Kipp Davis Fails to Heed My Advice and Digs a Hole for Himself and Kipp Davis’s Selective Confirmation and Ignoring of Everything I Actually Said in Chapter 4 of On the Historicity of Jesus. I document numerous pseudoscholarly moves across these analyses, showing Davis doing every single thing typifying pseudoscholarship.
And yet, so far as I can tell, only when it is this subject does Davis collapse into a pseudoscholar. I suspect this is because he was lazy and trapped himself in errors that he is too arrogant to admit to and thus correct, so he had to double down on crankery and lean on credentials and bluster to save face.