Here I shall assemble some advice I now realize I always take for granted, but that I find even well-meaning people sometimes don’t know, yet will definitely benefit from.
The idiom “Doing Your Own Research” has become a joke largely because the phrase usually comes from people who are shockingly bad at that, but who want to claim the prestige and authority of having actually done it. Which is pretty much all cranks, and everyone lost in some delusion or other, conditions which today infect millions of people.
Others have already done a good job of writing up the problem:
- “Do Your Own Research!” by Neil Levy in Synthese (2022)
- “Doing Your Own Research Is a Good Way to End Up Being Wrong” by Philip Bump in The Washington Post (2024)
- “Why ‘Doing Your Own Research’ May Make You Believe Fake News” by David Vetter in Forbes (2023)
- “Support for ‘Doing Your Own Research'” by Sedona Chinn and Ariel Hasell in Misinformation Review (2023)
- “The Problem with ‘Doing Your Own Research'” by Melanie Trecek-King at Thinking Is Power! (2021)
- “How to Do Your Own Research” by Melanie Trecek-King at Thinking Is Power! (2021)
- “Doing Your Own Research a Little Bit Better” by Jonathan Jarry for the McGill Office for Science and Society (2022)
These are all valuable reads. But they all converge on the same result: the principal differences between “Doing Your Own Research” competently (like a critical thinker) and incompetently (like a crank) can be summarized as:
- Only reading sources that suit or agree with your preconceived notions vs. reading the best of both sides. It’s the difference between steel-manning and straw-manning, and between trying to disprove the claims in a source you agree with and merely trying to validate them, before believing them (on which point, see The Scary Truth about Critical Thinking).
- Treating sources with a biased rather than informed assessment of their reliability. Again, cranks will distrust sources merely because they disagree with them (and trust sources merely because they agree with them), and defend this with fallacious rather than valid appeals to evidence; whereas critical thinkers will trust sources based on actual evidence of their past performance and methodologies and standards (on which point, see A Vital Primer on Media Literacy).
- Just “armchairing” reasons to reject what experts say and what evidence and logic it is based on (destructive skepticism) rather than legitimately evaluating its strengths and weaknesses (productive skepticism).
- Not fact-checking or logic-checking yourself, rather than burn-testing your own facts and logic before accepting your own conclusions. A crank convinces themself that anything they believe or think is sound. A critical thinker distrusts themselves, and thus makes sure their facts are correct and their reasoning nonfallacious, and is continuously asking themselves, “Wait, is that true?” and “How would I know it if I were wrong?”
So, don’t do it like a crank. Do it like a critical thinker.
Chinn and Hasell found that “support for ‘doing your own research’ may be an expression of anti-expert attitudes rather than reflecting beliefs about the importance of cautious information consumption,” and as such, the phrase is often disingenuous. But not always. As Kevin Aslett et al. found, in “Online Searches to Evaluate Misinformation Can Increase Its Perceived Veracity,” Nature (2024), “When individuals search online about misinformation, they are more likely to be exposed to lower-quality information than when individuals search about true news,” and “those who are exposed to low-quality information are more likely to believe false/misleading news stories to be true relative to those who are not.” In other words, “Doing Your Own Research” usually means just gullibly believing whatever you read on the internet; so if you endeavor to read a lot of things on the internet, what you will end up believing is mostly going to be false (because so much of what’s on the internet is false).
The process basically goes like this: you’re told something (in a mere matter-of-fact way, and maybe by an authority you have already been primed to dislike or distrust); you then find lots of people claiming that’s false (and employing a lot of rhetorical devices of persuasion and manipulation that you have no defenses against); and you don’t find equivalently-artful rebuttals (because those take work to locate, and are already coming from authorities you were primed to distrust or dislike, while most rebuttals you encounter will not be wholly competent); so you think the argument ends there, and side with the contrarians. This is irrational and uncritical. But it’s what most people do.
In reality, we confuse large numbers (“ten critics”) with large frequencies (ten voices among a thousand is only a consensus rate of 1%); we confuse artful rhetoric with reliability; and we overrate the trustworthiness of groupthink peers and underrate the trustworthiness of outsiders (like “scientists” and “politicians” and “bureaucrats,” who occupy social universes either alien to you or that you have cultivated a disdain for). The result is that “Doing Your Own Research” will leave your beliefs deviating further from reality—unless you learn how to do it critically, which means consciously working to compensate for all these biases. Which requires being alert for them and taking steps to bypass the ways they disinform you—such as by taking outsiders more seriously and insiders more skeptically, being aware of and actively seeking to tell the difference between rhetoric and logic, and between assertions and evidence, and not confusing numbers with frequencies, amateurs with experts, or gut feelings with careful reasoning.
“Doing Your Own Research” only works if you adopt the perspective of the very people who invented the science of rhetoric specifically to solve this problem over two thousand years ago. The ancient Greeks realized that in a democracy, where persuading large numbers of people was the only way to effect policy, a systemic problem arose. Two sides will assert competing facts, and make competing arguments. Whose argument do you trust? Whose facts do you trust? How do you navigate this? Worse, studying this question so as to answer it will also make someone better able to manipulate it (one of the trial accusations against Socrates was “teaching students how to make the worse argument sound the better,” an outrage that, true or not, contributed to his criminal execution). So now you get an arms race, between rhetors skilled at manipulating you into believing they have the better argument, and those actually trying to promote the truth. The only way to rationally cope with this is to allow no bias (do not favor sides based on emotion, desire, or in-group/out-group considerations) and simply analyze the logic and check the facts yourself (which is why the answer to the question Will AI Be Our Moses? will always be no).
So, just because you find ten people on the internet craftily arguing against some establishment claim or other does not constitute grounds for doubt. You have to compare the quality of argument on both sides: who is lying, who is concealing pertinent facts, who is making mistakes, who is relying on fallacies. Not who you “assume” is doing these things, but who you can prove with evidence is doing these things. This requires skills (which I teach in my monthly course on Critical Thinking for the 21st Century).
How to Vet a Wikipedia Article
Consider Wikipedia as an example. It has been demonstrated to be as reliable as any other encyclopedia—which means, it’s not great, but it’s better than asking Uncle Joe. The principal value Wikipedia has is that it tries (even if not always succeeding) to enforce the notion that every claim in it should have a cited source—and it is that source-list that is of value. You don’t have to trust Wikipedia; you can instead use it to run down a focused set of sources of information and vet those. This is in fact how real scholars use reference books (like subject dictionaries and encyclopedias and handbooks): not as authorities in themselves, but as summaries with a bibliography that can start a breadcrumb trail in investigating a subject. When an academic cites an encyclopedia entry, they do not mean they trust (or even that you should trust) everything it says, but rather, they mean that you can get started there in “doing your own research,” because they vetted the summary (as experts themselves) as at least “okay” and (more importantly) they find its source list to be good enough to get started with.
It helps to have experience enough to know how Wikipedia can be distorted (which it even openly explains to you) and which subjects there are more prone to colonization by cranks and ideologues (like religion and the soft sciences, and any controversial subject), and which are more likely to be reliably policed by real experts (like math and hard sciences, and uncontroversial subjects). But I have written up some guidance on how to build that experience already in A Vital Primer on Media Literacy. I have also already covered the general skills of critical thinking across my whole critical thinking category, but a good place to start is, again, The Scary Truth about Critical Thinking. I’ve also run an example of Wikipedia reliability before, in How Not to Act Like a Crank: On Evaluating Pliny’s Alleged Mention of Nazareth, which also gives several examples of how to “Do Your Own Research” for real (and not like a crank).
The basic procedure you should always follow (illustrated with a brief sketch after the list) is:
- Chase down the sources. A cited source might in turn just summarize and cite yet another source, and so on down the line. Ideally you want to follow the breadcrumb all the way to the primary source(s). Which usually means some original study or text—essentially, where the trail ends in print, and on which all subsequent sources for the claim depend.
- Vet each level of that breadcrumb. Does each source correctly describe what is in its sources? Or is there a telephone game of distortion going on? Make any necessary adjustments to revise the claim in light of what you find. Then vet the quality of argument and evidence offered for the claim in the most primary source you could get to.
- Try to disprove it. Can you independently find expert sources that correct the claim or call it into question? Are they any good? How strong a case do they make? Which side of any dispute has the better case? Has a consensus of experts fallen on one side or the other, or is it still substantially debated even in the field? Is one side concealing important evidence or lying about it? Or arguing fallaciously?
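To make that workflow concrete, here is a minimal sketch of it as a checklist you could track in code. Everything in it (the Claim and Source structures, the field names, the example breadcrumb trail) is purely illustrative and hypothetical; it is not a tool from any real project, just the same three steps written out explicitly.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Purely illustrative: a "claim" with a chain of cited sources, from the
# tertiary summary (e.g. a Wikipedia article) down to whatever primary
# study or text the trail ends at.

@dataclass
class Source:
    title: str
    level: str                                   # "tertiary", "secondary", or "primary"
    cites: Optional["Source"] = None             # the next source down the breadcrumb trail
    reports_its_source_accurately: Optional[bool] = None  # None = not yet checked

@dataclass
class Claim:
    text: str
    entry_point: Source                          # where you first encountered the claim
    disconfirming_expert_sources: List[str] = field(default_factory=list)

def vet(claim: Claim) -> None:
    """Walk the citation chain and report what still needs checking."""
    src = claim.entry_point
    while src is not None:
        if src.reports_its_source_accurately is False:
            print(f"Telephone-game distortion at: {src.title} - revise the claim accordingly.")
        elif src.reports_its_source_accurately is None and src.cites is not None:
            print(f"Not yet verified against its own source: {src.title}")
        if src.cites is None:
            print(f"Trail ends at primary source: {src.title} - now vet its evidence and logic.")
        src = src.cites
    if not claim.disconfirming_expert_sources:
        print("No disconfirming expert sources checked yet - go look for the best ones.")

# Hypothetical example of a breadcrumb trail:
primary = Source("Original 1998 field study", "primary")
secondary = Source("2010 review article", "secondary", cites=primary,
                   reports_its_source_accurately=True)
tertiary = Source("Wikipedia article", "tertiary", cites=secondary)
vet(Claim("X causes Y", entry_point=tertiary))
```

The only point of writing it out this way is to show that each link in the chain is a separate thing to verify, and that “no disconfirming sources checked” is itself a finding you should not ignore.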
All of this controls for framing bias (what the editors of Wikipedia chose to include or exclude, what they chose to emphasize or deemphasize, and how they chose to describe everything). Someone can easily manipulate you with framing. So don’t fall for it. Step outside the frame.
Hence “trying to disprove it” means, for any given claim you need to rely on, check on your own to see if there are any legitimate refutations or challenges to it, or any well-supported evidence against it. You must look for high-quality challenges (which generally means, from experts; amateurs can only breadcrumb you to experts). And you must assess the merits of their challenge. That may mean looking for high-quality defenses (which means, again, from experts; amateurs can only breadcrumb you to the experts) and comparing their merits.
For this, a very useful tool now is Google Scholar. Ordinary search engines (like just “Google”) can help you locate critiques or contrary claims you can check, vet, and compare (and sometimes entire sites are set up to help with this, like RationalWiki and TalkOrigins); or they can breadcrumb you to the highest quality sources, which you can then check, vet, and compare. Always practice using different keyword combinations until you find the best sources or are sure there aren’t any (or any worth the bother). But Google Scholar gives you an automatic filter here. It can more quickly zero you in on the professional publications (articles and books) of scientists and academics (it’s not as good as professional databases, but it has the advantage of being free).
You might not find direct critiques of a claim this way. But you can likely find articles and books on the same subject, in which you can find what experts actually are claiming about the matter, and then compare that with the claim you are trying to vet the merit of. You will, as always, find lots of garbage and rhetoric and apologetics and propaganda. But once you learn how to tell the difference between that and legitimate research and argument, you can start to tell if there is any actual disproof of a claim—or any important qualifications that need to be made to it.
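As a small illustration of the “different keyword combinations” advice, here is a sketch that just generates Google Scholar search URLs from every pairing of your keywords. The keywords shown and the idea of batching them this way are my own hypothetical example, not a method prescribed above; the URLs simply point at ordinary Google Scholar searches.

```python
from itertools import combinations
from urllib.parse import urlencode

# Illustrative only: build Google Scholar search URLs for every pairing of
# your keywords, so you systematically try different combinations instead
# of giving up after one or two queries.

def scholar_queries(keywords, group_size=2):
    """Yield (query, url) pairs for each combination of keywords."""
    for combo in combinations(keywords, group_size):
        query = " ".join(combo)
        url = "https://scholar.google.com/scholar?" + urlencode({"q": query})
        yield query, url

# Hypothetical topic: vetting a claim about the Gospel of John's sources.
keywords = ["Gospel of John", "Synoptic Gospels", "literary dependence", "source criticism"]
for query, url in scholar_queries(keywords):
    print(query, "->", url)
```

Running every combination keeps you from concluding “there’s nothing out there” when really you only tried one phrasing.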
Pause on First-Person Sources
I mentioned that the goal is to find and vet the most primary sources in print (online or not). But it is worth adding that if a primary source’s authors or researchers or witnesses are still alive, you can go even further and try talking to them, but only if for some reason you actually need to. My advice there is, don’t contact private persons at all (unless you are literally a professional journalist); and when you try to contact public experts or personalities, make sure you do four things: (1) use an appropriate channel of communication (e.g. don’t call them at home); (2) be professional and polite; (3) keep your query very brief and clear (long messages will be ignored); and (4) prove to them you’ve already done some of the work and thus aren’t just lazily trying to get them to do the work for you (and for no pay at that). Showing that will ensure they feel you’ve put in enough labor to deserve some labor from them in return. It also shows you know what you are talking about, and thus explaining things to you won’t be too much of a chore.
So, when doing that, without being verbose, do mention what you have already checked or found and why you need to consult them now. For example, give some evidence that you actually read their study, and actually understood what it said, and that your question is indeed something it doesn’t already answer. Which means you have to have actually done that—pretending to will guarantee you will be ignored, because they will see at once that what you are asking is already answered there, and thus you didn’t actually read it. In short, don’t come across as lazy or dishonest or disingenuous. Come across as someone who has done the work and earned an answer. And do all that as politely and in as few words as possible. Even if they respond but can’t answer your question, you can just as politely ask them if they know anyone who can that you can direct your query to—and/or ask for a resource or two that might help you (like a book or paper you can start another breadcrumb with).
Pause on Peer Review
Peer review is not magic; but it does matter. A paper or book that has been vetted by expert peers and released by a serious (reputable, non-crank, non-mill) publisher counts for more than those that have not. You can still find lies, errors, and garbage in peer reviewed work. And useful work exists outside peer review (especially in fields awash with biased gate-keeping). But you know peer reviewed work has at least passed one level of expert review, whereas any claim that has never survived that review is more often not going to be worth the trouble of even reading, much less citing. There are exceptions, but you need to evaluate that yourself by seeing whether an author is applying real “Do Your Own Research” and “Critical Thinking” skills, such as I am training you in here and in the other articles linked above.
In effect, everything outside peer review is just like Wikipedia: potentially useful for having provided you a handy summary and source-list you can run down so as to evaluate the claims it makes. Everything outside peer review requires far more vetting from you, and thus a lot more work, and thus a lot more selectivity in which books and articles in this category are even worth your time. By contrast, peer reviewed work should be easier to vet: it will (more often) be better organized, its source list will (more often) be on point, and it will (usually) explicitly describe its methods and premises, and how its conclusions derive from them, making it easier for you to spot “bad science” (and history and philosophy also count as “sciences” for this point) and easier for you to zero in on the essential points (as superfluous material and handwaving will usually be at a minimum).
You can actually tell when peer review fails at all this. And looking for the same evidence that would tell you that will also reveal when works outside peer review are advertising how unreliable they are. Whereas the same skills will help you detect the opposite: when a non-reviewed work is not handwaving and meandering but cutting straight to the point, and when it is clearly explaining its premises and citing good sources that actually do establish them, and reaching conclusions from those premises with valid logic. And indeed, sometimes a case is so easy to prove, experts aren’t even needed: because even among amateurs, good critics will point you to easily checkable facts and use clear logic that settles the matter (for examples of what that looks like, see Shaun Skills: How to Learn from Exemplary Cases).
Pause on Consensus
Always get a sense of the actual expert consensus, and what it is actually based on. But to do that, you can’t just count books or articles (since cranks and ideologues can swamp a field with those). You have to look at what views are most widely held among mainstream academics (and mainstream means legitimately credentialed and not devout—much less paid—evangelists). You may still have to vet that consensus, assessing on what evidence and logic it is actually based and thus coming to see how strong or weak it really is (see On Evaluating Arguments from Consensus and The Korean “Comfort Women” Dust-Up and the Function of Peer Review in History). And a consensus can be fragmented, in flux, or difficult to suss out (see, for example, Galatians 1:19, Ancient Grammar, and How to Evaluate Expert Testimony and Imperial Roman Economics as an Example of an Overthrown Consensus), or sometimes even pretended or fabricated (see, for example, Is 90% of All EvoPsych False?). But being able to discern and summarize a current consensus and its reasoning will help you not get taken in by cranks, who might lie about what the consensus is or on what evidence or logic it is based.
Shifting the burden of evidence onto someone arguing against a consensus is legitimate. That’s literally the point of an expert consensus. So you need to know what the consensus actually is, and thus whether the burden of evidence against it is being met. Above all, one of the most useful outcomes of getting at a consensus is that you will also start to get a sense of what debates and disputes remain even within that mainstream consensus.
You also must learn to distinguish (and, if necessary, to correct) asserted confidence levels. “Possibly” and “might be” do not mean “probably” or “is.” Even “probably” can be variable. If something is probable enough, you can say it is the case (it is then a “fact”). But sometimes something is more probable than not, but still not so probable as to be sure. Describe these conditions correctly, and don’t succumb to equivocation fallacies, where on one page of your own or someone else’s analysis you are talking about something possibly being the case, and then on the next page this has transformed, as if by magic, into talking about it probably being the case. Stay consistent; and spot when others aren’t. It matters.
The same goes for all similar wording, like “plausible,” which means “not more probable than not, and maybe even not very probable at all, but nevertheless still probable enough to take seriously as a real possibility,” as opposed to the implausible, which is too improbable to take seriously at all (until enough evidence arises to make it plausible). “Plausible” thus falls in the realm of “suspicion” and “probable cause” but not in the realm of “proved” or “surely likely.” Maintain that sense. And be on the lookout for proponents of conclusions who aren’t doing this.
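If you want a mechanical aid for spotting that kind of drift across a long draft (your own or someone else’s), here is a toy sketch that merely lists which confidence qualifiers appear near a given claim. The qualifier list, the function, and the sample text are all my own illustrative assumptions; no script can substitute for actually reading the argument, it only helps you notice where to look.

```python
import re

# Toy illustration: list the confidence qualifiers used near a claim, so you
# can see at a glance whether "possibly" on one page has quietly become
# "probably" (or a flat "is") by the next. The qualifier list is only an
# illustrative assumption, not an exhaustive taxonomy.

QUALIFIERS = ["possibly", "might be", "plausible", "plausibly",
              "probably", "likely", "certainly"]

def qualifier_usage(text, claim_keyword, window=80):
    """Return (qualifier, snippet) pairs found within `window` characters of the claim keyword."""
    hits = []
    for match in re.finditer(re.escape(claim_keyword), text, re.IGNORECASE):
        start = max(0, match.start() - window)
        snippet = text[start:match.end() + window]
        for q in QUALIFIERS:
            if q in snippet.lower():
                hits.append((q, snippet.strip()))
    return hits

# Hypothetical example of the equivocation drift described above:
sample = ("The papyrus possibly dates to the first century. "
          "And since the papyrus probably dates to the first century, the text is early.")
for qualifier, snippet in qualifier_usage(sample, "papyrus"):
    print(f"{qualifier!r} near claim: ...{snippet}...")
```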
A consensus, for example, can exist merely as to whether something is plausible, and it might change the more narrowly you define the relevant specialty. Don’t confuse a consensus as to plausibility with a consensus on probability (much less on fact). And definitely note when specialists have a different consensus position than nonspecialists. For example, you might get the impression that a consensus exists among “bible scholars” that the Gospel of John was written independently of the Synoptic Gospels, but when you check what the consensus is among actual published specialists in the Gospel of John, you find the consensus there is exactly the opposite. It is clear which consensus should prevail there.
Reasonable vs. Irrational Conspiracy Thinking
Finally, there is one other feature of crankish “Doing Your Own Research” to be on your guard against: conspiracy thinking.
What distinguishes conspiracists from those who expose real conspiracies is that the latter actually base their conclusion (that there is a conspiracy) on logically valid results from actual evidence. In other words, they make a logically sound case from real evidence that a conspiracy exists—as was recently accomplished, for example, against both oil and tobacco companies. By contrast, crank conspiracists simply declare a conspiracy exists. At most they might marshal a congeries of facts that they purport to be evidence of it, but that logically is no such thing. Sometimes they will even conjure lies to this same end (as for example when Jesus historicists claim that mythicists are concealing information that in fact they explicitly discuss, or are misusing sources when in fact they are not). Either way, it’s the same tactic: to eliminate evidence with accusations of a conspiracy producing it.
The significance of this for “Doing Your Own Research” is that the cranks will dismiss evidence against them with an assumption that a conspiracy exists among experts to hide information or lie about the facts—just as really happened in those oil and tobacco companies. That this thus really does happen will even be used as evidence of its plausibility in any other case, disregarding the scale and quality of evidence that proved those conspiracies real. So the sequence of nonsequiturs goes like this: a significant (sometimes even vast) body of evidence disproves what the conspiracist wants to believe (like, say, evidence proving We Do Need to Do Something about Global Warming or That the Earth Is Spherical); so the conspiracist declares a conspiracy exists to hide or lie about the evidence (as that is the only way all that evidence could be wrong); and then concludes all of that evidence can “therefore” be dismissed as fake or misleading; and thus what remains is just evidence of what they want to believe (like, say, that there is no global warming or that human behaviors aren’t responsible for it—or that the Earth is magically flat).
“Doing Your Own Research” then comes to mean “reading a bunch of conspiracy thinking bullshit that supports this nonsequitur-chain” and then denouncing critics as rubes who “didn’t” do this and thus “don’t know what they are talking about.” That no step of this reasoning is rational is what has justifiably led to widespread disrespect for the phrase “Doing Your Own Research.” But that cranks have abused the phrase this way should not cause anyone to disrespect the concept of “Doing Your Own Research” in a real, not crank, way. Which means, actually investigating the evidence critically (not gullibly or emotively), so as to understand why experts believe what they do (and not just what they report their conclusions to be). As Levy and others explain in their discussions, real “Doing Your Own Research” can and should lead to greater understanding, if it is conducted competently. The conspiracist plays on a truth—that we should not just gullibly trust what experts say, but vet whether they are telling us the truth or even reliably discerning it in the first place—to push a falsehood: that we should never trust what experts say.
As even Aristotle would have explained, extremes are always bad, and the ideal lies in the balance between extremes, his “golden mean.” Total skepticism is just as bad as total gullibility. Rational skepticism finds the equilibrium, the “mean,” between those two extremes, where you are as trusting and as skeptical as is reasonable to be. The goal is to look for what level of evidence is sufficient to warrant trust, and then remain consistent in your application of this standard. The crank will do neither. For neither will they have a reasonable standard by which evidence can change their mind, nor will they apply any standard consistently, setting completely unreasonable standards for anyone who says what they don’t like, and wildly gullible standards for anyone who says what they do like.
Don’t do this.
Summary
Real “Doing Your Own Research” means being reasonably critical, looking for the best case on both sides of an issue, and comparing their merits by valid metrics—which means, not by your emotions or biases or assumptions, but by their actual cited evidence and actually articulated logic. A real researcher seeks to be informed; to actually understand an issue. A real researcher traces a claim to its primary source, and looks for any evidence against it of comparable quality, before relying on it. A real researcher weighs sources by their objectively evidenced track record of factual reliability, and not by their politics or position. A real researcher is neither selectively gullible nor selectively skeptical. They apply the same standards—the same warrant for belief—in every direction. And all that means a real researcher doubts their own beliefs and findings, until they can be sure. Before running off with a premise, they ask themselves, “Wait, is that true?” And they check first. Before running off with a conclusion, they ask themselves, “Wait, how would I know it if that conclusion were false?” And they check first.
Please do all that. Otherwise, please don’t “Do Your Own Research” at all. Because science has proved that that will guarantee your beliefs will be increasingly false, not increasingly correct. To reverse the polarity on that outcome, you have to do it right.