I’ve been working in the field of philosophy for decades. It has literally been my religion. I spent half my life researching it and developing my own comprehensive, coherent, evidence-based philosophy, which became my 2005 book Sense and Goodness without God: A Defense of Metaphysical Naturalism. I’ve improved it since, although in bulk it holds up (see Revisions to Sense and Goodness without God). I’ve also engaged in scores of debates in the field of philosophy, and with bona fide philosophers. My Columbia University PhD is in the history of philosophy. And I’ve published numerous peer-reviewed papers in philosophy (example, example, example, example, example, example; even my chapter on moral theory in The End of Christianity was peer reviewed by four professors of philosophy). And that includes entries for standard academic references, and even a book, since I required in my contract that Proving History: Bayes’s Theorem and the Quest for the Historical Jesus be peer reviewed by professors of mathematics and biblical studies. I have also done a lot of work studying philosophy as a discipline and its methods and failures (see Is Philosophy Stupid? and, for example, my series on Bayesian analytical philosophy). And I publish researched articles on it a lot here (see my new philosophy category and my old one).
Fifteen years ago I wrote an article, How to Be a Philosopher, about how to be a philosopher—not methodologically, but as a life avocation. The “four tasks” I recommend to that end were (and still are) “spend an hour every day asking yourself” philosophical “questions and researching the answers,” “read one good philosophy book a month,” “politely argue with lots of different kinds of people who disagree with you on any of the answers you come to above,” and “learn how to think,” which touched on logic and critical thought. But over the last five years I have shifted my focus across all my disciplines (since my PhD is also in history, and I am well published in that field as well, even across subject fields). Now, rather than only debunking bad arguments and ideas, I aim also to analyze and explain why they are bad—what methods their advocates were relying on to get all the wrong answers. Because in my interdisciplinary work I started seeing something spanning all disciplines: they all use the same false logics, whether arguing for veganism or a flat earth or Jesus or Libertarianism or even naive memory realism. As a result, I have been working on developing a methodology of error.
I’m not ready to systematize my results there yet. You can find examples in pretty much every debate and critical article I’ve written up here since at least 2018. But as part of that project I wanted to nail down one important question that often comes up: what exactly is it that makes someone a bad philosopher (and, conversely, what makes for a good one)? I have often noted that formal academic philosophy is full of bad arguments, and lousy philosophers get fallacies and pseudoscience through peer review far too often (I’ve published too many examples here to list them all, but I give a general assessment in my conclusion to my Bayesian analysis series on philosophy, and delve into the principles of it even more in Is Philosophy Stupid?). This began to dovetail with my methodology-of-error project, enough to put some thoughts down.
The Five Essential Metrics of Good/Bad Philosophy
The essential features of quality in philosophy can be divided into five categories…
Logic
A professional philosopher must be a bona fide expert, first and foremost, with logics. If they can’t spot (and thus avoid) a fallacy of reasoning, they should burn their diplomas and flip burgers or dig ditches or answer phone calls or sew buttons, or some such thing actually useful for society. I should not even have to say this. But alas, it’s where we are: peer-reviewed philosophy is rife with unchecked fallacies (example, example, example, example, example, example, example, example, example, example; and this is entirely apart from the problem of fake journals, which are plaguing all fields of knowledge now). It’s as if astronomy journals started abundantly publishing studies in astrology and stopped telling the difference between them.
But this has to be the first metric: you have to be skilled at spotting and dodging logical fallacies. If you can’t do this, you can’t do philosophy. This does not mean a good philosopher will always nail it. My point is not that good philosophers will be “immune” to fallacies of thought, but rather that fallacies will be for them only an occasional error, not something typical of their performance; and their errors in this respect will take work to discern, not be glaringly obvious. If you’re a professional basketball player and you can’t catch a ball tossed to you in the clear from two feet away, you need to find another job. But the same doesn’t follow if you miss a free throw or lose a game—unless you’re missing and losing a lot more than the pro average.
And this correct grasp of logic cannot be limited to deductive logics (a serious problem in the field). It must also include inductive and mathematical (probabilistic) logics. A correct grasp of Bayes’ Theorem and its application to epistemology is, honestly, essential now. If you can’t even do sixth grade arithmetic, you can’t argue anything is probable or not—and that’s half of everything a philosopher does. The other half is analytical (extracting meaning from words and sentences), and that requires a basic grasp of semantics. See Less Wrong’s 37 Ways That Words Can Be Wrong as a veritable list of ways philosophers can mess this up. You need to not make such frequent mistakes if you want to be a good philosopher; and any philosopher you catch commonly making these kinds of mistakes you can safely classify as bad at it.
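Since the paragraph leans on Bayes’ Theorem, here is a minimal sketch of the update it refers to, in Python (the function name and example numbers are my own illustration, not anything from the article):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem: confidence in hypothesis H after seeing evidence E."""
    numerator = prior * p_e_given_h
    # Total probability of seeing E at all, whether H is true or not:
    marginal = numerator + (1 - prior) * p_e_given_not_h
    return numerator / marginal

# Starting from even odds, evidence twice as expected if H is true
# as if it is false raises your confidence from 1/2 to about 2/3:
print(posterior(0.5, 0.8, 0.4))  # ≈ 0.667
```

Nothing here is deep mathematics; as the article says, it is sixth-grade arithmetic, which is exactly the point: the barrier to basic probabilistic competence is low.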
Science
A professional philosopher must also be versed in the sciences. They don’t have to be an expert in any, much less all; but they do need to have the basics down (at least what an A student, or in England a first-class student, would walk away with after completing a college intro course in any given science). And they do need to have the honed skill to research scientific findings, to read a scientific paper critically and competently, and to take the findings of the sciences correctly into account in any philosophical argument they ever make. Because the sciences must always be the foundation of any philosophical argument they make—not something to ignore or bypass. Scientific findings do vary in how certain we are of them; but how to tell the difference between a poor or tentative scientific finding and a thoroughly established one (or anything in between) is an indispensable skill for a philosopher. If you can’t do this, you need to pack it in.
And this includes sciences people tend to forget are sciences: like economics, juridical science, political science, sociology, moral psychology, cognitive science, and cultural anthropology (and, of course, history). If you haven’t read at least one quality introductory book on each of these sciences (as well as the other main subjects, from physics and geology and chemistry to biology, physiology, and ecology), you won’t be a very good philosopher. You will make abundant mistakes. There is a reason centuries of accumulated scientific knowledge must define and constrain our thoughts and findings in philosophy: science is just philosophy with better data. And it would be a fallacy to ignore the highest quality results when coming to a conclusion. And this includes sciences you can’t be expected to have read-in yet, but need to as soon as they become relevant to something you intend to argue or explore—like sports science, educational science, police science, etc. Wikipedia now has massive lists of applied sciences and theoretical sciences—hence a good philosopher will always know: if there is a question of fact, there is probably a science of it. And if you want to talk about it, you’d better start there.
To illustrate what I mean, consider the subject of free will: you don’t have to already be an expert in physics or neuroscience and jurisprudence, but to be a good philosopher, you had better know not only how to check any claim you make within those scientific fields, but also that you must do so. Of any claim you make in philosophy, premise or conclusion, you need to be able to ask, “What do the pertinent sciences say about this, and how can I find out?” But you also have to know how to answer that question. Which is what I see most commonly failed at: philosophers all too often (like the string of examples I linked above) don’t think to check what the appropriate science is on any given subject they study or argue about, or are too incompetent to understand it and get it right (for a recent glaring example, see my entire debate with Carlo Alvaro on cosmology, and how it ended).
On the subject of free will, you are a bad philosopher if (before declaring confident opinions on it) you have not researched its real-world analysis in, for example, relevant U.S. Supreme Court decisions and legal edge cases (like Battered Spouse Syndrome: see Free Will in American Law: From Accidental Thievery to Battered Woman Syndrome). And you are a bad philosopher if you don’t check the neuroscience—or misunderstand it (see Was Daniel Dennett Wrong in Creative Ways?). Conversely, scientists can be really bad philosophers even when they get the science right—because they hose the logical analysis of it that is required for arriving at a reliable conclusion in philosophy. As I note in my discussion of Dennett’s work on free will, Libet experiments do not disprove free will: they merely prove the obvious fact that conscious awareness is a computation, which, being such, takes time to run. To fail at the analytical task of distinguishing the “person” who makes a decision and their “awareness” of being a person who made a decision is to be really bad at philosophy. Your consciousness of you is not you. You are a physically stored network of skills, character, and memories—which is why you do not cease to exist when you are unconscious. And yet even renowned experts in philosophy mess this up. So the other four of these five criteria still must be met. And here, scientific facts must be distinguished from philosophical conclusions.
Getting good as an interpreter of science is thus a big part of what it means to be a good philosopher. And getting there doesn’t require college (though community college intro courses abound affordably to take). You can read a textbook on a subject, for example, and whenever you encounter something there that you don’t fully understand (or perhaps are having trouble even believing), you can pause and research that one specific thing until you do grasp it (and what evidence it’s based on, for example, or how confidently it is actually known). You can start with popular source material (even Wikipedia), but you should end with something professionally academic (even a conversation with an actual scientist), until you’ve “got it,” at least as well as you need to for the task at hand. Philosophers who live like this (and thus accumulate at least undergraduate knowledge in a broad range of scientific fields) tend to be the best philosophers; and philosophers who eschew this, the worst.
Reality
The third metric for a good vs. a bad philosopher is whether they ground their philosophy in reality or fantasy. This is why almost all theology is bad philosophy. Grounding in reality does mean empiricism—evidence first. Evidence forms the data, the fundamental premises, on which any conclusion in philosophy must be built. But that means more than just science. To return to the subject of free will: a reality-based philosophy would not stay in the ivory tower and respond to high-brow literature on the subject; reality-based philosophy would go outside and look around and ascertain how free will is employed as a concept in the real world—that is, the actual world, the one that actually exists, and actually impacts the lives of actual people (see Free Will in the Real World … and Why It Matters). What does the term mean—how is it ascertained to be present or absent—in courts of law? Not hypothetical courts. Real courts. What does the term mean—how is it ascertained to be present or absent—in medical and sexual consent decisions? Not hypothetical ones. Real ones.
One of the most common failure-modes of modern philosophy is its divorce from reality. This manifests in a number of problematic ways, as once laid out by the philosopher Mario Bunge and summarized in my article Is Philosophy Stupid? But one respect that is worth reiterating here—beyond the one I already noted, of the difference between ivory-tower and real-world investigations of humanity’s nature and environment—is that abstractions and generalizations should always begin with particulars: and that means real-world particulars. Sometimes analysis leaves none to consult and we have to analyze hypothetical particulars, but even then there is a skill at doing this well or doing it poorly that too many philosophers fail at, either by being bad at analyzing the hypotheticals they create, or by ignoring the real-world examples they could be consulting instead (see On Hosing Thought Experiments).
I often state this as, “Always begin with particulars.” Rather than begin at an abstract assertion about all of humanity or reality, like that “human women are innately hypergamous,” and then “interpreting” all data through the lens of that assertion (and then confusing that for having proved it), you need to start the other way around: gather actual specific and real examples of what you want to talk about—and scientifically, which means, not cherry picking, but taking as close to a random or representative sample as possible (and if you are making claims about human nature, that means from cultures other than the ones you are familiar with)—and then building abstractions or generalizations from those particulars. Because only then, for example, will you find that “hypergamy…varies by degrees, varies across societies,” historically and globally, “and varies even within a society” because “trends are an average, not a universal description of all or often even a plurality of women,” and therefore the proposed generalization, it turns out, doesn’t really hold up very well; especially in the particulars (like what hypergamy is even measured by), which forces a re-think of what one is supposed to even mean by the term.
That is what it means to be a reality-first philosopher, and this is one of the things that distinguishes a good philosopher from a bad one. For a solid example of why reality-based and particulars-first philosophy distinguishes good philosophy from bad, see my example of how to correctly investigate and analyze the concept of “gay pride” in Peter Boghossian on Gay Pride and Hobnobbing with an Online Misogynist.
Objectivity
Too much philosophy is motivated reasoning—a philosopher wants a particular conclusion to be true, and then just finds a way to “argue” that it is true. This is contrary to the scientific method—and lest some philosopher scoff at being told they need to embrace a “scientific” method, this is the point of the previous three metrics: they already establish that an un-scientific method is unreliable. In fact, the less scientific your approach, the less reliable its results. That is why science has come to reign supreme: it reflects what happens when you increase the reliability of an approach, with more evidence, better evidence, and better analysis of that evidence. It’s simply good philosophy. In fact, the best. At least on the one thing that it applies such methods to. After all, scientists become bad philosophers the moment they go off-script and stop using the scientific method, but still think they are, simply because “they are scientists.” This is when empirical science slides into fallacious arguments from authority. But that circles back to metric one: getting logic right (as illustrated in my previous example of how to interpret Libet experiments).
Much ink gets spilled on what “objectivity” is and whether it’s even possible (see Objective Moral Facts for some analysis). But what I mean by the term here is simply this: having the desire and means to control for bias. That means the realized desire and mission to check yourself and avoid motivated reasoning. For a full discussion see The Scary Truth about Critical Thinking and Advice on Probabilistic Reasoning. The nutshell of it is this: you must always try to prove yourself wrong. This is the essence of the scientific method. It is literally the function of experiments and field observations and every properly scientific approach in every field of study. And of course that does not mean “pretend” to try. Pseudoscientists pretend; real scientists do it. Hence many a bad philosopher will give a show of doing this, but in fact won’t choose any hard falsification tests at all, and will instead game their tests to make them easy to pass. This is motivated reasoning; the absence of objective reasoning. A genuinely objective reasoner will take seriously that they could be wrong, and try genuinely hard to disprove their own premises or conclusions.
Doing that requires finding and steel-manning (not straw-manning) contrary hypotheses (alternative explanations of the same evidence or alternative solutions to the same problem). It requires getting out of the ivory tower and looking around. It requires walking across the hall and running your ideas by an actual expert in the subject. Have thoughts about economics? Better talk to an economist—or at least read them, widely and diversely, and not with an eye to “confirm” what you are thinking, but with an eye to finding the best arguments against it, if any there are. Have thoughts about cosmology? Better talk to an actual cosmologist—or at least read them, widely and diversely, and not with an eye to “confirm” what you are thinking, but with an eye to finding the best arguments against it, if any there are. And so on.
This all requires being able to take someone else’s point of view—to actually see the world, and the evidence and the problem, the way they do—so as to understand why anyone might disagree with you. Which means you have to be good at following the Dennett Rule: state an opponent’s position in your own words, and so well that they agree you got it right. Steelman, rather than strawman. People who are bad at empathizing or understanding contrary positions tend to be bad philosophers; as are people too arrogant to ever worry they are wildly wrong about something that they are sure about. Which leads me to the last metric distinguishing good philosophers from bad:
Humility
Probably the single most important quality required to be good at philosophy is the ability to admit—and thus realize—that you are wrong about something, and to change your mind accordingly. It is true that this point is sometimes used illegitimately as a browbeating fallacy. Christian apologists, as with all cultists, like to try to leverage someone’s valuing of humility to argue them into “considering” their completely bonkers worldview. And it’s true one should consider it. But once it catastrophically fails to check out, that approach doesn’t fly anymore. Humility must not be conflated with gullibility. But neither must arrogance be mistaken for warranted confidence.
To explain what I mean, I shall have to briefly tell my own story.
When I began my philosophical quest I fell into a hard-core Marxism. Philosophers fond of Ayn Rand then disabused me of that worldview and I became a hard-core Randroid. Then my own continued inquiry led me to realize that that was just as much bullshit as Marxism. By this point it’s the mid-1990s. And my ardent quest to work out what worldview was true (which I had begun at sea in 1991, with an old Brother word processor and a stack of books in a classified sonar space below the bow waterline) had by then led me to realize something was wrong: how could I have been so misled, and so confident, of such contrary worldviews? Clearly my methodology was broken. I then set out to focus on that as the actual problem.
What does it take to be legitimately persuaded of something? One can go on here about logic (metric one), science (metric two), objectivity (metric three), and epistemology generally. But the point I want to make at the moment is subtly different: what I found was, first, that you have to actually be serious about trying to disprove something you are being persuaded of (or confidently believe, yet perhaps can’t recall why or when you came to believe it), and, second, that this entails actually doing this. And that requires a sensible humility. I could be wrong. So…how would I know if I am? How would you know if you’re wrong? This is the single most important question any philosopher must ask—and not just answer, but actually pursue.
So anytime I think I’m right about something now, I don’t trust that, but check. I try to Devil’s Advocate against my own logic. I try to hunt down any pertinent scientific findings that might challenge it. But most of all, I try to think of credible ways things could be different, and then look to see if there are any evidences or arguments for those different ways things could be. If my theory is right, what should I expect to find? Sure. That’s something you must look for. But more importantly, if my theory is wrong, what should I expect to find? That’s what you also go looking for. This is how I was able to correct my mistaken positions on Gettier Problems, on Universal Basic Income, on Nuclear Energy, on Bayesian Epistemology—even Jesus existing, and Q being a thing (where philosophy, particularly epistemology, became essential)—and, of course, Marxism and Objectivism (and, before that, Taoism).
The single most distinctive marker of a bad philosopher is one who has been very ably shown (factually and logically) that they were wrong about something—and never admits it. Obviously, bad philosophers can make this accusation too—claiming to have “ably shown” you’re wrong when in fact all they did was build an edifice of fallacies, science illiteracy, reality-ignoring, and motivated reasoning. So one does need to be able to tell the difference. For example, if you are a third-party observer needing to assess who really is the good and who the bad philosopher in any cross-accusation like this, you need the skills to tell which philosopher is arguing fallaciously, which is getting the science wrong, which is skipping the particulars and jumping directly to abstractions, or indeed (sometimes) which is even being dishonest. And that means everyone needs to be a good philosopher, at least a little bit—although everyone needs to be a good philosopher for a lot of other reasons, too (to be a good voter, a good citizen, a good friend, a good anything). But that means you need to hone those same five essential philosophical virtues.
Logic. Science. Reality. Objectivity. Humility.
Good at avoiding logical error. Scientifically literate. Reality-based. Objectivity-driven. Humility-motivated. That’s good philosophy; the opposite, bad philosophy.
Conclusion
Bad philosophers over-rely on fallacies, fail to check the pertinent science or get it wrong, fail to check reality or to build their abstractions and generalizations from actual particulars, fail to burn-test their own premises and conclusions, and never change their mind even when it is obvious they should. Good philosophers actively avoid fallacies and thus minimize them. They check the pertinent science and strive to get it right, and adjust their premises and conclusions to suit. They prefer to start with particulars and work their way to generalizations and abstractions; not the other way around. They burn-test all their own premises and conclusions, trying their darnedest to prove themselves wrong, even adopting the mindset of their own actual or hypothetical opponents to do it. And they correct themselves when caught in an error. They correct factual errors. They correct logical errors. They update their knowledge whenever it is found lacking. And they change their position when these corrections warrant.
And when two philosophers accuse each other of being bad at it, you need to know all this, so you know what to look for and thus discern which of those accusations is false (or if, indeed as can be, both are apt). This is an approach I have discussed before in different contexts (such as in On Evaluating Arguments from Consensus and Galatians 1:19, Ancient Grammar, and How to Evaluate Expert Testimony), but the same principles apply to anything in philosophy as in any other subject field (for example, see A Vital Primer on Media Literacy and Was Daniel Dennett Wrong in Creative Ways? and even Shaun Skills: How to Learn from Exemplary Cases). And this skill requires being able to discern legitimate argumentation from mere apologetics (for examples, see The Difference Between a Historian and an Apologist and Captain DadPool on Who Is Inventing Workarounds).
So hopefully this short discourse will be of value to you, and to anyone you need to point to it.
Good to see you going over the basics again! But I have to say that no study of logic is complete without studying Hegel! You did read 1984, right? You can’t decode or unravel Orwellian logic without having a very good handle on Hegelian logic!
Hegel made no meaningful contribution to modern logics and has been rejected entirely by the field. As a result, Hegel is of absolutely no use to modern analytical logics. Even in modern dialectical analytics, Hegel plays no role whatsoever. You can see how modern philosophy abandoned Hegel entirely, finding his approach either actually illogical or too primitive to be usable in modern logics, in this standard reference account. This is because “when Hegel, for example, uses ‘logic’, or better ‘Logik’, he means something quite different than what is meant by the word in much of the contemporary philosophical scene,” and thus he wasn’t contributing to logic at all.
The lesson here is to apply those philosophical virtues: you need to catch up on the state of the field. What role does Hegel play anymore in modern logics, analytics, or even dialectics? And what even is logic? Because correctly answering that, by catching up on the latest basics of the field, would expose the fact that Hegel never actually proposes anything we mean by “logic.” He proposes a heuristic, not a logic, and one that has been rejected as flawed and unreliable—it was replaced by the modern scientific method, as here discussed. And it was replaced for a reason. Good philosophy requires understanding this. Which requires reading up to catch up on where we are, not where we were.
Hegel, much like St. Paul and Lana Del Rey, is all about what you bring to the reading of him! He’s not fucking outdated if his logic is a key to understanding the logic of the control society of 1984! Dr. Carrier! I LIVED the master-slave dialectic when I was a pedicab driver this summer! I LIVED 1984 in the past couple of weeks! Then I had to move back in with my parents in New Mexico very suddenly! I wasn’t given a couple months notice either! :/
Missing is Continuous Improvement. If your favored model is the same as it was a year ago, if nothing you held dear a year ago seems painfully or at least subtly naive, you have ceased doing philosophy. Another is Uncertainty. If you are not actively testing new ideas against a family of incompatible models each of which is somehow unsatisfactory or incomplete, you are probably in a cul-de-sac.
I think with just a little bit of work, you can fit those “missing elements” into the given categories. Try Humility and Science, for starters.
They’re right. You are just re-describing “science” and “humility.”
Although your description is analytically false.
It cannot be true that in every case P (“your favored model is the same as it was a year ago, and nothing you held dear a year ago seems painfully or at least subtly naive”), therefore Q (“you have ceased doing philosophy”). Because there are true things, and an effective epistemology steers you to them. So as long as you follow the correct procedure, you should expect to accumulate immutables, not “forever change your mind.” You should continually test the immutables, but that does not mean you should abandon all beliefs and switch to new ones every year.
Science affords an analog: many things do get overthrown, but a growing body of scientific knowledge only gets more and more confirmed yearly, and consequently many scientific beliefs are even more true now after a hundred or even a thousand years, not less.
And it cannot be true that if P (“you are not actively testing new ideas against a family of incompatible models, each of which is somehow unsatisfactory or incomplete”), then probably Q (“you are in a cul-de-sac”). It is possible that Q if P, but it is an analytical error to confuse “possible” with “probable.” The probability will be determined by evidence, either at the level of prior probabilities or of likelihoods.
As to priors: most “incompatible models each of which is somehow unsatisfactory or incomplete” have vanishingly small priors now, owing to centuries of accumulated evidence, and therefore they are a waste of resources to continue considering—see, for example, my discussion in Misunderstanding the Burden of Proof.
As to likelihoods: without evidence supporting alternatives, alternatives are unworthy of much time considering (just enough to ballpark their priors and likelihoods and thus assess whether more examination is warranted or not); but with evidence supporting alternatives, you now have something to investigate and evaluate. Examples are the state of the frontier fields in science: cosmology, biogenesis, cognogenesis, and the unification of fundamental physics.
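That weighing of priors against likelihoods can be sketched in odds form (the scenario and numbers below are my own illustration of the point, not the author’s):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' Theorem: posterior odds for an alternative
    hypothesis versus the established view."""
    return prior_odds * likelihood_ratio

# A fringe alternative at odds of 1:10,000 with no supporting
# evidence (likelihood ratio ~1) stays negligible:
print(posterior_odds(1 / 10_000, 1.0))

# But strong supporting evidence (100:1) lifts it to odds of 1:100,
# i.e. now worth the time to investigate:
print(posterior_odds(1 / 10_000, 100.0))
```

This is why a vanishingly small prior justifies only a quick ballpark check, while actual supporting evidence changes the calculus and warrants real investigation.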
I’d be very interested to hear what your opinions are on Michael Huemer’s Ethical Intuitionism. It’s the best philosophy book I’ve ever read. He has plenty of TLDR lectures on it too.
Huemer is normally a decent philosopher but I haven’t read that book. I can tell from summaries that it’s wrong. But wrong is not the same thing as bad. It might also be stuck in field-wide error-modes (like this one: Open Letter to Academic Philosophy: All Your Moral Theories Are the Same).
You can see my counter-take in The Real Basis of a Moral World, and I would find it very helpful (if you are familiar with Huemer’s book on this) if you could summarize or even page-cite where he discusses anything comparable to my alternative to his view (I know he goes into moral realism, but I don’t know whether he covers anything like mine, or how). I might then read that section and blog about it.
Dennett attributed what you are calling Dennett’s rule to Anatol Rapoport, author of the tit-for-tat strategy that won the iterated prisoner’s dilemma tournament (https://rationalwiki.org/wiki/Rapoport%27s_Rules). So this is an instance of Stigler’s law of eponymy (https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy), which is OK, but I just thought you should be alerted to it. I don’t mean to be overly pedantic. Although on this subject maybe pedantry is allowable!
That’s useful to know (and I see there is a RationalWiki entry labeled thus). Although I’m not sure it counts. Dennett’s formulation is the one everyone uses. I have not seen any formulation of it from Rapoport. Many claim it comes from him; none quote it from him.
Can you find that for me? As in, the actual words Rapoport used to describe this rule. Where is that published? And how does it differ from Dennett’s?
Dennett may have reworded it, but he himself says he was rephrasing Rapoport, as the link cited above says. It is in Dennett’s Intuition Pumps book, but I think I also saw it in an earlier Dennett book or article.
I know. But why doesn’t that link produce a quote from Rapoport on this point?
I need to see where he said this, and what exactly he said.
If in fact he never said this, or never wrote it down, or it was just some vague notion he came up with that Dennett fleshed out into an actual rule, then Dennett is the actual inventor of the rule, and Rapoport only his muse or inspiration.
So it matters what Rapoport actually said and where. As long as the answer is “nowhere,” it should forever stay the Dennett Rule. Likewise if Rapoport’s version was less clearly formulated, for example, or not generalized, or who knows what. Without seeing what he actually said, this cannot be evaluated, and he does not deserve any credit for it (beyond being the stated inspiration).
This is actually a reply to the last post of yours in this thread, which has no “reply” button.
You did not read the Dennett quote in the RationalWiki link. Footnote 3 gives two Rapoport publications. But it also credits in-person conversation between Dennett and Rapoport.
Also, use your own methodology. If, after reading those sources, you still cannot figure out how much credit Rapoport deserves, then you don’t know. So you should use the usual cop-out, the hyphenated eponym: the Rapoport-Dennett rule.
Of course, what Dennett says doesn’t prove Rapoport deserves all of the credit, because academic custom says you cannot name stuff after yourself. It looks really stupid, and Dennett was anything but. So Dennett being humble does not mean he doesn’t deserve most of the credit. But it doesn’t mean he does either. We don’t know!
It is pretty obvious IMHO that Rapoport deserves some credit.
Thank you. (Readers: You can find the thread Charles is referencing here.)
I am talking about the same thing: “in the RationalWiki link. Footnote 3 gives two Rapoport publications.” But I am asking what those sources actually say, because the note also claims Rapoport said he got it from “Carl Rogers,” in which case it wouldn’t be Rapoport’s rule but Rogers’s. So we need to see what Rapoport actually says in those two sources. That’s why we need them.
More confusing still, Footnote 1 credits a Rapoport publication, but it is his article on steel manning—where he himself cites and quotes Robin Sloan, which would make this the Sloan rule, not the Rapoport rule. Moreover, Rapoport there (an article from 2014) seems to think that’s where it came from (an article by Sloan from 2009)—which means it cannot have been in Rapoport’s 1960s publications listed in Footnote 3.
Worse, Sloan says he got it from a debating society in San Francisco. And so it’s not his rule, either.
Whereas when we compare all of these (so far as I have exact quotes), only Dennett’s is formulated as a rule.
Point being, there are issues with sourcing here that need to be resolved before any claim can be made.
Thanks, Dr. Carrier. Great post! I examined a philosophy book in my library–Semantic Analysis, by Paul Ziff (Cornell University Press, 1960). He spends an entire chapter analyzing the meaning of the word “good.”
Do philosophers in 2024 spend huge amounts of time analyzing meaning or was that a passing fad of that time period? Thanks!
I can’t say objectively. Subjectively that’s the surviving impression (that was the era of linguistic philosophy), but I am not so sure (I still see a lot of that), and I don’t have any statistical data to show whether there is a difference or how large it is.
Bunge might have filed that under “obsession with miniproblems” or “obsession with language” (a problem he himself was still seeing in the late 80s), but I think he would not, because a thorough analysis of the word “good” is not a miniproblem or an obsession with language: it’s actually useful (or can be). And it is generally agreed Ziff is a classic (though I haven’t read that book, I’ve come across summaries and applications of it).
It’s important to note that a massive dive on the meaning of an important word is not bad philosophy; it can be essential philosophy (someone has to do it; and it can be valuable to build on that brickwork, rather than having to reinvent the wheel time and again). What would make it bad is if it wasn’t even a good analysis of the word (if, for example, it left out meanings in use, per Ayer; or was unempirical, i.e. if Ziff just armchairs it and doesn’t do any actual field linguistics; or was overmotivated, i.e. Ziff skews the analysis toward some objective he wants rather than actually doing a proper job of covering all bases; etc.).
I should also add that that book is a teaching manual on how to do this kind of analysis. So we should expect chapters that do it to death, as they set examples for thorough semantic analysis to adapt to other uses. And I see that’s the case here: you are describing its last chapter, which aims to use all the tools taught in the previous chapters to sort out an important word in philosophy by way of example.
I was literally considered the top philosophy major in over a decade at my undergrad institution and I never read Hegel. With that said, I cannot rate him, but I know that everyone says, “you must read Hegel” like that’s some sort of rule. I read Saul Kripke, animal rights philosophers, (on my own) Eastern philosophers, Derek Parfit, Gaus, among others. I think Frank Jackson, Quine, and Baudrillard were the three hardest western thinkers I read. Fodor could be challenging at times due to all the Latin he used. I think the Buddhist logicians are underrated. They made advancements in logic that were not in vogue in the West until J.S. Mill.
Philosophy is just too big of a field to have read everyone. And, frankly, Hegel is not only kind of impenetrable (though that may be partially a translation issue – I’ve heard Kant is also easier to read in the original German though still very idiosyncratic) but also really contextual and of his time. Like, arguing that the entire universe’s metaphysics have been oriented to produce the Prussian state in his era is just laughable with the benefit of hindsight.
That having been said, a working idea of Hegel (or, more accurately, Marx’s idea of Hegel) is pretty critical to understanding Continental philosophy. But, IMHO, the dialectic is so abstract and arbitrary (it’s at best a useful heuristic or method, sort of like the Five Whys) that one can always just read everyone who uses it as using their own specific idiosyncratic take.
I can vouch for the fact that Kant is a better read in the original German. Indeed, sometimes you really can only understand him in the original German. There might be something to study there as to the assumptions of English translators that are creating the difficulty (since this discrepancy should not inherently be the case; the strong Sapir–Whorf hypothesis has long been refuted; though my remarks about translation generally in From Homer to Frontinus may apply here).
Kant is of course quite obsolete now, but that’s not his fault. It’s simply the inevitable consequence of our being hundreds of years now beyond where he was. He was still doing something important that has had a lasting effect, and it can be helpful to trace where philosophers have gone wrong by studying the original over-persuasive fork in the road they took, which Kant often is. He is like Aristotle that way.
I should also reiterate that I don’t think Continental philosophy is good philosophy. It took the wrong methodological fork. I actually place Kant in the superior Analytical philosophy tradition. Hegel is definitely representative of the Continental.
This is because I don’t make that distinction geographically but methodologically. Continental philosophy just happens to have been excessively popular in Europe (the “Continent” as it were, impertinent misnomer as that is; to paraphrase Eddie Izzard, “You know there is more than one?”). Analytical gets associated with the English-American side; but there have been plenty of good analytical philosophers on “The Continent” not writing in English. Like Kant.
The defects of Continental Philosophy are that it amounts to editorializing over analysis; it is not rigorous but meandering and imprecise; it uses words weirdly (or tries, weirdly, to rewrite how words are used); it is less scientific (and sometimes outright pseudoscientific), often ignoring science altogether (not just its methods, but even its pertinent findings), being too armchair even in its fundamental process; and it “cheats” too much. For example, trying to have a thing both ways, hence avoiding falsification tests (a point also commented on here); or hiding equivocation or handwaving fallacies behind obscure wording and grammar, replacing philosophy with, essentially, literature, as if poetry, useful as it can be, could do the requisite work, when it cannot.
Analytical philosophy specifically prioritizes analysis, science, rigor, and precision (and hence the avoidance of florid discourse that can conceal defects of argument behind artful language). It is (when done right) specifically wary of armchair solutions and polices cheating (using words weirdly gets a side-eye, for example). Analytical philosophy is, essentially, the fork taken toward what it means to be good philosophy.
Agreed. It is so frustrating engaging with “Continental” non-analytical philosophy because it’s such a grab bag of ideas and there’s little systematization.
I actually suspect part of the reason why Foucault is as popular as he is, and I share this perspective with Chomsky, is that Foucault was actually an excellent scholar with real analytical rigor stuck in a garbage intellectual environment. Archaeology of Knowledge is really great and ground-breaking and it has a very reasonable arrangement of his ideas. But internal to the chapters and organization, he has these wild digressions (and all the thick postmodern jargon that actively weakens arguments by making them vaguer), and not only does it make it a bitch to read but it also means he ends up diluting good arguments with weird bad ones, often including being out of his field of expertise. And it’s clear to me from everything I’ve seen of him, including the Foucault-Chomsky debate, that he absolutely knew the problem with making up armchair bullshit and ignoring expertise, but it’s just sort of inevitable in that environment.
Similarly, Latour’s core arguments are really fascinating, though I suspect a lot are actually based on history and scientific analysis that may be false or misleading, but again, there’s so much garbage that lets really stupid stuff be associated with a core interesting set of analyses.
I actually think pretty much every postmodernist philosopher would have benefited greatly from being forced to write either only in prose or poetry (Sartre and Camus are a lot more engaging because of their literary power) or in pamphlet-length pieces, in the same kind of form and with the same kind of style as Singer and with as little jargon as possible. A lot of it would still be junk, but the interesting stuff worth using would be easily found and the junk would obviously be junk.
That having been said, the issue I’m discussing can be seen even in the much more analytical, rigorous Marxist and post-Marxist literature. The Frankfurt authors actually have a ton of really interesting things to say, but then random dialectical nonsense gets thrown in just for the sake of it.
Even Foucault was (not entirely but) largely separated from the actual scientific literature on things like sexuality, sociology, and culture. Too much armchair, not enough checking premises empirically. He was not as good a philosopher as made out to be. He just happened to be sort of right about some things, and those tended (ironically) to be analytical, not empirical things. So, for instance, his analysis of power as a concept is a tour de force—of analytical philosophy. When he tries to extend that analysis into the empirical world, he starts to fail, because his methodology is wrong.
In the empirical realm his findings are either trivial (e.g. that sexuality is in some sense a social construct is true but useless data without empirical science developing the outlines of how much is constructed, where, why, how, etc.) or antiempirical (e.g. denying that sexuality is partly biological and that all cultures problematize it). In short, I think he gets too much reverence. He gets credit for popularizing social constructivism as an approach. But he loses credit for not pushing for this to be engaged empirically, using scientific methods, rather than armchair editorializing.
In particular, a classic problem Foucault demonstrates is that, if you’ve discovered a mode of a thing, you may not have discovered the mode of a thing. Foucault’s cynicism led him to be able to see power pretty objectively, but as wide-ranging as his concept of power is, it’s not remotely exhaustive, precisely because he was drawing non-systematically from a limited sample size using a non-empirical methodology.
For example, it’s obviously trivially true that people are debating over what counts as true and truth, but it’s not obviously true and is in fact false that those debates are wholly arbitrary. Some people are using a non-universalist and non-honest method of determining what’s true, and some people are doing better. I think this is where he really had no response to Chomsky: One can say like a One Piece villain all one likes that “justice” is just made up, but there are some ideas of justice that are obviously better thought out and less cynical, and the fact that people reject those is no more an indication that the rejection is valid than the fact that some people are flat Earthers means that the Earth may be flat.
Similarly, Foucault was right to think about power at a micro-level, but that doesn’t make the macro-level irrelevant. In sociology, one would correct that error by, having used a symbolic interactionist approach, shifting to a conflict and functionalist approach, to make sure that one’s rounded out the analysis.
I don’t want to impute to Foucault things he didn’t argue, so I’ll say instead it is some people who cite him who make the claim that he said that things like debating what is truth are mere arbitrary power games; I am not actually aware that that is actually anything he ever argued, but I haven’t extensively enough read him to know that for sure.
But taking that as even just someone else’s inference from what he argued, that reflects precisely the point where philosophy is slipping out of sound analytical mode and into dubious continental mode. Because asking what something is is arbitrary only in respect to the sound you choose to utter as a symbol for it (the word itself); otherwise, you yourself (whoever is the one making the statement) mean to be either referring to something real or you are claiming there is nothing real to refer to, and either way, that’s an empirical and no longer an analytical claim.
Analysis can get you to all the possible meanings of a word and maybe even to the meaning you wish to discuss or test or find out if it exists; but it can’t get you to whether it does exist. And that itself is an analytical conclusion: that you can’t change what things are by changing what you call them. So no matter what you try to argue the sound-utterance “true” is “supposed” to mean, every other thing it logically can mean still exists analytically, and thus could still exist empirically. The only way to know is to find out.
And this goes even for what you will want to care about: Every definition of “true” exists. But which ones you should care about also falls to the same analytical-empirical research program. This is why you can’t just choose to walk through a wall or live forever. Reality does force you to care about some things more than others. And that’s how we end up with certain functional choices as to what the word “true” should mean. You can reduce this to power (insofar as reality has power over you), but that wouldn’t change anything else about it (reality is still real, and it still can’t be changed by changing what you call it).
Richard: That’s a point I find so frustrating in conversations with postmodernists.
The thing about postmodernists is that they are often irrelevantly correct. So, like, yes, in actual practice, it doesn’t matter what a text says or what the author’s intent was; it matters how the text will be decoded in its context. But the point is that you have to be conscious of your hermeneutic framework. If you know that someone will interpret what you say as a statement of the original author’s intent, you have to be clear. Asimov famously gainsaid Tolkien, arguing that the ring, despite Tolkien’s protestations, clearly represented technology, and I agree with Asimov: Tolkien’s actual values and perspectives were clearly subconsciously coming out, and the contrast between evil polluting imperialism and morally-flawed-but-based-in-beautiful-goodness pastoralism is his essential worldview in the text. But Asimov told people that that’s not how Tolkien himself claimed to have intended it.
So practitioners of Biblical hermeneutics will often act not as if they are offering an interpretation requiring a complex dogmatic lens and extensive negotiation with the text, as McClellan points out (I’m not the first to recommend Dan to you, but he is excellent and has even been pretty reasonably soft on the mythicist question the few times it’s been brought up – it’d be really interesting to see you two have a conversation), but as if they are indicating not just the plain unvarnished reading of the text but the only possible reading ordained by God. And it’s straight up bullshit, by their own standards.
Similarly, there’s a kind of argument that has some truth to it, made by the Reza Aslans of the world, that ancient people had a different perspective on truth from ours, mythic truth and all that, and so thinking in modern truth terms about ancient texts can be misleading. But, as you’ve documented, it’s still clear, as a matter of historical fact, that ancient people as well as modern people often made the claim that their stories were not merely mythically or allegorically or spiritually true but literally true, and in that context they are wrong.
So, yeah, saying that the Earth is indeed on the back of a tortoise may be “true” in terms of describing some allegorical reality or saying something about the human condition or how cultures related to it, but insofar as those cultures actually cared to make a claim about Big T truth (this is something postmodernists disingenuously claim only Westerners do), they were wrong and science is much closer to right.
So scientific epistemologies based in a proper rational-empirical balance will indeed have an implicit idea of what truth means and what kind of truth we care about (e.g. predictive expressions of future experiences in reality, little noumenal arrogance, etc.), but that’s actually what people mean when they make truth claims and it’s how people actually behave when they want to actually solve a problem or determine if something is true. And I think that’s essentially a priori logical: If you are answering an undetermined question, something like a scientific epistemology is going to pop out in any universe.
And so, in terms of things like culture-jamming, it is actually quite important to identify who is making fraudulent claims to the science and who isn’t (e.g. climate change activists versus climate deniers), who acts like they have a rational empirical methodology and who doesn’t, and so forth. Even if we’re going to say that playing “the truth game” by those rules is arbitrary, postmodernists don’t seem to recognize the implication that once everyone is playing by those rules, that’s how we judge them.
All good points, Fred.
I’ll just add the footnote that there really wasn’t a “different” mode of thinking in antiquity, mythical vs. literal. They mostly took their myths literally, and when they didn’t, they simply treated them as fiction, i.e. literature (whether in whole or in part, e.g. the use of metaphor and allegory). We do the same thing: we make moral and societal points using fiction, both as writers and as readers who allude to fiction in our presentation of lessons or examples.
We also have “myths” that we believe are literally true (like Haitians eat your pets, or the meme of an AI crying girl “tells the truth” about something even though it was fake; liberals have their “men in black” and “Chevron buried the car that ran on water” and contrails and aliens are among us and so on—and even, ironically, “the ancients didn’t take their myths literally”).
We aren’t actually all that different. The differences are mere trivia of style (the way ancients wrote and interpreted myths, as fiction or as truth, differed in details only; likewise whether our mythology incorporates the supernatural or not is trivia, not a fundament, as I point out in the conclusion of my article on Jordan Peterson).
OH MY FKN GOD, you let RANDROIDS talk you out of Marxism! This explains EVERYTHING! mind blown
Dr. Carrier! I’m a Marxist! I have an ego! You helped me discover it! You were practically my Annie O’Sullivan! Everyone thinks their thing is THE thing! That’s why Rich Mullins made his “one thing” God, or at least that’s what he told himself! But he had a misogyny problem, at least subconsciously, ’cause he was super anti abortion! Dr. Carrier! “THE” thing is naturalism! But your metaphysical naturalism isn’t the fucking same as my metaphysical naturalism! That’s the real lesson from Hegel! Inoue had that figured out! “In a way”! 🤣
No, Hegel never taught any such lesson. Hegel wasn’t even a naturalist. Hegel was a metaphysical Idealist and a whackadoo theist. He was anti-empirical and tried to do science from the armchair, which expectedly failed (he was wrong about everything, and we’ve moved on since). And he contributed nothing to the modern science of logics.
Marx and Engels were naturalists! Engels even wrote a book called “dialectics of nature”! You should read it! Just from some of the things he says, I feel like Sagan must have read that book somehow. Marx said that he flipped Hegel on his head! I would say he flipped Hegel inside out! I did the same thing with the platonic cave if you read my article (the latest on my website)! Like I said in the other comment, much like St. Paul and Lana Del Rey, Hegel is all about WHAT YOU BRING to the reading of him! It’s not Hegel’s fault if you DIDN’T LISTEN to what he MEANT! You have to FEEL Hegel, Dr. Carrier! You have to FEEL the master-slave dialectic to realize that YOU’VE ALREADY LIVED IT!
Dr. Carrier! If I learned something from Hegel it’s not wrong for me to say I learned it from him! It’s not wrong for me to call it a “lesson”! I was listening to “Shades of Cool” by the great American philosopher Lana Del Rey! Sometimes I feel like that song was about me personally! “YOU ARE UNFIXABLE! I CAN’T BREAK THROUGH YOUR WORLD!” But sometimes I feel like that song was written personally about whoever I happen to be talking to in the moment! 😛
I didn’t say you didn’t learn a lesson. I said the lesson was not in Hegel. And Hegel as a philosopher lacks any of the virtues you are claiming for him. That’s an objective fact. And it is wrong to deny objective facts, or fail to learn from them.
You can’t argue against Hegel with “objective facts,” Dr. Carrier! That’s Aristotelian logic! Those in the reality-based community (https://en.wikipedia.org/wiki/Reality-based_community?wprov=sfla1) will always be the Washington Generals to the Harlem Globetrotters of those who create their own reality! You saw this with the Iraq War! The Bush administration didn’t give a fuck whether there were WMDs in Iraq! YOU were a sucker if you thought ANY reason for invading Iraq was justified! “Reality” isn’t the same thing as nature, Dr. Carrier! That’s what “Nephew Karl” Popper said in “three worlds”! 😛
Did you read 1984, Dr. Carrier? The main character’s whole problem was that he thought he could beat the Orwellian logic with Aristotelian logic, but then he reads the Goldstein report, which says the new government runs on a (per)version of Hegelian logic! Dr. Carrier! When I got caught vaping at the Creed concert in San Antonio on Friday the 13th, I couldn’t get out of the trap they set for me with Aristotelian logic! I tried to point out the “objective facts” that ketamine was the drug that killed Elijah McClain and Ativan was the drug that killed Chris Cornell, but they still injected me with those drugs! My uncle also didn’t care that I was true/false Aristotelian logically correct when I had a meltdown at his house when he was being Islamophobic and said Muslims worship a god that’s not Allah and I told him he was full of shit! He still called the cops on me for having an autistic meltdown and had me taken to a fucking behavioral center for three days and three nights! Even though he was the one who hit me! Then my parents had to pick me up and take me back home with them finally after they could no longer support my already ascetic lifestyle! Dr. Carrier! I’m not sure I can say I “lived” 1984, but my life was just as “Orwellian” in the past couple of weeks!
The quantum computer uses Hegelian logic, Dr. Carrier! It’s in two places at once! >_<
That is neither what a quantum computer is or does, nor anything to do with Hegel.
Dr. Carrier! We live in the 1984 society! My entire life in Austin was completely destroyed just because I violated the “obedience is strength” principle! The response is always the “freedom is slavery” (or at least “confinement”) principle! 💀
I can always trust you to ignore everything I said except the one thing you can claim is fallacious! Don’t they call that the fallacy fallacy? But you don’t care what I say about the control society ’cause you think you’re safe from the control society! We all know why you think you’re safe from the control society too! But you still think you can go around cutting Gordian knots! You’re still trying to be Alexander! I have autism! I can see through you! I’m not on the internet with you, Dr. Carrier! You’re on the internet with me! Never forget that! You can zap me again and again, but you can’t zap the spirit (the pattern of activity) of Diogenes! The activity pattern of Diogenes will always tell you, “stop casting your (Jungian) shadow on me!” That’s how you’ll know it! It’s the shadow that’s telling you to focus on my “fallacies” and not allow yourself to entertain the possibility that, through my experiences as a disabled person in the control society, I’ve gained real insight on things that you can’t learn solely from reading a book! So keep telling yourself the same static story about me! I’ll keep staying dynamic! 🤩
Are you familiar with the work of Bernardo Kastrup? I am a fan of yours mainly as a historian, and a fan of Kastrup mainly for his metaphysics. I have a hard time buying into your criterion of “good” philosophy necessarily being empirically based. You might be interested in the ChatGPT chat that helped me explore this conflict. https://chatgpt.com/share/67015b89-09c8-800d-a483-bca86c30da54
So far as I have seen, Kastrup is a pseudoscientist anchored to crank ideas about reality.
And you should never trust AI. It hallucinates, confabulates, and errs more than just picking a random person on the street to query.
Al: I am more sensitive to the issue of empiricism as the end-all be-all than Richard is.
But what I want you to think about is:
If you don’t use empiricism, what the hell else do you have?
Everyone who abandons it starts talking in wildly divergent, vague ways about entire alternate realms.
Many are logically possible (some are self-contradictory, which is already a pretty big blow: empirical data is never going to contradict itself), and mutually exclusive.
So which one do we pick?
This is my problem with postmodernists who talk about various modes of “knowing”.
A Native American myth may be a fascinating story that can help one arrive at “knowledge” of one’s own phenomenology, or some other context where we’re not talking about the world we’re in.
But it’s not useful for the world we are in. And, yes, we are clearly in one. Even if it’s a fictive world, like a brain in a jar, it’s a world.
So what criteria do you have to sort possible but true things from possible but false things without what is ultimately rooted in instruments that you actually have access to and that you know work and are not delusive?
There’s no answer to this.
Similarly, I agree with critics of falsifiability that it may not be the only useful approach, but what I will say is that falsifiability is one of the only ways you can be sure that you’ve said something non-trivial. In literary analysis, where you can make any argument and defend it from the text, you still need a falsifiable thesis. “Hamlet is insane sometimes and not insane other times” is a garbage thesis because it is totally trivial.
We can do empirical metaphysics! EMPIRICALLY, I’ve exhibited the phenomenon of crying while listening to Lana Del Rey! (Dr. Carrier would certainly corroborate that I would never shut up about her when I took his naturalism class). EMPIRICALLY, the external phenomenon of crying (crying as seen from the outside) is connected to physiological processes (“occurrents”) within the body! You can see this at larger scales too! We can connect the EMPIRICAL devotion that Lana Del Rey has proven to inspire in people to the fact that her Born to Die album has EMPIRICALLY spent over a decade on the billboard 200 chart! This is a rare feat! I wouldn’t quite call it a “statistical miracle,” but it’s still a big fkn deal! That’s all “social” metaphysics! It’s also entirely EMPIRICAL! But hey. Lana Del Rey is MY thing! That doesn’t mean she’s THE thing! We all have to find our own “MY thing”! 🤣
Mario: Conceptually, that’s fine. That’s really the only reasonable response, in fact. As a general approach, we’ll be arriving at metaphysical conclusions based on the data we have. Metaphysics following physics.
But the problem is that this process too is so fraught. So, for example, the dialectic. Let’s grant for discussion that the dialectic is a meaningful mechanism that occurs in empirical reality and is distinct from just any generic pattern of change. Does that mean it occurs metaphysically too? No! That doesn’t follow! In fact, it seems quite likely that Marx is right that the very idea of metaphysics having some changing process is absurd! It seems likely that only stuff in universes change, not stuff outside of them.
And, no, you can’t actually do what you just said about Lana Del Rey. That’s incredibly poor reasoning. You feel great listening to her! Great. Does everyone else? Maybe they think her music is just insightful, or soothing bops. It’s actually totally silly to think that your experiences with art are going to be at all universal. You would need actual data about how people listen and why they buy to make that conclusion.
And that’s emblematic of what I’m talking about. When you don’t know, you don’t know. And so much of metaphysics is just angels-on-a-pin reasoning. There’s no way even in theory to verify or deny it.