Tooling around looking for lists of “unsolved problems” in philosophy, I must admit the best list most easily found online is Wikipedia’s. I realized that for general benefit I should write up how my worldview addresses these. I’ve already forayed a bit into this area with my articles Eight Philosophical Questions We’ll Never Solve?—which lists the most popular examples among the public—and How I’d Answer the PhilPapers Survey, which was constructed around what its authors thought were the most important unanswered questions (or perhaps we should say “unresolved debates”) among philosophers. The Wikipedia list has some overlap with both.
Order of Problems
I’ll first reorganize the list to better group problems together by similarity of needed solution. I’ll explain what these problems are later on, but for the moment, I’ll just present my new ordering of the twenty-two of them listed at Wikipedia.
First there are the most fundamental questions of philosophy itself:
- How do we solve the Problem of Induction?
- Does philosophy ever make progress?
And that leads to a slew of questions about epistemology, about how we can claim to know things, and what we mean when we claim to “know” them:
- How are counterfactuals true?
- How do we solve the problem of material implication?
- How do we solve Gettier problems?
- How do we justify knowledge claims?
- How do we solve Agrippa’s Trilemma?
If we step in a different direction, that all leads to questions about the physical mind—how do we ultimately explain basic mental phenomena, what are we even talking about when we discuss them:
- How do we solve Molyneux’s Problem?
- How do we resolve the Mind-Body Problem?
- What are qualia?
- Why are there qualia?
- How could we tell an AI is real?
This then leads beyond the mind, to questions about mental objects, about things we claim to “perceive” with the mind but don’t immediately seem to “exist” outside the mind:
- Why is there something rather than nothing?
- How do we know anything is real?
- What are mathematical objects?
- What are universals?
And those lead to more particular questions about the semantics of distinction, about how we demarcate one thing from another, how we can even say there are different things, and that everything isn’t just all one and the same thing—and even if it were, what would that even mean:
- What individuates universals into particulars?
- How do we resolve the Sorites Paradox?
- How do we resolve the Theseus Paradox?
- What’s the difference between science & other areas of knowledge?
And finally there are all the questions pertaining to how we should live our lives, once we have worked out all the above or concluded they have no solution, and once we have assimilated all other knowledge we as a species can claim to have achieved:
- How do we know anything is morally true?
- How do we deal with the quandary of moral luck?
I’ll take these in that order.
The Problem of Induction
The Problem of Induction is, of course, the problem of how you can conclude things will continue happening as they have in the past (or any inference of the kind) without presupposing they will do so. As David Hume originally posed the problem, you can’t argue it is probable a uniformity of nature will continue, without assuming a uniformity of nature will continue (see the Stanford Encyclopedia of Philosophy). Hume was actually wrong. But only because he hadn’t quite become aware of (or incorporated) the logic of his contemporary Thomas Bayes.
In truth, we do not argue from a premise of uniformity in inductive reasoning, but to a conclusion of uniformity, by positing that uniformity as a hypothesis and weighing it against every competing hypothesis that could produce the same observations. There are essentially only two (and various combinations of the two): random chance (an observed pattern is just accidental and thus indicative of no continuance of it), and intelligent design (some Cartesian Demon is arranging things to look that way, and it could stop doing so any time now). Both can be shown to be extremely improbable—when the data are sufficiently extensive, and no evidence exists of either alternative—relative to the conclusion of uniformity (accidental patterns become exponentially unlikely all on their own, producing extremely low likelihoods; while Cartesian Demons require exponentially improbable ancillary assumptions, producing extremely low priors). This does not declare the other hypotheses absolutely false; rather, it concludes they may yet be true, but we have no reason to believe either likely. And as long as we accept that that’s all we can say, and all we have to say, the problem is solved.
That solution does require assuming two things are true: (1) the principle of indifference correctly represents our state of knowledge in the absence of any determining information (in other words, when we have no evidence a given probability is higher than 50% or lower than 50%, it is for all we know 50% as likely to be either); and (2) deductive logic is valid (from which one can prove the requisite probability theory true, e.g. we can determine Laplacean probabilities for the alternative hypothesis of chance accident, from any data set in conjunction with the principle of indifference). And we do not run up against Gödel here, because we can prove all necessary propositions in probability theory with Willard Arithmetic, which is immune to Gödel’s incompleteness theorems. So we have a deductive route to inductive logic. All we have to do is admit there is always some nonzero probability our conclusion of uniformity is false. After that, the more evidence we have it’s not false, the less justification we have to ever believe it is.
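To illustrate the kind of math meant here, consider a minimal sketch in Python (my own toy illustration, not a formal derivation): Laplace’s rule of succession, which follows from the principle of indifference, gives the probability a pattern continues after n uniform observations, while the likelihood of that same run under the “random chance” hypothesis decays exponentially.

```python
from fractions import Fraction

def rule_of_succession(n):
    """Laplace's rule of succession: the probability the next observation
    continues a pattern already seen n times in a row, derived from a
    uniform (indifference) prior over the underlying frequency."""
    return Fraction(n + 1, n + 2)

def chance_likelihood(n, p=Fraction(1, 2)):
    """Likelihood of n identical outcomes if each were an independent
    50/50 accident: it decays exponentially as observations accumulate."""
    return p ** n

for n in (1, 10, 100):
    print(n, float(rule_of_succession(n)), float(chance_likelihood(n)))
```

The conclusion of uniformity thus gains probability with every observation, by a deductively valid calculation, without ever presuming uniformity as a premise.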
I already made this point in my take-down of that kooky Christian apologist Alvin Plantinga. As I summarized elsewhere (emphasis now added):
[Infinite] regress ends with facts that cannot be false (raw uninterpreted present experience); what we can infer from those facts is probabilistically true by virtue of deductively certain logic (so induction does not require a circular presumption of the future resembling the past)
In other words, statistical mathematics gives us a deductive path from any body of data to a concluding probability that a pattern that data exhibits will continue (and thus inductive inferences can be made). As I elaborate with examples in my discussion of Plantinga’s attempt to solve the Problem of Induction with God:
Really, the Problem of Induction is nothing more than the fact that there is always a nonzero probability of being wrong. But it does not follow that that probability is always the same regardless of how many observations you make. Obviously, the more observations you make [of a thing happening] … the more likely the hypothesis is that it will continue…
And that conclusion can be shown to follow deductively. Everything after that is properly inductive, and justified thereby. You might notice before we conclude this article that re-framing all inductive knowledge claims as claims to a probability solves nearly all “problems” in the philosophy of knowledge. That indicates probability theory is a very important tool that philosophers are too frequently neglecting when conceptualizing what they even consider a problem in the first place. And it’s not that philosophers have never heard of probability theory. They just under-employ it when solving or even approaching philosophical problems.
Philosophical Progress
I won’t belabor this one here, as I have already extensively answered it (see Is Philosophy Stupid?). The short of it is: yes, philosophy has obviously made quite a lot of progress, and thus clearly can continue to.
The sub-question, “Is there as much progress in philosophy as in science?,” should be answered in the negative precisely because science is simply philosophy with better data, so problems move from philosophy to science as soon as adequate data exist to answer them more conclusively, and this has happened to almost everything now. Thus science will always accumulate most of the successes in answering difficult questions; philosophy is by definition always what is left over. So it will not only have vastly fewer problems to confront, but also, by that fact alone, combined with those problems being so difficult even science couldn’t yet solve them, it will be able to claim but few of them solved at all, at any given time. This is simply due to the nature of how problems are assigned to philosophy in the first place, as opposed to ending up resolved by other fields of knowledge.
In a sense, of course, science is the success of philosophy. Every scientific success is a philosophical success, insofar as science is just the best philosophy on the market (and if you don’t think that’s a valid remark, do note, I have demonstrated it to be a historical fact). But that sub-question is usually meant to already accept our recent demarcation of science and philosophy as separate endeavors. And from that demarcation, my conclusion above follows. The second sub-question, “Why isn’t there more progress in philosophy?,” is thus easily answered by these very same points.
The only thing to add here is that the success and progress of philosophy is largely masked by one prominent flaw in academic philosophy today: the nearly complete failure to demarcate sound from garbage philosophy (again, see Is Philosophy Stupid?). But if we were to toss aside all manifestly fallacious argumentation, and all arguments in academic philosophy premised on scientifically false or implausible facts (if we were to “demarcate the astronomy from the astrology” as it were), the progress philosophy has made would be far more evident. Much of what people complain about (“philosophers arguing the same points for ages”) is a product of academic philosophy’s continuing to give equal respect to badly argued ideas. Which is certainly a flaw to criticize. But if academic astronomy published astrology books and papers with equal abandon, that would not mean no progress had been made in astronomy. You’d just have to sift it out to see it. So, too, philosophy.
How Are Counterfactuals True?
A counterfactual is a proposition (a statement in any language or wording) that declares as true something that would have happened or have been the case if something had been different than it actually was. For example, in The Craft of International History: A Guide to Method (one of my top recommendations for learning graduate-level historical methods), author and historian Marc Trachtenberg demonstrates the truth of the proposition that “if the Nazis had captured Russia before 1945, then South America would probably have joined the Axis Powers and invaded the United States,” as in fact several South American nations had a pact with Hitler to do exactly that on exactly that condition. That is a counterfactual, in the sense that neither thing did happen, so each half of the proposition alone is false, but somehow their combination as a conditional is true. How?
This is important because in fact all empirical reasoning depends on counterfactual reasoning. Every time you declare a hypothesis (probably) true, you are simultaneously declaring competing hypotheses (probably) false, and the only way to do that is by counterfactual reasoning. As I explain in Advice on Probabilistic Reasoning, the only way to prove a theory true is to try and prove it false and fail. And that requires reasoning out what would be the case if a competing hypothesis were true, and looking for that (what you have just reasoned “would be true” if that hypothesis were true: a counterfactual); and by failing to find it, you confirm that hypothesis is less probable than the one you are aiming to prove. That is the only way to ever increase the epistemic probability of any hypothesis—about anything, whether in science or history or any other field. So we need to be able to tell the difference between true counterfactuals (such as would entail the factual conditional “if South America would probably have joined the Nazis in the war if the Nazis had taken Russia, then there should be evidence of their agreeing and planning to do that”) and false counterfactuals (such as would entail a factual conditional like “if South America would probably have joined the Nazis in the war if the Nazis had taken Russia, then Hitler should have dyed his hair blonde”). So how do we do that?
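To make that logic concrete, here is a minimal sketch of the Bayesian mechanics (the probability values are invented purely for illustration): when a competing hypothesis predicts evidence that we then look for and fail to find, the odds shift against it by the likelihood ratio—which is the only way any hypothesis’s epistemic probability ever goes up.

```python
def update_odds(prior_odds, p_obs_given_h1, p_obs_given_h2):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (p_obs_given_h1 / p_obs_given_h2)

# The competitor H2 predicts certain documents should exist if it's true
# (P(found | H2) = 0.9); our hypothesis H1 makes them unlikely (0.2).
# We look, and they are NOT found:
p_absent_given_h1 = 1 - 0.2
p_absent_given_h2 = 1 - 0.9
odds = update_odds(1.0, p_absent_given_h1, p_absent_given_h2)
print(f"posterior odds for H1 over H2: {odds}:1")  # 8:1 in favor of H1
```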
The problem of counterfactuals is, simply stated, that their being true seems to entail a contradiction: if you start with, as premises, all true facts (our “background knowledge”) with which to construct an “if, then” counterfactual, that “background knowledge” includes the negation of the propositions the conditional is constructed from (“the Nazis did not capture Russia” and “South America did not enter the war” and even “all true facts as actually given, South America probably would not have entered the war”). So we have a system in which we are supposed to maintain two contradictories, “that Hitler seized Russia” (hypothetical) and “that Hitler did not seize Russia” (empirical). How can any coherent conclusion arise? (See the entry in the Stanford Encyclopedia.)
Ironically, I already covered this, too, in my take-down of that ever-kooky Alvin Plantinga. He was trying to argue that “How can counterfactuals be true?” can only be answered “Because, God.” Which is as tinfoil hat as you can get. As I wrote there:
I know, if I run pell-mell into the wall next to me, I’ll injure myself; because if I had run pell-mell into the wall next to me, I would have injured myself. God does not have to exist for that counterfactual to be true. I simply model the scenario in my head and run the model: I know what happens when I run pell-mell into things; I know what my wall is made of and would do to me. The output of the “sim” I ran is thus “I’d be injured right now.” I didn’t tap God’s mind for that. I just tapped my experience (of walls and running and my susceptibility to injury), and set the parameters of the model I wanted to test (my colliding pell-mell with a wall), and ran an analysis. That’s how a robot could learn [and it really did] how its own legs worked without anyone even telling it it had legs, much less anything about those legs. It randomly made up counterfactuals, and tested them until it discovered the ones that instead proved factual. It certainly wasn’t Talking to God.
The solution is simple, and has been proposed before (so, really, this problem was solved; philosophy as a field just has no reliable standard by which to declare that). All propositions are hypothetical: they are all descriptions of models of the world. A counterfactual is like a statement of fiction, and can be true in the same way. It differs solely in that whereas truth-claims about fiction are constrained by imaginary facts, counterfactuals are only constrained by actual facts. One simply substitutes the fictional system of facts with an actual system of facts, and works out what is true and false in exactly the same way.
For instance, “at the time of Luke’s apprenticeship, Yoda was over 800 years old” is a true statement; yet Yoda doesn’t exist, and thus never actually lived a single year. No one has a difficulty grasping this. The statement is about a fictional context, a model. It is not a statement about the world. No one thinks “Yoda was over 800 years old” refers to a real person or real years. And yet they will easily be able to explain why “Yoda was then only 12 years old” is false. It is not false because there is a real Yoda and a real fact about how old he was at the time. It is false because we are making claims about a fictional system someone invented, and in that system that claim is false. We could invent our own fictional system in which Yoda was only 12 years old; but we would then no longer be talking about the fictional system everyone knows well and usually means when discussing ficto-factuals about Yoda. We would be switching systems, so any attempt to imply that a claim about the newly-contrived system is also a claim about that other system, the one pretty much everyone is actually talking about, would be a form of equivocation fallacy.
Real counterfactuals operate the same way. They are statements about a fictive system (one in which, for instance, the Nazis did capture Russia), in which all other facts remain true and not fictional. As with Yoda, we would have to reference a “canonical” body of data to determine what his “canonical” age was when he taught Luke; whereas with counterfactuals about the real world, the “canonical” body of data is every true fact of the world with only one thing changed (the protasis of the conditional, the “if” declaration). We then “deduce” from that new completed set of information what is expected to happen. Note that in this system the statement “the Nazis did not capture Russia” is not included. So no contradiction results. Counterfactuals are thus fictional, but they can be judged the same way fiction can be, as when we say a story is “unrealistic” or “would never have happened that way,” because it makes things happen or people behave as we know they factually wouldn’t.
I’ve discussed the philosophy of propositions as statements about models before. Once we accept that that is what propositions are, we no longer run into a problem telling the difference between true and false counterfactuals. We can identify as false the proposition “if South America would probably have joined the Nazis in the war if the Nazis had taken Russia, then Hitler would probably have dyed his hair blonde” because in the system, the model of reality, that we are talking about, only one set of facts has been changed: that the Nazis captured Russia and South America joined the Nazis in the war. Everything else stays the same. When no other facts in the system are changed, that system does not make it likely Hitler would dye his hair blonde. Because there is no causal connection between those two changed facts (whether Russia was captured, and whether South America entered the war), all other true facts, and Hitler being at all likely to dye his hair blonde. Whereas there is a discernible causal connection between those facts (true and changed) and South America entering the war. And that model predicts certain things should be observed even in the real (not just the counterfactual) world, such as the existence of documented plans for several South American nations to do exactly that (and those documents really do exist).
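One way to picture “change one fact, hold everything else, and run the model” is as an intervention on a small causal model, in the spirit of Pearl’s do-operator. The facts and dependency rules below are a drastic simplification of Trachtenberg’s example, invented purely for illustration:

```python
def run_model(intervention=None):
    """A toy causal model of the WWII counterfactual: intervening sets one
    fact directly, while every other fact and causal rule stays the same."""
    facts = {
        "nazis_capture_russia": False,  # the actual historical fact
        "pact_with_hitler": True,       # documented real-world fact
    }
    if intervention:
        facts.update(intervention)
    # Causal rule: the pact triggers entry into the war only if Russia falls.
    facts["south_america_joins_axis"] = (
        facts["pact_with_hitler"] and facts["nazis_capture_russia"]
    )
    # No causal path in the model connects any of this to Hitler's hair:
    facts["hitler_dyes_hair_blonde"] = False
    return facts

cf = run_model({"nazis_capture_russia": True})  # the counterfactual world
print(cf["south_america_joins_axis"])   # True: causally connected
print(cf["hitler_dyes_hair_blonde"])    # False: no causal connection
```

Note that the intervened model nowhere contains the premise “the Nazis did not capture Russia,” which is why no contradiction arises.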
So anyone who thinks counterfactuals entail a “problem” simply is confused about what counterfactuals are even doing, what they are statements about. Once you fix that confusion, the problem goes away. All of this requires understanding the difference between what a statement is about, and what would make that statement true. But that’s precisely the kind of thing philosophers should be doing well, not poorly. I have more to say on the ontology of fiction in my old article about Moral Ontology and more recent discussions of Mathematical Ontology, both of which you’ll notice also come up in this list of “unsolved problems” in philosophy. Indeed, this same confusion, once unraveled, solves many other problems on the list.
Material Implication & the Gettier Problem
The problem of so-called “material implication” is that in formal logic we can declare almost any conditional true, such as “if rabbits usually have fur, then cars usually have wheels” or “if rabbits govern the United States, then the moon exists” or even “if rabbits govern the United States, then the moon does not exist” (see a common example of the pertinent truth table for material conditionals), and this leads to a bunch of difficulties philosophers wring their hands over. But this is really just a problem with formal logic’s notation. It is simply broken. Fixing that mistake removes the problem. And this gets us to the importance of philosophers learning to speak English and understand how ordinary language works.
As it happens, the solution here is essentially the same as for counterfactuals. Because this is really just another claim about counterfactual propositions, being that more material implications are adjudged “true” than should be on any sane principle of adjudging counterfactual truth. When we accept that conditionals are statements about models, all the problems of material implication go away.
Hence I simply reject the paradigm. In popular language, no one would say “if rabbits usually have fur, then cars usually have wheels” (or any of the other examples I gave) is true. So philosophers are just wrong to claim otherwise. If you asked a layperson sharp enough to understand the matter why that conditional is false, they would explain that conditionals are supposed to state causal relations (about some designated system; which without context is usually meant to be “the real world” but can be about fictional or counterfactual worlds), or logical entailments. No other conditionals are true. Period. And here there is neither. “Rabbits have fur” simply does not logically entail “cars have wheels,” nor (by itself) would it be a necessary or sufficient or even contingent cause of cars having wheels. It is therefore false, even though formal logic would tell us otherwise. Ordinary people are just speaking a different language than philosophers; and oddly enough, in this instance, the people’s language is more coherent, as demonstrated by all the problems the philosophers’ language causes here—problems they are so dismayed by they incredibly declare them “unsolved”!
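For contrast, a minimal sketch of the two semantics (the relevance check is my own stand-in for “causal relation or logical entailment,” which in practice would require a real model of the world): the material conditional counts “if rabbits usually have fur, then cars usually have wheels” as true merely because both halves happen to be true, while the ordinary-language test rejects it.

```python
def material_conditional(p, q):
    """Truth-functional 'if p then q': false only when p is true and q is false."""
    return (not p) or q

p = True  # "rabbits usually have fur"
q = True  # "cars usually have wheels"
print(material_conditional(p, q))  # True -- the paradoxical verdict

# Ordinary-language test: a conditional is true only if there is a causal
# or entailment link between antecedent and consequent. Here that check is
# stubbed with an explicit (empty) relevance table, purely for illustration.
RELEVANT_LINKS = set()  # no known causal/logical link between these two claims

def ordinary_conditional(p_name, q_name, p, q):
    return (p_name, q_name) in RELEVANT_LINKS and material_conditional(p, q)

print(ordinary_conditional("rabbits have fur", "cars have wheels", p, q))  # False
```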
Philosophically minded readers might notice that the error of material implication also resembles what happens in the creation of Gettier Problems, which start with random accidental conditionals that lead to “justified true beliefs,” and hence “knowledge” in the usual philosophical parlance. These problems are also solved by adopting a more disciplined understanding of what makes a conditional actually true, although one can also solve Gettier Problems by simply declaring accidental knowledge not to be knowledge—since, after all, we get to define words any way we want, and in popular parlance no one actually regards accidental knowledge as really knowledge. And this brings us back to the same important point about ordinary language.
For example, if I say, on no basis whatever, that a meteor will strike Chicago tomorrow, and lo and behold, one does, hardly anyone would say I knew a meteor would strike Chicago that day. They would say I just guessed it and got lucky. And anyone who would try to claim I really knew it, would try to make it real knowledge by claiming I was psychic or received a divine premonition—so even they don’t accept accidental knowledge as real knowledge. Because that’s simply not what everyday people mean by knowledge. Attending to how real people use words and language is essential to doing philosophy well. A lot of philosophical error and confusion arises from not doing that. But I’ve already fully discussed how I solve Gettier Problems so I won’t belabor that point here. You can catch up on that in The Gettier Problem. One thing to note there is the importance of axiomatically accepting that almost all knowledge statements are statements about a probability, which shows us again the fundamental importance of always doing that. Philosophers have an odd tendency not to.
Justifying Knowledge Claims
The Wikipedia list includes the “Problem of the Criterion” and “Agrippa’s Problem”, which are both just variants of the same problem: how do we justify any belief without infinite regress—since don’t we have to keep justifying whatever justification scheme we come up with? Is justification therefore hopelessly circular? Or materially impossible? Or dependent on unjustifiable axioms? In essence, this is the fundamental problem in epistemology, the theory of knowledge.
I have already covered this elsewhere, in Epistemological End Game, where my solution is simply to end regress at the undeniable facts of raw uninterpreted experience, which cannot be false (and thus require no further justification, and thus are not “arbitrary” nor require any “circular” reasoning to generate), and to build all knowledge from there using deductive logic and, once established via deductive reasoning (as I just noted above), probability theory. In essence “the problem of the criterion” and “Agrippa’s Problem” are both solved once we solve the Problem of Induction, and attend to undeniable facts.
One might still ask whether the axioms of formal logic (like, say, the Law of Non-Contradiction, from which can be derived the other two fundamental laws of logic, the Law of Identity and the Law of Excluded Middle; see index, “contradiction, nature of,” in Sense and Goodness without God and my discussions of logical laws in my Critique of Reppert) or the axioms of other deductive systems we rely upon, like formal mathematics, are just arbitrary, unjustifiable assumptions. But they aren’t. If we re-frame the question as one of probability, ask yourself of each axiom, “on present information, is it as likely to be false as true?” The answer will always be “No.”
That’s why they have been adopted as axioms in those systems. Axioms ultimately are descriptions, and as such always apply to any system they describe, by logical equivalence—the Law of Identity then entails their truth. In short, they cannot be false—when we accept they are being declared true of systems they accurately describe. The trick is when we need to ask whether the system they describe corresponds with any system we are actually in (like “the real world”). But that’s where undeniable experience combined with probabilistic reasoning enters in.
For example, Kurt Gödel “proved” that most axiomatic systems used in mathematics cannot be proved internally consistent; but Dan Willard proved otherwise by simply ditching a single assumption that Gödel’s theorems were hanging on: that multiplication and addition are fundamental. Willard replaced multiplication and addition with division and subtraction, and used the latter to construct multiplication and addition. It can get all the same results we need for probability theory, just by a more convoluted route. The inclusion of an assumption that multiplication and addition are “total functions” (which has to do with transfinite sets) makes math easier to do, but isn’t necessary for justifying most of it. Moreover, though we can’t prove multiplication and addition infinitely extendable, by modeling extension as a process in our mind, we can directly observe that it is very unlikely that anything exists that could ever render it false, so the axiom is not arbitrary, but on present knowledge probably true. Likewise all other fundamental axioms.
So all we need are properly basic facts (facts that cannot be denied, because they can never be false as-stated, like “there is an experience right now of a me seeing the color white”), deductive reasoning to arrive at what is logically entailed thereby, and inductive reasoning justified by that deductive reasoning to arrive at what is probable thereby. No infinite regress is required. No arbitrary axioms are required; nor circular argumentation.
Mind Stuff
Most philosophy of mind has been resolved—enough data became accessible that almost all its problems have moved into science. This began in earnest with the rise of scientific psychology in the mid-19th century, followed by the development and advancement of neuroscience and, subsequently, cognitive science. Only the few problems not accessible to scientifically reliable methods remain for philosophy to ponder.
One example of this shift is Molyneux’s Problem, which asked whether a blind person who had only learned to recognize the shapes of objects by touch could recognize them by sight once able to see. This has now been answered empirically: the answer is no. The blind who gain sight have to re-learn what shapes look like by sight; although they do learn quickly (in a matter of weeks to months). We also know more or less why: the brain evolved in a piecemeal, ad hoc fashion, and hence organizes, processes and stores information more haphazardly than a philosopher would prefer. To integrate tactile and visual information, the brain has to process both at the same time. Once it has done so, however, it can visualize objects merely by touch, using the integrated pathways already constructed. So Wikipedia admits “this may no longer be an unsolved problem in philosophy.”
How this problem was solved, iterated across thousands of examples of problems thus solved in the sciences of the mind, then informs how remaining problems are most likely to be solved. And that’s an argument for philosophy. The big example is the whole mind-body problem itself: how does a physical organ (the brain) produce a subjective mind (and all its experienced content). That’s still the horizon goal of cognitive science, in the way the origin of existence still is for cosmology, and the origin of life on earth is for protobiology. All those fields have gotten pretty far in demarcating the most likely causes without yet being able to fully develop or confirm any. Of course, none of those most likely explanations are “God did it.”
The mind-body problem in particular is probably as solved as can be without final scientific proof (which may yet come in 50-100 years): all evidence so far points to the conclusion that, however it is effected, conscious experience is the sole and direct product of a particular kind and quality of information processing (see The Mind Is a Process Not an Object). Since it is logically necessarily always the case that all achievable experiences will be fundamentally subjective and never a “direct” access to any external reality (as I explain in Eight Questions), we don’t really need any further explanation for why that is observed to be the case. But one can still ask about the particular ways we observe it to be realized.
And that leads to three other questions on the list, which on current data we can only speculate over:
- What are qualia? I think the evidence so far leans most strongly toward a functionalist-representationalist answer to this question (see my PhilPapers Survey): certain physical processes produce different thoughts and feelings, and we interpret these as experiences. As such I am a mind-brain-physicalist (for all the reasons laid out in Sense and Goodness without God, Chapter III.6). Qualia are not “things” with mass or volume or location or anything like that. They simply are what it is to be a certain physical process. As such they are entirely reducible to that physical process, which wholly and without remainder causes their manifestation, no additional causes or substances needed. Because that is the result we get when we apply Ockham’s Razor to all the scientific knowledge we’ve accumulated on the matter.
- Why are there qualia? As in, why do certain kinds of information processing “feel” like that to the processor? I suspect the most likely answer science will discover to be the case is that a brain cannot process information in a certain way and not experience the result in some corresponding way; and how you experience it will abide in a one-to-one correlation with how the information was processed in your brain—so the same exact process will produce for every observer the same exact experience; although in practice, of course, everyone’s brains are processing everything a little bit differently, owing to no one having an identical biological or experiential history. Which does mean I think philosophical zombies are impossible. In fact I think I can conceptually prove that. And that proof affords some evidence for the conclusion that certain physical processes must necessarily generate a belief in qualia, and a belief that we are experiencing qualia is qualia.
- How could we tell an AI is real? This is really asking what would be the difference between a genuine AI and a machine merely puppeting the behaviors of a genuine AI without actually being conscious of anything. And that is answered by appeal to the above two questions: a genuine AI by definition would be experiencing the qualia of consciousness, whereas a puppet would be lying when it said it was. Otherwise, if it was reporting truthfully that it was, that entails it was—because there is no meaningful difference between merely believing you are experiencing qualia and experiencing qualia (and as I just noted, that might indeed be all that qualia are—as articulated in Dennett’s Consciousness Explained). So really the question comes down to: how would we tell if a machine was lying about that.
That last question has been explored with such ideas as the Turing Test, which when suitably robust trades on the very same device we use to know that other minds exist at all: the extreme improbability of faking consciousness to such a high level of interactive reliability. But there is conceptually a more direct test than that: to lie, a machine must be programmed to do so; so all we have to do is look at its programming and see if it was. And that means not merely programming it to choose to lie, but to pull it off, which at a certain level of mimicry becomes vastly more complex to reliably program than simply programming it to be conscious (the difference we trade on with Turing testing other people throughout life). And ultimately, a program designed to instruct mimicry would be identifiable as such, and would be observably distinct from a program that lacked that instrumentality yet produced the behavior anyway. We would be left with the absence of any other cause of the behavior but genuine conscious experience, leaving no other conclusion probable.
And all of that follows from thinking closely about what we mean when we say or propose things—like that a “zombie” or “AI” is “pretending” to experience qualia, or even that that’s what we must suppose, if we think a zombie with an identical brain to ours (and thus identical programming to ours) can behave identically to us and not experience qualia, which when framed that way reveals quite plainly that that is logically impossible. The zombie would have to be lying; but could not be and still have an identical brain to ours operating identically to ours when we tell the truth about the same point of fact. The same must therefore be true of AI. And herein I think lies the solution to the basic mind-brain problem. The particulars (like what circuit is needed to produce an experience of the color “red” or the sound of the sea or of being a singular person; and why) are just the details of implementation, yet to be studied once we have access to the precise structures producing them.
Mental Stuff
Okay. What then do we do with the stuff we “perceive” with the mind? Is the world actually real? What are abstract objects? And so on.
First, of course, whether or not an external reality exists or only our mind, we can still ask why there is something rather than nothing. We could just dismiss that question as moot: whatever the odds, there ended up something; the why of it hardly matters. But if you really need one, the most obvious answer comes, once again, from probability theory (as I explained already in Eight Questions): in the absence of anything guaranteeing that any outcome is more likely than any other, the outcome we should expect to observe will have been selected at random, and on any randomly selected outcome, it is much less likely that there would be nothing than something, because there are infinitely more ways for there to be something than for there to be nothing—which is a singular, undifferentiated state, the most specific and unique outcome that could ever be existentially selected from among all possibilities. So there just isn’t any reason to expect there to be nothing. Which is true even if there once had been nothing. Because nothing is also, by its own definition, too unstable to remain nothing.
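A toy version of that counting argument (the finite numbers are stand-ins; the real argument takes the limit): if “nothing” is one state among all possible states and no outcome is privileged, the indifference probability of nothing shrinks toward zero as the space of possible “somethings” grows.

```python
from fractions import Fraction

def p_nothing(n_somethings):
    """Indifference probability of 'nothing' when it is a single state among
    n_somethings alternative 'something' states (no outcome privileged)."""
    return Fraction(1, n_somethings + 1)

for n in (1, 100, 10**9):
    print(n, float(p_nothing(n)))
# As the number of possible somethings grows without bound, P(nothing) -> 0.
```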
This likewise resolves that other question, of whether a reality exists or it’s all in our mind. It’s Bayesian reasoning all the way down. We construct our confidence that an external world exists by interacting with it and finding that the hypothesis that “it’s all in our head” performs very poorly against the alternative, either failing to reliably predict results (getting relatively low likelihoods), or requiring improbably elaborate hypotheses (producing relatively low priors): see my discussion of that point in my Rebuttal to Michael Rea and Eight Questions and Answering PhilPapers. Probability theory thus solves the problem. Again.
Meanwhile, I’ve already thoroughly covered the supposedly “unsolved problems” of mathematical and abstract objects elsewhere, most formally and efficiently in Sense and Goodness without God (Chapter III.5.4, “Abstract Objects,” where I have subsections on “Numbers, Logic, and Mathematics,” “Colors and Processes,” and “Modal Properties”) and throughout my lengthy Critique of Reppert. The short of it is that on this subject I am an Aristotelian: there are only individual particular things (both objects and places, thus including spacetime; indeed, possibly it’s all spacetime), which can have the same features (as nothing exists to prevent that), which we abstract from, thus explaining what we call “universals”; and most things (individual and abstract) are potentially, not actually existent. Most confusions about so-called “abstract objects” result from neglecting the difference between potential and actual existence, and positing “extra entities” that don’t have to exist to explain anything.
For more on what I think mathematical objects in particular are, see my discussion of the ontology of numbers and All Godless Universes Are Mathematical. You can further consult my discussion of the ontology of mathematics against challengers in Defining Naturalism and Defining Naturalism II. As for universals more generally, see my discussion in respect to the PhilPapers Survey. You’ll find that in every case the answer I gave to counterfactuals also resolves every question about abstraction: we are just building and running virtual models in our heads, and derive all abstractions and universals from labeling features of those models. The question only then remains to ascertain how much our models correspond to reality, by “bumping into it” as it were (interacting with reality in various ways) to see what pushes back and whether it’s what our models predicted. Hence my answers to the problems of epistemology and induction.
Demarcation Theory
Next is how we solve the many problems that arise from demarcating one thing from another.
For example, how we solve the problem of individuation in general is a question of, first, semantics, and then, science, once we’ve decided what matters to us in answering the question. We can demarcate individual objects any way we want—e.g. we can “choose” how to tell one mountain apart from another when they are interconnected, by simply choosing what we mean by “individual” mountain, since this is simply a question of why we need to tell, i.e. what use is it to know whether we are on Mount Pinos or San Emigdio Mountain? What difference does it make? And in particular, what difference does it make that we care about?
Most individuation starts there: choice. Which is always a function of need or purpose. But there are also scientifically factual differences to key on. Where we choose to demarcate individuals is arbitrary, or purely utilitarian; but that those demarcations exist is a physical scientific fact. And it is decided by location in space-time and divisibility in principle. No matter how we draw any boundaries between them, as long as we do draw any such boundaries, Mount Pinos is never in a physically identical location to San Emigdio; and even if they could be (if, like photons, they did not conform to the Pauli exclusion principle), they would still be conceptually separable (as photons passing through each other are), retaining their separable properties and histories—unless they really did merge completely into a single mountain, such that there was nothing left to demarcate them by. All of this follows from observing ordinary language in practical, real-world use. Thus illustrating, again, the importance of attending to that.
The same goes for resolving all the demarcation problems subordinate to that general problem. How do we resolve the Sorites Paradox? The same way. Almost every demarcation suffers from the Sorites problem: when does a hill become a mountain, when does a child become an adult, where is the line between two mountains—even what is the circumference of England. We have to choose where the cut-off is, based on why we need to demarcate at all, and sometimes we need to do that for reasons to do with the actual physical system itself.
For example, if we need to know the circumference of England to calculate how many buoys we need to buy to cordon its coastline, then we don’t need to know the fractal circumference of England (measuring the curvature of every single drop of water and grain of sand), we only need the circumference to a resolution that physically corresponds to normal buoy placement (an error margin measured in meters, rather than micrometers). Indeed, most of life only functions when we reject precision (see Not Exactly: In Praise of Vagueness by Kees van Deemter). So, often the Sorites problem doesn’t arise, or is easily solved by an acceptably arbitrary or utilitarian decision.
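A sketch of that resolution point (the “coastline” below is randomly generated, a stand-in for real geographic data): measuring the same jagged line with coarser and coarser rulers yields shorter and shorter “circumferences,” so you simply pick the ruler that matches the task, such as buoy spacing.

```python
import math
import random

random.seed(0)
# A jagged 'coastline' as a sequence of points (a random stand-in for data).
coast = [(float(x), random.uniform(-0.5, 0.5)) for x in range(1001)]

def length_at_resolution(points, step):
    """Total measured length when we only sample every `step`-th point --
    i.e., with a ruler too coarse to register the finer wiggles."""
    sampled = points[::step]
    if sampled[-1] != points[-1]:
        sampled.append(points[-1])
    return sum(math.dist(a, b) for a, b in zip(sampled, sampled[1:]))

for step in (1, 10, 100):  # finer rulers report longer coastlines
    print(step, round(length_at_resolution(coast, step), 1))
```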
But also, certain physical realities, like phase changes, dictate demarcation. For instance, at what point does a heap of sand risk causing a dangerous landslide. The answer will follow empirically, from physical facts of the system; it has to be discovered, not dictated arbitrarily. But even then the answer will be uncertain within a certain resolution of precision. We will be able to say that within a certain range of heights, the danger will acquire a probability large enough to worry about, but there will be no “precise count of grains of sand” that guarantees that outcome. The probability will phase up gradually as grains are added. So once again probability theory resolves the issue. If you need to know at precisely what grain the effect will manifest, you are simply asking the physically unknowable. But if you need to know at roughly how many grains the probability of the effect will first rise above, say, 1 in 1000, then there will be a discoverable answer. In other words, once we learn to accept ambiguity and uncertainty, and describe the world in probabilities, the Sorites problem is simply a fact of life, often annoying but not crippling.
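A minimal sketch of that kind of answer (the logistic risk curve and all its parameters are invented, standing in for what would really be measured empirically): the landslide probability rises smoothly with grain count, so there is no magic grain, yet the smallest count at which the risk first exceeds 1 in 1000 is a perfectly discoverable number.

```python
import math

def p_landslide(grains, midpoint=1_000_000, steepness=2e-5):
    """Toy model: landslide probability rises along a logistic curve as
    grains are added. All parameters here are illustrative, not empirical."""
    return 1 / (1 + math.exp(-steepness * (grains - midpoint)))

def risk_threshold(risk=1 / 1000):
    """Smallest grain count at which the risk first exceeds `risk`
    (binary search works because the curve rises monotonically)."""
    lo, hi = 0, 10_000_000
    while lo < hi:
        mid = (lo + hi) // 2
        if p_landslide(mid) > risk:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(risk_threshold())  # a definite answer to a vague-sounding question
```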
For example, we think water will boil at a precise temperature (relative to pressure), but in fact the more precisely we look at its temperature, the less precisely we can predict exactly when the phase change from liquid to gas will begin. We can only say that that trigger will to a very high probability occur within a certain range of hyper-precise temperatures, and to probabilities increasing on a bell curve tightly around an ideal value calculable from physical assumptions. So above and below that small ambiguous gap we have no Sorites problem; the problem arises only within that narrow ambiguous span of microtemperatures. And most of the time we don’t care, so we have no problem to solve (“somewhere within the 100th degree of Celsius” is good enough). And when we do care, we will have to admit the answer is largely unavailable (the variables dictating precisely when the phase change will occur are unknown or unknowable to us), or take steps to narrow the band of ambiguity with more, or more precise, information, instrumentation, or technique. But in no way does this mean there is “no difference” between water and steam, or between what temperatures we can reasonably expect one to become the other.
In short, when asking how we solve the Sorites problem, probably the easiest route to an answer is to go look and see how all the sciences have solved it, as they have done billions of times for countless measurements and demarcation criteria. Then you will discover when physical facts dictate a demarcation, and when they don’t but still define the demarcation we have arbitrarily made, and why we then made that demarcation and not some other.
The same goes for resolving the Theseus Paradox, i.e. when does a ship (or a person or a country or a house or…insert any particular thing at all) cease to be what it was and become something else. When does Theseus’s ship, replaced plank by plank, constitute an entirely new ship? Or does it ever? When does a person—when all of us change daily, even hour by hour—change so much that we can say they are a completely different person, and not “pretty much the same person as ever”? Can we say no continuous person ever exists, because a person is always changing? Everyone, after all, is always aging, always learning, always evolving, always accumulating new effects of the physical and social environment upon their body and personality. Is the absolute absence of change required for anything to remain “the same”?
The answers generally come from how we already always do this, e.g. in psychology, property law, common discourse, and so on. I covered this somewhat in my PhilPapers Survey. As I said there: “Like all semantics, we can simply choose what degree of ‘sameness’ matters for any given query,” and “merely need to avoid equivocation fallacies when switching among different thresholds of ‘change’.” Thus, just ask, “Why do you care?” (Probably one of the most important yet most neglected questions in philosophy.) The answer to that question will directly entail an answer to when you should demarcate “same” from “new” in that case. It thus comes again from studying ordinary language use: what are we actually talking about and why does it matter?
Similarly, when asking what the difference is between science & other areas of knowledge, it comes down to what you mean and why you care. And likewise for demarcating between science and pseudoscience, where pseudoscience is really just something that doesn’t merely fall short of scientific standards but also pretends it doesn’t. Thus, pseudoscience is pretending to be science, or to know better than science; whereas, say, history or philosophy are legitimate fields of knowledge that merely have (by the very nature of the problems they tackle) less accuracy and reliability in empirical matters—and don’t pretend otherwise. There is even, therefore, “pseudohistory” (pretending to the available knowledge and rigor of history as a field) and “pseudophilosophy” (pretending to the accepted knowledge and rigor of philosophy as a field). It also goes the other way around: problems that can be addressed with sufficient standards of evidence will depart philosophy or history and become science (e.g. cognitive science from philosophy of mind; or geology, cosmology, or paleontology from history). But “where” that shift occurs (the Sorites Paradox again) varies by science and is dictated largely by arbitrary custom, and mediated by physical realities (e.g. when we need a risk of disaster to be low enough, we need the standards of certainty and thus of evidence to be high enough to determine it).
In short, all of this requires acknowledging that demarcation is an arbitrary human tool—we can demarcate anything any way we want, provided there is anything to physically demarcate in the first place, and provided we are not ignoring physical distinctions (such as what microtemperature range water boils at or at roughly what height a heap can cause a dangerous avalanche), and provided we are not drawing false inferences from our demarcation, such as that a nonscientific knowledge of risk (“I’m sure owning a gun won’t ever harm me”) can outperform a scientific knowledge of risk (“Hard data show an observable frequency of people come to harm from their own guns”); and provided we accept the ambiguity of liminal cases, where it won’t be as certain to us which side of a demarcation line something falls on, because at some resolution a distinction always gets blurry. That’s simply a feature of reality we have to account for in our reasoning, not attempt to evade with false declarations of precision. And likewise for everything else.
Moral Theory
Finally, the general idea of what we really ought to do with our lives and societies, how we can come to know what’s “morally true” at all, I’ve already thoroughly covered elsewhere. My whole book Sense and Goodness without God builds to answering that very question, and my peer reviewed case appears in The End of Christianity. But you can get started on some summaries of my take in The Real Basis of a Moral World as well as Your Own Moral Reasoning and All Your Moral Theories Are the Same.
But that does leave the one other question listed: how do we deal with the quandary of moral luck. Really, this is just a question of “How should we judge people,” rather than “What should we do” (apart from those post-hoc judgment-related actions), since by definition “moral luck” quandaries all consist of identical behaviors in like circumstances, or identical characters in differing circumstances. The most typical being a driver who runs a red light and kills someone and another driver who did exactly the same thing but “luckily” no one was there to kill. The conundrum arises from the fact that their choices are identical, yet our propensity to judge is not (we usually deem the killer worse).
Likewise the variant counterfactual, where a would-be Nazi is forcibly relocated to a different country and thus “by chance” does not participate in any atrocities, even though they would have had they by chance been allowed to remain in Germany. Here they did not make any different moral choice, they just weren’t placed in a position to make the worse one. Yet we deem the one who by chance had an opportunity to do evil, and did, as worse than the one who lacked that opportunity but had an otherwise identical moral character.
In that case I think there is a confusion between ontological reality and epistemic status. If we knew our neighbor would commit atrocities if given the chance, we would not judge them differently from an identical person who did commit atrocities; we’d instead thank our luck that the one didn’t have the chance to become the other, and be otherwise wary of them as being just such a person. A quandary only really arises when we don’t know this about our neighbor, which is actually almost all of the time. Because “suspecting” is not knowing, which is why our laws require someone to actually commit a bad act to be convicted of it—and even then to be proved beyond a reasonable doubt to have done it for a malicious reason. But even in the everyday context of making decisions about who to trust and who not, since we are not gods, often the only access to one’s true character is to put it in circumstances that test it, and lacking that we’d never know the true depths of evil someone could aspire to. So we cannot judge them until we know.
And that also means what they immediately aspire to: what kind of person someone could become given a certain change in their circumstances is not something we would judge them for now, but only then. How someone became evil is irrelevant to whether they did become evil; we judge people for who they are, not for some hypothetical person an imagined set of circumstances could turn them into. The latter can concern us (and motivate us to keep those circumstances from them), but that’s not the same thing. And in the end, who someone really is can only be inferred to varying degrees of certainty; whereas someone who commits an atrocity has by that very act resolved all epistemic doubts (assuming there is no reasonable doubt as to their having done it).
As to the other variant, of identical actions but accidentally different outcomes: as with all thought experiments, we have to actually carry out the experiment as described, not import assumptions not posed, nor ignore or remove parts of the model that were clearly placed in it. For example, because we are required to assume the drivers acted identically, we have to assume both drivers would not have stopped even had there been a pedestrian in the street. So we are not actually being asked to imagine someone who saw a clear road but missed the condition or presence of the light. We are imagining, instead, someone so negligent they were not even aware of the condition of the intersection they crossed.
In the condition stated, I think we would judge both drivers equally in respect to their negligence (and that might even be with sympathy, e.g. if they were unavoidably distracted), and only differently judge the tragedy of the outcome. It would be a mistake to confuse one for the other. But it can seem like we are making that mistake when we act epistemically, e.g. we jail the driver who killed someone, and only ticket the other driver, not really because we deem the one worse, but because we cannot counterfactually prove the other driver would have killed the pedestrian. They might have checked the road and not the light and thus wouldn’t have crossed had a person been there; we can’t know from the evidence available. Whereas we do know in the case that resulted in a death.
This is why we ought all frown equally on drunk driving; even as we lament even more the outcome when its unacceptable risk is finally realized. Likewise, this is why we ought not disparage drivers who err when distracted by excusable circumstances (e.g. someone lost and checking signs) as much as drivers who create the very distraction that causes the error (e.g. someone texting while driving), nor they as much as drivers who wilfully violate (e.g. someone who sees the light but doesn’t even care it’s red). Intent is not magic; but it is also not irrelevant.
In both cases, the “quandary” of moral luck is just a salient example of what we already know are conflicting systems of moral intuition evolved in the human brain: one brain center assesses intent, which judges according to intentions regardless of consequences; the other brain center assesses consequences, which judges according to the outcome regardless of intent. In different people these centers can operate at different intensities (some people care more about the intent; some care more about the consequences). These systems were not intelligently designed and thus do not operate with any kind of rational consistency, even within an individual, much less across individuals. And yet since both intent and consequences obviously matter, we have to decide how to tune these measures into a rational consistency, based on what sort of society we want to live in. Usually that means finding a balance between these judgments, intention vs. consequence, that would optimize the outcomes societally. (Because what other outcome measure would you prioritize over that? One that produced worse outcomes societally?) Though none of that changes how we ought to act (don’t drive negligently; don’t commit atrocities even when given the chance to), it does illustrate the importance of consulting science in developing your philosophical understanding of why human intuitions operate the way they do. Intuition is not inherently reliable; nor are your intuitions likely to resemble everyone else’s. We thus can’t use them as metrics for what’s true. They mostly measure how we happen to feel, not whether we should feel that. We can at best use them as hypotheses in need of test.
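If one wanted to make “tune these measures” concrete, it would look something like the following toy sketch—entirely my own illustration, with an invented cost curve standing in for what, as I say, is really an empirical question for science: pick the weight given to intent (versus consequences) that minimizes total bad outcomes society-wide.

```python
def societal_cost(w, a=1.0, b=1.5):
    """Invented quadratic stand-in for the real (empirical) cost curve:
    the (1-w)^2 term is harm from under-deterring lucky negligence when
    intent is ignored; the w^2 term is harm from losing the deterrent
    salience of actual bad outcomes when consequences are ignored."""
    return a * (1 - w) ** 2 + b * w ** 2

# Grid-search the intent-weight w in [0, 1] for the minimum-cost balance:
best_w = min((w / 100 for w in range(101)), key=societal_cost)
print(f"optimal intent-weight under this toy model: {best_w}")  # 0.4
```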
And I’ll close with this: notice how almost everything I just reasoned out and argued, not just in this section on moral theory but beyond, depended entirely on being able to recognize true from false counterfactuals and apply inductive reasoning in reliable ways. Thus you might now see why those seemingly trivial concerns are actually fundamentally in need of resolution by anyone who intends to do philosophy. If you don’t know why counterfactuals are true or false, how can you be trusted to tell? And thus, how can you do any philosophy at all? Likewise if you don’t know what makes inductive inferences reliable or unreliable.
Conclusion
The general lesson? All of the top twenty-two “unresolved problems” in philosophy are easily solved if you (a) consistently attend to ordinary language rather than get lost in made-up academicalese, (b) frame all knowledge in the context of probability theory, especially Bayesian reasoning, (c) understand all propositions as statements about hypothesized models, and all questions of fact as being about whether those models match reality, (d) never make stuff up that you have no evidence for nor ignore stuff there is evidence for, (e) always include purpose in your analysis (“why” does any given thing matter; what do we need it for), and (f) stick to actual historical and scientific facts when answering these or any other questions (and of course avoid fallacies of reasoning, but I assume that’s a given, even if too much philosophy that fails even at that is passing peer review). If philosophy as a whole stuck to those six rules, these twenty-two problems would all be listed as “past” problems now solved by philosophy (or in some cases even by science), with only some particular details of implementation left to work out analytically or empirically. That only hasn’t happened because philosophy as a field just hasn’t done that yet.
Excellent; I’ve always liked these rundowns of famous “unsolved” problems (and discussions of the frustration over how the field can seem incentivised to keep these controversial problems afloat rather than admit to their sometimes simple solutions).
I just wanted to link to some psychology work on Moral Luck, extending what you said, specifically as it pertains to judgments about punishment. Fiery Cushman and Justin Martin have a cool paper proposing a theory about why retributive action might be particularly sensitive to unintended outcomes, if it’s of interest to you or anyone: https://cushmanlab.fas.harvard.edu/docs/Martin&Cushman_inpress.pdf
Thank you. That’s a nice paper on the subject. It doesn’t resolve the ought question, of course (as the authors admit); it only attempts to explain why people feel this inclination to punish accidents.
In essence, what they are saying is that when identically bad actions are performed, punishing the one that had the actual bad outcome is more salient, causing observers (and the subject) to realize and act on the lesson (and thus correct future behavior); whereas if we punished both equally, even though that should logically produce the same result (both punishments have the same pedagogical aim), in psychological fact it doesn’t, because human brains have a hard time abstracting why (for example) the non-killer’s action is as bad as the killer’s (even though it is, and the same lesson should be learned). And this ties back to what I pointed out about the brain’s two-process judgment system (one for intent, the other for consequences).
We can see this as offering two levels of advice. First, since salience is teachable, and indeed is valuable to teach in and of itself, we should not simply acquiesce to the cognitive error they are describing. In other words, we should be endeavoring to teach people to abstract the harmful consequences risked even when a lucky outcome is substituted for the unlucky one. A great deal of moral failure would be prevented that way. So the first lesson we should draw from their psychological observations is not that we “ought” to punish the agent whose risk-taking achieved the bad outcome risked more than (rather than equally with) the agent who took the same risk and got lucky, but that we “ought” to learn the same lesson as much from the lucky case as from the unlucky one.
But as we cannot expect that to be universally achieved (least of all before any societal effort has even been made to teach people that way), it might make utilitarian sense to pick on the unlucky bad actors more than the lucky ones, simply to efficiently exploit the psychological impact that has in bettering society through deterring bad actions.
This ties in to my point about the need to find the right equilibrium between the two judgment modes (intent vs. consequences) that produces the best overall societal outcome. Which would not be a question for philosophy, but science: what punishment/judgment regime works better in producing the desired result society-wide, the result for which punishment/judgment even exists in the first place.
I don’t claim to know the answer to that. Only that the philosophical part of the question (what our goals should be and how to achieve them) is resolved in my philosophical system: we should find the best equilibrium between intention and consequence judgments with respect to producing the most favorable outcome in society (i.e., reducing the frequency of bad outcomes, meaning the outcomes we as a society judge to be bad, as in undesirable). Where that equilibrium actually lies is then an empirical question, not a philosophical one.
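To picture what that empirical question looks like formally, here is a minimal sketch (the notation is entirely my own and purely illustrative, not a formalism from the literature): let a judgment policy weight intent by w and consequences by 1−w; philosophy fixes the objective, and only measurement can then locate the best weight:

```latex
J_w(\text{act}) = w \cdot S_{\text{intent}}(\text{act}) + (1 - w) \cdot S_{\text{outcome}}(\text{act}),
\qquad
w^{*} = \arg\min_{w \in [0,1]} \; \mathbb{E}\big[\,\text{bad outcomes} \mid \text{society judges by } J_w\,\big]
```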
Can’t find anything in that response to disagree with 🙂
You may very well have solved many of the toughest or most pressing outstanding questions of philosophy, primarily with links to your own work. However, isn’t it more reasonable to assume you are a crank?
That would be fallacious reasoning.
If you can find a flaw in anything I said, locate it. If you can’t, you are violating logic by assuming there are flaws you cannot find.
That would make you a crank.
(And note, as my article explains, these problems already were solved. I’m not alone. The only thing philosophy hasn’t done is acknowledge the fact that these problems were already solved. Because it doesn’t follow the six rules I conclude with.)
I am not sure how you have solved Agrippa’s Trilemma. It looks as if you are picking the beliefs / psychological / brute fact prong. That is part of the problem, not a solution to it, even if you claim that your brute facts cannot be false.
Agrippa’s Problem does not say brute facts can’t solve its trilemma. It says they fail to do so only when they require further justification. The reason undeniables solve the AP is that they require no further justification; they are fully self-justifying (just as Willard Arithmetic is fully self-verifying).
There is therefore no problem. We require no circular argument or arbitrary postulate. No further justification is needed. Regress ends. Because we need not posit anything we can doubt the truth of.
Hi, Dr. Carrier. You said that when we justify knowledge claims, we (or at least, you) start at raw, uninterpreted experience, then from there use deductive reasoning, and then finally use inductive reasoning/hypothesis testing.
Just to make sure I understand, would this be an example of what you’re talking about?
(Undeniable) I am having the raw, uninterpreted experience of seeing a real cat right now.
(Deduction) Either I am actually seeing a cat right now or I’m not.
(Induction) The ‘real cat’ hypothesis holds up better than the ‘fake cat’ hypothesis does, so I’m probably seeing a real cat.
This is a muddle.
The word “real” does not belong in the first premise. That is an inference about an experience (a hypothesis to explain it), not a raw uninterpreted experience itself.
And the second premise seems to confuse what you are seeing with what explains what you are seeing. Perhaps you meant to write “either what I am seeing is caused by a real cat or it is caused by something else” or something like that?
And the closing induction does not follow from the premises as stated. You seem here just to be declaring the conclusion of some other argument you didn’t present. For it to be the case that “real cat” is a more likely explanation of “seeing a cat,” there needs to be an array of evidence listed (everything that makes the real-cat hypothesis more likely than alternative explanations of “seeing a cat”), such that that body of evidence is, collectively, very improbable on any other hypothesis. That is the only way to entail that the posterior probability of “there is a real cat” will be high. In daily life we generally have ample such evidence. It’s how we know we aren’t dropping acid, dreaming, or schizophrenic.
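To make that structure concrete, here is a minimal sketch in Python (all the numbers are hypothetical, chosen only for illustration): the posterior for “real cat” only comes out high because the total body of evidence is taken to be very improbable on any alternative hypothesis.

```python
# Hypothetical numbers, for illustration only: a minimal Bayesian
# comparison of "real cat" vs. "some other cause" as explanations
# of the experience of seeing a cat.

prior_real = 0.5       # assumed prior: no initial bias either way
prior_other = 0.5

# Likelihood of the WHOLE body of evidence (purring, fur texture,
# other observers agreeing, no history of hallucination, etc.) on
# each hypothesis: expected on "real cat," very improbable on any
# other cause.
likelihood_real = 0.99
likelihood_other = 0.001

# Bayes' theorem: posterior = prior * likelihood, normalized over
# both competing hypotheses.
posterior_real = (prior_real * likelihood_real) / (
    prior_real * likelihood_real + prior_other * likelihood_other
)
print(f"P(real cat | evidence) = {posterior_real:.4f}")  # ~0.9990
```

With weaker or more ambiguous evidence the likelihood ratio shrinks, and the posterior drops accordingly; that is exactly why the conclusion must be argued from a listed body of evidence rather than declared.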
You might benefit from reading my Critique of Michael Rea on this point, e.g. the example of microwave ovens.
You should write a book about it.
I did.
Can we really be 100% sure that what we think is our raw interpreted present experience is true? Couldn’t we be, against all odds, false on this? Isn’t there a non-zero but very, very, very small chance that this is false?
I am not sure what you are referring to.
When I speak of “raw [UN-]interpreted present experience” I mean Cartesian facts (that you are experiencing x now), not inferences (that x indicates something objectively happening outside your mind, or even inside your mind apart from the mere experience itself).
It is logically impossible to experience x and not experience x at the same time (a fact and its negation cannot be simultaneously true: the Law of Non-Contradiction), so Cartesian knowledge is always undeniable. So you can be “100% sure” of it. It’s just that that isn’t very helpful. You generally are not concerned with whether you are experiencing having fallen into the sea, but with whether you have actually fallen into a sea (and thus had better start swimming, treading water, or looking for flotation).
But interpreted experience can indeed always be false, even if the probability is so low you needn’t concern yourself with it. Hence, it is possible that when you experience yourself falling into the sea you are hallucinating or dreaming or something; but it is so unlikely that you are, that you had better assume you really did fall into a real sea, and act accordingly.
This references the Cartesian Demon problem. On which see We Are Probably Not in a Simulation.
Sorry for the typo, I did mean uninterpreted.
You used the law of non-contradiction in your reasoning; isn’t that something we could be getting wrong? More generally, since we could be wrong about anything else, why is raw uninterpreted present experience an exception?
That it is impossible to be wrong about the three basic laws of thought (particularly the law of non-contradiction, which really grounds and entails the other two) I detail in Sense and Goodness without God (check “contradiction, nature of” in the index).
That it is impossible to be wrong about immediate experience existing follows from its being self-referential and thus requiring no additional information (so there is no other condition that could allow it to be false).
For something to be “wrong” there has to be a possible state of being in which the claimed fact doesn’t exist. But there is no possible state of being in which “I am experiencing this right now” and “I am not experiencing this right now” are both true. The latter refers to the absence of the former, and so can never be true in its presence.
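For anyone who wants to see that closed loop checked mechanically, the core principle here (no proposition can hold together with its own negation) is machine-verifiable; here is a minimal sketch in Lean 4 (the theorem name is my own):

```lean
-- Minimal Lean 4 sketch (theorem name mine): the Law of Non-Contradiction.
-- For any proposition P, assuming both P and ¬P refutes itself:
-- the proof of ¬P applied to the proof of P yields False.
theorem no_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```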
For more on why small closed loops like that are undeniables, see my article Epistemological End Game.