After reviewing the new 2020 PhilPapers Survey, I can say none of my views have changed, while philosophy as a field has slowly crept more toward my views than away from them (see my previous article, The New 2020 PhilPapers Survey, which also covers my thoughts on some of the strange or interesting things this new survey found, and what’s different about it compared to the last one in 2009). So if you want to know where I stand on the standard thirty questions of the original survey (all repeated in the new one), see How I’d Answer the PhilPapers Survey, which I published while the new survey was being taken (but before its results had been published).

This time they added ten more central questions to those thirty, and then sixty more questions on much more specific topics of moment. I’ll start with the ten new mains. Then I’ll more rapidly run through the sixty sub-q’s. To make things easier here I will only speak of the target faculty percentages and round all percentages down.

The Ten New Core Questions

  • Aim of philosophy (which is most important?): wisdom, understanding, truth/knowledge, happiness, or goodness/justice?

(1) This is a stupid question, as revealed by the fact that effectively zero percent of philosophers were “against” any of these things being the aim of philosophy. So, in other words, everyone effectively answered “all of the above,” as would I (because they are all important, and there is no coherent scale on which any are “more” important; there isn’t any you could “sacrifice” for the others). So that’s pointless. We learn nothing from this. Maybe we can glean something from the way the “for” votes indicate what they think is “most” important…but by what metric? No idea; which renders this useless to us, because we still have no idea what these respondents are saying or thinking. Worse, the “winner” (at 55% of philosophers giving multiple answers and 29% of philosophers giving only one) is “understanding.” Okay. Vague much? This question and its responses contain almost zero information of any use to anyone. I have no idea what the surveyors were thinking here.

  • Eating animals and animal products (is it permissible to eat animals and/or animal products in ordinary circumstances?): vegetarianism (no and yes), omnivorism (yes and yes), or veganism (no and no)?

(2) This is at least better-formulated as a question. And the answers are very interesting indeed: basically zero percent of philosophers are against omnivorism. Sorry, vegetarians. This is a crushing defeat. You almost never see such sweeping and comprehensive agreement among philosophers on any subject of substance like this. To be fair, basically no one said vegetarianism or veganism were wrong; and roughly a quarter of philosophers accept (or merely “lean” toward) vegetarianism (and a fifth or less, veganism), but that’s not the same thing. Because none of them said the alternative to all that is immoral or wrong, even these few philosophers aren’t saying veganism or vegetarianism are a moral imperative, but rather a merely acceptable position, a matter of personal choice. I concur.

  • Experience machine (would you enter?): no or yes?

(3) I just answered this in its own article. In short, if we assume Robert Nozick (the contriver of this thought experiment) meant to describe a machine that doesn’t deceive us about what’s going on, and yet still produces the pleasures we seek (e.g. from human interaction and the like) as his description of the experiment requires, then I’d answer “Yes,” like 13% of other philosophers did, because this is simply asking whether we’d prefer to live in a simverse over the realverse. But if we assume Nozick meant to describe a machine that does deceive us (thus producing genuinely false pleasures), then I’d answer “No,” like 76% of other philosophers did. I suspect such a majority so answered because they indeed assumed Nozick was describing a deception scenario, even though there is no way to tell from the way the question is asked here, and Nozick’s own description is itself vague on the point.

  • Footbridge (pushing man off bridge will save five on track below, what ought one do?): don’t push or push?

(4) This is a variant of the trolley problem generally, the original form of which they already included in the original main questions. Which I think is worth their having done, because one can answer differently to both variants. But what we really want to know is why someone answers differently; merely knowing whether they do or not doesn’t give us as much information. I of course already explained I would answer differently, and why (a duty of care exists in this variant that does not exist in the original, so I answer “Don’t push” here, but “Pull switch” there). When it comes to other philosophers, a fifth are for “push,” half are against, and a sixth or so are in various ways undecided; compared to 63% for “switch” (it was 66%) and 13% or so against (it was 7%), and the rest undecided. So it appears my answers here break the same way most philosophers do.

I am still worried, though, about a possible confounding effect of including this question here, precisely because one should answer it differently, as most of us have done: one of the few ways I noted last time that philosophical opinion changed away from my positions is that fewer philosophers now choose to “throw the switch” in the original trolley problem (as I just showed by comparisons above), yet inclusion of the very different “push the man” variant may have influenced some respondents psychologically to change their answer to the traditional variant as well, to avoid appearing inconsistent—because they haven’t really thought the issues through. The small variance would indeed support this (only a few percent changed their answer, which could represent the few thinkers who are answering on the fly, without really having examined the matter).

  • Gender: biological, psychological, social, or unreal?

(5) I already discussed this in detail last time. In short, it’s “all of the above.” Because this question is very poorly formulated. Insofar as one re-phrases the question to ask instead whether gender is “entirely” inherited (an even more specific category than it merely being “biological”) and invariant (e.g. “chromosome type dictates gender”), or not, my answer is “Not.” And my answer is no mere opinion; it follows the conclusions of well-established science.

Most of what we assign to “gender” is socio-cultural invention with no actual connection to biology at all; while some has at best a weak (and thus highly unpredictable) link to biology (men differ from other men more than from women, and vice versa, i.e. individuals vary on all metrics more than “genders” do, e.g. you can easily find women who are taller than most men, so we can’t really even predict that, and that’s one of a very few metrics that do have a biological link), and most of even that is mutable by choice (e.g. you can have surgery and HRT; and starting HRT at or before puberty is even more efficacious in matching biological outcomes). I found from analyzing the answer rates last time that fewer than 15% of philosophers reject this conclusion, which means most are on board with my position, and transphobic bigotry is low (though still high enough to be concerned about).

  • Meaning of life: subjective, objective, or nonexistent?

(6) Philosophers are evenly divided. Somewhere between a quarter and a third of philosophers answered either objective or subjective; while only a sixth or so answered that it was nonexistent. The rest were undecided in various ways (one or another form of “nonexistent” and “undecided” thus making up the remaining third or so). I can understand the confusion: a major problem in philosophy today is that no consistent definition of “objective” or “subjective” has been established in the field, so each philosopher can be expected to answer according to their own definition of these terms, leaving us uncertain what in fact they are telling us by voting one way or the other.

The most one can take away here is that almost no philosophers believe there is no meaning of life (somewhere around 15% or so), and a majority believe life has either a subjective or an objective meaning. I concur. I do think one can show life has an objectively accessible meaning (see The Objective Value Cascade), insofar as it is provable that everyone always ought to prefer being alive to being dead, absent particular (contingent and not common) reasons otherwise. But this entails referencing the objective existence of subjective experiences, feelings, and preferences (see Objective Moral Facts). So whether one decides the meaning of life is “subjective” or “objective” depends on how one defines each term. If to be objective one must exclude any reference to subjective facts (which means excluding a lot of real, existent, natural and empirically provable facts), then the meaning of life is subjective. But if objective facts include the objectively-factual existence of subjective facts (e.g. individual people really do have feelings about things), then the meaning of life is either objective, or both objective and subjective.

The new survey allowed respondents to answer both, so I would have answered both—for I believe both components comprise any meaning in life. Around 4% of philosophers also so answered (including those who offered “write in” answers like “hybridism,” which even more precisely matches how I might describe my position). Predictably, a lot more atheists did not tick “objective” than theists; but quite a lot of atheists nevertheless did (almost half). Which significantly topples a stereotype.

  • Philosophical methods (which are the most useful/important?): conceptual analysis, conceptual engineering, empirical philosophy, experimental philosophy, formal philosophy, intuition-based philosophy, or linguistic philosophy?

(7) This was a horribly worded and useless question. I already covered their and my responses last time. In short, for me it’s “all of the above,” with an important asterisk over “intuition-based” (which is only usable for adducing hypotheses, not for verifying them).

  • Philosophical progress (is there any?): none, a little, or a lot?

(8) I am gratified to see almost no philosophers answered “none” (3%). Almost all (86% or more) agreed philosophy makes either a little or a lot of progress. But it would be more useful to know in what ways they believe philosophy makes progress—or better yet, what they would list as top examples. Of course, what the difference is supposed to be between “a little” and “a lot” is unknown, so philosophers will each be answering differently based on their own arbitrary lines drawn between them, which limits the utility of this question as phrased. We still don’t really know what these philosophers are saying.

I don’t know which I’d answer because I, too, don’t know what the difference is supposed to be. What would constitute “a lot” of progress? Over what span of time? I’d say philosophy makes much slower progress than the sciences, and maybe even slower than the humanities (reckoning advances made in history and literary theory for example, and techniques of production and evaluation in all the arts, e.g. even dance is far more developed in knowledge, training, and techniques than a century ago). So maybe I’d have arbitrarily ticked “a little,” otherwise lacking any other way to express such an answer because I wasn’t given one (bad survey design). But a lot of folks might think my take on progress in philosophy is that it’s “a lot,” given the many examples I can adduce.

  • Race: biological, social, or unreal?

(9) I covered this last time. In short, like gender, it’s all of the above. And most philosophers pretty much agree (only 11% answered only biological; those would be predominantly race realists, i.e. racists). Race keys on biological characteristics, but attaches to that a bunch of socio-cultural assumptions that aren’t biologically founded. An excellent representation of this point is W.E.B. Du Bois’s comment that (paraphrasing), “Before the Age of Exploration, there were no white people,” capturing the fact that there was no such thing as a white or black “race” until Early Modern slavers invented the idea, specifically to create a socio-biological dominance hierarchy between Europeans and Africans.

Before that you had Germans, Egyptians, French, Nubians, Portuguese, Ethiopians, Italians, Nigerians, etc. None were “white people” or “the white race,” nor black. And none of these categories then were so securely “biological” as “race” has since become, because culture (ethnicity, predominantly language and customs) played a larger part in establishing one’s heritage or identity. Having family roots in a region going back even three generations was often enough to make you “one of us” almost anywhere, provided you spoke like a local, dressed like a local, worshiped like a local, and so on. Color was rarely relevant. You were far more likely to face prejudice for your religion (down even to specific sect), or for your professed or suspected political (or even family) ties, than your physical features.

At any rate, to see my thoughts on how “race” as a category at least contingently differs in popular conception now from gender, see Transracialism Is Either a Fraud or a Delusion in Precisely All the Ways Transgenderism Is Not.

  • Vagueness: epistemic, metaphysical, or semantic?

(10) Why this is even a question is best understood in the context of “vagueness” being a fashionable topic lately (though it has ancient roots, e.g. the Sorites Paradox). Like most popular philosophy, I think almost all the hand-wringing over it is overblown. The attempt to say it is “only” epistemic, semantic, or metaphysical already illustrates everything wrong with contemporary philosophy. Forcing things into predetermined ruts that are supposed to never overlap, rather than acknowledging most things like this are complex and multifaceted, leads to tons of wasted ink (literal or virtual)—ironically in this case, considering it is vagueness itself we are now talking about. (For an example of what I mean though, see Open Letter to Academic Philosophy: All Your Moral Theories Are the Same.)

You can read my answer to all this here. Vagueness can be physical (the location of an electron), epistemic (the circumference of England when measuring to the nearest micron), or semantic (the circumference of England when measuring to the nearest kilometer). Semantic causes of vagueness arise from linguistic choice: e.g. you have to choose what you mean by “circumference of England” (to the nearest meter or the nearest kilometer or something else?) before you can answer the question. Epistemic causes of vagueness arise from the difficulty of determining the answer even after the matter of semantic precision is settled: e.g. measuring the circumference of England “to the nearest micrometer” runs into insurmountable problems that make the epistemic effort impossible or “fuzzy” (the coastline at that scale constantly changes with tides, waves, erosion, and human and instrument error), yet there still “is” a true answer on that definition, even if it changes every microsecond and is epistemically inaccessible.

Metaphysical causes of vagueness, by contrast, which I think should more properly just be called physical causes, are when even if we had an epistemic solution (like some space camera that can instantly measure the circumference of England down to the micrometer in a single snapshot) we still couldn’t settle the question. Right now possibly the only actual case examples of this are in quantum mechanics—and only on the assumption that there is no physical fact of the matter underlying this, i.e. that our measurement problems there aren’t just another issue of limited epistemic access. For instance, Heisenberg’s uncertainty principle (i.e. that you can never precisely know both the location and the momentum of a particle like an electron) has both an epistemic and a physical explanation. Epistemically, the problem is caused by the fact that the only way to know anything about an electron is to bounce something off of it, which changes its location and momentum. But metaphysically, one might say that the electron has no specific location or momentum until you force it into one, e.g. maybe electrons are a fuzz or fluid spread out over a place and only “snap to” a specific place in that fuzz when physically caused to by some interaction or interference. If that’s the case, then the location of electrons is metaphysically vague, and not just epistemically vague (like “the geographic location of a hurricane or democracy”).

So there is no single answer here. I would thus have answered all of the above—putting me, for example, in the 3% of philosophers who also answered all three, or the 10% who answered “accept a combination of views,” which means pretty much the same thing (and it’s not clear how much those groups overlapped). I suspect most philosophers hadn’t really studied this subject and thus had no informed answers to offer; so we get the ones who were honest about that (the 9% “agnostic/undecided,” a rather high rate indicating this is not a well-known or settled subject of analysis) and the ones who arrogantly assumed they could answer from the armchair (like probably the 15% who insisted all vagueness is only metaphysical, which IMO only an ignoramus could have answered here). The 15% who answered “only epistemic” might likewise include philosophers who don’t have a good sense of what the difference is between epistemic and semantic causes of a phenomenon, while the majority (42%), who answered “only semantic” might be assuming that all epistemic vagueness can be resolved with a suitable semantic precision, possibly because a lot of them only know the semantic cases (like the Sorites Paradox) and not the epistemic ones. I suspect only people facile with actual laboratory and field science would be well familiar with the ever-present nature, and causes, of measurement vagueness—and that they can’t be resolved with any cleverer semantics.

Similarly, from reading the literature, I get the definite impression that philosophers who insist all vagueness is ontological actually don’t understand the difference between there being no fact of the matter (a metaphysical problem), and our not having access to it (an epistemic problem), or our not being specific enough to know what is meant (a semantic problem). Like the “Donald is bald” conundrum, where he either is or isn’t depending on where you draw the line to qualify as “bald” (how much hair has to be missing and from where?). That is obviously a semantic problem (if the question is vague, then we just aren’t defining precisely what we mean by “bald,” and can resolve the matter by simply adopting a more precise definition). It can’t usually be an epistemic problem, because there is no such thing as hair invisible to modern instruments. And it certainly can’t be a metaphysical problem, because on any “precisified” definition of “bald” there is always a physical fact of the matter whether Donald is bald, whether we have epistemic access to it or not.

The Sixty New Micro-Questions

I am not confident most philosophers actually understood these questions, or understood the options all the same way, so I trust the survey response stats even less here due to poor question-and-options construction and the tendency in philosophy for no one to have a consistent definition for anything. The results are thus less informative than one might hope, since you can never be sure you know what a respondent thought the question was asking or what their selected answer was saying, much less whether they even studied the subject adequately to have a reliable or informed answer to give. So I won’t bother with that; I’ll just state my answers, insofar as each question is even intelligible enough to answer. First, I’ll cover the questions that warrant more than a brief comment; then I’ll finish with brief comments on all the questions remaining.

The Most Vexing Questions

The question Capital punishment: permissible or impermissible? is badly worded. My answer would be “Permissible, but rarely advisable,” which may be what many philosophers answering “Impermissible” meant. I don’t believe there is anything inherently universally wrong about killing certain criminals (as Hannibal Lecter put it, “Any sane society would either kill me or put me to some use,” and when the latter isn’t a viable option, the former remains); but there is no good epistemic way to do that reliably (the false conviction rate is unacceptably and unproductively high, while even the benefits of executing the genuinely guilty are measurably small), so in practice a community or state shouldn’t employ it. But this wasn’t offered as an available response in the survey.

The question Causation: nonexistent, counterfactual/difference-making, primitive, or process/production? I would answer with “Process/production.” I covered this before, when discussing their original question about Humean causation. I assume this was added because that one was too vague to get at what non-Humeans (or even Humeans!) actually believe. Causes obviously exist (as proved by there being an observed difference between their presence and absence), and they have to be more than merely “difference-making” (as that does not explain how or why they are difference-making), and everything reduces to matter-energy in space-time, so causation cannot be primitive but must be derivative (of those other things). That leaves process/production—certainly if we allow that to include all remaining possibilities, but even more so if we suspect causation is simply a description of a fixed geometry in the fourth dimension (as I do, being a B-theorist about time and an Aristotelian), and that even a purely ungoverned random causation entails a process/production explanation of its outcome.

The question Concepts: empiricism or nativism? I would answer with “Empiricism.” But this is strangely worded. Usually this debate concerns knowledge tout court, not “just” concepts. Limited to only concepts, it is quite scientifically obvious that nativism is false. No one is born already cognitively aware of any concept; we are only born with inclinations that aid us in picking up or developing concepts; e.g. we are born with intuitions and inclinations that make language easy to learn, but we are not born knowing any language, just as we are born with intuitions and inclinations that make walking easy to learn, but we are not born knowing how to walk (unlike many other mammals).

But we also know that some primitive “knowledge” of a sort (if by that we mean only information and not strictly “justified true belief”) is inborn, e.g. we are born knowing what colors look like, and what scents smell like, and all other qualia—that is not something we “learn” in the proper sense, although we don’t become “aware” of them until we experience them, which I suppose is a kind of empiricism. We might also be born knowing things that look like snakes and spiders can be dangerous, or with a sub-architecture of grammar. But we cannot have proper knowledge of any of these things until we empirically verify them in experience. Which can attenuate the inherited information, e.g. we will discover that not all snakes and spiders are dangerous, much less all things that look like them; while any inherited tendencies toward language-learning do not prevent our empirically learning or developing language skills wholly alien to them. And apart from being scant and limited and unreliably “justified” only by a long-term process of natural selection, this inherited “information” is still nowhere near conceptual knowledge. An inborn fear of spiders still would precede any developed concepts like “spider” or “dangerous” or even “fear,” much less what to do about it when such fear is experienced. (See Why Plantinga’s Tiger Is Pseudoscience.)

The question Environmental ethics: non-anthropocentric or anthropocentric? I would answer with Anthropocentric. The basic question being asked here is whether our management of the environment should adhere to human concerns or disregard them, e.g. should we help wolves just because wolves deserve their concerns to be heeded, and not because it benefits humans (even if only aesthetically) to have them thriving in certain ecosystems? I answer as I do because a properly non-anthropocentric approach is logically impossible: even the non-anthropocentrists are acting on their own human concerns; so the question becomes whether their idiosyncratic concerns should be universally adopted, and that’s where their argument gets into the weeds.

“I just like wolves” is not a sound basis for a community policy. And even if it became such (e.g. enough humans voted for it), this would just be another anthropocentric enterprise: humans helping wolves because it pleases humans to do so—which thus ultimately reduces to human aesthetics. The debate is often instead framed as between aesthetic goals (what we just “like” to do, what makes us happy or feel good) and material goals (human profit & welfare), but I see no distinction between those vis-a-vis “non-anthropocentric or anthropocentric.” Both are anthropocentric environmentalism, and every option between them entails “costs,” in resources or consequences. Hence the debate is really always just about budget management. Otherwise, any question like, “Do ants and trees just deserve to exist and do well?” always answers “No.” Such conclusions must instead always be derivative of the interests of the self-aware (see Should Science Be Experimenting on Animals?), because there is no logically possible way to get that conclusion other than by appealing to such interests (even if covertly).

The question Extended mind: no or yes? I find to be utterly useless, because it’s entirely and trivially semantic: pick your definition of “mind” and the answer changes. Which means we have no idea what anyone answering this question actually has reported to us, because we don’t know what definition of “mind” they were thinking of when they selected an option. So we have basically no usable information here. Accordingly I can’t answer this question either (other than “the question is too unclear to answer,” along with 3% of other philosophers) because I have no idea what they are asking—because they haven’t specified what they mean by “mind.”

As described at Wikipedia, for example, I would answer “yes,” because a very broad definition of “mind” is there being articulated, such as to include any physical information-storage medium that doesn’t even directly participate in constructing a conscious state. But that isn’t what most ordinary people (and even most philosophers in practice) mean by “mind,” which is, rather, one’s conscious operations and the machinery directly responsible for it. The information in the books in my library, for example, only participates in the construction of my mental states as mediated by my cognitive apparatus: my eyes, language centers, and the active effort of physically consulting the book, which are always a gateway “in between” the external world and my internal model of it. Hence my mind, as usually conceived, is what is on the other side of my eyes in that causal chain. It’s really that cognitive apparatus that’s sustaining my mind, not the books “outside” of all that, and all the way on the other side of even my eyes.

Moreover, my books are shared by anyone, and don’t port with me (they don’t go everywhere I go), and thus are not unique to me or even continuously a part of me, and thus are not properly the domain of “my” mind. Compare a situation where two human minds literally overlapped in their apparatus, as if by some sci-fi surgery, so that they literally did share experiences and thoughts and feelings—that would more properly be called an extended mind. There is a trivial sense in which “everything is a continuous whole,” but that is of no use when real distinctions need to be drawn (e.g. you can’t kill me or give me amnesia by destroying or stealing my books, nor can I have you convicted of murder or medical malpractice for doing so).

So I find the effort to defend “extended mind theory” rather like trying to argue that the Port of Boston is a part of my sailboat because I dock there and supply it from there. Yes, in a trivial sense “everything is a part of my boat.” But not in any sense that matters to any distinction we are usually attempting to make when we refer to “my boat,” like how many people it can hold or where it is or what it weighs or who I have the legal right to kick off of it. I also find similar conflation of different contexts when extended mind advocates try to equate embodied cognition with their idea of an extended mind. That I grow to feel my car as an extension of my body (complete with developed neural-reaction maps to the ways its tires react to pits and bumps in the road) does not make my car a part of my mind. It’s the other way around: my mind is modeling the car as an extension of my body the same way it is modeling my body; but my body is not my mind. It is just the chassis carrying my mind. That’s why losing half of our body does not “destroy” any component whatever of our mind, much less half of it. There is no point in destroying all these distinctions with overblown theories of “extended minds.” Confusing the core apparatus of a mind with its peripheral tools is simply not productive. So if that is what the survey is asking me to do, then my answer would instead be “No.”

The question Method in political philosophy (which do you prefer?): ideal theory or non-ideal theory? is vague as to what “prefer” means. Prefer to use in place of the other? Or as in more often than the other? Or as the basis for testable models but not for enacted policies? Can’t tell. I’d have to answer “both” or “unclear” or “alternative view.” I agree with John Rawls that ideal theory (imagining the “ideal” society in which everyone complies with its rules) is useful as a heuristic for exploring possibility-space, but that non-ideal theory is what must dictate actual policy. I am a political empiricist: enacted policy must be based on evidence regarding what actually works, and not on ideological frameworks—apart from accepting as the singular goal of politics the maintenance of a civil society. See, for example, Sic Semper Regulationes and That Luck Matters More Than Talent: A Strong Rationale for UBI, as well as Part VII, “Natural Politics,” of Sense and Goodness without God (pp. 367-408) in light of my changes of view since.

The question Other minds (for which groups are some members conscious?) included as options “adult humans, cats, fish, flies, worms, plants, particles, newborn babies, current AI systems, and future AI systems.” I discussed all the weird answers to this one last time. Badly worded as usual, it’s unclear what any respondent was thinking “conscious” meant for the purposes of answering the question, so we can’t really tell what they were answering. That makes the data here largely useless. But as I noted last time, I cannot fathom why anyone would have answered “plants” or “particles” (yet they did). Based on scientific facts regarding the computational systems and processes involved (and known to be required), I don’t think those, or even worms or flies, have anything approaching a meaningful sense of consciousness (by which I mean integrated information experience, measurable as phi); fish and some current AI systems might experience some of the most primitive of integrated qualia (a very low but still significant phi) but nothing even resembling a cat’s experiential life or comprehension, much less a person’s; newborn babies would register a phi somewhere in between cats and adult humans (and likely future AI systems), and a newborn of course rapidly develops a higher phi daily. I go into much of this in my recent debate Should Science Be Experimenting on Animals?

The question about Time travel: metaphysically impossible or metaphysically possible? I’d have to answer “The question is too unclear to answer.” I’ll repeat here what I said last time: the answer depends on what one means by time travel. If we mean by that what most people want time travel to mean, then it’s “metaphysically impossible” (unless you are jumping to alternative universes and not moving around in the same one, as depicted in the film Source Code); but if you mean what it really is—antimatter is normal matter moving backwards in time—then it’s not only “metaphysically possible” but physically happens all the time, just only in a way that would never make for an interesting movie (despite implausibly trying: e.g. in Tenet, their thoughts would also go backwards, and thus “undo” their memories rather than build them, producing no net change in anything; in fact, you couldn’t tell the difference between your going forward or backwards in time: see Sense and Goodness without God, “Time and the Multiverse,” III.3.6, esp. pp. 92-93).

And for their question on the Foundations of mathematics: set-theoretic, formalism, constructivism/intuitionism, logicism, or structuralism? I have no definite opinion to relate, largely because “foundations” is vague. Mathematics can be constructed on many different foundations, because it is just a language, a semantic system, based on a logic, which logic can be anything suited to the purpose. There is thus no such thing as “the” foundation of mathematics, any more than there is such a thing as “the” language to translate all languages into. Hence you can base math on set theory a la Zermelo & Fraenkel (which is not even the only one), or on standard logic a la Russell & Whitehead, or on geometry a la Euclid & Archimedes, or on category theory, and so on. So this question just isn’t intelligible. What are they asking for? If they had instead framed it as regards the ontology of mathematics, I’d have a more definite way to answer: I’m an Aristotelian, which is close to what philosophers sometimes mean by nominalism, which in this context is close to what philosophers sometimes mean by formalism, and even closer to what philosophers sometimes mean by structuralism, which is really just a subset of formalism (and yet they give these as opposed answers here, as per the usual bad design of this survey). I could thus have answered every possible foundation usable here (everything but intuitionism), or as “There is no fact of the matter,” if I assumed they were asking me to declare the foundation rather than all usable foundations.

All the Questions Remaining

That concludes all the new survey questions.

§
