Sam Harris has a contest on. “Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in 1,000 words or less.” The best essay (as judged by Harris opponent and atheist philosopher Russell Blackford) will win $2,000 (and be published on Harris’s personal website). “You must refute the central argument of the book—not peripheral issues.” If any such essay actually changes Sam Harris’s mind, its author will win not just $2,000 but $20,000 (and Harris will publicly recant his view).
Ophelia Benson has been critical of this contest (see A Losing Lottery Ticket, Sam Has to Presume a Great Big Ought, and a guest post from a commenter, Why the Is/Ought Problem Matters). Harris’s own contest page (The Moral Landscape Challenge) has an important FAQ (a must-read for any contenders). I actually am behind Harris’s program (I think his core thesis is correct, and I think Benson is wrong to say it is not), but I am not very impressed with Harris’s ability to defend or articulate it.
I had even greater problems with Michael Shermer’s attempt to defend the same core thesis Harris does, and I have commented before on how he was simply destroyed by his opponent, philosopher Massimo Pigliucci, even though I think Pigliucci is ultimately wrong and Shermer ultimately right (see Shermer vs. Pigliucci on Moral Science). I expect Harris will get similarly pwned. And that’s sad. Because it hurts their cause. They just aren’t the best defenders of this idea. And they should admit that and stop trying to be lone wolves and look for and work with expert collaborators. There are several real, even renowned, philosophers who have been defending the same core thesis for years. Harris did not come up with anything fundamentally new here, and they have far more skill and experience dealing with the rigorous philosophical requirements of this debate.
Below I will explain what is wrong with Harris’s contest so far (and why it is not what Benson is concerned about); why what Benson has been saying is incorrect (and misses the point of Harris’s actual core thesis); and how (again) science can actually take over moral philosophy the same way it has taken over the theory of life (in the science of biology), the universe (in the sciences of physics, astrophysics, and cosmology), man and society (in the sciences of anthropology and sociology), and the mind (in psychology, neurophysiology, and cognitive science).
What Is the Point in Dispute Exactly?
Any contestant of course would need to know what exactly Harris means by a “scientific understanding of morality” that he wants people to try and refute. He gives this answer:
Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
This actually isn’t an adequate description of his thesis. Because this paragraph can mean a number of completely different things. One might reduce his argument to these propositions:
- Morals and values are physically dependent (without remainder) on the nature of any would-be moral agent (such that given the nature of an agent, a certain set of values will necessarily obtain, and those values will then entail a certain set of morals).
- By its own intrinsic nature, the most overriding value any conscious agent will have is for maximizing its own well-being and reducing its own suffering. This includes not just actual present well-being and suffering, but also the risk factors for them (an agent will have an overriding interest in reducing the risk of its suffering as well as its actual suffering; and likewise in increasing the probability of its long-term well-being as well as its present well-being).
- All of the above is constrained (and thus determined) by natural physical laws and objects (the furniture of the universe and how it behaves).
- The nature of an agent, the desires of conscious beings, and the laws of nature are all matters of fact subjectable to empirical scientific inquiry and discovery. (Whether this has been done or not; i.e. this is a claim to what science could do, not to what science has already done.)
- Therefore, there are scientifically objective (and empirically discernible) right and wrong answers in all questions of moral fact and value (i.e. what values people have, and what morals those values entail when placed in conjunction with the facts of the universe).
It might not be immediately obvious how the conclusion (item 5) follows necessarily from those premises (items 1-4), but it does. I think it should be evident to any observer of just this list of propositions, who thinks about it carefully enough. But I formally prove it (by deductive logical syllogism) in a chapter on this topic that was peer reviewed by four professors of philosophy: Richard Carrier, “Moral Facts Naturally Exist (and Science Could Find Them),” in The End of Christianity (ed. John Loftus; Prometheus Books 2011), pp. 333-64, 420-29. That should be required reading for anyone who wants to challenge this conclusion (for even more of my discussion of this thesis, in print and online, see the links provided in my article on Shermer vs. Pigliucci).
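To give just the skeleton of that proof here (a compressed sketch of my own, for orientation only; the full, peer-reviewed formalization is in the chapter just cited, pp. 359-64):

```latex
% Compressed sketch of how premises 1-4 entail conclusion 5.
\begin{align*}
  &\text{P1. } \mathrm{Nature}(a) \Rightarrow \mathrm{Values}(a) \Rightarrow \mathrm{Morals}(a)
    && \text{premise 1: an agent's nature fixes its values, which fix its morals}\\
  &\text{P2. } \mathrm{Nature}(a)\ \text{and all it entails are physical facts}
    && \text{premise 3}\\
  &\text{P3. } \text{physical facts (including what agents desire) are empirically discoverable}
    && \text{premise 4}\\
  &\text{C. } \therefore\ \mathrm{Values}(a)\ \text{and}\ \mathrm{Morals}(a)\ \text{have empirically discoverable right and wrong answers}
    && \text{conclusion 5}
\end{align*}
```

Premise 2 then supplies the content of those values (well-being as the overriding one); it is not needed for the discoverability claim itself.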
So the argument is logically valid (maybe not in any way Harris himself words it, but we should try to clean that up for him, lest we waste our time taking down his own straw man). That means we have to challenge the premises. But exactly what those premises assert is unclear in Harris’s hands (even as I have reworded them).
Confusion arises from Harris’s muddled wording (which is not much improved by my attempt to convert it into something more precise here; I approach the conclusion in a different way in my own work). What exactly does he mean by “values depend on the existence of conscious minds”? Depend in what sense? What does he mean by “well-being and suffering”? Those are crucial questions. Because his conclusion (as he states it) requires that “well-being and suffering” must always be any conscious agent’s most overriding goal, and it is not obvious why or how that can be (given the actual kinds of decisions humans make, in which they regularly sacrifice their own well-being for other goals) or what makes this “moral” as distinct from merely “prudential.” After all, isn’t pursuing solely your own individual well-being usually what we mean by immoral? Harris has done a terrible job of answering these questions.
I’ll show you a better way to look at and answer these questions in a moment. For now, let’s digress on the contest and Benson’s critique of it.
The Contest and the Benson Critique
All this confusion is largely Harris’s fault (I’ve tried, and given up, getting him to be more philosophically rigorous about what he is arguing, and I suspect he himself doesn’t quite know what he means or what the underlying logical structure of it is). Harris is a notoriously bad philosopher. And because he has such contempt for philosophy, he never learned to be any better at it (if you won’t even acknowledge that you can be, you obviously won’t take any self-educating steps to become so). So I have my own reservations about the utility of his contest. But they aren’t the reservations Benson has voiced.
Overall, I actually think this is a great idea, and wish more philosophers (and universities and foundations) would put prize money like this up to drive productive philosophical progress. Because real progress begins with well-judged crowdsourced debating just like this, to find the best case pro or con any x. Because progress is not possible until you have the best case to examine (and accept or refute) for any position. Academic peer review (for books and journals in philosophy) simply does not look for, nor even reward, best cases. It just publishes any rubbish that meets its minimal standards (and those standards are not very high, relative to where they could be).
This doesn’t mean peer reviewed philosophy isn’t better than other philosophy. It generally is, at least in some respect worth the bother. But peer review standards in philosophy are also twisted and bizarre, excluding a lot of what actually is good philosophy simply because it doesn’t match some current fashion or irrelevant requirement, while not being as rigorous as they should be in policing fallacious, illogical, unscientific, or muddled argumentation. (And I am speaking as someone who has published academically peer reviewed papers in philosophy.)
However…
(1) Contrary to Benson, I don’t think doing this is vain. It’s actually what the sciences and humanities often already do, a lot. Philosophy is actually behind the times in using approaches like this. And if I had the money, I’d do this every month, or quarter, on some question or other. In fact, I and a few others have been mulling the idea of creating an institute specifically to do that sort of thing. Of course, Benson was right to question just what the victory conditions were, since Harris (in his usual muddled and philosophically naive way) didn’t explain that. At all. Lots of people criticized him for it (not just Benson) and in reaction he eventually retooled the thing several times to be closer to a sensible way of doing it. Although it’s still, IMO, “batting the league minimum” as one might say. There are better ways to do this. But this at least is a start.
(2) Also contrary to Benson, I don’t think it’s relevant that “there are many people who [wrongly] persist in thinking [Harris’s] book [The Moral Landscape] was a bold new theory of morality, that got everything right.” That is indeed silly. His book did not get everything right. Its core thesis is correct. But his book is not the best defense of it. And though his book contained some new ways of talking about it (most innovative being his analogy of there being several different “peaks of well-being” available on an overall “moral landscape,” after which his book is titled), he did not develop any “bold new theory of morality.”
Not only had I published an extensive defense of the same thesis years before he did (in Sense and Goodness without God V, pp. 291-348), which I don’t expect him to have known about, but many philosophers were there before him, writing much more sophisticated defenses of it, and taking challenges to it far more seriously, than he did. And I do expect him to have known about them. At least some of them (his book does barely mention a few, but relies on their work almost not at all). I cite many of these philosophers in my own chapter on Moral Facts. One of them is among the most important moral philosophers in history (yet routinely, and unjustly, even outrageously, ignored in introductions to philosophy), the late, great Philippa Foot, author of the book Natural Goodness and one of the most famous papers in moral philosophy, “Morality as a System of Hypothetical Imperatives,” which you can read in one of the finest collections of moral realist philosophers who defend what is essentially Harris’s thesis–only, again, years before he did: Moral Discourse and Practice.
But that shouldn’t matter to the present contest’s value. Who thought of it first is moot. That Harris did a bad job of defending it up to now is moot.
But Benson is right that “Patricia Churchland, [who also] has a PhD in neuroscience” should try her hand at this challenge because “her book Braintrust is what Harris should have written but didn’t.” I do think Churchland has the chops to make a good go at winning the prize (whether she convinces Harris or not). Assuming she disagrees with him (she might not). She is far more competent at philosophy than Harris, and knows the subject superbly well, and treats it a lot better in her book–indeed, I would recommend it far over Harris’s, except that she does not go very far there in defending the thesis Harris is defending.
Churchland shies away, uncertain whether her conclusions warrant going all the way to an actual full-blown science of normative morality. But she is not shy about science being able to empirically discover true normative propositions generally: she explicitly defends that (refuting the “is/ought dichotomy” objection to it); and she’s undeniably right, since science has already been doing this, with superlative success, for centuries–in fact millennia–as I lay out more explicitly in my own chapter on Moral Facts.
One Question to Answer
Contrary to what is implied by Benson’s guest commenter, Marcus Ranum, the question “how do you know your idea of what the common good is is fact and not merely your opinion?” is not any more a valid objection to Harris’s thesis than it is against any thesis in science whatever. And that’s true even for his thesis as actually argued in his book–since he addresses this question there. So as Harris himself says, this cannot be an objection to the argument of his book, because this completely fails to address the answer his book gives to it. You can’t just keep repeating the same arguments he has responded to, without addressing his responses.
Now, I think one might be able to refute (or demonstrate the insufficiency of) Harris’s answer to that question in his book. Like I said, I don’t think his book is the best defense of the thesis one could make (it’s hard enough even discerning the logic of his arguments, much less their premises and conclusions). That makes it essentially a straw man (of Harris’s own making). But that does not mean there is no sound and sufficient answer to that question. I charge that if you really want to prove there isn’t one, you will have to respond to my answer to it in my own chapter on the subject, which, unlike Harris’s, went through several serious critiques by expert philosophers and was developed from extensive (and not contemptuous) research in the relevant philosophical literature, and with a concern for carefully laying out its formal logic (the syllogisms you have to prove invalid or unsound are on pp. 359-64; they formalize what is explained in the text).
But to make the overall point clear, notice that the same question can be asked of any science. For example, in conversation with a schizophrenic, “how do you know your idea of what the real world is is fact and not merely your opinion?” Oh. Right. Now you see the problem? How, after all, can a psychologist know he is not the schizophrenic, and thus deluded as to the nature and contents of reality? How would the schizophrenic come to realize he is the one deluded as to the nature and contents of reality? If the schizophrenic can never do so, or can but never does, does that make the psychologist’s understanding of reality a mere opinion and not a matter of fact?
That may sound silly, but this is a serious question. One cannot say that merely because there will be people who never agree that reality is x, there is therefore no fact of the matter as to what x is, much less that science therefore has no authority to tell us what x is, and should not even be funded or directed to try. Analogously, that someone might never be convinced that x is moral is not at all a valid argument that x is not in fact moral–objectively, empirically, and scientifically. Just as reality would remain x regardless of who would admit it, so could objective moral facts remain factually x regardless of who would admit it.
You therefore have to apply to any scientific quest for moral facts the same solution that governs the doctor-schizophrenic relationship in the scientific quest to know the facts about the universe. Just as there are creationists who never accept evolution is fact, so there will be whole groups of people who will never accept what science discovers to be the moral facts of humankind. Especially the dogmatically religious, but not only them. Look at the rabid examples of immovable irrationality among many atheists, from anti-vaxxers and climate-change deniers to Randroids and MRAs; that they will never agree that reality is x is simply no argument against the fact that reality is x. Atheists will stubbornly reject science, when they have been as sold on a secular belief or ideology as any theist has been sold on a religion. But that they reject science does not make the conclusions of science invalid, or nonfactual, or “just the opinions” of scientists. Anyone familiar with Kuhn should already have worked this out (and you don’t have to fully agree with anything Kuhn argued to get here from there).
In short, if we can answer the “how do you know [x] is fact and not merely your opinion?” question for any x in any other science, we can do it for any x in moral science. And in precisely the same ways.
Breaking It Down into Easier-to-Follow Units
Something Harris is really terrible at, I know. But if you want to challenge Harris, this is one way to simplify the problem:
Which of these premises do you reject?
1. Moral truth must be based on the truth.
If a conclusion is based, at all, on false propositions, that conclusion cannot be claimed to be true (it may or may not be true; but if you are deriving it from false propositions, you cannot claim it is true). This holds in morality as much as in any other domain of knowledge. So “how do you know [x] is fact and not merely your opinion?” is answered when x follows with logical necessity (hence without fallacy) from only true propositions. If any of the propositions you are deriving x from are false, x is not a true moral fact (or not known to be). And you cannot legitimately claim otherwise (lest you become a pseudoscientific crank).
2. The moral is that which you ought to do above all else.
This is the most reductive possible definition of moral fact. It is a tautology (as all definitions are), but is valuable and meaningful precisely because of that. If you mean by “moral” something other than this, then you are wasting everyone’s time talking about nothing of any importance. Because if you mean something else by “moral,” I will have this other thing, this thing which you really ought to do above all else, which means above your thing, too, whatever it is. So I will have something even more imperative than yours, and if mine is factually true (it really is that which you ought to do above all else), yours cannot be (it cannot be that which we ought to do…because I can prove we ought to do something else instead).
(If at this point you protest that I can’t ever prove anyone really ought to do anything, much less above all else, or that this is not an empirical matter capable of scientific demonstration even in principle, then you need to brush up on the basics: both I and Churchland have in that event proved you wrong, and if you knew anything about the role of empirically proved imperative facts in agriculture, engineering, and medicine, we wouldn’t have to school you on this point. But alas, if we do, go read what we’ve said on it. You can question whether moral imperatives are like other scientific imperatives; but you can’t question that there are imperatives we can scientifically discover and empirically demonstrate. So if any of those happen to be more imperative than all other factually ascertainable imperatives, those imperatives would be by definition moral imperatives, because they would as a matter of actual fact supersede all other factually true imperatives–as well as, of course, all imperatives that aren’t factually true, for the simple and obvious reason that they aren’t factually true.)
3. All imperatives (all ‘ought’ statements) are hypothetical imperatives.
A hypothetical imperative is an imperative proposition that reduces to an “if, then” conditional, such that “if you desire x, then you ought to do y” can be factually and objectively true (and empirically discoverable and demonstrable as such). All that is required is that we prove you do desire x and that y is the only way to obtain x (all variants, such as better and worse ways to obtain x, can be inserted into this structure by broadening and ramifying the set of options designated by x and y). Both are empirical questions of fact which science is more capable of determining than any other method of acquiring knowledge there is.
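Put formally (a minimal schema in my own notation here, not a quotation of Kant or Foot):

```latex
% Hypothetical imperative schema: the "ought" reduces to a conditional whose
% antecedent conjuncts are both empirically checkable matters of fact.
% BestMeans generalizes "the only way to obtain x" to better and worse ways.
\[
  \mathrm{Desires}(S, x) \;\wedge\; \mathrm{BestMeans}(y, x) \;\Rightarrow\; \mathrm{Ought}(S, y)
\]
```

Whether S desires x, and whether y is in fact the (best) means to x, are the two empirical questions just described.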
This concept was first articulated by Kant, who attempted to argue that some imperatives were not of this type but were “categorical” imperatives, but in fact his categorical imperatives all reduce to hypothetical imperatives, so his attempt to prove there was a second kind of imperative failed. Not everyone is aware of this. But I have a demonstration of it (with references to the supporting philosophical literature) in my chapter on Moral Facts (pp. 340-42). Basically, categorical imperatives must either be hypothetical imperatives, or incapable of being true in any meaningful sense.
This is why Philippa Foot developed the fourth way in moral philosophy, showing that morality is actually just a system of hypothetical imperatives (contrary to what many claim, her last work tested the boundaries of this proposition but didn’t explicitly abandon it). When you are taught any intro to moral philosophy (such as in college) you will be told that morality must be one of three possible systems (all invented by men): a teleological system (first fully articulated in the utilitarianism of John Stuart Mill), a deontological system (first fully articulated in the categorical imperative of Immanuel Kant), or virtue ethics (first fully articulated by Aristotle). Foot added a fourth and wholly separate category, which actually is far more plausibly correct.
In my opinion, this is one of the most important developments in moral philosophy in the history of the subject. Hence I find it appalling hers is not included as a standard option competing with the other three. Particularly since it actually subsumes them all into a single coherent system: even Kant’s deontological ethics reduces to a special form of teleological ethics which reduces in turn to a special form of virtue ethics, which reduces in turn to a system of hypothetical imperatives. Thus, Mill, Kant, and Aristotle were all right, they just were missing pieces of the whole picture, and thus failed to see how the defects of their separate systems disappear when their systems are united rather than treated as incommensurable and in competition with each other. The means to unite them is the approach of Philippa Foot.
So you can see why I find the snubbing of a woman philosopher here appalling. There can be no valid reason to exclude her from the status and importance of Kant, Mill, and Aristotle. It’s hard to avoid the conclusion that she was (and pretty much still is) ignored because she was a woman.
Be that as it may, once we restore her to her proper place in the debate, the old ruts of “teleological, deontological, virtue ethics” dissolve into a single unified way to understand moral propositions. The outcome is that all moral propositions (even in Aristotle, Kant, and Mill, and all their successors and emulators) are revealed to be really just hypothetical imperatives. I have never seen any credible challenge to this outcome. (Most philosophers are unaware the outcome was even achieved…because most philosophers ignore Philippa Foot, or make no use of her work–at all, much less to this valuable end.)
4. All human decisions are made in the hopes of being as satisfied with one’s life as one can be in the circumstances they find themselves in.
People can be wrong about whether a decision they made will lead them to greater satisfaction with their life; but if they knew that when they made the decision, they would have decided differently, and chosen instead what would lead them to greater satisfaction with their life. This is observably always the case, even when persons are irrational (as when they willfully ignore more distant negative consequences to pursue present satisfactions) or mal-informed (as when they pursue, for example, money and wealth under the false belief that it will bring them more satisfaction, never realizing that perhaps it doesn’t, or did so far less in relation to the lost satisfactions of other courses that could have been taken instead).
In both cases (irrational and mal-informed decisions) a decision was made in violation of our first premise (“Moral truth must be based on the truth”) generalized to all domains (“Prudential truth must be based on the truth”). In fact, that our decisions are being made irrationally, or that they need to be, is just a special case of making mal-informed decisions (since if we knew we were being irrational, we would either stop being irrational or continue, but in ignorance of the consequences: i.e. if we knew we were being irrational and didn’t stop, then we are either ignoring the negative consequences of doing so, and thus acting on a false belief that there won’t be negative consequences, or we are knowingly preferring those consequences, in which case we have gone full circle and are back in fact to choosing what most satisfies us).
Thus what we must say is that if a person makes rational decisions, then they would make satisfaction-maximizing decisions. So when we assert that “all human decisions are made in the hopes of being as satisfied with one’s life as one can be in the circumstances they find themselves in,” and add the aim to discover true normative propositions about that, then we are assuming human decisions are being made rationally. Because irrational human decisions can never be normative (there is a prima facie exception to this, e.g. gameplay, but it is only an exception when subsumed under an umbrella of rationality, at which level it is not an exception: e.g. we make rational decisions about when to behave silly and when not to, so secunda facie this is not an exception but in fact only further confirms the rule).
The same goes for human decisions made ignorantly, etc. Any decision made based on false or incorrect information cannot be normative; and when we reintroduce correct information, decisions will only fulfill the rule, and greater satisfaction will be aimed for as proposed. (There is the question at this point of impossible knowledge or knowledge one cannot reasonably have obtained, but when we accept that all imperatives, even moral imperatives, are situational, this problem dissolves–I explain what it dissolves into in my chapter on Moral Facts).
Satisfaction pursuit is a higher level of generalization than Harris and others (including once myself) have used, which is usually “happiness” or “well-being” or “avoidance of suffering” and so on. But in fact we only pursue those things because it satisfies us more to do so–because when it doesn’t, we pursue something else. For example, if it will satisfy us more to die than to do something despicable, we are not violating the expectation that our “decisions will be made in the hopes of being as satisfied with our life as we can be in the circumstances we find ourselves in,” but in fact conforming to it. It’s just that the circumstances we find ourselves in, in such a case, leave us few and very poor options, and we settle on the most satisfying. Which is not the most satisfying option we can imagine. It’s just the most satisfying option available.
Likewise, when we do good for others, when we make sacrifices for others or for ideals or things other than our direct material needs, when we voluntarily inconvenience ourselves, we do so because it satisfies us more to do so than it would if we didn’t. The reasons for that are complex. But the fact of it seems amply confirmed, and I am not aware of any disconfirmation of it. (For more on satisfaction as the supreme goal in every individual’s life and its role in entailing moral facts, see my debate with fellow atheist Mike McKay–for links and post-debate commentary visit Goal Theory Update.)
This can be taken as an empirical hypothesis about the species Homo sapiens (as Patricia Churchland does). It can also be taken as a structurally inevitable property of almost any survival-capable conscious agents generally (harder to prove, but possibly true nevertheless). For the purposes of challenging Harris, we should give him the best case possible (so we don’t attack a straw man), and that means assuming only that his thesis requires the less ambitious of these options: that this proposition about satisfaction-pursuit is a scientific hypothesis about the species Homo sapiens.
We must therefore ignore all objections that appeal to aliens or robots or other forms of intelligence. Perhaps there are different moralities for them. That would not change what the moral facts are for us (human beings). This distinction will get more problematic when we acquire the power to radically alter the nature of ourselves and the world. But we can set aside that looming problem for now, and just deal with the easier problem in the present case, which is what is morally true for us, as we are right now. (Although we can’t let that other question sit idle for too long. We are not far from its becoming a crucial question that we as a society will have to seriously answer. Harris has said some things on that matter, not very sagaciously IMO, but we are to address his core thesis, not “peripheral issues.”)
5. What will maximize the satisfaction of any human being in any particular set of circumstances is an empirical fact that science can discover.
Here circumstances include the attributes of the person (e.g. preferences and personality and health) as well as the attributes of the conditions they are in (e.g. social status, property, geopolitical location, a mugger attacking them, etc.).
And it should be obvious that combining (1) with (2) produces the empirical question: what would it take for you to be as satisfied with your life as any decision you make is able to achieve, while at the same time all your beliefs (about yourself and the world and the consequences of every action available to you) are true and complete (such that knowing nothing else would change your answer)? And the answer to this question is empirically discoverable by science–for any individual, given sufficient access to the relevant information, all of which is a system of physical facts, about you and the world.
Actually achieving that state may be impossible (you will never have nothing but true and complete beliefs). But we will recognize that if there is any false belief or incomplete knowledge in our minds that is consequentially relevant to how we decide, correcting it will change our decision. And there can be no rational, factually true reasoning that would persuade us to decide differently. There may be irrational or factually false reasoning that would do so. But the output of such reasoning can never be normative (rule 1). And we are asking about what is normative, what we ought to do–in fact, what we ought to do above all else (rule 2).
6. There are many fundamentals of our biology, neurology, psychology, and environment that are the same for all human beings.
While individuals will differ in what makes them most satisfied, all will share some basic needs in common in that regard due to shared biology and environment.
For example, everyone needs to eat. What specifically they as individuals would prefer to eat will differ, but not the general fact of needing to eat. Likewise breathe, etc. And beyond that, emotional and intellectual needs: it will ultimately be more satisfying (and/or more satisfaction-generating) to know how to reason well and be informed (i.e. educated); it will be more satisfying (and/or more satisfaction-generating) to have social company that will help rather than harm you; more satisfying (and/or more satisfaction-generating) to live in a functional society rather than a dysfunctional one (e.g. more economically and politically stable and efficient, less crime and corruption and excessive constraint); more satisfying (and/or more satisfaction-generating) to have some friends and love in your life rather than none; etc.
Even while individuals again differ (e.g. how much company we prefer to seek vs. being alone), the commonality remains despite variations in degree (e.g. the need for some company and to combat loneliness is biologically, neurologically, and psychologically universal; as is the need to work effectively with the social system we find ourselves in and to have a social system that works effectively at all; and so on).
So when we revisit the fact that all ought statements are hypothetical imperatives (rule 3), we will discover that, combining (2) with (3), a hypothetical imperative in which the condition (the x in “if we want x“) supersedes all other conditions (there is in fact nothing we will ever really want more than x) is by definition a moral imperative. All human beings want life satisfaction more than anything else (rule 4). And how to achieve that is an empirical question subject to scientific inquiry (rule 5). And some aspects of how to maximize life satisfaction are true for all human beings (rule 6).
Therefore there is something all people want more than anything else, which entails some behaviors all people ought engage in, if they are rational and do not abide by any false beliefs. And those behaviors are moral facts. Which science can empirically ascertain as such. And no competing idea of what’s moral can override them–because any such competing idea would either be them (and thus not in competition with them) or would not be based on true or rational beliefs (rule 1) or would not be overriding and thus would not be moral, by definition (rule 2).
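Chaining the six premises together (again only a compressed sketch of my own; the rigorous syllogisms are in my chapter):

```latex
% How rules 1-6 chain to the conclusion (compressed sketch).
\begin{align*}
  &\forall h\ \mathrm{MostDesires}(h, \mathrm{satisfaction}) && \text{rule 4}\\
  &\mathrm{MostDesires}(h, x) \wedge \mathrm{Means}(y, x) \Rightarrow \mathrm{OughtAboveAll}(h, y) && \text{rules 1 and 3}\\
  &\mathrm{OughtAboveAll}(h, y) \Leftrightarrow \mathrm{MorallyOught}(h, y) && \text{rule 2 (by definition)}\\
  &\mathrm{Means}(y, \mathrm{satisfaction})\ \text{is empirical, and partly the same for all}\ h && \text{rules 5 and 6}\\
  &\therefore\ \text{some}\ \mathrm{MorallyOught}(h, y)\ \text{holds for all}\ h,\ \text{discoverable by science}
\end{align*}
```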
Conclusion
Therefore, Harris’s core thesis is correct. Indeed, it’s undeniable.
Many Devils remain in the details, but that’s the case for all scientific questions, and therefore cannot be an objection to this core result. Look at how messy and open many questions still are regarding the origin of life and the universe, the unification of relativity and quantum mechanics, even matters concerning the mind and brain or just ordinary human biology and biochemistry. We do not “thereby” conclude those questions are outside the purview of science. And as for them, so for difficult or as-yet-unanswered questions in moral science.
And again, likewise, whether people refuse to accept the results of moral science (like creationists do evolution science) also has no bearing on the truth of Harris’s core thesis.
Hence I do not believe anyone can make a valid argument against it.
Whether Harris would know enough to make any of these points is another question. But that’s his own lookout.
Hi Richard,
Interesting article, but I’m not yet convinced that Harris’s thesis is correct. I agree with much of what he and you write, but I still don’t buy the claim that this model represents objective moral truth.
I agree with all of your premises, apart from maybe your explanation of number 6 in the latter enumeration.
Yes, we all share fundamentals but many of these are not of much moral concern (e.g. we are all endothermic mammals). For most properties of moral concern, I can imagine there being exceptions (e.g. individuals with no need whatsoever for the company of others), so I don’t see why “the commonality remains despite variations in degree”.
I don’t get how you combine your latter six premises to reach the conclusion that there are objective moral facts in the sense that Harris implies. I worry that the concept of normativity might be confusing here, as it can be applied in a couple of slightly different ways. We probably should take it to mean that there is an ideal, correct way of doing things, but it is often used to mean that there is a typical way of doing things.
I do agree that your original points 1-4 lead to the conclusion 5, when articulated as there being a fact of the matter as to “what values people have, and what morals those values entail when placed in conjunction with the facts of the universe”. This seems to me to be the idea that there are typical-normative moral facts.
But I believe Sam Harris is implying that there are ideal-normative moral facts, and I don’t see how these can be derived from the typical-normative facts you seem to be describing.
The problem with your account is that it seems to me that your definition of morality might be used to justify immoral institutions such as slavery on the basis of typical-normativity.
Consider slavery in a slave-owning society where most people believe slavery to be acceptable (let’s not consider slaves to be members of this society). If we assume that what would give the slave owners the greatest life satisfaction is to be wealthy, comfortable and idle (and assuming that the slaves have no realistic prospect of rebellion), the fact of the matter may be that the most rational thing for the slave-owners to do would be to continue to exploit slaves. By your reduction of morality to the ultimate ought, and with no concept of an independent, transcendent morality, these slave owners would be acting morally.
But I don’t think any of us would agree with that. What you have described to me seems to be only rationality, not morality (two ideals I see as orthogonal).
We could apply my criticism to more contemporary issues also. I can’t see, for instance, how you might approach the question of whether it is right to exploit animals, eat their meat and so on. As you say, people need to eat, and many people get great satisfaction from consuming animal products. Even so, philosophers such as Peter Singer maintain that this is immoral.
For these reasons, Harris’s approach is more plausible to me. He starts with the axiom that what is moral is to promote the well-being of conscious creatures, and that science can help us to establish what is moral by answering questions about how best to achieve this promotion. This framework allows one to come to answers about the issues I raised without getting bogged down in the particular norms of the society one is considering.
I largely agree with Harris except for when he asserts that he has found objective morality, as I can imagine others disagreeing with his axiom. For some, the most moral thing to do might be to obey their idea of God’s will, while for others it might be to support their nation even to the detriment of others. I see no reason to suppose there is a fact of the matter for which view is correct. There is only preference (and I prefer Harris’s).
Therefore it is immoral to lock someone outside in the snow without warm clothes.
Therefore it is moral to give someone locked outside in the snow warm clothes.
Etc.
Etc.
Yes, being an endothermic mammal is of moral concern.
Why do you call those exceptions rather than simply the moral rules? (“Anyone who has no need for the company of others ought not be required to have the company of others” etc.)
Note I said that morality is situational.
Read my chapter for more on exactly that point.
Moreover, you are only making my point for me: whether there are people like that, and who they are, is a question of scientific fact.
You are thus not disagreeing with Harris’s thesis here. You are disagreeing with which hypotheses science will confirm.
Your argument here is like saying “evolution might be punctuated rather than gradual; therefore evolution is not an objective fact.”
As I say in my article: the formal syllogisms are in my chapter. If you want to know how I get the conclusion, read that. As I also said to do in my article (“the syllogisms you have to prove invalid or unsound are on pp. 359-64”).
Covered in my chapter (at least ten pages are on it).
Since I never said that (nor did Harris), that’s a moot point. We never mean by normative “typical.”
Note that my chapter explicitly distinguishes two possible results science could find in this regard. That is a question for science. Not a question for Harris’s thesis.
If there are norms that are shared by all people, those would de facto be typical, but they would not be norms because they were typical; they would be typical because they were norms that everyone shares.
If there are no norms that are shared by all people, there can still be objective moral facts that differ from one group or type of person to another (because objective relativism is a subset of moral realism: I mentioned even in this article the aliens example, but it’s logically possible it could be the case among populations of Homo sapiens, and I spend several pages on this possibility in my chapter).
But whether that is the case is an empirical question of scientific fact. It is precisely one of the things that Harris’s thesis would ask us to investigate and determine. It therefore cannot be an objection to his thesis.
If you still don’t see it, read my chapter.
Funny you should mention that. That is precisely the argument (and example) I refute in my chapter. Several pages on it. Check that out.
How do you know the “independent, transcendent morality” you just imagined doesn’t also say slave owning is moral?
See the problem? You are assuming you already magically know what is moral, and then using that to argue against what is (in the hypothesized case) factually determined to be moral. That is illogical. That is exactly like creationists saying they know in their hearts that evolution is false, therefore no evidence can ever prove evolution is true, even in principle. I have several paragraphs on what’s wrong with this in the very article you are commenting on.
I also have a whole section on this in my chapter. Its title: “The Moral Worry (or ‘Caveman Say Science Scary’)”.
This is actually a good case for showing why “slavery is moral” is probably scientifically disprovable (contrary to your worry; I explain in my chapter; I also tackled this a lot in my debate with McKay, which I also linked to in my article here). But logically, if slavery is moral, then slavery is moral. You cannot say it is immoral merely because you don’t like it. That would be saying that morality reduces to what you personally like. You will have what you personally like, I will have objective empirical facts. Which is more authoritative? The same answer you give to the creationist on that question, you must accept for yourself in moral science.
You cannot start with an assumption of what is moral and then go looking for what is factually moral and reject it because it didn’t agree with your preconceived notions. Imagine if all science were done that way. We’d still be living in huts.
You may find that moral truth is not what you expected. Many people think things are objectively, transcendently moral that you would say are immoral; since you believe they can be wrong, you have to accept that you are just as capable of being wrong.
Nevertheless, the moment you try to come up with objections to the notion that slavery is moral that don’t depend on merely “I don’t like it” you will be making claims to fact that are questions of empirical science. Right down the line. Which ought to clue you in: slavery is not moral because a system of objective, empirical facts entails it is not. And that is Harris’s thesis.
And Mormons maintain that drinking coffee is immoral.
People can say anything, when they have no objective foundation for the things they assert. Singer has never grounded his ethics in any objective facts. He starts with ad hoc assumptions that he never demonstrates. His entire moral system is therefore a sham.
You should be far more offended by that approach to morality than Harris’s. Because with Singer’s approach, you can insert any other arbitrary premises you want and come up with any moral system you want. Like National Socialism. And Singer won’t have any objective basis for saying he is right and they are wrong.
That’s a problem.
Harris’s is aiming to solve it.
And his solution is correct (even if he sucks at presenting it).
The problem with that is that we then have to present a scientific, empirical proof (we don’t have to now; I mean, eventually) that what satisfies people most is to “promote the well-being of [all] conscious creatures.” Otherwise that premise is no more true than any other random arbitrary premise you replace it with; and worse, it will be demonstrably false (like creationism). If science finds all human beings will live more satisfying lives if they don’t “promote the well-being of [all] conscious creatures,” then no one will have any good reason to care about “promoting the well-being of [all] conscious creatures.”
Thus, if you think there is a good reason for everyone to care about “promoting the well-being of [all] conscious creatures” (and I would agree with you: I list this as one of the expected consequences of my satisfaction thesis in my chapter on this and in my debate with McKay), then those “good reasons” must be empirical matters of fact that science could demonstrate. Thus, Harris’s thesis reduces to my thesis. They therefore cannot be contrasted as competing theses.
There is no fact of the matter whether God exists? (And thus has any will at all, much less a specific one?)
Surely you don’t seriously believe that. Even in principle, i.e. regardless of whether we’ve answered the question, there is an objective fact of the matter whether God exists and has a specific particular will.
Likewise the axiom “to support their nation even to the detriment of others”: will following that axiom really end in a more satisfying life than a life lived without that axiom (all else being equal), or vice versa? That is a question of scientific fact. And there is an objective empirical fact of the matter. Whether we have found out what it is or not. It is still a discoverable fact. So you can’t say there is “no reason to suppose there is a fact of the matter for which view is correct.”
The question is what preferences people actually have, in relation to what the consequences actually are. When people’s preferences are based on false beliefs about those consequences (“the Third Reich is a good idea and will work out great for us”), then their preferences are based on false axioms. When they correct that, and base their preferences on true axioms, then we’ll have their true preferences. But what makes the difference between true and false axioms is objective, empirical, scientific fact. On this, as well as the second component (of core preferences, those which cannot even in principle be changed by external objective facts: the existence of such preferences and what they are [and for whom, if people differ in this] is also a question of objective, empirical, scientific fact), read my article on Shermer vs. Pigliucci (linked in my article here), where I discuss both points.
Hi Richard,
Just a short note to say that since I appreciate all the effort you go to in answering your critics in comments, I have now purchased your book on kindle. I will read it and get back to you!
What about nonhuman beings? If morality can be objective, rather than intersubjective, why is it so gosh-darned anthropocentric?
I’m not sure which you mean.
You could mean to ask:
Why is how Homo sapiens should live so based on the nature of Homo sapiens?
But hopefully you would recognize that’s a dumb question. Obviously how any x should live depends on the nature of x. (If you really, seriously, don’t get that, then read my chapter on this subject, which lays it out formally.)
So I must suppose you mean to ask:
Why are the objectively true moral facts for Homo sapiens so concerned about the welfare of Homo sapiens? (Or something in that ballpark.)
The problem with this question is that you would have to be assuming you know what “the objectively true moral facts for Homo sapiens” are. Already. Without having conducted a single scientific study of the facts requisite to know such things. That makes no sense.
Harris is saying you can’t know this from the armchair. You have to go and discover some real actual facts first.
And those facts might not be “so gosh-darned anthropocentric.” Or if they are, those same facts will answer your question as to why.
So I don’t see any objection to Harris’s thesis here.
A better way to frame this question: Isn’t the structure of imperatives studied for all possible sentient agents a better target for ethics than limiting the study to Homo sapiens? Especially as we prepare to enter the transhuman era.
I’m not sure what you mean. Right now only Homo sapiens are capable of using the technology of imperative knowledge. So it is moot to talk about other species doing so. However, there is merit to starting side-discussions about alternative futures in this regard (and in regard to AI, not just human transformation). That’s just not what I’m talking about here. As I wrote in the article you are commenting on:
But I do explore that other subject elsewhere, e.g. in Ten Years to the Robot Apocalypse and in TEC, p. 356 (w. notes).
To clarify what I meant: Empirical study of morality is limited to Homo sapiens, but conceptual study can be extended to more general settings, as is done for example in game theory. The broader regularities — although counterfactual rather than empirical at this point — might provide reasons for more particular conditions of feeling satisfied.
Yes, that’s always worthwhile, but it’s still secondary to the empirical task of working out best practices for actual living people.
I’m sorry if I’m misusing the thread; last comment, but it will be a disorganized one. I bring this up because I fear that “because I feel so” might be insufficient reason for feeling satisfied. The “conceptual study” might be done using mathematics but also using science fiction or art in general. It might provide a “hermeneutic circle”-style foundation for core values. One might try to stuff this into the “providing more knowledge” rubric (“feeling satisfied under condition of sufficient knowledge”). But empirically speaking, a more knowledgeable person is already different from an ignorant person when we are in the business of evaluating values. We cannot perform a measurement of the satisfaction of subscribing to a wide range of values on a fixed person, because particular values might be a matter of personal identity to them — they’ll refuse to be “brainwashed”, even counterfactually. “Satisfaction” might mean a faculty that responds to reasons. But it might mean a non-cognitive feeling hard-wired to evaluation of an identity-defining set of values. In the latter case it would not make sense to say that satisfaction increases across changes in values (i.e. from happy Christian to happy atheist).
Except that we can in principle match satisfaction degree to qualitative measures of a brain state, and thus objectively compare degrees of satisfaction by a functional brain scan. Thus, it would make sense to say that satisfaction increases across changes in values (i.e. from happy Christian to happy atheist), if in fact we confirmed it neurophysically.
However, I assume that’s not the point you are trying to make, which pertains to the Socratic contrast of living as a happy pig vs. living as a less happy human. If we merely summed neurophysically measured satisfaction states we would be overlooking the fact that the pig can’t do the things a human can do to understand and improve their existence (e.g. grasp the value of life and extend it and improve it). In that case, being a pig would be, to a human, equivalent to being dead (since the lack of self-awareness would entail the functional equivalent of being brain dead for a human…i.e. who they are would cease to exist, and along with it all capacity for appreciating their existence). The comparative satisfaction states are thus human = N and pig = 0, and any N is greater than 0. Thus satisfaction is not a synonym for pleasure. And we therefore don’t have to worry about comparisons between people and animals.
So we move the analogy to one less extreme, where we have a profoundly ignorant human vs. a much wiser one. There we face two points of comparison to include:
(1) The ignorant human can only remain ignorant if they adopt a bad epistemology, but a bad epistemology will always be sub-optimal even for their own satisfaction pursuit; whereas adopting a good epistemology will eventually, inevitably, convert the ignorant human into a wise one. Thus, a more reliable satisfaction achievement (e.g. lower risk) requires adopting the good epistemology, since its benefits (lowered risk) outweigh the costs (the downsides of wisdom). In short, a person using a bad epistemology simply won’t be able to reliably match the person using a good epistemology in satisfaction states (in duration, degree, and frequency–even measured neurophysically). This is doubly so when we include the effects of the ignorant person’s decisions on the social system, which then limits that person’s ability to be satisfied (because stupidly governed social systems are far harder to find satisfaction in).
So there is no rational argument, from maximizing satisfaction, for adopting a bad epistemology over a good one. And a good one eliminates ignorance. So a rational person does not actually have the choice to be ignorant rather than wise. Therefore being ignorant is not an available option. The comparison is therefore moot.
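In decision-theoretic terms (my own gloss on the point just made, not a result from any cited study):

```latex
% Decision-theoretic gloss: with S = lifetime satisfaction,
% E = a good epistemology, E' = a bad one, the claim is that
\[
  \mathbb{E}[S \mid E] \;>\; \mathbb{E}[S \mid E'],
\]
% because E lowers the risk of frustrated desires (and of a dysfunctional
% social system) by more than any cost the "downsides of wisdom" impose.
```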
Similarly…
(2) The ignorant person already has desires the satisfaction of which require rational conclusions derived from true facts, and which will in turn drive them to explore the world to learn what they need to satisfy those desires. Those desires will continue to be frustrated rather than satisfied the more the ignorant person prefers a bad epistemology to a good one, since only a good one can reliably produce conclusions about how to satisfy those desires. Those desires will include the desire to know and the desire to be correct (and maybe even, eventually, the desire not to hurt people unknowingly, and so on). Which desires can only be satisfied by pursuing and adopting a good epistemology over a bad one, and by pursuing the elimination of one’s ignorance regardless of which epistemology they use.
So in point of fact an ignorant person will always be in some measure dissatisfied, unless they are in the process of becoming wise. Therefore, again, remaining ignorant is not satisfaction maximizing, because it’s not even an available option for anyone pursuing the satisfaction of their desires. The comparison is therefore, again, moot.
And that’s even for a case where the initial or happenstance satisfaction states are equal…which is questionable to begin with. Some studies are already showing that Christians, for example, suffer under a burden of fears and other oppressive or undesired states, which they report being so much happier being free of upon conversion to atheism (which we could confirm neurophysically). It’s therefore probably the case that it is very unlikely (and thus not reliable enough to count on) for a Christian on average to enjoy the same overall life satisfaction as (let’s say) an atheistic humanist on average (all else being equal). The more so, again, when we include social system consequences (e.g. the ways a Christian-produced social system limits everyone’s life satisfaction that a Humanist-produced social system would not).
Finally, retrospective satisfaction measures are valid as well. A common argument for ignorance is ignorance of the consequences of death (“isn’t it happier to think you’ll get to meet your loved ones in heaven, than to admit they no longer exist and you’ll never meet them again?”). This argument falls to both points above. But it also falls to a third: it is a fallacy to contrast the fictional (supposedly happy) option to the worst comparand (of depression over the reality, for example), because that poses a false dichotomy: there are more than two options.
A third option is acceptance of death: reaching a point where the fact that certain people cease to exist does not disturb you so much (it always will to some degree, but it can do so much less with an attitude of acceptance and adjustment). Then ask a person in the fiction-state which they would even at that time prefer: (A) to continue believing what is false (remaining in their fiction state) or (B) to achieve the acceptance-of-death state. A rationally informed person will choose (B). They will, even then, admit that that would be more satisfying. Obviously, for (A) to not be fictional (which I’ll designate as (A)*) would be even more satisfying still (and thus always preferred), but only if it was true (so if it’s false, its being true is not an available option), so what is disturbing is to continue believing (A)* knowing it may be a false belief; and that is more disturbing (and angst-generating) than accepting the truth would be. This is what is reported by all I know who have converted from the one state to the other.
And this ultimately would be measurable neurophysically (being in state (A) will measure as less satisfying overall than being in state (B)). This is why theists often try so hard to protect their delusion and insulate themselves from the truth, because (A) is disturbing, and anyone would prefer (A)* to (B), so they have to do everything they can to never realize they might realistically be in state (A). But that entails adopting such profoundly bad epistemologies (e.g. avoiding exposure to outside information, avoiding knowing anything about cognitive biases they are in thrall to, insulating themselves inside self-defeating and exploitative belief systems, etc.) that it hugely impairs their ability to reliably satisfy most other desires. So it is not a winning strategy. As apostates consistently confirm.
I totally agree. The arguments up to the mention of Christians are a defense of the universal value of knowledge for agents involved in a real world. It would be a stretch to call them inductive conclusions from empirical evidence; they are empirical only indirectly, by being derived by a thinker who came to be through a natural process, and they apply across all worlds. Such conclusions can inform our pursuit of values and our attempts at molding what is physiologically satisfying to us.
Richard,
In order to help us evaluate your case for moral science, I believe it would be useful to consider some examples. Could you give us examples of this science being used to resolve some difficult moral questions?
Otis Graf
Read my chapter on the subject.
I say even more about what the scientific research program would actually look like (and cite some of the science already done that comes close to this) in Sense and Goodness without God (Part V). But the formal underpinning for that is in the later treatment in “Moral Facts Naturally Exist”.
If you want to ask a specific question (e.g. pose a specific “difficult moral question”) I can outline here what a related research program on it would look like. But you would still have to have read my chapter to understand why (so I won’t be giving the why of it here; I wrote it down there so I wouldn’t have to write it down again; but I can give the how of it).
I’m afraid objective morality makes no more sense than objectively tasty food. Sure, there are foods most people would be able to enjoy, if only due to shared biology, but “tasty” is still in the eye of the beholder. So are beauty, morality, and whatnot. One could redefine tasty as nutritious, etc., but that’s not what most people mean by it.
Then you just aren’t paying attention. You are like a creationist saying evolution makes no more sense than junkyards randomly turning into airplanes.
My chapter (indeed my article here; but most formally there) refutes your claim. You are simply ignoring everything I said and gainsaying it as if I didn’t say anything disproving you.
Morality is not like aesthetics. Morality is like surgery or engineering. What you “ought” to do in surgery and engineering is not a matter of taste. It’s a matter of objective scientific fact. And this is true even if the core values morality is based on differ for different people. Because moral relativism is a subset of objective moral realism (i.e. even if moral facts are group relative, the moral facts for a given group are still objectively true for that group).
See my other comment in this thread.
Um, no. The only way you can make morality objective is by tying its definition to something objective. But that would be akin to begging the question – why should anyone accept your definition? My own morality doesn’t allow for murder, rape, etc., but I’m all too conscious that it’s only my choice. There is no objective Ought hanging out there telling me what to do. You choose to bring things like personal satisfaction into the equation, but nobody is obliged to accept your definition. Others will simply disagree that morality has anything to do with satisfaction. So in the end morality is like aesthetics, and is nothing like surgery.
Human desires are objective facts of physical brains that actually exist in a physical world.
Which definition?
Please interact with my actual article. It does not appear you have read it or are responding to any of the arguments in it. It looks like you are just gainsaying its conclusion as if there were no arguments for that conclusion.
“Human desires are objective facts of physical brains that actually exist in a physical world”
If this were relevant, then everything would be objective, because all our subjective appreciations would be objective facts of physical brains that actually exist in the physical world.

The fact that I enjoy the sweetness of an ananas doesn’t imply that “an ananas is enjoyably sweet” is an objective characteristic of the universe.

That someone desires something doesn’t imply that “the desirability of that something” is an objective characteristic of the universe.
If physicalism is true, then everything is an objective fact. It’s just that some of those things are only known subjectively. But the latter is an empirical property, not a metaphysical one.
You are confusing a statement about you with a statement about everyone. What is an objective characteristic of the universe is “an ananas is enjoyably sweet to Axxyaan.” For the distinction see SaG II.2.2.3, pp. 37-39.
Yes, it does. Because “that someone” is an objective characteristic of the universe (they are a physical part of it).
Again, do not confuse “objective facts about an object in category x” with “objective facts about all objects in category x.” Both are objective facts. Both are objective characteristics of the universe. But both are not the same characteristics.
Sotona and axxyan, you are both spot on.
>>That someone desires something doesn’t imply that “the desirability of that something” is an objective characteristic of the universe.
>>Yes, it does. Because “that someone” is an objective characteristic of the universe (they are a physical part of it).
Here you can see Richard’s fundamental glaring mistake coming out.
He completely avoids the actual moral question of what we OUGHT to do, and instead looks at whatever we desire and then goes off on a “scientific” chase for how to satisfy that and calls that morality (all the while ignoring the often infinite ways to achieve one thing, or similar results, and while sneaking in backdoor fundamental values – e.g., a mythology of sacred personhood, of equal personhood, etc. – to gain a liberal result and protect against killing others and so forth, even if we desire that or it’s the best way to get something).
Sotona: Richard always refuses to answer basic moral problems like the ones you raise, because it exposes that he can’t. Should I sacrifice my life to my sickly family member or seek self-fulfillment as an artist? Richard’s “moral” theory can’t answer a moral question. He’ll only ever say to read his book, where he doesn’t answer it either. See below and other posts where he refuses to answer in favor of obfuscation and endless logorrhea.
Richard also blatantly lies: says one thing one time, then adamantly denies he ever said it, etc. See my entry below and many examples on here. It’s pretty bizarre. Another good one is constantly claiming he “irrefutably proved” something when he merely attempted to, though no one accepts his proof anyway.
http://www.richardcarrier.info/archives/4498/comment-page-1#comment-53867
No I don’t.
You are just ignoring all my arguments.
As usual.
Stop being an ass, snowman.
You’re just embarrassing yourself.
Except that I do. Repeatedly. In all my writings.
Which you explicitly refuse to read.
You’ve been pulling this shit for years now.
You never read my work, you never interact with it, you never make an argument that addresses anything I’ve ever actually said, and you lie, repeatedly, about all of it.
You can’t find a single person, anywhere, who read the syllogisms in my book and found any fallacy or false premise in them. Hence you don’t point to even one.
You certainly aren’t one. You refuse to read anything I’ve ever written. And yet pretend to criticize it.
That’s just being an ass.
It’s not convincing anyone here.
Your reply here and below to me – and to others – shows what a nut you are. You’re a genius, you’ve proved everything (without even getting at the real question), anyone who disagrees is lying about you, etc. You really do come off as slightly crazy.
You can see here how badly Richard cheats his way out of the basic philosophical question at hand in favor of, what, economics?
Funny you still can’t answer a basic moral quandary as posed above by me and by many others. It’s because you need to be handed an answer before you can say, “Ah, let’s find out how to get that!” That’s not doing moral philosophy.
Richard thinks we OUGHT to do what we think we ought to do (what we desire, what “satisfies” us). Which is what? Oh, right, we don’t even know that because we don’t walk around with a nicely ordered list, which is why we ask ourselves what we OUGHT to desire, i.e., we think and do actual philosophy, not avoid the question entirely as Richard does.
Nor can Richard define “satisfaction” in any meaningful way whatsoever. Drugs? No, doesn’t count, that’s not “real” satisfaction. So he has a positive definition of true human liberty at work behind the scenes.
Nor can Richard get out of cases where I desire power and dominance and you the opposite and clashes result – there Richard steps away from his rule and invokes mythologies about sacred and equal personhood, about hating yourself for being mean to others, etc. Just silly fluff.
>>You can’t find a single person, anywhere, who read the syllogisms in my book and found any fallacy or false premise in them.
Jeez, come back to reality until you publish it in a real phil journal and phils agree with your, for now, mere CLAIM of having proved something, Richard. It means nothing that you published your own book and no one cares enough to argue against a silly argument buried amongst a deluge of overwrought pedantry. And even when people point out basic fundamental problems you shout them down as liars.
Again, yes, I’ve read your fundamental argument, Richard. It is really poor. It’s a bit of a joke. But, no, of course I wouldn’t pay you money to read the same fundamental argument reformatted in a book, sorry.
You still aren’t interacting with my chapter in TEC as you were told.
When you do, then I will respond to you.
Otherwise, we’re done.
I would not criticize Harris for his apparent lack of rigor, since he has said on many occasions he has *chosen* the level of engagement on purpose to be what he believes is the most palatable for the widest relevant audience. You’ve interacted with him through email and perhaps formed more intimate conclusions based on that interaction, but perhaps you’ve still misapprehended him. The analogy he uses to illustrate the gist of moral realism by appeal to the unstated philosophical underpinnings of medicine works (or *should* work) at a populist level, imo, whereas much of the philosophical lingo you’ve dropped in this post (while still streamlined) still doesn’t. Harris is more popular than you. I imagine you are probably more popular than Churchland or Foot. And Harris is still falling far short of being even popular enough to rattle the entire philosophical/secular/scientific world to get the ball rolling on a science of moral realism. Give the guy a fucking break. You’re not doing better in effect.
Harris has set the bar pretty damn low because he knows most mortals are going to have problems with *anything* in regards to the belligerent hydra that is human awareness of what makes their own values tick. He made his presentation choices. You disagree with them. So what? He’s trying to make the iota of progress that would allow both the relativistic secular world and the religious secular world (in other words, all the scientists that might actually be involved in the science of morality) to be on board with the basic enterprise of moral realism. He wants people in the ballpark, and they stubbornly refuse to go, kicking and screaming like little children. However, they have to resort to more egregious errors than any of his to manage this impressive feat of moral buffoonery.
In my mind, Harris is arguing at a populist level for the gist of moral realism. You’ve successfully boxed moral agents into your goal theory at a more rigorous philosophical level to cut off and/or accommodate all the practical problematic issues. However, I wouldn’t program a robot from scratch with your goal theory as it seems to rely overly on the incidence of the human condition. I would consider Alonzo Fyfe’s desirism a more articulate moral theory since it talks about what is inside the box of a moral agent rather than just trapping a human moral agent in your logic. Just sayin.
So I say all this in hopes that perhaps it keeps things in perspective. Harris doesn’t need your shit, any more than he needs Ophelia’s or anyone else’s, given the long line of dreadful FAIL that has been advancing the cause of moral realism in this supposedly civilized world.
I also choose a colloquial level of engagement for that same reason. That is not an alternative to rigor. You can be colloquial and rigorous, in serial and in parallel. In fact, if your aim is to start a new science, you should be both–especially when people keep asking you to, and your mission (to create a new science) demands it.
I also used that same analogy (years before he did). The mistake you are making is in assuming you can’t do both, when in fact Harris especially needs to do both, if he wants to get experts on board with forming and delineating a new science. And he needs to take seriously why both are needed.
The mistakes he makes result from his not “doing the math” as one might say. You might not have to show your math on a test; but you certainly have to be able to do that math to get the right answers and make sure they are right. Harris doesn’t know how to do the math, and consequently gets things wrong, and stubbornly refuses to accept that he’s gotten things wrong, because he is contemptuous of the people pointing that out, because he is contemptuous of philosophy. But philosophy developed to the level of sophistication it has for the same reason science did: you can’t be sloppy about things and expect to develop consistently coherent or correct arguments.
It’s not even that Harris is popularizing existing philosophy in support of his thesis (even though there is plenty of it). He is simply ignoring almost all that philosophy and expecting he won’t run into all the same problems it did, and which it worked around because it confronted them but he did not. He is like someone reinventing the wheel, getting a wonky wheel, and refusing to listen to any wheelwrights who keep explaining to him how to do it right.
Harris wants to start a new science. The way to do that is get scientists on board. Not laypeople. It’s valuable to communicate to laypeople what you want to do. But if that’s all you do, then you are epically failing at the one thing you actually did want to do, which is actually get this science going.
It’s all the worse when you ignore all the criticisms of your proposal from people who have been working on the same issue for decades, or treat them with contempt.
That is simply shooting himself in the foot.
Then maybe you don’t know that I proved Alonzo Fyfe’s desirism reduces to my goal theory. That was precisely what the debate between me and McKay was on (which I linked to in this article). Check that out.
Yes, he does. Badly. Because he is making a train wreck of his own stated goal: actually starting an actual science of morality.
Thanks for this, yes it’s obvious.
I suspect that part of the tension emerges from the threat “experimental philosophy” presents to traditionalists.
And satisfaction often involves a journey through unhappy valley, complicating evaluation.
If I may ask a question. I’m sorry that I haven’t read that book of yours (yet), and so I’m sorry if it’s answered there. I’ll try to be brief.
I generally take it as axiomatic that we should act to increase the happiness, freedom, safety, and well-being of ourselves and others, and we should act to further the other values of humanism.
It seems that you take it as axiomatic – as self-evident – that one should act to achieve one’s desires. I’ll defend the is-ought distinction only as far as this: for the rest of your argument to follow, you need that one little axiom, that one little starting unjustified “ought” proposition.
From there, you rightly note that most people do desire to be good people, to help others (to some extent), and so on. I don’t recall seeing it, but we can also rightly argue that having police and proper deterrence in place will help make it in everyone’s self interest to be nice to each other, and it’s also in everyone’s self interest to fund the police. It’s even in someone’s self interest if that person lacks empathy. This includes people colloquially known as sociopathic/psychopathic.
Thus, I think our moral systems come to the same conclusions on nearly every point, and can be considered basically equivalent.
However, they differ in corner cases. Consider the person who lacks empathy. (IIRC, some estimates put them at 1% of the population.) Consider a situation where you and another person are isolated and removed from society, such as on a desert island, and consider the other person to be a “sociopath”. On my framework, it is true that he should not murder you if you annoy him. On your framework, it seems that you cannot make that claim. There are some people who will feel no remorse from killing you, and it would be in their best self interest to kill you. It is quite coherent and plausible for there to be such a person who will be happier, safer, have more freedom, have a better well-being, and so on, by killing you on that desert island. I still want to say that killing you for merely annoying him is morally wrong, and I don’t think I can do that in your framework.
I’m just curious of your thoughts. Thanks!
The question is “why” should you or anyone take that as axiomatic?
Either that question has no answer (and therefore there really is no reason and therefore nothing actually true or right about doing this), or it does.
If it has an answer, that answer is either an empirically discoverable fact science could get at, or it is not an empirically discoverable fact science could get at.
If it is not an empirically discoverable fact science could get at, then how can anyone ever know what that answer is, or that the answer they come up with is true? We end up back where we started: we can discern no reason and therefore nothing actually true or right about doing this.
Harris’s thesis is that the answer is an empirically discoverable fact science could get at.
And for the reasons I lay out (here and in my chapter on Moral Facts and so on), he’s right.
As for your axiom that one should act to achieve one’s desires: that’s a tautology. Hence it is necessarily true.
Indeed, even you just took it as axiomatic (you desire to follow the other axiom you stated; if you didn’t, you wouldn’t). You could not have done otherwise. Because all “ought” statements reduce to subjunctive “would” statements (any “you ought to do x” translates to “you would do x if y; y is true, therefore x”; that’s the hypothetical imperative; a full logical analysis, with citations to the literature, is in my chapter). Otherwise “ought” statements have no meaning or truth value (as I also explain in my chapter).
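To make the bare logical form concrete, here is a minimal sketch in Lean (an illustration only, not the chapter’s actual syllogisms; the predicate names Desires, BestMeans, and Ought are invented for the example). If “ought” just abbreviates the subjunctive conditional, then given the two is-facts, the ought follows by nothing more than modus ponens:

```lean
-- A minimal sketch (illustrative predicates, not the chapter's formalism):
-- if "a ought to do x" just means "a desires y and x is the best
-- available means to y," then given those two empirical facts,
-- the ought follows deductively.
example (Agent Act Goal : Type)
    (Desires : Agent → Goal → Prop)     -- "a desires y" (an is-fact)
    (BestMeans : Act → Goal → Prop)     -- "x best achieves y" (an is-fact)
    (Ought : Agent → Act → Prop)        -- "a ought to do x"
    (hyp : ∀ (a : Agent) (x : Act) (y : Goal),
      Desires a y → BestMeans x y → Ought a x)
    (a : Agent) (x : Act) (y : Goal)
    (h1 : Desires a y) (h2 : BestMeans x y) : Ought a x :=
  hyp a x y h1 h2
```

The deduction is trivial by design: all the substantive work lies in empirically establishing the two premises, which is exactly the part science can do.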
This is the magic pill fallacy. We don’t live with a single other person isolated and removed from society. Therefore, any conclusions that hold in that bizarre condition will not necessarily hold in regular human life.
Morals must be based on the facts as they actually are. Not as you can imagine them being in some hypothetical alternate universe. Obviously if you change all the facts of the world, you change the morals. That is not a profound discovery. It just has nothing to do with what the moral facts are in the situations we actually find ourselves in.
As I said in this article, morality is situational.
Yes, I could. You seem to be confusing what would be the truth of the matter, and whether the sociopath would accept the truth of the matter. That he would not does not make it not true. I devoted several paragraphs to this fallacy in this very article you are commenting on.
But even if, somehow, on a full analysis, we found that in such a bizarre scenario he should kill you (that that was objectively true and not just a false belief he refused to abandon), then that same analysis would entail you should immediately kill him–precisely to remove an otherwise extreme risk of death. Which would then entail he should not kill you after all, for his own good.
The analysis could continue from there. But like the proverbial turtles holding up the earth, it’s still Game Theory all the way down.
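For readers who want the flavor of that analysis, here is a toy sketch in Python (my own construction with invented payoffs; it does not reproduce Binmore’s actual models). It enumerates the outcomes of a two-person game and keeps only those that survive each player’s best response; with payoffs encoding the assumption that attacking even an unwary partner is risky and forfeits the gains of cooperation, mutual restraint is the only stable outcome:

```python
# Toy desert-island game: each islander chooses to coexist or attack.
# Payoff assumptions (invented for illustration): peaceful coexistence
# pays 2 each; attacking a non-attacker nets only 1 once the risk of
# resistance and the loss of cooperation are priced in, and costs the
# victim 8; mutual attack risks death for both (-10 each).
from itertools import product

ACTIONS = ("coexist", "attack")

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("coexist", "coexist"): (2, 2),
    ("coexist", "attack"):  (-8, 1),
    ("attack",  "coexist"): (1, -8),
    ("attack",  "attack"):  (-10, -10),
}

def is_nash(row, col):
    """A pair of actions is stable (a Nash equilibrium) if neither
    player can gain by unilaterally switching actions."""
    r_pay, c_pay = payoffs[(row, col)]
    row_best = all(payoffs[(alt, col)][0] <= r_pay for alt in ACTIONS)
    col_best = all(payoffs[(row, alt)][1] <= c_pay for alt in ACTIONS)
    return row_best and col_best

for row, col in product(ACTIONS, ACTIONS):
    if is_nash(row, col):
        print("stable outcome:", row, col)   # prints: coexist coexist
```

Change the payoffs and the equilibrium changes with them; the point is only that the “should” falls out of the facts of the situation, not out of anyone’s taste.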
Thanks for your time. I don’t expect a long reply. If you want, I’d be happy for just a brief reply to the general thrust.
Dr. Carrier, do you really think that? Suppose we found a person who lacked empathy, again purportedly one of the 1% of the human population. Suppose this person was in a situation where he could steal something and he had good reason to think that he would not be caught, ever. He could rightly conclude that he will suffer no negative externally imposed harms for stealing it. Suppose he wants it, and thus he will satisfy a desire by stealing it. He would not experience any particular positive emotion by not stealing it. It seems that using your rules of analysis, he would conclude that he should steal it.
As a third-party, am I to conclude that “he should steal it”? What if he’s stealing something of mine which I value? Could I simultaneously say that “he should steal it” and “I should act to prevent its theft”? I’m trying to figure out if you think that there are true moral statements concerning conscious creatures which do not arbitrarily favor the perspective of any particular conscious creature. From your answer to the desert island example, it seems no, but I want to emphasize it again with this example.
In any system that I will recognize as morality, one must be able to conclude true moral statements concerning conscious creatures which do not arbitrarily favor the perspective of any particular conscious creature. In my example, the usual outcome is: “it is morally wrong for him to steal it, and he should not steal it”. (Note that this is entirely compatible with the statement of efficacy: “Stealing it is the most efficacious plan for him to achieve his own best personal well-being”.)
Furthermore, this is not some arcane detail that requires a highly contrived scenario. This and related examples are highly relevant to most people’s daily lives.
I’m pretty sure Sam Harris is with me on this point, and we are both clearly against you. Sam and I talk about the words “ought” and “should” in the context of a landscape of well-being which is not dependent on any particular conscious creature. So, we simply have to disagree over both the meaning of the words in the following, and more importantly the meaning of the general form of the following argument:
Sam and I would both argue that this is not what the words mean to most people (“should” relates to the global well-being, not merely personal well-being), and more importantly that the … “proper actions” one takes relate in some degree to the global well-being. If you do not already accept this value, then I cannot make any logical argument that you should, and I will not try.
I also think that it’s near definitional that if one does not accept this value, then one is a (colloquial) sociopath/psychopath. The distinction between a normal person and a sociopath/psychopath is that a normal person recognizes that they ought to sometimes act against their best self interest for the good of other people.
I know exactly what you’re about to say, that the good people actually are acting in their own self interest if you properly take into account the positive feelings from “doing the right thing”. I might agree. However, the crucial difference is that by your analysis, it seems quite difficult to talk about a global morality, and it seems quite difficult to say that it is wrong for sociopaths/psychopaths to steal like it is for normal people to steal.
Consequently, I think you are wrong when you say that you are merely clarifying the points which Sam Harris is trying to make. Sam Harris, and I, and most other good moral people, are taking a fundamentally different stance than you, and I do not think that it is fair or accurate to characterize that you’re just “cleaning up his arguments” as you have done in your post. (Accidental I’m sure. No malice implied.)
Luckily for us, the two views converge a large amount of the time. Sam Harris once said “a high tide raises all boats”, and meant that as a recognition that moral questions are not often zero-sum games, and often the answer that benefits you will also benefit your neighbor. Unluckily, I think there is a very real and common class of corner cases where this is not true, and I will argue for language and methods that allow me to say that it is wrong for a sociopath/psychopath to steal in the same way that it’s wrong for a normal person to steal.
To end, you’ve heard the common rebuttal to Christian morality that goes like this? The Christian says he’s moral to avoid going to hell and to get into heaven. The atheist says that the Christian is a selfish prick, and that’s not morality. I see some strong similarity to this discussion here.
PS: Again, thanks for your time, and otherwise keep up the good work (lol).
PPS: Why don’t I see a “preview” button?
Think what? You didn’t specify what you were referring to. So I don’t know how to answer you.
If you are asking about how psychopaths fit into the Harris thesis, Harris himself has something to say about that, but for myself, I explicitly discuss it in TEC, p. 353 (w. n. 41, p. 427) and SaG V.2.3.2, pp. 342-44 (with bibliography).
Your own scenarios you have to work through using Game Theory (e.g. Ken Binmore, Game Theory and the Social Contract, vol. 1 and vol. 2, which I cite in TEC; I give an example in re: slaveowners therein, pp. 343-47, which is relevant to what you might be asking).
Your error is in assuming the only negative consequence to stealing something is getting caught. That is not even remotely correct. I discuss this fact a little in TEC, but more extensively in SaG V.2.1, pp. 313-24, but even more in my debate with McKay.
For an example of Game Theoretic thinking on this point (using lying as an example rather than theft, but the same thing could be adapted thereto), see what I said in Finke’s interview of me.
And that’s just one factor. I discuss yet other factors in SaG.
Note that this cannot be an objection to my theory (or Harris’s). Because what you are saying is “What if science confirms something is true that I don’t like?” Well, you can act like a creationist and deny the facts. Or you can accept the facts, and admit that moral truth does not consist in what you personally just happen to “like.” That is exactly how Harris would answer you. And he is right.
As it happens, it’s unlikely science will get the result you “fear” here (see the pages I cited above from TEC on this because they are directly on this point, using slavery as the example rather than cases of thievery). But if it did, then your intuition here would be proved wrong. By facts.
Indeed, I can imagine a society in which it is in fact a universally accepted rule that theft is okay if you can reasonably believe you will never get caught. The whole society will then adjust its behavior to adapt to the knowledge that all its members will be abiding by that rule. The problem lies precisely in the consequences of that fact: the adjustments that society engages in will likely make society far worse to live in than if we didn’t adopt such a rule. Therefore even the sociopath should not do so.
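Here is a toy welfare comparison in Python to illustrate the shape of that argument (all numbers invented; this is an illustration of mine, not the analysis in TEC or the McKay debate). Once everyone adapts to the theft-permitting rule, the adaptation costs are pure deadweight loss, and even the thief nets less than he would under the honest norm:

```python
# Toy comparison of two social rules (all numbers hypothetical).

N = 1000          # members of society
PRODUCTION = 100  # baseline welfare each member produces

def honest_norm():
    enforcement = 2                  # small shared cost of upholding the norm
    each = PRODUCTION - enforcement
    return each, each * N            # (per-person payoff, total welfare)

def theft_tolerated_norm():
    guarding = 20                    # locks, guards, vigilance per person
    friction = 5                     # lost trade from pervasive distrust
    stolen_gain = 10                 # what a successful thief nets on average
    victim = PRODUCTION - guarding - friction
    thief = victim + stolen_gain     # thieves bear guarding costs too
    return thief, victim * N         # (thief's payoff, approx. total welfare)

honest_each, honest_total = honest_norm()
thief_payoff, theft_total = theft_tolerated_norm()
print(f"honest norm: everyone gets {honest_each}, total {honest_total}")
print(f"theft norm:  even the thief gets {thief_payoff}, total ~{theft_total}")
# honest norm: everyone gets 98, total 98000
# theft norm:  even the thief gets 85, total ~75000
```

Whether the real costs actually shake out that way is an empirical question, which is the point: it is the kind of question a moral science would measure rather than intuit.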
In my debate with McKay we explored the analogy of an omniscient person (who could get away with all kinds of things because of their omniscience). You will benefit from reading that.
If that were true, then it would not be a landscape. The fact that Harris acknowledges there may be multiple peaks proves you simply are wrong about what he is claiming. But he is even explicit about it when he talks about hypothetical alien races. He agrees with me. Not you. His whole point is that where the peaks are on the landscape is empirically dependent on the species of creature looking for them.
Incorrect.
For example, when Sam was once asked about why the oppression of women among the Taliban is wrong, his argument was not “merely because it was contrary to the welfare of women” but because the total systemic consequences of a society that disregards the welfare of women are negative for the men living in it. In other words, it is contrary to the best interests of men for them to oppress women. He gave examples such as that it substantially degrades the quality of personal relationships men have with women (thus barring them from a much higher achievable peak of their own personal welfare on the available landscape), and that it measurably degrades the men who have to degrade women (e.g. the men are left more immature and driven by irrational emotion and locked in their own oppressive roles by their own rules). It also degrades the economic system (in direct costs and in lost revenues), which is why no first world country acts like the Taliban (but rather keeps increasing the rights and power of women).
In a more direct sense: for men to oppress women requires that men not care about the welfare of women (to not think of them as equals or even as human beings and to disregard their feelings and desires). But the costs to a man of not caring about women are huge and are the primary reason men should not want that (that they irrationally don’t recognize this is a product of simply that: irrationality). I give the same point about compassion more generally (and the impossibility of selectively suppressing it, yet the consequences of not suppressing it to decision making about the welfare of others) in SaG V.2.1, pp. 313-24, and I reference this discussion in TEC (cf. p. 344, w. n. 26, p. 424).
Which necessarily requires that you care about global well-being. Harris completely acknowledges this and builds his entire theory on it. It always comes back to why the agent would be motivated to act toward such end goals. Harris has never said otherwise. Though he doesn’t write very well so I can understand why someone who doesn’t read him thoroughly might be confused by him.
That is scientifically false. Psychopaths can recognize that, and even act accordingly. They just aren’t highly motivated to. The consequences to them are generally not good (relative to the entire psychopath population), and will be even worse in a society that has a better science-based strategy for defending itself against them. But psychopaths are mentally disabled, so they are greatly handicapped in their ability to recognize this or act on it.
What actually distinguishes a psychopath from a normal person is that psychopaths have a greatly suppressed fear response, and in consequence (if this onsets in childhood, and hence during child development) they do not develop empathy (the ability to have sympathetic feelings with others). So they don’t care if they hurt people.
This is because during childhood it is, e.g., fear of disappointing one’s mother/father/teachers/elders/peers that trains the brain to attend to the emotional behavior of those people and thereby develop sympathetic emotions that allow them to anticipate their behavior. The side-effect of this (normally) is that these learned sympathies are locked-in as the child grows up and thus guide conduct throughout life. Sociopaths never develop this skill because their diminished fear response never motivates them to, and so their brains get locked-in without it (and so far it appears this is not reversible).
But sociopaths also continue to lack a substantive fear response, which is actually the primary mental defect characterizing sociopathy. So they usually don’t react even to fear of getting caught (that’s also why they are such good liars). This is why movies and TV get psychopaths wrong almost all the time: psychopaths are typically very easily caught, because they don’t think they need to plan against it. The clever ones who figure out how to do that as a manipulation skill are actually quite rare (although also the most dangerous), and even they are not motivated by a fear of getting caught but by a positive desire to succeed at their goals. Yet by lacking a fear response, it’s actually much harder for a sociopath to learn how to avoid negative consequences than it is for a normal person; they generally instead just rely on ad hoc lying, emotional manipulation, intimidation, and, if necessary, violence.
Ultimately, sociopathy is a mental disease. And as such, cannot be normative. Certainly not for non-sociopaths. But even for sociopaths, their behavior is ultimately maladaptive and Game Theoretically irrational. I say much more about this in the referenced pages I cited at the start.
You very much need to read my actual analysis of exactly this in TEC, pp. 335-38 (“The Logic of Christian Morality”).
I don’t know. You should. If you still don’t, report the problem to our webmaster. (That goes for anyone.)
“And because he has such contempt for philosophy, he never learned to be any better at it…”
When has Harris shown contempt for philosophy? Seems like Lawrence Krauss, Dawkins, Shermer, etc. all like to talk some trash on philosophy, but Sam never does this (to my knowledge). In fact, he regularly defends it whenever it comes up at his events, saying things like there is no sharp boundary between philosophy and science (that both are just included under the umbrella of “all rational thought”), that his moral landscape would include it, and that he doesn’t dismiss what other philosophers have had to say on the subject. He earned his bachelor’s in philosophy after all.
Granted, he didn’t engage with many (or any) in the ML, and perhaps he should’ve, but I think he was honest about his reasons for not doing so. Are there other instances I’m just not aware of?
Oh, yes, there are others who show even greater contempt for philosophy. But Harris is often enough snide and contemptuous of philosophers (and the technical literature of philosophy) in most of his books and public discourse and interactions. The worst example is his book Free Will. In Moral Landscape he at least shows some small respect for philosophy, but he still mocks it and makes excuses for ignoring or dismissing it more than once there (and shows very minimal acquaintance with the relevant philosophy, too, which is an act of contempt in itself, when writing on a subject philosophers have been seriously publishing on extensively). That’s why he hardly ever relies on it or cites or interacts with it.
See my other comment in this thread.
I’ll accept for the sake of argument that “some aspects of how to maximize life satisfaction are true for all human beings”. However, it remains to be demonstrated that maximal life satisfaction is simultaneously achievable for all human beings. If it is, great, problem solved. But I’ll go out on a limb and say that, given scarcities of various kinds, it is not possible to simultaneously, maximally satisfy all human beings.
If that is indeed the case how do we arbitrate who yields their satisfaction, and by how much? This strikes me as a (if not the) core concern of morality, and it’s not readily apparent that the rules for such arbitration can be derived from empirical observation.
Those are all questions for a moral science to empirically resolve. It is not an objection to Harris’s thesis.
Science could even prove them unresolvable. Harris himself concedes the best we might end up with is an optimal rather than an ideal moral system, but even then it will still be empirically, scientifically proven to be the best system achievable on present means–until we discover a better.
Not related to this post, but it is nice to see atheists arguing with other atheists. It’s a nice change from just a couple years ago, when atheists seemed to only argue with the religious. I have no desire to argue with religious people anymore…for some reason it feels like the religious are losing the battle and becoming less and less of an influence…to the point where I don’t care what they think. I think reading blogs, like this one, where atheists are arguing with each other about important philosophical/moral details signifies that times have changed.
I’ve also read Harris’ book, and while yours is a more concise presentation of its core thesis, I still don’t think it works. Equating moral facts and personal satisfaction seems adequate prima facie, but I think it will eventually collapse under its own weight.
Let’s take a simple example. Suppose I’m a bigoted dictator who wants more than anything to kill anyone daring to disagree with me. Let’s also suppose I’m right about this, i.e. that it could be scientifically demonstrated to indeed cause me greater satisfaction than any other course of behavior. Even with all the risks I’d have to take, it would be better than watching people hold other opinions. Now, according to your theory my most rational choice would be to slaughter as many of my opponents as I can find. It would in fact be morally right for me to do so.
As far as I can see your only way out of this is to argue that killing all dissenters wouldn’t really be the most satisfying thing for me. Saying that I shouldn’t kill everyone despite my deep desires would negate your premise of morality as personal gratification. Appealing to human universals would also have the same result. There are very few truly universal traits – so it would generally boil down to a majority vote – and even with those few your definition of morality doesn’t have the same power with the universal needs of others. I agree that asking “Why should I do what’s best for me?” is nonsensical, but asking “Why should I do what’s best for others?” isn’t. Even when it comes to things like needing to eat, respecting them in others cannot be ontologically justified in the same way as respecting it in yourself can. It also can’t be derived from the fact that your doing what’s best for you is self-evident. Helping others may not be best for you even most of the time, some people you might even enjoy hurting. Generally, in the long run it benefits you to be kind to those around you, but this isn’t enough to avoid the is-ought problem anymore.
Generalizing from the dictator example above, you’d have to explain how exploiting other people – whether at the cost of their lives or something less – is a priori never the most satisfying option.
Note that you can’t reach that conclusion until you find an invalidity or undemonstrated premise in the formal argument in my chapter “Moral Facts Naturally Exist”. As I explained in this very article (“the syllogisms you have to prove invalid or unsound are on pp. 359-64”).
Your counter-examples (or their functional equivalents) are all refuted in my Chapter’s main text and endnotes as well.
>>“Why should I do what’s best for others?”
Perfect question and, no, Richard did not “refute” this in his book. (Whenever Richard says he “refuted” or “proved” something, once you deduct all the hubris it merely means he argued something.)
Richard sneaks mythological values of sacred personhood and equal personhood in the back door to protect from getting an illiberal result.
He’s even claimed, in order to save his desired liberal outcome, that if you rape and hurt other people you will go insane from self-hatred. Nutty stuff; definitely not a refutation of your question anyway.
Incorrect. I produce formal logical syllogisms, devoid of fallacy, whose premises cannot be denied.
That is exactly what a proof is.
But you never even look at them. So you don’t know what the fuck you are talking about.
I love how you define yourself: Richard, the one who “produces formal logical syllogisms, devoid of fallacy, whose premises cannot be denied.”
You are hilarious in your blind pomposity.
Can you explain again why I’ll hate myself and not “truly” be satisfied if I’m mean to others? That one is always good for a chuckle. I get great satisfaction out of laughing at you, for example.
It is not pompous to say I have produced formal logical syllogisms devoid of fallacy whose premises cannot be denied, when in fact that is actually what one has done.
You cannot claim I haven’t. Because you won’t even look.
That proves your opinion on this point worthless.
So yet again you can’t respond to basic criticism of your theory?
Please explain why it “cannot be denied” that there is an exception to your “desire what you desire, but be nice” theory whereby we will hate ourselves if we desire to be mean to you.
What are you so afraid of?
(Seriously, you don’t think yourself pompous??? You’re the Anthony Weiner of atheism. Even if we agree with you on something, you present yourself as such a jerk all the time we don’t want to be seen on the same team.)
I have responded. I’ve published the case against your rudimentary criticisms more than once. You refuse to read them. Why then should I repeat myself? You aren’t treating me with any seriousness or respect. You never read my actual arguments. You lie about what they are, or ignore them and make claims that I’ve already refuted as if I hadn’t, and continue to do this even after I call you out on it.
You are just a troll.
“Hence I do not believe anyone can make a valid argument against it.”
Well, technically it would be pretty easy to present a valid argument against Harris’s thesis. For example:
(Premise 1) If unicorns shit gold, then Harris’s thesis is wrong.
(Premise 2) Unicorns shit gold.
Therefore,
(Conclusion) Harris’s thesis is wrong.
This is a formally valid argument (‘valid’ meaning that if the premises are true, then the conclusion cannot possibly be false). However, since the premises are not true, the argument is not sound. I’m guessing that’s what you meant to begin with, but I think it’s important to maintain the very important distinction between validity and soundness.
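In Lean, for instance, the formal point fits in one line (a sketch only; P and Q stand for any propositions whatever, including ones about the excretory habits of unicorns):

```lean
-- Validity is purely formal: modus ponens goes through for any P and Q.
-- Whether the premises are actually true (soundness) is a separate,
-- empirical question that no proof assistant can settle.
example (P Q : Prop) (h1 : P → Q) (h2 : P) : Q := h1 h2
```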
Just for the record: I do agree with Harris’s main thesis (though I find your defense of the same basic idea to be far more sophisticated and philosophically responsible).
Right. There is a colloquial sense of “valid” and the technical sense of “valid.” If I were saying something technically precise, I would use “sound” instead of “valid.” But most people don’t use or understand the word “sound” in that sense, whereas they do understand “valid” in the sense that philosophers technically use “sound.”
I prefer to, and almost always, speak and write in common language, not technical language. Even when I use language in a technical sense, I usually say I am or contextualize it so it can be inferred I am.
The reason I prefer colloquial discourse is that it is the dialect everyone speaks, and it is important that I speak the language of the people I am communicating with. (Someone else here mistook me for criticizing Harris for doing the same; I am not: see my comment on that for clarification.)
Seems to me that this argument leads to the conclusion that everyone should be given Soma.
Also, satisfaction when? For how long? A week of intense satisfaction with life or a lifetime of very mild satisfaction?
I have no issue with the general pursuit of extending (proportional) empathy to all sentient beings, and that being codified in human rights legislation etc, but our moral senses are not a result of a neat, designed system, and so I seriously doubt that any logical argument that concentrates on one outcome (satisfaction, well-being), however loosely defined, will ever be satisfactory.
People will suffer for causes, and for art, not because it is the most satisfying option, but because some things are perceived to have merit beyond self-satisfaction.
Those are all empirical questions of fact for science to answer. These are not objections to Harris’s thesis.
Not being philosophically sophisticated, I can only take “what values people have” to mean “what they do in fact value”. Harris seems to go quite a bit further than that:
That doesn’t sound much like a “hypothetical imperative” to me. In fact, he seems to explicitly deny it (“Nor am I merely saying that science can help us get what we want out of life”). Rather than “if you desire x, then you ought to do y” what he seems to be saying is that science can tell you to “desire x” in the first place [1], and I think his critics rightly accuse him of begging the question when the basic value judgement (the well-being/satisfaction of conscious creatures is worth something) is part of his premises from the very outset. Not that there is anything wrong with taking a value judgement as a premise per se, but it is a problem if the point you are trying to make is that science can help you arrive at value judgments in the first place just by looking at the facts.
Asking whether or not science can tell us “what we should/ought to value”, is rather like asking “Have you stopped beating your wife?”, since the question already presupposes that we should/ought to value something. That seems like a bait and switch to me since his job was to demonstrate – as a matter of fact – that “something is worth something”.
_____________________________________
1. If that’s not what is implied, then this has to be the most spectacular failure to “be very clear” about one’s “general thesis” ever written.
Then you must not understand how hypothetical imperatives work.
For example, “you ought to sterilize your instruments before performing surgery” is objectively true.
Suppose you meet a tribe who believes it is immoral to sterilize their instruments before performing surgery (because it angers the gods). It would still be true for them that “you ought to sterilize your instruments before performing surgery.” They just don’t know it yet, because they have false beliefs.
In the same way, your preferences at present can be incorrect, and you can be convinced to change your preferences. Those people should want to sterilize their instruments. Saying so in no way denies that what we are talking about is a hypothetical imperative. We still are. Ceteris paribus, they want surgery patients to survive and have fewer complications. Therefore they should want to sterilize their instruments before surgery. What is causing them to want to do something else instead is a false belief.
When it comes to moral imperatives (as opposed to just any imperative, like surgical imperatives), there are certain kinds of desires you are interested in. And there are certain desires that even Harris would admit are unchangeable (no change of understanding or belief would change those desires: those are called core desires; I lay out the distinctions and relevance of this in my chapter on Moral Facts). But by that very reason, those are the desires moral facts are based on. What Harris is saying is that all those core desires in conjunction with the true facts of the world entail that people should have a whole system of derivative desires. That is what he is talking about when he says there are desires people should have. He is not talking about core desires there.
I said more about this elsewhere in this thread, but to really get the point, you need to read my chapter on this.
As far as I understand, you are not defending Harris’s thesis and I strongly suspect he would disagree with what you are saying here.
Harris has stated he is not talking about starting from certain values and then having science find out how best to enact them. From that I understand he is not talking about hypothetical imperatives (or at least he seems to try very hard to deny that).

He is talking about the possibility of deriving an ought from an is. But as far as I am aware, we are not talking about hypothetical imperatives in such a context. Harris is talking about deriving values from facts, not depending on other values (expressed as wishes, what people care for, …).

Now maybe my understanding of his real thesis is wrong. But I have the impression I’m not the only one who understands Harris this way, and I think this way of understanding him is rather natural given how he expresses himself. So you can’t blame people for criticizing this way of understanding Harris’s thesis, even if you think this is not what he is trying to say.
I am also talking about deriving an ought from an is. And Harris is also talking about doing this via hypothetical imperatives. That he makes a hash of his explanation is a product of his not being well informed philosophically, not of his having a different argument than mine.
You seem to think Harris does not argue from core values. That’s incorrect. He very definitely does (and his whole thesis is based on this assumption; he hints at it many times). But core values are not the same thing as derivative values. Harris talks mostly about the latter. But that should not confuse you into thinking he talks only about the latter.
For more on the distinction, follow the thread here.
IMO, you are not deriving a real ought. You are using imperative language to point out the (practical) requirements for enacting particular values. That, IMO, is not what people are talking about when they talk about deriving an ought from an is. I haven’t been following Blackford lately, but when The Moral Landscape came out he had a blog post about hypothetical imperatives (although he didn’t call them that) and how you could indeed derive those from an is, but that this was not what Harris was talking about.
As such I disagree with the example you gave to Bjarte Foshaug. “You ought to sterilize your instruments before performing surgery” is not objectively true. It is possible not to care about the survival of your surgery patient: the surgeon might perform the surgery in order to solve some kind of puzzle for which the survival of the patient is unimportant. You may find such a surgeon a horrible person who acts contrary to the values of most people, but that doesn’t make the statement objective.
Also, I do think Harris argues from core values; only Harris himself doesn’t seem to understand that, because as far as I understand him, he denies doing it. As far as I understand, he claims not to start from core values but that those “core values” are derived from facts.
I have no idea what you mean by a “real” ought.
The only “ought” I am interested in is one that can be known to be true.
And I (and others) have shown hypothetical imperatives are the only oughts that meet that condition (I cite Darwall on this point in TEC, p. 423 n. 21, for example).
No other oughts matter. As I explicitly prove in TEC, p. 348, with pp. 340-43 (syllogisms on pp. 359-61).
And yes, this is exactly what Harris is talking about. That he doesn’t use the right language and isn’t clear is the product of his not being very good at this.
But whether it’s possible is irrelevant. Whether it is the case is what matters. And that is a scientific question of empirical fact. That’s the bottom line. If you want to say that there is a system of moral facts that are only true for certain types of people (e.g. people who have certain desires), then you are not objecting to the Harris thesis. Harris fully agrees that may be the case. That’s the whole point of there maybe being multiple peaks on the landscape.

He just does not do a good job of explaining why all those peaks will have to be peaks for the whole society one is in (he simply assumes this, or vaguely makes a case for it in other places), which misleads people to mistake what he is saying. What he is saying is that even if there are different moralities for different types of people, those moralities cannot be in conflict with social well being, because then the society the moral agent lives in will be dysfunctional–which will make it impossible for that agent to ascend a peak.
Note your first sentence is a clue to what’s wrong with your second. Yes, indeed, Harris is not very clear in his thinking or writing on this, and does not appear to really understand how to make the logic of his own argument work. Thus we end up with a situation where he does not make clear what he means by core values deriving from facts (as opposed to being facts). If you read him all through carefully and charitably you can reconstruct what he must be thinking or what is required for him to be thinking it. But of course, we shouldn’t have to reconstruct his thinking. He should be able to clearly articulate it top to bottom. And that, in a nutshell, is what’s wrong with Harris as a philosopher.
>>It might not be immediately obvious how the conclusion (item 5) follows necessarily from those premises… But I formally prove it (by deductive logical syllogism) in a chapter on this topic that was peer reviewed by four professors of philosophy
No, you didn’t “prove” anything there.
>>Academic peer review simply does not look for, nor even rewards, best cases. They just publish any rubbish that meets their minimal standards. It is not as rigorous as it should be in policing fallacious, illogical, unscientific, or muddled argumentation.
You’re such a hypocrite, Richard. When I posted that recently about your repeated pimping of being peer reviewed you went off the deep end asking repeatedly how I could possibly dare claim that your being peer reviewed said nothing about the truth of your (bad, preachy, moralistic) arguments.
>> both I and Churchland have in that event proved you wrong, and if you knew anything about the role of empirically proved imperative facts in agriculture, engineering and medicine, we wouldn’t have to school you on this point
Can you please prove again quickly whether vanilla or chocolate is more satisfying? Or if dedicating my life to my sickly family member or to self-achievement is more satisfying?
Oh, right, you can’t prove it because there is no right answer and you’ve simply avoided the actual question of what we SHOULD do while pretending you can calculate how to get what we do want.
Both you and Harris are such embarrassing clowns.
Yes, snowman. I did prove it there. The syllogisms on pp. 359-64 are a formal deductive proof (hence “the syllogisms you have to prove invalid or unsound are on pp. 359-64”).
I know you keep coming to my blog and claiming it’s not a proof (even though that is what a proof is) and you keep refusing (even after I’ve asked you a dozen times now) to even attempt to find a single flaw in that proof or even mention what the proof is.
But that just makes you an asshole. It doesn’t make you right.
I’ve also explained to you the value and limits of peer review. Several times. That you keep ignoring me and pretending we never had those conversations just confirms what an asshole you are.
You take no one seriously and do absolutely no homework and make nothing even resembling a sound argument.
You are the clown.
Yet again your absurd hubris is on display. You didn’t “prove” anything. You merely attempted to, and no one in philosophy accepts your proof. Got it? Publish it in a major phil journal and have it accepted as proof by phils around the world, then come back and you can say that.
>>I’ve also explained to you the value and limits of peer review.
Yet again, you say something then come back and lie about it like it never happened. It’s bizarre how many times you’ve done that. It makes you look completely untrustworthy even when you’re right.
Compare here with your earlier 10-reply freak out about how legitimate your arguments are simply because they were peer reviewed (though not even in a journal, and you refused to say by whom).
>>Academic peer review simply does not look for, nor even rewards, best cases. They just publish any rubbish that meets their minimal standards. It is not as rigorous as it should be in policing fallacious, illogical, unscientific, or muddled argumentation.
>You’re such a hypocrite, Richard. When I posted that recently about your repeated pimping of being peer reviewed you went off the deep end asking repeatedly how I could possibly dare claim that your being peer reviewed said nothing about the truth of your (bad, preachy, moralistic) arguments.
Now see http://www.richardcarrier.info/archives/3245/comment-page-1#comment-32802
(Go up a post to begin a great thread on how Richard’s theory is so off track he has to refuse to answer a basic moral question about how we ought to act, even when blogging about how to answer a critic asking that question!)
How do you know?
You won’t even look at the proofs.
No, you’re a liar. You misrepresent what I say, willfully and deliberately. I know, because I’ve been correcting you on the matter of peer review for years, yet you keep telling the same lies about what I said, over and over.
Actually, anyone can follow the link for himself and see how you freaked out repeatedly when I said being peer-reviewed anonymously in a private book said nothing about the truth of your arguments.
And then see how you said the exact opposite above.
Your problem is you can’t see the forest for the debate points you think you’re winning as you argue every tiny little thing, no matter how bad your replies.
See http://www.richardcarrier.info/archives/3245/comment-page-1#comment-32802
No, I did not say “the exact opposite.” Nor did I freak out.
You are a liar, snowman. Plain and simple.
My nose is not big and your daddy is a fat loser!
Good god, Richard, do you have Asperger’s? You are so absurdly childish and pedantic it is amazing. And to combine that with such poor arguments and such ludicrous pomposity to begin with is wondrous. Few people are as entertaining. Thank you!
In which you demonstrate which of us is the actual child.
“In short, if we can answer the “how do you know [x] is fact and not merely your opinion?” question for any x in any other science, we can do it for any x in moral science. And in precisely the same ways.”
But the answer for any other x in ‘any other’ science will be some version of ‘because we have satisfactory experimental data’. That answer can never be given for a moral ‘fact’, and so the things are in different categories from the get-go.
That is a circular argument.
You are simply ignoring everything I just said in this article (as well as in my formal chapter on the subject) and just gainsaying it without addressing any of the refutations I just presented of the very thing you are now claiming.
But this makes the morals not objective. IMO, objective implies the same for everyone, including aliens or robots or other forms of intelligence. What you are talking about here seems to be intersubjectivity; I don’t have any problem with that. I can understand that since humans are generally similarly wired, we can expect some general tendencies in what brings us satisfaction, and from there come to some general conclusions about values and morality.
But this is not how I understand Harris. On the contrary he seems to deny this is the kind of thing he is talking about.
No, that’s “universal.” Objective just means true regardless of what you believe or want to be true. That you like cheesecake would be an objective fact about you, even if it wasn’t true of anyone else (I could even prove it scientifically using an fMRI study, without once ever asking you whether you liked cheesecake). It would be an objective fact for everyone if everyone liked cheesecake (and then the liking of cheesecake would be a universal fact…for humans).
The word “intersubjectivity” is not a term in metaphysics; it’s a term in epistemology (it describes a process or limitation on how we acquire knowledge about the objective world). “Objectivity” is similarly an epistemological term (it is also a process of knowing, a methodological stance); but “objective” is not an epistemological term but a metaphysical one. An “objective fact” is a fact that is true independently of what you think is true. That can include “subjective facts” (such as whether you like cheesecake), since those happen to be material physical facts that exist and are observable regardless of what you believe, but subjective facts are a special subset in that they can never be subjectively false (and if mind-brain physicalism is true, when subjective facts obtain, they can never be objectively false either, i.e. if you like cheesecake, it cannot be the case that your brain would physically manifest as not liking it). Thus what you are experiencing right now is subjectively true and a subjective fact; whether what you are experiencing is real (and not a dream or hallucination or illusion etc.) is a question of objective fact. Hence that you like cheesecake is a subjective fact. Whether it is also an objective fact (about you) depends on whether there is a fact about you that entails your liking cheesecake, independently of your merely liking cheesecake (like, say, a brain).
All science is intersubjective–by necessity (we only have access to our own personal subjective states; we then infer from those states what is or is not objectively true apart from those states). So Harris cannot possibly be saying otherwise (for moral science any more than any other science).
The question is not whether a moral science would be epistemologically intersubjective. The question is whether there are objective facts for a moral science to discover, just as in any other science (e.g. that evolution could only have ever been discovered intersubjectively does not mean evolution is not an objective fact of the world).
Incidentally, moral relativism can be objectively true, and thus empirically discoverable as such by science. Harris’s thesis is compatible with moral relativism (his “peaks on a landscape” model is indeed a form of moral relativism); it’s just a question for science to determine if relativism holds for human population groups or types, or if there is a set of universal moral facts. I say more on these two possibilities in my chapter on this.
“That you like cheesecake would be an objective fact about you”
But we are not talking about such things; at least I don’t understand Harris to be talking about this kind of thing. What Harris is discussing is more like “cheesecake is tasty”, and even if it turns out all humanity really likes cheesecake, that would still only say something about how humanity appreciates cheesecake and not about an objective characteristic of cheesecake. And should you be the single person with a distaste for cheesecake and express that, you wouldn’t be contradicting objective facts.
Yes, practising science relies on intersubjectivity, but that doesn’t mean the conclusion is intersubjective. If I measure two sticks and find that the first is about double the length of the second, I trust that whoever remeasures those sticks will come to the same conclusion. However, when I taste a wine and come to some conclusion, there is no such trust. It is even doubtful I would come to the same conclusion myself if I tasted that same wine again at a later time.
So the question IMO is: what kind of intersubjectivity are we talking about when we discuss values and morals, that of measuring sticks or that of tasting wine?
I don’t understand what you are criticizing here.
You just said he doesn’t mean x, and then said he means x. There is a disconnect here. Somehow you are confused into thinking you just affirmed something different from what you are denying. I can’t make heads or tails of that. So I must not understand your point.
Again, you are just affirming what I myself said. So I don’t follow what you think you are criticizing here.
With an appropriate brain scan model and an fMRI you would easily trust when someone likes or doesn’t like cheesecake. Including yourself. It is an objectively and empirically observable operation of their brain.
But it’s unclear what you are questioning here. Again, that your preferences can change is simply another fact science would then prove and thus show was relevant to any true moral system (it would then be accounted for in that system). Harris’s thesis does not entail any particular findings on this one way or another. His thesis is that whatever the findings were, they would be the true findings (unlike all the mythically made up moralities we are running on now; or our best-guess philosophical inference from what scientific findings we have so far, though even that is just an attempt to approximate what a more informed science would discover).
That is not the same question, Dr. Carrier. The first question concerns matters of moral prescription; the second is a matter of description. The distinction is relevant, since it is often moral prescriptions that are said to be relative to the individual or community, not reality.
That aside, I don’t feel the force of your objection. The psychologist qua scientist alone cannot answer whether his or someone else’s view is delusional. That requires the help of philosophical argument or presuppositions, and that is precisely why his finding would be subject to philosophical disputes such as those concerning the Martha Mitchell effect, the Rosenhan experiment, and conceptual issues (see: David AS (1999). “On the impossibility of defining delusions”. Philosophy, Psychiatry and Psychology 6 (1)); and for a bashing of the DSM-IV definition of delusion, see: http://plato.stanford.edu/entries/delusion/#NatDel
“All of the above is constrained (and thus determined) by natural physical laws and objects (the furniture of the universe and how it behaves).”
Constrained by natural physical laws? Are we viewing laws of nature as prescriptive here?
Then you aren’t paying attention. There are many prescriptive facts proven by science (in medicine, engineering, etc.), and they derive necessarily from descriptive facts (hence my discussion of the hypothetical imperative).
You are evidently confused about this and need to read what either I or Churchland wrote on how the prescriptive is a necessary output of descriptive facts and therefore fundamentally descriptive (via the hypothetical imperative).
They entail prescriptive propositions. For example, the laws of biology (which are objective facts of the world) and the desires of surgeons (which are also an objective fact of the world, detectable now even by third parties using an fMRI) entail “you ought to sterilize your instruments before surgery” is true for all such surgeons.
Richard,
Those laws would not entail prescriptive propositions. What you mean to say is that those laws in conjunction with the desires of surgeons do. If the laws themselves entailed prescriptive propositions, then you could deduce a prescription from something of the form “all Fs are Gs”. But you can’t, and you know it. Thus, why speak of them as if they were prescriptive?
Btw, take any law of nature and a desire of surgeons. Show me how a prescriptive statement is entailed. By what line of inference?
Why would you say I am evidently confused? You haven’t shown that here.
That isn’t what I mean to say. It’s what I actually, literally said (“the laws of biology…and the desires of surgeons (which are also an objective fact of the world, detectable now even by third parties using an fMRI) entail…”).
If humans didn’t have any desires, there would be no moral facts.
That humans do have desires, entails there are moral facts. The question is only what they are and whether they are the same for everyone. But those are questions for science to answer. Harris’s core thesis does not presume to declare in advance what those findings will be.
The formal logic is explained in detail in my chapter “Moral Facts Naturally Exist” (although I am astonished that you think there is no logical basis for surgeons agreeing that they ought to sterilize their instruments, so your request looks a little like trolling; but I will assume you are being sincere and you really do doubt that it is true surgeons ought to do that, in which case your answer is in my treatment).
Richard, you’re confused. I initially asked you why the laws of nature were taken to be constraints. You replied that they entail prescriptive statements. But of course, as we both agree, they do not. Nothing prescriptive follows from “all Fs are Gs”. Either you or Harris need to justify talking about laws of nature as if they themselves constrain this or that.
None of that paragraph makes sense to me.
Maybe you are ignoring me and that’s why it doesn’t make sense.
Prescriptive facts do follow from “all Fs are Gs.” In surgery, engineering, everything.
Not only did Kant already demonstrate this; I do so myself in TEC, pp. 340-43 (syllogism on pp. 360-61), showing that Kant’s own demonstration even applies to his own categorical imperative, even though that imperative was his attempt to avoid his own conclusion on the first point (syllogism on this point: p. 359).
I explained this in the very article you are commenting on, so either you didn’t read this article, or else I must assume what you mean to say is that I did not present the full formal demonstration here–but this article already says that and directs you there for the full formal demonstration.
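To illustrate the shape of the inference in dispute, here is a minimal sketch of the hypothetical-imperative form (the wording is illustrative only, invented for this example; the actual formal syllogisms are the ones in TEC, pp. 359-64):
P1. (Descriptive) For any surgeon x: if x operates with unsterilized instruments, x’s patients are far likelier to die of infection.
P2. (Descriptive) Surgeon a wants, above anything competing with it, that a’s patients not die of infection. (A physical fact about a’s brain, in principle verifiable by third parties.)
P3. (Hypothetical imperative) “a ought to do A” means that doing A is what would best realize what a most wants, given all the facts.
C. Therefore a ought to sterilize a’s instruments before surgery. (From P1-P3.)
Every premise is descriptive (P3 being a definition), and yet the conclusion is prescriptive. That is the form of inference being defended here.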
I am not a philosopher, so I feel ill equipped to wrestle with you in this particular bath of baked beans, but one thing that went through my head when I heard about this competition was “What a great way to sell books!” People will buy it in order to challenge it!
Seriously though, one significant aspect of changing minds is the level of confidence that you appear to have in your own words. Because Harris has put a not insignificant amount of money on the table, I would bet that a large proportion of casual observers (sans philosophy training) will treat his claims more charitably than if he had not. “20 grand worth of confidence… they say there’s no smoke without fire… I guess his arguments must be pretty robust… etc.”
Still, it’s a great way to stimulate debate, which is of course the womb where the next generation of ideas is gestated.
Except (a) people can buy it used (which gives the author nothing) or (b) get it at a local public library for free.
(And indeed, if I had twenty grand to throw around on things like this, I readily would. As in fact I explained in this very article. That Sam Harris is rich is an advantage he has over me in spades. That’s hardly to the point, though.)
I’m all for elevating the role of science in morality. The most cogent arguments I’ve heard against it attack the satisfaction/common good premise: why pick that versus something else? In this article, correct me if I’m wrong, I saw a couple of arguments. The psych/schiz reality example implied that we can’t know, so we should just punt. Later you asserted that all people seek satisfaction. I think that reduces to a naturalistic fallacy or a tautology. If meth addicts and sociopaths are seeking satisfaction, then satisfaction has no meaning.
Nobody is arguing against science in engineering or biology, just morality, and specifically the selection of the if/purpose part of the imperative. Where can I find good arguments about why satisfaction/common good is a better goal than satisfaction/selfishness or natural balance or Biblical conformity or MMA?
All your concerns are directly addressed in my chapter “Moral Facts Naturally Exist”.
After completing that, you should follow my debate with McKay (linked in the article above).
Also: as a Catholic, and a Thomist to boot, I enjoy and welcome talk of natures, but what exactly is a nature in modern, mechanical science? When I hear of natures, I think of forms, or maybe even Plantinga’s idea of essences, but the latter is the sort of thing often jettisoned by modern science. In fact, forms can’t be the subject of modern science, in principle. And if they actually do exist, then materialism is false.
I haven’t read any of your books, but the appeal to natures, something which seems entirely philosophical and pre-modern, has a hard time being passed off as modern-day science. Perhaps some other sense is used, but then that will have to be made clear.
Dude. Get rid of that weirdo Platonism. Natures are shared sets of properties. Nothing more. They are not magical floaty things that inhere in objects like invisible fluids.
See here for more. On its application to “human nature” in the present case, follow up by reading Sense and Goodness without God V.2.2.2, p. 328.
Actually, Plato held that Forms do not inhere in us; that was Aristotle. He believed they were immanent in us, and in other things. Odd that a classicist would get this wrong.
That aside, natures are shared sets of properties? Huh? Tell me, what is the nature of any would-be moral agent?
No, that notion was medieval Christians making stuff up based on Aristotle. Aristotle himself said what I said [as for Platonism, I was mocking certain modern Platonists, not Plato]. Aristotle’s theory of forms held that forms do not actually exist unless manifest in a material (otherwise they only potentially exist: thus matter has the potential to be formed, and form is the actualization of that potential); and that a form was simply the pattern of shared properties defining a specific or common object or phenomenon (always a physical structure, a geometric arrangement of parts: read the De Anima for the most direct discussion and detailed examples).
Origen got Aristotle more right than medieval Christians did, and hence his theory of resurrection reflected this actual view: we could not exist unless the pattern that is unique to us was stamped in a material; if it wasn’t for God’s mind keeping that pattern of us in his thoughts like a computer databank, when we died we would cease to exist and we could never be recreated (because the information for how to rebuild that pattern would have been lost). Of course, Aristotle would have rejected the whole idea of God having a disembodied mind that actively thinks like this, but that’s a separate matter.
Hence Aristotle’s actual view was that a form was the physical structure of a material, which gave that material its distinctive properties. Thus, a bunch of rods in the “form” of laying flat on the ground does not have the property of stopping a car, but that same bunch of rods welded into a Czech hedgehog does have the property of stopping a car, and thus “able to stop a car” is a property of a “Czech hedgehog,” which is a word humans invented to refer to matter arranged into anything of such a shape and size.
That’s quite broad. “Moral agent” would include future androids, alien beings, gods, faeries, demons, angels, Jessica Rabbit.
The morals that would be true for Homo sapiens would depend on the nature of Homo sapiens specifically, not just any moral agents whatever (there might be a meta-universal morality, but that would be for science to discover: I discuss this possibility in my chapter “Moral Facts Naturally Exist”; meanwhile, see again what I say about aliens in my paragraph in the present article above).
But assuming you are being sincere and really do mean any moral agent, and thus what puts humans into the class of all possible moral agents (potential or actual), then I answer that question in Sense and Goodness without God V.2.2.3-4, pp. 329-31.
In short, a moral agent is any agent that has desires and the ability to understand, engage in, and act on moral reasoning. In our case these are all the neurophysical behaviors of a physical brain (desires and self-conscious reasoning) and can be defined in terms of minimal structural requirements (the wiring-up that is the sine qua non of those properties, such that any rewiring or reassembly or disassembly that removes your desires or self-conscious reasoning would turn you into something else, not a moral agent–death being the most common example of doing that; certain kinds of brain damage another).
But these structural attributes can in principle be produced in other material systems (computers, alien bodies, the ectoplasmic minds of angels), simply by arranging any material system so that it physically interacts in the same way (or sufficiently similar way, i.e. any desires and self-conscious reasoning will satisfy the requirements of being a moral agent, since that combination always entails it is true the agent ought to do something above all else and can know this, and therefore entails a moral system for that agent, hence “moral agent”).
What is morally true for a given agent depends on just what core desires it has (core desires being desires that would never change no matter what facts of the world changed apart from them, i.e. desires we would never replace, or could never replace, no matter what we came to believe was true about ourselves or the world). All other desires are derivative desires and thus depend on the facts of the world (how to realize the core desires, which depends on how the external world works, what abilities and limitations our bodies and brains have, and so on).
But core desires would again be physical properties of the brain (or equivalent), and can in principle vary across types of moral agents (again, aliens might differ from us in such a way as to be governed by a different morality than us; although, again, there may be a universal meta-morality; etc.).
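To restate the distinction just drawn in compact form (the labels are introduced here only for illustration):
- A desire d is a core desire of agent a if and only if a would retain d no matter what other facts of the world changed, and no matter what a came to believe about them.
- A desire d is a derivative desire of agent a if and only if a holds d only as a means of realizing some core desire, given what a believes about the facts; correct the beliefs or change the facts, and d changes accordingly.
On this scheme, the moral facts for a are fixed by a’s core desires together with the actual facts of the world, both of which are empirical matters.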
The larger point is that you misinterpreted Plato. If you wish to talk about some modern theory like Plato’s, but not attributed to Plato himself, we use the word with a lower case ‘p’. There is a difference between Platonism and platonism.
I have to scratch my head at your portrayal of Aristotle. The form of a compound substance is essential to it; its matter is not. Nothing physical or material can be the form; Aristotle dismissed the idea in Z3, since the form cannot be both separable and a ‘this something’ (an individual).
In any case, if a nature is a “set” of shared properties, then there are no natures unless there are at least two distinct property holders (hence the word ‘shared’). Are you prepared to accept this?
I wasn’t talking about Plato. Don’t try to derail the conversation.
And you clearly aren’t understanding what I said about Aristotle. I said Aristotle did not believe forms existed apart from the matter they manifest in. For Aristotle, form is simply nothing more than the physical structure of a piece of matter. It is not something separable from it.
That is a non sequitur. In no way does there have to be more than one of a thing for a thing to have a set of properties distinctive of that thing. And a nature is a set of properties distinctive of the thing it is the nature of.
We just happened to be talking about shared natures, because you brought up human nature. Nowhere did I say the contingent fact in that case (that a nature is shared among many actual members of a set, namely “humanity”) applies to all possible cases where we would speak of a thing having a nature.
Moreover, even if we were to abide by the ridiculous rule that unique things lack natures (!), I already told you about the Aristotelian distinction between potential and actual things:
If there is only one instantiation of a type, it has a nature with respect to all potential things of that type, regardless of whether they are ever actualized. Thus, for example, if there is only ever one plane triangle, it has the “nature of a plane triangle,” which is all the things true of a triangle (such as the sum of its angles is always 180 degrees), which means “of any potential triangle of any kind,” which does not require any actual second triangle (it only requires that there be the potential for another triangle, which of course there always is as the one actual triangle could be rearranged into another triangle–at the very least).
Only if you imagined something that could not even potentially be duplicated would we have to deal with the weird case of something that does not share a nature with anything (even any potential thing). I’m not sure that’s even possible (since if a thing can occur once, it is hard to understand what would make it logically impossible to occur again), but assuming it is, then the nature of that one non-reproducible thing would be what distinguishes it from all other things, not what it shares with other things.
But “human being” is not an object of that kind. So that’s moot here.
Hi Richard,
Thank you for your thoughtful and extensive reply.
Points taken on the properties we have in common, although I’m not sure I really see their relevance. If exceptions don’t matter to your argument, then I don’t see why the commonality is important either. You seem to be saying that people ought to be able to get whatever they need, whatever that is. That seems reasonable to me, but it doesn’t need all the talk of commonality to justify it. Perhaps this is important to your more detailed argument in your book.
Unfortunately, I will have to read your book before I can get back to you in detail, as it seems my main points are addressed therein. On the basis of what you have said on this page, I remain unconvinced.
You made a few points which I feel I can clarify:
“Transcendent Morality”
I should point out that I am not claiming that there is a transcendent external morality. I just bring it up as the standard alternative moral realist position. I’m a moral nihilist or moral relativist, depending on your perspective.
But I am nevertheless moral. I have my own moral preferences and I stand by them. I really do think that slavery is immoral, and I would fight to abolish it. I would seek to find common ground with slave-owners and persuade them that the institution of slavery is incompatible with their core values. However I think that it is possible that no such incompatibility would be found, in which case I would fail unless I had might on my side.
So I personally value and stand by my moral opinions. I try to keep them consistent with the goal of achieving well-being for all. I just don’t think that my opinion on the subject is objectively any more valid than a slave-owner’s. It’s certainly more valid to me, and that’s all that matters as far as I’m concerned.
If I’m right that there is no objective morality, then there is only opinion. You will have objective empirical facts about norms, etc., but I don’t see how that in itself establishes that objective morality exists. Perhaps the argument in your book could convince me otherwise.
“Fact of the matter”
No, of course I believe there is a fact of the matter on whether God exists. I just don’t think there is a fact of the matter on what morality is ultimately about. If there were a God then the idea that we should follow God’s will has a basis, but the idea that we should ignore God’s will and just do whatever will maximise well-being is just as reasonable (if not more so).
Again, there is a fact of the matter about whether this axiom will lead to life satisfaction. I’m saying that I don’t think there is a fact of the matter about whether this is a good moral axiom, since I don’t buy that morality is about maximising personal life satisfaction.
In any case, I don’t presume to know whether following that axiom will make life more satisfying. It seems unlikely, but not impossible. What if it did, though? What if we had empirical data that supported this? Would you really think that following this axiom was moral in that scenario? I certainly wouldn’t.
Incidentally, this line of thought is now looking pretty equivalent to the slavery one, so I guess I’ll just have to read your book.
It’s not that they don’t matter. It’s that they aren’t exceptions. They are just part of the total system of moral rules. One can say they are exceptions to the rules that “normally” apply, but then that’s the same thing: that there are different rules for different circumstances; “normal” circumstances are just the circumstances we almost always find ourselves in. Thus, “exceptions” cannot be an objection to any moral theory whatever (except those that deny morality is situational).
If you and I have the same core needs and are in the same circumstances, the same actions will have the same consequences, and therefore the action that best serves my needs will be the same action that best serves yours. That is why commonality matters.
Humans have the same core needs and are normally in the same circumstances, thus most moral decisions humans make are the same. Even when they are not identical, they have the same ultimate morality (e.g. exceptions, per above, only adapt more fundamental rules to specific circumstances; the more fundamental rule remains unchanged, even while the action prescribed is different).
Thus the fact that we share so much in common biologically, psychologically, socially, environmentally entails we have a lot in common as regards what works out best for us (especially when we realize we are acting within a social system, indeed a social system on which we are actually dependent–Libertarian fantasies aside–so our actions, to be optimal even for ourselves, must take that into account, and correctly).
That oversimplifies the reality of what that rule, if honestly followed, would actually entail. It’s what it entails (actually entails, not what you think it entails) that is, and must be, the object of scientific inquiry.
But yes, much more is clearer in my chapter on this subject. (Which you should read first, rather than my book; I assume by the latter you mean Sense and Goodness without God; whereas my formal treatment of the underlying logic of the moral theory laid out there is actually in The End of Christianity, edited by John Loftus.)
Those are contradictory positions. So I can’t ascertain what you mean. Harris’s thesis actually is compatible with moral relativism (and essentially refutes moral nihilism). Indeed, Harris’s “peaks on a landscape” model is inherently morally relativist. I discuss the two possible outcomes of a research program in moral science, the moral relativist and the moral universalist, in my chapter in the Loftus book.
The problem with that is that someone who had exactly the opposite view could say exactly the same. How do you know which of you is right? If you say neither of you is, then you have no way to morally improve, and you can never be wrong about anything in morality, ever, even in principle. Because the moral is just whatever you think is moral at any given time. I shouldn’t have to explain what’s wrong with that perspective. But in case you need explaining, my chapter covers that point–a lot.
“If there is no objective reality, then there is only opinion” is also true.
But so what?
Whether there is an objective fact in the case is precisely the question. And you can’t answer that by desire or wish or personal feeling. Facts are facts. You can accept them or reject them. Like the creationist. But like the creationist, rejecting facts just makes you wrong.
Yet you obey it yourself (you live by your own moral system, as you yourself essentially said, because it satisfies you more to do so than not–hence if it satisfied you more to live by a different moral system, you would).
You thus follow the axiom and even base your own morality on it, then you say you don’t buy that morality is based on that axiom.
You need to rethink this.
Hi Richard,
Thanks again for your reply.
Thanks for clearing up exceptions/commonality. I still don’t see why my simpler formulation is not equivalent, but perhaps it’s not so important. Your elaboration of my simple statement was what I intended, and I think this ought to be relatively clear.
It seems I did buy the wrong book! I guess I was influenced by a greater interest in your naturalistic worldview than in the topic of The End of Christianity. I’m reading it from the beginning. So far I haven’t gone much beyond the chapter on epistemology. Is it the case, then, that the slavery example isn’t dealt with in this book at all? If not, is there any other place where I can get a good treatment of it?
I am a moral nihilist in that I don’t think there is any objective fact of the matter about what is right or wrong.
I am a moral relativist in that I think that there are subjective facts of the matter about what is right or wrong which people, including me, feel are important to them. I have my moral framework (which I share with Harris) and I act according to it. It is real to me. I just don’t make any claims that it is any more correct than anybody else’s from an objective point of view. However, I do make judgements about the morality or immorality of others and do so by judging them according to my framework. As such, I am not a moral relativist in the sense of calmly accepting whatever barbaric rituals might be practiced in other societies.
I disagree that Harris or you are moral relativists as commonly understood. If I understand your view of moral relativism, it is that we can objectively say that X is right for people in the context of society A, based on what will lead to the greatest satisfaction in that society, while Y is right for people in society B based on similar criteria.
I agree with this but I don’t think that is true moral relativism.
I think true moral relativism would be to accept whatever criteria a society proposes for what is moral. X is right for society A because they think morality is about increasing their satisfaction and X increases satisfaction. Y is right for people in society B because it is mandated by their holy book and they think morality is about following their holy book.
I would say neither of us is right.
I disagree that moral improvement is impossible in my view, because I can learn to iron out inconsistencies in my own foundational moral beliefs and learn to live up to the consequences of this foundation. I can help to influence the next generation to adopt this framework and so contribute to the moral progress of society.
I also think that it is not true that one can never be wrong about anything in morality, because if your moral attitudes can be shown to be incompatible with your core values, then you have made a mistake.
I’m not rejecting facts. I accept all your facts. I just disagree that the moral norms you uncover are the same as objective morality, because I think it is possible for rational and informed moral norms to be immoral in the light of my own and Harris’s moral framework. The slavery example best illustrates this. Unfortunately I have not yet seen how you address this.
I don’t see why. I could paraphrase with “You thus follow your own taste preferences and even base your food choices on them; then you say that you don’t buy that food choices [for other people] are based on your taste preferences.”
I accept the axiom as consistent with my own intuitions about what is right or wrong. I feel a drive to be a good person, and my intuitions tell me that this drive would be satisfied by behaving according to the axiom.
I don’t see why I need to find a rationalisation to support the objective reality of this axiom for others in order to do so. I’m content with the idea that my moral intuitions arise from evolution, environment and reflection, and have no objective reality outside of my own mind.
—–
Incidentally, on the slavery example… is your argument that in practice slave-ownership would not actually lead to life satisfaction? If so, I think I can counter it.
Respond only to the actual argument I give in the book. If you do that, I would welcome it.
(I said more about it because it also came up in my debate with McKay; if you are interested in facing the best and fullest case, that would supplement the book. But I’d be content to have a response only to the argument as given in the book.)
If you mean Sense and Goodness, I don’t think so. Certainly not the argument you will want most to address.
There are other examples in Sense and Goodness, though. In fact, after finishing that book, you might even be able to reconstruct my argument regarding slavery in TEC before even reading it. Although that would be remarkable (since I wasn’t as grounded in Game Theory when I wrote SaG as I was when I wrote TEC, so SaG might not have a complete template for the argument I develop in TEC).
That’s not necessarily nihilism, since one can still believe in subjective facts, and one can mean different things by “objective facts.” Nihilism is the view that there are no facts (as regards morality).
So imagine everyone else was exactly like you (down to the last detail). They would then agree with you on morality. (Otherwise your moral system is completely random and undetermined by anything and therefore you can have no reason to prefer it over any other…which would be nihilism).
This is a hyper-simplification of what Harris is saying: everyone is (relevantly) like you.
So when you say you reserve judgment about what’s right for others, you are only expressing empirical uncertainty as to how much unlike you people are. But that is what science is good at: resolving empirical uncertainty (of just that sort).
When we did that, we’d find, of course, that other people are not “exactly like you (down to the last detail),” but that they are “exactly like you” in some details. And insofar as those details entail some set of moral facts for you, they do so for everyone else. QED.
The question of which details (or any) is a matter for science, not armchair philosophy or idle opinion, to answer. In the meantime we can use what facts we already have to infer what science will most likely discover in that regard. But that won’t ever be as good as actually finding out what science discovers in that regard. So we should do that.
That’s Harris’s thesis in a nutshell.
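The skeleton of that inference, as a minimal sketch (D and M are placeholder labels introduced here for illustration: D for the relevant shared details, M for the moral facts they entail):
P1. For any agent x: if x has details D, then moral facts M hold for x.
P2. You have details D. (An empirical premise, of exactly the kind science resolves.)
C1. Therefore M holds for you.
P3. Any other agent y also has details D. (Likewise an empirical premise.)
C2. Therefore M holds for y as well.
Which details D actually obtain, and in whom, is the empirical question just described.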
That is one kind of moral relativism, and one unlikely to be true (because people’s core desires are determined by biology more than culture, and cultures are revisable in a way fundamental biology is not). But another kind of moral relativism would be based on personality profile, which is genetically determined (and partly determined environmentally but before an agent becomes capable of self-determination). For example, there could be a moral system for extroverts that differs from the moral system for introverts. I analyze this possibility in general in TEC (pp. 351-54) and find that what we are more likely to end up with is a meta-morality that is the same for both extroverts and introverts, and the attribute of being an extrovert or introvert is just a variable situation that modifies the application of that morality. Although words are human inventions, so we could call it whatever we like. But that would be the clearest way to talk about it in the most common dialects of English.
This is key. Because this is exactly what I am saying (and Harris is saying). You just admitted that what it all depends on are your core values.
So if science finds everyone actually does have the same core values, and that in fact anyone who claims they didn’t was just factually mistaken and being inconsistent…
Well, I hope you can see the point now.
But this isn’t my point at all. I’m not expressing uncertainty about what would lead to the satisfaction of others. That is indeed an empirical question, and I agree that science can help to answer this question.
I am instead expressing the opinion that if another person tells me that morality is not about satisfaction but is about following the edicts of a king, or the commandments of a holy book, or the whims of the voices in his head, or assisting his kin and to hell with the rest of us, we can’t simply say he is wrong.
We can try to persuade him that the king is just a man, or that his holy book is just a book, that the voices in his head are just his schizophrenia, and that the rest of us are just as human as his family. He may accept all that and still insist that his moral imperative remains, and I have no grounds on which to answer him.
I’ve made this point before, and you kind of skirted it by attacking the meta-ethics of Peter Singer, so I ask you again: how would you go about determining whether it is moral to eat animals for pleasure and not necessity? Is it maybe moral for some of us but not moral for others?
Your research program would seem to suggest that we should find out which attitude will make us satisfied. I reject this because if I (subjectively) think it is wrong to cause suffering to animals, then I think (subjectively) that it is wrong for all people everywhere to do so, whether it brings them satisfaction or not. This is how most people view morality and this is what I think the word is intended to mean. Your research program does not then uncover morality but only people’s attitudes towards morality.
I agree with your analysis of moral relativity as more applicable to personality types than cultures, although cultural context plays a role. There are arguments for instance that it is actually moral to perform female circumcision on girls in societies where they would be ostracised if not so circumcised (not that I would want to defend that horrific practice for a minute).
Sure. My core values do make it an empirical question as to what I ought to do in order to satisfy these values.
However, I don’t think this is morality, because I don’t think that all possible core values lead to a concern for the welfare of others. I think a rational well-informed person could engage in acts of violence, murder, rape and exploitation of others if their core values were so aligned. I think calling such a person moral is an abuse of the term.
So, I can be morally wrong in making choices inconsistent with my core values (as shown by empirical evidence), but if my core values are compatible with acts most of us would consider evil then I don’t think there is any way of showing that I am objectively wrong.
And so, I hope you can see the point now!
Unless, of course, this is answered in the TEC. I’ll see if I can persuade my wife to allow me the indulgence of purchasing another book so hot on the heels of the former…
Then you are wrong. And my chapter in TEC proves it.
Imagine if you said this:
“I am instead expressing the opinion that if another person tells me that curing illnesses is not about fighting germs and viruses and biochemical disorders but is about following the edicts of a king, or the commandments of a holy book, or the whims of the voices in his head, or assisting his kin and to hell with the rest of us, we can’t simply say he is wrong.”
You would immediately recognize that what you were saying was incorrect. Indeed, insupportable.
I am saying you are exactly that wrong here, and for the same reason. And I have formal deductive proofs in TEC that show it. Although you might be able to more readily follow the main text which colloquializes them.
Even Socrates could show it, less formally, by simply asking such a person “Why should you obey the edicts of that king as opposed to some other directions entirely?” and having the claimant continue to answer the “why” of every answer they give. They will always end up in the same place: “Because it satisfies me most to do so.” Which is an empirically testable claim. One almost always proved false. (Which is why we don’t obey kings anymore.)
People who are irrational or refuse to abandon false beliefs might not recognize this. But we are not talking about what irrational and misinformed people would believe. We’re talking about what is true. Which is never discovered by being irrational and misinformed. Hence the several paragraphs in my article on exactly this point.
First, you would empirically (scientifically) nail down all the relevant facts of the world (thus testing all the claims of vegans as to why we should be vegan, for example, and eliminating every one proved empirically false, i.e. based on false claims about the facts of the world, such as what kinds of experiences animals can have or actually do have, what actually happens in the industry, or what the health and environmental effects actually are, and so on–generally, I predict almost all vegan claims will be scientifically false at the end of any such study, so we would hardly even need to continue, since a true morality cannot be based on false claims to fact). Then you would nail down all the relevant facts of what people really want out of life.
In the latter case, for example, vegans will appeal to compassion in their target audience, so to test the moral imperativity of veganism we would first have to verify the moral imperativity of compassion, and I am fairly certain we can do that, and I outline some of the reasons why science is likely to do so if ever it tries, in Sense and Goodness without God. So to answer your hypothetical, let’s assume that that leg of the research program has been completed and actually got that result (people live more satisfied lives when they live reasonably compassionate lives, and everyone who knows the difference does in fact end up preferring the latter).
Would compassion for animals entail not eating them? There is no logical connection. There is a logical connection with raising and killing them humanely, but once we are working toward that, we have no reason left to not eat them. And illogical moralities (as in, conclusions about what is moral that do not logically follow from the stated premises) cannot be true moralities. Any more than illogical medicine can be true medicine or illogical engineering true engineering or illogical astronomy true astronomy. In other words, same as every other science.
So vegans might try appealing to some other value. But then science would determine if that’s a value everyone shares. That is, again, an empirical question. Let’s suppose science finds that of all the values vegans can appeal to that do logically entail not eating animals, none are universally shared by all people, nor entailed by any values that should be shared by all people (like compassion, per the “previous study” we just imagined having completed).
And let’s suppose that for most people who do share any of them, those people have values they hold in greater esteem, the pursuit of which makes their lives more satisfying than pursuing the values vegans are appealing to (e.g. a value for avoiding tiny increases in mortality risk, which is overweighed in everyone by the tremendous value of a whole life of easier and more rewarding culinary experiences–I mean overweighed in actual scientific fact, as in science verifies that that is in fact what most people value more, and that this is not in consequence of any factually false belief or any logically fallacious inference).
And let’s suppose that in the end only a tiny percentage of the population will actually have more satisfying lives as vegans than they would have had on any other modality of diet.
Science will then have proved that for those people, they should be vegans. But it will not have established that this is a universally true moral fact. It will in fact have empirically established that it is not. And any vegan who insisted otherwise would just be like a creationist insisting evolution is nevertheless false or a climate science denier insisting that all the facts are wrong and there really is no global warming.
In TEC I outline that what science would then have discovered is a universal moral fact in the form of a covering law. For example, “it is always moral to live what is for you a most satisfying life, when doing so does not go against any other moral directive,” which universal rule would entail that vegans should be vegans and non-vegans should be non-vegans, in each case an individual, situational application of a universal moral rule. Note that people tend to think the rule I just stated is self-evident, but in fact it is regarded as immoral in some moral systems, such as those in which suffering is elevated as the better life or in which enjoying the pleasures of life is a sin; so insofar as science proves the contrary rule, those moralities will have been proven as false as creation tales or theories of the afterlife.
If you (subjectively) think it is wrong to eat beans because beans contain reincarnated human souls (as Pythagoras is alleged to have claimed), then you would think (subjectively) that it is wrong for all people everywhere to eat beans.
But your subjective feelings would not be factual. Beans don’t contain human souls. So your values are derived from false beliefs. As science would prove.
Thus, you can’t appeal to subjective feelings as authoritative guides. Those feelings are all too often based on factually false beliefs. So you should want to be scientifically informed and keen to purge false beliefs. And when you do, all values based on them will be exposed as false as well. Science does this better. And even when you can’t do it scientifically, you can do it as scientifically as you are able, and thus approximate a scientific result.
Case in point: eating animals does not cause them suffering. Death is painless; in fact, it ends all pain and suffering. So if you were against animal suffering, you should actually be in favor of killing them (humanely before they get eaten by predators, wracked with fatal diseases, or suffer the slow miserable death of old age–there is a reason we often “put to sleep” extremely old or ill pets).
So if you are basing your belief that you shouldn’t eat animals on the belief that eating them causes them suffering, your belief is false right out of the gate. We don’t even need to do much science to confirm that (it’s already a fairly thoroughly confirmed fact of biological science).
One could then debate the degrees of suffering produced by different kinds of husbandry, but that’s all manageable (e.g. you can vote for and patronize the more humane suppliers or raise the animals humanely yourself), and runs aground on the issue that vegetables also produce suffering (in the humans who labor in that industry, often in awful conditions for abysmal pay), so the mere fact of causing suffering cannot be reason not to eat the food produced by any given system (indeed, arguably more suffering is caused by fruits and vegetables, because it is human suffering of which its victims are fully aware, both of the facts and their consequences to their lives, which is, IMO, more horrible).
So one would have to sort out all the actual facts (all of which are empirical) and one’s actual hierarchy of values (e.g. even vegans must value eating delightful food more than the suffering of bottom-rung farm laborers) before coming to conclusions here–but these are again all empirical facts (i.e. whether you value eating delightful food more than the suffering of bottom-rung farm laborers is itself an actual fact, determinable by a scientist studying you–but even in the absence of a scientist, it remains an empirical fact that is either true or false, and is testable empirically; it is not a matter of opinion or ideology or tradition or anything else).
That’s an empirical question. You cannot answer such questions from the armchair.
Maybe we shouldn’t always be concerned about the welfare of others. If that’s the fact, then you have to live with it. But whether it’s a fact has to be determined. Empirically.
We aren’t much concerned about the welfare of Syrians, for example. We would prefer not to invade Syria for reasons more compelling to us than the hundreds of thousands of murders and millions of suffering refugees. Is that right? Or morally ought we to invade Syria and fix it? These are actually empirical questions, not only as to what the total consequences of both options would be, but also as to which values we really do esteem more: the welfare of our own society, or the welfare of other people not in it (if what was happening in Syria were happening in Texas, we’d be invading in a heartbeat and everyone would be on board with that).
Which we actually value more is an empirical question again. As is even the question of which we should value more, since what we do value more will be based on reasoning, and that reasoning might be fallacious (even such that when corrected to be non-fallacious we switch our values) or that reasoning might be based on premises that are false (even such that when corrected on the facts we switch our values). And science can correct us on both. And absent the science, we have to get the best empirical access to those facts as we can. Or else we can’t claim our opinions in the matter are at all sound.
Hence even in the absence of a completed science, all these things are still questions of fact that can only ever be known empirically. So all the moral conclusions that follow from them are themselves empirical claims to fact.
>>by simply asking such a person “Why should you obey the edicts of that king as opposed to some other directions entirely?” and having the claimant continue to answer the “why” of every answer they give. They will always end up in the same place: “Because it satisfies me most to do so.”
No, it would end by saying I do it because that’s what you ought to do, that’s all. Our ultimate values have no justification possible.
Exactly as you would end if asked repeatedly why you have to pursue “satisfaction”. In the end you too can only say: because that’s what you ought to do.
You have also defined “satisfaction” into meaninglessness – e.g., I’m “satisfied” by suffering and sacrifice for others??? – yet loaded it with silly add-ons with underlying mythologies at work, so we can’t hurt others and wreck your pre-desired nice-nice outcome, e.g., you’ll “truly” hate yourself if you’re mean to other people, so that’s not real “satisfaction” (because persons are sacred and equal, derived ultimately from the Christian mythology of souls, then reformulated by Kant as “dignity” with an honest admission of needing god to square that circle).
This is just gainsaying the facts without any facts to the contrary.
Obviously you can irrationally insist the earth is flat no matter what the evidence.
That does not make the earth flat.
It just makes you a delusional fool.
As in geology, so in morality.
Hi Richard,
Once again, I do appreciate all the effort you are putting into answering my points, and enjoyed reading your detailed account of how we might determine the morality of eating meat (incidentally, any problem I might have with eating meat or eggs would indeed be on animal welfare grounds, as you note).
Unfortunately, it seems to me that it is quite unlikely you will convince me as you appear not to be really addressing my core issue with your framework, which I will now attempt to explain more clearly.
If it is even hypothetically possible for your system to show that we ought not to be concerned with the welfare of others, then I don’t think that this is morality.
I have not at all been questioning that there are empirical questions about what might satisfy us, but it seems to me that there has been an unintentional equivocation. What you call morality is simply not what almost anyone else calls morality, including Harris. Harris grounds his morality in the utilitarian concern for others, for all conscious creatures in fact. This is much more like the concept of morality shared by most humans than your account of the term.
This means that I am for the time being calling into question your second premise (although I will explain in a moment why I might ultimately accept it after all):
“The moral is that which you ought to do above all else.”
This is because your argument has convinced me that there are at least two senses of the word “ought”. There is the selfish hypothetical ought of “If you want to be satisfied, you ought to…” and the moral hypothetical ought of “If you want to be moral, you ought to…”
You may be right that for many people, and in most situations, those are the same thing, but I do not think that you are right in all cases, in particular for psychopaths, sadists, people who don’t care about the welfare of animals and people who simply weight their own personal satisfaction much more than the suffering of others.
There is no objective “above all else” in this view. Morality is just another goal or drive, like the desire to be healthy or the desire for satisfaction. Only for those who view morality as the most important goal is morality equivalent to that which you ought to do above all else, and for those people this subjective attitude applies to all people, not just themselves. For those people, myself included, it is subjectively true that one always ought to act to promote the welfare of all, even if this comes at a real cost to personal satisfaction.
If we can imagine an extreme thought experiment – imagine a parent in the impossible situation of having to sacrifice a child in order to save the lives of thousands of people in a city. The circumstances don’t matter. I think in this extreme case, the desire for satisfaction and the desire to be moral are in conflict, and for most people the desire for satisfaction would win out — the love of the child would outweigh the deaths of even thousands of strangers. Though the deaths of those strangers will have a very negative affect on life satisfaction in the years to come, the death of the child would be even worse.
And yet, I think most people would agree that the moral thing to do would be to allow the child to die. A parent committed to morality and of exceptional courage might make the decision to sacrifice the child, and I don’t think they would be doing so because they think the guilt from allowing strangers to die would be greater than the guilt of allowing the child to die. I think they would do so knowing that this is the more painful path. They are not motivated by fear of greater guilt but by an iron resolve and a robust internal sense of right and wrong.
This is the funny thing about morality. I think it really can cause us to make decisions in the full knowledge that it will decrease satisfaction. I think there are people who care more about being moral than they do about being satisfied, and I don’t think that they are making a mistake when they decide to sacrifice satisfaction in order to be moral.
With this in mind, I could accept your second premise after all, with the understanding that “ought” implicitly means the moral ought (in which case it’s simply a rather vacuous tautology, as you note), and instead reject your fourth:
“All human decisions are made in the hopes of being as satisfied with one’s life as one can be in the circumstances they find themselves in.”
I can of course anticipate your answer. You will say that these people will simply get more satisfaction out of being moral. That may be true in typical cases, but I do not accept that it is so in extreme cases such as the one I laid out.
It might also be helpful to consider the opposite case, of a cruel and oppressive dictator such as Stalin or Mao. I think these immoral psychopaths got great personal satisfaction from defeating and torturing their enemies. I believe that every successful opportunistic grab for power during the rise to national leadership must have been very gratifying for them, no matter what the cost to their nation and countrymen. I doubt Mao regretted a thing he did on his deathbed — in fact I imagine he got immense personal satisfaction from his quasi-deification in the Chinese mind.
In the pursuit of personal satisfaction, therefore, he was tremendously successful. But this came at a cost of tens of millions of lives. By any commonly accepted standards of morality, he was a moral monster.
I think it is wrong to cause harm, whether this causes me satisfaction or not. You are quite right to say that my moral impulses are then not factual, and no more defensible than the moral impulses of the person who follows some entirely arbitrary moral code. This is no more a problem for me than the fact that my artistic tastes differ from other people’s. These attempts to provide an objective justification for morality seem to me to be a rationalisation for those moral beliefs you hold so strongly because you are an intrinsically good person.
But as a rationalist philosopher, I think your self image demands that you be able to justify your moral intuitions.
I say this not to attack you but because this is the position I found myself in when I came to the conclusion that there was no such justification. It troubled me greatly. I desired very much to be rational, but also to be moral.
Finally I realised that no justification is needed. I desire to be moral because that is part of who I am. I can no more stop desiring to be moral than I can desire to stop living. It’s just one of the fundamental drives that motivates my behaviour, and that’s ok. If you remove drives, after all, there’s no justification for doing anything at all (as you note yourself).
Then you are replacing facts with your own opinions. Like a creationist.
Read TEC, pp. 343-47.
Not really. He grounds that utilitarian concern for others in individual human needs. If you missed that, then that is another example of how he sucks at explaining himself. I read him carefully, and listened to his responses to critics, and I got it clearly. So it’s there. But perhaps it’s not clear enough for most people, which IMO is definitely possible. He is not very rigorous or careful in his argumentation, and often writes or speaks confusingly.
The bottom line is, if you are talking about something else with the word “moral” than what I am talking about, then what you are talking about is not what we ought to do (exactly like the Christian telling me to obey his morality: my every argument on that point then applies to you–see TEC, pp. 335-39). You are thus not talking about anything of any actual use. Because it will be a demonstrated fact that we always ought to do something other than what you are calling “moral.”
On this very point see TEC, pp. 340-43 and 347-51.
That’s an empirical question. It is for science to answer. Not for armchair speculators to answer.
I have already addressed psychopaths (as did Harris). See comments above.
I have also addressed self-sacrifice–indeed using almost the same exact example you came up with (it does not contradict satisfaction pursuit, but fulfills it). See TEC, p. 350.
As to the rest, you may be wrong about how other people should behave. That is simply for science to discover. Or in the meantime, for us to discern from what empirical facts we have; and if there aren’t enough facts to decide, we must admit we don’t know. We cannot elevate our opinions into facts in the absence of any evidence whatever that they are anything other than how we want people to behave, rather than how they actually should.
The bottom line is, you will have no argument for any x being moral (none whatever), if it is not an appeal to desires in the person you are claiming is governed by x, and the comparative consequences of their options. Both of which are empirical claims open to scientific test.
Otherwise, all you are saying is “I want you to do x,” not “you ought to do x.” And the former can have nothing to do with “morality” in any meaningful sense. More importantly, it is a pointless utterance if there is no reason for the person you are speaking to to do x. You are merely complaining that people don’t do what you want them to. You aren’t actually saying anything true about how they should act.
>>Obviously you can irrationally insist the earth is flat no matter what the evidence.
Ignoring that you can provide evidence in that case that we can both accept, whereas you are unable to provide any evidence I can accept as to why I OUGHT to prefer satisfaction over duty. (Unless you define away “satisfaction” to be whatever I want to do, as you pretty much do, thus rendering it meaningless.) Nor can I provide any to you as to why you OUGHT to pursue duty. There is no ground for the ground of what we culturally value most.
Or simply explain why I must value satisfaction over duty (without defining away “satisfaction”). Though I know you won’t, because you can’t. (E.g., “See TEC 429 blah blah,” where you don’t answer any better either.)
You really can’t see the forest for the trees, can you? You’ve skipped the most fundamental question of thinking about what we ought to desire, then claimed you “solved” morality by getting what we already desire because that “satisfies” us. You’ve merely done a sleight-of-hand card trick.
Also, your link above to an explanation of why mean people can’t really be satisfied by cheating the system – the golden rule has network effects – is silly and refuted by the fact that we already constantly cheat the system and yet it works anyway. I assume you cheated to get into grad school; you’re way ahead on satisfaction. 😉
Some people get killed; it doesn’t follow that society falls apart or, more to the point, that I personally suffer negatively from killing someone and stealing his millions and therefore cannot be “satisfied” by that. Even if society is worse off, I’m still way ahead and “morally” MUST do it on your theory, once we exclude your indefensible, religiously grounded add-ons against harming other sacred “persons.” And going from there to saying, well, we’ll actually hate ourselves if we’re uncaring, is just dumb.
This Jr. High defense, deployed to cover obvious problems with your theory, betrays a desperate attempt to reach a pre-desired outcome, right? It’s not like logic led you there.
We can provide evidence of the actual consequences of the actions available to you and evidence of which of those consequence-outcomes you would actually find more satisfying or dissatisfying. And those consequences to consider are far more than those you seem to be aware of here. As I have explained in all my writings on this subject.
And that’s all there is to it.
But you won’t understand that because you don’t read anything I write or pay attention to anything I say and you don’t accurately repeat it even when you pretend to.
I feel like you are making the exact same mistake Harris is making, which Benson pointed out. I do not see how your thesis is different from Harris’s; you actually say the core is the same anyhow. I do not think your sophisticated version of Harris has alleviated the problem that was pointed out on Benson’s blog.
I feel like you are doing the same thing… you are assuming utilitarianism, and then saying science can inform the utilitarian calculus needed to make the right decisions. I actually agree with this argument – as do most philosophers I have read. In fact, this is my moral point of view… except I am a moral fictionalist, not a realist. I do not see how you actually defended moral realism in the meta-ethical sense… rather, I see that you have assumed the meta-ethical part and then built an objectively grounded system on that meta-ethical assumption. Which I am fine with anyway…
However, you then say you managed to either avoid or falsify Ophelia’s point and other people’s points on this subject… and this seems to me to be reaching. I simply see no argument that you have presented that defends that… maybe I missed something… but I read this article three times and I still do not see it. I do not see how you managed to avoid the is–ought problem at all.
You said you spoke about it before… is there a free resource I can locate that speaks to this?
I am not assuming “utilitarianism” and in fact I even explicitly explain that in this article. So please re-read my article and comment on what it actually says.
Then please explain what “Ophelia’s point and other peoples points on this subject” I have said I “managed to either avoid or falsify” and why what my article actually says does not do that.
I can’t address vague generalities. Least of all vague generalities that ignore the very article they are commenting on.
Richard,
Your “moral science” has been in existence for several years now. We respect science because it works. Where are the “difficult moral questions” that have been resolved? I have read your papers. If your moral science has any value, it should by now have produced results. An “outline of a research program” is, at this point, not sufficient. We need to see the results of a research program that has been completed. Are there any research programs that are currently in progress?
I don’t know what you are referring to. The research program I describe in Sense and Goodness without God (and lay the most formal foundation for in The End of Christianity) has not been enacted by any scientific research institute in the world so far as I know. If you know of one that has, do tell.
Huh? How can we begin a scientific research program until we have outlined it?
I fully agree with you that “we need to see the results of a research program that has been completed,” so if you agree with that, then you should agree with me that we need to get started instituting and conducting that research program so that we can see some completed results.
In the meantime we can talk about what hypotheses are most or least likely to be confirmed by it and why, based on the evidence so far available, but that’s still just scientifically informed philosophy. If we want actual full-on science, we have to actually do some. So do you now agree we should get on with that and actually start doing some?
That’s exactly what Harris is saying.
Hello, Richard! Excellent article, per norm. I’m a big fan of Philippa Foot (teach her in my Ethics classes!) and of yours, of course. I haven’t read your chapter on this – obviously, I shall have to soon! – so pardon me if you address any of these, but it seems there’s a few places one could try to get traction for a counter-argument. Bear in mind, I’m spitballing, here, rather than stating my own, considered opinion, but I’d love to hear what you have to say about these possible objections, if you have the time.
All of them turn, in some sense, on the definition of “moral” used. In no particular order:
The “Above All Else” Issue: I happen to like your definition of the moral and have no objections to how you use it, but when reading your argument (and in wrestling with Foot, previously), it’s occurred to me that some might object to claiming that the thing one should do under a hypothetical imperative answers to this. That is, it’s hard to say (x) is moral, i.e., the thing that should be done above all else, when (x) is only what you should do assuming (y). What do you say to someone who objects that calling (x) “moral” vitiates the term? (I’m still working on my own response, and I doubt it would be as good as yours, anyway.)
The “Limits of Science” Objection: This one is a poke at the latter part of premise 5. Even if one is willing to grant that it is in principle possible for science to discover what will maximize the happiness of a given person in a particular circumstance, doesn’t that leave room to object that because of certain facts about humans, their psychology, and the opacity of the conditions that lead to their happiness (even to themselves) science cannot *in fact* discover what will maximize the happiness etc.? This objection, of course, does not depend on skepticism about the involvement of science, per se, in morality, but rather on skepticism that it can deliver an answer to what must be done *above all else*. My own opinion is that, if we grant the skeptic his point, we must conclude either (a) that science can only deliver an incomplete morality, able to tell us what must be done above all else in places where we can rely on the common features of humans to instruct us but remaining silent when we get to a point where the particularities of a given human would decide the issue (or, at least, in some cases) OR (b) that such issues are not moral issues. I’m not certain those are the only options, nor am I sure what to make of them, exactly.
Finally,
The Incommensurability Thing: It might be that there is no singular answer to what, in a given circumstance, will maximize the satisfaction of any particular human being IF we include “happiness” in the overall matrix of satisfaction-determination AND we assume that humans at least sometimes have incommensurable desires. In such cases, there is, by definition, no one thing that must be done above all. Such situations needn’t involve trivial things, either, so one can’t just wave away those cases as unimportant. Is there available a modification of your definition of the moral that can deflect this worry without “breaking the system,” so to speak?
I look forward to hearing your responses! Thanks, as always, for some wonderful, thoughtful blogging!
TEC, p. 348.
TEC, pp. 343-47 (and exact wording on p. 364 and analysis in n. 28, p. 424, and n. 35, p. 426).
To relate all that to what you said: it’s (a).
(Except that science will be able to answer some of the particularized questions, too, just as it can in “unique cases” in medicine and engineering and agriculture.)
Analogy: origin of life, origin of universe: possibly science will never be able to answer those (the first esp., since most of the evidence has been erased and yet the options that fit existing evidence are many). But even then science can give us a lot of answers about those, from which we can infer the most and least likely possibilities to a probability below the high bar of scientific certainty–but still above the bar of total ignorance. For example, science can inform a doctor or engineer how to best adapt general principles to unusual unique cases even when it can’t or hasn’t yet decisively answered that question. Likewise in morality.
On everything else, we would have to say in moral science exactly what we say in other sciences: “we don’t know the answer to that question.” That is a meaningful result, because it means anyone who claims to know the answer (as in creationism, so in morality) we can say definitively is wrong, in the sense that they have no basis for what they are saying and thus cannot claim to honestly, much less scientifically, “know” what they are saying is true.
Moral ignorance is as likely as ignorance in any other domain in science. But that we will be ignorant of some things is no reason to make no effort to dispel what ignorance we can (we would not say “we can never discover everything about how life originated, therefore we should not study the origin of life at all and we should shut down all protobiology research right now”…nor would we say “we can never discover everything about how life originated, therefore we should just base everything we think about the origin of life on tradition and opinion and randomly selected ideological systems”).
TEC, pp. 355-56 (with n. 33, p. 425).
(BTW, the title of Harris’s book is a reference to his answer to this very question. Just FYI. I just formalize the point in terms philosophers will more readily be able to work with.)
Thanks! I’ll pick this up ASAP!
I must admit up front that I do not have the time or resources to read all the articles and books referenced, so if I am raising an objection that is addressed elsewhere I apologize.
You are far too flippantly dismissive of the fact/opinion distinction. You seem to imply that there is no answer to that dilemma, even in science, so it is not a valid criticism. This is demonstrably false. Every scientific theory can be asked this question, and an answer must be provided in every case.
How can the theory of gravity be shown to be more fact than opinion? It can be tested and verified. Anyone can measure the acceleration due to gravity and see that the results match the theoretical model. There are some situational limits that could bear improvement, but it is this very testability – not contingent on any observer’s point of view – that grants the theory of gravity a claim to objective truth, even if the theory is not perfect and does not have a complete claim to objective truth. This is why it is fact and not opinion.
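(To make that concrete, here is a toy sketch of the kind of test anyone can run – my illustration only, with hypothetical measurements, not anything from the comment above.)

def estimate_g(height_m, fall_time_s):
    """From h = (1/2) * g * t**2, solve for g = 2h / t**2."""
    return 2.0 * height_m / fall_time_s ** 2

# Hypothetical measurement: a ball dropped from 4.9 m lands in about 1.0 s.
print(f"estimated g = {estimate_g(4.9, 1.0):.2f} m/s^2")  # ~9.80, matching theory

Anyone who repeats the measurement gets the same answer within error, which is exactly what the comment means by a claim to truth not contingent on an observer’s point of view.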
Your doctor/schizophrenic example also has a similar solution. Peer review and testability. The schizophrenic can check her perceptions with other people, and if those perceptions are regularly disputed she can be fairly sure they are delusional. This is not even the only way. I would also point to the movie “A Beautiful Mind” (2001), where the protagonist is able to realize he is delusional by noting that his hallucinated people did not age appropriately. There is always room for doubt or theoretical mass delusion, but I doubt we need to visit hard solipsism here. A measured claim to truth over opinion can be applied whenever a theory is testable.
So we come around to premise 5. How are you going to objectively measure satisfaction? This is not a trivial detail to be worked out later. If satisfaction has no objective measure, your moral system can have no claim to objective truth, and will be forever indistinguishable from mere opinion.
I would also be interested to know if your system is one of monist or pluralist moral values, since I find the monist position inadequate and the pluralist one to be incompatible with the idea of a single greatest good, but my first objection far outweighs my second one. Do not bother with this objection until you deal with the first one, if you feel generous enough to deal with either.
No, you are far too easily tricked into believing the distinction matters. Instead of reading our formal demonstrations that values are facts and thus there is no fact/value distinction.
Repeating the same old refuted arguments is what creationists do. Don’t be that guy.
I don’t understand this remark. I have never said any such thing. I have said it is not even a dilemma. Which is saying there is an answer to it. TEC, pp. 334-35, 340-43.
Same as any statement of the form “moral agent x wants y outcome more than z.” That is a statement of fact that can be tested and verified. As can “all members of Homo sapiens want y outcome more than z” or “members of Homo sapiens with properties x want y outcome more than z, but members of Homo sapiens with properties w want z outcome more than y” and so on.
Moral facts ensue.
The same way the sciences of psychology and sociology and neurophysics already do (and better ways are coming, as functional brain scan resolution increases).
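(As a toy illustration of the kind of empirical claim at stake – my sketch, with made-up data, not an instrument anyone has actually built – a statement like “agent x wants y more than z” can be cashed out as a prediction about repeated forced choices:)

from collections import Counter

def preference_estimate(choices):
    """Given observed picks from repeated y-vs-z trials, return the
    empirical probability that y is chosen over z."""
    tally = Counter(choices)
    total = sum(tally.values())
    return tally["y"] / total if total else None

# Hypothetical data: one subject's picks across ten trials.
observed = ["y", "y", "z", "y", "y", "y", "z", "y", "y", "y"]
print(preference_estimate(observed))  # 0.8 -> supports "x wants y more than z"

The point is only that such statements are testable and falsifiable in principle, like any other behavioral hypothesis.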
As I’ve said repeatedly: that is for science to discover. The Harris thesis is compatible with either outcome and presumes neither. The very title of his book is based on his statement of exactly that.
If you want more specifics on how a moral science could determine a number of type- or group-dependent systems of moral facts, read TEC, pp. 350-51.
I suspect it will end up being monist with pluralist riders (one monist morality with variant add-ons for types, which add-ons can be called personal principles rather than morality in the universal sense). But that’s, again, an empirical question. It can’t be conclusively answered from the armchair. The case for its probability is in TEC, pp. 251-54 (formal demonstration on pp. 362-64).
I am sorry if I am bringing up things you have addressed elsewhere, I can only say in all sincerity that I have not seen the standard response yet. If I am truly too far behind to keep up, don’t waste your time responding to me. I might suggest that if indeed commenters are required to buy your books in order to discuss things here, you might be more up front about that and save everyone a lot of time and effort.
When you ask:
I say the answer is most definitely yes. The answer is that there are indeed objective methods available to the schizophrenic, not that the distinction does not matter. If you are truly arguing that there is no distinction between a value and a fact, I will have trouble taking you at all seriously. It is trivially easy to come up with a value that is not a fact. “PZ writes the best blog on FTB.” That is an opinion, and not a fact. One can tie this opinion to various facts – hits, for example – but it is an opinion, and only an opinion, as long as there are components of it with no objective test to which its claim to truth might be tied. The existence of detractors has nothing to do with this particular problem, and it is confusing to me why you even brought it up in that context. This is a problem of method, not one of consensus. It is not a problem for evolution that creationists exist. It would be a problem if evolution made no predictions that could be put to the test, but of course that is not so.
If you have indeed devised a method to apply a number to satisfaction in an objective way, such that satisfaction values might be compared and resolved against one another, then I salute you. You have indeed created a scientific and objective moral system. To be taken seriously by the wider public, you should put forth some examples of how some moral dilemmas can be solved by the use of your method. People can appreciate a cell phone far more easily than electrical engineering and quantum mechanics principles, and so it would be with a scientific moral method.
You seem to have misread me. I was arguing exactly what you just did, not against it.
Perhaps (judging by your example) you are confusing aesthetic opinions with goal-oriented values. Those are not the same things.
Moral facts do not follow from opinions (not only because opinions aren’t widely shared but also because they can be wrong even for the person uttering them…e.g. you might derive your opinion from a false belief about the object of your opinion, e.g. in the PZ case if you have not read all the blogs on FTB, you cannot claim his is the best, so if you do, your opinion is fallacious).
Moral facts follow from actual values–which does not mean the values you think you have or just happen to have, but the values you would have if you derived your values non-fallaciously from true facts (and not false beliefs) about yourself and the world. Hence, your actual values (what you really would value if you were right about everything).
As to questions like how we would measure satisfaction comparatively, that’s an empirical question for scientists to work out. It is not relevant to the underlying facts. That we didn’t have telescopes would not mean there are no mountains on Mars. Regardless of what instruments we presently have, the facts of the world remain objective facts of the world. Thus, a person’s greatest available satisfaction state is an actual objective fact about them. Whether we presently have instruments to detect it or not. And all moral discourse is covertly appealing to that and thus already making an objective fact claim about it–whether people realize it or not.
All of which is explained in my chapter “Moral Facts Naturally Exist.” (That last sentence I even exemplify with an analysis of Christian moral philosophy, which is supposed to be a paradigmatic example of denying what I just said, but in fact ends up being a confirmation of it.)
And so we need to acknowledge that and start approximating results by observing what the facts are in that regard. And we can do that pre-scientifically until science gets going on it.
By now I’m sure you’ve seen John Shook’s response: http://www.centerforinquiry.net/blogs/entry/a_confutation_of_both_sam_harris_and_richard_carrier_on_science_and_moralit
I’m aware of it. I just won’t have time to get to it until next week.
She was in a philosophy podcast a while back, and this sort of came up during the conversation. She made some claims which seemed to be to the effect that there would be no “science” of morality, since it is “philosophical.” It’s not clear to me how she was drawing a line between science and non-science, what non-empirical truths or methods she thinks need to be involved in morality, or why exactly she was taking that position. It was more than a little confusing, given her other views (but that might be my problem understanding her). As I’m remembering it, this was tied up with the larger conversation they were having about some (other, related?) misconceptions about her views, being an “eliminative materialist” about consciousness and so on. There were too many threads going at once, so a really definite answer never quite materialized, as far as I can tell. It’s interesting, and they cover a lot of ground, so I figure it’s worth a listen either way.
Exactly. I can’t make anything out of something so ambiguous. Rather, read what she writes in Braintrust, pp. 185ff.
Note that even I could say something similar: since no one has actually begun the requisite research program I and Harris et al. are calling for, we actually can’t make formally scientific statements about moral facts right now, we can only make philosophical ones that are informed by what science we do so far have. But I would not say that state of affairs can never change (we clearly could know a great deal more than we do, if we’d just undertake the research program). Although it might well never change for some questions of moral fact (e.g. it’s possible no science will know everything in its purview).
What an odd reply. I did not ignore what you said in the article, I quoted it and pointed out what strikes me as an obvious flaw. If you have an answer to it, I would love to know what it is. Here again, is the quotation:
My objection is that we answer that question for any x in science in a way that we cannot for an x in moral ‘science’, namely through experimental methodologies. So the logic of the argument is flawed; they are clearly in different categories. That is not circular; in fact it breaks your circle, surely. Unless you really mean that we can use reproducible, experimental scientific methodologies to establish what are moral facts? An example would be interesting.
You said this. To which I replied that your claim is already directly addressed by my article. Extensively. You need to interact with what my article says about that. Because I explain in my article (and link to references with formal demonstrations) that “we answer that question for any x in science in a way that we cannot for an x in moral ‘science’” is false. I spend several paragraphs on why it is false. So you can’t just keep repeating the claim. I’ve already refuted it. So respond to the refutation of it.
Minor nit, but may be important.
Consider:
I argue that all coherent belief frameworks are axiomatic. That is, they will contain beliefs for which there is no justification in that framework. The alternatives are grim. You can reject basic math and logic. You can allow circular justifications. You can allow endless non-circular regresses of justifications. Hopefully we can dismiss those alternatives out of hand, and thus it necessarily follows via some simple graph theory (math) that any non-empty belief system will have at least one belief which lacks justification. Those unjustified beliefs are what we commonly call “axioms”.
I understand the above quote to be a complete rejection of axiomatic belief systems, which is of course supremely silly.
Perhaps you just meant to assert that the particular axiom is “bad” and should not be used? Specifically, perhaps you meant to assert that all truths necessarily reduce down to empirical, scientific truths, and that the axioms of science are the only acceptable axioms?
From later in your post, it seems that the only true facts you allow for are material scientific facts. What about pure logic and math? It is true that addition is commutative in the Reals defined according to the usual Cauchy construction (in the context of the axiomatic framework of ZF), but this is not a scientific fact. I did not prove this with evidence. There is absolutely no observation of our shared reality with our conventional senses which I could ever make that could show this is false. It’s simply not falsifiable. It’s also undeniably true. Thus, there are justified true facts which are not justified by empirical reasoning at all.
When I say that it is objectively true that I am sitting in a chair, it is implicitly understood that we are in the axiomatic framework of evidence-based reasoning and the scientific method. More formally, we are talking about a particular universe of decision problems, and one of those decision problems is “Am I sitting on a chair?”. Evidence-based reasoning and the scientific method is a particular solver for that set of decision problems. It is a rule (or set of rules), a process, an algorithm by which one can decide problems in that set. We say that this particular algorithm is objective. What we mean by that is that any reasonable observer who applies the rule in the same situation will achieve the same results. Compare and contrast with the rules for the decision problems of refereeing football and the decision problems of judging figure skating. The rules of football are called objective because any reasonable observer who understands the rules and applies them will come to the same result, whereas this is not true of judging figure skating. That is what “objectively true” means. Something is objectively true only in the context of an explicit or implicit objective framework. (Of course, “objective” under this definition is not a strict “yes or no”, but a matter of degree.)
Similarly, when I say that it is objectively true that addition is commutative in the Reals under the usual Cauchy construction from Rationals (and Rationals are from the usual construction from Integers, etc., and Naturals are from the usual definition in ZF), this is objectively true in the context of the objective framework of ZF.
http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory
The rules are objective because any reasonable person who applies the rules of ZF in this situation will also conclude that it is true that addition is commutative in the Reals.
In other words, I have identified two distinct realms of truth – that of pure math and logic, and that of material facts. One kind of fact is true according to its framework, and the other kind of fact is true according to the other framework. You cannot use science to determine if addition is commutative in Reals any more than you can use ZF to determine the weight of a chair.
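(For concreteness – my addition, just the standard textbook derivation, not part of the original comment: with reals defined as equivalence classes of Cauchy sequences of rationals, commutativity of real addition is inherited from the rationals with no empirical input anywhere,

\[ [(a_n)] + [(b_n)] = [(a_n + b_n)] = [(b_n + a_n)] = [(b_n)] + [(a_n)], \]

where the middle step uses only the commutativity of addition on the rationals.)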
I think that many people understand morality to occupy a third realm, distinct from pure math and logic, and distinct from material facts. Now, just like we can create mathematical models and use mathematical models as part of the scientific process, we can also use science as part of our investigation of moral truths. However, moral truths do not completely reduce to scientific truths any more than truths of pure math and logic completely reduce to scientific truths.
Because morality occupies a third realm, we consequently need a third framework, a third solver for the third set of decision problems. Part of the solver for the decision problems of morality can be described as “Act to increase your own well-being and the well-being of others.” (I think there are a couple of other vital parts to the way we decide moral problems, such as John Stuart Mill’s Harm Principle, some way to weigh benefit now against benefit in the future, a way to measure how much good you need before you can inflict harm on an innocent, unrelated third party (if ever), and a couple of other corner cases.)
I’m sure you disagree, but I’m not sure where or how. I am kind of curious. I don’t expect much, but I at least wanted to let you know where I’m coming from. I also do hope that you did not mean to reject all axiomatic frameworks outright.
So any such objection you raise to moral science will apply with the same force to every science (biology, physics, etc.).
Consequently, the way one would defend other sciences against it, will equally defend moral science against it.
Since I am assuming you are not attacking all science as unfounded, I must assume you agree your objection has a defeater for other sciences. You must therefore agree it is defeated for moral science–by the same defeater.
Certainly not. I was not assuming you were attacking all of science. It would have been uncharitable to. If now you want to attack all of science, then we have a much bigger problem.
Fundamentally, moral science is based on the same axioms as all science.
So I was assuming you did not mean axiomatic as in “not empirically determined” (as if you were challenging the entire epistemological foundation of basic propositions on which all of science and mathematics rests).
Once we get that cleared away, and realize by “axiomatic” we can only mean here “fundamental empirical fact on which a specific science is based,” then I am saying what is axiomatic is the satisfaction axiom, to which your own axiom reduces.
If it did not, then your axiom would have to be shown to be empirically true. Otherwise no system could be built on it that can claim to be true.
You are perhaps weirdly thinking that morality is a system of logically necessary facts, like mathematics. But I don’t see where you would get that impression. You certainly can’t think I believe that. And you’ve given me no reason to believe you have any good reason to believe that yourself.
And many people think fetuses have souls, and that we survive the death of our bodies, and that homosexuality is evil.
What people think is irrelevant to the question of what’s true.
They reduce to empirical truths about people: how people ought to behave.
I have laid out why.
If someone can prove some other kind of morality exists that is true (i.e. not just some fictional system of “oughts” they made up, but a system of oughts that I really should, as a matter of actual fact, obey, and obey over all other systems of oughts), then I would be wrong, and they would thereby have proved it (by definition).
What Harris and I are saying is that no such morality exists. People have had three thousand years to produce one, and have failed. Even after thousands of really super smart people writing millions of pages on the subject.
Meanwhile, Harris and I (and many other philosophers) have found a morality that we can prove does exist and is true, and not just some fictional system we made up, but one we can empirically demonstrate, if we undertake the correct research program.
It’s as if everyone keeps talking about Planet X, while we are talking about Pluto and asking for scientists to build telescopes to start observing what’s true about Pluto and for people to stop writing elaborate fictions about a non-existent Planet X and claiming their fictions are “true.”
Hm.
You’ll have to excuse me if my level of understanding isn’t up to the level of most, I’m very much a layman and all of my philosophical knowledge comes from my own nutting out of how things work rather than reading philosophers.
I -think- I understand where you’re coming from, and I think I conditionally agree.
If I’m reading correctly, you (and Harris, obviously) are arguing that morality is objective if we take neurology to its logical conclusion and define morality as above. I believe it to be in the same area as the free will question, and they are bound to the same conclusion. If we do not possess free will – that is to say, if we are bound by the extremely complex but still ultimately testable and predictable neural networks that we develop over the course of our lives – then we are also bound to the logical conclusion that what is moral is also predictable and therefore objective.
Sorry, word salad. There’s meaning in there somewhere.
We do not currently possess the level of neurological and psychological understanding that would allow us to access this objective moral standard, that is obvious. I think that’s where the main bone of contention lies, and of course in the uncertainty we still have about whether our behaviour is ultimately reaction-based or whether we have the ability to spontaneously alter our own neurology. Again, the free will problem.
We do have to – at this stage – presuppose that all human psychology and behaviour is testable and predictable. My personal belief is that this is probably true. There is as yet no hard evidence to falsify this hypothesis, and the fact that we are able, however clumsily, to identify, diagnose and successfully treat mental disorders* is (primitive) evidence in favour of it.
Anyway. I agree that it is probable that morality can be held to an objective standard, with the condition that we are absolutely nowhere near to being able to have the tools to discern what that objective standard actually is, and therefore should be.
I hope that made sense. Apologies for the rambling, my brain likes to take side roads. Into free will territory this time, apparently.
*Not stepping into the utter minefield that is the field of normative psychology, as the idea of what should be considered “normal” is contentious and morally fraught within itself! It rather smacks of the old problem of having to use a broken hammer in order to fix that very hammer, heh.
Yes. All correct. Except that (a) moral facts depend on more than just neurophysics–they depend on how the world (and social systems) actually behave as well; and (b) we actually could do much better at this science and learn a great deal more even now, if only we started the requisite research program.
Thank you, it’s rather fun trying to wrap my brain around this sort of thing.
You’re right, I was extrapolating neurology and behavioural psychology out to encompass interpersonal behaviours, societal functioning and taking as read that reality is the basis of any standard, moral or otherwise. Misapplication of terms on my part, really. Or oversimplification. I do that a lot.
I also agree we need to focus heavily on this kind of research. It’s such a massive potential field – with so broad a spectrum that we could focus on anything from the minutiae of a single aspect of neurophysics to the sprawling effects of cognitive functions (or biases) on an entire community, or on an even wider pool still.
Even if we eventually find that it’s all too complex and intrinsic biases account for too much of our thought process to be able to derive anything concrete from them (though I’m doubtful that’s the case), at least we’d have a vastly greater knowledge of how our brains, and by extension our communities, function. No downside, really.
The objection I keep seeing people make, in this and other threads, and in one of Massimo Pigliucci’s blogs, is that even if we know exactly what we most value and desire, and we know how to achieve it, that doesn’t give us any help at all in answering the question “what should I most value and desire.” Please tell me if the following accurately represents your answer:
You can only be convinced to change your values by an appeal to more strongly-held values, which you can only justify by appealing to other, still more strongly-held values, etc., until you come up against “core values”, which are a product of your physical/biological nature. These values cannot be changed, because you have no other values to which you could appeal to justify changing them. Therefore it makes no sense to ask “ought I to hold those values?”
So essentially, there is no difference between “what I, as a matter of fact, most desire (or would desire, if I were rational and relevantly informed)” and “what I ought to most desire,” because you can only justify the latter by reference to the former. And if science can answer the former, then it can answer the latter.
Bingo.
What you are talking about (as being the question everyone is asking) is called Moore’s Open Question. You have discerned the most relevantly correct solution to it.
I directly address MOQ in SaG V.2.2.5 and 7, pp. 331-32 and 338-39.
There I say:
I have since formalized the argument and improved its terminology: “your own happiness” reduces properly to “the available state that is most satisfying to you,” as I explain in my debate with McKay. In TEC I don’t even define it, since discovering what it is will ultimately be science’s job. But when we define it as satisfaction maximization, I would now say MOQ becomes defeated even at the stage of pre-empirical analysis. It would be self-contradictory to say (a) we are more satisfied being dissatisfied, unless we agreed that in such a case we are pursuing our most satisfying state; whereas it is tautological to say (b) we are more satisfied being satisfied. And since (a) and (b) exhaust all logical possibilities, it is necessarily the case that maximizing our satisfaction is always the highest goal of any sentient being, and therefore the pursuit of it will supersede all other goals, which will only be pursued insofar as pursuing them constitutes pursuing it. (This is essentially what Aristotle argued 2300 years ago, only he did not completely tease apart the analytical from the empirical aspects.)
Therefore, what science has to do is empirically discover what most satisfies people (of the actual options available to them in any given circumstance). Moral facts follow therefrom (when in conjunction with the facts of the world etc.). It makes no sense to ask “wait, should we live more satisfying lives?” The answer can only ever be yes. Otherwise we would have no reason to live. The question is therefore not whether that is what everyone wants, but what behaviors actually most reliably achieve it. About which people tend to have a whole lot of false beliefs. And that is what science could fix.
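(A minimal formal rendering of that dichotomy – my own notation, not Carrier’s: let X be the set of options actually available to an agent and S(x) the degree to which option x satisfies them. Horn (a), preferring dissatisfaction, is coherent only if being dissatisfied is itself what most satisfies the agent, and horn (b) is a tautology; so on either horn every coherent agent chooses

\[ x^{*} = \arg\max_{x \in X} S(x), \]

and moral science’s job is the empirical one of discovering S and X for real agents in real circumstances.)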
Thanks Richard. I think the force of Moore’s Open Question is in the fact that a “moral ought” feels like it should be something different from a “pragmatic ought”. It’s quite clear, as you say in The End of Christianity, that if you want your car to run smoothly, you ought to change the oil regularly. But we’re so used to thinking of morality as “that which I should do, regardless of my desires”. It feels like the “moral ought” should be independent of any desires, but the more I think about it the more I realise that that makes no sense; “oughts” rely on values, and values are determined by our desires.
This goes right to the first premise of the first argument in your appendix to TEC: if there is a moral system, then it is a system of imperatives that supersede all other imperatives. That statement is not intuitively and immediately obvious to me, perhaps because it seems to strip morality of any necessary connection to ideas of kindness, generosity, selflessness, etc. But I think you’re right, that those ideas boil down to an expression of our values, which in turn determine which imperatives we will be motivated to follow.
Maybe it would just be more helpful to replace talk of “morality” with talk of “values”. That way, we’re not presupposing that there is a transcendent morality that has to apply to everyone, and the first question becomes “what exactly are our values, and do they differ at all?”
I have some lingering confusions: is there any distinction between a “moral ought” and any other kind of “ought”? It seems to me that in your system, there is no distinction. Note 35 in TEC says “willful irrationality is immoral”. So is this the case even if my irrationality does no harm to anyone?
You add that if it is irrationality born of inaccessible information then it is not immoral, because then the agent has acted on all information “reasonably obtainable at that time.” How do you define reasonably obtainable? Couldn’t I always obtain more information about any decision I make? I’m not talking about the impossibility of perfect knowledge, I’m asking if even optimal knowledge is possible. Taken to the extreme, are we all being immoral if we don’t spend as much time as humanly possible trying to learn as much as we can, and constantly honing our rational faculties?
PS: It’s great to see someone in the atheist movement who even understands what the problem of metaethics is, let alone has a strong answer to it. If you ever debate William Lane Craig again, I hope it’s on this subject. I find his claim that objective morality can only exist if God exists to be ridiculous, but so few atheists know how to respond to it, in my experience. Actually, I think Massimo Pigliucci was one of the best – according to him, Plato’s Euthyphro decisively refutes Craig’s claim (I’d be interested to know if you agree, or if you think it’s not so simple). And I have a lot of sympathy with Pigliucci’s virtue ethics, but he just doesn’t go the extra step and give the metaethical ground for his ethical system, as far as I can see.
Right. It’s counter-intuitive. But many true things are. And in this case, the counter-intuitiveness is traceable to a common cognitive bias, wherein we associate the wrong things with “pragmatic ought” (e.g. selfishness) and thus draw conclusions based on those erroneous inferences substituted for the actual premise, and don’t notice that we’re doing that. Whereas in reality, the “pragmatic ought” might not be what we think it is, either (e.g. being selfless might actually be a pragmatic ought). And failing to consider that, leads our intuition astray.
That’s a good way to put it.
Because science has since found that no human behavior, or even reasoning, makes sense in the absence of desires. All assent (to anything whatever) is desire driven, otherwise we would never do anything (we would just sit inert and starve to death–like the poor victims on Miranda in the film Serenity). So our old assumptions were scientifically false (no morality could even in principle have been based on disobeying our desires).
This is based on the same cognitive bias (mis-associating “do what we desire” with “act selfishly,” when in fact the latter is not logically entailed by the former, e.g. we can desire to act selflessly, and thus we are wrong to treat them as interchangeable).
I would love to re-frame the discussion in some such way, but sadly we can’t. Because the moment you change the subject to x, someone will come in and say “but that’s just immoral” or “morality is different and you should be obeying morality rather than x” and so on. So you can’t get away from it. People use “morality” as a totem, a sound that evokes emotional and thus rhetorical force. As long as they do that, we have to confront what that actually means, in terms that could actually warrant caring about its “emotional and thus rhetorical force” in the first place.
Technically, all oughts you ought to obey are moral oughts (there are other oughts, but they are overridden). However, morality is situational, and as such we already talk that way, so this shouldn’t be surprising (e.g. most of what we do is moral in the sense of it’s okay to do it, not that we must do it in every instance all the time, thus so long as we never do anything immoral, we are always obeying some moral ought or other, even in the most trivial of daily tasks–we take this for granted, hence we don’t stop to think about it much).
Of course most people when they use the word “moral” mean, or want to mean, “universally moral,” i.e. they want to know what the covering laws are, not the particular instantiations (unless the latter is unclear even when we know the former, which is what moral dilemmas are about). Even though the particular instantiations (which will often vary from individual to individual) are instantiations of moral laws (the covering laws) they are not generally what people mean by the morals governing your actions (they mean the covering laws, the universals).
There is a continuum between individual instantiations and universal laws, since there can be group-specific covering laws, too. And we are instantiating all at the same time (universal laws govern what group-specific laws we are beholden to, and in turn govern what particular decisions we make, or can make, e.g. we might have ten options, none more imperative than the next, in which case we can choose any of the ten and still be doing what we ought most to do: TEC, n. 33).
If by “anyone” you include yourself.
Because it can be immoral to harm yourself – if we use the broadest definition of “moral” as that which we ought most to do. But even if we don’t, there is still a way we ought to behave toward ourselves that is what we ought most to do; so regardless of what we call it, it’s still what you ought most to do.
I specifically address morally permissible irrational behavior in TEC (e.g. game play: n. 36).
Almost never. You are constrained not only by resources, but by time. Therefore there is always an absolute limit on how much information you can obtain, and that limit is often hit very quickly.
As you spend more resources (time and material) gaining information, you are losing other things (things you could have been doing with that same time and material), and there comes a point where the loss outweighs the benefit of gaining more information (e.g. when you hesitate and thus do nothing to stop a crime or save a life, but the same reasoning holds even in mundane scenarios, as time and resource management is part of living a satisfying life).
The line is blurry, and it is not always clear where it lies; knowing that, we accept certain grey areas of allowed error and uncertainty (the “reasonable” area). It’s only when we pass well beyond the grey area that we are doing wrong (or when we don’t even come near it, and act more impulsively than we needed to).
This actually shouldn’t be surprising. This is how we all accept human reasoning and decision making to be. We just take it for granted and thus don’t think about what that entails as far as what we accept and why.
It also follows that the morally demanded losses incurred by information-gain increase with the risk of being wrong (thus the line moves in connection with the measured cost of making an incorrect decision). But even then, decisions have to be made eventually (there is a point beyond which your inaction becomes permanently the wrong decision, and irreversible). So even the greatest risks do not entail the greatest “possible” costs in gaining information. One’s resources also vary (the poor have fewer resources to spend on information gain – including time, usually) and that affects where the line is, too.
It all comes down to an analysis of the options (of where to put that line in each given case) and which place of putting it bears the statistically greatest chance of ensuring you choose the option that brings you the most satisfying life, of all the options available to you.
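(A toy sketch of that analysis – mine, with hypothetical numbers, not Carrier’s own model: keep inquiring only while the expected benefit of new information exceeds its cost in time and resources.)

def expected_gain(p_changes_decision, value_if_changed):
    """Expected benefit of one more round of inquiry: the chance the new
    information flips your decision, times how much the better choice is worth."""
    return p_changes_decision * value_if_changed

def should_keep_inquiring(p_changes_decision, value_if_changed, cost_of_inquiry):
    """Inquire further only while expected gain exceeds the cost of inquiring."""
    return expected_gain(p_changes_decision, value_if_changed) > cost_of_inquiry

# High-stakes case (hypothetical units): even costly inquiry pays off.
print(should_keep_inquiring(0.10, 1000.0, 50.0))  # True
# Low-stakes case: stop and act; more research is itself the wrong choice.
print(should_keep_inquiring(0.10, 10.0, 50.0))    # False

This is just the familiar value-of-information trade-off: the higher the cost of being wrong, the more inquiry is warranted, exactly as the preceding paragraphs describe.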
And for all that, we should perhaps be reminded that morality is not binary: some things are worse than other things, and some immoral acts are not all that big a deal whereas others very much are. One should not assume that every moral slip is as immoral as every other. Thus, in SaG I mention it’s pretty obvious that smoking is immoral. But in the grand scheme of things, it’s not a damnable offense like murder. People are so accustomed to Christian models of morality, where all vices are cause for condemning people as despicable (“we’re all sinners”). But in the rest of the world’s cultural history, that’s actually a bizarre way of looking at moral decision making, one far more damning of human beings than makes much sense.
>>empirically discover what most satisfies people
Unless those are things you don’t want them to be satisfied by, Richard – in which case you make up various rules, ultimately grounded in sacred-personhood mythologies, so as to wiggle out of them and ban them.
Bingo!
And what about when we have no higher order desires, when we’re thinking – i.e., doing philosophy – about what we OUGHT to desire most? Again, you have no answer.
Bingo!
Again, just answer a simple example to show us how your theory works when we are THINKING about what we ultimately ought to do: should I sacrifice my life to helping my sickly family member or pursue self-fulfillment as an artist?
Again, you have no answer. (“Buy my book, I answered it”. Uhh, no, you haven’t.)
Bingo!
Why are you so afraid to draw the ultimate materialistic atheistic conclusion, Richard? Why this desperate clinging to what are ultimately religiously generated notions of truth and judgement? There is no answer to these silly moral questions. We’re just another bug on a rock in space for a brief period of time. There is nothing we ultimately OUGHT to do. It’s as silly as asking what a “true and proper” cockroach OUGHT to do. We animals do what we do and there is never anything “morally” right or wrong about our choices. When the wolf eats the lamb it is not WRONG, it’s just a bad deal for the lamb.
Now you’ve just descended into wholesale lying.
And totally ignoring my arguments and evidence. And all relevant science.
Nice.
Right, everyone is a liar who disagrees with you or asks you to answer a simple question exposing your flawed theories.
Richard, in your moral fanaticism and endless desire to judge the evil doers, you make even pedophile priests seem the more reasonable moral arbiters.
No, snowman, you are not a liar because you disagree with me. You are a liar because you lie about what I have said.
Quite a few comments written to express disagreement with your thesis, Richard, and I take my hat off to you for being so thorough in answering them (as usual).
While I’m dishing out recognition, one thing we can thank Harris for is bringing the topic to much wider awareness. The first time I ever particularly thought about this issue (other than to tacitly accept the conventional ‘wisdom’ that science and morality don’t overlap) was a couple of years ago when I saw Harris’s notorious TED talk. His arguments were so bad that I sat down determined not just to demonstrate his fallacies (easy) but also to prove his conclusion wrong. I found I couldn’t. Then in a blinding flash of comprehension (as often happens when considering the stupefyingly obvious) I understood that moral truth must exist. In hindsight, I wonder how I ever thought anything different.
Similarly, I’m continually stunned by the knots people tie themselves into trying to deny this notion. Repeatedly, I’m told that this theory entails nihilism, whereas, of course, it is the denial of any physical basis for morality that trivially entails nihilism (and is trivially wrong – as I wrote somewhere else recently: “I don’t want to be a nihilist!” they scream in terror).
The discussions I have on this topic are pretty much exclusively with other scientists, yet on this issue, the scientific method is completely forgotten. “Value only exists in minds,” I’ll say, and they’ll reply: “you can’t prove that with a syllogism employing 100% known premises.” Actually, I can, but really, who cares? Do they think the Earth is round? Do they have a watertight syllogism to prove it?
There is a lot of prejudice out there on this issue, but in my opinion, it is one of the most important practical issues for society. Too many scientists are happy to sit back and say ‘those are the facts; what you do with them is another matter entirely. That’s politics.’ Too many voters and politicians agree (why on Earth should they not?). This has to change.
(As a subversive idea: perhaps you could try to win Harris’s prize yourself (under pseudonym), then in Socratic turn-around, explain why everything you wrote was wrong!)
Absolutely. It’s one useful thing we can do with fame: get more people to pay attention to things they need to be paying attention to (we can’t all do that for everything, of course, but we can each do it for things we are best at; which is why I spend so much time on this in comments here).
Your story about Harris’s TED talk is interestingly familiar because it’s exactly how I became a Bayesian. I was adamantly anti-Bayesian until I set out to actually prove it doesn’t work…during which process I realized it not only works, but is one of the most brilliant epistemological insights in human history. Even if some people are terrible at explaining it. (Even I have struggled to do better at that.)
Thank you for that paragraph. That made me laugh (and I mean LOL for real). In a very good way. You are quite right.
Quite.
One main reason there is so much prejudice against this is that if it’s true, it requires people to face one of their darkest of all fears: that they are wrong about how they ought to think and behave.
All of the most important objections raised here you answer by citing your book, but nobody who disagrees with you is going to want to shell out $20 for it.
The answers to those objections are the only interesting or novel things you would have to say, provided they aren’t fallacious, which I think they are, so this blog post is just extended beating around the bush.
I like how in your paranoid imagination the only reason why Foot would be snubbed is sexism. Most of your opponents probably do you the courtesy of assuming that you hold your beliefs in good faith and not out of spite. Let me tell you that I AM a sexist, and that I would not expect a groundbreaking philosophical argument from a woman, however that is negligible compared to the extremely low opinion I would already have of any argument that attempts to prove a thesis even remotely similar to Harris’.
If someone isn’t going to buy the book they want to claim is false, then they are basically saying they don’t care about being wrong enough to spend even twenty dollars on it. Talk about lacking the strength of your convictions.
In any event, public libraries will get you a copy for free (through interlibrary loan). So if even walking to a library is not worth the trouble, then you really don’t have the strength of your convictions.
And that is all I would need to know about someone to dismiss their opinion in the matter as wholly irrelevant to humanity.
P.S. I’m not sure how to take your admitting to being a sexist who dismisses the arguments of women. Are you joking?
Not that you need any help coming across as a total wanker, noot, but the Kindle edition of SAGWG is less than 3 bucks.
Just to be clear, though, I don’t control the pricing on SaG (its publisher does), and those prices might change from time to time (e.g. a three-dollar Kindle price might be temporary or a newly discounted rate). But yes, presently, it’s only three bucks. The price of two sodas.
@heliobates
Really? It cost me £8.04, which is more than four times this amount. Is this a case of Brits being ripped off, do you know something I don’t know, or are you just wrong?
Thus illustrating the Amerocentric folly of assuming someone is in the US or that how things are in the US is the same everywhere else. Point well made.
One of the best posts ever on Ftb. Thank you.
I am adding your book to my reading list, but I have not yet had the opportunity to read any detail on this. Still, I have to wonder about this idea. Unless I misunderstand the concept of imperative fact completely, the only reason there are imperative facts in agriculture is the desire to produce food maximally; and if you have no desire to produce maximally, there are no real imperatives. I believe similar arguments can be made for medicine and engineering. So, as of right now, it looks to me like these imperatives are still coming from goals outside the actual process of agriculture, engineering, or medicine, and that the goals themselves are not the results of these fields.
Is this from an inadequacy of the analogy, or is it a feature of your argument? Are the goals supposed to be coming from propositions 4 and 5? I have trouble seeing that, because the satisfaction in 4&5 comes from achieving goals, does it not?
I suppose my primary question is whether the goals that seem so integral to 2, 4, and 5 can be discovered by science (I don’t see where they would come from in 1, 3, or 6), or whether this proof just pushes the is/ought discussion back to the level of a discussion of the proper goals.
Absolutely.
So the scientific question is: are there desires all people share? Or all members of certain classes or groups of people, if some form of realist moral relativism is true? Etc.
There are lots of answers science could end up finding true. But we won’t know until we actually look.
See my reply to Disagreeable Me on this matter, since everything I say there continues this thought right where you need to go.
My objection would be to proposition 2:
This, to me, only seems true in an egalitarian society, not a hierarchical one. Two people with equal resources would find mutual pacts easier to implement. For a man with vastly disproportionate resources, acts which protect his wealth from others may be far less expensive than sharing that wealth to ensure equality.
For me, that leads to a fundamental value conflict: Should we value equality, or should we value “success” over “failure”? Do we create a situation where people can choose to be freeloaders and still be provided for, or do we create a situation where those who choose not to contribute are left to perish? Does my right to defend my property trump your right to live?
As a result, the moral system for a hierarchical society which protects inequality would look very different from an egalitarian moral system, yet both create functional (and likely imperfect) societies.
That is not an objection to the thesis. Because morals are situational, everything only applies ceteris paribus: what is true for a high status person is true for all high status persons, etc., just as what is true for a person facing a violent attacker is true for all such persons, yet might not be true for a person not facing a violent attacker.
Whether we should have a hierarchical society, and how people in such a hierarchy should behave at each stratum of it, is still an empirical question for science to answer. But I think in reality we can never have a truly egalitarian society. So ethics have to be situational to status, and thus take into account the consequences of that for the entire social system (in which everyone ultimately has to live). At the very least, for example, it can never be the case that we all occupy the same point in geographical space, so everyone is disadvantaged in some degree, e.g. some people have to walk farther to come see me. That’s a trivial example, but it’s meant to illustrate a general point, a less trivial instantiation of which is that some people are far more physically beautiful than others. There is simply no way to eliminate the advantage that gives them, nor is there an honestly cogent argument for trying to (except maybe in certain ramified cases, e.g. blind college admissions), although again that would ultimately be for science to answer (since the consequences would have to be determined and weighed against the consequences of the alternative, both measured against our shared core values).
On value conflicts see my note in TEC, pp. 425-26 n. 33.
These are all resolvable empirically. And when they aren’t, they are functionally irrelevant.
Interesting article. I’d really like to hear your takedown of natural law theory, since it uses the sort of teleological reasoning that Foot used. I’ve been reading Edward Feser’s blather on the subject, particularly his fulminations on homosexuality, and I’d like to see your reply to this sort of thing, as well as the rest of his schtick.
If you have a link (maybe a summary somewhere?) I’ll take a look.
Just guessing from what you said, it sounds like you are asking about the fallacy of appeal to nature or FoAtN (“we are naturally hetero, therefore we ought to be hetero”). That’s simply a fallacy. Full stop. I have a specific relevant remark on it in TEC, pp. 427-28 n. 43. But you may find most helpful my analysis in Darla the She-Goat.
Although one can attack the FoAtN by attacking either of its premises: the major premise that anything natural is good or the minor premise that x is natural (as in the case “we are naturally hetero,” which is scientifically false), and people all too often focus on the latter and thereby give false legitimacy to the former. But the former is demonstrably false, too.
Tom,
Dr. Feser’s method (derived from the Scholastics) requires determining a primary purpose for every item/act. Unlike what Richard Carrier is suggesting (that we empirically research people to determine what they are and what morals would help them most), this determination is quite arbitrary, not objective, and not in the least bit reliable.
For example, regarding homosexuality (and general sexual behavior), Dr. Feser would claim that the primary purpose of sex is reproduction, and therefore any instance of sex that does not at least allow for reproduction (sperm in vagina) is going against the purpose of reproduction. Against the notion that sex might have a different primary purpose (such as forming closer bonds between people), Dr. Feser and his fellow Scholastics can seemingly form no coherent argument, nor can they offer an objective way to discern which among several different purposes is primary.
Richard,
You said:
The people who decide on what scientific research will be done are the funding agencies who accept research proposals and award grants. I have done research while working for government and corporate organizations. We wrote proposals and (when successful) were awarded grants to do the work. I have also been on the other side, as a member of a committee that evaluated proposals. I have written lots of proposals and I have read lots of proposals. The way to initiate serious scientific research is to submit a proposal, not to try to convince people you engage on the Internet.
You said:
It is time to end the philosophical debates and turn that assertion into a research proposal that is submitted to and evaluated by one or more of the many organizations that fund scientific research in the US. This blog, plus your other papers, would be a good start. Add to that the details of the research programs that you said you have prepared, or would be able to prepare.
The best way to determine if your (and Sam Harris’) “moral science” has any merit is to put it forward as a proposal to the NSF, National Research Council or other recognized funding agencies. These endless debates are a waste of time and effort. That is also why Sam Harris’s essay contest is not useful. He is a neuroscientist, after all. He should send his book along with his proposal to the NSF. They will send him back an essay he may (or may not) like. But it will settle the question of whether or not his case for a “scientific understanding of morality” is mistaken. And after you have sent in your proposals, your case will likewise be decided.
But that just gets the scientific enterprise started. If the proposals are funded, the work must then be completed, reported on, published, duplicated by other research teams and subjected to normal scientific methodology.
“Moral science” will not be science unless and until it is subjected to that rigorous process. Until that is done, all of your philosophical arguments (in your books and articles, most of which I have read) will be just that: philosophical arguments and not science. More back and forth arguments in this blog and elsewhere are not helpful and may, in fact, be counterproductive to what you and others of like mind are trying to accomplish.
I am genuinely interested in how this research program will be received and what it will accomplish. If you truly believe that you have hit upon the correct scientific methodology to resolve the moral issues that are tearing apart our society, then you, Sam and/or others should feel a sense of urgency in getting it into the scientific mainstream so as to produce results that would benefit so many people.
You mean public and private institutes, public and private universities and university departments, public and private foundations, think tanks, and charities, as well as state, national, and international governments. And private citizens (my research on Jesus was wholly funded by individual donors; the internet even has things like Kickstarter now for facilitating that).
You have to get scientists interested first. Philosophers have been saying for years that scientists are neglecting this field. Scientists keep responding with ridiculous and fallacious reasons to keep ignoring it. What can we do but keep trying to break through that wall of bias and finally convince some scientists to do it?
It’s as if scientists refused to study the solar system. And we keep explaining why they should be, and how they might go about it. And then you come along and say we are wasting our time, that scientists should be doing this. Um, if they were doing that, we wouldn’t have to be doing this.
You have the wrong end of the stick here.
I am not a scientist. I am asking scientists to study this.
If there is a scientist out there who wants to work with me on developing a research proposal, I’m game. But they have to do that. I cannot telepathically compel them to do my bidding.
Even Harris is not a psychologist or sociologist, for example. He actually already is doing relevant neuroscience (e.g. he has been nailing down the neurophysics of belief formation). But whether he is able to develop the kind of social psychology study that is required I don’t know. If he doesn’t, and isn’t able to, then you have a point about him. But you don’t have that point about me.
No, they are not. When scientists keep telling us there is nothing to study, obviously it is not a waste of time to keep explaining to them that there is.
Again, imagine if you said our “endless debates” over whether we should study the solar system were a waste of time…when all we were doing is trying to get scientists to study the solar system, and all they were doing is continuing to give us bogus reasons to continue not doing so.
You wouldn’t be helping. You would, in fact, be contributing to the very failure to act that we are protesting.
Moreover, you seem to be lost as to what exactly the core thesis of Harris is.
Sure. We are saying exactly the same thing.
The core Harris thesis is not that moral science is already a science. The core Harris thesis is that all statements about what is or is not moral are scientifically researchable questions of fact. Thus all attempts to know what is moral are either bogus or approximations to a scientific understanding.
Our situation now is comparable to psychology in the 19th century: there were scientific facts bearing on questions of psychology, but it was hardly a science. So one could either draw inferences about psychology in as scientific a way as was then available (and thus admit that what you are doing is attempting to predict what science would confirm if it checked, whether it could or not, and thus that all your inferences are actually proto-scientific hypotheses that would be revised as more scientific facts were ascertained), or you could draw inferences about psychology in some other way (like getting it from the bible, or armchair speculation, or folklore and tradition, or ideology, or the whimsy of imagination, and so on). Obviously the former is the more correct way to do it. And so, too, for morality now, even before we get it on a proper scientific footing.
Thus, we are not just saying scientists should start studying this, and stop giving excuses not to. We are also saying that all discourse about morality even now is approximating to a scientific understanding, which thus dispels most moral discourse as nonsense, and improves what remains by clarifying what sense it has and in what ways it could be empirically confirmed in increasing degrees (only the gold standard of which would be a full-blown scientific study).
We agree. That’s just what we’ve been saying.
But not all philosophy is equal. There is bullshit (e.g. theism, supernaturalism, astrology, afterlife studies). And there is rigorous, scientifically informed philosophy that aims to develop better, more accurate empirical hypotheses about the world until we have enough information to start moving it into a science (e.g. naturalism, protobiology, immortality studies). We can start that process now. Because not all knowledge begins “scientifically certain” out of the gate (e.g. propositions about psychology in 1850); nor is what isn’t yet scientific “just as likely” as every other proposition on the matter (e.g. propositions about psychology in 1850).
Then read my chapter in TEC for some examples of what scientists could already be doing. Right now. If only they’d stop giving excuses for why they don’t.
But also remember that this isn’t just about getting scientists to stop making excuses for not studying this. It’s also about getting all rational people to chuck in the bin all the baloney moral discourse (that is akin to theology, supernaturalism, astrology, afterlife studies) and learn how to be able to identify moral discourse that is actually science-based and at least starting to approximate something testable. That’s philosophy. But it’s the right kind of philosophy. Because it’s philosophy on the right track and capable of getting somewhere (that somewhere only being ultimately, in the long run, actual scientific research).
Well, Feser doesn’t mean “everything that occurs in nature is good, and anything artificial is bad”.
Here’s his analysis of what he calls “classical natural law theory” and an example of the sort of thing I mean:
http://edwardfeser.blogspot.com/2012/10/whose-nature-which-law.html
I’m not sure that affects my point. To wit:
Except that it is also the nature of grasses to wither and die and fertilize the soil for the next season of grasses. So in fact it can be good for grass to get no water, and in fact it has evolved with the very expectation that it won’t.
A clearer example is a certain pinecone that requires a forest fire to open (and thus seed the land). It seems strange to say that it is in the nature of forests to burn down, that that is good for the trees. But in fact those trees evolved so as to depend on being burned down to survive (as a species).
When we get to homosexuality, this is where the widespread evidence of gay animals (which appear to exist in similar population proportions regardless of species) comes round to haunt guys like this. What’s “natural” is simply not going to be what he thinks a lot of the time. Some people are by nature not supposed to bear children, but divert their resources to support kin who do.
And that’s before we even get to the fact that nature doesn’t give a shit about us, so doing what’s “natural” in this sense has no logical connection to what’s good anyway. Airplanes are against our nature, yet are not “bad” for us. It’s also natural that the human population be culled by a 50% rate of child death to disease and starvation (which is why the average number of children a woman naturally ends up producing and nursing before herself dying, often from labor, is near 4). We thus evolved to compensate for a high rate of child death, and by preventing child death we interfere in nature and thus cause runaway population growth, which we have to manage with “unnatural” things like contraception. Yet contraception is no more unnatural than simply choosing not to procreate for reasons of rational calculation; the two differ only in minor details, no greater a distinction than traveling with the use of wheels, as in a wagon or car, instead of your legs.
In fact, it is natural for humans to defy nature. Indeed, that is the single most distinguishing characteristic about us (our use of culture, technology, education, writing, environmental engineering, even space travel). So when we get to the one species for which it even makes sense to start making moral arguments, the argument from nature collapses altogether.
Richard (13.1),
Soma is not some acontextual vitamin supplement. I am talking about justifying exploitation because the exploited party is drugged to feel satisfied or have a sense of well-being. How is it a purely empirical question whether people should be treated like this? We could all feel wonderful, be healthy, and the underclass would love their situation, but that doesn’t mean it is right to restrict autonomy, even if it would, thanks to Soma, enhance everyone’s well being.
It is an empirical question whether moral reasoning is solely based on maximizing well-being or satisfaction, and the answer is no. The fact that people disagree with the thesis is some but not the only evidence of this. The thing is, an affirmative answer to that empirical question is the foundation of the thesis.
How would you empirically decide whether, say, an hour’s intense satisfaction is better than a week’s mild satisfaction? Better by what standard?
It’s an empirical question whether you can (so long as it’s fictional, it’s irrelevant to real life) and it’s an empirical question whether it logically coheres with any values people actually have (without fallacy or false belief), since what values people actually have and their relative importance to them is an empirical fact (and so what follows logically from those values is an empirically determined fact, being entailed by empirically determined facts).
On magic pill fallacies (which is what you are talking about), watch my debate with McKay and read the follow-up (all linked in my article above). We go over them a lot.
Each and every person’s.
First, yours, if what we are talking about is which you would actually find more satisfying (actually find, not mistakenly think you might find). And we have instruments in psychology for measuring that now, and knowledge of cognitive biases to control for in measuring that, but we will also eventually have brain scan technology capable of verifying it.
Then, each other person’s, likewise. The question is whether the measures are relatively the same for everyone, or if they differ, and if they differ, why (what attributes co-obtain or cause the difference). This will then give us an empirical answer whether there is a universal measure or a measure that differs by categorical type of person. Either way, a scientific fact, from which we can then draw conclusions.
In reality, of course, you can’t compare an hour to a week: you have to fill in the rest of the week following the hour (is it awful, dull, what?) before anyone can logically answer the question of which week they would prefer–assuming they even have the choice (people generally don’t have just any choice imaginable; choices are greatly constrained). It’s also possible everyone will rate both as equal. It’s also possible people will rate them differently when we put them in the context of a larger time period (e.g. a year of alternating between the two options may be preferred to a year without either, and so on).
Again, all of this can be studied. Empirically. Because what you (or anyone) would prefer is an empirical fact about you. And as such is accessible by science. Everything else follows.
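(Purely by way of illustration, here is a minimal sketch of the kind of comparison being described. The ratings and the 0–10 rating instrument are invented stand-ins for validated psychometric measures; nothing below comes from an actual study.)

```python
# Illustrative only: invented 0-10 satisfaction ratings standing in for
# validated psychometric instruments. The empirical question: do preference
# measures agree across subjects, and if not, what attributes track the gap?
import statistics

ratings = {
    "intense_hour": [8.1, 7.9, 6.5, 8.4, 6.2, 7.7],  # week containing one intense hour
    "mild_week":    [7.2, 7.5, 8.8, 7.0, 8.9, 7.4],  # week of mild satisfaction throughout
}

for condition, scores in ratings.items():
    print(f"{condition}: mean={statistics.mean(scores):.2f}, "
          f"stdev={statistics.stdev(scores):.2f}")

# A real study would then test whether individual differences co-vary with
# measurable attributes, i.e. whether there is a universal measure or one
# that differs by categorical type of person.
```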
I’ve been mulling over your post, and I have a couple of questions. First, it seems like your account is consequentialist, but without direct reference to how one’s actions affect others. In your account, the vital consequence seems to be related directly only to how one’s actions affect one’s own satisfaction. Is that a fair statement?
Your #6 above seems like the premise that would require the most empirical support. The range of values that can arise in the system that we call a human being is pretty wide. We have, for example, human beings who derive real satisfaction from harming others. Such a person appears to fulfill the criterion of making rational decisions — it’s just that what maximizes that person’s satisfaction is considered wack by almost everyone else. Is there a possibility that if we could derive the values intrinsic to a system, we might find that two systems of the same type might have contradictory values?
Also on #6: because of variability, only a subset of values held by a given system will be common to all systems. This means that the common set of values (the set of “moral facts”), when we’re able to ascertain them, might actually be extremely small. Would you consider any values that fall outside the common set (even if those values are fully entailed by the properties of the system) to be non-normative in some way? How should science approach these fully determined but non-universal values?
Fair but incomplete. How your actions affect others affects you directly (in the sort of person you thus become and have to live with being, in the sort of emotional life you thereby do or don’t enjoy, and so on) and indirectly (in how others then treat you, or in how the society behaves in general, to compensate for common behaviors generally, and so on). Thus in my analysis in Sense and Goodness without God I find that what science we have so far suggests that compassion and personal honesty go better for you (both in the direct and the indirect consequences).
You have to distinguish values people happen to have (which may be based on false beliefs or illogical inferences, and are therefore false values, in the sense that one can only maintain them by maintaining false beliefs or inferences) from the values they would have if they had a rational fact-based value system (meaning a value system logically entailed by true facts without fallacy).
Yes, logically possible. And that would not affect Harris’s thesis. The consequences would then be that we should defend ourselves against that class of person (by exterminating or imprisoning them or otherwise reducing the danger they present to us). Which is basically what we’ve ended up doing.
Because this is basically the problem of the psychopath. Harris actually speaks about this in his own book. But I do as well, citing the relevant science, and showing it suggests even the typical psychopath is not acting on a rational fact-based value system (they are just mentally disabled and thus unable to correct their behavior to what would be more satisfying even for them).
For more, see my refs. in the earlier comment here.
Yes. That’s entirely possible. (Indeed I suspect it would reduce to commandments to cultivate a small set of virtues, which in turn entail a system of behaviors in various situations.)
It can simply document them as they are (that they exist, and for whom).
Whether you regard them as “normative” depends on what you mean by “normative.” Norms that apply to a sub-group are normative, but only for that sub-group. Norms that apply only to a single individual are normative, too–just only for that individual.
If by “normative” you mean “universal,” however, then your semantics answers your own question by tautology.
There is one interesting question which I was hoping to get an answer, and I don’t think you explicitly answered it.
AFAIK, earlier you said something which sounded to me like you think all truth necessarily reduces down to empirical reasoning.
My question(s): Addition in the Real Numbers is commutative. Will you agree that this is objectively true? Will you agree that this truth cannot reduce down to empirical truths? In other words, can you agree that we can talk about truths of pure logic and math, and that the scientific process and evidence-based reasoning have absolutely nothing to say about the truth of the commutativity of addition in the Reals?
Some analogies of what I’m thinking. You might say that you can test “1+1=2”, but my response would be that you cannot. You would first need a concrete model in which you place “1+1=2”. For example, it may be true that 1 apple put next to another apple makes 2 apples. However, consider this experiment:
Start at inertial rest in space. Drop a reference rock. Accelerate at 1 m/s^2 for 1 second. Drop another reference rock. Accelerate again at 1 m/s^2 for 1 second. Drop a third reference rock. From the perspective of the first rock, the second rock is moving at a relative speed of 1 m/s. From the perspective of the second rock, the third rock is moving at a relative speed of 1 m/s. So, by the naive logic of “1+1=2”, from the perspective of the first rock, the third rock should be moving at a relative speed of 2 m/s. However, it’s not. It’s moving at a slightly lower relative speed. For all practical purposes on this planet, the answer is 2 m/s (except GPS). However, the answer is only an approximation. This is a consequence of basic relativity.
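(To make the size of that discrepancy concrete, here is a quick sketch, my illustration only, using the standard relativistic velocity-addition formula w = (u + v) / (1 + uv/c²). Exact rational arithmetic is used because the correction is far smaller than floating-point rounding near 2.)

```python
# Relativistic velocity addition: w = (u + v) / (1 + u*v/c**2).
# For u = v = 1 m/s the shortfall from 2 m/s is ~2.2e-17 m/s, smaller than
# double-precision rounding near 2.0, hence the exact rational arithmetic.
from fractions import Fraction

c = Fraction(299_792_458)   # speed of light, m/s (exact by definition)
u = v = Fraction(1)         # each rock-drop boost: 1 m/s

w = (u + v) / (1 + u * v / c**2)
print(float(2 - w))         # ~2.225e-17 m/s short of the naive 2 m/s
```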
In Euclidean geometry, we can make all sorts of cool statements analogous to “1+1=2”, and no matter what we measure in the real world (like my 3 rocks example), those math truths are just as true. When we make an empirical model using math, we can confirm or falsify the model with empirical evidence, but we never confirm or falsify the math. We only confirm or falsify the applicability of the empirical model to our shared (material) reality.
All mathematical truths derive from axioms. Those axioms are only known to be true empirically (if you include “theater of the mind” as empirical, which you should, since that is ultimately the only source of any data about anything you have–if data in the theater of the mind is not empirical, then no empirical evidence exists whatever: see Epistemological End Game). Therefore all mathematical truths are ultimately empirical truths.
You can make a distinction about which mathematical truths apply in the “real world” (e.g. whether a twenty-dimensional geometry is ever instantiated anywhere outside computational “models” on paper, in the brain, or in computers). For instance you can build a mathematics based on silly ad hoc sets of axioms pulled out of a hat, and still make true and false statements about the resulting mathematics, even though they won’t be true or false about the world we live in. But what you will have done in that case is construct a set of propositions about potential realities. Which is still empirical (you are making predictions about what will happen if the world were rearranged so that your axioms were true in it). It’s just that all the empirical tests you would then run are run on a model, and not on a real system (simply for lack of a technical ability to do anything else).
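(A toy illustration of that point, mine and purely illustrative: arithmetic mod 2 is an axiom set under which “1+1=2” fails, yet its theorems are still true or false statements that can be empirically tested on a model, and this particular system even happens to be instantiated in the world, in parity and XOR gates.)

```python
# Illustrative: arithmetic mod 2, an axiom set under which "1 + 1 = 2" fails,
# yet whose theorems can be empirically tested on a model, and which happens
# to be instantiated in the world (parity, XOR gates).
def add_mod2(a: int, b: int) -> int:
    return (a + b) % 2

assert add_mod2(1, 1) == 0        # under these axioms, 1 + 1 = 0
assert add_mod2(1, 1) == 1 ^ 1    # and the model matches hardware XOR exactly
print("mod-2 arithmetic verified on this model")
```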
But that model is still run on a computer, such as, but not necessarily, your brain (see The God Impossible). Which is an empirical process. It can even be wrong (see Proving History, pp. 24-25, w. n. 5, p. 297; see also pp. 30-31, w. n. 8, p. 298, in light of what I say in Epistemological End Game; some relevant further discussion is in PH, pp. 257-65 in light of what I say in another blog, cited there: Our Mathematical Universe).
You might also want to explore what I’ve said about the ontology of numbers (and hence mathematics) not only in my blog articles Defining Naturalism and Defining Naturalism II, but in the comments following those, where the issue came up in even fuller detail (as I note in the first, this is all based on what I say about it in SaG, index “numbers, nature of”).
BTW, we confirm or falsify math all the time (just look at the history of Fermat’s Last Theorem for examples of both). So I assume you meant to say that when we confirm or falsify the claim that a mathematical model is actualized in the real world, we have not thereby confirmed or falsified the internal consistency of the model, only whether it is instantiated. Which is true. But that’s only because that’s all we were doing–not because we couldn’t do something else instead, like empirically test whether the model is internally consistent (and thus capable of being instantiated, if its components are). Just because that empirical testing goes on in the theater of the mind (“in a computer”) does not make it non-empirical. To the contrary, all the same perils attach (the testing we do in the theater of the mind to confirm the validity of a theorem can get results of verification or falsification; that test has to actually, and correctly, be conducted and its results observed in order to do either; and the whole process has a nonzero probability of being wrong about either).
Dear Richard,
I have written an account of why I disagree with your argument on my blog. A central chunk is:
“Carrier asks “Which of these premises do you reject?”, my answer is that it is the premise: “The moral is that which you ought to do above all else”, and I reject it because it is incompletely stated. As Carrier himself says: “all ‘ought’ statements are hypothetical imperatives”, they are “if, then” conditionals. Thus Carrier’s premise is incomplete without specifying the “if” goal, and thus begs the whole question. Once you try expanding that premise to include the goal, then it becomes either arbitrary or subjective.
“Carrier defends his premise saying:
“But, whether that thing is more imperative, whether you “really ought to do it above all else” depends entirely on what one’s goal is! Yes you can say that pursuing human well-being and satisfaction is *the* goal, from which morality derives, but that is an extra premise, an axiom, not something that either Carrier or Harris have derived from fundamentals.”
This is illogical. You are confusing the definition of moral with an actual application of the definition. Definitions are tautological by nature (so accusing them of begging the question is ridiculous and betrays a lack of understanding of how language works). When you get to reading my actual writings on this (as my article requests you do), you will not find me begging the question, but actually talking extensively about how to empirically verify the protasis (the “if” clause). You also seem to be conflating “arbitrary” with “subjective” (those are not the same thing), and seem not to be aware that all subjective truths are simultaneously objective truths (e.g. your subjective experience is verifiable objectively by a third party using an active brain scan). You also are conflating “premise” with “axiom” (those are not the same thing; an axiom can be a premise, but not all premises are axioms–some are empirical facts: like what goals you have!).
Hence:
That is actually my point exactly, and precisely what I argue in my chapter in TEC. Read it.
Our entire thesis (and what I prove in TEC) is that morality does not derive “from fundamentals.” It derives from empirical facts, namely empirical facts about you (your actual goals, which are the goals you would have once derived non-fallaciously from true facts about yourself and the world) and about the world (what the total consequences of different action choices really are).
You have therefore simply not responded to our thesis or our arguments for it.
Try again.
Your presentation of the argument is better than Harris’s, because at least your presentation is explicit. Nowhere does Harris lay out the premises and the logic, which allows him to say his critics keep missing his point. Well, the argument is in the eye of the beholder when the premises and the relations between them are only implicit.
I don’t agree with your argument, though. I reject the third and fourth premises. You could just as easily say a hypothetical imperative reduces to a categorical one, since if a desire is morally relevant, the reason a person desires X is that she assumes X is good and that that goodness is generally binding as opposed to being a matter of her personal taste which she could change on a whim. So the two kinds of imperatives might be biconditionally related: where you have one, you have the other, because they’re interrelated. Suppose there were absolute, divine commandments. Those would still have to be filtered through people’s desires and rationality for them to be followed, so wherever you’d have someone following an unconditional, absolute commandment not to kill a person, for example, you’d have the follower thinking, “If I want to do what God says is best, I’d better follow his commandment.” But to explain some pattern in the thousands of followers, moral realists would posit the unconditional fact that X is good. So a pattern in the hypothetical imperatives (in people’s desires and behaviour) would be explained by a categorical imperative, one that doesn’t depend on what individuals actually desire, so that not even all normal desires need be morally correct. It would be possible for whole populations to go astray, to desire the wrong things, morally speaking, because their desires might not be in line with what’s unconditionally best.
As for the fourth, I agree that people normally want to be satisfied with the way they’ve lived their life so that they have no regrets, but like the hypothetical imperatives I see this as extraneous to morality. If there are categorical imperatives (facts about what’s unconditionally right or wrong), it doesn’t matter what an individual happens to desire, since her desire won’t change the moral fact of the matter. Likewise, this self-centered question of whether we end up agreeing with or regretting our past choices isn’t especially relevant to morality. I assume there are a great many jerks that go through life causing other people all sorts of hardships, and whether through luck or through having a thick skin or a lack of shame, the jerks have no trouble looking at themselves in the mirror. That doesn’t prove they’ve made a single moral choice. (Take, for example, the boss David Brent on the UK show The Office.) And the fact that science could explain how they could have more efficiently fulfilled the selfish desires they happened to have also doesn’t show their desires are moral.
You’ll say that their desires would have changed had they been rational and had they gotten different information. This is the economist’s utopian view of human nature which doesn’t apply to the real world. In the real world, we’re usually irrational, as shown by cognitive science, and our desires are determined not by logic or science but by character, experience, and instinct. So maybe were David Brent perfectly rational and were he to learn how to get along with people more effectively, he would choose to be less of a jerk. But again, that’s not relevant to morality in the real world, because in that world no one’s perfectly rational.
So you’ll say that science can show us how to have the fewest regrets, by showing us what an ideally rational person would desire and how that person would try to fulfill those desires. That’s like saying Christian myths give us fictions about angels always doing the right thing. Those would be ideals that needn’t bind us, because we’re not teleologically related to them. Learning how an angel would live would be like a vagrant looking through a window of a sprawling mansion and seeing how a billionaire spends his time. Sure, the homeless person would rather be rich, but since the two are worlds apart, chances are he’d feel alienated from the billionaire, not inspired to take that master of the universe as his personal ideal. There are people who go from rags to riches, but like lottery winners they’re exceptions.
That would be the same thing. If a hypothetical imperative reduces to a categorical, it is still a hypothetical imperative. It doesn’t cease being so. So this is not a valid objection.
The question is why a person “assumes X is good” and whether a person really can change their fundamental desires on a whim. Those are empirical questions, not philosophical. So those are not valid objections here either.
The only way to refute premise three is to show there is at least one imperative statement that is demonstrably true and not a hypothetical imperative or reducible to one. You have not done this. Therefore you have no reason to reject premise three. (Especially in the face of the demonstrations I have published and refer all readers to; e.g. syllogism in TEC, pp. 360-61.)
The only way to refute premise four is to show there are people who prefer a less satisfying life to a more satisfying one (for reasons that are non-fallacious and based on factually true beliefs), among the lives actually open to them to choose, yet that would just show that what you call “a less satisfying life” is the more satisfying life to them. See here.
So you don’t really have any reason to reject premise four, either.
They would be meaningless unless we had a motive to obey them. Which reduces them to hypothetical imperatives. Hence all divine command theories reduce to a system of hypothetical imperatives. There is no escaping it. As I prove in TEC, pp. 335-39.
It doesn’t matter what you “see” things as. What matters is what they actually are. And this is one of those things: no matter what your intuition is telling you here, it turns out your intuition is wrong. Hence the syllogisms which disprove your intuition here in TEC, pp. 359-64. (For related discussion in the text: pp. 340ff.)
Note that you are confusing the ability to persuade someone of a fact, with whether it is a fact. The fact that you can’t ever persuade a creationist that their beliefs are empirically refuted and logically fallacious would not mean evolution by natural selection is false. Likewise, that human nature might make it impossible for someone to recognize that x is what they would really most prefer to do, does not somehow make that untrue–x is still what they would really most prefer to do, they just don’t see it, and can’t. But their not seeing it or not being able to does not change the fact of it.
I wrote several paragraphs in my article on precisely this point. Re-read them.
You should also read my chapter in TEC, as it answers your specific concern about the limits of moral knowledge and human rationality and how moral facts can actually accommodate them. But the last thing you should be arguing is “people are irrational, therefore there is no truth,” much less “people are irrational, therefore there is no point in discovering what the truth is so as to tell them.” If you really believed that, then you must believe the whole of modern science is a waste of time and should be discontinued at once.
Regarding premise 3, what I was trying to say is that the two types of imperatives are only correlated. If there were unconditional imperatives, they’d be useful only if someone wanted to follow them, in which case that person would have motives and there would be empirical work to be done to show how she might more efficiently satisfy her desires. But saying that two things are correlated isn’t the same as saying they’re identical.
You say a divine commandment would be “meaningless” if no one had any motive to obey it. This is like saying fictions are meaningless because they don’t apply directly to the real world. The divine commandment would be useless and perhaps merely ideal, like a mathematical system or a fiction that follows certain counterfactual rules to their limit. But uselessness isn’t the same as meaninglessness, unless we’re assuming some sort of verificationism.
Indeed, you should appreciate that latter difference, because the hyperrational economist’s ideal of rationality that your argument assumes is useless (fictional) but not thereby meaningless. I agree with you that, in principle, science can show how we can live so that we end up with a minimum of regrets. Science can even show us how to change our desires to help maximize our utility, or our satisfaction. But I don’t agree that morality must be rational.
You say that premise 4 is falsified only if we can show that someone might prefer a less satisfying life to a more satisfying one, but of course, given the nature of preference, this so-called less satisfying one would be the more satisfying one, after all. You’re thus taking the psychological egoistic line against the possibility of self-sacrifice (altruism). The fallacy here is assuming that because X is desired, therefore the desire is selfish. Suppose someone sacrifices her happiness by spending most of her time helping others. The egoist says she’s actually selfish and thus not sacrificing her happiness, because she does exactly what she wants; helping other people satisfies her.
But this needn’t be so at all. Her selflessness might be the Kantian sort: she means to discharge her moral duty, but she wishes the world weren’t so screwed up that her self-sacrifice would be morally necessary. So although she does desire to help others, she’s not acting to make her life more as she’d prefer it to be; she condemns the whole world that causes the suffering that moral people try to alleviate. And there’s no chance of science fixing all the natural causes of suffering, so the question of how to maximize utility is idle. A moral person will always suffer because she’ll feel empathy; her motive for helping others is itself a form of suffering.
The egoistic interpretation of her self-sacrifice thus fails to account for the fact that in one sense an altruist acts as she prefers whereas in another she might not do so. She can resent the fact that moral self-sacrifice is needed in the first place. She prefers to feed the hungry rather than let them starve, but that doesn’t mean she’s satisfying herself by helping them. On the contrary, every time she helps someone she suffers like Oskar Schindler in Spielberg’s movie, because she knows there’s always someone else in need.
So without playing the word game, I reject premise 4 because I reject psychological egoism. I don’t think all our decisions are ultimately about satisfying ourselves or being satisfied with our life. We’re not all so self-absorbed. In fact, hardly anyone’s that way. Indeed, I reject the economist’s instrumental view of rationality as pseudoscientific propaganda for consumerism.
I have to reiterate what I just said to Richard Wein:
Due to preparation for the Atheist Film Festival (and then the festival itself) I won’t have the time to read your comment carefully until next week. But at a glance it looks like what you are saying is precisely what I devote nearly ten pages to formally refuting in TEC (pp. 334-43). I suspect you need to examine what I say there before repeating arguments I’ve already refuted.
Richard,
This subject is interesting. Although I’ve never taken a philosophy course, I have taken courses in some related areas such as logic and ethics; and as a lawyer, I take continuing education courses in ethics regularly. I do own your book, Sense and Goodness without God (I’m about 2/3 through it) and have Harris’s audio book, The Moral Landscape (again, I’m about 2/3 of the way through it, as well). I am not prepared to argue for or against your position here, but I do have a few questions. I cannot address my thoughts and questions in philosophical jargon, and I hope that does not put off the philosophy majors here. Anyway, here goes:
Is the purpose of this proposal primarily to provide a counter to the theist’s argument that without (presumably their) god there can be no morality? Or is it to establish that a scientific inquiry into a non-biased method of evaluating “what is moral” is possible, plausible, or imminently achievable–a sort of quest for the morality algorithm, so to speak? Or something else entirely? I ask this because if the answer is other than “an answer to theists”, then I’m afraid I don’t see the point of the exercise.
The reason I ask is that at first read, I take it to be some form of the quest. In which case, I have the following examples on which I’d appreciate your take.
Moral Questions: Ought we all eat meat? Ought we wear garments made from non-harvestable animal products (wool=harvestable, pelts=non-harvestable)? If meat eating turns out to be moral after scientific analysis, does it make a difference if the animal suffers in its manner of death?
Let’s assume that eating animals other than carrion is found to be scientifically immoral. We don’t need to eat meat to survive, and as you point out, just because some of us (even if it’s 90%) like to eat meat, that doesn’t make it moral; it’s just popular. Assuming that is the case, I then postulate that it is moral to eat our dead and immoral to burn or bury them, because the only moral meat meal consists of carrion, and before we resort to killing innocent animals for the immoral among us who insist on eating meat, we morally need to exhaust the alternative sources of meat. If you bury your father rather than let me eat his carcass, you have not maximized the human/animal population’s happiness. Again, just because you have a preference for burying your father does not mean that it is correct to do so, only that you like doing it.

I can understand how this issue is situational, though. If we assume that primitive humans struggled to find enough food to survive and that meat eating provided an advantage in that it had higher caloric value than alternative food sources, then meat eaters would be selected. So at that time and place, meat eating made sense as a survival imperative. So by your reckoning, was meat eating moral because it was necessary, or was it objectively still immoral even if it resulted in increased human longevity and cognitive capacity? Or is it only immoral now because those need factors have been ameliorated? (I say all of this while scarfing down a sausage and pepperoni pizza, by the way…)
Within science, can it be immoral not to keep the genie in the bottle, if the parameters are such that the easily foreseeable consequence of certain scientific capabilities is to alter the nature of homo sapiens? When conversations with theists turn to notions such as the efficacy of prayer, they will often perform a two-step in avoiding the issue, saying that god gave man free will, so he doesn’t answer or act on prayers that would result in his will being substituted for, say, the Aurora, CO, shooter’s. What if the advances in brain science progress to the point where it is scientifically not only possible, but routine to “read” people’s thoughts? As your references to fMRI indicate, this ability is now (rudimentarily) entering the world. I see no technological barrier to expanding its capabilities to encompass the detection of emotions as well as simpler thoughts. Nor am I certain the hardware will always be as large and cumbersome as it is now, or that it will never work without physical contact or immediate proximity. From there, it is not very much of a stretch to envision manipulation of the physical processes that result in a given thought or perception.

If, or should I say when, that occurs, what will be moral? Do we live in a Minority Report world? Is humanity merely a colony? Does the good (at the survival level) of the species trump the individual’s freedom to act? If so, where is that line drawn, scientifically? Human cloning? Assuming the ability to tweak and re-engineer genes continues to progress, at what point do we make the decision to or not to construct Huxley’s Brave New World?

If science defines morals, how do they change over time? The algorithm approach is inexorable. Once science has “solved” a particular moral question, it is solved forever, right? Or if everything is situational, as you suggest, what weight is given to those situational aspects? What weight is given to conflicting values? How does one know whether all of the situational aspects and conflicting values have been perceived within the parameters of the algorithm? What is the threshold for changing the moral paradigm based on these factors?
If the primary reason for the inquiry is to “prove” that pre-supposing a god is not necessary to access the underpinnings of morality, then is this not overkill? All you would seem to need is to use Bayes’ Theorem to show whether, given the evidence that objective moral values exist, they are more likely to be the result of divine edict or the product of 40 or so thousand years of cumulative human experience, developed through the ages with insights gained from all manner of perspectives, including art, poetry, prose, philosophy, and science. Simply put, morality is made by humans for humans. When the theist posits god as a source of morality, he/she is merely substituting god (as a real object) for the concept of a transcendent perspective.
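(For what it’s worth, a minimal sketch of the form of comparison being gestured at here, with every number invented purely for illustration:)

```python
# The form of the suggested comparison, with made-up inputs:
# P(H|E) for H = "divine edict" vs. ALT = "cumulative human experience",
# given evidence E = "apparently objective moral values exist".
def posterior(prior_h, like_h, prior_alt, like_alt):
    # Bayes' Theorem when H and ALT exhaust the possibilities
    return (prior_h * like_h) / (prior_h * like_h + prior_alt * like_alt)

p = posterior(prior_h=0.5, like_h=0.1, prior_alt=0.5, like_alt=0.9)
print(f"P(divine edict | evidence) = {p:.2f}")   # 0.10 with these invented inputs
```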
As I said, this is an interesting topic.
No.
Both. (You framed your question as a false dichotomy.)
Most (but not all) of my chapter on this in TEC is about proving that “a scientific inquiry into…’what is moral’ [or to be more accurate: ‘how we ought to behave’; what you call it is irrelevant] is possible, plausible, and imminently achievable.”
But the rest is about the fact that even before we engage such a science, all talk about moral truth simply is talk about what people really want and whether their actions really will produce it.
Analogously, before the 20th century there was no scientific psychology. But all talk about psychology was still talk about what things produce mental phenomena and how they work, which is all talk about empirical facts. Thus all propositions about psychology then were in-principle empirically testable hypotheses about mental phenomena and their causes, some of which even then were more likely to be true (e.g. the brain produces a mind; there is a localization of mental functions across the anatomy of the brain; etc.) than others (e.g. psychic powers; disembodied minds surviving destruction of the brain; etc.).
The same is true now: even before we have a science of morality, if our thesis is correct, then it is still the case that all propositions about morality are in-principle empirically testable hypotheses about shared human desires and the cause-effect relations between human choices and outcomes, some of which even now are more likely to be true (e.g. honest and compassionate people, ceteris paribus, more reliably live more satisfying lives than heartless liars do) than others (e.g. abortion and homosexuality are always immoral; eating pork and answering a telephone on Saturday are immoral; men ought to treat women as inferior and subservient; killing people who criticize religion is a moral good; etc.).
On the killing animals question, see the discussion already had on this subject in comments above.
On the “what do we do when we can change our own fundamental nature” question, that’s for another time. As I wrote in the article you are commenting on:
Richard, would your ethical system lead you to disagree with John Stuart Mill, that it is better to be Socrates dissatisfied than a fool satisfied? Assuming the fool and Socrates have the same core values, would it not be moral for Socrates to give up philosophy and become a fool, if it would ultimately lead to greater life satisfaction? Would you here invoke what you said in The End of Christianity, note 36: “even that conclusion [that we ought to be irrational and uninformed] can only follow if we are rational and informed when we arrive at it.” Therefore, we can’t know that it would be better to be a satisfied fool unless we reached that conclusion from a non-fool perspective?
Presumably, Mill’s pig/human comparison is less difficult, because the values of a pig, and their capacity to feel satisfaction, differ so much from those of a human?
No.
Not only because the latter is impossible to rely on (a fool satisfied will soon be dissatisfied: because fools are by definition less able to achieve their goals and more likely to act self-defeatingly) and all but impossible to achieve (once you are no longer a fool, there is no going back; but all attempts to achieve your goals lead you away from the status of a fool, by educating you on the actual way of things).
But also because that ignores the first principle: the truth is not what the fool believes (as, being a fool, what they will believe will be fallacious or false), but what the truth actually is (which can only be what is arrived at non-fallaciously from true facts of the world–the very thing a fool, by definition, cannot access).
See TEC, n. 36, pp. 426-27.
See also the discussion of magic pill scenarios in my debate with McKay (linked in the article above).
The moral is that which achieves the most satisfying state of those available to you. Ignorance is so inevitably destructive of personal development and goal achievement that it can never be the most satisfying state of those available to you; and once you start down that path of realization, it is no longer a state that is available to you.
A better way to frame the matter is in terms of good and bad epistemologies. If you adopt a bad epistemology, it is statistically impossible to achieve the most satisfying state available to you (because your bad epistemology will continually prevent you from locating it or the means to achieve it or hold on to it). The only way to maximize satisfaction is to adopt a good epistemology. But a good epistemology will prevent you from having comfortable false beliefs (it will thus prevent you from becoming or remaining a fool).
You thus have to choose between living a life of weak and superficial satisfactions (being a child) or a life of greater satisfactions mingled with some disappointments (being an adult). For example, the pleasures of being an ignorant child pale in comparison to the pleasures of actually understanding who you are and the nature of your existence and the world. Thus, the satisfactions available to a fool are not comparable to the satisfactions available to a Socrates. That a Socrates might have to pay some tariffs for access to those greater satisfactions makes no difference to that comparison. The best Socrates is one who keeps the costs down–as much as they are able–while still getting those greater satisfactions. And when that becomes wholly impossible (and the costs are too great and there is literally no factually available escape) some people choose suicide (hence the euthanasia debate, as well as moral suicide, e.g. martyrdom and self-sacrifice for others) and they may be choosing correctly (so long as the facts are actually that way, and not merely misperceived to be…the danger of depression is that it subjects its victims to systems of false beliefs about life and the world, which if true could justify suicide, but being false, actually don’t).
Unless you have a different definition of well-being, this is probably not true. Many moral questions are a trade-off between values we hold as “moral” and well-being.
If I have cheated on my spouse, should I tell the truth or attempt to forget it (assuming that there is no other way for the spouse to find out)? The latter suggestion might increase everyone’s “well-being,” but it’s not necessarily “moral.” If you find that people, empirically, are happier when they don’t say anything, would that be the moral path? Would cheating on your spouse in a way that you don’t get caught be considered moral?
These are all empirical questions. They are therefore the kinds of things we should go and find out. They are not objections to there being something to find.
Thanks for your reply, Richard.
OK, here’s a shorter version. I dispute your claim: “The moral is that which you ought to do above all else” because I have no idea what the word “ought” as used in that sentence (and the paragraph following it in your post) actually means. I assert that either (1) it doesn’t mean anything, or (2) the sentence amounts to “the moral is that which is moral to do”. In the latter form your argument no longer links together.
[On minor points, the reason I wrote “arbitrary *or* subjective” was that I was distinguishing between the two; and yes, I agree and am aware that one can make objective statements about the subjective.]
If you don’t know what “ought” in that sentence means, then you have to read TEC. I give, and defend, the detailed formal definition there. It very definitely has a meaning, and it is not “moral.” Moral is a kind of ought. Not all oughts are moral oughts. You could argue that all true oughts are moral oughts, in the sense that all true oughts have to be moral things to do (as otherwise they could not be true, as a moral ought would supersede them), but that’s not true by definition, it’s true by consequence (the consequences of a statement using the word x being true are not synonymous with the meaning of the word x).
See the second half of my comments here and the second half of my comments here.
Very challenging and well-constructed post. I would just add one thing, which is that Philippa Foot is probably ignored not because she’s a woman (just look at the following that Hannah Arendt has), but rather because she’s only recently deceased, which in the philosophy pantheon represents a mystifyingly large handicap. Give it a couple of decades and I reckon she may begin to gain the recognition she deserves.
Not sure that holds up as a rule. I am not aware of any philosopher who has a field-wide following decades after their death who wasn’t just as famously paid attention to when alive and consistently thereafter.
For example, Ayer was at the center of philosophical debate in his day, and everyone paid attention to him even when they disagreed with him, and his work was immediately influential and fundamentally altered philosophy as we know it, and his influence has at best declined since then, not increased.
There is not a single woman philosopher in the history of philosophy about whom that can be said.
And it’s not for lack of worthy contributions. There are proportionately fewer, because women philosophers have been proportionately less numerous. But the influence and prestige that would normally correspond have not followed. Foot is just one of the most astonishing examples of that.
I’m not denying that these are empirical questions. My question was: if, empirically, you find out that everyone is overall happier/more satisfied (not just yourself) if you cheat on your spouse but do it well enough not to get caught, would that be “moral”? I.e., is happiness/satisfaction/well-being the overriding factor in all moral questions? A second problem I have is that empirical answers still do not imply an ought. Say taking a medicine to cure some pain has, e.g., a 0.1% chance of inflicting cancer. Should I take it? The empirical facts are clear: 99.9% of the time my well-being/satisfaction increases; 0.1% of the time I am screwed. What should I do?
The answer to the first question is yes. If that is what we found.
The only way you could argue against that is to produce evidence that the consequences of accepting that would be unacceptable (and the latter is a covert reference to human desires: in this case, we would mean the consequences to ourselves, e.g. our consciences, directly and/or the consequences to others in turn causing their behavior to then have consequences upon us that we would not like).
But if you could do that, then “that is what we found” would be empirically falsified by the evidence you produced (which the study you are confuting must have missed), and you would thus have produced the correct scientific conclusion in the matter.
The second question is a question in risk management (see Wikipedia, Risk Management Magazine, Risk Management Journal, the Institute of Risk Management, the Risk Management college major, books on Risk Management, and so on). Those kinds of questions are answered routinely across all of medicine, engineering, economics, industry, government, and military operations. And they are answered empirically. Insurance companies have perfected entire systems of analysis for this kind of thing. You also do this calculation routinely a hundred times a day (e.g. the probability of dying in a car accident vs. the desire to drive to the grocery store; the probability of your roof collapsing vs. the desire to remain indoors; the probability of a poisonous viper crawling up the sewer lateral and killing you on the toilet vs. your desire to evacuate your bowels in a sanitary fashion; etc.). So you obviously can answer the question. You do it all the time. Even with imprecise, non-scientific data (toilet vipers have a nonzero probability, yet I doubt there is any scientific study of their frequency–although, to carry the analogy through, that frequency is still an objective empirical fact, whether we know it or not).
Notably, making decisions like this rationally and informedly is one of the aims of Julia Galef’s company CFAR.
In your specific case, you can simplify the question by asking another:
If there was a 1 in 1000 chance your car would explode, would you get into it?
I am fairly certain your answer would be no (I’d have a hard time imagining any rational argument that it should be yes, all else being equal, since 1/1000th the value of a remaining average-trajectory life without the use of a car is still a pretty high value outcome to give up, indeed in Pascalian terms, given that there is no afterlife, it has an extraordinarily high value even when cut to a thousandth).
So that would be your answer in your scenario. Easy.
What probability of a car exploding (or plane crashing or getting shot in the streets etc.) would you deem acceptable to bear in order to enjoy the benefits of using a car (or flying or walking around town)? It will be much less than 1 in 1000. Unless, of course, your life is so dangerous that not using a car bears an even greater risk of death.
If we imagine instead the alternative is a life of unbearable pain, then we are getting closer to rational suicide considerations and higher risk choices being acceptable and so on.
However you alter the scenario, you just change the variables in the equation, the simplest of which is [expected net gain or loss on outcome x] × [probability of outcome x] vs. [expected net gain or loss on outcome ~x] × [probability of outcome ~x] (and that’s a simple version: science has empirically determined how to deal with all kinds of complexities in risk decision making, e.g. see this paper; indeed, science is far better at this than you or I probably are, and can greatly improve our ability to make choices like this).
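A minimal sketch of that comparison in code; every figure in it (the crash probability, the utilities) is an invented placeholder rather than a measured value, so it only illustrates the structure of the calculation, not any real risk analysis:

```python
# Compare two options by expected utility: drive (with a small chance of a
# fatal failure) vs. stay home (forgo the benefit of the trip).
# All numbers are illustrative assumptions, not real data.

def expected_utility(outcomes):
    """Sum of utility * probability over (utility, probability) pairs."""
    return sum(u * p for u, p in outcomes)

p_crash = 1 / 1000           # hypothetical probability the car explodes
value_of_life = 1_000_000    # placeholder utility of the remaining life
value_of_trip = 10           # placeholder utility of making the trip

drive = expected_utility([
    (-value_of_life, p_crash),     # lose everything if the car fails
    (value_of_trip, 1 - p_crash),  # enjoy the trip otherwise
])
stay = expected_utility([(0, 1.0)])  # keep the status quo

print("drive:", drive, "stay:", stay)
print("better choice:", "drive" if drive > stay else "stay")
```

With these placeholders, staying home wins at a 1-in-1000 risk, matching the intuition above; solving for the break-even probability (here roughly 1 in 100,000) quantifies the “what probability would you deem acceptable” question.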
Well, you are consistent, even though I suspect you see the problem: I believe you rationalise it away by thinking that the empirical answer will turn out to be different.
As to the risk assessment question: yes, of course we do it all the time. But is there an objective truth even when the empirical answers are in?
No one denies that science is a very valuable tool to inform such decisions.
However, I think I will buy your book 🙂
To the contrary. I actually suspect it might not be, or that it won’t be a simple yes or no answer. (Already I suspect polyamory is probably the most satisfying life for everyone, and those who are emotionally against it are like those who are emotionally against homosexuality or transgenderism or pornography, so we are stuck having to manage a certain form of enculturated sex-negativism in our mates and peers, which may require simply accommodating it until it culturally evolves away–which complicates the question of what to do now.) The point is that whatever the answer is, you don’t know what it is, without evidence that it is. And any argument you make for one conclusion or the other will appeal to either of two kinds of empirical facts: consequences of actions, and which of those consequences we (the one making the decision) would accept happening to us. That’s it. That’s the end of all sound moral discourse. There is nothing more anyone can say about what’s right that would be at all true. That may be a startling conclusion. But that’s what the conclusion turns out to be (as I show in TEC).
Yes. Whether we know it or not; and whether we can know it or not. Indeed, in an ideal future state of science, we could map your brain and clock your hormonal and neurochemical responses to different stimuli and incorporate that data into the map, and from that map alone know exactly how much risk you would find it most satisfying to assume. Combine that with the actual facts of frequencies and effects (the scientific measures of true risk) and we’d have an objectively true, and empirically determined, answer to all your questions of risk assessment. It would be fuzzy at some scale (just as our regular maps are always going to be fuzzy about where a coastline is, and all we can do is get more and more accurate, never 100% perfect, so we will always have to accept some grey area of uncertainty; but we’ll know where the land and the water almost certainly must be outside that grey area, and that is what we work with) and there would be decisions we won’t have the requisite data for (e.g. risk consequences or frequencies we haven’t yet scientifically measured), but then we are back to doing what we do now: making educated guesses about what those risk consequences and frequencies are, which are by nature guesses as to what they objectively actually are (that we can’t know for sure our guesses are right or how close to right they are does not change the fact as to what they are attempting to guess at). And when that’s the best we can do, that’s the best we can do. On this point specifically, see TEC, pp. 424-26, ns. 28, 34, and 35.
Just note that “my book” would mean SaG, which does cover all this at greatest length (though one could pursue it more by following my debate with McKay and my follow-up to that, with links to other blogs on various aspects of it, all linked in the article above). But the formal peer reviewed work, with the more rigorous arguments, the responses to criticisms of what’s in SaG on this point, and the addressing of all manner of questions about it, is all in another book, not SaG, but TEC, which is an anthology of several writers edited by John Loftus, in which I have three chapters, one on this subject. I just wanted to make sure that was clear.
I would normally recommend reading the chapter in TEC first, and the rest as referenced therein. That all the other chapters in that anthology are also awesome and interesting (though not on this subject) does add to its value. But the downside remains that it only contains the one chapter on this.
Hello Richard,
I haven’t read your book, so you’d be justified in reading no further. But still I’ll give you my response to what I’ve read here and in other blog posts you’ve written on the subject of metaethics.
It seems to me that you and Harris make basically the same error: you conflate moral and non-moral senses of words like “ought”. Hypothetical oughts are non-moral oughts, and to treat them as moral oughts is to get the meaning of moral language fundamentally wrong. It would help if you avoided the word “ought” altogether, talking instead about “moral obligation”, since that term is hard to conflate with non-moral language.
I won’t attempt to defend this objection further at the moment, since it’s based on a claim about the meanings of words, and such claims are very difficult to substantiate. But this brings me on to a more general point, that metaethics generally turns on disputes about the meanings of words. These are not matters which can be settled as easily as you seem to think, and certainly not by formal deductive logic.
Words get their meanings from how they are used, which is a matter of empirical fact. So inferences about the meanings of words must be empirical inferences, and are best seen as inferences to the best explanation of the evidence. But since meaning lies in our heads, the evidence doesn’t speak clearly, and requires difficult interpretation. We are forced to rely heavily on our intuitive grasp of words. This leads to widespread error, and to it being extremely difficult to talk people out of misguided beliefs about meanings. It’s particularly difficult if they don’t even accept that meaning is a matter for empirical inference, since then they’re particularly unlikely to be swayed by evidence!
If you base a premise of your formal argument on a false understanding of the meaning of a word, then this undermines the truth of the premise and the soundness of the conclusion, regardless of the formal validity of the argument. In philosophy, formal deductive validity is easy and usually uncontroversial. The difficult part of philosophy is getting right the non-deductive inferences by which we get to the premises, and particularly our inferences about the meanings of words. So I’m afraid I’m not at all impressed when you say that your formal argument has been thoroughly checked. You’re focusing your attention in the wrong place.
Traditionally philosophers have put far too much emphasis on deductive argument. I suspect this was because (starting from the ancient Greeks) they took maths and formal logic as the models for good argument. Maths and formal logic seem so precise and undeniable; who wouldn’t want arguments of that sort? But this encourages us to just accept the ultimate premises of arguments on the basis of their intuitive appeal, instead of thinking about evidence. The success of modern science has helped us to see a better model for knowledge acquisition: inference to the best explanation of the evidence. Since David Hume philosophers have been moving towards a more naturalised, scientifically-informed way of doing philosophy. But it’s been a very slow process.
My own approach to philosophy has two (related) planks. One is a very naturalised way of thinking, including a naturalised epistemology. The other is a Wittgensteinian understanding of language and meaning. As Wittgenstein saw, most philosophical errors arise from confusion over language. (“Philosophy is a battle against the bewitchment of our intellect by our language.”)
I would encourage you to subject your deepest moral intuitions to the most thorough skeptical scrutiny you can manage. In particular, don’t take it for granted that there are any true moral facts. (Facts like “X is morally wrong” and “you have a moral obligation to do Y”.) The existence of such facts seems to be presumed by your premise #1 above. I’m not going to make an argument for moral error theory (my position) here. I’ll just say don’t take anything for granted.
Due to preparation for the Atheist Film Festival (and then the festival itself) I won’t have the time to read your comment carefully until next week. But at a glance it looks like what you are saying is precisely what I devote nearly ten pages to formally refuting in TEC (pp. 334-43). I suspect you need to examine what I say there before repeating arguments I’ve already refuted.
I want to say, thanks for your lengthy replies. I’m giving it some thought.
Dear Richard,
Following our discussion above, I have now obtained and read your chapter in TEC. As a result I’ve now written a second blog post on your argument and where I agree and disagree with it.
I am not convinced that you have established an objective moral system. Addressing your formal proofs (appendix to TEC chapter), in Argument 1 you early on define the symbol “v” as “what we ought to obey over all other imperative systems”. However, I assert (as indeed do you) that “ought” statements need to be of the form “if one desires goal X then one ought to do action Y”.
In your definition of symbol “v” you don’t specify the goal of the “ought”. This means that I don’t know what “v” means, and consider its definition to be incomplete. And that means that I consider lines 1.2 and 1.3 of your Argument (involving v) to be invalidly stated. As a result I don’t accept that you have linked “what is moral” to “what maximises human well-being”.
I give a longer analysis of your chapter in my blog post linked to above,
Cheers, Coel.
Oh, good. I’ll take a look at that next week. I’ve been swamped with work, travel, and server outages this week and have to get to some other items this week instead.
But just replying to what you’ve said here, you seem to be confused. v has to be determined empirically. That’s the point. Argument 1 is not at all about what v is. It only proves that T is v…regardless of what v turns out to be.
Hence there is no mention of “what maximizes human well-being” in Argument 1. This therefore cannot be a valid objection to Argument 1. Indeed, I would call you to notice that the phrase “what maximizes human well-being” (or indeed even just “human well-being”) appears nowhere in any of my syllogisms.
Whether T is “what maximizes human well-being” is something that has to be determined empirically. My syllogisms are only proofs that T exists and can be empirically discovered (and as such will be the true morality). What T turns out to be is a wholly separate question, one which my syllogisms don’t endeavor to demonstrate, because it is precisely what those syllogisms prove science (or some empirical approximation to science) has to determine.
Basically, science would have to prove W (p. 361) is “what maximizes human well-being” (whatever that means; note that I take Harris to task for phrases like that being hopelessly vague) before we could say T is “what maximizes human well-being”.
But what T actually is is a separate question from whether T exists and is empirically discoverable. The Harris thesis is the latter; he does not claim to already know the former with scientific certainty, only that he wants to.
@DisagreeableMe
Amazon.co.uk is charging £8.04. Amazon.com wants $2.93 and Amazon.ca (from whom I purchased SaG) is only asking $3.03 CAD. I consider that a ripoff, but YMMV.
Since I don’t know everything that you know, then I don’t know how to answer that question. If you know how I could answer that question, then you know something I don’t know.
False dichotomy much? It’s possible that I was merely under-precise.
@ Richard
Not really. What the Right Hon. Me glossed over in his haste to call me wrong was that noot set the threshold for his intellectual courage at $20.
Assuming noot’s from North America is just playing the odds.
Richard: is there a writing of yours I can recommend, please, to my theist friends re: why objective moral values can be well grounded other than in a deity? I remember reading something about the scariness of bears and Superman…and a peer reviewed essay? (Umm, we don’t say “absolute,” since if they were absolute, then a God would be subject to them??)
This is something that often comes up in apologia. E.g. the host of Unbelievable, Justin Brierley, put to Richard Dawkins (who did not want to answer):
“If everyone evolved so that raping women was OK, then would it be moral?”
(As it happens, in Islam the dower price, the Muhr/sidaq, is usufruct payment for the cunnus. In the Shari’a the husband has dominion over it and can therefore legally use coercion to retrieve his property.
The Shi’a state this is true for temporary contracts of marriage too.)
http://wikiislam.net/wiki/The_Meaning_of_Nikah
Have you read “Heresies: Against Progress and Other Illusions” by John Gray? He’s a philosopher who argues the secularists/humanists have bought into the Judeo-Christian man-is-special guff, hence there can be nothing objective…
Thanks
Afzal
The “fearsomeness of bears” was an analogy another philosopher developed (in MDAP), which I cited in SaG and then used in my blog article Moral Ontology.
Re: the rape argument, that confuses evolved tendencies with right actions. Neither I nor Harris argue that the moral is that which we evolved to do, because evolution doesn’t care about our happiness, and thus is not a reliable guide for pursuing it. See Darla the She-Goat.
Re: Gray, sounds like drivel to me.
One of these days I am going to ILL your books to actually read them, so I understand that I am, for lack of information, unable to give a full reply. Yet, one thing you said in the reply to animal eating above made me wonder:
“Case in point: eating animals does not cause them suffering. Death is painless; in fact, it ends all pain and suffering”
There is one part of your project on which we already have settled science: the desires of dead people (they don’t have any).
But this is an apparent problem: how are we going to decide whether to kill someone? If we consider the hypothetical situation of the person being dead, that person’s previous desire to stay alive is irrelevant: there is nobody left who has this desire. Moreover, per your stipulation, an unanticipated painless killing does not cause suffering (a surprise bullet to the head). Hence, our moral calculations should output that if anyone can fulfill his desires better by someone’s death, this should be brought about. I.e., it would have been better if Hitler had quickly and painlessly killed all the Jews, the people who cared about Jews, the people who cared about the Jew-carers, etc. (that is, left no one but a group of self-loving Nazis and tribes isolated from the rest of the world).
I don’t mean this as an overblown knock-down argument, but as a technical calculation worry. These null desire values for dead people make it hard to calculate.
Let me phrase it another way:
My understanding is that morality will be what science determines as the fulfillment of people’s desires. So, theoretically, there will be one big formula that outputs what everyone ought to do next, based on the fulfillment of desires in that next moment. But dead people don’t factor in. Moreover, some potential actions are killings of people. Yet nothing about the people who might be killed (only the desires of others who remain alive) plays any role in determining whether it is moral to kill them. That just seems to get something about morality wrong.
It also raises the worry that the “best” thing to do might just be to nuke everyone.
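To put the worry in toy form (a deliberately naive aggregator, invented here purely for illustration; it is nobody’s actual proposal):

```python
# A naive desire-fulfillment aggregator in which only living agents count.
# Painlessly removing an agent removes their unfulfilled desires from the
# ledger, so the aggregate score can go up. Agents and numbers are invented.

agents = {
    "A": {"alive": True, "fulfilled": 2, "unfulfilled": 5},
    "B": {"alive": True, "fulfilled": 1, "unfulfilled": 6},
}

def aggregate_fulfillment(agents):
    # Dead agents have no desires, so they contribute nothing either way.
    return sum(
        a["fulfilled"] - a["unfulfilled"]
        for a in agents.values()
        if a["alive"]
    )

print(aggregate_fulfillment(agents))  # -8
agents["B"]["alive"] = False          # a painless, unanticipated death
print(aggregate_fulfillment(agents))  # -3: the score "improved"
```

On this naive model, painlessly removing any agent whose desires are on net unfulfilled raises the score, which is the nuke-everyone worry in miniature.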
I hope this worry is coherent and on point,
best
It’s a question of what you destroy when you do. And whose conscious will you are violating. And what sort of world you thereby create. And what sort of person you then become (and how you will feel about that). All four factors are actual real-world consequences. The desires of dead people (or the lack thereof) are irrelevant to all four.
The third consequence, for example, is a function of Game Theory: What standard would you allow yourself to be killed under? That’s the standard you have to embody or else you can’t expect it to be applied to you; otherwise, you are helping to create the standard you actually endorse (so if what you are actually allowing is what you would rather not be victim to, you should stop being quiet about it: the point well made on a separate subject by Australia’s Chief of Army Lieutenant General David Morrison, “the standard you walk past is the standard you accept“).
This won’t apply to animals because animals can’t reason about things like that. And that’s precisely what’s so awful about killing people: they know what that means; they have something remarkable to lose (themselves as persons, and their futures as knowing and experiencing persons); they are far more like you (as complex self-conscious persons sustaining and building an experiential identity), and can enter into social agreements with you (implicitly and explicitly); and you are capable of appreciating all the above.
“Art and aesthetics depend on the existence of conscious minds—and specifically on the fact that such minds have various forms of aesthetic experience in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of art and aesthetics must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem to be beautiful.” Agree or disagree? How does it differ from the statement under discussion here?
As I explain in my book (SaG, VI), opinions about what is beautiful are not demands on action; morality consists in demands on action (imperatives). That’s the most pertinent difference. It is possible that creating and preserving beauty (in some respects or fashion or other) would be a moral imperative, but that’s an empirical question in the moral domain, which cannot be answered by merely determining that something is beautiful.
On the science of aesthetics (which notably is already an ongoing science: the very thing Harris and I want to see happen to morality) see my article on music and the references there (as to my discussion of aesthetics in SaG and its contrast with morality).
Dear Richard, returning to this issue:
My objection is more basic than that. I’m not querying what it is that fulfills the criterion “v”, I’m asking what the criterion “v” even means. It is defined as: “what we ought to obey over all other imperative systems”, but I don’t know what that means because I don’t know what “ought” means unless referred to a goal.
Thus I can only interpret the definition of “v” as meaning: “what we ought to, in order to achieve goal ????, obey over all other imperative systems”. Yet the goal is not specified in your definition of “v”.
Can you produce a version of your Argument 1 with an explicit goal accompanying each use of the word “ought” (specifically in the definitions of v and M, and hence in lines 1.2, 1.3, 1.5 and 1.6)? Alternatively, if you’re not referring each “ought” to a goal, can you explain what the word means?
Cheers, Coel.
Then you need to read the syllogism that establishes what “ought” means. That’s on pages 360-61. The second syllogism.
In short, you are saying I do prove that v is T, but not what v is. Which is a valid point for that one syllogism, since it doesn’t claim to do that (but since it doesn’t claim to do that, your objection is not an objection to the validity or soundness of that syllogism, only to the applicability of its result). That is done in the subsequent syllogisms, starting with the very next one (which establish the applicability of what was proved in the first syllogism).
So you don’t have any logical objection here.
Hi Richard,
Sorry, I’m still unconvinced that Argument 1 proves this. Let’s take the first steps:
This looks as though it is defining morals as objective from the start. On what basis are you asserting that the moral imperatives must supersede all other imperatives? Why can’t you have a moral system that competes with or is subordinate to other imperatives? On what basis are we ranking different imperatives?
Presumably we “ought” to obey them because, by 1.1, they “supersede” everything else. Which brings us back to the basis on which you are asserting that they do supersede everything else. Whether we “ought” to obey them then depends on what our goal is, since which imperatives supersede which will (surely?) derive from that goal.
Or perhaps you are asserting that, given the set of systems of imperatives, one such system must always supersede all the others. If so, on what basis are you asserting that? (To me the relative ranking would seem to be goal-dependent.)
Or perhaps you are asserting that *if* there is one system of imperatives that *always* supersedes all others, then that system is what we call “moral” (or ultimately the “true moral system”, T). (And that we cannot have any “moral system” unless it has this property?)
I may be misunderstanding something here so would welcome your clarification about 1.1 and 1.2. [PS, my Kindle version of TEC doesn’t have page numbers.]
Cheers, Coel.
Because of Argument 2.
Certainly, if you want to talk about something else, something we ought not do, because we ought to do something else instead, and you want to call that “morality,” then you can (words can mean anything you want). But you won’t be talking about what I’m talking about, which is simply what we ought to do. (See pp. 348-49.)
But as soon as we admit what we are talking about is what we actually ought to do, then 1.1 is what we are talking about (by definition), and therefore v = T (and the latter is what is proved by argument 1).
Which is why my moral theory is called the Goal Theory of Morality.
Argument 2.
That is explicit. Read 1.1 and 1.2.**
If there is no v, then there is no m (and therefore no T). That would then prove moral antirealism (and establish that there are no moral facts at all, in the sense of things we actually ought to do).
** [Except for the “always,” since that is not required in Argument 1: morality can be situational and still satisfy 1.10, as explained in the body of the text–since “always” does not appear anywhere in argument 1; the “always” would only suit if it meant “always, when all relevant circumstances are the same”. Perhaps this is where you are hung up: thinking argument 1 proves moral absolutism, rather than moral realism conditional on there being a v? Otherwise, that there is a v is proved in arguments 2 and 3; that there is a universal v is proved in argument 4; that it can be empirically discovered is proved in argument 5. Argument 1 only proves that that v is then T.]
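For readers without the book to hand, the division of labor just described can be schematized as follows (a reconstruction from the definitions quoted in this thread only, using its symbols; this is not the wording of the actual syllogisms in TEC):

$$\text{Arg 1: } \exists v \rightarrow (T = v), \quad \text{where } v := \text{what we ought to obey over all other imperative systems}$$
$$\text{Args 2–3: } \exists v \text{ (there is a } B\text{, a system we have a sufficiently motivating reason to obey)}$$
$$\text{Arg 4: } v \text{ is universal} \qquad \text{Arg 5: } v \text{ is empirically discoverable}$$
$$\therefore\ T \text{ exists, and science (or empirical inquiry generally) can discover its content}$$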
Dear Richard,
I’ve finally had time to get back to this. Your last reply clarified your argument for me; however, I’m still sticking to my stance as above. Regarding your arguments 1, 2 & 3 (paraphrasing to some extent):
Arg2: If we’re informed and rational, we will seek that which maximises our well-being.
I accept Argument 2.
Arg3 (3.1 to 3.5): There is a course of action that we would want above all, given that we will seek what will maximise our well-being. This seeking to maximise our well-being motivates us to pursue that course of action.
I also accept this so far.
Arg3: This well-being-maximisation that we are motivated to pursue equates to the “true moral system”. This is obtained from 1.8.
Here I do disagree with 3.6, the imported 1.8. Thus, as I thought originally, Arg1 seems to be the crux of the matter for me.
Arg1: *IF* there are realist morals, this is what they are. Counter: OK, but that doesn’t establish that there *are* realist morals. I’m lost as to which line changes things from “If moral realism holds then …” to “moral realism holds”. Arg1 doesn’t seem to do this. Arg3 imports and thus depends on Arg 1.8, and the conclusion of Arg3 is that moral realism does hold (for an individual). But shouldn’t the “*if* moral realism” condition carry over to the conclusion of Arg3?
Second, I would dispute the idea (Arg1) that “if there is any moral system at all” then it must be a moral-realist one. I’d argue that non-realist moral systems are exactly what we’ve got and are entirely valid moral systems.
However, as above, my main concern with Arg1 is that it uses terms like “ought” and “imperative”, which are all about goals, but without specifying those goals. Thus I find it hard to follow or agree with the argument. It seems that the “goals” are specified later in Arg2 and Arg3. But I don’t accept that one can establish, as in Arg1, claims about “oughts” in the abstract, and only later fill in the goals. That’s because whether the claims are valid depends entirely on what the “oughts” and “imperatives” are referring to.
Another way of saying this is that one could have abstract “oughts” (lacking a specified goal) *if* moral realism held. Then there would be something that you really “ought” to do. But assuming moral realism is begging the whole question.
Thus, as above, I assert that to be valid Argument 1 needs to be written out with an explicit goal attached to each use of the words “ought” and “imperative”. Here is my attempt at that:
1.1: “if there is a moral system then it is a system of imperatives that supersede all other imperatives”.
To me this expands to either:
1.1a: “if there is a moral system then it is a system of {commands deriving from authority X} that {authority X ranks as being higher than} all other {commands deriving from authority X}”.
Or:
1.1b: “if there is a moral system then it is a system of {important things necessary for goal Y} that {are more important for achieving goal Y than} all other {important things necessary for goal Y}”.
I’m presuming that you intend the latter and that — perhaps — your “goal Y” is then “maximising human well-being”.
If that is your intention of 1.1 then I’d disagree with it, or rather, I’d assert that it can only be an axiom, a definition, not something that is being shown to be true. In that regard, this begs the whole question.
If you don’t intend “goal Y” to be “maximising human well-being” (or something close to that), then what is Goal Y? I don’t accept that it can be left unspecified here, since to me the assertion 1.1 makes no sense without an explicit goal.
1.2 “the moral system” is “what we ought to obey over all other imperative systems”.
I’d expand this to:
1.2a “the moral system” is “what we {ought in order to achieve Goal A} obey over all other {important things necessary for Goals A to Z}.”
At this point I’m unsure whether Goal A is the same as Goal Y from 1.1b (it hasn’t been shown). Again, this is because the “oughts” and “imperatives” are all unspecified. I’m guessing though that maybe the intention is that A = Y and thus we have:
1.2b “the moral system” is “what we {ought in order to achieve Goal Y} obey over all other {important things necessary for Goal Y}.” where Goal Y is again (is it?) “maximising human well-being”.
Or, in other words:
1.2c “the moral system is doing what will maximise human well-being”.
This is the end-claim of the whole set of arguments, but it seems to me to derive from being input as an axiom in 1.1, only that axiom is hidden because the goals of the “oughts” and the “imperatives” are only made explicit much later (Arguments 2 and 3).
Anyhow, that summarises why I demur from your claim to have established objective morals.
Cheers, Coel.
I don’t understand what you mean by disagreeing with the statement “the true moral system is that which we have a sufficiently motivating reason to obey over all other imperative systems.” That has to be true by definition, or else you are no longer talking about “morality” in any relevant way. If for example there is some other system of imperatives, call it ytilarom, that “we have a sufficiently motivating reason to obey over all other imperative systems,” then your “morality” (whatever you mean by that term) cannot be the true morality, because there is something we ought to do instead–and an imperative statement we should not obey cannot be true, by definition…that is literally what it means for an imperative statement to be false. So how can you have a “true morality” that consists entirely of imperatives that are false?
This doesn’t make any sense to me.
Perhaps you mean to disagree in the sense that there is no T. But if there is a B, there is a T, by definition. So you would have to reject B. But you just conceded you accept the arguments for B as sound (Args 2 and 3). So this means you must simply be objecting to the use of the word “moral system” for T, which is just semantics–there is still a system of imperatives we have a sufficiently motivating reason to obey, whether you call it “morality” or “ham.” Changing the name of the thing doesn’t get you out of it.
Yes. That condition is explicitly stated in 1.10 (complete with the word “if”).
It is Arg 3 that establishes the condition is met (by establishing that there is a B; Arg1 already establishes that B is m, as the definition of m, via s).
I’m not sure you are using the phrase “moral realism” correctly. Arg3 establishes moral realism. Are you confusing moral anti-realism with moral relativism?
It doesn’t have to. It doesn’t establish what those goals are. It only establishes that if they exist, then T exists. That they exist is then the job of subsequent arguments (e.g. Arg3).
You seem to be confused here. I never say any “ought” is true without a specified goal. In Arg1 I only talk about what is true *if* certain goals exist. If they exist, then they would obviously be specifiable, and thus “specified goals.” What the nature of those specifics might be is for science to empirically determine (Arg5).
Args1-5 only prove that there is something for science to discover. Not what it will be.
You don’t seem to be understanding this distinction.
First, the former reduces to the latter (so the distinction is moot; that’s the whole point of pp. 335-43).
Second, in the five syllogisms I state nothing about what goal Y will be, other than that it is discoverable by science (or, at least, empirically, if we don’t yet have the means to devote a fully scientific inquiry to the task).
The conclusion of Args1-5 is that there is a Y and it is empirically discoverable. It is not that Y is any particular thing–that would be circular. If Y can only be discovered empirically, it can’t be discovered by syllogism. Since my args prove Y can only be discovered empirically, my args can’t possibly be saying what Y is.
You seem to be assuming these arguments argue for some particular Y. I do not fathom how you got that notion.
Whether Y is “maximising human well-being” (whatever that means) is something that has to be determined empirically, and what my args prove is that the only way to do that is to show (with empirical evidence) that “maximising human well-being” is C (in Arg3; CH in Arg4; if there is no CH, there is probably still a C).
I suspect that isn’t what we would find, that C is a little more fundamental than that, and that “maximising human well-being” is just sometimes one way to achieve C (it would thus be, at best, a derivative value, not a core value), but sometimes not. But again, only science could tell (we’d need facts about human core vs. derivative desires, and facts about how the world, including social systems, causally operates).
Hi Richard,
Yes, that’s it. You are saying that *if* there is a “true morality”, in the sense of a moral-realist objective morality, then it has to be B. However, I dispute moral realism and objective morality themselves.
By definition of what? I can accept B without having to declare B to be T, in other words without having to declare B to be “the true (realist, objective) morality”.
Yes, I accept that there is a B in any given system. Let me give an example. If, while cooking, I accidentally put my hand on a red-hot hot-plate, then I have a sufficiently motivating imperative to withdraw my hand, which exceeds all other imperative systems. Thus (in that circumstance) I consider that I have a B. However, I don’t regard this as having any relevance to morals, and thus I don’t see any link between this B and T.
(I’m giving a specific example of B here because I usually understand things best by doing so; for a lot of the abstract definitions of Argument 1 I’m fairly unsure what they mean.)
It’s more that I’m objecting to using the term “the moral system” for B. Yes, you can call B “ham” or “morality”, but that doesn’t mean that B is cured pig meat, and it doesn’t mean that B maps clearly and straightforwardly to what people in general mean by “morality”.
One could *define* “morality” to be B, adopting axioms that T exists and that T is B, but that’s not the same as arguing that “given B, therefore T”.
Arg1 doesn’t establish that B is m, it argues that *if* m exists then B is m. As in, *if* there is a true and objective moral system then it has to be B. But it doesn’t work in reverse.
To make a comparison, suppose someone claimed that there was *objectively* a world’s best novel. They could then argue that the only sensible interpretation of that claim is that the “objectively best” novel is the “most liked” novel (let’s assume we can evaluate “most liked”). Thus, *if* there were an “objectively best” novel then it would have to be the “most liked” one (it being any other would not be sensible).
But, one could then fully accept a “most liked” novel and yet still reject the notion that this was the “objectively best” novel if one held that the concept of an “objectively best” novel was ill-founded. At least, one could hold that this only worked if one *defined* “objectively best” as “most liked”, and thus that adding the label “objectively best” doesn’t add anything to the real description, which is “most liked”.
In the same way I fully accept B, but don’t accept that calling it T (the “true moral system”) adds anything to the much clearer description of stopping at B. Equating T with B only makes sense if you do it backwards, defining T as being B. And sure you can add an axiom to that effect. But that doesn’t mean that others have to accept that axiom and accept the concept of a “true moral system” as being meaningful, any more than they must accept the meaningfulness of the concept “objectively best novel”, even if they grant a “most liked” novel.
That’s just semantics. It’s like saying “I can accept that small primates exist without having to declare that they are monkeys.” But “monkeys” is what they are called. You can’t avoid the claim that monkeys exist by insisting you don’t call them monkeys.
True moral propositions are propositions you ought to obey. That’s B. So you can’t avoid B by insisting you want to call it something else than T.
At the end of the day, you still ought to obey those propositions. (And it is irrational and self-defeating not to.)
Dear Richard,
Well, we’ve likely got as far as we’re going to get with this. I’m still unconvinced by your argument, essentially because when you say:
I still don’t know what you mean by the word “ought” as used in those sentences, but would assert that one can’t have abstract “oughts” with unspecified goals, and thus that those sentences don’t mean anything.
I do agree that we often have a “that which we have a sufficiently motivating reason to obey”, for example the withdrawing of a hand accidentally placed on a hot cooker, and thus a B; but I don’t see that withdrawing a hand accidentally placed on a hot cooker has anything to do with what we call morality.
Thanks for the discussion, it’s been interesting to talk to someone actually trying to defend moral realism,
Cheers, Coel.
As long as the specific goals are implied you can. We often use imperatives that way (“you ought to do something about that check engine light” does not require stating the goal; it is understood, and if queried, can be supplied–in the moral domain, that is what the entire field of metaethics does, and metaethics is what I am saying science could strongly help with).
In the syllogisms I set up, the goals are entailed within them and specified by empirical discovery. That’s the whole point: to discover the superseding goals, thereby discovering the superseding imperatives. That is precisely what a science of morality does.
As to the meaning of “ought,” I explain that in detail in the chapter itself. See in particular note 35 (p. 426) and Argument 2, with the syllogism on p. 349. All ought propositions convey the hypothesis that you will do x when you are aware of the relevant facts and making decisions rationally in accordance with them.
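Schematically, that analysis of “ought” amounts to something like the following (a compressed paraphrase of the claim just stated, not the formal definition given in TEC):

$$S \text{ ought to do } A \;\approx\; \big(\text{Informed}(S) \wedge \text{Rational}(S)\big) \rightarrow \text{Does}(S, A)$$

The moral ought is then the superseding instance of this schema: the A that S would choose over all alternatives when fully informed and rational.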
As to your final example, it may be morally obligatory not to withdraw your hand, if you are physically able to choose (e.g. when doing so will result in you and someone else falling to their death). That is the part you seem to be overlooking. Morality is the superseding ought. And even when it is merely to reduce pain and injury to yourself, that is still a moral thing to do.
Which is why smoking is immoral. It’s just not the kind of moral failing that people care to police as much, because we are more concerned with immorality from a person who can harm us or those we care about; but that is merely a fact of how people prioritize their concerns. In actual objective fact, smoking is still immoral, and we merely tolerate that kind of immorality more because it is not outwardly vicious or callous, and we are more damning of the outwardly vicious and callous, for sound practical reasons. But being “slightly immoral” is still immoral. That a particular immorality is tolerable or forgivable, or that there are much worse immoralities, makes no difference to that fact.
Most moral systems in history recognize this. They merely derive morals from false facts or irrational inferences (and thus things as bizarre as masturbation or drinking alcohol or dancing become “immoral” for no empirically sound reason). But the concept of taking care of yourself being a moral imperative is present in nearly all moral systems humanity has ever contrived, so it shouldn’t surprise you to find that an empirically true moral system would arrive at a similar outcome, albeit one empirically and rationally sound. It’s even in Asimov’s Laws.
Hi Richard,
Note 35 says (paraphrasing): S morally ought to do A if A would lead to what S wants, such that S would indeed do A if rational and informed.
If this is an axiom (a definition of what “morally ought” means) then I would disagree with it, or rather assert that it isn’t what I understand by the term “morally ought” (and, I submit, not what most people understand by it). [If this isn’t an axiom, but is something you consider derived from other axioms, then I would likely dispute your usage of “morally ought” that leads up to it.]
Further, I don’t agree that “All ought propositions convey the hypothesis that you will do x when you are aware of the relevant facts and making decisions rationally in accordance with them”. Again, that isn’t what I understand by a “moral ought”.
In other words, I accept B but do not accept that it is T. You say that my rejection of this is semantics, that it still is T whether I accept it or not. But that is only so if you accept the above statements, which essentially define T as B, which I don’t.
Agreed. And, as I see it, it is the consequences for someone else that turn it into a moral issue. Because, to me, morality is about inter-personal relations. It’s not about what is good merely for me. Thus, my withdrawing my hand after accidentally putting it on a hot cooker is nothing to do with “morality”, despite being a B, unless it has consequences for someone else (e.g. if burning my hand means I couldn’t work and my children starved, then deliberately leaving it there would be immoral; if the only one harmed is myself then it has nothing to do with morality, as I see it). This is one example of why I don’t accept that a B is necessarily a T.
If what would please and satisfy me hugely is seeing every Shakespeare play at Stratford before I die, then “oughts” follow from that goal, but again I would not see them as “moral” oughts.
It is the conclusion of the paper.
Starting with a definition of moral as that which you ought most to do, and then moving through the syllogisms at the end.
Again, it does not matter whether you object on semantics. Semantics have nothing to do with reality. You can call it omphalouhg instead of morality…and it will still be what you ought most to do. And therefore whatever you then call morality will simply be false (it will be what you ought not most do).
That’s the bottom line.
And there is no escaping it. Semantics can’t get you out of reality.
Unless you want to split the set…what I call “morality” into two sets, each with a different name. Then you are simply agreeing with me on every relevant detail, and just requesting a proliferation of terminology for some reason to refer to the same things I am. To wit…
False dichotomy. Those are not mutually exclusive. And when they are, any morality thus derived is factually false. That is what my syllogisms actually conclusively prove.
What you seem to be doing is using a “defining convention” (see Sense and Goodness without God, pp. 316-17), which has nothing to do with what’s true, but merely what sounds you utter as lexical code to refer to it.
There is a set of all things you ought most to do. You want us to split that set into two subsets, and utter the sound “morality” when referring to one of those subsets (that which pertains in any way to effects on others) and utter some other sound (“omphalouhg” would do as well as any) when referring to the other subset. For no demonstrably useful reason. Just because.
That’s a little silly. But if you want to insist on being silly, it doesn’t change the reality that you ought most to obey both subsets, because the two subsets by definition never conflict and contain only true propositions. That’s what my syllogisms entail.
What you “call” each subset is thus wholly moot.
As a Bayesian, you have probably not only heard of Nassim Nicholas Taleb, author of The Black Swan, but might also be intimately familiar with his work. Anyway, Mr. Taleb considers John N. Gray “the greatest living thinker” according to his professional and home page. You might do well not to underestimate Mr. Gray, but that’s your own call.
That sooner makes me question the judgment of Taleb than admire the abilities of Gray. Gray has a bad reputation as an illogical anti-humanist out of touch with reality. He sounds like someone more in love with his own ideas than in getting things right. In Taleb’s defense, I think he was only referring to Gray’s work in political theory and economics, not his work attacking humanism or moral realism.
Every man is entitled to his own opinion of other men’s work, even to (over-hastily?) criticize the character of other men. You, a superstar of ancient intellectual history, have more entitlement than I to judge. My esteem for Gray is because I think that Taleb is the preeminent superstar in the intellectual firmament, one of the supremes of brilliance and genius. If Taleb has high enough esteem for Gray to call him prophetic, the modern thinker for whom he has the most respect, and the greatest living thinker, that’s good enough for me at present. If you are comfortable dissing Gray as illogical, anti-humanist, and out of touch, that is your prerogative. I think you might be making a grave mistake.
In your reply in this thread (#34.1), you wrote of one of our greatest fears, viz.: What if I am wrong about how I ought to think and behave? How many human beings are so out of touch with reality, refusing to face their fears, as to not even give a second thought to “oughts” in terms of thinking and behaviour?
Modern prophets poets artists journalists etc are routinely disrespected and ignored by the masses because of the great fear that you mentioned. Is this a lively fear for superstars or only for peons? Do the elites have nothing to fear?
Perhaps the atheist (and I count myself among the “no religion” and skeptical-toward-spirituality numbers), in proving to himself the non-existence or the secular nature of the numinous, is denying the fear of how he ought to think. Perhaps you contradict yourself, Richard. On page 8 of Proving History you state, “[Believers] need Jesus to be real; but I don’t need Jesus to be a myth.” I wonder. Why the work to prove an ahistorical Yeshua, establishing the historical factualism of mythicism? Perhaps it’s a way of denying the fear of how one ought to behave, as in ought-not-to-be-a-follower.
What does this have to do with Taleb and Gray? It just allows us to be part of the masses, ignore or let a better man judge a “prophetic voice.”
Re: Gray and Taleb, you still seem to be conflating Gray’s writing on humanism with Gray’s writing in economics and politics. You completely ignored my remark that Taleb’s comment seems to be referring to the latter rather than the former, and I only found serious fault with the former, not the latter (which I have not explored). You started this thread by conflating those two things. And now you are still conflating them (this time with a dose of rather uncomfortable hero worship) as if I had never pointed out that you might be doing that. This does not look like a productive conversation to me.
Your remaining commentary about the fear of being wrong doesn’t seem to have any relevance in this thread at all. It does not respond to or contradict anything I have said. So I can’t fathom what it’s a comment on or what your question is meant to be.
The only premise I’m not certain about is number 4, unless you’re dealing with how Mises defines desire. Would it not be possible for a human to be in a state of even extreme dissatisfaction and be aware that their choices lead them to dissatisfaction but be impelled by an aberrant process in their decision making to continue making such choices?
Aberrant processes by definition cannot be normative.
Massimo Pigliucci has recently blogged about the “morality as a science” debate. He summarizes his position thusly: “ethics is about reasoning (in what I would characterize as a philosophical manner) on problems that arise when we consider moral value judgments. This reasoning is informed by empirical evidence (broadly construed, including what can properly be considered science, but also everyday experience), but it is underdetermined by it.” He then gives examples of ethical questions that he thinks are underdetermined by empirical facts. For example, whether felons should regain their full rights as citizens (e.g. the right to vote) after serving their time: “One can’t just say, ‘well, let’s measure the consequences of allowing or not allowing the vote and decide empirically.’ What consequences are we going to measure, and why?”
Richard, I think what you’ve said is most useful when thinking about personal moral decisions. If my decisions are ultimately based on my fundamental desires, I just have to figure out what those desires are to act accordingly. It’s less clear to me how this helps for moral decisions involving other people, that only affect me very indirectly, such as Massimo’s example above. I’m not a felon, so the issue doesn’t affect me directly. I could imagine someday being in this situation and, for example, desiring to have my right to vote restored to me, but imagining a hypothetical case might not necessarily impact on my desires in the here-and-now. So how would I arbitrate this question, using your metaethics? Would you say that, in this case, there is no right or wrong ethical answer for me, because my own satisfaction is not impacted? Or would you say that reflecting on the hypothetical case (of someday being an ex-felon, and being denied the vote) should motivate my desires in the here-and-now?
Massimo’s 5th example seems even more removed from personal desires and satisfaction. Should the current generation apologise for the actions of previous generations? Are the descendants of victims of injustice owed an apology for said injustice, although they have not been personally affected? (Think of reparations for slave descendants in America, for example) What sort of empirical information would you employ in answering cases like this?
As usual, Pigliucci’s remarks are illogical. One could say that of every moral question whatever: “What consequences are we going to measure, and why?” Well, all of them, Dr. Pigliucci. The more we do, the more informed our decisions will be. Certainly, when we know a certain subset of consequences to be too slight or too variable, we don’t have to explore further (since then we know we don’t need to), but when we know there is a subset of consequences and don’t know whether or not they are grave, we ought to be especially concerned to find out…and not remain willfully ignorant.
Moreover, that there may be questions we can’t answer is no more a problem for moral science than it is for any other science. Can you imagine Pigliucci arguing that science can’t tell us anything about biology because there are questions in biology we can’t now answer and might never be able to? That would be illogical. Yet that is exactly the boneheaded argument he is making here.
It’s even worse that he is basically arguing “Science is hard; so let’s not do any.” Really? “What sort of empirical information would you employ in answering cases like this?” is precisely what science is for: answering questions exactly like that. To say it’s hard therefore we shouldn’t do it is just ridiculous. But typical of the bad reasoning I’ve come to know Pigliucci for.
As to how science can help all of us this way, read my chapter in TEC. As to the question of whether you should entertain hypotheticals in order to understand the real ethical dilemmas of other people (a basic function of empathy), you really shouldn’t have to ask. Obviously the answer is yes. Otherwise we’d all be psychopaths.
OK, but in taking that starting point: “Starting with a definition of moral as that which you ought most to do”, I do not understand what you mean by the phrase: “that which you *ought* most to do” as used in that sentence.
Every time I ask you what it means, you tell me the conclusion you arrive at by the end of the argument (namely: “S morally *ought* to do A if A would lead to what S wants, such that S would indeed do A if rational and informed”). But if that is not input as an axiom, as the meaning of the *ought* at the beginning, then I do not understand what “ought” means in your starting point, which is why I am withholding consent from the first few lines of Argument 1.
Or, to put it another way, your starting point seems to me to be: “starting with a definition of moral as {some unspecified and undefined concept}”, or “starting with a definition of moral as wibble wibble wibble wibble”. It seems to me that the whole issue is this very first definition of “ought” in the moral context, and that what you are effectively doing, by leaving “ought” undefined early on, amounts to inputting an axiom defining “ought” as “S morally *ought* to do A if A would lead to what S wants, such that S would indeed do A if rational and informed”.
Now, I could (in reading your starting point) supply *my* interpretation of the word “ought” as used in moral contexts, but under my interpretation of it your resulting argument does not hold.
Cheers, Coel.
Then you are asking for the ontology of imperative language…so you must not know what it means when I say a surgeon ought to sterilize her instruments, because you don’t know what “ought” means.
So you seem to be talking in a circle. You claim not to know what “a surgeon ought to sterilize her instruments” means because you don’t know what “ought” means, but then you say you do know what “ought” means (“that which must be done to achieve a goal”), and yet you still ask me what “ought” means, even though I’ve told you, several times, that that’s what it means, and even though you have agreed it means that.
So what on earth are you asking for?
On the one hand, I am using “ought” in exactly the same sense as in “a surgeon ought to sterilize her instruments.” So if you know what “ought” means in that sentence, stop asking me what it means, because you already know.
The only difference between “a surgeon ought to sterilize her instruments” and a moral ought is, as I have said repeatedly, that moral oughts are those actions that must be taken to achieve our principal goal. That means whatever we want most to accomplish at the time, which in turn means what we would want if we were rational and sufficiently and correctly informed, not what we just happen to thoughtlessly want at any random moment, or what we want based on false information or fallacious reasoning. Surgeons cannot achieve their goals by doing what they thoughtlessly want at any random moment, or by acting on false information or fallacious reasoning, and neither can a moral agent.
And when those two align, “a surgeon ought to sterilize her instruments” is also a moral proposition (e.g. it would be immoral for a surgeon to not heed that imperative unless she had an overriding reason to…e.g. if she can’t sterilize but the patient will certainly die if nothing is done, ad hoc surgery would be the moral imperative, and ignoring the imperative to sterilize would be the moral imperative).
Ontologically, this does reduce all imperatives to straightforward indicative hypotheticals: “a surgeon ought to sterilize her instruments” means “a surgeon who is rational and informed and desires a favorable outcome will sterilize her instruments.” And thus obeying a true imperative is simply confirming that you are, or act suitably like, a rational and informed person with the requisite desires. Which is generally what rational and informed people desire.
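To put that reduction schematically (a sketch in ad hoc notation of my own; none of these symbols appear in the book’s arguments):

```latex
% Sketch of the reduction of imperatives to indicatives.
% R(S) = S is rational and informed; D(S, G) = S desires outcome G.
\[
\text{``}S\text{ ought to do }A\text{''}
\;\equiv\;
\big( R(S) \wedge D(S, G) \big) \rightarrow \mathrm{does}(S, A)
\]
% Read: the imperative is just the indicative claim that a rational,
% informed agent who desires the favorable outcome G will do A. It is
% therefore true or false as a matter of empirical fact.
```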
This should all have been obvious to you from Argument 2. Note, for example, 2.3, 2.4, and 2.5.
As to why we define the moral as that which we ought most do (which is entailed by the outcome we most want), that is simply because the word becomes useless if you define it any other way. If you define the moral as something other than what we ought most do, then you are talking about a set of false propositions (imperatives that we ought not follow, because we ought to follow some other…the one we actually ought most to follow).
So why do you want to define “moral” as a set of false propositions?
Shouldn’t we only be interested in the set of true propositions? The ones we actually ought to follow?
OK, I don’t get it. Your definition of morality seems to include the notion that the moral thing for anyone to do is that which serves their rational self-interest, or fulfills their desires. But any moral system I’ve ever heard of is designed explicitly to prevent people from pursuing their own self-interest (however rational) when it threatens the interest of society as a whole. Are you suggesting some kind of American/Ayn Rand/Adam Smith thing which says that everyone acting exclusively in their own self-interest will ultimately result in a system where everyone is best served, because, to get what they want, they have to give others what they want? Nice in theory, but we live in a world where not everyone has the same drive for, or capability of obtaining, power over others, or the same regard for the interests of others. People with the wrong combination of these have caused great misery simply by acting in their own rational self-interest. Some have been taken down by people acting in theirs; others have got everything they wanted and died happy. I don’t know what you are talking about, but I don’t think it’s morality. PS: Just why, under “objective” morality, is smoking immoral and drinking alcohol not? That just sounds like your own personal preference (which makes sense, as that seems to be the basis of your whole theory anyway).
Read pp. 343-44 of TEC.
For more on the same point (esp. the difference between egotism and egoism, and between self-interest and selfishness, which are not the same things) read p. 347 of SaG.
Hi Richard,
I know what “ought” means there solely because, given the context, the goal is implicit. It’s simply a shorthand for “in order to maximise the chances of a healthy outcome … ought …”.
Yes. So Argument 1 goes:
1.1 If there is a moral system, it is a system of imperatives aimed at attaining Goal A which supersedes (supersedes because its imperatives better lead to Goal A) all systems of imperatives aimed at attaining Goals B to Z.
1.2 Given 1.1, “a moral system” is “what we ought to — in order to achieve Goal A — obey over all imperative systems aimed at attaining Goals B to Z”.
1.3 “What we ought to — in order to achieve Goal A — obey” = “that which we have a sufficiently motivating reason to obey”, the goal producing this motivation being Goal X (which we’re motivated to place above all other goals).
At which point I ask, what says that Goal A equates to Goal X? (If it doesn’t, then 1.3 is clearly false.) Or, in other words, what says that the goal of the ought in 1.2 is the same as the goal of the ought in 1.3?
What we have “a sufficiently motivating reason to obey” might have nothing to do with any “moral system” talked about in 1.1 and 1.2; it would only be the same if Goals A and X were equated, which hasn’t been shown. This 1.3 seems to me the crucial step that links the two, and it then gets imported into Argument 3.
By that I mean that I’m sometimes unaware of what Goal is being implicitly assumed, and thus unclear on what the long-hand version of the “ought” phrase is. I find your Arguments 1 and 3 hard to follow precisely because the goals of the oughts are not specified, and thus it is hard to keep track of the goals — which I see as essential in verifying that each line of the argument does work.
Yes, and it’s that link between “moral oughts” and “our principal goal” that I don’t think has been established.
Here I again disagree, the word “moral” can have a useful and real meaning without having to mean that. I suggest that you are simply adopting this equality as an axiom, and that others can validly dissent.
Here you are identifying the goal of morality with our principal goal. Again, this to me is an axiom, and one that I do not see as required (and one I do not accept).
To see this, let’s present an alternative. You and I are both atheists, but in principle a universe could have the property that “morality” equates to God’s goals or to Satan’s goals, and not to our human goals. In such a scheme your above statement would not hold.
Now, you might reply, isn’t it entirely arbitrary to equate the goal of morality with God’s goals or Satan’s goals? (Euthyphro.) To which I reply: yes indeed, but isn’t it equally not established to equate morality with human goals? Isn’t that either not established or an axiom?
I can entirely see why you do equate that, because any scheme for objective, realist morals fails if you don’t. Fine, says I, ditch both that axiom and moral realism. In other words, that identity seems to require the premise “moral realism holds” — which then begs the entire question of whether moral realism does hold.
As an aside, I reject labelling as “false” those propositions about imperatives that are not the ones that “we actually ought most to follow [if we are pursuing our principal goals]”. They are not necessarily “false”; they simply relate to some goal different from our goals. Nothing elevates our goals to “truth”.
Because I reject moral realism and would not apply either label “true” or “false” about moral ought statements. I would go no further than describing them and their goals.
Sure, we should (in pursuing our goals) indeed be interested in what leads to our goals. But I don’t see that putting a moral-realist spin on that achieves anything, and as I see it, it has a range of severe drawbacks of the sort that are usually advanced against realist morals (evaluating and aggregating over different people, for example).
Cheers, Coel.
And when the imperative you are looking for is the imperative that supersedes all imperatives, the goal is also implicit: it will be whichever goal supersedes all other goals.
Which goal that is is then an empirical question.
QED.
You have a simple choice: obey imperatives that are true, or obey imperatives that are false. That’s it.
Which do you want to obey?
If you choose to obey imperatives that are not what you ought most to do, then you are obeying false imperatives. Because there is then some other imperative you actually ought more to obey.
Therefore if you want to obey only true imperatives, you should only be concerned to empirically ascertain what you ought most do. Which follows necessarily from what goals you ought most obtain. Which follows necessarily from what goals you will most desire to obtain when rational and sufficiently informed.
That’s it.
Anything else is just false.
And why would you care about a false morality? Much less adhere to it…
Hi Richard,
There is no such thing as an imperative that always supersedes all other imperatives. The only way of ranking imperatives is with respect to a goal, and if the goal is changed then the ranking order will change. Thus “whichever goal supersedes all other goals” can only mean anything by *first* specifying the goal which you want to hold paramount. What Human A holds as their primary goal could well be entirely different from the goal from which a moral ought derives. Nothing requires that these be the same, unless that is adopted as an axiom.
I deny your choice. There are no “imperatives that are true” and “imperatives that are false”, there are only “imperatives deriving from Goal A” and “imperatives deriving from Goal B”, etc. Well, it seems like we’ve been round this circle a few times and are not going to persuade each other.
Cheers, Coel.
I didn’t say “always.”
Morality is situational, as I’ve said repeatedly.
Instead, there is, in any given situation, always an imperative that supersedes all other imperatives. (Notice where the adverb has moved.)
That is proved in Argument 3.
Not any true morality.
That is proved by Argument 1.
Certainly we can invent moralities that you have no superseding reason to obey.
Those are simply false. For the simple fact that you have a superseding reason to do something else instead.
When you ought to do something other than x, “you ought to do x” is false.
So again you have to choose: you can have a moral system every single imperative in which is false, or a moral system consisting of only those imperatives that are true (or a mixed system, but then that’s just the same dichotomy iterated for each imperative, and you still should not want any false imperatives in your moral system, so it comes down to the same question).
Argument 2 shows that if you do not have an overriding goal satisfied by an imperative A, then imperative A is not a true imperative (it is not, in fact, what you ought to do). A is then false. Period.
You can’t escape this by inventing moralities consisting of false imperatives. All you will end up with is a false morality.
The only way to get a true morality is to get a moral system consisting of true imperatives.
And Argument 2 proves those can only be imperatives that achieve your overriding goals.
And Argument 3 proves there is some such system of imperatives: actually true imperatives that you ought to obey over all other imperatives. Even if those “other” imperatives are for some reason called “moral,” they will constitute a morality you should not obey, because you ought to obey true imperatives and not false ones, and if an imperative to do X is true, an imperative to do anything other than X instead is by definition false.
Hi Richard,
I accept your qualification about situation, but it is tangential to the point I was making. I disagree with your claim just stated. I assert that there is “in any given situation an imperative that supersedes all other imperatives given Goal A”. Further, there is a different “…. imperative that supersedes all other imperatives given Goal B”, and ditto for all other goals.
The only way you could rank these to arrive at an overall “imperative that supersedes” is by first ranking Goals A, B … Z. And the only way you could do that is w.r.t. meta-goal Alpha. In other words you need to have specified a goal in order for the concept “imperative that supersedes” to be meaningful.
Now, one can adopt a goal, such as “What Person A most wants”, or “What Person A would most want if they were fully informed and rational”, or “What Person A’s cat wants”, etc. But selecting one of these goals for paramountcy is arbitrary (or it’s an axiom). I don’t see a basis for elevating one of these goals to the status such that imperatives about that goal are “true” whereas imperatives about other goals are “false”.
Further, I don’t see any basis — except as an axiom — for identifying one of these goals with the goal from which “moral oughts” derive (as it seems to me that you do in Argument 1 line 1.3, which then carries in to Argument 3).
Cheers, Coel.
That isn’t necessarily true. One doesn’t have to know why they prefer B to Z to know that they do. Thus there might conceivably be no single meta-goal, just a hierarchy of goals, full stop.
However, I do believe we can scientifically prove there is a meta-goal (if we set out the right research program). That meta-goal is greater satisfaction with yourself and your life (relative to alternatives realistically available). This can be demonstrated. See the Carrier-McKay debate on exactly that issue.
It’s not an axiom, it’s an empirical (indeed physical) fact about a person.
It can certainly be arbitrary (e.g. human sexual preferences are an arbitrary product of evolution with no objective basis beyond our happenstance biology), but it’s still a fact. And unlike an axiom, a fact can’t just be switched off or changed or swapped out. Suppose a person prefers B to Z, and no argument can ever change their mind on that, even when they are correctly informed and reasoning without fallacy. (That caveat is crucial: ignorantly or irrationally resisting a change of mind doesn’t count when talking about what is factually true. That a certain choice will, as a matter of fact, result in your being more satisfied with your life is an objective fact, irrespective of what you merely think or believe will happen.) Then that person simply prefers B to Z, and B-achieving goals are for that person moral goals, by definition (when for them B supersedes Z), because there will be no other true imperative for them.
The “point” of elevating B over Z is not that we choose to do that, it’s that it’s already been done: it is already an empirical fact about a person. Full stop.
And that is what science should get busy determining: what the value hierarchies actually are in people who reason rationally from correct facts. That is notably different from what those value hierarchies just “happen” to be in any given person or population, since most people don’t build their hierarchies rationally or from sufficient available information. So far, science has only studied the latter (the irrational or uninformed hierarchies people just happen to have), not the former (what the value hierarchies become when rational and informed), but it could, and should. The question one can then ask is whether there are any universals, or whether there are shared commonalities among groups of people; and if there are commonalities only among groups, then what properties distinguish one group from another (allowing you to correlate group-type with associated value hierarchy).
These are all empirical questions. All accessible to science, if it built the requisite research program. And no other imperative system is true but one that would result therefrom.
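To make the shape of that research program concrete, here is a toy sketch in Python; every name and ranking in it is invented, a stand-in for what would have to be empirical findings:

```python
# Toy sketch of the research program described above. The people and
# value hierarchies are invented; in practice each would be an empirical
# finding about what a person values when rational and informed.
from collections import defaultdict

hierarchies = {
    "person A": ["life satisfaction", "security", "novelty"],
    "person B": ["life satisfaction", "novelty", "security"],
    "person C": ["life satisfaction", "security", "novelty"],
}

# Question 1: is any value universally ranked first?
top_values = {ranking[0] for ranking in hierarchies.values()}
if len(top_values) == 1:
    print("universal top value:", top_values.pop())

# Question 2: which groups of people share an entire hierarchy?
groups = defaultdict(list)
for person, ranking in hierarchies.items():
    groups[tuple(ranking)].append(person)
for ranking, members in groups.items():
    print(members, "share", ranking)
```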
But in the meantime, the relevance of all this is that every single argument about what people ought to do (what moral system they ought to obey) is simply an attempt to approximate that scientific outcome. It is always an argument appealing to values the target already has (the assumption, for instance, that they will always prefer B to Z when rational and informed–which assumption may be false, but only when it is true will your claim that they ought to follow the resultant morality be true) and the relevant ways the world works (the assumption, for instance, that doing X in situation Y will always get a person B over Z–which assumption may be false, but only when it is true will your claim that they ought to follow the resultant morality be true).
This therefore allows you to test and analyze all claims that you ought to obey a certain moral system: You know that that can only be true if, when you are thinking rationally from sufficient available information, you will value the outcome of doing so more than doing anything else (which is two facts: what you value, and that the actual outcome will be what you thus value).
You can therefore query the reasons given for you to obey that morality: Do you really value the thing being claimed will result above all other outcomes? And if not, is that because you are reasoning about what to value fallaciously or from false or undemonstrated premises? And then if not (so you should responsibly do your best to be sure), then they are simply wrong, and the moral system they are recommending is false. But if, when rationally informed, you do agree that outcome is best in your view, the question then shifts to the other side of the equation: Will the behavior being recommended actually most readily achieve that claimed outcome, or will some other behavior (some other moral system) do so? If the empirical facts show it’s the latter, or fail to show it’s the former, then they are simply wrong, and the moral system they are recommending is false. But if the empirical facts show to any reasonable satisfaction it’s the former, then they are right, and the moral system they are recommending is true. And you will then agree it’s true, because by then you will see that when rational and informed you do want that outcome more than any other and obeying that system will get it for you more readily than any alternative.
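That two-part test is simple enough to put in code. A minimal sketch (the class and field names are hypothetical placeholders, not anything from the book; in reality each boolean would itself be the outcome of empirical research):

```python
# Minimal sketch of the two-part test described above.
from dataclasses import dataclass

@dataclass
class MoralSystemClaim:
    promised_outcome: str
    valued_most_when_rational_and_informed: bool  # fact one: the value
    behavior_best_achieves_outcome: bool          # fact two: the consequence

def ought_to_obey(claim: MoralSystemClaim) -> bool:
    """A recommended moral system is true for you only if both empirical
    facts hold; failing either one, it is false."""
    return (claim.valued_most_when_rational_and_informed
            and claim.behavior_best_achieves_outcome)

# Example: a system promising an outcome you do most value, via behavior
# that does not actually best achieve it, comes out false.
print(ought_to_obey(MoralSystemClaim("life satisfaction", True, False)))  # False
```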
I use Christian morality as an example of all of this in TEC (pp. 335-39), showing that Christians are agreeing with me: they claim there is an outcome everyone desires (and that is an empirical, testable–and thus falsifiable or verifiable–claim) and that following their morality will achieve it (and that also is an empirical, testable–and thus falsifiable or verifiable–claim…one that just so happens to be either demonstrably false or not demonstrably true).
All arguments for any moral system reduce in exactly the same way.
And that is what my syllogisms prove.
Hi Richard,
I entirely agree. I entirely agree that in any given situation there are empirical facts such as “What Person A most wants”, “What Person A would most want if fully informed and rational”, “What Person A’s cat most wants”, and lots more along those lines. And from each of those goals derives “oughts” and ranking orders of options.
The arbitrary step, the axiom, is in picking out one of these factually-existing goals and saying that that goal is what *moral* oughts are about, or that one of these goals (and thus also the oughts deriving from them) is “true” and the others “false”.
Cheers, Coel.
That’s not arbitrary if you define “moral” as “true imperative.” The only other option is to define moral as “false imperative.” So why do you want to define morality as a system of false imperatives?
As to whether imperatives are true or false, you agreed there are true and false imperatives, and what makes them true and false. “A surgeon ought to bleed his patient” is false and “a surgeon ought to sterilize his instruments” is true, and these are not arbitrary distinctions, and there is no difference for any other goal and action-consequence pair.
You seem to be confusing “true imperative” with “true goal.” An imperative is not a goal. An imperative is true when the action recommended actually has an outcome that is one’s goal (as opposed to an imperative that assumes you want an outcome that you don’t or that assumes the incorrect way to achieve it). The only sense in which goals are true is that they are factual, i.e. actually exist, and for goals you ought to have, that means goals that will actually exist for any individual who determines their goals by logically valid reasoning from all available information. As opposed to goals you don’t have, or that only exist for you when you reason illogically or from needlessly distorted information–which goals you should obviously not want (and it can be shown that you do have a superseding goal that is thwarted by deliberately preferring irrationally or ignorantly determined goals; the relevant endnote citation is upthread).
So there is nothing arbitrary here. If you want morality to be a system of true imperatives, then morality is simply, factually, empirically, actually that system of imperatives that is true for you. It cannot possibly be anything else…except a false morality. So, again, why do you want a false morality?
If you want a true morality, then it simply, factually, empirically, actually is that system of imperatives that follows necessarily from your superseding goals (which are the superseding goals you would have if you determined your goals rationally from sufficient available information, and not the superseding goals you might just happen to have as derived illogically or from bad information).
This shouldn’t be so hard to grasp.
Hi Richard,
That requires picking one goal to be supreme, and the labelling of imperatives that follow from it to be “true”. Both of those steps are arbitrary. Given several possible goals (“What Person A wants most”, “What Person A would want most if fully informed and rational”, “What Person A would want most if drunk”, “What Person A’s cat most wants”, etc), then I don’t see any method of ranking these goals except by adopting a meta-goal, such as Person A’s long-term well-being.
That only holds given moral realism and objective morals — and thus your system assumes this rather than showing it. The alternative is to not hold to moral realism and thus not assign any truth-value to moral imperatives.
Given a particular goal (the health of the patient), yes; given a different goal then possibly not. The truth or falsity of these cannot be established except by reference to that goal.
Agreed. And thus the truth or falsity depends on the goal.
Agreed. And many goals factually exist (see above list). To thus arrive at a “true” imperative one first has to pick one of those goals.
Here you introduce an “ought” about which goals “you ought to have” — you can only do this by introducing a superseding goal — perhaps the goal of Person A’s long-term well-being. Doing so is an axiom.
Why? What elevates “goals Person A would have if rational and informed” over “goals Person A has while being uninformed and illogical”? Your only way of ranking these two is from a meta-goal of Person A’s long-term well-being. Thus your system needs an axiom stating that.
That’s not at all obvious — unless we are taking an implicit goal of Person A’s long-term well-being. But if we are then we should say so.
Which I don’t, not being a moral realist, and thus not wanting to put a truth-value on them.
Cheers, Coel.
Not picking, discovering. What someone most desires is an empirical fact.
Reality often defies what people expect. So what you don’t see is not relevant to what is.
If people just rank values for no reason, if that’s the empirical fact of the matter, then it’s just the empirical fact of the matter. Whether it makes sense to you or not. Facts just are. They don’t conform to your expectations.
But I suspect people rank values by the meta-value of life-satisfaction. Why I suspect that (and thus predict it is what we would discover to be the case through any properly constructed scientific research program) is explained in the Carrier-McKay debate. As I said.
Impossible. If imperatives are empirically true, they are empirically true. So you don’t have the option to claim an imperative that is empirically true is false. You can no more do that than claim “surgeons ought to sterilize their instruments” is false.
Moral realism isn’t something I assume. It’s something I empirically discover.
As long as the moral is that which you ought most do, then moral realism is necessarily true. Because there is always something an agent wants most as an outcome, and therefore always a behavior that best achieves it. And no other imperative (labeled moral or anything else) will be true but that one (Argument 2). Therefore that is the only true kind of morality. All other moralities are false.
If you want to only call “morality” moralities that are false (!?), and want to call the true system of supreme imperatives (the things you actually in fact ought most to do) something else (!?), that is just a semantic game that doesn’t get you anywhere. Call it morphlegolpl. It will still be an empirical fact that you ought to obey it. (Per Argument 2, and thence.)
If you want to defend the truth of the imperative “you ought not to be rational and informed,” be my guest.
But if instead you are not being disingenuous, then I already answered this question. I told you: TEC, n. 36, pp. 426-27.
I see the cycle the two of you are in, and really think the two of you are talking past each other.
The issue, though, is that we aren’t really interested in what people actually DO value, but in what people WOULD value if they were properly interested in being moral. It’s certainly credible to say that most people rank values by “life satisfaction”, whatever that means, but the challenge that I think Coel is making is this: why should I think that THAT is what it means to be moral? Which leads to this:
And if you define “God” to mean “the universe”, then theism is at least empirically true. But I think that it would be reasonable for someone to question that definition of “God”, and I think what Coel is doing is questioning your definition of moral as “That which we most value.”
And the reason for this, it seems to me, is obvious. I and most people would roughly agree that if someone is going to be a truly moral person, the moral will be what they most value. So, I in fact insist that being moral is what I value most. Your argument then seems to be you coming in, taking that statement, and saying “Great! Then what is moral for you is what you value most, and we can figure that out empirically, so it’s all settled!”. Except that my answer is that that isn’t what I meant. What I meant was that whatever it means to be moral, that’s what ought to be my highest value. Thus, if we figure out what it means to be moral, and that implies that I should value B most of all, and it turns out that I actually do value A more than B, then I would argue that I ought to then change to value B most of all. It sounds like you’d try to argue that I shouldn’t change, but should change my definition of moral to imply that I should value A the most. Which isn’t what people mean.
So, what we need here is a definition of what it means to be moral, so that then we can decide what we ought to most value. Your definition of morality doesn’t work, because it becomes a vicious cycle: what I value most is to be moral, and to be moral is to seek to achieve what I most value, which is to be moral (I made a longer discussion of this on my blog from a longer comment that I didn’t post here). So we need to know what it means to be moral, truly, and it isn’t clear that that can be settled empirically. As Coel points out, once you define what it means to be a surgeon, or a scientist, or healthy, then you can set out the imperatives that are true in relation to that definition, but you have to do that definition thing first, and that’s what’s missing in these comments, and why you end up talking past each other: Coel keeps asking for a definition and one that’s justified, and you keep simply saying that it’s about imperatives without justifying why those imperatives are in any way moral or apply to a definition of moral.
I suspect that the reason Coel rejects moral realism is that he doesn’t think that the problem of defining what it means to be moral can be done in a way that isn’t either axiomatic — just assumed without justification — or arbitrary. I’m not as skeptical on that point as him, and so do think that moral realism is true. However, I don’t think you have a working definition here, let alone one that’s justified.
That can’t work. You can’t arbitrarily define “moral imperatives” and then somehow magically get imperatives that are true (i.e. that anyone actually ought to follow).
You have to first agree that you are only interested in calling “moral imperatives” imperatives that are true.
Then you go out and find what those things are. You don’t define them before looking and then declare them true, as if by coincidence you will find exactly what you arbitrarily declared from the armchair would be true.
Once you agree you are only interested in calling “moral imperatives” imperatives that are true, then it is necessarily the case that “moral imperatives” are “imperatives that supersede all other imperatives” (because all other imperatives are superseded by “imperatives that supersede all other imperatives” and therefore false).
Thus, Argument 2 gets you to Argument 1 and those together lead you to Argument 3, and then Argument 4, and then Argument 5 (all deductive proofs, which no one here has validly challenged: TEC, pp. 359-64).
Hi Richard,
This is the nub of where we disagree. I agree entirely with that second sentence. From there you implicitly equate “what you ought most to do” with “behaviour that best achieves what an agent wants most”.
Again, this uses a short-hand where the goal of the “ought” is not specified, but it needs to be there for the sentence to mean anything. Thus your claim, in long-hand form, could be:
“what you ought most to do [in order to achieve what you most want]” is “behaviour that best achieves what you most want”. This is true (somewhat tautological but true).
The only other possibility is the bracket being “[in order to achieve some goal *other* than what you most want]”, in which case the claim is false, so I presume you don’t mean this.
So, I presume that your first sentence, in long-hand, is:
“As long as the moral is that which you ought [in order to achieve what you most want] most do, then moral realism is necessarily true”.
Now, I agree entirely with that sentence. However, I do not agree that “the moral is that which you ought [in order to achieve what you most want] most do”. Would you agree that that is an axiom at the root of your system?
Cheers, Coel
If “what you ought most to do” is to be a true imperative (and not a false one), then Argument 2 proves it has to be “behaviour that best achieves what an agent wants most.” All other “what you ought most to dos” are false. Unless you rephrase them as “what I wish you would dos” or something, but that would be useless, since it would not be an imperative for anyone to do what you wish (etc.).
My only axiom is that moral imperatives must be true imperatives (because I’m not interested in false moralities; neither should you be).
Everything else follows therefrom. With no additional axioms.
That wouldn’t change the fact that they ought to obey what I am calling moral. That they want to call it something else doesn’t make the imperatives of that system false. They are still what they ought most do, as a matter of empirical fact. So semantics is a waste of their time. They can’t escape the truth that way. So what would be the point of trying?
But that’s not what I do. I define “moral” — not arbitrarily — and then once I have that definition then and ONLY THEN do I start looking for moral imperatives, which would be those imperatives that would apply to a moral agent. You can’t get true imperatives without knowing the type of domain/thing/agent the imperatives are supposed to apply to. I can show this taking “surgeon” and some imperatives:
I1) A surgeon ought to sterilize their instruments.
I2) A surgeon ought to go bowling on Saturday nights.
In order to determine which of these are true and which of these are false — meaning which ones are really imperatives for a surgeon — I have to know what a surgeon is. Once I discover that being a surgeon means being “Someone who operates on people in order to improve the health of the patients”, then we can clearly see that I1 is a true imperative for surgeons and I2 is a false imperative for surgeons, because I1 follows from what it means to be a surgeon and some additional empirical facts, while I2 has no relation to surgeons as surgeons, even if for some surgeons it may be a true, non-surgeon related imperative.
The same thing applies to morality. Take these imperatives:
MI1) A moral person ought to commit rape.
MI2) A moral person ought to give to charity.
Now, most of us think that MI1 is false for moral persons, and that MI2 is quite likely to be true. But we can’t decide if these are true or false for moral persons until we know what it means to be a moral person, just as we couldn’t decide if I1 or I2 were true or false for surgeons until we figured out what it meant to be a surgeon. From this, the only way your objection to me makes any sense is if you assume that determining “what it means to be moral” just means outlining the true imperatives. But as we just saw, that’s ludicrous; you need to know what it means to be moral before you can determine the truth value of imperatives that relate to being moral.
And this confusion is causing the problem here, because it looks like you define “true imperative” to be true in the sense that the person either does or at least ought to consider it to be a true imperative, without making reference to domain. But for an amoral or immoral person, moral imperatives will NOT be true imperatives for them, by definition, because moral imperatives are those imperatives that are true for moral persons, just as surgeon imperatives are those imperatives that are true for surgeons and bachelor imperatives are those imperatives that are true for bachelors, etc., etc. Now, you seem to try to sidestep this by appealing to the highest imperative that we can or ought to have, and then calling that one a moral imperative. But as I said in my initial comment, that goes wrong if someone interprets your 2 as “I ought to value the moral more than anything else”, which reveals, it seems to me, the equivocation in your argument, as 2 can be interpreted in two ways:
2′: A human being ought to value being moral — whatever that is — above all other values/imperatives, even if they currently do not do that.
2”: What it means to be moral is to strive to achieve one’s highest true imperative, or at least the one a person would have if properly informed and rational.
When I read 2, I interpret it as 2′, but then you can see how that definition does not easily, or even by your argument, lead to your conclusion, since there is no clear relation between what human beings do value or what imperatives they do have and what they ought to value or what imperatives they ought to have. 2”, however, DOES support your argument … but most people — myself and I suspect Coel as well — don’t think it obviously true. So if you mean 2′, I agree with that premise … but it doesn’t lead to your conclusion. On the other hand, if you mean 2”, your argument is valid … but then I reject 2”, with reason.
This is why your constant replies about only wanting “true imperatives” don’t ever progress the argument, because the counter is always that we don’t know what it means to have true moral imperatives yet, and so either you are putting forward a definition that what it is to be moral is to satisfy your highest true imperative — which we can challenge as being a definition of moral — or else you don’t have one yet and are thus talking about true imperatives far too early.
It’s the other way around. If you define “moral” as “what we can prove a God will punish us for not obeying” you will find no moral imperatives. But that doesn’t allow you to say there is no morality you ought to obey. All you did was exclude one moral system.
What you have to do is define true imperative first and foremost (because all false moral systems are thus eliminated, saving you tons of time) and then ascertain what the difference is between just any imperative and moral imperatives specifically within the domain of true imperatives (because all false moral systems have already been eliminated, so you are no longer considering them). Then you go out and look around to see if any imperatives in that subset exist.
No other method makes sense or could ever work.
No, you need to know what it means for a moral imperative to be true before you can determine the truth value of imperatives that relate to being moral.
Because false moral imperatives are of no interest.
So the question becomes: if there is an imperative, x, that is what you ought most to do in situation y, is there any possible sense in which there can be a moral imperative that overrides x? No. It’s logically impossible. Because only true moral imperatives count, but if there is a true moral imperative that overrides another imperative, then that moral imperative would be x (it cannot be distinct from it).
There is no getting around this. You can’t semantically define your way out of facts. You can arbitrarily choose to call some false system of imperatives “morality” and the true imperatives (that which you actually ought most do) something else, but that gets you nowhere, because you still ought to do the latter. The rest of us just call that morality. And your pouting about that wouldn’t change a thing.
Sigh. Really? Still? You don’t know what the truth conditions are for an imperative?
I suspect you do know what they are, and are thinking that there must be some “different” truth conditions for “moral” imperatives, but I welcome any attempt you might make to show what those are–because any honest attempt you make at that will end up exactly where I am already. (Per above.)
Then we can roll up our sleeves and actually get something done.
“Anyone familiar with Kühn should already have worked this out.” Heh. Anyone familiar with Kuhn should already know how to spell his name!
The American spelling lacks the umlaut. I get the umlaut from reading internationally (e.g. this). But I confess it’s possible Kuhn might object to foreign publications adding it. And writing in English I suppose it should be dropped in any case.
Yeah, Kuhn might very well object to the umlaut being put in, because that would be a misspelling of his name. If his name did contain the umlaut, there would be no problem including it when writing in English, and in fact it probably should be. As is usually done, for example, with Gödel and often with (Hans) Küng.
Hi Richard,
It would be really helpful if you could give a yes/no reply to two question. First, is it the case that by “true imperative” you mean one that “best achieves what an agent wants most”? (As above, I can’t think of any other sensible meaning of the phrase as you use it).
Second, given that, and given that you’ve said: “My only axiom is that moral imperatives must be true imperatives”, is it the case that your statement amounts to: “My only axiom is that moral imperatives are imperatives that best achieve what an agent wants most”?
In response to the suggestion of not accepting that axiom you say:
As usual, I’m having to guess at the goal that applies to that “naked” ought (oughts only have meaning with respect to a goal). I’m thus guessing that the goal is “achieving what they most want” and thus that your sentence means:
“They ought (in order to achieve what they most want) to obey what I am calling moral”. Further, given the above axiom defining what you are calling “moral”, the sentence becomes: “It is a fact that they ought, in order to achieve what they most want, do what best achieves what they most want”.
Yes, I agree with your statement here.
Cheers, Coel.
Yes. Although to be clear, it’s even more basic than that: a true imperative is a hypothetical imperative for which the protasis and apodosis are both factually true (the if clause and the then clause, i.e. the desire assumed or stated actually exists, and by causal consequence will actually be best fulfilled by the recommended behavior).
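Schematically (again in ad hoc notation of my own, just to display the structure):

```latex
% Truth conditions for a hypothetical imperative, per the above.
% D(S, y)    = S actually has the desire for outcome y (protasis true).
% Best(A, y) = doing A will in fact best fulfill y (apodosis true).
\[
\text{``If you want }y\text{, you ought to do }A\text{''}
\;\text{is true for }S
\iff
D(S, y) \wedge \mathrm{Best}(A, y)
\]
% Both conjuncts are empirical claims, so a false morality fails on one
% or the other: a desire the agent doesn't really have, or a mistaken
% belief about consequences.
```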
It is only then, by analysis, that we discover that the only imperatives that can be true are superseding imperatives (since if you have a greater desire than x, namely y, then you do not in fact most desire x, because you actually most desire y; it is from the fact that people can be mistaken at this point, and not realize they desire y more than x, that many false moralities, and moral failures, result; the other source of false moralities and moral failures, of course, is having incorrect beliefs about what the total consequences of an action will be, e.g. that fetuses have souls that God has to cuddle in heaven after they are killed, and he gets really mad about that, and we don’t want to make the gods angry).
But in the end, that still gets us where you now agree we are.
Morality would then exist in the broadest applicable generalization of particular behaviors (so, eating ice cream is not a moral imperative, but it is a moral act in that it is morally permissible, and also in that it is a fulfillment of the more general moral imperative to find happiness or enjoyment in life when you safely can, rather than making yourself miserable for no valid reason).
i’m not done reading this, but i have some thoughts.
i personally have figured out some things similar to what you are outlining here, there is so much i agree with. however, i think something could be improved here.
i see so much disagreement and confusion about this topic, and it’s really annoying to me.
so here is my idea: it seems to me that calling this something like “satisfaction science” rather than “objective morality” would help to clear up so much of the confusion and disagreement.
take those two words, “objective” and “morality”. they are big fuzzy words that so many people kinda use in different ways. to me, since the whole hypothetical imperative is based on desires etc etc etc, the word “objective” throws me off; it initially confuses me, because objective makes it sound like it is not based on a subjective experience like desire or satisfaction.
then there’s “morality”. i don’t think i’ve ever heard anyone give any thinking about morality that was even close to hypothetical imperatives. there is no “if then” statement when people usually talk about what they mean by the word “morality”. it’s just “oughts”.
you can of course go on at length how the phrase “objective morality” can be defined in such a way to work the way you use it, sure. but that’s exactly what will be required: length!
but i’m pretty sure something like “satisfaction science” (or “the study of attaining satisfaction”?) would be both accurate and much less confusing to everyone! it would accurately describe the entire idea in two unambiguous words. “objective morality” is a phrase which cannot do this. it took me paragraphs of reading to understand what the position was, and to learn that i was wrong to assume it was totally opposed to my views.
either way, thanks for this writing, i think i’m learning a lot.
and thanks for pointing out that word “satisfaction” which is so much better than “well-being” and “reducing suffering”. i’ll be using it from now on.
i can see my only quibble has been dealt with in the comments above:
great article, great comments section.
*i probably should have given credit to a source that influenced my views on morality etc.
a major influence was the video called “the end of theistic morality” by the youtube person called “knownnomore”. he does use the “well-being” phrase though, if i remember correctly.
also, the “taboo your words” thing over at lesswrong is something i use so much, i think it also helped me think through some meta-ethics stuff.
Sorry if I’m late again.
What is the definition of ‘evil’ among non-theists? Is there such a concept?
And is this like asking the converse question, ‘What is the definition of good?’
And is it unlike asking what the definition of ‘beauty’ or ‘art’ is, since that is ‘subjective’?
Nice one.
That depends on what you mean by the word. See my discussion of defining good and evil in Sense and Goodness without God V.2.2.7, pp. 337-39 (and for how words acquire meanings in the first place: II.2.1, pp. 29-34).
If by “evil” you mean “the causing of gratuitous harm,” then obviously evil exists (and has nothing to do with gods), because gratuitous harm exists (and would exist in almost every possible universe lacking a god).
Dear Richard,
How do you solve the objections of:
1. That normal biology is the universal criterion by which you judge morality?
2. Someone who has been diagnosed with congenital insensitivity to pain with anhidrosis (CIPA)?
Surely normal ≠ right. Is there a reason why abnormal (sadists, psychopaths, CIPA) = wrong?
“Normal” biology is not the universal criterion by which I judge morality. See TEC, pp. 347-56 (note in particular the role of allergies: pp. 355-56). On sociopaths especially, TEC, p. 353 (w. n. 41 on p. 427). And on bizarre (i.e. nonhuman) biologies and their effect on moral facts, pp. 354-56.
There are two problems with the idea of objective morality.
The first question is, how do we measure it? How do you arrive at what is best for someone? Is it what makes the person happiest? Is it what increases his chances of survival, or perhaps of his genes’ continued survival? Should we maximize a person’s freedom, or should we constrain it to increase the person’s survival chances? How do we factor in uncertainty, because there is a considerable amount that we simply cannot know, given the complexity of life.
Let’s suppose for the moment that there is a single benefit currency and that we can easily measure the impact of certain choices on that benefit measure. My second objection arises as to the optimum way of distributing this benefit across a population. Do we adopt the utilitarian principle of greatest good for the greatest number, in effect maximizing the total amount? There are plenty of objections to utilitarianism. By the utilitarian principle, we have to count both the pleasure derived by the sadist and the suffering of the victim. What if the pleasure outweighs the suffering? Does this justify cruelty? If we reject utilitarianism, what do we use instead? Should we maximize the minimum benefit? This could have the effect of lowering the average and causing us to concentrate on the least well off. Should we minimize variance? What standard do you propose for the distribution of benefit? And, given the finite resources at any moment, there are always tradeoffs.
That’s a scientific question. Psychology and sociology have been developing instruments for the purpose for years, and even better ones can be developed.
That’s a scientific question. The answer is what the person would deem best for themselves when fully informed and reasoning validly. Which is a natural fact about the person. If science finds commonalities, then we can group types of people, or even empirically verify that everyone is the same in this one dimension.
Most satisfied with life and themselves and their decisions. Watch the Carrier-McKay debate and read the follow-up for why satisfaction optimization is the base of all other goals (i.e. the only thing everyone pursues for itself, and which no one would prefer anything else to).
That is a political question, not a moral one. Its moral dimension exists only at the level of what an individual ought to do, which relates back directly to what would most satisfy them–if they were fully informed and reasoning validly.
If everyone (who is fully informed and reasoning validly) would be thus satisfied, yes. If not, then no.
Hence, this is an empirical question for science. It cannot be answered from the armchair.
Note that the system I am describing is a version of “desire utilitarianism.” Those objections do not apply to desire utilitarianism. That’s why desire utilitarianism is probably correct.
Not on desire utilitarianism. If it would not satisfy me to do that (when fully informed and reasoning validly), then I shouldn’t do it. End of story.
Likewise every other objection you raise, which are objections to traditional utilitarianism, and which don’t even apply to desire utilitarianism, except at the level of individual goal fulfillment, but at that level, the answers become empirical questions, not analytical ones.
People vary in what is considered “fully informed” and “validly reasoning”. How for example do you determine the moment, if any, when it is right to perform an abortion? There is considerable disagreement on that. At one time fully informed people, like Aristotle, thought that slavery was proper. If there is a choice between living a long boring risk-free life and a possibly shorter and riskier but more enjoyable life, how can you possibly decide? And how do you handle uncertainty, which is always present and not necessarily measurable.
As to putting down the distribution of benefits among people as political and not moral, the choice still has to be made. The real question is, what is the right thing to do? You can’t ignore it, regardless of how you do your labeling.
How for example do you determine the moment, if any, when a hill becomes a mountain?
If it is immoral to level a mountain but not a hill, then it is so because of what the difference is between them. That difference is empirically discoverable.
And as with all science, we improve our knowledge as we gain more knowledge. Scientific facts are tentative and revisable. But they are still empirical. And they are the best we have.
So you aren’t saying anything that isn’t just as true of medicine, agriculture, engineering.
And as those sciences continue to work and make progress, so will moral science. In all the same ways, by all the same means, and with all the same limitations and uncertainties.
@sawells
obviously mileage may vary because i am not richard carrier, but i think i can address a few things.
you are a moral philosopher?
also, i’m not sure where you have argued with him. i don’t see your name in the comments section.
ya…totally correct.
i can already see you misunderstand. you think the first point is supposed to provide something else.
it assumes there is? no. it is a bare definition. if there is nothing that fits that definition, then nothing is moral.
but if you read what he wrote, he does address this kind of objection in brackets in point 2.
i’m not sure what other things you are attempting to say here.
but perhaps some confusion is that you don’t quite get the definition (seems this way when you say “the moral is what you ought to do in order to be moral. Whoop-de-do.”). here is how i would rephrase the definition:
moral imperatives are any imperatives which are more imperative than all other imperatives.
now i hope you can see that we can indeed judge whether something fits the definition, and it is not as empty as the “A = A” that you described it as.
so you say “absolutely”…good, you agree.
this does not necessarily make someone a moral nihilist, though.
these first three points are quite important, i think. i don’t see any real challenge that you have posed.
hopefully you understand a bit better now. but i’ll go on to the next points just in case.
i don’t understand your sentence here, or why you think that.
hopefully my writing above has sorted out some of the confusion here.
hmm he did talk about our conflicting inner stuff in a debate he linked to.
but i have to say, human minds are deterministic, and not exactly random. so i’m fairly sure what you say here can be classified as a difficulty, not an impossibility.
hopefully my writing above has sorted this out a bit.
who eats whom? you are thinking in terms of some different morality than the one rc has presented.
he has presented one in which no one is obligated to be eaten, i hope you have noticed.
other than that, it seems you are just declaring the problem too difficult for you to reason through, and therefore not possible to reason through…or something.
Excuse my English, it’s not my mother tongue.
I really enjoyed reading this, so thank you. I have one large question mark, though. I am not sure what your conclusion is: Is it that there exists an objectively correct answer to the question “What should I do?” in some situations, or in all situations?
If you mean “in some situations” then you can ignore the rest of what I am about to write. But if you mean “in all situations”, then I have a problem with statement #6 (“There are many fundaments of our biology, neurology, psychology and environment that are the same for all humans”). It’s not that I think it’s false, but I think it’s irrelevant for those situations where people respond differently to a decision despite their fundamental commonalities.
So let’s take this question as an example: A teacher is considering a new method of teaching. For simplicity, let’s assume that she has full information regarding all consequences. She knows that 60 percent of her students would benefit and 40 percent would suffer. Should she apply the new method?
One attempt at resolving such an issue is to say that the new teaching method should be applied only if the benefitting students can pay off the suffering to the point that they become indifferent between the old and new method. If one has a problem with the notion of money being a determinant for life satisfaction, then think of it as the benefitting students hiring extra teaching assistants for the suffering students to the point that they no longer suffer. If this isn’t worth it for the benefitting students, i.e. their benefits can’t compensate for the suffering, then the new teaching method should not be applied.
Similar arguments can be done regarding, say, slavery. If the slave owners can compensate the slaves enough so that they become indifferent between being slaves and free men, then all is good. In practice, slave owners have slaves because they are cheap, and no slave would accept a compensation that gives him a smaller wage than that of a free man. So this wouldn’t happen. But if slavery still would occur, then no harm.
Although I do find such a decision-making process attractive in theory, I wouldn’t say that it is objectively the morally correct way of making decisions. And in particular, it doesn’t follow from the stated propositions in this post. So that leaves us at still not being able to objectively determine correct decisions from incorrect ones in situations where some people benefit and others suffer.
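(The compensation test described above is in effect a Kaldor-Hicks-style criterion. Here is a minimal sketch of how such a test would mechanically decide the teacher case; the per-student numbers are entirely hypothetical and only illustrate the mechanics.)

```python
# Minimal sketch of the compensation test described above (in effect a
# Kaldor-Hicks-style criterion). All per-student numbers are hypothetical.

# Change in benefit per student under the new method relative to the old:
# positive = benefits, negative = suffers.
deltas = [+4, +3, +3, -2, -6]  # 60% benefit, 40% are harmed

gains = sum(d for d in deltas if d > 0)    # what the winners could pay
losses = -sum(d for d in deltas if d < 0)  # cost of full compensation

# Adopt the new method only if the winners can fully compensate the
# losers (e.g. by funding teaching assistants) and still come out ahead.
adopt = gains > losses
print(f"gains={gains}, losses={losses}, adopt new method: {adopt}")
# gains=10 > losses=8, so this test says: adopt, and compensate.
```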
That depends on what you mean. It’s just like any claim to fact. Often, there will be a correct answer, which we lack the ability to know (but may in future have access to the information we need to know). Often, there will be many correct answers (i.e. many different things that would satisfy the obligation entailed by the facts of the case, no one of them more obligatory than the others). Whether there would ever be no correct answer at all is harder to say, since I’d need a concrete example of what you mean, otherwise, e.g., “there are no moral obligations on you in situation s” can be factually true (so a “correct answer” exists even then, whether discernible or not).
You don’t mention the most crucial piece of information needed to answer this question (others, like cost, I will assume you mean are ceteris paribus): does she know of a method that performs better? If not, then she is obligated to use the new method, any other choice being an abrogation of her responsibility. If she knows only of another method that performs the same, then, ceteris paribus, she is meeting her obligation as long as she uses either. If she knows of a method that performs better, she is obligated to choose it over her new method.
So none of your subsequent analysis is even relevant.
In reality, no method of teaching is so awful. We know of methods (and certainly, method combinations) under which virtually no students suffer (and 0% is far better than 40%; the few students who fail even under those methods would succeed under no regime known to us). Finland is a famous example, now widely studied for that reason.
You also falsely assume the purpose of education is income maximization. In reality, the purpose of education is to ensure a stable democracy. For only informed and rational citizens vote well; that they can also get jobs is merely a correlated byproduct, also necessary for a stable democracy, but only insofar as they can thus support themselves. How much they earn beyond that is not a primary concern.
Moral facts only follow from reality. They don’t follow from false statements about the world. If you can actually describe a realizable system in which slaves would prefer being slaves to being slaveowners (or are indifferent to which they are), then you might have the beginnings of a point. But such a world would be physically different from the one we live in. Indeed so much so that I cannot even imagine it. At best I can imagine engineering a new species that likes slavery, but that entails engineering away features their pursuit of happiness depends on, like independent thought and self-preservation, so the overall objective remains unrealizable (the side-effects of their liking slavery will ensure their unhappiness by making them less effective at managing their own happiness than they would be otherwise). And again, we aren’t that species, so what would be true for them wouldn’t be true for us.

And what we would have to do to correct for those side-effects is such that by the time we were done, there would be no difference between the slave system we ended up with and a system of free labor: slaves would be compensated the same, given the same autonomy, slaveowners would have the same obligations to their slaves as they would to free laborers, and so on, to the point that the system we created would not actually be slavery anymore by any contemporary definition. Indeed, the system of wage slavery we have now is closer to actual slavery than the system you would be required to imagine.

And then it’s simply unclear whether even that system would be realizable: the obligations placed on slaveowners to ensure slave happiness would actually be burdensome and thus diminish their happiness, as well as the efficiency of the whole system, relative to simply giving everyone in the system the same regulated autonomy we are already striving for.
That said, I don’t see anything in your comment that even challenges statement #6. Statement #6 is not that people are all the same. It’s that people are enough the same that there are goals we all share in common, and therefore situational propositions about goal-achievement relative to means that are true for all people. It does not follow that everyone is obligated to behave identically to everyone else. For example, that you ought to earn your keep when able in no way entails you should perform exactly the same job as everyone else.

Thus, similarly, that some students differ enough as to require a different educational method says nothing about whether we should educate them. To the contrary, it entails objectively factual statements about how we ought to educate them. Example: special education for the learning disabled. That they ought to be educated is as true as that they ought to earn their keep when able, but that in no way entails that they have to be educated the exact same way as everyone else, any more than it entails they have to occupy the exact same job as everyone else. And so it is: we have different educational tracks for the learning disabled and the non-disabled, using methods tuned to best serve each community.
Arguments must attend to the way the world actually is. And this is the way the world actually is.
Thank you for your thorough answers, very much appreciated!
I thought of the example in the way you interpreted it (and with only two methods available – old and new).
You say the teacher is obligated to use the new method. Now I assume that you also think that anybody who 1) accepts your six statements and the conclusion, and who 2) has the same facts as you regarding the teacher case and evaluates those facts objectively, would necessarily come to the same conclusion: the teacher is obligated to use the new method.
I evaluated the teacher problem when taking your six statements and the conclusion as given, but I failed in deducing the morally right thing to do, while you succeeded. Does the new teaching method increase life satisfaction? For some students yes, for others no. Does the new teaching method increase life satisfaction overall? That depends on how we weigh together the benefits and harms of different students. It seems to me that you managed to do that in an objectively correct way, but as I said, I don’t know how I should have known what way that is.
My point wasn’t that slavery is the way to go. Although I’m not convinced that we would need to engineer a new species or completely change what it means to be a slave in order to find rational people who would agree to become slaves for the right pay. That’s an empirical question, and in my opinion it hasn’t been sufficiently answered. But as I understand you, it has.
Remember, this means “As opposed to what?” That is, what is her alternative? If there is none, then your entire question is moot.
And also remember, your scenario is contrary to fact. We already have better methods than that in reality, so no such conundrum as you suggest exists; it would therefore be better to pick an example where it actually does, rather than arguing about what would be the right thing in an imaginary alternative universe we don’t live in.
You are simply asking the wrong question. The correct question is “Does the new teaching method increase life satisfaction relative to any alternative action the teacher can take?” And of course, the question needs to relate to the teacher’s life satisfaction, not the students’. The students’ life satisfaction is relevant insofar as it affects whether the teacher can feel good about herself, i.e. whether she is ruining other people’s lives or bettering them as she is able. The latter will improve her overall life satisfaction (certainly in terms of risk management, but even in more direct respects as well). Thus, the question for her is whether there is any method that performs better, that is within her means. If there is not, she can’t feel bad about the poor performance of the method, except insofar as this motivates her to keep looking for better methods (which is why in reality we have found them; no teaching method we can wisely recommend today performs as poorly as the one you imagine), because otherwise there simply isn’t any better method available. This is more commonly the reality in medicine: many treatments, e.g. for cancer, perform poorly, but doctors use them because the alternatives are worse.
Note the key word. Empirically finding people who would do that does not entail their decision was rational.
I agree it could be better explored empirically than it has been (although there are moral reasons why it may never be) if your aim is greater certainty. But we only need sufficient certainty to know the risks of experimenting with that arrangement are greater than any benefits likely to accrue; the non-slavery system works fine.
One empirical proof that that system you imagine won’t work is that free laborers are already objecting to their treatment and pay and restrictions on their liberty. And it can be independently shown those objections are entirely rational. Since enslaving them would entail even more restrictions on their liberty and even lower pay and even poorer treatment, it follows a fortiori they would never rationally agree to being enslaved.
So you would perhaps want to imagine some sort of slavery whereby slaveowners are required by law (and the state even more reliably enforces that law than it does labor laws now) to give their slaves even more freedoms and resources (i.e. pay) and improved conditions and treatment than free laborers enjoy in our present system (enough to eliminate all rational objections). But in what way would that imagined system be slavery any more? If a slave has more freedom than a free laborer, to call it slavery has become a mere semantic game.
Thus, you have to think through what “slavery” entails (for it to even warrant being called slavery) and work through the risks that entails for the slave (e.g. their decisions will be overridden and they will be forced to do or suffer things they would otherwise choose not to), and then think through whether any rational person is likely to prefer that to the alternative. There is enough empirical evidence of human nature and the way the world works to conclude, to a high certainty, that the answer is going to be no.
Thank you again for the replies!
The alternative is to continue using the old method.
I think we are talking past each other here. This is what I wanted to say: with the new teaching method, 60 percent of the students would benefit in comparison to the old method and 40 percent would be harmed in comparison to the old method. That isn’t unrealistic, and neither does it mean that the new teaching method performs particularly poorly. If we take the best teaching practice we have today and try to improve on that, it seems quite likely that any resulting new method would not benefit every student; a fraction would be harmed in comparison to the traditional method.
Earlier you wrote that “does she [the teacher] know of a method that performs better? If not, then she is obligated to use the new method, any other choice being an abrogation of her responsibility.” I then got the impression that by “performance” you meant the performance of the students. But now I get the impression that you mean that the teacher is obligated to use the new method if she cares more about the benefitting students than she cares about those who would be harmed (again, harmed in a relative sense). And that she, on the other hand, is obligated to continue using the old method if her heart mainly goes out to the students who would otherwise be harmed.
Which is better or worse than the new one?
That’s the question.
That is meaningless. Is the old method successful or not? That is, how many students subject to it reach the assigned benchmarks? That is a binary question. It is meaningless to say the same number reach the benchmark but do 60% better and 40% worse. The benchmark is the benchmark. You either make it or you don’t. We could perhaps imagine the 60% and 40% relate to speed, i.e. 60% reach the benchmark sooner under the new method and 40% reach it more slowly, but then the question is how much sooner and how much slower, and how that affects the overall educational program. Only then can we actually know whether the new method is better at all.
Moreover, in reality, we would segregate the 60% to continue with the new method and apply the old method to the other 40%. So there would be no 40% decline, only a 60% improvement (and this is how our real world system works, e.g. mainstream vs. special vs. advanced-placement education). So your imagined scenario is impossible in reality. And we shouldn’t be wasting time talking about non-reality like this. Only reality matters.
Yes to the first point. This is desire utilitarianism, not utilitarianism. Utilitarianism is non-motivating and therefore literally false (it entails no true imperatives). To the second point, it’s not quite that simple (a fully motivated teacher cares about all her students), but in a bracketed sense yes, a good teacher would not feed nearly half her class to the other half to make that other half stronger, the consumed be damned. Conscience would prohibit it. But it’s also irrational in the plainest sense, since even in a Machiavellian way, society does not benefit if it is losing half its students. It needs everyone educated.
So if the old method is succeeding, i.e. getting students to the benchmarks, there is no need for a new method, much less the one you imagine.
But again, even the new method you imagine need only be used to segregate the two categories of students, and the 40% won’t perform worse than the old method after all, because the old method would still be used on them. The other 60% would then accelerate with the new method. And this is, in fact, what we now do.
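(The segregation point just made can be sketched in a few lines. The per-student scores below are hypothetical; the sketch only shows that if we can identify which students do better under which method and assign each student accordingly, no one does worse than under the old method.)

```python
# Sketch of the segregation point above: assign each student whichever
# method performs better for them. All scores are hypothetical.

# Hypothetical per-student scores under each method.
old = [70, 72, 68, 75, 71]
new = [85, 88, 80, 60, 55]  # 60% improve, 40% decline under the new method

# Segregate by best fit: each student keeps whichever method serves them better.
blended = [max(o, n) for o, n in zip(old, new)]
print(blended)  # [85, 88, 80, 75, 71]: only gains, no declines
```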
Hi again!
I think I’ve finally put some of the pieces of this puzzle together. (I think!)
Still, one thing that troubles me with this moral framework:
Let’s assume that everybody in a society acts so as to maximize their life satisfaction, and that nobody is misguided. In other words, everybody acts morally. Let’s denote this society by A. Now assume a parallel society where people act according to another set of rules. (We don’t have to worry about why; they just do.) It could still be the case that a majority of people in society A would have a higher life satisfaction living in society B.
For example, in game theory you can construct such situations. First each player maximizes their life satisfaction and you arrive at some social equilibrium. Then you force each player to minimize their life satisfaction and you arrive at another equilibrium. Now you compare these equilibria and find each player being better off when minimizing their satisfaction as opposed to maximizing it. The Prisoner’s dilemma is an example of such a game.
So my point is that a society where everyone maximizes their life satisfaction need not give a society where people necessarily have a high life satisfaction, in fact, satisfaction-maximization could even be counterproductive to that end. The schism occurs because the life satisfaction of a specific individual in itself depends on her being part of this satisfaction-maximizing environment.
This doesn’t refute any of the propositions, or the conclusion, as stated by Mr. Carrier. Correct me if I’m wrong, but I don’t think you have argued that a society where everybody is maximizing their life satisfaction would, in itself, be desirable or preferred by its citizens. I can’t help having some trouble with that bit.
I don’t understand your point. Isn’t that already what my theory predicts? Maybe you meant to say “It could still be the case that a majority of people in society B would have a higher life satisfaction living in society A”?
That is not an accurate description of the prisoner’s dilemma. Prisoners who cooperate do not minimize their satisfaction; that is the whole point. They both reduce their sentence by cooperating (hence they lower but do not minimize their satisfaction). If they defect, one of them reduces their sentence while the other increases it. That does not describe your society B. Which is precisely why society A results from cooperating, and therefore cooperating is to be preferred. A fuller demonstration of this fact is presented in Good and Real.
I agree with all this. That is indeed why Game Theory determines correct moral outcomes, not the wholly selfish pursuit of individual goals, which becomes self-defeating (and only the rare lucky avoid the consequences, like drunk drivers who just by chance don’t die or kill anyone–a fact that in no way argues driving drunk is harmless). That is simply a factual observation of how social systems work.
I assume you mean here by “maximizes” something different than I do. When I say X maximizes their satisfaction, I mean in system Y, so in fact I mean not the most satisfaction X could ever achieve ever, but the most satisfaction X can actually reliably achieve in system Y (even accounting for the ability to redesign Y). That is the maximum satisfaction available to them. Moreover, decisions have to be based on risk management: like the drunk driver, their action is not guaranteed to have a negative outcome, but it increases the risk of a negative outcome beyond any rational pursuit of satisfaction maximization. Theoretically, an omniscient person would always know when to drive drunk (and thus would never harm themselves or anyone else), but then for them it wouldn’t be wrong. What’s wrong (because it is irrational–in the circumstances given what is at risk) is undertaking an unnecessary risk to life satisfaction, regardless of what the actual outcome is. This is what the prisoner’s dilemma is about: prisoners who cooperate get the best outcome in terms of risk management; prisoners who defect endorse a system where half the time they will be screwed. In reality, of course, a prisoner’s choice to defect is based on actual risk (e.g. how much they trust their cohort), so the correct decision is not always to cooperate. But that is, in fact, why you shouldn’t be a criminal in the first place (and that remains true no matter what you replace “criminal” with, since a prisoner’s dilemma is not only realized for criminals).
Let me try to explain what I meant. You have two rational players who both maximize their utility (or life satisfaction/pay-off/…). This is, in other words, society A. And the social equilibrium in this game is given by “betrayal-betrayal”, i.e. none of the players cooperate. But if they would have cooperated both of them would have been better off. But that wouldn’t be the result of rational and utility-maximizing agents. (Here I’m thinking of a one-shot game where players can’t communicate. Repeat the game several times and then maybe another equilibrium could be possible.)
But now we create two other players who act as if they wanted to minimize their utility. (Why would they do that? It doesn’t matter. They just do.) And they will end up better off than any of the players in society A.
We could create a third society where players choose their strategy according to some other rule, say they choose it randomly. That could potentially also give a higher expected pay-off than that of the utility-maximizing behavior (depending on exactly how you construct the game).
Now I do admit that I’m no game theorist. And I’m sure there are games, and maybe versions of the prisoner’s dilemma, where it’s rational and utility-maximizing to be kind, cooperative and truthful. What I’m saying is that there are also games where this is not so, and I don’t think they are merely theoretical models with no foundation in reality.
(The link didn’t work by the way, but I found the book.)
I mean the same thing you do.
I’m sorry, I must be missing something. Because this makes no sense. How can the players have maximized their utility, if by doing something else “both of them would have been better off”? That is a direct self-contradiction.
Also, how would choosing an option that leads to both being better off not be rational and utility-maximizing? You just said it wouldn’t be. That’s also a direct self-contradiction.
I am not sure what you mean by “as if.” That they are pretending? If so, then it does matter why they would be doing that. You can’t reason through a social model without accounting for the agents’ motives. That’s like trying to get a rocket to the moon without accounting for gravity.
If, on the other hand, you mean they are not pretending but actually do want most to minimize their utility (and reached this conclusion rationally) then by definition minimizing their utility is maximizing their utility. There is then no distinction. Your question becomes moot.
That is not rationally possible. Randomly choosing a method assumes unnecessary risk. You want to minimize risk by choosing the most reliable end-means behavior. You can’t make rational choices by leaving those choices to random chance, unless there is minimal risk (i.e. all outcomes are acceptable), guaranteed benefit (i.e. all outcomes improve your lot as much or better than any you could have chosen non-randomly), or no alternative (i.e. you have no choice available except the randomizing one). This is why betting your life savings on a single hand of cards is not rational. Unless you’re good at poker and your life savings is so paltry you could regain it rapidly if lost.
That needs to be proved by actually presenting one. Indeed, one in which no statements are contrary to fact (i.e. no fictional scenarios that never happen in real life). You can only come up with two possibilities (or none): (1) those in which the real scenarios possible are extraordinarily exceptional and almost never happen and almost never apply to anyone (these are traditionally called lifeboat scenarios; that different morals govern them than normal social systems is not a significant finding if what you want to discern is how to behave in normal life) and (2) those in which the real scenarios possible are eminently realizable as a norm or commonality (in which case you will indeed have discovered something interesting about how normal social systems should or could operate). A trivial example of option (2) are games (where nothing significant is at risk and one can simply enjoy playing the game by whatever bizarre rules are agreed upon). Non-trivial examples are harder to come by. And yet I assume that is what you want to find.
So, until you do, this is all just ivory tower musing about the unknown. Which I find uninteresting.
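(The betting point above can be made concrete. One standard way to model it, assumed here rather than taken from anything in this thread, is diminishing marginal utility, e.g. u(w) = log(w): a fair double-or-nothing bet of your entire savings leaves expected money unchanged but lowers expected utility, which is one way to cash out “unnecessary risk.”)

```python
import math

# Sketch of the risk point above, under the assumed (not the author's)
# standard device of diminishing marginal utility: u(w) = log(w).
# A fair double-or-nothing bet of your entire savings leaves expected
# *money* unchanged but lowers expected *utility*.

wealth = 50_000.0
floor = 1.0  # assume losing leaves a token amount, keeping log finite

def u(w):
    return math.log(w)

keep = u(wealth)                            # utility of not betting
bet = 0.5 * u(2 * wealth) + 0.5 * u(floor)  # win doubles, loss wipes out

print(f"u(keep)   = {keep:.2f}")  # ~10.82
print(f"E[u(bet)] = {bet:.2f}")   # ~5.76: the gamble is irrational
```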
You are not missing anything in the sense that it does sound self-contradictory. But it isn’t. The best strategy for both players is to betray the other player, i.e. no matter what player #2 does, player #1 gets a higher pay-off by betraying (and the other way around). So both players betray. Ironically, both of them would have been better off if they both cooperated, but that isn’t the equilibrium solution.
I’m not sure that this made it any clearer. It helps a lot to see the game, i.e. see the pay-off matrix and how the game is played out. Then it does become clear how such a seemingly contradictory thing can actually happen.
I could also add that nothing I’ve written here is controversial, nor is any of it my own interpretation.
Yes, you could say that they are pretending or that their cases have been handed over to misinformed lawyers (it’s not that they actually like disutility, so that disutility is utility to them).
It’s not a realistic assumption, of course, but I didn’t make this example in order to predict actual human behavior. The example just illustrates that if players acted as if they wanted to minimize their utility, then they would reach a better outcome (than that of utility-maximizing agents) in this particular game. But the main point was already made by noting that utility-maximizing agents don’t cooperate, though if they did, both of them would be better off.
When I wrote that which you comment on above I was partly referring back to the Prisoner’s dilemma. So I already presented you with one. Is that an example of a number (1) or (2) possibility?
However, oftentimes in game theory a game has several equilibria (although not in Prisoner’s dilemma). That, already, should get us thinking that it may not be so simple that rational and utility-maximizing agents naturally find themselves in a dominant equilibrium, by which I mean an equilibrium where at least somebody is better off and no one worse off, in comparison to another equilibrium.
Actually, no. If both players betray, each has a 50% chance of getting the worst result. That is therefore not the best strategy. The best strategy is cooperation. That’s the point.
Are you confusing irrational conclusions (falsely believing a 50% disaster rate is better than a < 50% disaster rate) with rational ones, the ones based on reality (that cooperation is always the better outcome in terms of risk management)? Because then the question answers itself. Irrational conclusions are by definition false.
Again, you can’t reach that conclusion, because you have not accounted for the effects of their actual motives on the system. Just like trying to get a rocket to the moon without accounting for gravity. To know whether things work out best for agent A, requires knowing what agent A wants, and whether they would actually want that if they were fully rational and informed. That is a fundamentally required premise for all reasoning about true and false moralities.
This is incorrect. The game is deterministic. But let’s try your argument. You say that if player 1 betrays, he has a 50% chance of getting the worst result (and likewise for player 2). That would be true if a) player 2 chooses his strategy by flipping a coin, or if b) player 1 doesn’t know what player 2 will do, but guesses that both strategies, betray and cooperate, are equally likely.
Neither of these (a or b) are correct. First note that the game is symmetric, it looks the same from the viewpoint of both players. So if there is one dominant strategy, then it has to be true that in equilibrium both players use it. Which both of them know, as they are rational. And in that case neither a) nor b) are correct.
And as it turns out there is such a dominant strategy (betrayal). It is dominant because whatever the other player does it pays off to betray.
But as I said, this isn’t anything I’m making up. This is the way the game is played out. You can confirm this using any introductory textbook to game theory of your liking. Taken from the online book “A Course in Game Theory” by Martin J. Osborne and Ariel Rubinstein: “This [Prisoner’s dilemma] is a game in which there are gains from cooperation–the best outcome for the players is that neither confesses–but each player has an incentive to be a “free-rider”. Whatever one player does, the other prefers Confess to Don’t Confess, so the game has a unique Nash equilibrium (Confess, Confess).” (Note 1: Confess = Betray; Don’t confess = Cooperate. Note 2: “prefers Confess to Don’t Confess” means that they get a higher pay-off playing Confess than Don’t confess.)
If you really want to debate anything about this, then a better way forward is to say that the one-shot version of the game happens seldom in real-life situations; how often it occurs is discussable. However, it’s not debatable that the equilibrium solution of rational and utility-maximizing agents is “betrayal-betrayal”.
Actually, I can. The mathematics of the game doesn’t care about why players would minimize their utility. The equilibrium solution is the same no matter what the reason is. In the same way, we don’t need to bother about why people maximize their utility (the standard assumption) in order to find the equilibrium solution.
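(Since the dispute here concerns the one-shot game, a minimal sketch may help. The payoff numbers below are the standard textbook illustration, not anything from this thread; the code merely checks which strategy profiles of the matrix are Nash equilibria, reproducing the result in the passage quoted above. The iterated game, which the exchange turns to below, behaves differently.)

```python
from itertools import product

# Check of the one-shot Prisoner's Dilemma as described in the textbook
# passage quoted above. Payoffs are the standard illustrative numbers
# (higher is better), not anything from this thread.
C, D = "cooperate", "betray"
payoff = {  # (row player's payoff, column player's payoff)
    (C, C): (3, 3),  # both cooperate: mutual reward
    (C, D): (0, 5),  # row is the sucker, column free-rides
    (D, C): (5, 0),
    (D, D): (1, 1),  # both betray: mutual punishment
}

def best_response(other_move, player):
    """Best move for `player` (0 = row, 1 = column) given the other's move."""
    def my_payoff(move):
        profile = (move, other_move) if player == 0 else (other_move, move)
        return payoff[profile][player]
    return max([C, D], key=my_payoff)

# A profile is a Nash equilibrium iff each move is a best response to the other.
equilibria = [
    (a, b) for a, b in product([C, D], repeat=2)
    if a == best_response(b, 0) and b == best_response(a, 1)
]
print(equilibria)  # [('betray', 'betray')] -- the unique equilibrium
print(payoff[(D, D)], "vs", payoff[(C, C)])  # (1, 1) vs (3, 3): mutual
# cooperation pays both players more, yet is not an equilibrium.
```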
That isn’t true, and I think you are confusing an apparently good strategy with the actually rational one. You keep stipulating that both agents in the imagined scenario are fully aware of the situation and will take the most rational course. Given that stipulated premise, both will choose to cooperate. Because defection gives them a 50% chance of losing disastrously. Whereas cooperation gives them a 100% chance of losing only mildly (and one should also add that the rewards for demonstrated loyalty in future scenarios also offset the mild loss, if we are to translate the scenario to the real world, and a full run of this scenario in Game Theory includes that fact).

Only if one or the other agent isn’t rational would the success rate of cooperation be less than 100%, and even then the success rate will be the rate of loyalty (which is why criminality is a bad lifestyle: by nature, criminals are disloyal and cannot be trusted; but we are imagining rational agents, not common criminals, by your own stipulation). Even the textbook you quote is saying what I just did. Just because there are irrational incentives to defect does not mean defection is the best strategy for rational agents. To the contrary, the whole point is that cooperation gets the best outcome, and only an irrational agent fails to understand that (which is why rational agents will rarely find themselves in such scenarios: they won’t likely be criminals in the first place, and when they are, they will not be common criminals, but criminals loyal to their cause).
But this is all just academic at this point, since you haven’t successfully mapped this onto any real-world moral problem.
Seems like we’re not going to reach an agreement here. I am not sure why we fail. I’m starting to think that we have different views of what “rational” means. In this context I have used the game theoretical one.
But still, I think we ought to be able to agree on the following: 1) The assumption of the game is that players are rational and utility-maximizing, 2) The game theoretical equilibrium is given by “betrayal-betrayal” and 3) There exists a better outcome, namely one where both players cooperate.
If we don’t agree on the basics, then I don’t see how we could even start discussing more difficult issues, like “What are the implications of this for real-world situations?”
Another, maybe better, explanation for why we fail to agree: You assume that I’m talking about the iterated Prisoner’s dilemma, while I’m talking about the classical one-shot game. That would explain a lot of the confusion.
Ah. Right. We don’t live in a one-shot world. (And the only scenarios that come close are life-boat scenarios, which are by nature different from everyday situations.)
G’day Dr. Carrier.
Although I quite like some of the points Harris makes regarding morality (e.g., his coverage of the ‘moral landscape’), there have always been some major gaps. As you mentioned above, he really does need to be more rigorous. Everyone says that Harris is just being colloquial and casual for the laypeople, but a tremendous problem with his approach has been, as again you mentioned, his indifference to the philosophy of morality. The Moral Landscape was a remarkably thin book, which really needed a great deal more elaboration of its points and more attention given to opposing arguments from both theists and perhaps secular humanists (e.g., Anthony Grayling, though he tends to get side-tracked by romanticising the good life and rarely addresses the scientific / psychological literature regarding morality).
If you leave gaping holes in your own knowledge, you are asking for a whole heap of trouble. For example, in his debate with Dr. Craig, he failed to address some ostensibly damning arguments from Craig, which were proclaimed to be ‘knock-downs’, no doubt because he was unsure how to respond. Despite annihilating Dr. Craig’s objective moral foundation, he left his own open to the same fate. He seemed to be much more interested in delivering a polemic on theistic morality than in delivering an alternative. His ‘casual’ approach may lead the very laypeople he is trying to reach to see his ideas as having more flaws than a skyscraper has floors. Consequently, they will dismiss a pre-existing thesis which has been better argued for by proponents such as yourself.
Admittedly, I have also committed the crime you noted. I have never even heard of Philippa Foot, so I will have to read up. On that note, are there any debates of this nature on YouTube that you would recommend watching (i.e., ones where metaphysical naturalism / secular morality have been more comprehensively argued for and defended)? At this rate, if I stick to books, I’m going to go cross-eyed, lol.
I’m so glad I stumbled upon this.
I’ve been thinking along exactly the same lines, and this clears up a few of the remaining details.
Thanks!
Hello Dr. Carrier,
I was talking about the moral system you outlined here with some others. I’m getting caught up on the idea that previous moral frameworks can be reduced to the version you described, particularly deontology. It is pretty easy to point out that theistic morality is not deontology. And I fully understand how maximizing satisfaction is better grounded in reality than any other system. I’m just hoping you can clarify a little bit more about the phrase I quoted below.
“even Kant’s deontological ethics reduces to a special form of teleological ethics which reduces in turn to a special form of virtue ethics, which reduces in turn to a system of hypothetical imperatives.”
Richard, if you think Harris and Shermer should find experts to collaborate with on this subject, then for El’s sake, get on Twitter and TELL them!