A few years ago, Sam Harris ran a contest that awarded $2,000 to the best essay critiquing his “moral landscape” theory of moral facts, and would have awarded $20,000 had that essay convinced him. It didn’t. I agree it shouldn’t have. But he should have learned something from that critique, and he learned nothing. Here I’ll explain what I mean by that.
Nevertheless, someone won the two grand. And many of you who watched and discussed the contest announcement might not have kept up with what resulted. Many might not have even known this contest happened! I think this is the kind of contest that can be extremely useful to progress in philosophy. If I were a multi-millionaire I’d likely set up a whole institute devoted to a more rigorous application of the same contest procedure broadly across the whole spectrum of key philosophical debates. Just on a better model than Harris ended up deploying.
Here I’ll summarize the debate and what happened, then examine the contest winner’s case and Harris’s response to it. But first, a little background to get you oriented…
The Backstory
I think Harris is correct in his thesis—moral facts are empirically discoverable facts and thus a proper object of scientific research (just, no one has ever done that research yet, in the sense required—so this science looks more like psychology did in the 19th century right now); and moral facts may indeed be describable as peaks on a moral landscape (I’ll explain that in a moment). The latter proposal is actually the less controversial of the two (albeit still “shocking”), and people usually ignore it to attack the first proposal instead. For how dare he say morality is a scientific question and scientists can tell us what’s morally right and morally wrong! They can’t, BTW. Any more than they could have told you how your brain works in 1830. Because the science actually hasn’t been done yet. But if it were done, then yes, scientists may one day be able to tell you what’s right and wrong, and they will have as much factual warrant to say so as they now have to say the earth is round and billions of years old.
The key piece missing right now is the normative values side of the equation. Most people who are at all informed know that science certainly does answer the question, “What are the actual consequences of doing A rather than not-A?” And that science is always the best means to answer that question (even if it hasn’t been tapped for that purpose yet in a given case). Where they founder is on the notion that science can answer the question, “What consequences should we value?” In other words, what consequences should we be preferring over others, such that the moral thing to do is to prefer those consequences? Harris and other defenders of the same thesis (like Michael Shermer) have traditionally done a really poor job of answering this criticism. I suspect that’s because they hold philosophy in contempt, and that contempt serves as a barrier to their learning how to do it well, and then engaging informedly with actual philosophers on this issue (see my discussion of the Shermer-Pigliucci debate as an example of what I mean; on the problem of this contempt for philosophy in general, see Is Philosophy Stupid?).
But that failure to articulate the correct response to this criticism is what the contest winner’s essay reveals once again. So it’s clearly the Achilles’ heel of Harris’s program to convince people of his thesis.
The second proposal, the “landscape” theory, is actually the more interesting, and where I think Harris contributed a new and valuable feature. (I had already defended the first proposal long before he did—it’s the central feature of my book Sense and Goodness without God, published in 2005, and the thesis of my peer reviewed chapter on the subject in The End of Christianity, published in 2011, soon to be back in stock at Amazon.) His landscape notion is that value systems are interacting systems, and as such there may be multiple value systems that are equally good yet mutually incompatible, owing to the coherence and effectiveness of their internal interactions. Individual pieces of one value system will only be good when placed in the correct system; move them over to another system, and their interaction will cause problems. And science might well find that there are several “peak moral systems” on a “landscape” of moral systems of varying quality, and any one of those peak systems will do, as long as you stick with one whole coherent system and don’t try to mix and match.
A mundane example of the same principle is traffic law: there is no fact of the matter whether driving on the right (as in the U.S.) or the left (as in the U.K.) is better; but each system only functions when it is internally consistent. So everyone does need to drive on the right in the U.S., and everyone does need to drive on the left in the U.K., for the system to work and maximize traffic safety and efficiency. And only systems that have one or the other are maximally safe and efficient. So there is a fact of the matter (indeed, a scientific, empirical, objective fact of the matter) that “you ought to pick a left-driving or a right-driving rule and stick with it within the same traffic system, if you want to maximize traffic safety and efficiency.” So here we have two peaks in a landscape of traffic systems; both are equally fine, but you do have to pick one. Incidentally, here also we have an ought that reduces to an is.
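To put that in slightly more formal terms, here is a minimal toy sketch (my own illustration, with made-up numbers; not a real traffic model) of why that landscape has exactly two equal peaks. Treat “safety” as just the chance that two randomly paired drivers are following the same convention:

```python
# Toy model only: "safety" here is simply the probability that two randomly
# chosen drivers share the same driving convention. Numbers are illustrative.

def expected_safety(fraction_driving_right: float) -> float:
    """Probability that two randomly paired drivers follow the same side."""
    p = fraction_driving_right
    return p * p + (1 - p) * (1 - p)  # both drive right, or both drive left

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"fraction driving right = {p:.2f} -> expected safety = {expected_safety(p):.2f}")

# Output shows two equal maxima (1.00) at p = 0.0 and p = 1.0 and a minimum
# (0.50) at p = 0.5: either pure convention is a "peak"; mixing them is worse.
```

The point of the toy is only the shape of the landscape: two equally good but mutually incompatible optima, with everything in between objectively worse.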
For more background on this, and where my moral philosophy fits in or contributes, see my original discussion of the Harris contest in What Exactly Is Objective Moral Truth? and my follow-up before the contest found a winner, in The Moral Truth Debate: Babinski & Shook. In the latter I summed up the situation again:
All moral arguments reduce to appeals to either (or both) of two claims to fact: the actual consequences of the available choices (not what we mistakenly think they will be), and which of those consequences the agent would most actually prefer in the long run (not what they mistakenly think they would). Both are objective, empirical facts.
[And as such] … all talk about moral truth simply is talk about what people really want and whether their actions really will produce it.
… [And that means] …
Moral facts follow from actual values—which does not mean the values you think you have or just happen to have, but the values you would have if you derived your values non-fallaciously from true facts (and not false beliefs) about yourself and the world. Hence, your actual values (what you really would value if you were right about everything).
In the most reductive sense, all moral propositions, of the form “you ought to do A,” are predictive hypotheses about how your brain would function in an ideal condition. If your brain was put in a state wherein it made no logical error (no step of reasoning in its computing of what to believe, value, or do was fallacious) and had all requisite information (no relevantly false beliefs, and all relevant true beliefs), then it would in fact do A. And in telling someone they ought to do A, we are really saying they are acting illogically or ignorantly if they don’t; that even they themselves would recommend they do A, if they were more informed and logical.
Some moral theories frame this in terms of an “ideal agent.” But the “ideal agent” is always just an ideal version of yourself. Someone might yet claim we should not be rational and informed (and “therefore” the ideal agent we should emulate is not a fully rationally informed one), but it’s easy to show at the meta-level that there is no relevant sense in which such a statement can be true (see TEC, pp. 426-27, n. 36). It shouldn’t be hard to see how this actually makes all moral statements scientific statements. Even at the level of choosing not just actions, but values. An “ideal you” would choose a set of values that might differ from the values you have now, and that is a statement of fact about how a machine (your brain) will operate in a given factual condition (being rational and possessed of true beliefs). That’s an empirical statement. And therefore open to scientific inquiry.
The only difference between the two of you (the current you and the ideal you), and between the values you each choose to prioritize, would then be that the non-ideal you chooses different values because you are more ignorant or illogical than the version of you that is neither. And once you realize that, there remains no coherent reason not to change your values to match theirs (meaning, the values that would be adopted by the most rational and informed version of you). Only, again, an irrational or ignorant reason could prevent you from thus revising your values. So moral facts are just statements about what a non-irrational, non-ignorant version of you will do. Harris has never really explored or articulated this well at all. He should.
Now to the winning essay and Harris’s answer to it…
The Setup
You can get more details, and read the winning essay, at The Moral Landscape Challenge: The Winning Essay. Philosopher Russell Blackford (the contest’s judge) concluded that the most common and important objection raised by the 400 or so entrants to the contest was that:
[T]he primary value [in Harris’s proposed system], that of “the well-being of conscious creatures,” is not a scientific finding. Nor is it a value that science inevitably presupposes (as it arguably must presuppose certain standards of evidence and logic). Instead, this value must come from elsewhere and can be defended only through conceptual analysis, other forms of philosophical reasoning, or appeals to our intuitions.
And that is what the contest winner argued, and by Blackford’s judgment, argued better than any other entrant. That entry was produced by Ryan Born, who has degrees in cognitive science and philosophy and teaches the subject at Georgia State University.
His only error is in thinking that “philosophical reasoning [and] appeals to our intuitions” are categorically different from empirical science, when in fact they are just shittier versions of empirical science (our intuitions are only attempting to guess at facts through subconscious computations from evidence, and philosophy is always just science with less data: see my explication of that point in Is Philosophy Stupid?). The more you improve the reliability of those methods (intuition or philosophy), the more you end up doing science. Maximize their reliability, and what you have is in fact science.
But then, how you get to Harris’s conclusion (that “the well-being of conscious creatures” is the outcome measure distinguishing true moral systems from false) is not obvious. And it’s made worse by the fact that that’s too confusing and imprecise a statement to be of scientific use. What kind of well-being? With respect to what? Which conscious creatures? How conscious? Etc. It’s also not reductive enough. The real outcome measure that distinguishes true moral systems is the one that determines whether anyone will, when fully rational and informed, obey that moral system. If a fully rational and correctly informed agent would not obey that moral system, then there is no relevant sense in which that moral system is “true.”
Because of this, you should be looking instead for “satisfaction-state” measures—which option (e.g. choosing which value system, which in turn produces which behavior) maximizes the agent’s satisfaction (with themselves and with life, in duration and degree), which inevitably means in terms of risk reduction: since no actions have guaranteed outcomes, the question is always which options most decrease the probability of dissatisfying outcomes (a fact philosophers all too often overlook). Then you may find that “improving or maintaining the well-being of conscious creatures” does that (that an ideal agent pursuing satisfaction maximization will agree “improving or maintaining the well-being of [all] conscious creatures” is a lot or all of what really, in actual fact, does that: maximizes the agent’s own satisfaction as well).
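To illustrate the shape of that kind of measure, here is a bare-bones toy sketch (the options, satisfaction levels, and probabilities are all invented for illustration; this is not a research result, just the form of the computation): each option is a probability distribution over satisfaction outcomes, and we compare both its expected satisfaction and its risk of landing the agent in a dissatisfying state.

```python
# Toy sketch only: the options, satisfaction levels, and probabilities below
# are invented. The point is the form of the measure, not the numbers.

from typing import Dict

# Each option maps possible satisfaction levels (0 to 1) to their probabilities.
options: Dict[str, Dict[float, float]] = {
    "keep_commitments":  {0.9: 0.70, 0.5: 0.25, 0.1: 0.05},
    "break_commitments": {0.9: 0.30, 0.5: 0.30, 0.1: 0.40},
}

def expected_satisfaction(dist: Dict[float, float]) -> float:
    """Average satisfaction the option yields over its possible outcomes."""
    return sum(level * prob for level, prob in dist.items())

def risk_of_dissatisfaction(dist: Dict[float, float], threshold: float = 0.5) -> float:
    """Probability of ending up below an (assumed) satisfaction threshold."""
    return sum(prob for level, prob in dist.items() if level < threshold)

for name, dist in options.items():
    print(name,
          "expected satisfaction:", round(expected_satisfaction(dist), 2),
          "| risk of dissatisfaction:", round(risk_of_dissatisfaction(dist), 2))
```

On that framing, the “best” option is the one a fully rational and informed agent would pick after weighing both numbers; the hard empirical work is in getting the real distributions right, which is exactly what no one has yet done scientifically.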
But you may find it’s not quite that, but rather something a bit different that overlaps with it. Since the only way to get a moral statement to be true is to get a moral statement that an ideal agent will obey, you can’t start right out of the gate by assuming you know what that will be. Harris just “assumes” it will be “the well-being of conscious creatures,” but science hasn’t empirically determined that yet. Science may find that a rationally informed agent would pursue something else—a goal that may include “the well-being of conscious creatures” in some ways, but won’t be literally identical with it. Not realizing this is where Harris goes wrong. It’s not that he’s wrong in his core thesis (that science can determine what morals an ideal agent would obey). It’s that he keeps skipping steps and assuming science has already answered certain questions, when it hasn’t—it hasn’t even tried yet.
We need Harris and other advocates of this notion to start articulating an actual scientific research program. We need to know what value system an ideal agent would choose. To find out, we can’t really create an ideal agent and see what it computes. But lots of things in science are understood without being viewed directly (we can’t see atoms, black holes, other people’s thoughts, etc.). The way to go about it is to start removing the things that de-idealize an agent and see what happens. What happens when you get an agent to reason out a system of values without a fallacy (e.g. with fewer and fewer fallacies, by detecting and purging them) and with only true beliefs, and with all relevant and accessible true beliefs (e.g. with fewer and fewer false beliefs, undefended assumptions, gaps in available knowledge, etc.)? You might not get perfect knowledge, but you will start to learn things about which values are falsely clung to (values that can only be justified by fallacious reasoning, false beliefs, or ignorance), and thus start trimming them down to the values that would most probably survive any further purging of bad data and logic.
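Schematically, that pruning procedure looks something like the following sketch (every value, belief, and verdict in it is hypothetical, chosen only to show the structure of the test; it is not an actual research protocol): tag each value with the beliefs and inferences supporting it, then purge any value whose support rests on false beliefs or fallacious derivation, and see what survives.

```python
# Bare-bones schematic of the purging procedure described above. Every value,
# belief, and verdict here is hypothetical, chosen only to show the structure.

values = {
    "honesty":              {"supports": ["trust enables cooperation"], "derivation_fallacious": False},
    "blind_obedience":      {"supports": ["the authority is always right"], "derivation_fallacious": False},
    "cruelty_to_outsiders": {"supports": ["outsiders are subhuman"], "derivation_fallacious": True},
}

# Beliefs we have (hypothetically) established to be false.
false_beliefs = {"the authority is always right", "outsiders are subhuman"}

def survives_purge(entry: dict) -> bool:
    """A value survives only if it has at least one true supporting belief
    and its derivation contains no detected fallacy."""
    has_true_support = any(b not in false_beliefs for b in entry["supports"])
    return has_true_support and not entry["derivation_fallacious"]

surviving = [name for name, entry in values.items() if survives_purge(entry)]
print(surviving)  # -> ['honesty'] given these assumed inputs
```

Iterating that, with better and better empirical tests of the supporting beliefs and better fallacy detection, just is the research program: the values that keep surviving are the ones most probably held by the ideal agent.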
I predict the result will be something closer to an interactive hierarchy of reasonableness, compassion, and honesty. The effect of adhering to those values will be, in most cases, an improving or maintaining of “the well-being of conscious creatures,” but it won’t be identical with it, and indeed I suspect it will turn out that an ideal agent will sometimes correctly act against that outcome. But that, and where the boundaries are, is an empirical matter. We can argue over it, case by case, in a proto-scientific way with the data we have so far. But we would still need to turn the full engines of science on it to have a scientific resolution of any such argument. And so far, no one is doing that. Or even trying to work out how we would. Not even Harris.
The Critique Worth Two Thousand Dollars
With all that understood, you will have a better perspective on the context of the main points in Born’s critique (at The Moral Landscape Challenge: The Winning Essay). I believe the most relevant points are as follows:
- Born: Harris’s “proposed science of morality…cannot derive moral judgments solely from scientific descriptions of the world.”
This statement is correct for Harris. Harris has made no argument capable of meeting this objection. And Born does a good job of showing that. But we can meet the objection. Born is incorrect to claim that because Harris hasn’t done this, it can’t be done. This is a common error in philosophy: to insist something can’t be done, simply because it hasn’t been done yet. This error is most commonly seen in Christian apologetics, but even fully qualified scientists and professors of philosophy make this mistake from time to time. And an example of what I mean is that I make a case for what Born argues Harris didn’t (in my book and subsequent peer reviewed chapter). Born hasn’t reviewed my case.
The gist of my case is what I just outlined above: all moral judgments (that are capable of being true—in the sense of, correctly describing what an ideal agent would do; because we have no reason to prefer doing what an ideal agent wouldn’t do) are the combination of what an agent desires and what will actually happen. And both are scientific facts about the world. One, a fact about the psychology and thus neurology of the agent; the other, reductively, a fact of physics—e.g. it also involves facts about social systems, etc., but those all just reduce to the physical interaction of particles, including a plethora of computers, i.e. human brains. So the statement that a “science of morality…cannot derive moral judgments solely from scientific descriptions of the world” is false.
- Born: “a science of morality, insofar as it admits of conception, does not have to presuppose that well-being is the highest good and ought to be maximized. Serious competing theories of value and morality exist. If a science of morality elucidates moral reality, as you suggest, then presumably it must work out, not simply presuppose, the correct theory of moral reality, just as the science of physics must work out the correct theory of physical reality.”
This is a spot-on criticism of Harris. It’s exactly what I explained above. Harris can’t just presuppose what the greatest value is, from which all moral facts then derive, any more than Kant could (see my discussion of Kant’s attempt to propose a “greatest good” that he claimed motivated adherence to his morals, in TEC, pp. 340-41), or Aristotle (and his notion of eudaimonia, which differed in various ways from Harris’s), or Mill (and his ambiguous and ever-problematic “greatest good for the greatest number”). A moral science must somehow be able to empirically verify which of these (or which other) fundamental good is actually true. Which means Harris must work out what it even means for one of them to “be true” (such that the others are then false).
There are many ways to make moral propositions true. You could say that moral statements just articulate what the stating party wants the agent to do (“I don’t want there to be thieves; therefore you ought not steal”), and as such, all such statements are true when they do indeed articulate what the stating party wants. Thus, on this proposal, if it’s a true fact in brain science that I really don’t want there to be thieves, then it is also a true fact of science that “you ought not steal.” But then all that the sentence “you ought not steal” means is “I don’t want you to steal.” Which may be of no interest to you whatever. Why should you care what I want? Just because I want you to not steal doesn’t mean you shouldn’t steal. Thus, making moral facts mean this plays games with the English language. We do not in fact mean by “you ought not steal” merely “there are people who don’t want you to steal.” Even if that’s all people ever secretly mean, it’s not what they want you to believe they mean. Otherwise they’d just say “I don’t want you to steal,” or “I don’t like thieves.”
No. People want “you ought not steal” to be understood as meaning something much more than that. They want it to be true that you will not steal, if only you understood why it is true that you ought not steal. Just as outside moral contexts: if I believe “your car’s engine is going to seize up unless you change the oil” I can state that as “you ought to change your car’s oil,” and what I am saying, really, is “if you don’t want your car’s engine to seize up, then you ought to change your car’s oil.” I’m really just appealing to your own values, your own desires—not mine. It’s not about what I want; it’s really attempting to claim something about what you want. “You ought not steal” is meant to mean, “really, trust me, even you don’t want to steal.” Hence if you are a scientist testing when an engine seizes from neglecting oil maintenance, my statement “you ought to change the oil in your car” will be false, and I will even agree it is false, because it is no longer the case that avoiding the engine’s seizing is what you want.
In actual practice (as in, in real use, in the real world—outside all ivory towers and armchair imaginations), when people call an imperative statement (“you ought to do A”) a moral imperative, they mean an imperative that supersedes all other imperatives. In such a way that, if it were true that you ought to do something other than A, it could not be true that doing A is moral. The moral is always and ever the imperative that in actual fact supersedes all others. That, at least, is what we want people to take moral statements to mean. But that then reduces all moral statements to empirical hypotheses about means and ends with respect to the agent’s values. “You ought to do A” can then only be true if it is true that “when fully rational and informed, you will do A.” Otherwise, it’s simply not a recommendation we have any reason to obey, and therefore it isn’t true that we ought to do A. (Because we only have reason to emulate an ideal agent, not an irrational or ignorant one.)
Thus, Harris has overlooked the fact that his proposed science of morality has to start there. Just as Born says. For moral statements to be true, in any sense anyone has enough reason to care about (the only sense that has a real command on our obedience), they have to appeal to what the agent wants above all other things, because only outcomes the agent wants above all others will produce true statements about what they ought to do (otherwise, they ought to do something else: the thing that gets what they want more). And when we debate moral questions, the issue that we really are getting at is that the agent would want something else more, if only they weren’t deciding what to want most in an illogical or uninformed way. So what moral facts really reduce to, is what an ideal agent would do: what a perfectly rational and informed version of you would prefer above all else.
That’s an empirical question. Our only access to it is through logical inference from available evidence. And that means science can improve our access to it, by increasing our access to pertinent evidence, and cleaning up errors in our logic (e.g. fallacies that result from bad experimental design, faulty statistical inferences, etc.). Thus, this is a matter for science. We just have to actually do the science.
- Born: Harris’s “two moral axioms have already declared that (i) the only thing of intrinsic value is well-being, and (ii) the correct moral theory is consequentialist and, seemingly, some version of utilitarianism—rather than, say, virtue ethics, a non-consequentialist candidate for a naturalized moral framework.”
This is a valid criticism insofar as Harris has not, indeed, answered it. But it is an invalid criticism insofar as it is, actually, quite easily answered: all moral systems are consequentialist (see my Open Letter to Academic Philosophy: All Your Moral Theories Are the Same). Aristotle and Kant were just arguing that different consequences matter, and matter more, than the ones someone like Mill later said mattered. It’s all just a big debate over which consequences matter, and which matter more. Virtue ethics says it’s the consequences for the agent’s “happiness” (whatever that is supposed to mean). Kant said it’s the consequences for the agent’s “sense of consistency and self-worth.” Mill said it’s the consequences for everyone affected by an action. And so on.
So we’re back to asking science to find out: What does an ideal agent conclude matters most? The answer may be universal (we all, when acting ideally, would agree the same things matter more). Or it may be parochial (each of us, or various homogeneous clusters of us, when acting ideally, would differ in this conclusion from others). But either way, it will be an empirical fact that that’s the case. And science is the best tool we have for finding that out (see TEC, pp. 351-56).
Harris’s Response
Harris answered Born in Clarifying the Moral Landscape: A Response to Ryan Born. I will close by analyzing Harris’s reply. But already you can see where I agree with Born, and why I still think Born is wrong: only because Harris hasn’t correctly analyzed the question of how to turn a quest for moral truth into an actual scientific research program. If you fix that error in Harris, Born’s critique is mooted.
- Harris: “The point of my book was not to argue that ‘science’ bureaucratically construed can subsume all talk about morality. My purpose was to show that moral truths exist and that they must fall (in principle, if not in practice) within some (perhaps never to be complete) understanding of the way conscious minds arise in this universe.”
This is a very good thing of him to say. I assumed this. But many who read his book did not. I quote it here to head off anyone who wants to level that criticism at him (you should also read his ensuing examples). He did not argue that scientists will now be the final arbiters of all things moral. Rather, he argued that moral facts are ultimately empirical facts, and thus scientific facts. Whether science is looking for them or not, a scientific method of finding them is always going to be more reliable and more secure. Which means we should use as scientific a method of discovering these facts as our access to the evidence allows. For lack of means, that’s usually going to mean methods that fall short of scientific certainty, as with the rest of philosophy and public policy. Especially now, where we still have no moral science program going. He is fully aware of this. His critics need to be fully aware of it, too.
Once we’ve built the appropriate scientific research program and applied it widely for a century or so (about the length of time it has taken to get psychology as a science up to its present state, and that’s still far from perfected), scientists will indeed be able to say a lot about what is and isn’t morally true. And they will have produced more certainty about those conclusions than anyone else will ever be able to match (whether theologians or philosophers). And even when they can’t reach scientific certainty on some fact of the matter in moral science, due to technological or empirical barriers or financial limitations or whatever it may be, they will still be able to say a lot about what’s morally false. Just as, right now, scientists can’t say for sure how life on earth began; but they can say with scientific certainty it wasn’t ghosts.
- Harris: “Some intuitions are truly basic to our thinking. I claim that the conviction that the worst possible misery for everyone is bad and should be avoided is among them.”
Here Harris concedes the debate. He can’t answer Born’s criticism. He’s effectively just giving in and saying “I dunno, I just feel it in my gut or something; and I’ll just assume so does everyone else.” Intuition can only be giving you a correct answer if that answer can in principle be empirically verified. If it can’t be, not even in principle, then there is no possible way your intuition can know it’s true either. Harris of all people knows intuition is not magic. We do not have souls or psychic powers. If his brain is giving him that output, why is his brain giving him that output? What inputs is it using to generate that output? And is it correct?
Even if it is true that everyone agrees “the worst possible misery for everyone is bad and should be avoided” (and Harris has never even demonstrated that—even empirically, much less through science), and that’s “why” Harris’s brain generates that intuitional output (his brain, let’s say, having subconsciously done some statistics from his experience with people across history and geography), there still has to be a reason why that’s the case—and more importantly, there has to be a reason why its being the case warrants our agreeing with it. Many a firmly held intuition is actually false, and we are actually not warranted in agreeing with it. Indeed rarely more so than in debates about what really is the greatest moral good!
But that’s not even the problem. It’s a problem. Harris doesn’t deal with it. And that’s a problem. But the real problem is that this is not even what we should be looking for. If you want a scientifically true hypothesis regarding what we morally ought to do, then you have to do the same thing science does to produce and test true hypotheses regarding what, for example, we medically ought to do (for example, to surgically treat a laceration to the heart). The answer to those questions always comes at the conjunction of two facts: what we want (a study of human desires and motivations), and what produces it (a study of causal consequences). If doctors want a heart surgery patient to survive in the best possible post-op state of health, then there are certain procedures that will effect that outcome to a higher probability than others. The latter is a straightforward empirical question in natural science. But so is the former. It’s just a different question (about the desires and motivations of doctors).
Thus, if you want to discover a true proposition about morality, you have to scientifically discover both what the consequences are (what effects does stealing tend to have? what effects does refraining from stealing tend to have? and in each case, we must mean effects both on the world, and on oneself—reciprocally from the world, and internally from what it changes in you) and what the moral agents we are making these statements about want. Moral statements are, indeed, statements about moral agents. When we say “you ought not steal,” we are claiming something is true about you. And that truth, as in all other imperative contexts, is a question of what will happen, in conjunction with what you want. More specifically, it’s about what you would want to happen if you were reasoning without fallacy from all and only true beliefs. Because with such moral statements, we are recommending an action, such that if it is not already obvious to you that that’s what you’d always do anyway, then your not realizing that (and thus actually considering doing something else) must be a result of an error on your part: either of reasoning (some logical fallacy) or of information (false beliefs or missing data).
The result is that, scientifically, we don’t look for something like “the worst possible misery for everyone is bad and should be avoided.” We first look for why someone would prioritize a moral goal at all. In other words, the only way it can be true (of you, or any other moral agent) that “the worst possible misery for everyone is bad and should be avoided,” is if that is a goal that serves your desires (your desires when arrived at by an ideal process—again, meaning, rational and informed desires), and does so more than any other possible goal. Does being compassionate, for example, make your life better, such that any other life you could live instead (all else being equal) will be worse? (Or, “more likely” worse, since this is always a risk theory; no outcomes are guaranteed.) And is there an optimal degree of compassion, a “too compassionate” point, whereby your compassion is so extreme it makes your life worse again?
These are indeed scientific questions. As is the question of whether everyone (when in the requisite ideal state) will answer these questions the same way, or will some people have different answers (even when fully rational and fully informed). But always this is the only way to discover true moral propositions: by discovering what moral agents, when in an ideal state, would want most, in terms of the consequences of their actions. And then discovering what actions maximize the odds of procuring those consequences. Those two discoveries together, produce all true moral facts (as I explained in the intro sections; and prove in the literature). And this is true, regardless of whether any scientific access is available. In the absence of scientific tools, we have to rely on the best empirical tools that are available. But always, these are empirical facts we are looking for. They are discoverable facts about the world, including physical facts about moral agents (about “conscious minds” as Harris puts it).
Missing this is where Harris has lost the narrative, and why he can never clearly outline any scientific research program to discover moral knowledge.
Here’s an example of what I mean:
- Harris: “Ryan seems to be holding my claims about moral truth to a standard of self-justification that no branch of science can meet. Physics can’t justify the intellectual tools one needs to do physics. Does that make it unscientific?”
No. That’s not the issue here. Yes, philosophy has to build the analytical foundations of science. And it’s unfair to expect otherwise. But this isn’t an analytical question we are discussing. The issue is: what makes any imperative that Harris’s proposed science discovers “true”? Not tautologically true; empirically true. If, for example, Harris were to scientifically prove “stealing increases misery, and misery is bad,” he still wouldn’t have proved “therefore you ought not steal.” Because why should we care to avoid what’s bad, especially when it doesn’t affect us? In other words, how does he know someone else’s misery is “bad” in the sense of bad for us? Lots of misery may be good or even necessary (e.g. the pain of exercise; killing in self-defense). In order for a statement like “you ought not steal” to be true, it has to be true of the moral agent you are saying it is true of. But if that moral agent literally has no reason whatever to care about the increase in misery in the world, in what sense is it “true” that they ought not increase that misery?
Harris never answers this question. Yet it has to be answered if you want to turn morality into a science. You need to know what the hypothesis is: What is it that you are claiming is true about the world? If it’s just “Sam Harris doesn’t like misery and so would you please kindly not cause any,” then he isn’t talking about morality any more, in any sense anyone in the real world means. It can be scientifically true that Harris doesn’t like misery and would like there to be less of it. And science can certainly discover what actions will make more or less of it. But that’s not morality. That places no obligation on anyone else to care. It doesn’t even obligate Sam Harris to care. He might on a whim change his mind tomorrow about misery, and conclude he likes it again. What’s to stop him?
That’s not a rhetorical question (the way Christian apologists use questions like that). It’s an honest question. A necessary question. Because the answer to that question is the very thing that makes morality true (and this is as true of Christian morality as of secular: see TEC, pp. 335-39). Even if a Christian says the answer is “God,” that’s not really an answer. Unless they mean God will literally vaporize you if you change your mind, or will force your mind to change back, thus eliminating the existence of anyone who thinks otherwise. Beyond that, an answer like “God” needs explication. How will God stop him liking misery? Threats of hell perhaps. Something about how God made humans to only be happy if they adopt certain values. Whatever. It has to be something. But always, it will be the same fundamental thing: an appeal to what Sam Harris really most wants.
For example, suppose the answer is “God will burn Sam Harris in hell, and Sam Harris will like that less than changing his mind back about misery.” How do you know even that is true? Maybe Sam Harris will actually prefer hell. “But he could only prefer hell if he is being irrational, or not correctly or fully informed about reality.” Well, hey ho. That is exactly what we are saying, too. That moral truth follows from what you would conclude when you are not being irrational and are correctly or fully informed about reality. And the question of what Sam Harris really most wants, “hell, or aligning his values with an entry ticket to heaven,” remains fully apt even if there is no God. Hell then just becomes the consequences, whatever they are, that will befall Sam Harris or that Sam Harris risks upon himself. These will be external (e.g. social reciprocity) and internal (e.g. self-contentment). But still, it always just comes down to what he wants most for himself. Only that can motivate him in such a way that it would be true to say “Sam Harris ought not steal.” And as for him, so for everyone else.
A science of morality therefore must attend to determining what it is people really want. Science must resume what Aristotle began: the study of eudaimonia, and not as he defined it (that was just an empirical hypothesis; some other may be true; indeed some other is likely to be, as we know a ton more than Aristotle did), but as whatever it turns out to be. Meaning: science must discover the thing people want most out of life, the reason they continue to live at all, hence the one thing that would motivate them to act differently than they do (or exactly as they do, if they are already wholly aligning their behavior with what is morally fit); the one thing, whatever it is, that makes “you ought to do A” true—for you, or anyone else it is true of. And not just what people happen to desire most or say they desire most (already two things often not the same), because they might be deciding what they want most from a logical fallacy, or from misinformation or ignorance. We want to know what they would want most when they aren’t irrational and ignorant.
The study of morality is entirely driven by our desire to know what a non-irrational and non-ignorant version of us would do. So that’s what it should be looking for. It therefore must concern itself with human desires, and ultimate aims; with what it means for us to be satisfied with life and with who we are and what we have become. Because all moral truth requires knowing that. At the very least, it requires having some idea of it. You can’t just skip straight to “misery is bad.” You have to answer why anyone should care if it is. And not just care; but care so much, that they will prefer nothing else to minimizing it, that there won’t be anything else they “ought” to do. That’s the only thing that can make an “ought” statement true of someone.
So Harris comes close to getting it when he says…
- Harris: “…if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.”
Spot on. Exactly my point in my Open Letter to Academic Philosophy. But notice what this means: the truth of any moral system (including Kant’s, including Aristotle’s) derives from what people think matters the most, is the most important, is so good it trumps anything else we could gain from our actions instead of it. All philosophers for thousands of years have unknowingly been admitting that the truth of moral imperatives is a function of what people really most want out of life.
But we can’t, like Aristotle and Kant did, and like Sam Harris now does, just sit in the armchair and conjure from our intuition the answer to that. Because we can be wrong. We can be wrong because we arrived at our conclusion illogically or uninformedly. We can be wrong because other rationally informed agents reach different conclusions (because, it would then have to be the case, they are physically different from us in some relevant way). We can even be wrong because, though we intuit correctly, our articulation of what we intuit is so semantically amorphous it leaves us no clear idea of what exactly constitutes misery, or when exactly it actually is bad (likewise what constitutes happiness, or when exactly it actually is good; or any other vocabulary you land on). These are things only empirical investigation can answer (What, really, exactly, is it that people want most and will always prefer to anything else, when they are rational and informed?). And science is simply a collection of the best methods of empirical investigation we have.
So Harris at least gets that people think happiness and misery are guiding principles in the construction of our various competing moral theories. But why do they think that? What do they mean by that? And are they right? Or are they confused? misinformed? irrational? What exactly is it they should think? In other words, what will they think, once they reason without fallacy from true information? This is what a moral science must explore. In addition to the more obvious study of the various consequences of the available actions, choices, and values.
Ironically, Harris doesn’t apply his own criticism to himself when he says exactly what I just did, only of someone else…
- Harris: “For instance, John Rawls said that he cared about fairness and justice independent of their effects on human life. But I don’t find this claim psychologically credible or conceptually coherent. After all, these concerns predate our humanity. Do you think that capuchin monkeys are worried about fairness as an abstract principle, or do you think they just don’t like the way it feels to be treated unfairly?”
Good question, Dr. Harris. But…how do you know you aren’t just as mistaken as you now admit even John Rawls is? How do you use science to determine that you are right and he is wrong…and not the other way around?
Of course, I’ve been explaining exactly how we would use science to do that. But Harris doesn’t even seem aware that we need to explain that. That we need a research program to do that.
- Harris: “‘You shouldn’t lie’ (prescriptive) is synonymous with ‘Lying needlessly complicates people’s lives, destroys reputations, and undermines trust’ (descriptive).”
This is false. It’s so obviously false, I can’t believe Harris really thought this through. Because there is a key thing missing here. Merely because “lying needlessly complicates people’s lives, destroys reputations, and undermines trust,” it still does not follow that one ought not lie. Thus, they cannot be synonymous. How do you connect the empirical fact that “lying needlessly complicates people’s lives, destroys reputations, and undermines trust” with an actual command on someone’s behavior that they will heed? That they will care one whit about? Much less care so much about, that they will place no other outcome higher on their list of desired outcomes? Harris doesn’t realize he needs to fill in that blank, in order to get ‘you shouldn’t lie’ to be empirically true—of anyone. You can’t just list the consequences of lying, and then conclude therefore no one has any reason to lie anyway. (Even apart from the fact that there are probably many moral reasons to lie.)
Thus, Harris doesn’t answer Born. Harris confuses himself into thinking he has. But he hasn’t.
We need to answer Born, if we want to make moral science a thing. So far as I know, I’m the only one trying to actually do that. And under peer review no less. (Not that peer review is such a hot ticket in philosophy; but it’s still better than not being peer reviewed.)
- Harris: “There need be no imperative to be good—just as there’s no imperative to be smart or even sane. A person may be wrong about what’s good for him (and for everyone else), but he’s under no obligation to correct his error—any more than he is required to understand that π is the ratio of the circumference of a circle to its diameter. A person may be mistaken about how to get what he wants out of life, and he may want the wrong things (i.e., things that will reliably make him miserable), just as he may fail to form true/useful beliefs in any other area.”
But how can Harris claim a person “wants the wrong things”? What does that statement even mean? He needs to answer that before he can claim it’s even logically possible to “want the wrong things,” much less that anyone does want the wrong things, and even more so if he wishes to claim to know that the person wanting the wrong things isn’t himself. Likewise, how can Harris know there is no imperative to be good, or smart, or sane? Maybe in fact it is morally imperative that the insane seek therapy, that poor thinkers practice more at thinking well, that someone who has false beliefs actively seek to discover and fix them? How can Harris know in advance whether these are or are not morally imperative, if he hasn’t even begun to apply his moral science to finding out? (Even if only in a proto-scientific way, like the human race has already been doing in philosophy for thousands of years.)
Harris talks a lot about the need to empirically vet claims about morality. And yet he seems keen on making a lot of claims about morality he hasn’t empirically vetted. He needs to attend to that. It makes it look like he doesn’t know what he’s doing. It makes him look like a bad philosopher.
- Harris: “Ryan, Russell, and many of my other critics think that I must add an extra term of obligation—a person should be committed to maximizing the well-being of all conscious creatures. But I see no need for this.”
Then you can never produce any true proposition about morality. Harris is thus saying we need to use science to prove what is true in morality, while simultaneously insisting he sees no need to prove any of its results are true for anyone he expects to obey them. What’s the point then? The next Saddam Hussein can also use science to produce a thoroughly coherent system of moral imperatives that best serves the goal of creating the ideal totalitarian society. Ayn Rand practically did the equivalent, with yet another completely different goal in mind. How would Harris argue we should obey whatever Harris comes up with, and not what this imaginary tyrant does, or what Ayn Rand did? “My gut feeling” just doesn’t cut it.
In actual fact, “you ought to do A” can only be a true fact about you, if in fact you will do A when fully rational and informed. Otherwise, why would you have any reason to do A? If even a perfectly rational and informed version of you wouldn’t do A, why should you? Why would you even want to? Why would you ever want to do what you know is irrational and uninformed? Harris has no answer. And that’s why he has no science. He has the idea of a moral science. But he has no idea of how to get there. And he is so stubborn, he even rejects the only way he could get there, the only line of inquiry that’s actually capable of getting him what he wants: moral imperatives that we can confidently claim to know are true statements about the people we expect to follow them.
Harris says “the well-being of the whole group is the only global standard by which we can judge specific outcomes to be good,” but he doesn’t realize that’s not true unless each individual, when fully rational and informed, agrees that that outcome is indeed what they want most for themselves. Otherwise, it is not true that they want that outcome, that that outcome is best for them and therefore what they ought to pursue. They ought, instead, to act differently. To get it to be true that “the well-being of the whole group” is what everyone across the globe values (or would, absent irrationality and ignorance), you have to tie “the well-being of the whole group” to what’s good for the individual—and not just tie it in, but show that it is more important to that individual than anything else that individual might prefer instead (again, that is, when they are rational and informed).
Otherwise, you are just making shit up. We can make up false moralities all day long, each with some seemingly glorious goal that sounds cool. But is it true? How do you know? Harris can’t dodge these questions and expect to be making any progress in moral thought.
Conclusion
Ultimately, all real answers about how we ought to behave require solving real problems of conflicting values. It’s not always a zero-sum game (even if sometimes it is). But it’s still not obvious how “you ought to decrease misery” or “you ought to increase flourishing” works out in practice when it is not possible to do any of the one without causing some of the other, which is what is going to happen in almost all cases—and that’s even if you can define misery and flourishing in any operationally testable or usable way to begin with (and Harris hasn’t). When is reducing misery less important than increasing happiness? Or vice versa? You have to work out which is more important and when. And Harris’s deepity about “reducing misery is good” just doesn’t answer that. It’s not even capable of answering that.
My point is not, like the skeptics, to say these things are unanswerable. My point is that to develop a moral science, you have to answer them. Even if it’s really hard. Like most science is.
Harris cannot dismiss Born’s point that Harris needs to finish the equation. To get any “you ought to do A” statement to be true, you can’t just work out empirically what the consequences are of different choices. And you can’t get it to be true by just asserting you know in your gut what the only true ultimate goal is. Imperative propositions can only be true when the consequences of A are what the agent actually wants most—when that agent is concluding what to want with full true information and without fallacy. Otherwise, “you ought to do A” simply isn’t true for you. Or anyone. You literally won’t have any reason to give a shit. Much less give so much of a shit, that you will sacrifice literally every other possible thing you could pursue in its stead.
That’s a really difficult thing to discover. Harris needs to realize, it’s going to be hard work actually finding that thing. He can’t just conjure it from his gut. It doesn’t just fall out of the ether like magic. People need to know why they should make any sacrifices whatever, for anything. Because all actions entail sacrifice—you always sacrifice some kind of gain, time, energy. Why should they care to sacrifice, and how much, and for what? The only way to answer that question, is by discovering what each individual really wants. And what they really want, will entail the only morality capable of being true.
Very good, as per; but are you having this conversation with Sam Harris? There doesn’t seem a whole lot of point if you are not.
I already told him all this years ago. As a public intellectual, he can engage if he wants. I can’t make him. He needs to be persuaded to pay attention by his fans and peers. It won’t help coming from me.
Thanks Dr. Carrier! I have some questions. First, what is the difference between the proposed scientific program, and what moral psychologists are doing? Second, are there really no scientific results that begin to answer the questions proposed? Third, who is in the best position to get the program going? Moral psychologists? Finally, do you hope to one day write a book that gains the popular attention that Harris has received or is that not important to you? Thanks!
Moral psychologists, at least so far, are studying how the human brain engages in moral reasoning, and what parts of the brain are involved and what they do. Same as psychologists who study reasoning: they are not trying to find out what is true in logic or what the correct way to reason is; they are just studying how the brain reasons, and why, regardless of whether that reasoning is sound or not. It’s a form of descriptive moral research, not prescriptive (like other descriptive sciences of morality, e.g. those going on within anthropology and sociology).
There is some scientific, and nonscientific but nevertheless empirical, data that pertains and can be used to rule out hypotheses and argue probabilities in the domain of moral knowledge. Basically, any philosopher who is actually paying attention to that data when reasoning about correct moral conclusions is already doing what we should be doing; just not as rigorously or carefully as they could be in most cases (if they had the funding).
The data pertaining is not single-domain. On the internal side it comes from psychology (including cognitive science), economics, sociology, and anthropology; on the external side it comes from medicine, law, political science, group psychology, and (again) economics and sociology. Moral science would have to be a collaborative effort across disciplines.
Accordingly, the only people in a position to get it going are not scientists but rich people, i.e. people with the money to fund the interdisciplinary teams and research needed to do it.
On the attention, I’d rather Harris write a better book. Realistically, that has a better chance of getting attention and a ball rolling. I doubt there is anything I could do that would work any better, and realistically I’m in no position to acquire such an audience anyway. I do what I can already. And that’s limited by my resources and skills and reader base. I do mentor Ph.D. students who take up dissertation work in advancing this agenda. They might make more of a difference in the long run. But it could be decades or centuries before society wakes up to this and actually starts doing it.
Excellent response. Very helpful. Thanks so much!
Hi Richard, You wrote, “All moral judgments (that are capable of being true—in the sense of, correctly describing what an ideal agent would do)… are the combination of what an agent desires and what will actually happen. One, a fact about the psychology and thus neurology of the agent; the other, reductively, a fact of physics—e.g. it also involves facts about social systems, etc., but those all just reduce to the physical interaction of particles, including a plethora of computers, i.e. human brains.”
My questions are these:
1) How ideally has an “ideal agent” of morality with its “ideal desires” been modeled thus far (not to mention its ideal “psychology” and ideal “neurology”)?
2) Exactly which “facts” about which “social systems” are we talking about? Marvin Harris’s book, Our Kind, introduced me to a variety of “facts” about a variety of “social systems”
1) How ideally has an “ideal agent” of morality with its “ideal desires” been modeled thus far (not to mention its ideal “psychology” and ideal “neurology”)?
If I understand the question, you mean, if we assume everyone proposing moral systems (all the way to the point of actual moral advice) is attempting to model the system that would describe the behavior of an ideal agent (whether they realize that’s what they are trying to do or not), has any one of those people gotten closer to a fully correct model by that standard than everyone else?
Of course, scientifically, we don’t know, because no one is testing this to find out. But pre-scientifically, i.e. judging as empirically as we have access to, I’d say yes, but probably no single group or person (there are lots of good models, but each one might be wrong about a few different things). Generally, if we put a dial on a spectrum between 100% right-wing and 100% left-wing, the dial when set to about 80-90% left-wing is probably the closest to a correct model that we’ve gotten so far. To argue over that would be fine, as long as the arguments are empirical, as in evidence-based, and both sides respect true information (and thus will back off claims they can’t actually back up strongly with evidence in a rationally valid way). So the debate always becomes whether the model has a fallacy in it, or wrong or missing information. But that’s what science could help us with. If we invested in that.
As to the neurology, we are nowhere near that. It may be decades to a century before we can actually model neurologically a non-fallacious reasoning process (and it will be AI that does it before we do). And that process will always produce a more accurate “ideal agent” moral system in direct correlation to how much accurate information we provide it. (It will never be flawless, as correct outputs will always be probabilistic, because omniscience doesn’t exist and can’t be obtained.)
2) Exactly which “facts” about which “social systems” are we talking about? Marvin Harris’s book, Our Kind, introduced me to a variety of “facts” about a variety of “social systems”:
A social system is like a traffic system: we can have a science of traffic systems that explains why several different systems are equally effective (e.g. we can explain why UK and US traffic systems work equally well, even though they are mutually incompatible). The “facts of social systems” we need to know are like the “facts of ecosystems” we need to know to survive and feed ourselves and maintain a stable environment: being in a particular system is a fact of your circumstances that then affects what the optimal behavior will be in that system (just like “do I drive on the left here, or on the right?”). So we will need to know the facts of all social systems we are likely to find ourselves in, and especially of the social systems we spend most of our time in or could tinker up and build. Just as, if you will drive in England, you need to know UK traffic laws, even if you spend most of your time driving in the US.
Social systems are also like traffic systems in that a social system can be fucked, or badly designed, or worse than tinkering could make it. Just as many traffic systems around the world are fucked, or badly designed, or worse than tinkering could make them. And probably there is a lot we need to improve with our own social system to make it work better, and thus get closer to a “peak” on the moral landscape.
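To make the traffic analogy concrete, here is a minimal sketch in Python (my own illustration, not anything from the exchange itself; the system names and scores are invented placeholders) of the point that the optimal action is a fact about which system you happen to be in, and that systems themselves can be better or worse designed:

```python
# Toy model: which side to drive on is not universally right or wrong;
# it is right relative to the system you are in, which is itself a fact.
SYSTEMS = {
    "UK": {"drive_on": "left",  "design_quality": 0.9},
    "US": {"drive_on": "right", "design_quality": 0.9},
    "badly_designed": {"drive_on": "right", "design_quality": 0.4},  # hypothetical
}

def optimal_side(system_name: str) -> str:
    """The optimal action is determined by your circumstances (which system you are in)."""
    return SYSTEMS[system_name]["drive_on"]

def expected_outcome(system_name: str, side_chosen: str) -> float:
    """Crude score: following the local convention is good, and a better-designed
    system yields better outcomes even when everyone follows its rules."""
    system = SYSTEMS[system_name]
    follows_convention = 1.0 if side_chosen == system["drive_on"] else 0.0
    return follows_convention * system["design_quality"]

if __name__ == "__main__":
    for name in SYSTEMS:
        side = optimal_side(name)
        print(f"{name}: drive on the {side}; outcome = {expected_outcome(name, side):.1f}")
```

The toy scoring only illustrates the two claims just made: the same goal yields different optimal behavior in different systems, and a badly designed system gives worse outcomes even to people who follow its rules perfectly.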
I don’t disagree in theory with your explanations of how to discover the ideal moral landscape, which, I assume, we can then use to help engineer a more moral and just civilization. But I suspect that the moral landscape itself might be less like a still life painting one can study at one’s leisure and more like rising and falling peaks on an ever-moving ocean (since life remains in constant motion). On the other hand, there is also a vast landscape beneath the sea that remains relatively calm compared with the waves on the surface. So mores remain at least partly in motion. There might also be moral changes akin to chaos theory that erupt unexpectedly and upset the apple cart of scientifically determined ideal mores (like the eruption of a supervolcano) that could reduce any Brave New World to rubble due to perhaps even minor fluctuations that soon get out of hand, as in the Butterfly Effect. How might mores continue to change if technology allows us to literally share internal thoughts, visions, and sensations with others simultaneously? How are people and their mores changing even today with the worldwide web? And how might the knowledge that human civilization could crumble in the near future influence moral decisions people are considering today?
Are the majority of humans wise enough even to know where to concentrate their energies for long-range planning of their individual lives, let alone as a planetary whole? Can governments make long-range plans for the sustainability of their own cities and nations, let alone for the planet? We see greedy, power-hungry dictators. Or we see democratically elected leaders who have to spend much time and effort ensuring their victory each electoral cycle, and who change their minds on issues based partly on the quarterly profits of the corporations who sponsor their next election, and partly on highly paid lobbyists who lie to the representatives about how each corporation is doing nothing but good. Have you seen the book When Corporations Rule the World? The third edition has just come out. I’m not sure it takes a lot of research to come to some major depressing conclusions concerning nearly all forms of human governance.
I agree with you that studying the moral landscape in intimate fashion is a worthy enterprise, and that it will take quite a lot of study to finally understand and agree upon various “ideals” in particular fashion. Though most study consists of watching what people do rather than attempting to reinforce or change it. And how many studies must one perform in order to conclude that the majority of us identify with particular groups in which we are raised or become a part of later in life, or happen to live near, i.e., “herd thinking” rather than rational thinking?
I hope there are some conclusions we humans can reach and agree upon before civilization itself decays, but there are countless people who hold widely different views in religion and politics across this planet, who all claim to know what we “really must do” at this critical juncture in human history. So many different people who focus on different circles of concern: religious/non-religious, conservative/moderate/liberal, etc.
“It has often been said that, if the human species fails to make a go of it here on Earth, some other species will take over the running. In the sense of developing high intelligence this is not correct. We have, or soon will have, exhausted the necessary physical prerequisites so far as this planet is concerned. With coal gone, oil gone, high-grade metallic ores gone, no species however competent can make the long climb from primitive conditions to high-level technology. This is a one-shot affair. If we fail, this planetary system fails so far as intelligence is concerned. The same will be true of other planetary systems. On each of them there will be one chance, and one chance only.” (Hoyle, 1964) https://etb-creationism.blogspot.com/2012/03/old-earth-creationism-ideal-moment-in.html
Meanwhile, others are not the least concerned by Hoyle’s warning.
[could it be] more like rising and falling peaks on an ever moving ocean (since life remains in constant motion).
That misunderstands the landscape concept. The landscape is a map of all possible systems (and thus all possible conditions and circumstances). It is therefore always static. You can move from one peak on it to another, but the peaks never move. So when life is “in constant motion” that only means where it is on the landscape changes; it does not mean the landscape itself changes.
And indeed, our position changes in part because fundamental advances can change everything (like telepathy tech, etc.), knocking us off a peak into a valley, and requiring us to find another peak (like the highest peak that includes telepathy tech, etc.).
And the options also aren’t just peaks or valleys. There are hills, and mountainsides, which are better than valleys but worse than peaks. Thus, for example, our democracy is better than that of Imperial Rome or even Classical Athens. It’s still not the best democratic system possible, but even with all its flaws and corruptions and corruptibility, it remains a lot better than those others. And yes, we can say so, based on logically valid inferences from empirical evidence (we could say so even better, with more science behind it; though there is a lot of political science on it already).
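For readers who want the structure spelled out, here is a toy sketch in Python (my own illustration; the systems, scores, and adjacencies are invented placeholders, not measurements) of the static-landscape idea: the map from possible systems to how well they perform is fixed, and all that ever changes is which point on it we occupy and whether we can climb to something better:

```python
# Toy "moral landscape": every possible system has a fixed score (the map is static).
LANDSCAPE = {
    "imperial_rome":    0.30,  # a valley-ish position
    "classical_athens": 0.45,  # a hillside
    "our_democracy":    0.70,  # a hill: better, but still not a peak
    "reformed_system":  0.90,  # a nearby peak (hypothetical)
}

# Which systems a society could realistically move between (also part of the fixed map).
NEIGHBORS = {
    "imperial_rome":    ["classical_athens"],
    "classical_athens": ["imperial_rome", "our_democracy"],
    "our_democracy":    ["classical_athens", "reformed_system"],
    "reformed_system":  ["our_democracy"],
}

def climb(position: str) -> str:
    """Greedy hill-climbing: move to a better neighboring system until none is better.
    The landscape itself is never modified; only `position` changes."""
    while True:
        better = [n for n in NEIGHBORS[position] if LANDSCAPE[n] > LANDSCAPE[position]]
        if not better:
            return position  # a local peak
        position = max(better, key=LANDSCAPE.get)

if __name__ == "__main__":
    print(climb("imperial_rome"))  # -> 'reformed_system'
```

A real moral science would have to measure those scores empirically; the sketch only shows why “the landscape never moves” is compatible with our position on it constantly moving.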
And how many studies must one perform in order to conclude that the majority of us identify with particular groups in which we are raised or become a part of later in life, or happen to live near, i.e., “herd thinking” rather than rational thinking?
We already have studied that. That’s already settled science.
You might be confusing here two different things: how we know what’s true (“the earth is billions of years old”), and how we can convince someone to agree it’s true (“how do you get a young earth creationist to admit their beliefs are false”). Harris and I are talking about the first problem (how to know what’s true = how to know how old the earth really is). The second problem is a wholly different thing (how to persuade people to believe the truth = how to convince a YEC the earth is billions and not thousands of years old).
-:-
P.S. Finally, I get the sentiment, but Hoyle isn’t scientifically correct. And I find scientifically illiterate doomsday statements from guys like him who should know better very annoying. So I have to correct it…
Evolving civilization is in no way dependent on fossil fuels. Our entire civilization evolved without them. The entire Roman and Chinese Empires rose, grew, and flourished without them. Likewise, there is no such thing as “high-grade metallic ores” ever being “gone.” The metal never goes anywhere. It lasts forever (until it is literally elementally changed in the core of a star). We just have to process it. Processing iron out of rock as the Romans did is no easier than processing the rusted metals of a long-dead civilization that will be littering the earth when we are gone. In fact, the Romans had it harder. The next species won’t have to dig below the water table and break mountain rocks to get iron ore (which is just oxidized iron). It will just be sitting around everywhere to be picked up, or easily dug out of earthen hills. Just as coal used to be in Roman England (it took a thousand years to use that all up, such that by the time the industrial revolution started, we had to dig below the water table again to get the coal for it…in fact, it was the need to do that which eventually started the industrial revolution).

And a civilization can easily progress to our stage on a planet with zero fossil fuels: it will just take longer (this has been analyzed a lot). Roman science, had it continued or been revived without a coal industry in place, could easily have developed solar and wind power eventually (they were already exploiting water power to run their industry), and the Romans already knew how to convert renewable timber and biomass into charcoal and oil. Their cities were not lit, nor their baths heated, with fossil fuels. And not a single rocket we send into space today uses fossil fuels to get there. Fossil fuels allow a highly rapid rate of advance. Lacking them only slows the pace. It does not prevent a thing. And the earth has billions of years left in it. Humans evolved, and civilized themselves, on a scale of a mere millions of years.
For more on why doomsday thinking is almost always as fact-challenged as Hoyle’s quote, see my discussion in Are We Doomed?
Is “morality” a single thing qua thing, in and of itself? It seems more like the word and concept we refer to as “morality” is an enormous generalization and simplification of multiple influences that we sum up in that single word. Hence we have philosophies that claim to explain “morality” as being due to some single over-riding desire, need, or conscious recognition of consequences. But is it clear that moral values and behaviors are driven totally by oneʼs conscious mind, or totally due to genetic predispositions (including to some degree the behaviors the human species shares with its evolutionary ancestors), or totally due to repeated lessons from birth that eventually become ingrained behavior patterns requiring little to no thought? What we sum up with the word, “morality,” appears to be fed by multiple streams.
How much of what we call “morality” is due to parents continually telling their children to “do this, not that” and imprinting such lessons via repetition, example, or via rewards and punishments? Parents are annoyed or upset by many things children do, some of which includes a child’s behavior toward inanimate objects (destroying them), but also a child’s behavior toward other children and toward their parents. Hence, children are fed a diet of lessons that become part of who they grow up to be within their family and culture, which helps explain why “moral” behaviors come to feel so much a part of us, since such training begins before we are able to consciously weigh our own choices in a deep fashion. After the mind and body mature we learn to analyze consequences (not only of our interactions with objects that we as children are likely to be hurt by, or damage) but with other human beings.
A sense of what is “moral” is also built round or influenced by shared pain and pleasure receptors, and by shared reactions to similar psychological pains and pleasures, such that we intuit a shared connection with others, biologically and psychologically, and how they would like to be treated and not mistreated, which helps guide the development of basic agreements between us and others as well as guiding the making of laws. Incredibly few people like having their lives or belongings or health taken from them at the whims of another human or at the whims of nature. Incredibly few would disagree that having lots of friends is better than having lots of deadly enemies, including both human enemies and ones in the dangerous natural world.
Humans also grow to realize the widespread benefits of civilization over barbarism. With civilization we can as a group extend our curiosity and imagination beyond the stars, while with barbarism we can merely stick spears in each other and tremble in fear of what lies over the nearest darkened hillside.
Religion’s trump card, thrown out often, is that shared basic values don’t instill a necessary obligation to follow through on them. We may acknowledge shared internal, intuitive, and rational recognitions upon which to base rules of behavior, but seek to bend them to our own personal wants, whims, needs, and desires when given the chance. Religion may help instill the notion that someone is always watching one’s actions and either approving or disapproving of them, hence adding both positive and negative reinforcement. (In a sense, nationalistic politics also makes people believe someone is watching, namely the state, and that heroic actions performed on behalf of the state’s safety will be remembered for good or ill, and broadcast loudly by the state during days of remembrance or veterans’ days, or enshrined in statues in the state capitol, or that the state will take care of one’s family if one should perish in such actions. Great humanitarians or authors may also be remembered via the Nobel Prize or other such prizes. It’s always nice to know other humans are watching and approving.) Hence religion boasts a magnified sense of moral obligation. Some religious believers feel so high a sense of obligation that they risk their lives (and/or their families’ lives) for their god’s sake, “all for Him,” or for the necessary spread (as they view matters) of their faith in their god(s). They may risk their lives to help others who are sick or in need of protection. However, such risk-taking is not unknown among the non-religious, or those with unorthodox beliefs. Some people more than others have risked and do risk their personal safety to help protect or heal others, to keep them from harm. (Among non-theists, there’s Doctors Without Borders, a group that claims no religious affiliation that I know of; many major charities claim no religious preference; and government still remains the single largest charitable redistributor of wealth to help the impoverished and sick, to aid education, to preserve the local and international peace, and to help protect people from scams and poisonous food and drugs.)
Hume said there was a gap between “is” and “ought,” between noticing which behaviors humans found most valuable in each other, and feeling obligation to behave in that manner. Philosophers often distinguish between moral values and moral obligations, but in real life we grow to feel an obligation to protect/preserve the people or things we have come to value, so there is cross-over/feedback.
Is “morality” a single thing qua thing, in and of itself?
As I explain in the article, people can mean lots of different things by “moral.” And people can use it manipulatively (they can pretend to mean one thing by it, but really mean another). But as soon as you want to know, actually know, what you ought to do, then you can only ever mean one thing: what you truly ought to do above all else (an imperative that supersedes all imperatives). Which always reduces to: what would an ideal agent do in the same circumstances. There is no escaping this.
But is it clear that moral values and behaviors are driven totally by oneʼs conscious mind, or totally due to genetic predispositions…[etc.]?
This isn’t a relevant question. That is a question about what people do do, not a question about what people should do.
A descriptive moral science would simply ask, how do people answer moral questions? (Regardless of whether their answers are correct or defensible or in any meaningful sense true). And we already have that science. But that’s not useful for answering the other question (except insofar as it helps us identify where fallacies, e.g. cognitive errors, enter into moral reasoning; and likewise false beliefs, or the effect of missing information).
A prescriptive moral science asks instead, what really is it that we should be doing. That’s what Harris (and I, and philosophers down the ages) are asking.
Like what I said in comments here already, an analogy is the study of human reasoning:
We have a developed descriptive science of human reasoning that has accumulated a vast database of all the ways we naturally reason shittily. But we also know, that that reasoning is fallacious. So we discovered and now try to install a software patch to correct for all that bad design (hence: the cultural technologies of logic, mathematics, science, jurisprudence, etc.).
That humans reason a certain way is not a valid argument that they should reason that way. To the contrary, we know for a fact they shouldn’t. So we have different sciences devoted to determining how we should reason (e.g. science and logic, critical thinking skills, etc.).
Harris et al. are looking for the equivalent in the moral domain. Because that humans are genetically and culturally programmed to reason a certain way in morality, does not mean they should be reasoning that way in morality, any more than it means they should reason without the software patches of science and logic (etc.) in any other area of human knowledge.
So we need to identify the causal effect of fallacious reasoning (even when subconscious) and false beliefs or missing information in producing moral conclusions, and remove or correct them, and then see what gets output.
This also means there is a third science to develop wholly apart from those two: a pedagogical science of moral reasoning. Because how you convince someone to reason better is not the same thing as simply telling them what we proved constitutes reasoning better (teaching a Christian logic will not result in their reasoning logically; it takes a lot more to convince them to challenge and escape the delusional hall of mirrors they are in, and that’s true of all belief systems, godless and otherwise). Knowing the truth and how to prove it’s true is different from knowing what it takes to actually get someone to be persuaded by that proof to believe it. But that we already have a science of (the cognitive science of persuasion).
A sense of what is “moral” is also built round or influenced by shared pain and pleasure receptors, and by shared reactions to similar psychological pains and pleasures, such that we intuit a shared connection with others, biologically and psychologically, and how they would like to be treated and not mistreated, which helps guide the development of basic agreements between us and others as well as guiding the making of laws. Incredibly few people like having their lives or belongings or health taken from them at the whims of another human or at the whims of nature. Incredibly few would disagree that having lots of friends is better than having lots of deadly enemies, including both human enemies and ones in the dangerous natural world.
That’s all a crude version of empirical reasoning. But still rife with fallacies and false or missing data, which is why there remains so much disagreement about what is moral, and why we’ve been so wrong so often in our long history. We can do much better.
Hume said there was a gap between “is” and “ought” …
He actually didn’t. That’s a modern myth. See TEC, pp. 340-43.
Hume actually said exactly what I’m saying: an “ought” derives from an “is” about what people desire.
The only time he is quoted ever saying anything that sounds otherwise, he was speaking of religious moralists who don’t link the ought to an is about desire—he was not saying you can’t get an ought from an is, rather, he was saying you can, and that it is therefore a failure of those moralists that they don’t. He then goes on to explain what they should be doing to correctly derive an ought from an is. And his explanation is basically the same one I’m giving now. Kant in fact acknowledged Hume was right, and then tried to argue against him anyway by admitting hypothetical imperatives exist as empirical facts (as oughts that derive from is’s), but that morals had to be a different kind of imperative, which Kant called the “categorical” imperative. Kant ended up twisting himself around into just asserting another hypothetical imperative anyway (no categorical imperatives exist that are true for anyone, as in ought to be obeyed by anyone, that aren’t hypothetical imperatives and therefore fully derived from an is: see TEC, ibid).
So since it all comes down to what an ideal agent would want (the only way imperatives can be true of an agent), moral science should be looking for what an ideal agent would want.
Another excellent reply! Thanks Richard!
1) Thanks for clearing up Hume’s view for me!
2) What you labeled “rife with fallacies and false or missing data” was admittedly a list of very basic shared recognitions, ones that I had hoped even a religious reader might agree played an obvious natural role (rather than supernatural) in the origin of what we consider to be “morality.”
3) Could it be that your philosopher’s/mathematician’s view of the “ideal” may have convinced you that mapping the moral landscape is as straightforward an enterprise as you present it to be? Especially since primatologists, psychologists, and sociologists have seen human behavior, both in groups and individually, veer off in more directions than a herd of cats? Let me rephrase… Can even a combination of the “laws of logic” and the “cognitive science of persuasion” lead to changes in people’s moral behavior? For instance, haven’t Christian apologists used logically presented arguments and cognitive science techniques to persuade people to become and remain Christians? (Not that Christians in the past studied cognitive science and employed it in a formal fashion, but Christianity did and does possess potent methods of influencing people to join and remain in the fold, utilizing basic human needs, fears, desires, offering assistance, music, repetition, rule-books, training, group-think, warm emotional connections linked to veneration, surprising anecdotes, etc.). Can a purist philosophical/mathematical perspective on “ethics” compete for human primate attention on an equal cognitive level with that offered by the world’s major religions?
The laws of logic have their limitations such that valid arguments do not necessarily prove that something truly exists since one also has to go out and see that for one’s self. With the right syllogism one can prove that pink unicorns always beat blue ones when it comes to goring the most Smurfs on Smurf goring day. So the laws of logic may provide a lower baseline in argumentation, but they don’t determine the coherency of one’s worldview as a whole. And multiple worldviews can be argued for in a coherent fashion via the addition of some nonfalsifiable generalizations in special cases when needed.
Even more perplexing, what is to keep someone from using both logic and the cognitive science of persuasion in a Machiavellian fashion and not grow more moral but more deceptive? I mean this seriously, because it appears there are circles of concern/value, from the circle that begins with the individual who is concerned about himself and his needs, desires, and fears, up through concern for family, lovers, friends, extended family, and culture. And there are circles of concern/value for the people/status/safety/growth/sustainability of one’s city, state, nation, the world. And economic circles of concern in the realm of one’s job and the company one works for and competing companies and government laws and regulations. And such circles can and do come into conflict, and the conflicts themselves keep changing, because life keeps changing over time and also as each individual ages. The moral behavior of human beings also appears to be a mixture of angel and demon, to use a common metaphor, or angel and ape, or Gandhi and Machiavelli, or tame and untamed animal. Human civilization itself might be just three missed meals away from barbarism (especially if electrical transformers around the world are blown by a solar flare, and afterwards all the refrigerated and frozen food is lost, the air-conditioning malfunctions, and water pumps cease to work).
Cool.
Theoretically, yes, an informed religious believer would also agree the common domain of moral assumptions, even in their own religion, is rife with fallacies and false information. Hopefully that inspires them to agree we should aim to root out (to find) and eliminate those fallacies or false beliefs, and see what moral conclusions follow then. Really, all moral debate ends up doing exactly that (the two sides argue over what is factually true, or whether their moral conclusions logically follow from it). Whether the debating parties are religious or not.
Could it be that your philosopher’s/mathematician’s view of the “ideal” may have convinced you that mapping the moral landscape seems as straightforward an enterprise as you present it to be?
I don’t at all imagine it’s straightforward. It will be one of the most complex scientific tasks ever undertaken. So I don’t know how to answer that question.
Can even a combination of the “laws of logic” and the “cognitive science of persuasion” lead to changes in people’s moral behavior?
Note, again, this conflates two different things: knowing what’s true; and knowing how to persuade people it’s true.
We know the earth is billions of years old, because of science. That doesn’t lead to changes in YEC beliefs.
Likewise, moral science will tell us what’s true about morality. That doesn’t mean people will then start believing true things.
The science of getting an irrational brain to align its beliefs with reality is a completely different science. One we already have going. And it’s a good example of how complex a matter moral science will be; because the science of persuasion is already anything but straightforward or simple, and it’s not even done.
(And I say that not to single anyone out; all brains are irrational, including yours and mine. The difference, insofar as there even is any, is only in how much we work to correct for that defect: some people, like you and I, do that better than most people; no one does it perfectly. Hence we are always correctable.)
And yes, someone can use science to persuade people to have false beliefs. That’s already true. And therefore what we already must constantly be on guard against. It’s not some new problem moral science would create.
I respectfully disagree. For example, what is the moral status of eating animals?
There are many individuals who are very well informed about the relevant facts, and yet reach opposite conclusions. We have every bit of reason to expect that some of these are really happier, in the short and long run, if they eat animals. And others are happier not eating animals, and would be happier still if no one did. There is no realistic hope to discover any crucial facts that will change this.
Neither can we get out of this situation by proclaiming that the question is not a moral one. Most philosophers believe it is, and if we redefine morality to exclude it we will simply have to invent something else that takes its place.
In my opinion, the moral imperative “you ought to do X” really means “if you don’t do X, me and my buddies will make you regret it”. Over time, as we have become more civilized, our moral systems have evolved and become better at catering for the interests of a greater part of the population in order to maximize the combined outcome for all the participants. The group of “buddies” has grown and people have become convinced to make minor immediate sacrifices in exchange for greater but less obvious advantages.
I respectfully disagree. For example, what is the moral status of eating animals?
That’s an empirical question. One science could answer. If we applied it to the task. (We certainly can’t know in advance that it can’t.) In the meantime, we have to do our best to answer it by reasoning without fallacy from true information, as empirically as we can.
There are many individuals who are very well informed about the relevant facts, and yet reach opposite conclusions.
Fundamentalist Christian apologists are superbly well informed, indeed often perfectly informed, yet still embrace a completely false worldview. It is not enough to have correct information. One must also reason from it without fallacy.
It only makes matters worse that most people also lack correct information. Most people are missing or disregarding a lot of information, or are embracing false information, even people who are really well informed and should know better.
The human brain sucks at guessing what’s true about the world. That’s why we invented science. Look at medicine, as naturally inferred from the human brain; and then medicine, as arrived at scientifically. Night and day. Yet surely moral facts are more important than even medical facts; so we should be applying the same fix to morality as we did to medicine.
We have every bit of reason to expect that some of these are really happier, in the short and long run, if they eat animals.
Maybe. Personally, I think even probably (given that the killing or eating is a different question from the treatment of the animals we kill and eat). But having “a” reason is not enough. Is it true that, after accounting for all information and eliminating all fallacious steps of reasoning, eating animals remains what an ideal agent would still want to do? That’s the question. We do our best to approximate the right answer to that question, and we are more likely to be right the more empirical and attentive to avoiding fallacies we are. But we can do a great deal better than that. Just as we did in every other domain of human knowledge a proper rigorous science could be applied to.
And it may even turn out to be that there is no moral fact of the matter. That whether or not you eat animals is simply an expression of an aesthetic preference. And then people mistake that for a moral intuition (as has been done countless times in history: an aesthetic feeling gets converted into a moral intuition; hence so much faulty moral reasoning has been guided by what “feels disgusting” to the moralizer rather than what actually does anyone any harm). Cognitive dissonance then drives the development of delusional belief systems to “protect” that conclusion from criticism (e.g. as when some vegans over-anthropomorphize animals to justify their strong feelings in the matter, and resist all scientific evidence that animals don’t think or feel that way).
That may actually be the case—i.e. there may be no moral fact of the matter even to discover. Per above, a moral science might just end up proving it’s all aesthetics. It can show that once you remove all fallacies and false beliefs and input all correct information, there is no way to get from the facts of reality to the conclusion that eating animals is immoral. That the only way to get there is by fallacious reasoning or factually inaccurate premises.
It could also turn out to be the other way around—a moral science might just end up proving that when you remove all fallacies and inaccurate beliefs, eating animals is not moral. And anyone who agrees they should do what an ideal version of themselves would do (a version that reasons non-fallaciously from only correct information), will then agree that that’s the case.
But getting there (whichever “there” it ends up being) requires a lot of empirical work. Which science is best at. Because that’s what we invented and designed it for.
In my opinion, the moral imperative “you ought to do X” really means “if you don’t do X, me and my buddies will make you regret it”.
We know that’s not true (as in: those two statements are not literally synonymous) because descriptive moral science has already refuted it. People are fully motivated to obey moral imperatives for many other reasons. For example, there are people who do X even when they know with certainty no one will ever punish them for not doing it. And in fact, we all feel more comfortable being around those kinds of people; and we feel very bad about ourselves if we ever truly believe we aren’t such a person—the exception being sociopaths, although they are insane (I discuss the medical science of sociopathy in this context in Sense and Goodness without God V.2.3.2, pp. 342-44).
The “you’ll get beat if you don’t” level of moral reasoning has been documented to be native in children. Adults (when emotionally mature) usually morally reason from internal premises instead. For them “I ought to do X” really means something closer to “I will think ill of myself if I don’t.” Children need to be threatened into being good. Adults are good because they just want to be. It makes them feel better about themselves, and about living generally. The scientific study of the psychology of this is pretty extensive. I discuss it some in my treatment of Divine Command Theory, regarding why Christian morality infantilizes moral reasoners by reducing their moral reasoning to that of a child.
There are many other motivations that drive moral reasoning besides those two (external and internal reward systems; Game Theory and risk management; etc.). So “you ought to do X” never reduces to any one single outcome consideration like whether someone will beat you; it always reduces to the general outcome consideration of: “will I be more satisfied with myself and with who I am or become and with the life I end up living, if I do X?” The role of getting punished by people plays into that; but isn’t decisive. It is neither necessary, nor even sufficient—e.g. Martin Luther King Jr. got beat to hell by “me and my buddies,” and we acknowledge he wasn’t the one acting immorally—to the contrary, he was acting more morally in taking the beating than those who sat quietly by and maintained their tacit acceptance of racism to avoid being punished for siding with “a negro.” Thus, one can be in a state whereby avoiding the beating will actually reduce the life satisfaction of an ideal agent (e.g. “I couldn’t live with myself knowing I was a moral coward the rest of my life,” an internal consequence that outweighs the external one); the exact opposite of your proposed model of moral semantics.
So the basis of moral reasoning is a lot more complex than you imply. And that we have already confirmed scientifically. So we’re actually quite ahead on that one.
“Per above, a moral science might just end up proving it’s all aesthetics.”
Or it may end up proving that *all* morality is ‘just’ aesthetics. And if we agree that aesthetics are not objective, that would mean there are no moral facts, either. For me, it is just like with gods and souls: many people claim that moral facts, gods and souls exist. Yet, there is nothing observable that requires their existence, that cannot be explained without them.
“People are fully motivated to obey moral imperatives for many other reasons.”
This, I think, relies on a fallacy of defining human self-interest far too narrowly. We are equipped with empathy as well as a social need for approval from our peers. As you observe, this is something that develops as we mature, even though children already have such instincts. We do not have to fear being beaten or punished in any physically tangible way; it is already a significant punishment to be met with disapproval from other people, and especially from those we like and admire. Even knowing that our friends would disapprove of an action, did they know of it, is sufficient to deter. There is nothing spooky about this; it is easily explained as a heuristic that helps us avoid trying to be too clever and ending up becoming social outcasts. Sometimes it is indeed advantageous in some sense to be able to disobey such instincts – thus creating an evolutionary niche for psychopaths, people who are much harder for us and our buddies to keep in line. I believe it is wishful thinking to classify psychopaths/sociopaths as generally being insane, given that they often do not fulfill the general criteria of insanity as something that hinders a person or causes that person suffering. In reality, we rightly fear psychopathy, and when we suspect it, we may employ means of punishment for unwanted behavior that we would normally consider overly drastic.
The result is that if we are not psychopaths, when our friends tell us that we ought to do X, meaning that they will make us regret it if we don’t, that affects us. Even if we are convinced that our friends would never find out, and even if we are correct in that conviction. And even if we disagree with our friends and believe them to be mistaken about the merits of X. When we break the moral code of the group we identify with, we become stressed and anxious. When we are convinced that our actions align with what our peers would approve of, we sleep well at night.
But nothing of this creates objective moral facts. The slave owner, convinced of white supremacy, will sleep well when he has done his duty and flogged the disobedient slave, just as his peers would expect him to do. Indeed, in that society, it may take a psychopath – or a child – to *not* punish a slave.
Objective facts are of course relevant for moral discussions. Racism can be objectively shown to create enmity and decrease human potential, incurring a massive drawback for society. But to reap the advantages of non-racism requires a level of civilization such that individuals instinctively trust that they are likely to benefit on a grand scale more than they will by the immediate advantage of being able to subjugate people lower in the racial hierarchy. And we are rightly grateful to individuals such as Dr. King who have led us to improvements in our moral system that have turned out to our benefit, even though, we reluctantly realize, we may not have understood it ahead of time.
But – this is true even if there are no moral facts. If morality is always a combination of at least one ‘aesthetic’ preference and zero or more objective facts, we can only ever hope to reduce moral disagreements to these basic preferences, not to prove moral facts unconditionally.
Or it may end up proving that *all* morality is ‘just’ aesthetics.
Actually, that’s impossible.
Moral facts are imperatives that supersede all imperatives. It is logically necessarily the case that there are true imperatives that supersede all imperatives for any agent with desires. Since an agent by definition is an entity with desires, it is logically necessarily the case that moral facts exist for every agent that exists.
So it can never be the case that there are only aesthetic conclusions about what to prefer.
This, I think, relies on a fallacy of defining human self-interest far too narrowly.
That’s not a fallacy. If true moral facts just are true imperatives that supersede all imperatives, and since such imperatives follow necessarily from greatest overriding desires, all desires that actually will (on implementation) satisfy the greatest desire generate true moral imperatives. It is necessarily the case. Thus, since satisfaction matters more than being beaten, avoiding being beaten can never, in itself, generate a true moral imperative (because there can always be something you want more).
There is no avoiding the logical inevitability of this.
The only question is empirically which desires do this. On which we already have a lot of relevant data.
I believe it is wishful thinking to classify psychopaths/sociopaths as generally being insane.
See my discussion and citation of the medical science in SAG.
Sociopaths are incurably irrational (they act self-defeatingly, and can even be brought to admit they do, and even that they desire changes in themselves that they cannot effect, and that their lives are miserable in consequence). That’s insanity by definition.
I highly recommend studying the actual disease, not what Hollywood depicts. For example, a fundamental symptom of sociopathy is a severely diminished fear response, which is highly maladaptive. The mortality rate for sociopathy is actually among the highest of all mental disorders; and compared to normals it is extremely high, with one of the highest rates of death before age 40.
Likewise, you can’t armchair the human psychology of normals, either. We have a ton of science on this. You need to study the science before pronouncing why people actually are motivated to do what they do. I’ve read the science. And my work is based on it. Entry sources to the science are cited in my TEC article.
The slave owner, convinced of white supremacy, will sleep well when he has done his duty and flogged the disobedient slave, just as his peers would expect him to do. Indeed, in that society, it may take a psychopath – or a child – to *not* punish a slave.
That’s false. (We have lots of documentation of non-sociopaths among the empowered class resisting the slave system, overtly and covertly.)
But more to the point:
Notice “convinced of white supremacy” is a false belief. True moral conclusions cannot follow from false beliefs. You are confusing an explanation for why people morally err, with how we determine what is morally true. That’s like citing a flat earther who fails to be convinced by any evidence the earth is round, as evidence the earth is flat. That’s not how it works.
And indeed, there is a lot more to the scenario you describe than you are aware. Read my slave owner example in TEC. There is a reason that’s there.
And we are rightly grateful to individuals such as dr. King who have led us to improvements in our moral system that have turned out to our benefit, even though, we reluctantly realize, we may not have understood it ahead of time.
Notice this statement is just restating everything I have just said: you admit moral systems can be improved (and therefore there must be objective facts about which is better, otherwise there is no true sense in which one can be better than another: this is Harris’s moral landscape point in a nutshell), and that the metric is how well that system performs in the interests of what the moral agent wants (“turned out to our benefit”). You are also acknowledging we can be in error about what is morally true (“we may not have understood it ahead of time”) and that that error consists in not realizing the system is better for us (thus, not agreeing with an ideal version of ourselves, but acting in ignorance or by fallacy instead). That’s the whole shebang right there.
“[..]it is logically necessarily the case that moral facts exist for every agent that exists.”
If the “moral facts” would have to be true only for a certain individual, and could be different for each individual, most philosophers would probably deny that they would be “moral” facts at all. While it is a truism that there is an optimal course of action for every agent, I am not optimistic about using that fact to derive some objective moral truth. Perhaps, someone has an abnormal desire to eat human flesh, a desire so strong that he is rationally justified in risking a life sentence to satisfy that desire. Even the potential existence of one such person is enough to jeopardize a concept of “moral truth” built on optimal individual choice.
“The mortality rate for sociopathy is actually among the highest of all mental disorders[..]”
The mortality rate for exceptionally tall people is also extreme; it does not follow from this that above-average height is generally a disadvantage. Obviously, someone with psychopathic traits so far out on the bell curve that they are immediately recognized as criminal will not fare well. Researchers who have considered the broader aspects of psychopathy have theorized it as a parasitic behavior to game the surrounding moral system. It should also be noted that as an evolutionary strategy it must be considered as high risk/high reward.
“We have lots of documentation of non-sociopaths among the empowered class resisting the slave system[..]”
Of course. But we also have lots of documentation of non-sociopaths engaging in clearly inhumane and cruel behavior, evidently caused by the desire to follow the prevailing morality of the surrounding society.
“Notice ‘convinced of white supremacy’ is a false belief.”
That is the entire point of the example. The influence of morality on an individual is not in any way conditioned on that morality being in any sense objectively true. Even a morality that no one believes to be objectively true has these effects. Therefore, the existence of such an effect cannot be used to demonstrate that there is such a thing as objective moral truth.
If the “moral facts” would have to be true only for a certain individual, and could be different for each individual, most philosophers would probably deny that they would be “moral” facts at all.
To the contrary. It’s called moral relativism. A well established theory in moral philosophy. (As is ethical egoism.)
Facts are facts. Anytime it is factually true you ought to do something, that’s a fact. And if it’s true you ought most to do a thing (above all other things), that is by definition the morality that is true for you. (Your opinion cannot trump it. You can have a false belief about what you ought to do.)
That said, note, it’s unlikely there will be a different morality for every individual. That’s simply a logical possibility. It’s even unlikely that there will be a different morality for different human sub-groups (as I explain over several pages in TEC). But “unlikely” is not “impossible.” Thus, this is still an empirical question.
Perhaps, someone has an abnormal desire to eat human flesh, a desire so strong that he is rationally justified in risking a life sentence to satisfy that desire.
Logical possibility doesn’t get us anywhere. If it turns out to be true, it’s true. What you don’t like has nothing to do with it. I have a whole section in TEC titled “Cave Man Say Science Scary” that explains why this cannot be an objection. For all you know, cannibalism is moral. You don’t know otherwise, until you check.
You can’t decide from the armchair what’s true about the world. You have to go out and actually look and find out what’s true about the world.
This is as true of morality as of any other fact of life.
You need to read the chapter in TEC. Peer review already covered objections like yours. They are already answered there. With answers that passed the peer review of multiple professors of philosophy.
(But just FYI, even apart from the semantics, almost certainly no such person could exist. And if they did, we’d be morally obligated to kill or imprison them. This is just another iteration of the sociopath problem.)
The mortality rate for exceptionally tall people is also extreme…
You argued sociopathy was adaptive. I showed you the evidence is to the contrary. Yes, extreme height is maladaptive. So is sociopathy. Both are classified as a disorder. For that very reason.
But what classifies sociopathy as a mental illness is that it is maladaptive even for the individual’s reasoning (they are not happy, act self-defeatingly, and cannot change their behavior even after admitting both facts: that is what defines a mental illness). Height is not a mental property and therefore cannot be a mental illness.
(And BTW, yes, early death is by definition maladaptive. It’s a standard function in differential reproductive success calculations in evolution theory.)
Researchers who have considered the broader aspects of psychopathy have theorized it as a parasitic behavior to game the surrounding moral system. It should also be noted that as an evolutionary strategy it must be considered as high risk/high reward.
Risk management is a function of moral reasoning. If doing X gains the same satisfaction output as Y but at a higher risk of death, you ought to do Y.
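As a bare-bones illustration of that point (the numbers are assumed purely for the example, not taken from anywhere), the comparison is just an expected-value calculation:

```python
# If X and Y pay off identically when you survive, the option with the higher
# risk of death has the lower expected satisfaction, so you ought to do Y.
def expected_satisfaction(satisfaction_if_ok: float, p_death: float,
                          satisfaction_if_dead: float = 0.0) -> float:
    """Weight the payoff by the probability of surviving to enjoy it."""
    return (1 - p_death) * satisfaction_if_ok + p_death * satisfaction_if_dead

option_x = expected_satisfaction(satisfaction_if_ok=10.0, p_death=0.20)  # X: riskier
option_y = expected_satisfaction(satisfaction_if_ok=10.0, p_death=0.01)  # Y: safer

assert option_y > option_x  # same payoff if you survive, lower risk of death -> do Y
print(f"X: {option_x:.2f}, Y: {option_y:.2f} -> do Y")
```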
Of course. But we also have lots of documentation of non-sociopaths engaging in clearly inhumane and cruel behavior, evidently caused by the desire to follow the prevailing morality of the surrounding society.
Not relevant to your point.
Harris and I fully accept the existence of irrational and ignorant people, who thereby follow false moral systems.
The influence of morality on an individual is not in any way conditioned on that morality being in any sense objectively true.
That’s true of all knowledge. Even beliefs about the age and shape of the earth. And that being the case, has nothing to do with what is nevertheless true.
Even a morality that no one believes to be objectively true has these effects. Therefore, the existence of such an effect cannot be used to demonstrate that there is such a thing as objective moral truth.
I don’t know what you are talking about with this statement. You seem to have lost track of the argument. You said X proves Y. I showed X does not prove Y. That in no way means I said either X or ~X proves Z.
I forgot to answer this:
“You are also acknowledging we can be in error about what is morally true[..]”
No, because I do not believe that it is possible to link individual “ideal” preferences to objective moral truth. Frequently, we will find that enough people prefer – rationally or not – a certain morality that they are able to uphold that morality in a society. In the case of racism, I believe it may be possible to show that a large majority would ideally prefer a society without racism. But it would still not be possible to show that the entire minority would be objectively wrong or irrational. That depends on their preferences. The majority would – I hope – be strong enough to be able to keep the racist minority in check. And that is, I believe, the best we can ever hope for.
No, because I do not believe that it is possible to link individual “ideal” preferences to objective moral truth.
That doesn’t make sense, though. If you don’t think you ought most to do what you would do when not fallacious or misinformed, then you are saying you believe you should act illogically and uninformedly. And I doubt that’s what you really believe. And if you don’t think what you ought most to do when not fallacious or misinformed is moral, then you are not using “moral” in any sense capable of being relevantly true.
Remember, we are trying to get at what is true. And that limits your options. We can invent countless moral systems. And then fail to show anyone should follow them. We cannot then claim any of them are true.
The only way to get a moral system that is relevantly true, is to get one that you indeed will follow, when you aren’t reasoning fallaciously or misinformedly. There is no other way to get anything to be true about morality.
Can you think of any other truth conditions by which you could say a moral system is true, and should (in actual fact) be obeyed?
(This is a trick question. Because it’s logically impossible to answer. But maybe you have to try, in order to discover why.)
That depends on their preferences.
The effect of racism on the social system we all have to live in is not a function of preferences. You can’t “desire” the world be a certain way, and expect it to be that way. If racism increases violence, then that increases your risk of being a victim or increasing costs on you to police that violence (for example; I’m not saying this is the only effect of racism on everyone, I’m just picking one). What you “prefer” can have no effect on that fact of the world. If it’s a fact, it’s a fact.
Thus the only way “preferences” can matter here is if you are seriously arguing that there is someone who, even when completely rational (reasoning without a single fallacy) and fully informed (knowing all true things and being ignorant of nothing and having no false beliefs about anything), prefers a society that is more expensive and dangerous for them.
I would propose that empirically you will never find such a person.
That’s our hypothesis.
And even if you do find such a person, we will be morally obligated to restrain them to save the rest of us from them. (Which fact being a fact makes it hard to continue to think they’d still agree they should persist in that preference—because, as stipulated, they know this will be the outcome.)
Science also can’t answer the question, “are consequences actually relevant to morality?” Which is the real reason Harris’ moral science is contentious. There are good arguments entailing that morality is deontological rather than consequentialist.
Deontology is consequentialist. See my discussion of Kant (linked in the article).
You can only make deontology non consequentialist, by making it not true. And false moralities are irrelevant.
As I posted in your other thread with a reference to Brown’s paper, there exist valid deontological theories that resist consequentialization.
If you deny Brown’s formalization of consequentialism, then consequentialism becomes vacuous, and your reduction is meaningless. This leaves the various disparate moral approaches intact as the only valid avenues to pursue moral knowledge.
If you accept Brown’s formalization, then some moral theories are simply not subsumed by consequentialism, and your reduction also fails.
Finally, your claim that moral theories that resist consequentialization in this manner are simply false is dubious. Firstly, I don’t believe you’ve actually read this paper when you wrote these posts, so until you do, I won’t bother reading further.
The only way out would be for you to claim that Brown’s formalization of consequentialism is not faithful, but then I expect to first read an equally rigorous formalization of consequentialism that you agree with, before I spend time reading your reductions.
None of that is true.
If an ideal agent chooses what to do according to what the outcomes will be, that’s consequentialism. And yet no moral theory can be true that is not what an ideal agent would choose. Since all choice is driven by a desire for an outcome, all outcome-driven decisions are consequentialist.
Every other definition of consequentialism is a sham, the tool of a semantic shell game to try and hide consequentialism.
No, inconsistent moral reasoning is not ethics. Consequentialism must be formalizable and consistent if you want it to be an ethics, otherwise it’s vacuous as I said.
“Outcome-driven decision” is not a meaningful formalization.
Inconsistent? What’s inconsistent about moral facts being true because of the desired outcomes?
You aren’t making any sense.
If what is morally true is true because of the consequences of the agent’s actions, that’s consequentialism. No sham formalization can get you out of that fact.
Richard,
I’m having a problem understanding what you mean by science-based morality. Are you referring to the descriptive aspects, the normative aspects, the prescriptive aspects, one or more of these? It seems the descriptive approach to morality is already well underway with works by Jonathan Baron, Mark Johnson, Jonathan Haidt, and many others. This is based in cognitive science and fully empirical. There is also a lot of work already being done on cross-cultural studies of personality, culture, and values, which are all directly relevant to a science of morality. So it’s hard to know exactly what you’re referring to.
I think your reduction of the three moral frameworks (virtue, deontological, consequential) to consequentialism is broadly correct. But I also think that rationality, which underlies reasoning and logic of all types, including moral reasoning, can be reduced from epistemic and instrumental to just instrumental rationality. When you combine that with consequentialist ethics as the essential form of moral framework, we can see that all we’re really talking about is goal-seeking thinking, communicating, and action. The descriptive approach can tell us how people actually value and moralize, but it can’t tell us what to value. The prescriptive aspect (here’s how you should value and moralize, given X, Y, Z, …) depends on the normative aspect.
Who can say whose norms are correct, without looking at the broader social, political, economic, etc., aspects of decisions? To that extent, we can talk about “moral/ethical engineering” or technique, but that still doesn’t adjudicate what should be valued. This must be negotiated, weighed, and judged by moral agents working individually and collectively. They can decide together how to adjudicate their differences at the level of goals. This necessarily requires the correct scientific-empirical understanding of potential consequences (which are only ever uncertain, i.e., probabilistic) through the description of actual moralizing, thinking, emoting, feeling, and valuing/evaluating. As soon as one group tries to impose their views of what is the “ideal” moral agent and values, then this will create conflict. A “science” of morals can only help if it recognizes its inherent limits and remits decisions, choices, and actions to more humanistic spheres of endeavour (politics, philosophy, economics, etc.).
I would take religious toleration as it has evolved since the early Enlightenment as the model for how these kinds of ethical decisions, reasoning, and valuation should occur. At its best, religious toleration takes religious beliefs out of the public sphere and removes domination, constraint, and violence from the equation. Historically based common sense, experience, and political lessons learned taught communities that trying to impose one’s religion and its attendant morality and values on people who don’t want them leads to civil and international war, suffering, etc. To the extent that the community (however defined) wishes to avoid that, its members agree to disagree and put it out of the common sphere of action, leaving it for the personal sphere. I’m not saying it’s perfect, but that is one particular model to follow, where humanistic and scientific discovery and description allow the community to reach a reasonable solution that can be revised, improved, and adjusted over time as knowledge and values evolve.
If that’s what you mean by a “science of morals,” then I agree with your goal. But if you mean to determine “scientifically” what people should believe, value, and act on, then that is just as despotic as religiously or ideologically based moralities that are imposed without regard for personal wants, needs, and wishes.
Richard Martin
I’m having a problem understanding what you mean by science-based morality. Are you referring to the descriptive aspects, the normative aspects, the prescriptive aspects, one or more of these?
Prescriptive.
We already have a descriptive moral science. As you rightly note.
I also think that rationality, which underlies reasoning and logic of all types, including moral reasoning, can be reduced from epistemic and instrumental to just instrumental rationality.
Correct. See my discussion of that point in Epistemological End Game.
When you combine that with consequentialist ethics as the essential form of moral framework, we can see that all we’re really talking about is goal-seeking thinking, communicating, and action. The descriptive approach can tell us how people actually value and moralize, but it can’t tell us what to value.
Those are the same thing. Except at the point of core value: all other values are derivative.
This is what Aristotle demonstrated 2300 years ago: all desires are instrumental, until you get to a desire that is a desire for a thing in itself, and not for some other reason. He argued that that was eudaimonia. He was sort of vaguely on the right track. I argue it’s some form of satisfaction state (self-satisfaction and satisfaction with life). But whatever it is, what it is, is an empirical question. Science has to discover it. And it could differ from agent to agent (and that’s a legitimate finding if it’s the case: objective moral facts then follow agent-by-agent), though that’s highly unlikely to be the case (see my empirical discussion of why in TEC, pp. 351-56).
Thus, what a prescriptive moral science would do is discover what it is that people want for itself (and not for some other reason), and then everything that empirically follows (what derivative values then best serve that end; what actions best serve that end; etc.). It would not try to tell people their core value “should be different,” but rather what their core value actually is (because people are easily and frequently confused or mistaken about that; hence all the self-defeating behaviors that fill the offices of psychotherapists and our prisons). It would then tell them if any other values they have conflict with it, etc. These would all be factually and empirically true statements. And anyone who did not align themselves with them, would be like the young earth creationist who refuses to believe the earth is billions of years old even after seeing all the evidence that it is.
Who can say whose norms are correct without looking at the broader social, political, economic, etc., aspects of decisions?
This is fully factored into the Harris thesis. And mine. Moral systems are properties of social systems (which are also, by definition, economic systems). Thus, this is all relevant data.
Likewise the facts of interaction (what we should pursue affects others, which in turn affects us, and therefore re-affects what we should pursue: Game Theory already demonstrated this).
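(For anyone who wants a concrete toy version of that feedback loop, here is a minimal sketch in Python. It is not from Harris or from TEC; the strategies and the standard textbook payoff numbers are illustrative assumptions only. It just shows how each agent’s result depends on how the other responds to what it pursues.)

# Illustrative toy model only (not from the article or TEC):
# an iterated prisoner's dilemma with standard textbook payoffs,
# showing how each agent's outcome depends on how the other
# reacts to what it pursues.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    # Cooperate on the first round, then copy the opponent's last move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    # Defect no matter what the opponent does.
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, tit_for_tat))      # (300, 300)

Two unconditional defectors each end up with 100 points over 100 rounds; two reciprocators each end up with 300. The only point being illustrated is that the other agent’s reaction to what I pursue changes what best serves my own ends, which is the feedback game theory formalizes.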
As soon as one group tries to impose their views of what is the “ideal” moral agent and values, then this will create conflict.
To the contrary, conflict is created by not imposing a correct view of an ideal agent. Just as conflict is created by false beliefs in every other respect.
The ideal agent is simply and only one who is not using logical fallacies or false or missing beliefs to reach conclusions.
You have to accept that that is an ideal agent. Anyone who disagrees is dangerous, and someone with whom you can never reach any coherent rapport. If someone actually is saying we should do what is irrational (illogical, fallacious) and that we should do what is ignorant and misinformed, you cannot accept that as an equal or sound basis for negotiating anything with them. You have no choice but to tell them they are flat wrong. Just as the Young Earth Creationist is about the age of the earth. And in fact, in both cases, it would be as much an objective fact of the world that they are wrong (as in: their saying we should do what even they admit is illogical or ignorant or misinformed actually contradicts their own greatest desires; they are therefore not only acting against our interests, they are acting against their own interests; and that they are doing so is an objective scientific fact about them, not an opinion).
A “science” of morals can only help if it recognizes its inherent limits and remits decisions, choices, and actions to more humanistic spheres of endeavour (politics, philosophy, economics, etc.).
Indeed. That’s always true. And that’s exactly the point I make in my critique of Shermer’s attempt at promoting a Harris-style moral science (see link in the article).
At its best, religious toleration takes religious beliefs out of the public sphere and removes domination, constraint, and violence from the equation.
Indeed. That’s called maintaining the civil society. It’s the purpose of political systems. I explain this fully in my book Sense and Goodness without God (VII, pp. 367-407). That certain political systems are morally better because they more safely accommodate and treat irrationality and ignorance is indeed a fully accepted feature of Harris’s moral landscape theory. And likewise my own. This also produces the distinction between politics and morality: politics should maximize the ability of persons to live by their own moral conscience, and to safely promote others doing so, precisely because populations do not consist of ideal agents. But that doesn’t change the fact that there will be objective truths about what moral system an ideal agent would live by. And politically we need the right to demonstrate and say so. Only that makes moral progress peacefully possible.
But if you mean to determine “scientifically” what people should believe, value, and act on, then that is just as despotic as religiously or ideologically based moralities that are imposed without regard for personal wants, needs, and wishes.
It’s no more despotic than determining “scientifically” what people should believe about the age of the earth, the operation of economies, or their own brains.
“Imposing” is not a moral question, but a political one (I assume you don’t mean by that word merely making assertions and telling people things; otherwise all science does is impose on people what they should believe, about everything science securely determines, like that the earth is round; that is, in fact, the purpose of science, and the one reason it’s valuable and should be pursued). We can accept the company of delusional fools, who believe irrational and false things, if they leave us (and others) alone, if they don’t threaten to disrupt a civil society. And in fact, we empirically know that attempting to force delusional fools to believe certain things or do certain things is of extremely limited utility and in fact very easily produces a negative utility. That’s why we have the first amendment: it’s the end product of learning that lesson empirically (and hence the hard way) over the previous hundreds of years. But it does not change the fact that religious people are delusional. It does not change the fact that their beliefs are false. And science is fully within its rights to say so. As is prescientific empiricism, on all facts science hasn’t yet settled directly (e.g. that souls or gods exist).
What Harris and I are proposing is not a new political regime. But simply to know, and thus say, what is scientifically true.
Just as with the age of the earth, so in moral facts.
Richard wrote: “What a prescriptive moral science would do is discover what it is that people want for itself (and not for some other reason), and then everything that empirically follows (what derivative values then best serve that end; what actions best serve that end; etc.).”
Is it that simple? Maslow listed a hierarchy of basic things we “want” or need for themselves, starting at the bottom with food and shelter; higher up you find companionship, love, education, etc. But sticking to a strict hierarchy can also lead to boredom and repetition that people will eventually rebel against. (For instance, Erasmus loved learning so much that when he had some money he bought books, and only if he had a little money left over would he buy food. I think Maslow himself or his critics mentioned exceptions to his hierarchy.)
Mary Midgley argued that humans have multiple and sometimes conflicting “wants.”
And don’t our “wants” also change depending on our circumstances, or even our age?
And don’t our “wants” also include multiple widening circles of concern (as I mentioned in another comment), and aren’t such widening circles of concern sometimes in conflict with one another?
You mentioned exceptions or outliers when it came to developing a prescriptive moral science, i.e., socio/psychopaths. But there are also a remarkably wide range of people with obsessive addictive “wants” who are not merely psycho/sociopaths. I am speaking of people who grow addicted to a wide range of chemicals, behaviors, practices, games, and beliefs, such that they cannot get beyond the heavy imprinting, or the heavy chemical addiction to literal drugs or to drugs naturally produced by their brains, whenever they see, say, do, or touch certain things/people/places, or feel something inside concerning them. Whatever makes the neurotransmitters light up, as they say. To mention just one instance, tell a person like C. S. Lewis that he doesn’t really want God for God himself/itself, or truth for truth itself, and he will not agree with you.
Maslow listed…
That was not based on any scientific research. It’s about as accurate as anything Aristotle said.
We need to do real science on this.
(Also, to be fair to Maslow and his (unproven) hypothesis, he wasn’t trying to say our core value was to breathe and eat; he fully acknowledged those were derivative values, and thus instrumental. They take priority only in an instrumental way, e.g. if you want to enjoy loving someone, you have to eat and breathe, so you eat and breathe in order to love someone—this actually puts love on top of the hierarchy of values, not the bottom. That’s closer to exactly what Aristotle argued.)
Mary Midgley argued that humans have multiple and sometimes conflicting “wants.”
That’s either because they do not have correct information or reasoning—in which case idealizing their reasoning and information would resolve the conflict, precisely the function of a Harris-style moral science—or because there is no moral fact of the matter as to which to prefer (I discuss exactly that condition in TEC, p. 425 n. 33)—which is also something a Harris-style moral science could discover. In other words, that’s exactly what it would be for.
And don’t our “wants” also change depending on our circumstances, or even our age?
Fully accounted for. Moral facts, like all imperatives, are situational facts. That’s true in all moral systems that have any prospect of being true.
And don’t our “wants” also include multiple widening circles of concern (as I mentioned in another comment), and aren’t such widening circles of concern sometimes in conflict with one another?
That’s a theory. I think it’s a good theory (i.e. that a fully developed moral science would empirically verify it). And as such, it would be fully a component of any developed moral science.
(Note also that I mean here prescriptively, i.e. that in fact we ought to have that concentric system of concern distribution; that we do have that is already a scientific fact—but that doesn’t resolve the question of whether we should have that, i.e. whether that’s a moral error.)
You mentioned exceptions or outliers when it came to developing a prescriptive moral science, i.e., socio/psychopaths. But there are also a remarkably wide range of people with obsessive addictive “wants” who are not merely psycho/sociopaths.
They are either insane (i.e. their wants are self-defeating and thus making their life worse), or this is no different than “I like cheese” not being the same thing as “I ought to eat.” The latter is an imperative. But it can be fulfilled in different ways (I am allergic to cheese, so “I ought to eat cheese” is false for me; but “I ought to eat” is true for you and me). Similarly, “you ought to provide for yourself” is true for everyone (outliers aside), yet still does not entail everyone should have exactly the same job. See my discussion of exactly this distinction in TEC, pp. 354-56.
To mention just one instance, tell a person like C. S. Lewis that he doesn’t really want God for God himself/itself, or truth for truth itself, and he will not agree with you.
And I predict he will be wrong (about the God thing anyway). As objectively, empirically wrong as thinking the earth is only six thousand years old. That he is ignorant of the truth, and refuses to believe it (just like a young earth creationist does), does not make it stop being true.
Many a Christian insists they’d be miserable without God. Then they become an atheist and realize that belief was false. Often, it ends up being the opposite of true (they end up happier as an atheist). Lewis went the other way, but he was never really an intellectual atheist (he wrote not a single thing ever in defense of atheism; it was simply something he inherited via apatheism and never thought correctly about). He simply failed to find a livable atheism. Which does not mean there isn’t one; it simply means he did not search for it properly.
Likewise, many a Christian tells themselves (and everyone) that they want “God for Himself” and not for something else; yet everything they then say makes clear that’s not even true, that God serves instrumental needs for them, and those other needs are the only reason they cling to God. This is as true of the delusion of C.S. Lewis as of any other Christian or Muslim or supernaturalist of any kind.
Thanks for that response, Richard, as well as for the detailed responses to other questions and comments.
I’ve reread your chapter in TEC very closely, including the appendix. I’m trying to really grok what you’re arguing, so I have a few questions.
Arg 1: By definition, a moral system is that which supersedes all other systems of imperatives. Is that understanding correct?
Arg 2: What is the meaning of want(sub)p? I’m assuming it means want as in preferring?
Arg 4:
By U = approximately a moral system, do you mean to imply that there are likely to be individuals for whom it is not operative?
CH and VNA are very similar. I’m not getting why you make the distinction?
Between the definitions of VNB and VNA, why do you switch from “human species” to “human race”?
M(sub)VNA and M(sub)H are identical save for “human race” in the former and “human species” in the latter. What is that about?
I don’t understand the point of 4.13 and 4.14, specifically, where you say that if D obtains, then if BD, then there is U. In English, to me this says that even if L’s fundamental biology differs from the rest of humans, there is still an approximate moral system. Is that correct?
Thanks,
Richard Martin
Arg 1: By definition, a moral system is that which supersedes all other systems of imperatives. Is that understanding correct?
Yes.
As in, that’s the only moral system you (or anyone) have sufficient reason to obey. And thus, the only moral system worth talking about.
Arg 2: What is the meaning of want(sub)p? I’m assuming it means want as in preferring?
Per Premise 2.1.
Arg 4: By U = approximately a moral system, do you mean to imply that there are likely to be individuals for whom it is not operative?
That’s not likely; but logically possible. See section on psychopathic aliens and AI.
CH and VNA are very similar. I’m not getting why you make the distinction?
“Very nearly every” is different from “very nearly any”; and indicative “wants” differs from subjunctive “would want.” We need to infer the indicative from the subjunctive. VNA is also conclusory (what x is) while CH is situational (x is y in situation z). VNA thus derives from an answer to CH, but is not synonymous with it.
Between the definitions of VNB and VNA, why do you switch from “human species” to “human race”?
No logical reason. They are synonyms. Just didn’t catch the revision at all stages.
M(sub)VNA and M(sub)H are identical save for “human race” in the former and “human species” in the latter. What is that about?
One is about “every,” the other is about “any” (i.e. they are not identical save for the race/species; and it’s interesting that you didn’t notice that).
I don’t understand the point of 4.13 and 4.14, specifically, where you say that if D obtains, then if BD, then there is U. In English, to me this says that even if L’s fundamental biology differs from the rest of humans, there is still an approximate moral system. Is that correct?
Yes. More specifically, I am proving that a U exists even if U differs by L-group, i.e. the condition whereby several moral systems are literally true, but differ by group (TEC, pp. 347-51); and U exists also if U does not differ but is truly universal, i.e. the condition whereby only one moral system is true for all L (TEC, pp. 351-54).
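(Going only by your own paraphrase of 4.13 and 4.14, the logical shape being confirmed is a nested conditional, roughly $D \rightarrow (BD \rightarrow U)$: if D obtains, then even given BD, some U still follows. That is just shorthand for this comment, not the notation as laid out in TEC.)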
In other words, relativism is irrelevant to whether true moral facts exist. Relativism can be true and true moral facts exist (they are then just relative to the agent as much as the agent’s velocity is).
Though I believe that won’t be the case for humans (per TEC, pp. 351-54), it is logically possible (and will likely be the case for aliens or AI: TEC, pp. 354-56).
Typo: “Where they founder is on the notion that science can answer..” ==> flounder
Oh no. I meant founder: as a ship, to strike bottom, sink, fail.
This is coming a bit out of left field for the discussion at hand and may sound ridiculous to a philosopher, but for the question of how science may be able to approach this, it might be well worth your time to look into various fields of economics. Economics deals with questions very similar in nature and sometimes even with moral questions themselves. What do people want? How do I deal with complex systems? How do I optimize things? How can I identify variables and test for them? How can I maximize something for the greatest number of people? How do I make decisions in situations of uncertainty? The image of a moral landscape is something that looks familiar to economists, or is at least easy to understand as a concept. However, economists will probably tell you that to attempt this is futile from the start, because there are no clear-cut answers to be found. But I might be wrong about that. I would suggest trying to talk to someone like Russ Roberts (Econtalk), who is interested in those kinds of questions himself.
Yep.
Sociology and psychology and political science have all been doing things like that, too.
But they haven’t developed much in the way of methods yet, for finding out things like what an ideal agent would want once they are cured of irrationality and ignorance. For instance, an economist doesn’t need to know that; they only need to know what people actually want, even if it’s totally irrational and ill-informed.
Thus, it’s a descriptive science (it can make predictions about systems) but it’s not a prescriptive science (it doesn’t even ask what people should want).
The prescriptive science would be more difficult, and requires somewhat different methods. But it certainly can benefit from interdisciplinary knowledge (methodological and factual) already gained in these other sciences.
Richard Carrier wrote:
“It’s called moral relativism.”
Aha! That is why we have in some ways been talking past each other. That is, of course, as you know, a path that most philosophers have rejected as hopeless (which is not to say you cannot try), but that is why I did not expect you to take that position.
“And if [cannibals existed], we’d be morally obligated to kill or imprison them.”
That is a possible conclusion if you can find objective moral truths, but you cannot use that conclusion to help you prove those truths.
It seems to me that your method is something like this:
1) Find out what people really “should want”, given perfect knowledge and reasoning.
2) Hypothesize that everyone or nearly everyone would want the same things.
3) But if they don’t, declare them to be insane if at all possible, so that you can ignore them.
4) And if that is not possible (eating animals or not?), declare the issue to not be a moral issue.
I hypothesize that everyone will want different things, even given perfect knowledge and reasoning, and that even though these differences will often be subtle, there will be no way to find a Pareto-efficient morality that can cover even a majority.
But I also see another problem, which is that I’m not sure how your hypothesis could be falsified. If we did our best to find out people’s “true” desires, but we still find that they diverge, we can always resort to the possibility that if we had just done a better job at finding the relevant knowledge or reasoning, we would get a convergent morality.
A third problem is that even examining this seems to be practically impossible without introducing certain moral assumptions into the process. That could still lead to useful results, but then we no longer have a system based on objective moral truth, but rather on some moral axioms (which is a common and useful approach).
“You argued sociopathy was adaptive.”
I did not; I argued it was not *necessarily* maladaptive. Sociopathy/psychopathy are not classified as mental illnesses; they are colloquial terms used to describe certain character traits (lack of remorse, lack of empathy, etc.), which in extreme cases are associated with diagnosed mental illnesses. However, this is not a fundamental problem for a system based on moral relativism (although that brings on other, probably fatal problems).
“Risk management is a function of moral reasoning.”
It is certainly not only a function of reasoning, but also of temperament. A moral system that assumes that we can be or would want to be machines would be useless, because we are not and never could be.
“The only way to get a moral system that is relevantly true, is to get one that you indeed will follow, when you aren’t reasoning fallaciously or misinformedly. There is no other way to get anything to be true about morality.
Can you think of any other truth conditions by which you could say a moral system is true, and should (in actual fact) be obeyed?”
No, and I do not believe there is such a thing as objective moral truth. Broadly, relativist morality is not morality at all in any meaningful sense; it is not possible to find a universal morality based on human traits that would also be objective; and there is no possible method to examine morality that does not involve humans.
Rather than objective truth, I believe we can only strive to find something as close as possible to a consensus. It cannot ever be objectively true or final, but if the consensus-making process is based on openness of discussion and a spirit of free inquiry, we can hopefully avoid it becoming a “might makes right” or “tyranny of the majority”. Hopefully as in what I personally prefer, and what I am hopeful can be widely agreed on.
“If racism increases violence[..]”
Then that is a drawback for anyone who desires decreased violence, but for some individuals that drawback could (and, per my hypothesis, probably would) be overcome by other advantages.
“And if [cannibals existed], we’d be morally obligated to kill or imprison them.” That is a possible conclusion if you can find objective moral truths, but you cannot use that conclusion to help you prove those truths.
Right. That is a proposition to be proved. It follows from what we want (not to be eaten or have people we care about eaten) and what we have to do to get what we want (police cannibals).
It seems to me that your method is something like this:
1) Find out what people really “should want”, given perfect knowledge and reasoning.
More exactly, what they will want when rational and informed.
In other words, you can avoid the circularity of normative language (“should”). The normative (“should”) is simply a restatement of “what you will do when rational and informed.”
The meta-question exists as to why you “should” do now what you will do when rational and informed (and thus of what you will do when irrational and uninformed but at least have true beliefs about what you would do if you weren’t), but that resolves the same way (hence note 36, pp. 426-27 in TEC). We simply define as insane people who prefer to be irrational and uninformed (and deal with them accordingly). And if we aren’t them, what they would want is irrelevant to what we should do; what we should do derives from what we want. And if it’s not to act irrationally and uninformedly, it’s to act rationally and informedly. QED.
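(Put schematically, in shorthand of my own rather than the notation in TEC: for an agent $S$ and an action $\phi$,

$$S \text{ should do } \phi \;\leftrightarrow\; S \text{ would do } \phi \text{ if } S \text{ were relevantly rational and informed}.$$

The right-hand side is an empirical fact about $S$, which is why no circular appeal to a prior “should” is needed.)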
2) Hypothesize that everyone or nearly everyone would want the same things.
Not just that. Any more than in any other science. A mere hypothesis is just a hypothesis. It needs to be tested to become a scientifically confirmed theory.
And this hypothesis is testable; and testing it will be a component of any fully developed moral science. As explained in TEC, pp. 351-56.
3) But if they don’t, declare them to be insane if at all possible, so that you can ignore them.
No. Moral science could find a coherent system of moral relativism true (as laid out in TEC, pp. 347-51). So the options are not “universal morality” or “insanity.” Empirically it may be, but that would have to be discovered. It can’t be known in advance (for humans anyway).
Insanity only gets declared when someone is implacably preferring to be irrational and misinformed, even to the danger of themselves and others around them. And that’s not me saying this. That’s the entire definition of insanity in the diagnostic standards of psychological science.
4) And if that is not possible (eating animals or not?), declare the issue to not be a moral issue.
Rather, if we empirically look, and empirically find no imperative is true in the matter, then by definition no imperative is true in the matter (to whatever probability of certainty our method entails). That is a possible outcome. But it can’t be declared from the armchair. It has to follow from evidence. In other words, it has to actually be true. And known (more or less) to be true.
I hypothesize that everyone will want different things…
Then you need to engage with my fact-based case to the contrary in TEC: pp. 351-56.
You can’t know this in advance any more than you can “know” faeries exist. If you don’t look, and look in a methodologically sound way, you don’t know. I present the scientific evidence that already makes a prima facie case against you. And a proper moral science could shore that up (or refute it) simply by engaging in a more rigorous factual investigation than I already have. But if you want to gainsay the prima facie case, you have to actually read it, and address it. With facts. Because only facts can counter facts.
But I also see another problem, which is that I’m not sure how your hypothesis could be falsified.
A definition can’t be falsified. Definitions are tautologies. They are always necessarily true.
You are confusing a definition of what we are looking for (“what you will do above all else when not in error”) with what we will find when we look (“what actually is it that you will do above all else when not in error?”).
The latter entails only falsifiable hypotheses. That’s what makes it science.
If we did our best to find out people’s “true” desires, but we still find that they diverge, we can always resort to the possibility that if we had just done a better job at finding the relevant knowledge or reasoning, we would get a convergent morality.
If that’s true, it’s true.
This is already the case. For example, in Young Earth Creationism vs. the facts of the earth’s age. We can do our best to find out what people will believe when reasoning informedly and rationally (that the earth is billions of years old). And that is what we decide to call the scientific truth of the matter. But that doesn’t make everyone believe that. “We still find that [YECs] diverge” and we are forced to conclude that “if we had just done a better job at finding the relevant knowledge or reasoning” they’d change their mind. Does that make them right? Does that make it not true that the earth is billions of years old? Does what they think have anything to do with what’s true?
Moral science is no different than the rest of science here.
So if this is a problem for moral science, it’s a problem for all scientific truth of any kind whatever.
A third problem is that even examining this seems to be practically impossible without introducing certain moral assumptions into the process.
No, it does not.
You are doing that, by making assumptions about what the results will be (“surely we can’t approve of cannibalism!”). And then using those unevidenced assumptions as “evidence” against what I’m saying may be morally true. This is a common philosophical error: philosophers will always say some metaethical theory must be false because it gets results they don’t like and are thus “sure” can’t be moral. But how do they know? Maybe they are the ones who are wrong about what is and isn’t moral. If you don’t look, you don’t know. Hence see TEC, pp. 343-47, “The Moral Worry.”
I’m the one who isn’t making assumptions about what science will find. I have evidence-based guesses, my own hypotheses I can make prima facie empirical cases for, but I admit the facts may turn out differently on a rigorous inquiry than they appear on present inquiry. And that is precisely why we need morality to become a science.
So this is a problem not with the moral science I propose. To the contrary, it is precisely the problem my proposed moral science would solve.
Sociopathy/psychopathy are not classified as mental illnesses…
Yes they are.
…they are colloquial terms used to describe certain character traits (lack of remorse, lack of empathy, etc.)…
Actually, sociopathy’s primary pathology is diminished fear response. Which causes all the other symptoms (and those symptoms are used to diagnose the disorder; among them are symptoms exhibiting an absent fear response). I cite the science on this in SAG, p. 344. More has been published since.
A moral system that assumes that we can be or would want to be machines would be useless, because we are not and never could be.
We actually are machines. So you seem to be sneaking some other assumption in here (some particular “kind” of machine that you are arbitrarily deeming to be “better” than some other).
I want to make this empirical, and you keep sneaking in unempirical, unexamined, undefended assumptions—can you not see why you have the wrong end of the stick here? Your entire methodology is not defensible.
The only way to talk about what is true, is to talk about the facts. There is no more reliable access to the facts than science.
So why would you be against more science on the facts that determine what’s true about morality?
Broadly, relativist morality is not morality at all in any meaningful sense…
All philosophy disagrees with you.
And here you seem to have some talismanic superstitious belief about the word “moral.”
And I’d like to take that talisman away so we can talk about what’s true instead.
So I henceforth ban you from ever using the word “moral” or its cognates anywhere further in this conversation. I will do the same.
We will henceforth only talk about “zoral” and its cognates, which consist entirely and solely of what you ought to do above all else.
That’s the only thing that matters. Whatever you think the word “moral” means, it is either that, or it is not that. And if it is not that, then “we ought to do above all else” what isn’t moral. In which case what is or isn’t moral is irrelevant to this entire conversation and to the entire science I propose.
(Hopefully this will start to jog your mind into realizing what’s actually going on here; and what we have actually been talking about from beginning to end.)