I’ve been sent two links of responses to my article last week, “What Exactly Is Objective Moral Truth?” Technically they are responses to Harris. But insofar as I am defending the same core thesis, and the links were sent to me, and both are by authors whose opinions I respect (even if I don’t always agree with them), they warrant a response here. I think everyone should read these responses, since they exemplify common mistakes and misunderstandings, and my responses will clarify things you might need clarified…especially in the closing epilogue of this post.
First of the replies is Ed Babinski, who posted his own entry for the Harris contest on Facebook. Second is John Shook, who posted a reply on his blog at CFI.
In both cases, I must first reiterate the whole gist of my article:
One reason Harris is not the best one to use as your straw man in this debate is that doing so is lazy. It allows talking past each other far too easily. To avoid that I created a formal deductive proof of his core thesis (all the way back in 2011…and that was in development well before that, even before I read his book or even knew he was writing it–which means it is only a proof of “his thesis” in retrospect, since I had been developing the same thesis independently since 2004). What I asked people to do is find a logical invalidity or a non-demonstrable premise in my syllogism. Because that will prevent vagaries and misunderstandings and get right to the heart of who is correct. To do that, I told everyone to read my chapter “Moral Facts Naturally Exist” in The End of Christianity (indeed I said in last week’s article, quote, “the syllogisms you have to prove invalid or unsound are on pp. 359-64″). Hereafter I shall refer to that as TEC.
To keep avoiding this is to just lazily act like armchair problem solvers who can’t be bothered to actually look up the best version of the argument they are criticizing. Stop that. No more straw man fallacies. Address the best and most rigorous form of the argument. And do it correctly, i.e., identify an actual fallacy in those syllogisms, or a premise in them that is false (or that you can prove we do not know is true).
Apart from simply not doing that (which is the biggest flaw in these replies, reducing them both to a classic straw man fallacy), here is also what’s wrong with the Babinski and Shook rebuttals…
The Babinski Rebuttal
Numbers are my own addition:
[1] “Science cannot determine human values because humans interact in competing spheres of rival concerns…”
This statement is factually false. It would entail sociology is not scientific for the same reasons. And economics. And cultural anthropology. And social psychology. And indeed the entirety of Game Theory. Obviously science can empirically determine such things as what spheres of rival concern exist and their effect on optimal human behavior (“best practices”), given even the most complex social systems.
I formally address the question of competing interests (e.g. competing imperatives, as would derive from competing spheres of concern) in TEC, pp. 425-26 n. 33. That directly responds to this point. Three years before it was even made. This illustrates what’s so often wrong with rebuttals like this: they ignore the fact that we dealt with these objections years ago. No progress can be made if you won’t even engage with what has been said.
To be fair, Babinski was writing in response to Harris’s book, not mine. So I am not so much criticizing Babinski as anyone who thinks this is the way to go about answering the actual thesis Harris defends (rather than answering how Harris defends it). Maybe Harris doesn’t adequately deal with what Babinski says (though there is no evidence Babinski checked; e.g. he never quotes Harris or cites page numbers in Moral Landscape where Harris touched on any of the same or similar questions, or should have but didn’t). But what is the use of sending me your rebuttal of a straw man of my argument?
That we have complex systems of value conflicts is in no way an objection to the core Harris thesis. It’s just a statement that the facts (that his core thesis is saying exist) are complex. Which Harris has never denied.
[2] “[T]he consequences people choose are based on values people or groups assign to each of those spheres of concern, and different people and groups assign different values to each of those spheres.”
First, that is a claim to scientific fact (it can therefore be proved or disproved empirically). It is therefore not an objection to the core Harris thesis. It is entirely possible that we will find multiple systems of morality, different ones for every individual or type of individual. That would not refute the core Harris thesis. It would in fact verify a form of it. So this rebuttal demonstrates a failure even to understand what Harris has written, much less what I have written.
I discuss this explicitly in TEC, pp. 351-54, which must be understood in the context of the last paragraph on p. 350, which will make sense when you compare what follows with what preceded on pp. 347-50. See also my comments here and here and here (also relevant are my comments here and here). My discussion in TEC was again published three years before Babinski’s rebuttal. There, the section “That There Are Moral Facts to Discover” (pp. 347-50) ends with a paragraph in which I said (p. 350):
…this only establishes a realist version of moral relativism: there must necessarily be a factually true morality at the very least for every individual, which may yet differ from individual to individual (or group to group). In such an event, moral truth is relative to the individual (or the group of individuals possessed of the same relevant properties). Nevertheless, this does not change the fact that for any individual there must necessarily be a factually true morality that is not the mere product of their opinion or belief (therefore it is not merely subjective, and certainly not antirealist), but is entirely the product of natural facts (their innate desires and the facts of the world that must be accommodated to realize those desires, which are both real objective facts).
Babinski’s second point simply doesn’t respond to this conclusion. It in no way undermines or contradicts what I said. Indeed, it practically just repeats what I said. Of course, I go on to prove that there is probably a universal morality besides (pp. 351-54). But Babinski doesn’t respond to my argument there, either.
Again, Babinski is only responding to Harris. But what’s the point in rebutting a straw man? That was what my article was about. Don’t send me rebuttals of Harris, least of all in comments on my article explaining why you shouldn’t do that.
Second, there remains a valid question whether people are correctly assigning their values. The assignment of values and value hierarchies is entirely determined by two factors: unchangeable core values and the facts of the world. False beliefs about either can result in false value assignments. Thus the value assignments people “just happen” to make can be wrong, and science can prove it: by proving their assignments are based on false assumptions about the world (e.g. “fetuses have souls”) or false assumptions about what is valuable even to the person claiming it has value (e.g. “souls are valuable,” where the person in question cannot explain why souls are valuable even to themselves, yet they just assume they are, or that they are more valuable than what that very same person in actual fact values more). People often do not get their own value hierarchies factually correct.
For example, “a drug high is the most important thing to me” is often asserted but easily refuted Socratically by showing that a person who says this in fact values some things more–indeed the only reason they would ever value this thing at all is that they value other things more (e.g. “a greater satisfaction with life”). This becomes obvious when such people realize they can be more satisfied with their lives when they aren’t in thrall to a drug addiction, and for that reason take steps to end their addiction. Thus empirically demonstrating the fact of it: not only the fact of which life is more satisfying to them, but also the fact that they wanted something more than a drug high all along: a satisfying life, which even they did not correctly know (despite this being their own frakkin’ value system). Notably, the entire science of psychotherapy (e.g. Rational Emotive Behavior Therapy) is based on helping people realize empirically that their value hierarchies are factually false, and to realign them into a logically coherent, fact-based system. Exactly what Harris and I are talking about.
For both reasons (the first and the second), this statement fails as an objection to the core Harris thesis.
Finally…
[3] All the rest of the questions Babinski raises are empirical questions for science to answer.
Such as whether moral facts are utilitarian–I have given extensive reasons to conclude they will not be, or at least will not be in a classical sense. They will likely be only in the sense of Fyfe-style desire utilitarianism, but the questions Babinski raised are mooted by desire utilitarianism because they don’t arise in it (thus, again, we have someone trying to rebut a philosophy who actually doesn’t know the philosophy he is rebutting; the entire Carrier-McKay debate I referred everyone to in last week’s article concerned desire utilitarianism).
Or such as how we should program AI so as to avoid what may in fact be its alien moral values (I have specifically addressed this problem on my blog before: see Ten Years to the Robot Apocalypse). Again, in this and every other case, I have already answered Babinski-style concerns. In fact, my answers show that Harris’s core thesis provides a better answer to these concerns than any philosophical system yet proposed. (My discussion of what we need to do about AI is the easiest demonstration of that to quickly digest; that McKay and I already agree with desire utilitarianism, and its superiority to classical utilitarianism, means if you don’t know what desire utilitarianism is, you have an additional learning curve to get to where we are on that debate.)
Babinski has communicated to me privately that what he was really concerned about is how we would effect change, i.e. convince people that any morality discovered by science is the true morality they should abide by. But that is not a relevant concern here. The core Harris thesis is not that we can convince all delusional, stubborn, ignorant, or irrational people that a certain moral system is true (it’s entirely possible there will always be unpersuadable people…and thus always creationists, racists, sexists, and so on…but that has nothing to do with what is true, and I wrote several paragraphs on exactly this point in my essay last week). The core Harris thesis is that there is something true about morality and we can discover it empirically, just as in every other domain of knowledge.
The question of how to convince people to accept scientific facts is thus a wholly separate question, and ultimately the same question that plagues geology, biology, cosmology, neurology, and every other science. But before you can start working out how to sell the truth, you have to work out what the truth is that you should be selling. And the core Harris thesis is about the latter, not the former.
The Shook Rebuttal
The Shook rebuttal contains six errors.
[1] Shook conflates my argument with Harris’s. He says “Carrier’s argument” is what he then presents (with numbered premises), when in fact what he presents is my attempt to make a clearer version of Harris’s argument, which I then proceeded to find fault with myself. Thus, Shook is misleadingly attributing my rewording of Harris’s argument to me, as if it were my argument. He has thus missed the point of my article, which was that he should not do that, but instead address my actual argument, because it avoids the problems with Harris’s.
As I said:
That makes it essentially a straw man (of Harris’s own making). But that does not mean there is no sound and sufficient answer to that question. I charge that if you really want to prove there isn’t one, you will have to respond to my answer to it in my own chapter on the subject, which, unlike Harris’s, went through several serious critiques by expert philosophers and was developed from extensive (and not contemptuous) research in the relevant philosophical literature, and with a concern for carefully laying out its formal logic (the syllogisms you have to prove invalid or unsound are on pp. 359-64; they formalize what is explained in the text).
So where is Shook’s “confutation” of my argument, the one on pp. 359-64 of TEC?
He doesn’t provide one. Instead he produces a refutation of Harris and assumes he has refuted me, the very thing my article explains is impossible. He can only refute my argument by refuting my argument, not Harris’s.
[2] Shook says his “confutation is basically this: That it can be proven that truths ‘exist’ does not necessarily, by itself, either constitute a method for specifically learning those truths, or supplying the grounds for deducing those truths so that they can be known.”
This is not even a response to the core Harris thesis.
The question of building methods is what must follow conceding the core Harris thesis, which is simply that there is something to find. I actually address the kind of methods that would be involved in both my treatises on the subject (e.g. SaG V.2.2.5-6, pp. 331-37, and TEC, pp. 340-56). Science already has developed some of them (see, again, REBT; likewise economics and social psychology and other behavioral sciences). But in general no one would say “I can refute the thesis that there is another planet in our solar system by pointing out that there is no established method yet for finding it.” That would be ridiculous. Obviously the latter, even if true, has no bearing on the former. And in practice science almost always innovatively solves the latter question when it seriously undertakes to answer any question like the former.
Only a creationist would say “a natural origin of life is impossible, because you have no method for proving which theory of natural origins is true.” That’s both false (we might well develop such a method, and we already have working methods for getting started) and fallacious (the conclusion doesn’t even follow from the premise). Shook’s argument is similarly false (in my printed work I have shown that we already have applicable methods in the fields of psychology, economics, sociology, Game Theory, cognitive science, and so on, and I explain that science could easily develop more and better ones) and fallacious (even if science lacks a method for finding x, that does not mean there is no x to find, and the core Harris thesis asserts the latter).
(See my concluding epilogue for an additional problem with Shook’s assumption that we have no means to find the necessary facts.)
[3] Shook says “moral truths there may be, but moral knowledge there may never be, if we rely only upon the sciences (broadly understood) alone.” But he never explains what “other way of knowing” is supposed to fill the gap (a third eye? a crystal ball? prayer?). All claims to fact are empirical. All empirical claims are best studied scientifically (“broadly understood”). What’s left? Only sub-par approximations to science.
I address the problem of moral ignorance in TEC, pp. 343-47 (see also my exact wording on p. 364, and analysis in n. 28, p. 424, and n. 35, p. 426). On the whole point in general see my comment here. That there may be unanswerable questions in a science in no way argues that that science is not to be pursued.
Shook also conflates situational directions with covering laws. See my discussion of the distinction, and how science can deal with it, in TEC, pp. 351-54. On which see my comment here. We solve this problem in tech fields all the time: engineers frequently must adapt general scientific rules to particular and unusual, even unique, situations. If we can do it in the science of engineering, we can do it in moral science, too. Indeed, what would you prefer? An engineer who has a large database of scientific results to work with when tackling unique situations? Or an engineer who has no database of scientific results to work with at all? Shook is like someone arguing for the latter, simply because engineers can’t expect to have a scientific paper addressing every single specific situation they will find themselves in. That’s illogical. Let’s build a big database of moral science for people to work from. That’s the Harris thesis. It’s obviously correct.
[4] Illustrating the peril of misattributing Harris’s arguments to me, and thus ignoring my actual published arguments, Shook gets wrong the distinction of what a fact is in the imperative domain. He prefers “What will maximize the satisfaction of any human being in any particular set of circumstances, as those circumstances are understood by that human being, is an empirical fact that science can discover” to “What will maximize the satisfaction of any human being in any particular set of circumstances, as those circumstances are understood by us science-minded observers, is an empirical fact that science can discover.” Yet he doesn’t realize this makes no sense in any other imperative science.
For example, would we really say that the scientific fact of what the best surgical practice is is a question of “What will maximize the success of that surgery in any particular set of circumstances, as those circumstances are understood by the surgeon” and not a question of “What will maximize the success of that surgery in any particular set of circumstances, as those circumstances are understood by science…”? Obviously the actual fact of the matter is the latter. Because the surgeon could be wrong. That’s why we do science, and why surgeons defer to the science that has been done.
The community process of scientific discovery is far more reliable than isolated individual perceptions, even of experts. Of course, the scientific community can also be wrong. But that’s true in all sciences. We don’t say therefore science is impossible because it’s sometimes going to be wrong. We say science is probably right about most things, and is more probably right about them than any other way of knowing the matter (barring specific demonstrations of a specific study’s flaws or invalidity, for example, but we’re talking about sound science, not bad science). Surgeons defer to what the scientific community has established. And the only sound time they don’t, is when they can prove thereby that the scientific community was wrong. Which would then become a part of the database of scientific facts.
This is how it works in every other science there is or ever will be. This is how it will work in moral science. So unless Shook is making an argument against all of science, he can’t be making any valid argument against moral science.
But possibly what Shook means is something else, that by “understood by [the acting] human being” he means “as would be understood by that human being if they were reasoning non-fallaciously from only true beliefs about themselves and the world,” but that is exactly what we mean (Harris and myself). That is what science can help us with. All moral arguments reduce to appeals to either (or both) of two claims to fact: the actual consequences of the available choices (not what we mistakenly think they will be), and which of those consequences the agent would most actually prefer in the long run (not what they mistakenly think they would). Both are objective, empirical facts.
But I don’t think this is what Shook meant. He meant the other thing, which makes even less sense. Because…
[5] Shook weirdly says “Suppose you instead claim, ‘No, it matters nothing whether a person grasps much of a connection between their circumstances and their moral duties’. This claim is unethical in the extreme. It denies a fundamental right to a person: to control their sense of moral duty.” In this statement Shook just said all moral condemnation of others is itself immoral. That is illogical. It’s like saying telling a surgeon he is doing it wrong is immoral because it violates his autonomy as a human being. Come on, Dr. Shook. You should have erased this sentence the moment you wrote it. Autonomy does not mean you get to ignore the facts, or treat irrational decisions (i.e. decisions based on fallacious reasoning) as on par with rational ones (i.e. decisions made without any fallacy). The facts tell you what is true, not the other way around.
Now, perhaps this was just a disastrously worded attempt to say that we might sometimes be ignorant of the objective facts and we can only operate on the facts as we know them at the time. But if that’s all he means, that is fully compatible with the Harris thesis. This again is just another facet of the issue of moral ignorance, which I address in TEC, pp. 343-47 (and with the exact wording on p. 364, and analysis in n. 28, p. 424, and n. 35, p. 426). See, again, my comment here.
[6] Finally, every remaining point Shook makes is a question for science to answer. Not armchair philosophers. For example, a surgeon faced with multiple different situations in which the best procedures for carrying out a particular surgery will be different, would never claim the answer as to which was best in each unusual situation was not a question of scientific fact. He might say science hasn’t studied the question yet. But he wouldn’t say it couldn’t, or shouldn’t, or that its merely not doing so means there is no scientific fact of the matter (as if science creates facts rather than discovering them). As for medicine, so for morality.
Likewise, no one says that the word “life” is too vague, therefore biology is impossible. The questions of defining terms and looking for distinctions are problems for science to tackle. Hence Shook’s remaining complaints completely ignore and fail to interact with any of the examples I discuss in TEC of how science might study specific questions in this domain. He claims such examples are impossible. That I have many therefore refutes him.
Even in general, the most dubious thing any philosopher can ever say is that “science can’t study that.” Science always proves them wrong. Time and again. For centuries. You’d think philosophers would learn their lesson after a while.
Ultimately, whether science “can” study something can only be determined by science. Philosophers cannot predict what science can and can’t do from the armchair. Not least because they aren’t scientists, and thus have no expertise in building research instruments in sociology, psychology, anthropology, economics, or cognitive science (the fields most directly relevant to a moral science). The question of whether scientists can do it is up to scientists to answer. The core Harris thesis is simply nothing more than that they should finally roll up their sleeves and look into precisely this.
Which all brings me back to my original charge: find the fallacy or false premise in the syllogisms of pp. 359-64. If you can’t, you simply have no refutation of my thesis. Or Harris’s.
Epilogue
I must close with an important point that seems to get lost in this debate. Everyone all too rapidly assumes that the only significance of the Harris thesis is that morality should become a science, just as nearly every other subject of philosophy has gradually become. That misses the most important point of the Harris thesis.
To correct Shook’s statement earlier, what we are saying is that “what will maximize the satisfaction of any human being in any particular set of circumstances, as those circumstances would be understood by that human being if they were reasoning non-fallaciously from only true beliefs about themselves and the world, is an empirical fact that science can discover.” Notably Shook never correctly articulated our actual thesis. Thus he cannot have confuted it. He may have confuted Harris’s badly worded attempts at explaining it (I don’t know; like Babinski, Shook never quotes Harris or cites any relevant page numbers, and thus shows no actual sign of even having read Harris, much less what Harris already says about the things Shook tries to take him to task for), but even then, that reduces Shook’s entire argument to a straw man fallacy. Which is a waste of everyone’s time.
Look at the correct wording of our thesis (items in bold above are the corrections I made to Shook’s misarticulated version of it). Remember what I said: that is what science can help us with. All moral arguments reduce to appeals to either (or both) of two claims to fact: the actual consequences of the available choices (not what we mistakenly think they will be), and which of those consequences the agent would most actually prefer in the long run (not what they mistakenly think they would). Both are objective, empirical facts. And importantly, neither is such that only a full-blown science can discover anything about it.
Let me repeat that last sentence, because it is absolutely crucial: neither fact is such that only a full-blown science can discover anything about it. Science is only better at doing that than available alternatives. It is, in fact, the best at doing that. So when we say morality should become a science, we are not saying no moral facts can be known until we make it a science. We are saying we would know more moral facts, with greater certainty and confidence, if we did. Therefore we should. Indeed, so far as we are able (and we are able), we are morally obligated to. Otherwise we are wallowing in our own willful ignorance of facts we could know more about but refuse to. Of course, individuals like myself don’t have this option (I’m not a scientist or a billionaire, so I can’t make this science happen, so my resulting ignorance is not willful), but the same cannot be said of our society as a whole, if no one steps up to start learning the facts we have the means to learn.
In the meantime, there are lots of moral facts we can know to some degree of certainty, even if it’s not all we could know, and even if it’s not known to a scientific degree of certainty in every case. Because the immediate consequence of the Harris thesis is that all claims to moral fact are empirical claims–even now. And that allows us to prove or disprove some of them even now, with already existing science and pre-scientific empirical observations. I’ve addressed this point in comments on my earlier article (see here, here, here, here, and here). Once we understand that all moral propositions are propositions about what the actual consequences of available options are and which of those consequences the moral agent would actually prefer if they were rationally aware, we can already begin to identify which moral propositions are baseless, which contradict known facts, which are testable, which have some evidence to back them up, and which have a lot of evidence to back them up, even before we begin testing them scientifically.
Ceteris paribus, for a decision to be more moral rather than less moral, the rest of your life in consequence of the choice you make must be more satisfying than it would have been if you chose differently. And that is a causal claim. Which is an empirical claim. Which is a question of scientific fact (whether science has tested it yet or not), which science can always better inform (even when it hasn’t tested it directly). Thus all moral claims are attempted approximations to scientific fact, “best guesses” as to what science would conclude if it actually carried out the required study, and like all “best guesses” (e.g. as to how life began, how the universe began, whether a war will have the outcomes you expect, whether a political policy will work or not, whether there is a Loch Ness monster, whether God exists, whether there is an afterlife, whether naturalism is the most likely worldview), the certainty of the guess will be a function of how much science already has answered that relates to it, and how much can be directly and reliably observed empirically to fill in the gaps. As for all other best guesses at empirical facts science hasn’t yet directly resolved, so for best guesses at moral facts.
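To display the bare logical form of that claim, here is a minimal sketch in Python (with entirely invented numbers; nothing here comes from TEC or any actual study, the point being only that the claim is causal, and therefore calculable, once the facts are in):

    # Illustrative only: the moral comparison is a claim about expected
    # long-run satisfaction under each available choice. A real moral
    # science would have to measure these probabilities and magnitudes.

    def expected_satisfaction(outcomes):
        """outcomes: list of (probability, long_run_satisfaction) pairs."""
        return sum(p * s for p, s in outcomes)

    # Hypothetical causal forecasts for two available choices:
    keep_promise = [(0.7, 8.0), (0.3, 2.0)]
    break_promise = [(0.9, 4.0), (0.1, 6.0)]

    if expected_satisfaction(keep_promise) > expected_satisfaction(break_promise):
        print("ceteris paribus, keeping the promise is the more moral choice")

The numbers are the empirical part. That is why the claim is testable: get the causal facts wrong, and the conclusion changes.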
Thus, that a moral science has not yet begun doesn’t change the fact that the Harris thesis clears the decks of all bullshit and lets us know just what it is we are actually trying to guess at when making moral claims, and how to improve the accuracy of those guesses until we have an actual science doing it for us.
As I wrote in one of the comments I linked to above:
[E]ven before we engage such a science, all talk about moral truth simply is talk about what people really want and whether their actions really will produce it.
Analogously, before the 20th century there was no scientific psychology. But all talk about psychology was still talk about what things produce mental phenomena and how they work, which is all talk about empirical facts. Thus all propositions about psychology then were in-principle empirically testable hypotheses about mental phenomena and their causes, some of which even then were more likely to be true (e.g. the brain produces a mind; there is a localization of mental functions across the anatomy of the brain; etc.) than others (e.g. psychic powers; disembodied minds surviving destruction of the brain; etc.).
The same is true now: even before we have a science of morality, if our thesis is correct, then it is still the case that all propositions about morality are in-principle empirically testable hypotheses about shared human desires and the cause-effect relations between human choices and outcomes, some of which even now are more likely to be true (e.g. honest and compassionate people, ceteris paribus, more reliably live more satisfying lives than heartless liars do) than others (e.g. abortion and homosexuality are always immoral; eating pork and answering a telephone on Saturday are immoral; men ought to treat women as inferior and subservient; killing people who criticize religion is a moral good; etc.).
Likewise in another of those comments, I pointed out that:
The only way you could argue against [a conclusion in moral science] is to produce evidence that the consequences of accepting that [conclusion] would be unacceptable (and the latter is a covert reference to human desires: in this case, we would mean the consequences to ourselves [as moral agents], e.g. our consciences, directly and/or the consequences to others [or the society as a whole] in turn causing their behavior to then have consequences upon us that we would not like).
But if you could do that, then “what science found” would then be empirically falsified by the evidence you then produced (which the study you are confuting must have missed), and you would have thus produced the correct scientific conclusion in the matter.
And in another:
Moral facts follow from actual values–which does not mean the values you think you have or just happen to have, but the values you would have if you derived your values non-fallaciously from true facts (and not false beliefs) about yourself and the world. Hence, your actual values (what you really would value if you were right about everything).
As to questions like how we would measure satisfaction comparatively, that’s an empirical question for scientists to work out. It is not relevant to the underlying facts. That we didn’t have telescopes would not mean there are no mountains on Mars. Regardless of what instruments we presently have, the facts of the world remain objective facts of the world. Thus, a person’s greatest available satisfaction state is an actual objective fact about them. Whether we presently have instruments to detect it or not. And all moral discourse is covertly appealing to that and thus already making an objective fact claim about it–whether people realize it or not.
Hence I revisited the psychology analogy in yet another of those comments, like so:
Our situation now is comparable to psychology in the 19th century: there were scientific facts bearing on questions of psychology, but it was hardly a science. So one then could either draw inferences about psychology in as scientific a way as was available to you (and thus admit that what you are doing is attempting to predict what science would confirm if it checked, whether it could or not, and thus all your inferences are actually proto-scientific hypotheses, and thus would be revised the more scientific facts had been ascertained) or you could draw inferences about psychology in some other way (like getting it from the bible, or armchair speculation, or folklore and tradition, or ideology, or the whimsy of imagination, [or what we just “feel” is true], and so on). Obviously the former is the more correct way to do it. And so, too, morality now, even before we get it on a proper scientific footing.
Thus, we are not just saying scientists should start studying this, and stop giving excuses not to. We are also saying that all discourse about morality even now is approximating to a scientific understanding, which thus dispels most moral discourse as nonsense, and improves what remains by clarifying what sense it has, and in what ways it could be empirically confirmed in increasing degrees (only the gold standard of which would be a full-blown scientific study).
…
[Which is philosophy, but] not all philosophy is equal. There is bullshit (e.g. theism, supernaturalism, astrology, afterlife studies). And there is rigorous, scientifically informed philosophy that aims to develop better, more accurate empirical hypotheses about the world until we have enough information to start moving it into a science (e.g. naturalism, protobiology, immortality studies). We can start that process now. Because not all knowledge begins “scientifically certain” out of the gate (e.g. propositions about psychology in 1850); nor is what isn’t yet scientific “just as likely” as every other proposition on the matter (e.g. propositions about psychology in 1850).
…
[R]emember that this isn’t just about getting scientists to stop making excuses for not studying this. It’s also about getting all rational people to chuck in the bin all the baloney moral discourse (that is akin to theology, supernaturalism, astrology, afterlife studies) and learn how to be able to identify moral discourse that is actually science-based and at least starting to approximate something testable. That’s philosophy. But it’s the right kind of philosophy. Because it’s philosophy on the right track and capable of getting somewhere (that somewhere only being ultimately, in the long run, actual scientific research).
This is the lesson Shook doesn’t seem to get. In the end, any claim he ever makes about morality is a claim about either or both empirical facts (what will really happen, and what we really want). Otherwise he’s just talking rot. All his claims about morality would just be his own personal complaints about how people aren’t behaving the way he wants them to, and not true statements about how they actually ought to behave.
Until we realize this, we should never speak on morality as if we know any true statement about it. Because other than statements about what will really happen, and what an agent will really want, there are no true statements about it.
Why don’t you just post the deductive version of your argument? Not everyone wants to buy the book, nor does everyone have access to it. Post it. Jeesh.
Because (a) I don’t own the copyright (I may be able to independently release that chapter in fourteen years or so, but even that depends on whether Prometheus allows it) and (b) the syllogisms will evoke questions which the main text was written specifically to answer, so you really are going to want to read the whole chapter (or will have to). The book is barely even $13 US (less on kindle). So it’s not like it’s a hardship.
1. Moral truth must be based on the truth.
Agreed, though with the caveat that we don’t of course know the whole truth about everything, moral or otherwise.
2. The moral is that which you ought to do above all else.
I’m not certain that I even agree with this. Firstly, it seems to be defining the moral as the necessary, which may be a conflation. Secondly, your “ought” here is I take it a hypothetical (ought to do y in order to get outcome x), but this won’t help us to resolve conflicts between competing x’s in cases where it is not obvious which x to seek… however I think these objections should be dealt with in the later heads.
A point I would like to make is that I think a fundamental difference between our _moral_ preferences and mere matters of personal taste is that morality concerns how we want others to behave as well as ourselves.
3. All imperatives (all ‘ought’ statements) are hypothetical imperatives.
Agreed entirely. It’s always “ought to do y in order to get outcome x”. It’s the question of what outcomes x to seek that is so vexed.
4. All human decisions are made in the hopes of being as satisfied with one’s life as one can be in the circumstances they find themselves in.
I’m not certain I agree with this either. It smacks a little of homo economicus. I think some decisions are made in despair, some are made at random, and some are made out of spite. From your exposition of this point, I think we might both agree that all _rational_ decisions are made in the hopes etc., but I think putting “all decisions are made” into the definition is going too far.
5. What will maximize the satisfaction of any human being in any particular set of circumstances is an empirical fact that science can discover.
I really don’t think this is true – certainly not the “science can discover” part. Since this pursuit of satisfaction has an aspect of futurity to it – we don’t do everything for the sake of instant gratification, we have our eyes on future outcomes too – our knowledge is limited by our capacity for prediction, which cannot even in theory be made perfect. We don’t, for one thing, have perfect knowledge of the past. Even if we did, many events which will have causal influence on your future are currently outside your past light cone, speaking relativistically, and so you cannot possibly know about them yet. And then even if we had that knowledge, we’d have to be able to predict how it would make you feel, which is a brain state question that we’re really not in a position to predict with scientific certainty. A moral science that can only be applied by a god with perfect total universal transtemporal knowledge is not a useful moral science for humans.
There’s also, of course, the issue that _your_ satisfaction and _my_ satisfaction may be mutually exclusive, which leads us on to the next point…
6. There are many fundamentals of our biology, neurology, psychology, and environment that are the same for all human beings.
Yes, true, but not really helpful. There are a lot of _important details_ about all of the above that are _different_ for all human beings. And even if we grant that we all have the same fundamental needs, that doesn’t mean that everyone can get those needs met, which is exactly where moral dilemmas creep in.
For example in comments you say this:
“Therefore it is immoral to lock someone outside in the snow without warm clothes.
Therefore it is moral to give someone locked outside in the snow warm clothes.
…Yes, being an exothermic mammal is of moral concern.”
That only follows if you think it’s immoral to let people freeze to death when you could save them. Is it? We could say that we wouldn’t want it to be done to us, golden rule – but it isn’t being done to us, so what’s the problem? We could say that it lowers the sum of human happiness – but does it, if I’m _really happy_ about locking that guy out in the cold? We could say that “let’s all let each other freeze” would not make a good universal law, categorical imperative – but is the categorical imperative a moral truth? And what if we only have enough supplies to get half of us through the arctic winter, so either _somebody_ gets left out to freeze or _everybody_ dies – what’s the moral way to choose who gets to be Oates? I think that you’re a nice person and find it obvious that letting people freeze to death is nasty, and as it happens I agree, but I think we’re fooling ourselves if we mistake that for a rigorous logical conclusion rather than a preference.
I don’t see that your point 6 can ever get you all the way to a set of moral imperatives that all rational actors must share; and even if it did, I don’t see that that set of imperatives would then be exhaustive and cover all moral dilemmas. I suspect that you may be conflating “rational” with “rational and benevolent and empathetic and far-sighted”.
I think the importance of hypothetical imperatives is fundamental and is a point we can agree on. What I take away from that, however, is that the label “good” isn’t doing any useful work once we are discussing specific goals and outcomes. Any time I say “It would be good to do Y”, I mean that we should do Y in order to get outcome X, and I think X is good. Science is dead handy at matching our Y’s to our X’s, but not particularly useful for helping us pick our X’s. I think the best we can do in public discourse is to be explicit about our X’s, run them up the flagpole and see who salutes.
Well then by “moral” you are talking about something we ought not do (because there will be this other thing, which we ought more to do). And if that’s the case, what you mean by “moral” is a useless topic. I’m talking about what we actually ought to do. I discuss this point more in TEC.
Or a reduction. QED.
Although most people mean by “necessary” something else (e.g. something you have to do or can’t fail to do, e.g. to survive). If you mean by “necessary” simply “that which we must do to maximize our actual goals” then you are just making it into a synonym for moral (a fact more obvious in some other languages, like Latin, where indeed many of the same words and grammar have both meanings: ought to do and necessary).
You are conflating two different problems, both easily solved (just as they are in every imperative science like medicine or engineering). The first is what to do when there are conflicting imperatives (see my discussion of this point raised by Babinski in the very article you are commenting on here). The second is what to do when we are faced with significant ignorance about what to do (ditto, under Shook this time, in re: “moral ignorance”).
Basically, this was resolved years ago. Details in TEC.
This is a common conflation: mistakenly assuming that how you want others to behave is how they actually ought to behave. But those are neither synonymous nor likely to always align. Morality can only consist in the latter. Calling the former “morality” is just a dishonest way to try and convince people they ought to do what you want, even though in fact they shouldn’t, or have no valid reason to.
However, there is a more valid demarcation criterion close to what you may instead have meant, which is morality as affects the self and morality as affects others. In Western democracies we have developed an increasing tendency to only use the word “morality” for decisions that affect others (I point this out, and call it “a defining convention,” in Sense and Goodness without God V.2.1.2, pp. 316-20), although conservative religions cling to using “morality” for both. In this respect, we are just talking semantics. As I explain in SaG, it doesn’t matter what you call it, it’s still what you ought most to do (and thus there is no separate “morality” that can override it–not in actual fact, at any rate).
Not so much vexed, as overlooked and under-studied. This was my point in my critique of the Shermer-Pigliucci exchange. It’s also a major element of my discussion in SaG.
Correct. You started by conflating the decisions people actually make, with the decisions they would make if they were thinking rationally and correctly informed. I spent several paragraphs in the previous article to this one, the one Shook was commenting on, explaining why this distinction is crucial.
Nevertheless, even irrational and misinformed decisions are seeking the same goal (e.g. a decision made in despair is based on an irrationally-reached and factually-invalid assumption that it would more satisfy them to do that, indeed that’s precisely why they do it–to end or alleviate or mollify or compensate for their despair–that they are wrong as to the effects is irrelevant to the fact of what they were trying to achieve).
See my reply on this point in the very article you are commenting on here.
This is just as true of surgery and pharmacology and engineering and agriculture, yet that does not prevent us determining empirically what are the best practices available or presently known to us in those domains. So this is a moot objection. See again my reference re: moral ignorance above.
The “you can’t have perfect knowledge” objection is anti-scientific (since if correct it would undermine all of science). It is also obviously impractical (we conduct our lives all the time on optimal, not perfect knowledge, and the lack of the latter has no bearing on how best to acquire the former).
Psychology, sociology, marketing science, and cognitive science already make scientifically successful predictions in these domains. So the science is already way ahead of you. And we could do far better, too.
Indeed. But since neither I nor Harris nor anyone has ever proposed pursuing such a Quixotic objective, it’s a moot point here.
Our aims, as in all imperative sciences, are improved knowledge, not perfect knowledge.
Game Theory already accounts for that. It is not an obstacle. It’s just another empirical fact that affects what will turn out to be best practices.
Irrelevant to the point. The similarities still entail certain similar conclusions. Variances for individual diversity then affect application of general shared principles, not the general shared principles. I discuss this specifically, with examples, in TEC. But even in general, “we must eat” and “I prefer to eat broccoli” are not contradictory. The one is a general rule true for us all, the other an individual application of it.
This is true for all moral systems (i.e. all entail dilemmas). It is therefore not an objection to any one system (that there are moral dilemmas does not mean there are no moral facts).
That’s precisely the question. I am talking about how to answer it–and in such a way that that answer can legitimately be said to be true.
The only way forward on that is empirical.
That’s the sum point. The rest is just debating what the empirical facts are.
Game Theory, for one. Conscience, for another (e.g. the emotional effects of certain acts and attitudes on our own ability to be at peace with ourselves rather than dissatisfied or unfulfilled, etc.). I get specific in SaG. The consequences to ourselves of not caring about that are numerous, both directly (in our own personal psychology, in direct emotional experience as well as in the self-defeating behaviors it then tends to produce in us) and indirectly (in the consequences to us from others’ reactions, and from how the social system would adjust to people in general not caring about that). And this can be scientifically documented (in many respects it already has). In TEC I use the example of slave-owners to draw out many of these consequences, of both types, and how they could be scientifically verified (and are already empirically observed at the pre-scientific level).
Then you need to read my chapter in TEC. It will walk you through it. Along the entire causal chain.
Irrelevant. We don’t have to know everything about medicine and engineering to know something about medicine and engineering, and to continue learning more. The rest is simply what we don’t know (again, reference “moral ignorance” above).
“Benevolent” and “empathetic” and “far-sighted” are empirically entailed conclusions from “rational,” via Game Theory and the way human psychology and social systems actually work (as a matter of empirical fact). Thus, if you aren’t cultivating “benevolence” and “empathy” and “far-sightedness,” you are being irrational, and this can be proved empirically (science already has a lot to say on this, and could say even more with properly directed studies, some of which I describe in SaG).
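For anyone who wants to see the shape of that result, here is a minimal sketch (my own illustration, not an excerpt from SaG or TEC, assuming the standard Axelrod-style iterated prisoner’s dilemma payoffs): a reciprocating strategy, which is in effect minimally “benevolent” without being exploitable, outscores always-defect across a round-robin tournament.

    # Standard payoffs: temptation 5, reward 3, punishment 1, sucker 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # cooperate first, then mirror the opponent's last move
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def match(s1, s2, rounds=200):
        h1, h2, score1, score2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = s1(h2), s2(h1)  # each strategy sees the other's past moves
            p1, p2 = PAYOFF[(m1, m2)]
            score1, score2 = score1 + p1, score2 + p2
            h1.append(m1)
            h2.append(m2)
        return score1, score2

    strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
    totals = {name: 0 for name in strategies}
    for name1, s1 in strategies.items():  # round-robin, self-play included
        for name2, s2 in strategies.items():
            totals[name1] += match(s1, s2)[0]
    print(totals)  # tit_for_tat ends up with the higher total

Richer tournaments (more strategies, noise, evolutionary replication) only strengthen the point, and that is the kind of empirical result I mean: dispositions toward reciprocity win on the agents’ own payoffs.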
You are contradicting yourself. If what you think is good is empirically determinable (and it is: you just claimed to be able to determine it empirically, and we certainly can externally, since it is a neurophysical fact about you), then what anyone else thinks is good is empirically determinable as well (in exactly the same ways). Therefore science is handy in everyone’s case, not just yours.
Indeed, the more so, as you can be wrong about your own preferences. See my explanation of this point in the very article you are commenting on here.
Except morality has nothing to do with satisfaction.
Except that it turns out it does. As I proved in my chapter on this in TEC.
You are more awesome than I have words to describe, and I am having a marvelous time learning about objective morality, as well as gaining some grasp of philosophy generally. Thank you for the enormous effort you expend, for free, on behalf of those less learned (e.g. me). I have enjoyed SaG (twice), am enjoying PH and have EoC on deck.
Well, thank you. But though I do do this technically for free, I do hope people will help support me financially so I can keep doing it (and maybe be able to still survive when I can’t do it anymore). Buying my books is just one way people can pay me what they think my efforts are worth. Some other ways are suggested here.
Well, I’ve already done numbers 1, 2 and 3; and you sort of discourage number 5. I guess I should add number 4 and make a clean sweep of it.
I think these criticisms still manage to bring out the weak points of the enterprise. They’re not refutations, per se, because they’re more “in practice” than “in principle” criticisms — but that doesn’t make them useless to think about, if the goal is to make scientific moral judgements someday.
We currently lack a method to measure satisfaction, or to determine the action for a particular individual in a particular circumstance that would lead to the greatest satisfaction. Since we lack those methods (and we don’t really have an idea how they would be applied prior to an action being performed), we don’t know how useful its conclusions will be.
It may turn out that the “greatest satisfaction” constraint doesn’t result in anything more generalizable or determined than tastes in food or art. It may be that people vary extremely in how rival areas of concern (national, world, political, personal, weather) are weighted in terms of personal satisfaction.
But it could be that we don’t care about finding general moral principles that apply across a large swath of people and situations. In that case, we’re faced with the problem of *predicting* satisfaction in very specific circumstances (a massively large sample space). We know, based on the nature of complex systems, that prediction in situations like that is going to be extremely difficult. It may turn out that gathering and analyzing the requisite data will take an impractical amount of time or will fail to yield precise enough results.
If I had to pick out a weak point in your argument, I’d focus on why you felt it was important (in your previous blog post — I haven’t read your books) to say, “There are many fundamentals of our biology, neurology, psychology, and environment that are the same for all human beings”. Is it essential that we can generalize moral principles? If so, then we don’t know “in principle” whether our hypothetical scientific methods will get us there.
Harris’ arguments about sociopaths are not completely convincing to me. Also, it seems quite possible that the mechanisms that lead to satisfaction for humans could, biologically, become pretty far out-of-sync with our environments. Those are just a couple of things that could make it difficult to generalize.
My thoughts exactly. Which is why I do this in TEC and SaG.
That’s not strictly true. We have some such measures and have already started scientifically using them; they just could be improved. And even before we do that, we already have empirically-based ideas about these things–all moral discourse is already discourse about what is the most satisfying life for someone and when and why and how to best achieve it. It has its limits and ambiguities and is pre-scientific (the point of the article you are commenting on), but it’s not just fiction either.
Knowing that moral discourse just is such discourse will tune us to actually start working more toward refining our understanding of human satisfaction, and its related psychology and sociology. It will also help us immediately eliminate all moral discourse that isn’t even attempting to talk about these things, much less gauge their reality empirically to whatever extent we presently can.
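To give one concrete example of the kind of measure I mean (my illustration here; psychologists already use validated self-report instruments such as Diener’s Satisfaction With Life Scale, five statements each rated from 1 to 7):

    # Sketch of scoring a Satisfaction With Life Scale style instrument:
    # five 1-7 ratings, summed to a 5-35 total. Imperfect, but already a
    # usable, improvable empirical measure of life satisfaction.

    def swls_score(ratings):
        assert len(ratings) == 5 and all(1 <= r <= 7 for r in ratings)
        return sum(ratings)

    print(swls_score([5, 6, 4, 5, 6]))  # 26, for a hypothetical respondent

Crude, yes. But so were early thermometers. Crude instruments that can be progressively refined are exactly what every young science starts from.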
First, for individuals or type-sets, that’s either moot or false (by definition; I explain in TEC, in regard to realist moral relativism–basically, regardless of whether it’s all “just tastes,” whatever that means, there will still be some moral fact of the matter, i.e. “what you ought most to do,” even if it’s not universal); and even for universality (all humans), that’s questionable from the start (there probably are some universals: again I explain why in TEC).
For the distinction between moral imperatives and aesthetic opinions, compare SaG Part V with Part VI (after completing Part II.2.2.3-4, pp. 37-40).
This is very unlikely, once all fallacies and false beliefs are corrected. Purged of fallaciously-reached and misinformed value weights, what we will have left will very likely be a very similar if not identical universal value weighting (again, for reasons I explain in TEC). The diversity of value weighting in the world today (in the moral domain) is demonstrably almost entirely (if not entirely) a product of fallaciously-reached and misinformed value weights (e.g. “fetuses have souls” and “keeping souls in their bodies is the most valuable thing in the universe to do”).
This ignores the epilogue of the article you are commenting on. As well as a great deal else I’ve written in this article and the one preceding it.
It also seems to make the same mistake of another commenter here, of thinking that scientific knowledge must be perfect or else is impossible. That’s illogical.
I’m only interested in what happens to be true. My argument does not even proceed from what is essential. It proceeds simply from what is.
Hence my chapter in TEC has a whole section on the alternative possibility that science will confirm realist moral relativism, and not universalism. It’s just that I can adduce enough evidence to show that that’s unlikely. But its being possible is not a problem for me.
You’d have to explain what you mean. And why you are using that straw man (Harris) rather than what I’ve said about psychopaths (references here)–one of the key points of the article you are commenting on here.
You aren’t making sense to me here. Optimizing with respect to our environments is inherent in the empirical facts determining best practices. So if you have a behavior that is “pretty far out-of-sync with our environments,” it cannot be normative at all, much less moral.
Perhaps you meant there are better states we could achieve if our environment were different (e.g. that our environments block us from even higher peaks of satisfaction than those presently available to us), but that has no effect on what is moral for us to do now, since you can never be compelled to do the impossible. Morality, like all imperatives, is situational. The moral is only that which you ought to do among the options available to you. Unavailable options are moot.
Until they become available, of course. Which is why we transform our environment. Some such transformations, once available, may be morally compelled (e.g. community sanitation and pollution controls). But that’s again still dependent on the transformation being achievable. The unachievable cannot be morally compulsory. I discuss this in TEC, pp. 424-26, ns. 28 and 34.
It seems that most of the justifications I’m looking for are in TEC and SaG. I’m still deciding whether it’s worthwhile to pursue reading those.
That’s not what I’m saying at all. The predictions science would need to make would be based on systems with a level of complexity somewhere near (or greater than) what macro-economics tries to deal with. Our success so far in predicting systems like that is not anywhere near good enough to make sure decisions. In order to determine what will result in maximum satisfaction, we have to know what will result. We’re still super lousy at that. Maybe we’ll hit some kind of goldmine and suddenly be able to predict simple things like CA rule 110. Who knows.
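(An aside for readers who don’t know the reference: Rule 110 is a one-dimensional cellular automaton whose update rule is trivial to state, yet it is Turing complete, so in general there is no faster way to predict its long-run behavior than to just run it, which is the commenter’s point about prediction. A minimal sketch in Python, purely for illustration:

```python
# Rule 110: each 3-cell neighborhood (left, center, right) maps to one bit
# of the number 110; a trivial rule with famously unpredictable behavior.
RULE = 110

def step(cells):
    """Advance one generation; edges wrap around."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 63 + [1] + [0] * 64  # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Complex, irregular structure emerges immediately, and in general no shortcut for computing where it goes is known.)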
My question is why the posited universality of moral facts is important to your argument. If science can discover in each individual circumstance what will bring maximum satisfaction to the individual, and if that individual’s satisfaction is all that is needed to discover the moral fact, why does generalizability even matter?
I didn’t see what you said about psychopaths. But I find that problematic too. A person with a mental disease is a system whose satisfaction is physically determined. Why should that person’s true conclusions about his or her own satisfaction be considered any less normatively valid than anyone else’s? Maladaptivity is irrelevant. That person’s moral facts are simply different than most other people’s. If there’s only a small chance of being caught and punished in a particular situation, then the awful thing that the sociopath wants to do is a morally good action.
Yeah, I don’t think you’re getting what I’m saying. Imagine that science could determine something like an ontological IPD payoff table. That payoff table would have different values based on the current environment. In environment A, let’s say that defect-cooperate is 2,0 and cooperate-cooperate is 1,1. And let’s say that in environment B, defect-cooperate is 1,0 and cooperate-cooperate is 2,2. An organism that evolved to be adaptive in environment A would be structured in such a way that it would get more satisfaction from defecting. Put that organism in environment B, and it would still gain more satisfaction from defecting, even if the payoff would be better for cooperating.
You seem to be saying that to be rational, the organism should adjust to the payoff table in environment B. But the physical reality of that organism’s satisfaction is based on A. It is objectively less satisfied to cooperate, even though that action is out-of-sync with its environment.
A person can develop such that he or she gets satisfaction from cheating, revenge, or whatever. In an environment where cheating doesn’t have negative consequences for the cheater, cheating is a morally good action. In an environment where it does have consequences, the person still gains more satisfaction from cheating if not punished, but has a higher probability of getting punished. There are three things that seem important to me about this: 1) the determined satisfaction of the organism is what it is, regardless of whether it’s in-sync with its environment; 2) cheating would be considered morally good in an environment that supports it; and 3) in large systems of many organisms, there’s room for a subset of organisms whose satisfaction constraints contradict those of other organisms.
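(To make the above scenario concrete, here is a minimal sketch in Python. The payoff numbers are the ones given in the comment; the function names, and the modeling choice that felt satisfaction stays wired to the ancestral payoff table, are my own illustrative assumptions:

```python
# The commenter's scenario: an organism evolved under environment A's payoffs
# keeps A's "felt" rewards even when acting in environment B.
# (my_move, their_move) -> (my_payoff, their_payoff); opponent always cooperates here.
ENV_A = {("defect", "cooperate"): (2, 0), ("cooperate", "cooperate"): (1, 1)}
ENV_B = {("defect", "cooperate"): (1, 0), ("cooperate", "cooperate"): (2, 2)}

def felt_satisfaction(my_move, their_move, evolved_in=ENV_A):
    """Satisfaction as wired by the ancestral environment, not the current one."""
    return evolved_in[(my_move, their_move)][0]

def actual_payoff(my_move, their_move, environment):
    """What the organism really receives in the environment it now occupies."""
    return environment[(my_move, their_move)][0]

for move in ("defect", "cooperate"):
    print(move, "feels like:", felt_satisfaction(move, "cooperate"),
          "| actually pays in B:", actual_payoff(move, "cooperate", ENV_B))
# Defecting feels better (2 vs. 1) even though cooperating actually pays better
# in B (2 vs. 1); that mismatch is exactly the tension the comment describes.
```

Whether “felt satisfaction” really stays fixed like this, rather than updating with experience of the new environment, is of course the empirical question at issue in the replies that follow.)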
That’s why I wrote them.
(“them” meaning SaG and the one chapter in TEC.)
We don’t need to make sure decisions. We just need to make better ones.
It’s the other way around. The right thing to do is the maximum satisfaction state available to us at the time of a decision. Things unknown at that time are not available to us.
So we aren’t concerned about what we don’t know. We are concerned about what we can know and whether we know it correctly.
It isn’t.
As I explained (it seems you weren’t paying close attention to what I said here), I show in TEC that there are two possible results: relativism is proved or universalism is proved. Either is compatible with my thesis.
I then show that it’s probably going to be universal. Not that it needs to be. Just that it so happens it probably will be (and I explain why).
Because in that case you need general rules for yourself.
But the idea that there will be a completely different maximal system for every individual with zero overlap is massively improbable. That would be like there having to be a completely different medical system for every individual.
A far more plausible relativism would be group-typed, i.e. there will be general rules for all people relevantly like yourself, and another set for people of a different type, and so on. But for relativism to be true, there would have to be zero overlap among all those group-types in terms of general rules for maximization. Which is very improbable. Game Theory alone has been proved to be universally generalizable (as in, any individual who ignores Game Theory will be making sub-optimal decisions for themselves). So we already know there are universals. The question is what they all are.
This does not change the fact, though, that there will be individualization of general rules, and so some general rules for yourself that differ from everyone else’s or that vary by group-type; but those will probably be applications of universal moral rules (e.g., again, needing to eat vs. preferring to eat broccoli). Science (and certainly at least pre-scientific empirical reasoning) can help with those rules, too. But generally when we say “x is moral” we are making a claim on all people (or at least some ramified category of people), not for ourselves alone. So when you want to know whether “x is moral” is true, you are asking whether it is true for most or all people. The question “should I myself do y rather than z, when y and z are both an x” is still an empirical question, but it is not normally what you would call the moral question (once you’ve already concluded x is moral). That’s semantics, though. The difference is irrelevant as far as methodology goes (i.e. what you have to do to verify the proposition is probably true).
Disabled thought cannot be normative. See my previous article here on that point (the one Shook is responding to).
But I think you should read all I’ve said on the psychopath question specifically.
Ultimately, what is normative for a psychopath cannot be normative for us, as we are not psychopaths. And if what is normative for a psychopath actually does differ from us, then what is normative for us will include that very fact. Game Theory: if psychopaths are rightly amoral, then as a matter of moral fact we ought to treat them accordingly. Which then affects what is normative for them (as “best practices” for them must account for how we will react to what those are, otherwise they won’t be “best practices” for them).
In reality, we have scientific evidence that psychopaths who sufficiently and rationally reflect on their conditions would prefer not to be psychopaths (they admit they would be more satisfied if they were not). I discuss this in SaG (with citation of the literature). Those that rationally understand this can actually maximize their satisfaction by living pro-socially. It’s just that psychopaths tend to be rationally impaired in a number of different ways. Which we (and they) have to account for.
If the chance is small, perhaps. But the more small risks you take, the more they add up to a high probability. Thus anyone who treats individual actions in isolation is being irrational.
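To illustrate with invented numbers (the 5% figure and the assumption of independent occasions are hypothetical, purely for arithmetic’s sake): the chance of being caught at least once in n acts is 1 − (1 − p)^n, which climbs fast. A minimal sketch:

```python
# Hypothetical illustration of how small per-act risks accumulate.
# The 5% figure is invented for the example; occasions are assumed independent.
p = 0.05  # chance of being caught on any single occasion

for n in (1, 10, 50, 100):
    caught_at_least_once = 1 - (1 - p) ** n
    print(f"after {n:3d} acts: {caught_at_least_once:.0%} chance of being caught at least once")
# prints roughly 5%, 40%, 92%, 99%: a "small" risk per act is a near-certainty as a policy
```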
Moreover, you cannot hide actions from yourself (psychological effects). Nor can you enjoy the benefits of actions you never take (benefits of prosociality, not just reciprocation but also direct psychological benefits in yourself).
And so on.
I catalog all the consequences even psychopaths must consider (to be acting rationally and informedly) in my discussions of the subject. Among them are that a psychopath tends to live in a constant state of frustration and dissatisfaction. Because they don’t realize that if they adjusted their behavior to be reliably prosocial they would be far less frustrated and dissatisfied.
That’s self-contradictory. You can’t have x = more satisfying and ~x = higher payoff when “more satisfying” = “higher payoff.” So I’m not sure what you were actually trying to say here. I assume you are trying to say that different people might be more satisfied doing different things in what are otherwise the same circumstances. But that’s fully accounted for in my thesis. See TEC, e.g. p. 355, for specific examples and analysis.
Morality is situational. How one person differs from another person is part of the situation.
So, when it comes to different people being under different moral directives in otherwise the same external circumstances: if that were actually the case, then acting accordingly would be the correct thing to do. For example, sometimes non-cooperation is the moral thing to do. And sometimes it is so for person A and not person B, even when A and B are in otherwise the same circumstances. (This really should be obvious; I use the example in TEC of the moral demands in a rescue situation where C is drowning and A knows how to swim and B does not.)
The question, though, is whether (or when) that is actually the case.
And intuitive armchair thinking about that is often wrong. Hence non-fallacious, empirically informed thinking is required. Which is the core Harris thesis.
But will that be more satisfying than all alternatives available to them? That is extremely unlikely. And we can prove that empirically (the more so with scientific data, but even without such data we can prove it to sub-scientific levels of certainty from already empirically observable facts).
That is not necessarily true. Because the social system then would know that.
For example, we could all agree that stealing when you won’t be caught is legal and ethical. But now think through how the entire social system would change to adjust for that fact. The social system would become less free, more paranoid, and more draconian. It would therefore reduce everyone’s satisfaction. It is therefore better not to create, or even take any action that would foster, such an adjustment to the social system. And that requires refraining from stealing even when you wouldn’t get caught.
This plays out empirically in the difference, for example, between satisfaction levels in the US and Mexico. In Mexico, universal police corruption leads to a miserable social system in which satisfaction levels are suppressed by unreliable police, bribe swindling, and high rates of crime. Conversely, the relatively high professionalism of US police (e.g. we do not have to bribe them to do things, and generally could not even if we tried) leads to nearly the exact opposite.
Exceptions aside (i.e. speaking statistically), in Mexico, police act on the principle that they will demand and take bribes if they won’t get caught or punished for it; in the US, police act on the principle that they won’t demand or take bribes even if they wouldn’t get caught or punished for it. Notice the resulting difference in the two societies. Which the police themselves enjoy the benefit of. The Mexican police are therefore acting irrationally. If they stopped doing that, their social system would substantially improve, and along with it their own lives (and the lives of those people and things and ideals they care about).
The same proves out in the US where police corruption occurs (the resulting neighborhoods and precincts tend to become more miserable even for the corrupt police). I outline a large number of consequences one must account for that relate to behavior choices like this in SaG.
You were therefore, evidently, not taking into account a large number of consequences. Which proves my point: only when we look at the actual facts (of how social systems and human psychology actually work, not how we think they do) do we get correct results. That’s the Harris thesis in a nutshell.
Does morality necessarily have a social component? If I’m on a spaceship and I find myself outside the Hubble volume of every other living thing, does morality exist for me anymore? It might be “unwise” not to maximize my own satisfaction, but would it be immoral? I don’t mean to argue definitions, but it seems to me that when some people talk about morality they are talking about a subset of personal satisfaction maximizing as it relates to their interactions with other people.
Logically necessary? No. Empirically necessary? Yes.
Social systems are a part of the causal system we are acting in, and a part of what is empirically necessary for us to maximize our satisfaction with life (that is far harder to do when wholly isolated from a society than when fully integrated into one), so even if we didn’t have one, it would be in our best interests to create one.
Imagining weird scenarios like lost spaceships may be fun but is of little use when discussing morality in the real world, and the real decisions real people have to really make.
In any event, the answer to your question is semantic, since it depends on what meaning of the word “moral” you employ. But it won’t matter, because whether you call it moral or not, there is always something you ought most to do, and what that is is always an empirical fact about you and the universe. Even when you are completely isolated from any social system.
For more on this point see my related comment above.
An excellent article by Steven Pinker on the issue of morality and scientific worldview…
http://www.newrepublic.com/article/114127/science-not-enemy-humanities
Not exactly on a Harris-style thesis, though. But one paragraph there pertains (the bracketed numbers are my own addition):
Claim [6] is true and defines what is happening, but not whether it’s actually an improvement toward what is objectively better. The previous claims are loose attempts at doing that. Claim [5] is true but does not in itself explain why science entails humanism and humanism science (though one can easily come up with good reasons why that is). Claim [4] is problematically vague (in the same way I found Harris’s statements to be in the previous article), but it hints at something correct that (better worded) does follow, as he suspects, from claim [3], “that all of us value our own welfare” and “that we are social beings who impinge on each other and can negotiate codes of conduct” (these are among the significant objective facts that empirically constrain right and wrong, and both are empirical claims science can verify), among other things. It likewise follows from claims [1] and [2] and every similar thing they exemplify: that science eliminates false facts (and thus all moralities based on them) and leaves us with true facts (which we must confront and accept and construct any true morality from).