What worldview is better for the world? That’s a question I debated with Joel McDurmon of American Vision just the other day in Houston. I’ll announce the video when it goes live. But one of the matters that came up centrally in that debate was moral theory. What worldview will “cause people to behave,” as one might put it? Here I’ll explore that question, and in the process outline all my past work in moral theory, which you can then dive into more deeply wherever you need.
First: The Meta-Problem
If you frame the question as “Which worldview will better get people to behave,” of course, one might then say it doesn’t even matter if the worldview is true. This was Plato’s idea, spelled out and argued in his treatise The Republic: sell the public on a false worldview that will get them to behave. The perfect enactment of the entire blueprint he then laid out for how to do this was the Vatican. And for thousands of years now, we’ve all seen how that worked out.
In reality—as in, out here, where real things happen and don’t conform to our fantasies of how we wish or just “in our hearts” know things will happen—Plato’s project is self-defeating. It leads to misery and tyranny. You cannot compel people to believe false things; and you can’t trick them into doing it, without eventually resorting to compelling them to do it. Because you must suppress—which means, terrorize or kill—anyone who starts noticing what’s up. Which eventually becomes nearly everyone. The resulting system is a nightmare, one that will totally fail to “get people to behave.” Because it inevitably compels all in power…to stop behaving. Simply to try and force everyone else to behave.
That’s the Catch-22 that guarantees any such plan will always fail. The last thing it will ever accomplish is getting everyone to behave. Or producing any society conducive to human satisfaction and fulfillment, either, which is the only end that “getting people to behave” served any purpose for in the first place.
Worse, any system of false beliefs is doomed also to have many side effects that are damaging or even ruinous of human satisfaction, bringing about unexamined or unexpected harms and failures. Because it is impossible to design any epistemology that only conveniently ever discovers harmless or helpful false beliefs. Which means, while you are deploying the epistemology you need to get people to believe what you suppose to be harmless or helpful false beliefs, you and they will also be accumulating with that same epistemology many other false beliefs, which won’t just conveniently be harmless or helpful. “Ideological pollution,” as it were. You need a cleaner source of ideas. Otherwise you just make things worse and worse. Whereas any epistemology that will protect you from harmful false beliefs, will inevitably expose even the helpful and harmless ones as false (a fact I more thoroughly explore in What’s the Harm).
And all that is on top of an even more fundamental problem: what do you even mean by “getting people to behave” in the first place? Deciding what behaviors are actually better for human happiness, rather than ruinous of it, is a doomed project if you don’t do it based on evidence and reason. Because otherwise, you won’t end up with the best behavioral program, but one that sucks to some degree. Because you won’t be choosing based on what truly does conduce to that end, but based on some other, uninformed misconception of it. Which won’t by random chance just happen to be right. You will thus be defending a bad system.
But here’s a Catch-22 again: any process you engage that will reliably discover the behavioral system that actually does maximize everyone’s personal fulfillment and satisfaction with life, will get that same result for anyone else. You thus no longer need any false belief system. You can just promote the true one. And give everyone the skills needed to verify for themselves that it’s true. No oppression. No bad epistemologies. No damaging side effects.
Thus, the answer to “which worldview is best?” is always “the one that’s true.” So you can’t bypass the question of which worldview is true, with a misplaced hope in thinking you can find and promote a better worldview that’s false. The latter can never actually be better in practice. In the real world, it will always make things worse.
“But it won’t solve every problem” is not a valid objection to the truth, either. The truth will leave us with unresolvable problems, because we, and the world, are imperfect, and in practice governed by no perfect being. There is no “perfect solution,” because there is no perfection. All we can do is minimize the defects of the universe. We can never remove them all. Not even the most beautiful false worldview can do that. All it can do is try to hide them. And it will fail at even that.
Second: Getting God Out of It
As I concluded in my section on the Moral Argument in Bayesian Counter-Apologetics, “the evidence of human morality (its starting abysmal and being slowly improved by humans over thousands of years in the direction that would make their societies better for them) is evidence against God, not evidence for God.”
I then noted there that humans have three reasons to develop and adhere to improved moral systems: (1) their desire to live in safer, more cooperative societies; (2) their need to live in and thus maintain safer, more cooperative societies; and (3) due to the psychology of sentient, social animals, they will live more satisfied and fulfilled lives, the more they become in their actions and character the kind of person they admire, and not the kind of person they loathe. Only by self-delusion and false belief can someone continue to be immoral and not despise themselves as the hollow and cowardly villain they’ve become.
That’s why we’ve abandoned nearly everything gods supposedly told us, discovering that in fact it’s immoral, because it ruins human happiness and conduces to no benefit we really want: from slavery (Leviticus 25:44-46) and the subordination of women (1 Timothy 2:11-15), even their legalized rape (Deuteronomy 21:10-12), to the use of murder to suppress freedom of speech and religion (Leviticus 24:11-16 and Deuteronomy 12:1-13:16), or killing people for violating primitive taboos like picking up sticks on Saturday (Numbers 15:32-36) or having sex (e.g. Deuteronomy 22:13-30 and Leviticus 20:13) or shunning people who eat bacon and shrimp tacos (Leviticus 11) or cheeseburgers or lamb chowder (Exodus 23:19). See The Will of God for some Old Testament examples; see The Real War on Christmas for some New Testament examples; and see The Skeptic’s Annotated Bible for more.
In fact, the United States’ Bill of Rights abolished the first three of the Ten Commandments, condemning them by literally outlawing their enforcement; and subsequent legislation has condemned and abolished four more as violating human rights (criminalizing adultery, punishing thought crime, compelling Sabbath observance, and enforcing duties to one’s parents), leaving only the three principles all religions and cultures had empirically discovered were needed for us to enjoy the benefits of a good society long before the Bible was even written: honesty and respect for life and property. (On all this, see my article That Christian Nation Nonsense.)
Always, we’ve realized that what makes us miserable, what makes society dysfunctional, we should no longer do. We should condemn it, as bad for everyone else to endure, and abandon it, as bad even for ourselves to undertake. This has always been a human, empirical discovery, and never revealed from on high (nor even claimed to be, until conveniently long after the discovery was already empirically made). And we’ve been continually looking at the evidence of what actually happens to us as persons, and to society, when we push or abide by certain principles, and then deciding what to abandon and adopt as principles according to the real facts of the world, the actual consequences we observe. Thus we produce continual progress as we abandon false beliefs and adopt what the evidence shows us works. No gods needed. No gods even helping.
What “Is” Morality?
In all cultures, today and throughout history, “morals” have always meant “what we ought to do above all else.” In other words, imperative statements that supersede all other imperatives. To date, despite much assertion to the contrary, we have only discovered one kind of imperative statement capable of having relevant truth conditions, and hence the only kind of “ought” statement that can be reduced to a relevant “is” statement: the hypothetical imperative. “If you want X, you ought to Y.” These statements are routinely empirically confirmed by science, e.g. we can demonstrate as empirically true what you ought to do to save a patient, build a bridge, grow an edible crop, etc.
The “is” form of these statements is something to the effect of “when you want X, know Y best achieves it, and seek the best means to achieve your goals, you will do Y.” That is basically what the information is that we are claiming to convey to you when we tell you you ought to do something. Even if our only implied motive is “we’ll beat you if you don’t comply,” we are still just stating a fact, a simple “is”: that if you don’t do Y we’ll beat you; and if you reason soundly about it, you will not want to get beaten.
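To put that reduction in schematic form (a rough sketch in my own notation, not a formalism the article commits to), the truth-maker of the ought-claim is a factual claim about the agent and the world:

$$\text{ought}(a, Y) \;\leftrightarrow\; \exists X\,\big[\,\text{wants}(a, X) \,\wedge\, \text{best\_means}(Y, X)\,\big]$$

And the corresponding “is” statement about behavior is simply that any agent who wants X, knows Y best achieves it, and seeks the best means to their goals will in fact do Y.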
But usually moral propositions are not meant to be appeals to oppressive force anymore. Because we know that doesn’t work; it always leads to everyone’s misery, as I just noted at the start of this article. Though Christians often do end up defaulting to that mode (“Do X or burn in hell; and if you reason soundly about it, you will not want to burn in hell”), the smarter ones do quickly become ashamed of that, realizing how bankrupt and repugnant it is. So they try to deny that’s what they mean, attempting to come up with something else. But no matter what they come up with, it’s always the same basic thing: “Doing Y gets you X; and if you reason soundly about it, you will realize that you really do want X.”
Whether X is heaven, or the support and comfort of God, or a contented conscience, or the benefits of a good society, or whatever. Doesn’t matter. There’s always an X. And X is always something. Because it always has to be—for any statement about what we ought to do ever to be true (and just not some disguised expression of what we merely want people to do, although even that reduces an ought to an is: the mere “is” of how we wish people would behave). But that means, moral statements are always statements of fact, and thus testable as such. They can be verified or falsified.
But moral imperatives are by definition imperatives that supersede all others. Which means, moral imperatives are only ever true, if there is no other imperative that supersedes them (as then, the imperative that supersedes them is actually what’s moral). But it is logically necessarily the case that, in any given situation, some imperative must always supersede all others. In other words, there is always something you “ought most do.” Which means moral facts always necessarily exist. And would exist, in some form, in every possible world, whether any gods are in that world or not. It’s literally logically impossible to build a world with people in it that doesn’t have true moral facts applicable to them.
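One way to see the logical point (a sketch in my own notation, not anything the argument depends on): for any finite, nonempty set of available options $O$ ordered by a strict “supersedes” relation $\succ$, there must be at least one maximal option, an option nothing else supersedes:

$$O \neq \varnothing \;\wedge\; |O| < \infty \;\Rightarrow\; \exists\, y \in O \;\; \neg\exists\, x \in O\; (x \succ y)$$

Even if the ordering is only partial, or several options tie (a genuine dilemma), the set of unsuperseded options is never empty; whatever falls in that set is what you “ought most do.”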
Attempts have been made to deny or avoid this for centuries, because it makes people uncomfortable to know that the only reason any moral facts can ever be true is that following those directives will maximize the chances of our own personal satisfaction—with ourselves and our lives and the world that we thus, by our own behavior, help create. That sounds selfish. But that confuses “selfishness” (the absence of generosity and concern for others) with “self-interest” (which in fact warrants generosity and concern for others). In fact all moral systems are based on self-interest. Literally, all of them. Including every Christian moral system ever conceived. It’s always only ever true because, in some way or other, adhering to the designated commandments will ultimately make things turn out better for us. Even if not right away, or not so obviously as to readily convince. But that’s always what the argument is. “Look, you should do X, because things will likely go better for you in the long run if you do, trust me.”
Even Kant’s attempt to dodge this consequence by inventing what he called “categorical imperatives,” imperatives that are somehow always true “categorically,” regardless of human desires or outcomes, failed. Because he could produce no reason to believe any of his categorical imperatives were true—as in, what anyone actually ought to do (rather than what they mistakenly think they should do or what he merely wanted them to do). Except a hypothetical imperative he snuck in, about what will make people feel better about themselves, about what sort of person they become by so behaving.
Which means Kant invented no categorical imperative at all. All his imperatives were simply another variety of hypothetical imperative, just one focused on the internal satisfaction of the acting party, rather than disconnected wishes about bettering the effects of their behavior on others. Which really just reduced his whole ethics to what Aristotle had already empirically discovered thousands of years before: we will be more satisfied with ourselves, and hence our lives, if we live a certain way. And as Aristotle correctly observed, we will only reliably live that way, if we cultivate psychological habits—virtues of character—that regularly cause us to. (If you doubt any of this, see my article All Your Moral Theories Are the Same.)
Which has since been verified by science…
Science Gets Right, What Bibles Get Wrong
Whether it’s the nature of human beings, physically and mentally, or the origin and physics of the world and its contents, or the reality of magic and ghosts, or what makes governments or communities function better, or cures or alleviates illness, or pretty much anything else, science has consistently corrected the gross and often harmful errors of Scripture. What Scripture said has turned out to be false, a primitive and ignorant superstition. We found the evidence shows nearly everything is different from what the Scriptures claimed. We should stick with the evidence. Because as evidence itself shows, it always goes better for us when we do.
Sciences already study morality descriptively, of course. For example, anthropology, sociology, and history (which, yes, is also a science, albeit often with much worse data and thus more ambiguous results: see Proving History, pp. 45-49) all study as empirical facts the many different moral systems cultures have created and believe true, and how they’ve changed those systems over time and why. But only descriptively, as a matter of plain fact, without verifying or testing if any of those systems are in any sense true. But science could do more than that, and in some cases already is. For example, psychology, sociology, economics, and political science can also investigate which moral systems actually are true. As in, which behaviors, when adopted, actually maximize the odds of individual life satisfaction and societal functionality.
Notice this does not mean “what moral inclinations we evolved to have.” That we can also study, and have studied. But that is just another descriptive science. That we evolved to behave a certain way in no way entails that’s the way we ought to behave. Thus neuroscience, genetics, evolutionary psychology and biology, when they study human morality, are all doing descriptive, not prescriptive, science. A prescriptive science of morality requires determining, as a matter of fact, what people want above all else (when they reason without logical fallacy from only true facts, and not false beliefs); and, as a matter of fact, what actions or habits have the best chance of achieving that. The findings of this prescriptive science are not likely to be identical to the findings of the descriptive science. Because evolution is unconcerned with human happiness, and not intelligent. It therefore produces a lot of “bad code.”
Thus, before we invented better ways of doing things, when we acted simply as we evolved to be, as savages and ignorant primitives, we invented a Biblical God to tell us all sorts of things were right and good, like slavery, that we later realized were not. Since then we have empirically discovered that we are not happy endorsing or allowing slavery or women’s inequality, that we need democracy and human rights to reduce our personal risk of conflict and misery, and that things go better for everyone when we cultivate respect for personal autonomy and individualism and pursue the minimization of harm, all toward generating good will and contented neighbors. That’s all an empirical fact. And it remains a fact whether gods exist or not.
The role of science in determining moral truth becomes obvious when you start thinking through how we would answer the most fundamental questions in moral theory. Such as, “Why be moral?” This is a question in psychology, reducing to cognitive science and neurophysics. “Why be ‘moral’ as in following behavioral system X, rather than being ‘moral’ as in following behavioral system Y?” Likewise. Both questions reduce to what humans want most out of life (a fact of psychology), and what most likely obtains it (a fact of how the world and societies work). Otherwise, if your moral theory does not come with a true answer to the question “Why obey this particular moral system?” it cannot claim to be “true” in any relevant sense. And yet this is a question of relative psychological motivation. Which is a factual question of human psychology.
Hypothetical imperatives have always been a proper object of scientific inquiry. We test and verify them experimentally and observationally in medicine, engineering, agriculture, and every other field. Moral imperatives are not relevantly different. What things people want most out of life when reasoning soundly from true beliefs is a physical fact of psychology that only science can reliably determine. And what behaviors will most likely get people that outcome for themselves is a physical fact of psychological and social systems that again only science can reliably determine. We should be deploying our best scientific methods on answering these very questions. Meanwhile, we can make do with the evidence so far accumulated.
The Neuroscience of Morality
Brain science has determined that we have several conflicting parts of the brain dedicated to moral reasoning, including parts dedicated to resolving those conflicts. It’s a hodgepodge of ad hoc systems that perform well below perfect reliability, demonstrating that we were definitely not intelligently designed. Because we are social animals—as we can tell from observing other social animals all the way up the ladder of cognitive system development—natural selection has grabbed onto various randomly arrived-at ways of improving our prosociality, so we can work together and rely on each other and are motivated to do so by the pleasure and contentment it brings us. These innate evaluators and drives are imperfect and a mess, but better than being without them altogether.
There are two overall systems, which we know the location and performance of because we can deactivate them with magnetism, and they can be lost from injury, surgery, or disease.
- One of those main two systems judges based on the motives of the acting agent (ignoring consequences). This brain center asks questions like “Did you have a good excuse?” Which was surely selected for because it helps us predict future behavior, without erroneously relying solely on outcomes.
- Another part of the brain judges based simply on the consequences of an action (ignoring motives). This brain center asks questions like “Did you follow the rules we agreed to?” And “Did that turn out well?” And this is also helpful to maintain equity and functionality in the social system, which are essential for it to function well for everyone in it.
Because sexual gene mixing ensures wide variations across any population, some brains more strongly generate results from one system over another, so some people are more inclined to care about motives, while others are more inclined to care only about what the outcome was. But neither alone is correct. Only a synthesis of both can be—as we can confirm by observing which concerns are necessary, and what balance is needed between them, for social systems to function well—as in, to serve well the needs of every member of the system. As even Kant would put it, “What happens to the whole social system when you universalize the rule you just enacted? Is that really the result you want?”
Meanwhile, cognitive self-awareness, which accidentally evolved for many other advantages it conveys, inevitably causes us to see ourselves as we see others. So our brain centers that judge the behavior of others, also judge ourselves by those very same measures. Which is why people are so strongly inclined to rationalize their own bad behavior, even with false beliefs and lies they tell themselves, because they cannot avoid judging their own intentions, and their feelings about themselves unavoidably derive from how they see themselves, what sort of person they’ve become. Similarly, we care about the consequences of our own actions for exactly the same reason we evolved to care about the consequences of others’ actions: regardless of who caused the consequences we are looking at, they have the same effect on the social system we depend on for our welfare and contentment.
That’s how our brains evolved to “see” moral facts, which are really just social facts (facts about how social systems composed of cognitively self-aware agents work, and don’t work), and how we evolved to have the mechanisms to care about those facts. Of course, whether it’s moral facts or material facts, our biologically innate faculties are poorly designed, hence generate error and superstition as often as correct beliefs; our invented tools of logic, math, and science greatly improved on our innate faculties and thus can resolve those errors and discover even more facts (material or moral or any other), with effort and evidence over time.
I’ve already discussed this fact elsewhere (see Why Plantinga’s Tiger Is Pseudoscience). But as that analogy shows, humans also evolved a number of different reasoning abilities in their brains. All are defective and flawed. They were selected for only because they were blindly stumbled upon, and are better than their absence, generating differential reproductive success. But humans eventually used these defective abilities to invent “software patches” that fix their defects, whenever this new “software” is installed and run, via culture and education. Especially formal logic and mathematics, critical thinking, and the scientific method. All are counter-intuitive—evincing the fact that we did not evolve to naturally employ them. But all are more successful in determining the truth than our evolved reasoning abilities. As indeed we have abundantly confirmed in observation.
Therefore, we should not obey our evolved abilities, but our discovered improvements on them—in morality every bit as much as in reasoning generally. Because our evolved moral reasoning is likewise flawed and merely better for differential reproductive success; it was not selected for improving our life satisfaction, nor selected intelligently at all. So if humans want to be satisfied with living (and they must, as otherwise living ceases to have any desirable point), they also need improved moral reasoning. Just as they needed all that other improved reasoning. They need the tested and continually improved technologies. They can’t just rest on the flawed biological systems they were given. Otherwise all the self-defeating failures of those badly-designed ad hoc systems will accumulate.
This is why morality has progressed over time, rather than being perfected the moment we evolved (or, as theists would say, were imbued with) any moral faculties at all. Just as happened with our ability to reason and discover the facts of the universe in general. Evolution is not a reliable guide—to the facts of morality any more than the facts of physics—because it is not intelligently guided. But it points in correct directions, it gets part of the way there, because it is selecting for what works, among the random things tried. We now can see where it fell short, and fix the bugs, glitches, and defects in our cognitive systems, using cultural technology as a tool. Our brains evolved some useful mechanisms for this. But ultimately, reason and learning have to carry us the rest of the way.
This is how we know God had nothing to do with human morality. He did not build it effectively into our brains. And he did not teach us anything correct about how to improve on the faulty systems in our brains to discover morality reliably. We had to figure that all out on our own, and deploy it on our own, taking thousands of years to slowly fix all our mistakes and errors through trial and observation.
The Psychology of Morality
We know the most about moral truth, and in particular why people actually care about being moral, from the science of psychology, in particular child development studies and life satisfaction studies (“happiness” research). In the latter case, strong or substantive correlations exist between positive personality traits, which overlap common moral virtues, and happiness (see, for example, “Correlation of Personality Traits with Happiness among University Students” in the Journal of Clinical and Diagnostic Research; my bibliography on the correlation between happiness and moral virtues in The End of Christianity, p. 425 n. 31; and related works in the bibliography below).

Meanwhile, what we’ve learned, and confirmed with abundant scientific facts, is that moral behavior in children starts as a fear-based conforming to authority. It is at that most childish stage driven simply by a desire to avoid being punished. If child development is allowed to proceed effectively (and isn’t thwarted by such things as mental disease or toxic parenting), this fear-based reasoning gradually develops into a sense of empathy, which begins to self-motivate. We start to care about the opinions of others, more than about merely whether we will be punished. This then develops into agent-directed self-realization: we learn to care most about the sort of person we want to be. In other words, no longer the opinions of others, but our opinion of ourselves matters most. You then start being moral because you like being a moral person.
So when we ask the question “Why be moral?” science has already answered that question. To paraphrase Roger Bergman (see the closing bibliography), when we develop into fully realized, healthy adults, we all actually answer the question “Why be moral?” the same way: “Because I can do no other and remain (or become) the person I want to be.” In other words, we must be moral, to avoid the self-loathing (or delusional avoidance of it) that entails our personal dissatisfaction. And when our sense of ourselves, and of what our actions cause in the world and its significance, is freed of ignorance or false belief, when it is constructed of a true and accurate understanding of what actually is the case, what we conclude in that state is the best behavior, will actually in fact be the best behavior.
This is because we need that behavior in the social system, to receive the benefits we need from that social system. And when we see in ourselves what we see in others, what we see in ourselves will be what is, in actual material fact, either conducive or destructive of human happiness—whether directly, or by propagating or preventing dysfunctions in society. This is how human psychology not only developed to assist us in building cooperative societies to benefit from, but how it necessarily must develop to have that result. No social system will reliably work to anyone’s benefit, without such psychological systems in an individual’s brain. And no individual without those systems will ever find satisfaction in a social system. Or, really, anywhere.
Note that we don’t need any god to exist, for this fact to be true. It’s always true. In every possible universe. Once you have a self-aware animal dependent on social interaction for its welfare, statistically, that animal will always benefit from this kind of psychology, and, statistically, will always suffer to the degree that it lacks it.
Game Theory, For Real
We have even confirmed all this mathematically.
Game Theory was developed to mathematically model all possible arrangements of social interaction, allowing us to test the outcomes of different strategies when pitted against any others. One should not let the name mislead; that Game Theory describes all human social interaction does not mean human social interaction is “merely a game.” But rather, that no matter what metric we choose to judge by, interactions either have no consequence, or help or hurt the agent deciding what to do. And this can be modeled in the same fashion as a game, with a score, and winners or losers (and no zero sum is entailed by this—as anyone who has played cooperative games knows, some games can make everyone a winner).
When social systems are modeled this way, one particular strategy was found in the 1970s to be the most successful against all competitors. It was called Tit for Tat. The basic strategy it entailed was to “Default to Cooperate” (in other words, always start out being kind, and revert to being kind after any alternative action), but always “Punish Defectors” (in other words, be “vengeful,” in the sense of punishing anyone who takes advantage of your kindness to harm you). Then revert to kindness when the other does.
As Wikipedia’s editors have so far put it:
In the case of conflict resolution, the tit-for-tat strategy is effective for several reasons: the technique is recognized as clear, nice, provocable, and forgiving. Firstly, It is a clear and recognizable strategy. Those using it quickly recognize its contingencies and adjust their behavior accordingly. Moreover, it is considered to be nice as it begins with cooperation and only defects in following [a] competitive move. The strategy is also provocable because it provides immediate retaliation for those who compete. Finally, it is forgiving as it immediately produces cooperation should the competitor make a cooperative move.
Iterated computer models have measured the long term accumulated gains and losses for agents who follow all different strategies, even extremely elaborate ones, when faced with agents following any other strategies. And Tit for Tat always produced the highest probability of a good outcome for agents adhering to that strategy. No other strategy could get better odds. And this finding must necessarily hold for all real-world systems that match the model. Because the gains and losses modeled are an analog to any kind of gain or loss. It doesn’t matter what the actual gain or loss is. So these computer models will necessarily match real world behavioral outcomes, no matter what metric you decide to use for success. And indeed, we’ve tested this repeatedly in observation, and that has proven to be the case.
But there’s a twist. This simple Tit for Tat strategy (“Default to Cooperate,” but “Punish Defectors”) has since been found to have flaws. Small tweaks to the strategy have then been proved to eliminate those flaws.
What we’ve found is twofold so far:
- First, we need to add more forgiveness, but not too much, to forestall “death spirals,” what we would recognize as unending feuds, where punishment causes a returned punishment, which causes a returned punishment, and so on, forever. The interacting agents just keep punishing each other and never learn. (You can probably think of some prominent examples of this playing out in the real world.) To prevent that defect, one must adopt a limited amount of proactive (as opposed to responsive) forgiveness. In other words, someone at some point has to stop punishing the other one, and just reset back to kindness-mode. This means, rather than always punishing defectors, sometimes we should trust and cooperate instead of retaliating. Sometimes we should meet hostility with kindness.
- Second, to prevent that feature from being exploited, we need to add some spitefulness, but not too much, to defeat would-be manipulators of forgiveness, by switching back to a never-cooperate strategy with repeated defectors. In other words, at some point, you have to stop forgiving. Once someone burns a neighbor again after they were kind to them, you no longer forgive them. Which encourages people to not do that in the first place, thus eliminating an obvious exploit.
The interesting thing here is that this was all demonstrated mathematically with iterated computer models, measuring the statistical outcomes of countless competing strategies of human interaction. And yet it ends up aligning with what humans have empirically discovered actually works. Thus verifying why these are the best behaviors to adopt. That fact we thus know is an inevitable emergent property of any social system. In any possible universe. No god needed.
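To make the kind of modeling being described concrete, here is a minimal sketch of an iterated prisoner’s dilemma round-robin (the payoff values, round count, forgiveness rate, and strategy names are illustrative assumptions of mine, not the parameters of Axelrod’s actual tournaments):

```python
import random

# Standard prisoner's dilemma payoffs (illustrative values):
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_history, their_history):
    return "C"

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # "Default to Cooperate," then copy the opponent's last move ("Punish Defectors").
    return their_history[-1] if their_history else "C"

def generous_tit_for_tat(my_history, their_history, forgiveness=0.1):
    # Same as Tit for Tat, but occasionally forgive a defection proactively,
    # which breaks the "death spiral" of endless mutual retaliation.
    if their_history and their_history[-1] == "D":
        return "C" if random.random() < forgiveness else "D"
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Iterate the game and return each strategy's accumulated score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {"always cooperate": always_cooperate,
                  "always defect": always_defect,
                  "tit for tat": tit_for_tat,
                  "generous tit for tat": generous_tit_for_tat}
    # Round-robin tournament: total each strategy's score against every strategy.
    totals = {name: 0 for name in strategies}
    for name_a, strat_a in strategies.items():
        for name_b, strat_b in strategies.items():
            score_a, _ = play(strat_a, strat_b)
            totals[name_a] += score_a
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {total}")
```

Running a round-robin like this, the reciprocating strategies (Tit for Tat and its generous variant) typically accumulate higher totals than unconditional defection or unconditional cooperation, which is the mathematical point at issue; the exact numbers will vary with the payoffs, round counts, and strategy pool chosen.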
Morality Is Risk Management
People want ultimate guarantees. But there are none. There is no access to a perfectly just world. This world was not made to be one. And there are no entities around capable of producing one. And this is why no behavior absolutely guarantees the best outcome. All we have is risk management. Moral action will not always lead to the best outcome for you; immoral action sometimes might instead. But the probabilities won’t match. Moral behavior is what gives you the highest probability of a good outcome. Immoral behavior, by definition, is what doesn’t.
It’s like driving while drunk: sure, you may well get home safely, harming neither self nor others; but the risk that you won’t is high. And it’s the risk of that bad outcome you ought to be avoiding. Eventually, if you keep doing it, it’s going to go badly. And statistically, it has a good chance of going badly even on the very first try. That’s why we shouldn’t do it. Not because it guarantees a bad outcome. But because the risk of a bad outcome is higher than is worth the deed. There are alternatives that risk and cost less in the long run.
Or like in the early stages of the science of vaccines: a vaccine may have had a small probability of causing a bad reaction, but the probability of acquiring the disease without the vaccine is higher. Thus you have to choose between two bad outcomes: a small probability of being hurt, or a higher probability of being hurt. It makes no logical sense to say that because you can get hurt by the vaccine we should not take the vaccine. Because if we don’t, we will have an even greater chance of being hurt by the disease it defends us against. The argument against taking the vaccine entails an even stronger argument for taking the vaccine. So, too, all potentially-exploitable moral action.
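The comparison being made is just one of expected harm (a schematic illustration with placeholder numbers, not real vaccine statistics):

$$E[\text{harm} \mid \text{vaccinate}] = p_{\text{reaction}} \cdot h_{\text{reaction}} \quad\text{vs.}\quad E[\text{harm} \mid \text{decline}] = p_{\text{disease}} \cdot h_{\text{disease}}$$

If, say, $p_{\text{reaction}} = 0.001$ and $p_{\text{disease}} = 0.1$ with harms of comparable severity, declining carries roughly a hundred times the expected harm. The rational choice is whichever option minimizes expected harm, not whichever option has zero risk, because no such option exists.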
Risk management also means we ought to maximize the differential every chance we get. This is the responsibility of society, to create a more just world, precisely because there is no God who did so, or ever will. Hence we should make vaccines safer—so we did. Now, the probability of bad outcomes is trivial, and the outcomes even when bad, are minimal compared to the effects of acquiring the disease. Moreover, we administer vaccines in medical settings and with medical advice that ensures bad reactions are quickly caught and well treated. This exemplifies a basic principle: a moral society should be engineered so that the society has your back when moral action goes badly for you. Just as we do with medical interventions. Comparably for drunk driving: obviously a better social system is one that rewards people who don’t drive drunk by helping them and their vehicles get safely home, thus reducing or eliminating even the incentive to drive drunk in the first place. A principle that can be iterated to every moral decision humans ever have to make.
In other words, though “good actions have a small chance of bad effects” and “bad actions have a small chance of solely good effects,” we can continue to engineer the social system so that these become less and less the case: making it rarer and harder to profit from misbehavior, and rarer and harder to suffer from acting well. We ought to do this, because our own life satisfaction will be easier to obtain in a system built that way. And rationally, there can be nothing we want more than that.
And note what this means…
We have confirmed quite abundantly that all there are are natural objects and forces—and us. That’s all we have to work with. No superman is coming to save us. We will not live forever. There is no second life. There is no future perfect justice. Our fallible brains and nervous systems are our only source of information and information processing. And only evidence and what we really want out of life can determine what is right and wrong for us, both morally and politically. Because our clumsily built social systems, the systems we have to make and maintain on our own initiative and intelligence, and by our own mutual cooperation and consent, are our only means of reliably improving justice and well-being. For ourselves as much as anyone else.
This is why no other worldview can compete with ethical naturalism. Ethical naturalism is simply the only worldview that can make any reliable progress toward increasing everyone’s ability to live a satisfying life, a life worth living, a life more enjoyable and pleasant to live. And life simply has no other desirable point than that.
So, What Then?
To really understand where I’m coming from in moral philosophy, you’d do best to just read up on my whole worldview, which all builds toward this fact, in Sense and Goodness without God. And if you want the highly technical, peer reviewed version of my take on moral theory, you need to read my chapter on it in The End of Christianity.
But if you want to start instead with briefer, simpler outlines, you can follow my Just the Basics thread online; or if you want to dive even deeper into the questions and technicalities, you can follow my Deeper Dive thread online. Both are laid out below in what I think is the best order to proceed in. Following that, is a bibliography on the best and latest science and philosophy of moral reasoning (on which all my points above are based).
Moral Theory: Just the Basics
- My Debate with Ray Comfort: Currently the best, brief survey of what I believe and why; and of how Christianity offers no valid alternative to it, but has only a moral theory based on false beliefs and primitive superstitions instead.
- Darla the She-Goat: Using a simple parable and colloquial approach, this essay explains why moral facts are not simply what natural evolution left us with, but are facts about what does and doesn’t work in social systems of cognitively self-aware animals, facts “evolution” is blind to and unconcerned with, though it can still stumble around the edges of them insofar as these things are useful for survival.
- Response to Timothy Keller: Here I explain, in response to denials by Christian apologists like Keller, how human rights and moral systems are just technologies humans invented to better their lives, and need for their own personal life satisfaction and fulfillment, simply because of what we are, and how the world works.
- Objective Moral Facts: A good primer on what it means for moral facts to be “objective” vs. subjective, absolute vs. relative, and so on, and what this all means for conclusions about morality when we admit there is no God.
- How Can Morals Be Both Invented and True: If morality is just a technology, something we just invented, how can it then be “true”? What does “being true” mean of anything we invent to better obtain some goal? Understanding that is key to understanding what moral facts are.
- Your Own Moral Reasoning: What you can do with all the above information, and all we’ve learned from philosophy and science so far, to develop the best moral system for yourself, based on evidence and reason, and the discovery of what actually maximizes your own life satisfaction, fulfillment, and contentment.
Moral Theory: Deeper Dive
- Moral Ontology: This goes into what the material facts of the universe are that moral facts correspond to. If moral facts exist, what are they? What are they made of? In what way do they exist in a purely physical world?
- Goal Theory Update: This will lead you to the Carrier-McCray debate, where two atheists debated what the underlying basis of moral truth is, while also tying up every single technical thread that debate left unresolved by the time the clock ran out.
- Rosenberg on Naturalism: This goes into how my findings in moral philosophy defeat all more nihilistic versions of atheism, and how the latter are not based on scientific fact or valid logic.
- All Your Moral Theories Are the Same: Demonstrates how all moral theories ever proposed in philosophy or theology, are really the same moral theory, just looking at it from a different angle. It’s consequentialism all the way down. It’s hypothetical imperatives all the way up. And there simply is nothing else.
- The Moral Bankruptcy of Divine Command Theory: Detailed demonstration of the failure of Christianity to develop any coherent or defensible moral philosophy, but only the facade of one, a facade that merely conceals what is in fact just ethical naturalism mixed in with false superstitions.
- Shermer vs. Pigliucci on Moral Science: Many atheists, like Michael Shermer and Sam Harris and myself, have argued moral philosophy should be transformed into scientific research, because moral facts are empirically discoverable by science. But often atheists who argue this, forget to address a key part of the necessary research program. It’s not just the science of consequences we must develop further. But also the science of ultimate motivation, discovering what actually maximizes human life satisfaction (and what doesn’t, or is only falsely believed to).
- What Exactly Is Objective Moral Truth? This goes into more detail on why discovering moral facts makes sense as a scientific research program, against many common objections, and in particular explaining what it means for anything to be “objectively” true in a scientific, factual sense. I then expanded on this further in my responses on the same point to Babinski and Shook. And even further in response to Born, the critic Sam Harris deemed his best opponent on the point.
- Are Moral Facts Not Natural Facts? Response to esoteric atheist theories of morality that try to claim moral facts are not facts of the natural world but some other spooky something or others that they clearly are not, in the process exposing how moral philosophy is often choked by semantic gaffes and word games.
- Plantinga’s Moral Arguments for God: Refutation of several arguments Christian apologist Alvin Plantinga attempted in order to show that moral facts being true is evidence a God exists. In fact, it’s just evidence social animals exist with cognitively evolved self-awareness, giving them the power to think about, and thus work out, what actually makes their life better and more satisfying to live.
External Bibliography
Don’t just take my word for it. Check out what the most science-based experts are already saying and have already demonstrated:
- Roger Bergman, “Why Be Moral? A Conceptual Model from Developmental Psychology” in Human Development 45 (2002): 104-124
- Mark Fedyk, The Social Turn in Moral Psychology (MIT Press)
- Patricia Churchland, Braintrust: What Neuroscience Tells Us about Morality (Princeton University Press)
- Darwall, Gibbard, and Railton, Moral Discourse and Practice (Oxford University Press)
- Darcia Narvaez & Daniel Lapsley, Personality, Identity, and Character (Cambridge University Press)
- Robert Axelrod, The Complexity of Cooperation (Princeton University Press), which is the sequel to his 1984 treatise The Evolution of Cooperation (revised in 2006).
And especially the Moral Psychology series published by MIT Press and edited by ethical naturalist Walter Sinnott-Armstrong:
- Volume 1: Evolution of Morality
- Volume 2: The Cognitive Science of Morality
- Volume 3: The Neuroscience of Morality
- Volume 4: Free Will and Moral Responsibility
- Volume 5: Virtue and Character
-:-
When I was much younger, freshly free of religion, another liberated one told me that there was in reality no such thing as good or evil… for many years my mind balked at that thought… until I discovered the hidden ideal within. Most people look out into the world and see good and evil as plainly as they see beauty and ugliness… they simply see it. It seems to be a property of the universe, just there! But in the 1980s Omni magazine, the science arm of Penthouse magazine, did a worldwide study to find out if beauty was really just in the eye of the beholder… they discovered there is an algorithm within our brains that detects symmetry and proportion in faces and assigns beauty and ugliness accordingly, and it was pretty universal. Many have theorized it refers to genetic health for reproductive purposes. (The invention of beer goggles messed that up royally.) I began to see that good and evil were something similar… not based on an algorithm but rather on a buried ideal fantasy alternative world that we each construct out of our deep immersion in religion, culture, tradition, and direct experiences… it sits behind our conscious mind much as our beauty detector does. We don’t question its judgment until someone makes us aware of it… and this is, I believe, the main reason why you cannot convince people, easily, to abandon their personal views of morality for a sane view of rational behavior and thought… people don’t see their hidden ideal and you just seem to be nuts or weird for seeing reality askew. I have told people… any judgment you make, ask yourself, compared to what? And eventually you will bring this hidden ideal to light and along with it the bucket list it gave you… once visible you can see if that bucket list actually makes sense or if it’s just a carry-over of ancient myths and legends unquestioned for millennia.
Not that relevant a concern here, but I should caution that the bioscience of aesthetics has evolved a lot since then and is no longer so simple. But the overall gist remains true: there are some aesthetic reactions that are genetically innate and were selected for by evolution. But these get modified and expanded upon by cultural and idiosyncratic environmental effects as well. So beauty does remain in the eye of the beholder. There is no normative aesthetics. Not even symmetry.
I really wanted to stop reading your article at the outset when you clearly misrepresented 1 Timothy. Clearly.
It’s interesting that I was an atheist for 30 years before I began freely thinking for myself and became a Christian. I look forward to watching this debate. Saving it for the Thanksgiving weekend.
Sounds like you’ve deluded yourself. Not letting women teach or have authority over a man is by definition the subordination of women. As all real, mainstream experts in the text and language concur. To go against plain language and all objective expertise is not freedom. It’s a prison you’ve chosen to lock yourself in.
It seems to me that “But it is logically necessarily the case that, in any given situation, some imperative must always supersede all others. In other words, there is always something you “ought most do.” is not correct – or at least needs an argument. What if the ordering on the merits of actions is a partial order? Or, in ethical terms, surely genuine moral dilemmas are possible? Note that this is purely a formal remark about the structure of the ordering.
Moral dilemmas entail neither option is more imperative than the other. So that makes no difference to the conclusion. Even if all moral decisions were dilemmas (and that would be weird), that would only mean you have two or three options, for example, superseding all other options. It would be indifferent which you then chose, of the supreme imperatives available. But that’s still superseding all other options. Thus there is always a moral fact of the matter.
Just seen the debate. McDurmon’s main contention seems to be that morality can’t arise in an amoral universe.
Yours is that morality is invented. Is that what J. L. Mackie said? Moral properties don’t exist in themselves? Thanks.
No. Mackie failed to grasp that morals are hypothetical imperatives and thus as real as all other hypothetical imperatives. They exist as properties of (hence, true statements about) social-cognitive systems. See my article How Can Morals Be Both Invented and True? (And for further elucidation after that, read the article here that you are commenting on. It actually thoroughly answers your question already.)
That wasn’t however McDurmon’s main contention; he was trying to argue that an invented morality leads to decline in human welfare. He never quite landed that argument however. Nor did he defend any workable alternative. As Christianity instead appears to lead to decline in human welfare; and he had to admit that and argue only his rare fringe version of Christianity avoided that outcome, but then never presented any evidence even his exception claim was true, or that his worldview could ever be implemented—since it required the operation of the Holy Spirit that has demonstrably failed to effect the outcome he claimed it should have. Whereas we have abundant evidence that improved education and parenting and social system design do effect the better outcome with a secular moral system.
PS: Authors like Baggett/Walls (Good God: The Theistic Foundations of Morality) cite Sidgwick, who points out morality doesn’t always conduce to our happiness.
Read the article you are commenting on here. In particular the “risk management” section. It already answers that point. There is only one kind of morality anyone has any valid reason to care about: one that increases the likelihood of good outcomes. Every other system we have sufficient reason to reject. It therefore cannot claim to be “true” in any sense relevant to human life.
Hi, Dr. Carrier. I was wondering what you thought about Wikipedia’s page on metaethics.
Q1. Do they do a good job of describing what metaethics is about? I think that they’re clearer than Stanford Encyclopedia and Internet Encyclopedia. The only problem is that… it’s Wikipedia.
Also, I’d appreciate if you answered as many of these follow-up questions as you’d like:
Q2. Is it fair to call your Goal Theory of Moral Value a form of Ideal Observer Theory? If so, wouldn’t that make you an Ethical Subjectivist? (FYI, I mostly agree with Goal Theory, I just have this little nitpick.)
Q3. What is the difference between Moral Epistemology vs Normative Ethics? To me, they both seem to answer the question of “How do I figure out right from wrong?”
Looking forward to your reply.
I haven’t vetted that Wikipedia page thoroughly but it looks decent enough, though it makes some mistakes (see below). It also lacks some things (e.g. it should have a section on the internalism/externalism debate).
But in general I find philosophical labels badly wrought (that’s a criticism of the field, not Wikipedia). So questions about “what labels apply to x” are usually a waste of time; there is no coherent or consistent definition of terms in the field itself, so labels more often lead to failures of understanding rather than actual understanding (through what I call “the baggage fallacy”).
Moral Epistemology is exactly that: the epistemology of moral knowledge (“How” do you know what’s normative); Normative Ethics is the moral knowledge itself (“What” is normative). You need the one to get the other; but they involve answering different questions (same as the relationship between the epistemology of physics, and just physics).
So with that understood, you appear to be conflating epistemology with ontology. Ideal observer theory is an epistemology; as such it can be the epistemology of any moral ontology, not just subjectivism. So those two are unrelated; and thus where Goal Theory falls in either case is two separate questions.
And here we get to the problem with labels in general: Wikipedia represents definitions that do exist in the field; but so many other definitions exist in the field that you really can’t say “objectively” that any theory falls under any label. All you can ask is: does a theory fall under a given label based on what this source has defined that label to be. You will immediately fall into error if you assume the label is being used the same way somewhere else. For example, concluding Goal Theory is X according to Y (let’s say, Wikipedia), then seeing that Z (let’s say, Wielenberg or the IEP) attaches a bunch of things to X that Y does not, and thus falsely concluding Goal Theory is also those other things that Z attaches to that label, which it’s not; hence, baggage fallacy: one incorrectly attaches the “baggage” someone else throws onto that labeling cart, after having used a different baggage cart to label it by.
So I don’t find terms like “Ideal Observer Theory” or “Subjectivism” to be in any way useful.
Wikipedia describes IOT as a rationalist theory in distinction to an empiricist theory; and yes, some philosophers do that. But others don’t. Goal Theory is empiricist. So insofar as we “decide” to say IOT is rationalist and not empiricist, Goal Theory is not IOT. And yet Goal Theory uses IOT to formulate the hypotheses it tests empirically. Wikipedia simply does not discuss that option. And IMO, most philosophers have never properly thought this through so as to realize hypothetical imperativism is not rationalist (any more than science is, despite science itself being an IOT, if you think about it: it forms hypotheses from an IOT perspective, then tests them empirically).
Likewise, Wikipedia categorizes Subjectivism as anti-realist. As many philosophers do. But not all do. Still, insofar as we “decide” that we shall mean by “Subjectivism” a form of anti-realism, Goal Theory is not Subjectivist, because it is a form of moral realism. This is a very common mistake philosophers make, so you can’t fault Wikipedia for it. Wikipedia defines moral realism as the view that moral imperatives are “mind-independent facts, that is, not facts about any person or group’s subjective opinion, but about objective features of the world.” This is a common definition in use. Notice the word “opinion.” Philosophers will often “baggage fallacy” their way from “opinion” to “anything whatever to do with the mind” and thus confuse numerous realist theories as anti-realist. Goal Theory is not based on opinions; to the contrary, it asserts opinions can be objectively false (the most universally correct definition of moral realism), and proven so with evidence. It just so happens some of that evidence includes objectively physical facts about an agent’s mind, including their values and desires (which one ought not confuse with “opinions,” yet nevertheless many philosophers do). Since you can have a false belief about what your values are (and thus, e.g., falsely believe you value money more than joy, say, when as a psychological fact of yourself that actually isn’t true), values are not opinions. They are real facts about yourself independent of what you believe.
So here I notice Wikipedia has unintelligibly placed IOT as a sub-category of conventionalism, when in fact they are exact opposites; and weirdly puts divine command theory under anti-realism, when in fact it is usually categorized as realist, because one can be “wrong” in one’s opinion or belief or conventional assumptions about what the divine commands. I get the “trick” the editor is aiming at, classifying morality from the POV of the divine (and thus reducing objective moral facts to the mere opinions of an authority), but that is the incorrect POV: moral agents are humans, not God. Yes, maybe morality is for God arbitrary opinion (this is indeed a serious problem with divine command theory, albeit there are attempted solutions, and Wikipedia doesn’t discuss them here); but for us, it’s objective fact (it is a real, objective fact what God commands).
That kind of mistake is, however, typical across the whole field of moral philosophy. There is hardly a philosopher who doesn’t fuck things up into a confusion like this about one thing or another. Hence my problem with labels.
For further perspective see my articles:
Open Letter to Academic Philosophy: All Your Moral Theories Are the Same
Objective Moral Facts
The Moral Bankruptcy of Divine Command Theory
Yet again, the response is well worth the wait. Thank you.
Hi, Dr. Carrier. I have more questions to ask. Be as brief or as thorough as you’d like.
Q1. What is your view on antinatalism? I’ve seen you discuss it in a random comment somewhere but I forget where it is. Where do you think antinatalists go wrong, if wrong at all?
Q2. Do you have an article (besides this one, of course) where you argue against cultural relativism? Both why it’s false and why if it were true, it would be problematic?
Q3. In Sense and Goodness, you answer Moore’s Open Question with ultimate human happiness/satisfaction. I’m paraphrasing but you say that it is the foundation of morality because without it, life becomes meaningless. The question, I think, then becomes “why SHOULD life have meaning, regardless of whether or not it does?”
Looking forward to your reply.
Q1: The comment you refer to is here. It answers your question.
Q2: See the subsection on “relativism” in Will AI Be Our Moses, Point [2] of my Babinski rebuttal, and my article on Objective Moral Facts.
Q3: Your question is itself meaningless. There is no sense in which things “should” have meaning; they either do or they don’t. This is a question of fact. This fact is what gives “should” propositions truth value; not the other way around.
That life has meaning, or is capable of having meaning, to a person aware of living it—and that life only has meaning if it is or can be sufficiently satisfying to them—are observed facts. Absent which, a person cannot answer the question why they should continue living or doing anything at all, because there would be no genuinely motivating reason to. At most you can debate the factual accuracy of a person’s beliefs about all this (e.g. a suicidal person whose lack of a reason to go on living is based on false beliefs about themselves or the world; cf. SAG, index “suicide”). But that’s again a dispute over is, not ought. Ought then follows, once you’ve established what actually is.
Insofar as you mean to ask “Why do we need meaning as a reason to go on living rather than something else?” the question is nonsensical; a thing’s having meaning is by definition a reason to acquire or maintain it, so your question would be like asking “Why do we need a reason to go on living rather than a reason to go on living?” The meaning of life is by definition everything you live life for.
I’m actually composing an article now that just happens to address many of these issues. It will be out in a week or two (if I don’t finish it tomorrow). It will incorporate what I’ve already said…
Q1: See my comment here.
Q2: If you actually mean moral relativism, then see Objective Moral Facts. If you instead mean cultural differences vis-a-vis epistemology (science, logic, math) or its concomitant effect on industry (leveraging up a civilization’s capabilities and productivity and “energy return on investment”), I don’t have anything comprehensive, but you can get an idea from No, Tom Holland, It Wasn’t Christian Values That Saved the West and Why Plantinga’s Tiger Is Pseudoscience.
Q3: The question itself is a category error. Meaning isn’t the sort of thing anything “should” have (apart from conditions entailing it). Hence it simply either exists or it doesn’t.
Perhaps what you meant to ask is something like “can’t we have morality even if nothing has any meaning,” to which the answer is, “Actually, no, since if nothing means anything, then nothing is worthwhile, and if nothing is worthwhile, there is no reason to do anything, and if there is no reason to do anything, then there is no reason to do ‘morality’ either.”
Thus it is not that life “should” have meaning; it’s that life does have meaning, and therefore there are things worth doing, and therefore necessarily there are moral facts. For moral facts are just “that which you ought most do,” and it is logically necessarily the case that if there is anything you ought to do, then there is something you ought most do (just as, if there is a quantity of apples in each of ten baskets, there is a largest quantity of apples in at least one of those baskets).
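To make that last logical step explicit, here is a minimal formalization of it, a sketch in my own illustrative notation rather than anything quoted from elsewhere: treat the things worth doing as a finite, nonempty set of options ranked by how much they are worth doing; any such set necessarily contains a maximal member, i.e., something you ought most do.

% Illustrative sketch only. Assumed notation: S = the (finite, nonempty) set of
% actions worth doing; x \preceq y means "y is worth at least as much as x."
S \neq \varnothing \;\wedge\; S\ \text{is finite}
  \;\Longrightarrow\;
  \exists\, m \in S \;\; \forall\, x \in S :\ x \preceq m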
But you may be tripping yourself up on what “meaning” means here. To have meaning in this sense just means to have value, and a value worth pursuing. Ergo, if nothing has meaning, then nothing has value, much less a value worth pursuing. And all that’s required for life to have any meaning for you is anything that has value to you that can only be obtained or facilitated while you are alive. See my discussion and links in §9 here.
Hi, Dr. Carrier. I just found out that Wikipedia actually does cover Internalism vs. Externalism: it has its own dedicated page, although you were right that it doesn’t have its own section on the metaethics page, and I’m not 100% sure it’s the same kind of Internalism you’re talking about. But it’s still there, I suppose.
Hi, Dr. Carrier. I’d like another (hopefully) quick clarification if you don’t mind. As is clear from your above comments, you’re not a big fan of labels (understandably) but please bear with me.
My question is: What label would you use to describe your moral worldview/theory/outlook?
You use terms such as ‘Ethical Naturalism (EN)’, ‘Goal Theory (GT)’ and ‘Secular Humanism (SH)’, but which of these is what you would call your view of morality as a whole? EN (I think) is your position on moral semantics/moral ontology, but regarding the others, I don’t quite know. EN deals with metaethics, but GT and SH confuse me. As far as Normative Ethics goes, you’re an Ethical Egoist (I think). So where do GT and SH fit in? Would SH be your general approach to Applied Ethics? But then what would GT be, Ethics in general?
Sorry if this is a lot to unpack.
I call my distinct moral theory Goal Theory. This prevents anyone mis-assigning claims to it by choosing any other label for it. There is only one Goal Theory, so no one (other than the lazy) can confuse me as advocating something I didn’t: they simply have to read what my position is. The label cannot mislead.
For instance, Goal Theory is technically a form of Desire Utilitarianism, but saying that is misleading because people assume a dozen things are entailed by desire utilitarianism that are not, and thus immediately mistake my theory for saying what it doesn’t: hence producing an entire debate between Goal Theory and Desire Utilitarianism. Thus illustrating the complete uselessness of labels.
The same happens with “Ethical Egoism,” rendering that phrase even more useless. It’s like saying “a mammal lives in my house”: the one thing listeners never think to realize is that I mean me, because they mistake “mammal” as meaning “nonhuman.” This is why labels are so useless in philosophy; philosophers rarely use them in any disciplined way, but simply stumble all over baggage fallacies, like someone who constantly forgets that we are mammals.
Meanwhile, Ethical Naturalism refers to any naturalist moral theory, not just mine.
And you are right, Secular Humanism refers to the ethical content of Goal Theory, not to Goal Theory itself.
This is the important distinction between ethics and metaethics: Kantianism and Utilitarianism are metaethical theories; but a Kantian or a Utilitarian can use those metaethical theories to defend everything from anarcho-capitalist to marxist ethical systems, or from Christian to Secular Humanist ethical systems, so knowing they are a Kantian or a Utilitarian actually tells you nothing about their ethics, as in what specific things they will conclude are moral or not.
Moreover, my ethical system is a kind of secular humanism; it is not coterminous with all kinds of secular humanism. It is thus not “Secular Humanism”; it is secular humanist.
So I think you may be confusing mammals with species of mammal here. Another reason labels are useless. Look how far astray they have led you already. Attempting to peg a label to something is usually a bad sign in philosophy: it means you want to force one position into some other position. Better to ask why you want to do that, rather than make any attempt to actually do it. This shouldn’t be the case (science gets along fine labeling things without generating endless baggage fallacies); but alas, philosophy has failed to establish any disciplined practice of employing labels properly. So it is best to try never to use them when you can do as well without them.
Per your suggestion, I picked Moral Psychology: Free Will and Moral Responsibility. After finishing it, I decided to check out some of the reviews. One thing led to another, and I came across this paper by Stephen Kershnar & Robert Kelly (https://philarchive.org/archive/KERRSN). In their conclusion, they claim that nobody is morally responsible because there is no such thing as a “responsibility-maker.” Here are two quotes from the conclusion of their paper:
What is the responsibility-maker of your Goal Theory? Does it have one?
Knowledge and intent.
That is the responsibility-maker everywhere for everyone in all real world situations regardless of their philosophy.
These philosophers are standard ivory tower composers of useless nonsense. They are disregarding what words like “responsibility” even mean (in a court of law, for example), and so their opinions have no relevance to reality. They can be dismissed as simply non-responsive.
See my article Free Will in the Real World … and Why It Matters.
I came up with an observation that seems to contradict moral realism. Consider this pair of examples:
1) Rob is deceived into thinking that the red button in front of him cures cancer. However, in reality, this button actually kills 1000 people. Being deceived, Rob presses the button and kills 1000 people. It seems like in this case we don’t think that Rob is blameworthy for his action.
2) Rob is deceived into thinking that killing 1000 random people is morally good. He knows that the button will kill 1000 people and still decides to press it. In this case, it seems, some people (at least those I asked, like my coworkers) are hesitant to absolve Rob from moral blame.
If moral facts actually exist, then it seems possible not to know them, just as we don’t know some non-moral facts. Why, then, does it seem (to some people at least) that not knowing moral facts absolves less blame than not knowing non-moral facts? I am very interested in your response, because you said that moral facts are facts of the natural world, so it seems like they should behave just like non-moral facts.
1) Indeed, Rob legally lacks knowledge and intent and thus would be acquitted in a court of law. That’s why no one is held accountable for these things. Only choosing (actus reus) with knowledge and intent (mens rea) is morally (or in many cases legally) wrong.
There are actual real world scenarios like this. For example, a contract killer conceals a live but unconscious person in an industrial trash compactor, and the morning shift completes the compaction unaware that carrying out their regular duties was killing a person. No court would convict them even of wrongful death.
In terms of moral reasoning, moral knowledge is always tautologically limited to what is known. You can only ever know what the right thing to do is given what you know. This is based on a basic ethical principle of force majeure: you cannot be liable for not doing something that was impossible to do. If it’s impossible to know the actual consequences of pushing a button, then the one who pushes it cannot be liable for it. In philosophical terms, any imperative proposition of the form “you ought to do what is impossible to do” is always false (moral action must be possible in order to be imperative).
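For those who want that principle in compact form, it is the familiar “ought implies can,” which can be sketched in deontic-modal notation (my own illustrative notation, using O for “it is obligatory that” and a diamond for “it is possible that”):

% Illustrative sketch of "ought implies can." Assumed notation: O = obligation
% operator, \Diamond = possibility operator, \varphi = any proposed action.
O(\varphi) \rightarrow \Diamond\varphi
\qquad\text{equivalently (by contraposition)}\qquad
\neg\Diamond\varphi \rightarrow \neg O(\varphi)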
As this is already standard in world legal systems and all practiced moral systems, it poses no problem for mine.
2) Here more detail is needed to assess the case. For example, what is meant by “killing 1000 random people is morally good”? Can you even describe a real world scenario where that is true? How we evaluate it then depends on the particulars you describe. It sounds like you are imagining a Trolley Problem, which can bump up against contradictory human psychology.
Let’s imagine a real world case:
An Ebola outbreak unexpectedly starts wiping out the population of California, killing 90% of everyone it infects, and it is spreading fast. You have a chemical that, if you snuck it into California’s water supplies, both cures and inoculates everyone against Ebola, but it kills 1 in every 40,000 people who drinks the tainted water (so, it will kill “1000 random people” but save the lives of tens of millions). Is it moral to sneak the chemical into all of California’s reservoirs?
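For the arithmetic behind that “1000” figure, a quick back-of-envelope check (assuming, purely for illustration, a California population of roughly 40 million):

% Back-of-envelope check. The ~40 million population figure is my assumption;
% the 1-in-40,000 fatality rate is the one given in the scenario above.
40{,}000{,}000\ \text{people} \times \frac{1\ \text{death}}{40{,}000\ \text{people}} = 1{,}000\ \text{deaths}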
Generally the answer is “No.” And in law, it’s definitely no. That would be a crime, because it is not necessary. You could, instead, tell everyone what the chemical does and let them choose their own level of risk of dying. And you’ll notice that’s what governments did during covid: they did not force anyone to take a vaccine; they gave you a choice, and then mitigated everyone’s risk around you (e.g., if your choice made you dangerous, various policies kept you from infecting others until you were certifiably safe to be around).
So to get it to be “moral” to do it against everyone’s will, you have to make the scenario ridiculous (this is why movie plots that justify things like this always have to be convoluted and bizarre). For example, the plague will magically kill everyone in one hour unless you set off a neutrino burst (using a machine you just happen to have in your garage), whose effects across California will be as above (it will kill 1000 Californians at random but prevent the deaths of the remaining 30 million or so, and have no other effects of substance).
Most people would agree setting off the device in that scenario is moral: because it is necessary. In terms of force majeure, you only have two actions possible relative to the situation: do nothing (and thus choose to kill 30 million people) or activate the device (and thus choose to kill only 1000 people). In most real world cases, there are not only those two options (per above). So you have to imagine really convoluted cases. This has been done before (the film Fail Safe is about exactly this kind of scenario, and to get there its plot requires an extremely curated sequence of events to justify the final decision, by conveniently walling off—rendering impossible—all other options).
Now to your question:
Could anyone be morally convinced that that was the scenario they were in when they actually were not? That’s extremely hard to imagine. Either you’d have to be the stupidest person on the planet (as in, literally mentally disabled), or extremely negligent (and extreme negligence is itself immoral), or otherwise insane (like a schizophrenic hallucinating the whole bizarre movie plot and incapable of realizing it’s not real). The first and third cases will be acquitted on an insanity defense (because knowing what they did was wrong was literally impossible, so there was for them no other possible action classifiable as moral). The second case will get you convicted on a thousand counts of negligent homicide.
This is, again, because moral action is always constrained by what is known. Knowledge and intent determine right action; and no one can ever be expected to do impossible things. Such an expectation would itself be immoral. This is, again, inherent in all legal and moral systems.