In my work I have repeatedly pointed out two things about what philosophers think the options are in developing a theory of moral truth: (1) that their standard assumption of only three options (consequentialist, deontological, and virtue ethics) curiously omits a fourth of equal importance, the only one developed by a woman; and (2) that these are actually all the same ethical theory, a fact no one seems ever to have noticed, which is very annoying and is impeding progress in moral philosophy. Today I’m going to outline why both points are true and why they matter a great deal. Philosophy will remain stuck, getting barely anywhere, until it acknowledges and integrates both facts into all future analysis of this question: What moral propositions are true?
The Moral Theory Debate
In broadest scope, the moral theory debate can be divided into two general camps: those who think there is no discoverable moral truth (e.g. nihilism, skepticism, emotivism, prescriptivism, etc.) and those who think there is. I’ve written extensively before on why there must necessarily be moral truth, and why it is empirically discoverable. You needn’t trouble with the proof now, but if you wish to, see [1]. Here I will take that as assumed and discuss what follows from it.
Philosophers who teach theories of moral realism most commonly claim (and especially in introductory courses and articles nearly always claim) that there are three incompatible theories of moral truth under that umbrella. You can see this in the Internet Encyclopedia of Philosophy entry on “Ethics” and the Stanford Encyclopedia of Philosophy entry on “Virtue Ethics” (both peer reviewed resources written by professional experts; they will be referenced hereafter as the IEP and the SEP).
These three theories are:
- Consequentialism: Moral truth is a function of what behaviors produce the best consequences. The question then becomes “What are the best consequences?” Once you’ve worked that out, “What behaviors best produce those consequences?” becomes a straightforward empirical question. There are two general versions of this, based on the question “Consequences to whom?” The most widely pursued is the tradition of utilitarianism first formalized, after a fashion, by John Stuart Mill (d. 1873; in actuality various forms of utilitarian ethics precede him, even by thousands of years, but he is the first to frame up the category in the form subsequently debated). Utilitarianism now comes in many debated varieties—it has long since advanced well beyond Mill’s original conception. But the two general classes of consequentialism are ethical egoism (whereby consequences to the moral agent are the deciding factor: the IEP and the SEP both have entries) and ethical pluralism (whereby the consequences to everyone, or at least to some public group, are the deciding factor: the IEP and SEP both discuss this under consequentialism).
- Deontology: Moral truth is a function of what behaviors are intrinsically the best, irrespective of consequences (in some sense). It is often described as a theory based on moral duty, wherein the rightness of the act derives from the nature of the act itself, or from what sort of person you become in acting as you do (the SEP has an entry; the IEP covers the subject within its entries on Kant and Natural Rights Theory). The most seminal formulation, which has become foundational to all subsequent variations, is that of Immanuel Kant (d. 1804). He developed the concept of morality as that which is entailed by a categorical imperative, as distinct from hypothetical imperatives (which, as everyone who knows what they are talking about knows, are empirically testable propositions). One example of how it supposedly differs from consequentialism is that deontologically, killing innocents can be wrong even if some greater good can come of it, a conclusion consequentialism might struggle to justify.
- Virtue Ethics: The oldest fully developed formal theory of moral facts is that of Aristotle. It similarly drove variants in other moral philosophies of antiquity, such as Stoicism (Epicureanism was consequentialist, and involved some of the earliest social contract theory justifications of morality). Though Stoicism resembled deontological theories more than is sometimes noted, it did so around the model of embodying moral virtues. Aristotelianism, by contrast, was ultimately consequentialist, but again through the model of embodying moral virtues. In this theory, morality consists of those behaviors that are entailed by the best virtues of character. It is distinguished both in emphasizing the need to cultivate habits of character (and thus not just following rules, consequentialist or deontological) and in its casuistic situationalism: moral truth derives not from rules but from the combination of the particular situation one faces and the best virtues guiding action in all situations.
Sometimes some form of Social Contract Theory is suggested as a fourth theory (see entries at IEP and SEP). Christian apologist William Lane Craig was famously defeated by an appeal to it in his debate with philosopher Shelley Kagan. However, SCT is always framed as either a consequentialist or deontological theory (and could alternatively be framed as a consequence of virtue ethics). It needs to be, since it has no justification without some underlying theory of why we should follow the social contract, and answering that question always throws us back into the three standard theories laid out above.
But what should be counted as a fourth theory, the equal of the standard three and required in any introduction to moral theory (notably also because it can serve as the justifying foundation for SCT, thus similarly illustrating its precedence to it), is the one developed by one of the most important women philosophers of the 20th century: Philippa Foot. Her theory was that morality is not, as Kant thought, contrary to hypothetical imperatives, but is in fact itself a system of hypothetical imperatives. This is summarized by Nick Papadakis at Analysis, but her famous paper on the subject can be found online and in one of the best reference collections on moral realism: see [2]. Spoiler: I think she is right. My article in [1] formally proves her case.
These Are All the Same Theory
I have made this point before: see [3]. But never in a single place and in clear enough fashion to make the point obvious. So that’s what I’ll now do.
What will become clear shortly is what I proved in my peer reviewed chapter in The End of Christianity (see citation in [1]): that Kant’s categorical imperative reduces by his own reasoning to a hypothetical imperative (pp. 340-41); that since all hypothetical imperatives are consequentialist, any deontology with a plausible claim to being true (pp. 342-43) always reduces to consequentialism; and that consequentialism in turn always entails virtue ethics (p. 424 n. 26), and vice versa, by the following reasoning:
The virtue theory of ethics has the most scientific support [source] (modern social contract theory still explains the evolution of most human moral reasoning [source], but such reasoning still assumes the primacy of the associated virtues), and is thus what I defend elsewhere. But virtue theories still reduce to a system of foundational imperatives (e.g., “you ought to develop and cultivate the virtue of compassion”), from which follows a system of occasional imperatives (e.g., “if you are compassionate, then you ought to x in circumstance z”).
I’ll unpack that shortly. But first let’s settle the claim about all true deontologies collapsing into consequentialism and all true consequentialisms collapsing into deontology.
Deontology Reduces to Consequentialism
Kant’s first formulation of the categorical imperative remains the most familiar: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.” His other two formulations are just attempts to build on the first formulation in different ways.[4] In short, the morally right act is that act you would gladly wish everyone perform. But on what basis do you decide what behaviors you would wish to be universal? Well, guess what. Consequences. You are thus, when following a categorical imperative, actually covertly engaging in consequentialism.
And there is no avoiding this. It is logically impossible to decide what laws to wish universal without any context as to what such a universalized behavior will do (to you, and the world). It is always decided in reference to that context. It is thus always decided in reference to consequences. This is even true in Kant’s desperate attempt to avoid this with his second formulation (see [4]), since he has much to say about how important it is that we not treat others or even ourselves as only a means to an end, but in every case he appeals to consequences in arguing that point. They might be different consequences than utilitarians talk about, but that just means Kant was noticing consequences they were ignoring: consequences like what sort of person you become when you act a certain way (and thus the consequence of how that will make you feel, how that will impact your happiness, how that will influence others’ behavior, and so on). Kant’s argument against suicide is full of covert appeals to how it would make you feel if you realized the significance of what you were doing. That’s an appeal to consequences.
This is more obvious if Kant was wrong: if we didn’t care about any of the consequences he appeals to, we would have no basis for willing his ban on suicide to be a universal law. And guess what? Kant was wrong. His claim that suicide never treats a person as an end in themselves is manifestly false, since alleviating someone’s intolerable misery is precisely what Kant regards as laudably treating a person as an end in themselves. One can only apply Kant’s argument to those suicides contemplated in the absence of any such end (e.g. when the misery is not actually intolerable or inescapable, or doesn’t even exist, being only a product of the imagination or a false apprehension of future events), and it becomes apparent why: the consequences then are not what we would will to be universally sought. But those of us who see clearly do indeed see the option of suicide as sometimes what we would will to be a universal law (e.g. the liberty of the individual to choose medical euthanasia or heroic death: Sense and Goodness without God, pp. 341-42 = V.2.3.1). And we do so precisely because it accepts persons as ends in themselves: their wishes and dignity, reflected in their own exercise of autonomy, with full and rational cognizance of the truth of their situation and the differential consequences of their choosing to act or not to act.
Some have attempted to claim that Kant’s categorical imperative admits of no exceptions, and that this is what distinguishes it from consequentialism, but that is not logically true. For example, I can will to be a universal law that no one kill except in self-defense. This satisfies the categorical imperative as stated. Kant denied this, but on no logical ground, because exceptions are themselves universal laws: they can be willed into existence by the same categorical reasoning. In fact, exceptions are built into every rule derived by his categorical imperative. Though Kant was absolutely against killing, many have claimed they’d will to be a universal law that one not kill innocents. But that is simply a disguised exception: “one shall not kill anyone except the non-innocent.”
And metaphysically, it’s exceptions all the way down. “One shall not kill, except people who enjoy being killed because they are instantly resurrected even healthier and happier.” It just so happens that that isn’t an actual consequence. But it contingently could have been. And it will be someday (e.g. when we all live in fully programmable virtual realities a million years from now). This reveals that not only is the categorical imperative actually consequentialist, but its consequentialism is circumstantial. It is simply overlooked that the permanent destructiveness of killing someone is a contingent fact of our being accidentally evolved biological organisms. If we were indestructible gods, who always rise from the dead in better health and actually desired that, “thou shalt not kill” would not be a categorical imperative anymore. It would be as trivial and permissible as stopping someone’s heart to surgically repair it with their consent, an act routinely performed every day around the world.
And this leads us to the hidden secret of all plausible deontologies: to be true, they must appeal to reasons we have to obey them (otherwise, we have no reason to obey them, and they are therefore no longer true, in any sense that commands our concern: TEC pp. 342-43), but those reasons will always consist of an appeal to consequences (because all desires are by definition an interest in certain consequences: the consequences being desired: TEC pp. 340-41). Therefore all true deontologies reduce to consequentialism. As I wrote of Kant’s case in particular (TEC p. 340):
Kant argued that the only reason to obey his categorical imperatives is that doing so will bring us a greater sense of self-worth, that in fact we should “hold ourselves bound by certain laws in order to find solely in our own person a worth” that compensates us for every loss incurred by obeying, for “there is no one, not even the most hardened scoundrel who does not wish that he too might be a man of like spirit,” yet only through the moral life can he gain that “greater inner worth of his own person.” Thus Kant claimed a strong sense of self-worth is not possible for the immoral person, but a matter of course for the moral one, and yet everyone wants such a thing (more even than anything else), therefore everyone has sufficient reason to be moral. He never noticed that he had thereby reduced his entire system of categorical imperatives to a single hypothetical imperative.
Kant made it all about an end (a consequence) we all want, and want more than anything else. He thus couldn’t even justify his categorical imperative without covertly hiding the fact that it was a hypothetical imperative all along.
The same will follow for any other purported version of deontology. Either there will be no reason to obey it (and thus it will be literally false, i.e. it will not truthfully describe how anyone ought to behave), or it will collapse back to consequentialism. And it will do so through precisely that channel of the reason to obey it: which reason is and will always be the consequences of adopting it. Philosophers ought therefore to stop acting like deontological ethics are not consequentialist, and start exposing the actual consequences being appealed to in every appeal for any deontological conclusion.
Consequentialism Reduces to Deontology
A common example that is supposed to illustrate how deontology gets a different result than consequentialism is that deontologically we ought not kill the innocent for any reason, whereas consequentialism supposedly entails that we should kill the innocent whenever a greater good results. But this is a hopeless confusion. The error is in assuming there is a good that results from the killing of innocents that is greater than the good that results from leaving them alive. The error, in other words, lies in misidentifying something else as “the greater good.” And in fact that is all deontologists are actually saying (generally unawares): that there is always greater good in not killing innocents than in any other consequences of doing so. In other words, deontologists and consequentialists are really just arguing over which consequences matter more than others. They are simply all consequentialists disagreeing on what the greater good is.
So there is no fundamental distinction here. It’s consequentialism all the way down. But there remains the actual dispute, which differs from the dispute these philosophers think they are having. The actual dispute is: What is the greater good we should be pursuing? Kant argued it was a feeling of “a greater inner worth of our own person” that comes from being a certain sort of person (who acts a certain way and cares about certain things). If he was right, then his version of consequentialism, his utilitarianism, would define acts that produce “a greater inner worth of the moral actor’s own person” as the end justifying all means. Kant’s utility function was simply “feeling a greater sense of inner worth that comes only from being a certain sort of person in our actions.” For example, the sort of person who just doesn’t kill innocent people. His hypothesis was that we will feel better about being that sort of person than about being any other—like, say, the sort of person who would dispose of innocent people to gain some other end.
Of course, Kant would say that this is what will be the case when we are fully aware of who we are and what we’ve become and what that means in terms of consequences to ourselves and others—many a delusional person will be ignorant of these and thus continue to make bad decisions, falsely believing them to be good ones. And I agree. Moral truth cannot follow from a false accounting of the facts. Therefore, moral truth is what follows from the true account of the facts, even if we are not yet aware of what that is. But this can be as true of Kant’s own account as anyone else’s. And in fact, it largely is: Kant was wrong about a lot. His conclusions in moral theory are therefore also wrong—as in, factually false. The higher level point is true—who we become in our actions and how a fully aware person will feel about that is a consequence that can supersede all other consequences—but the derivation is not: sometimes, it might be moral to kill innocent people.
On full analysis, projecting the differential consequences to our feeling of self-worth for each available choice, (A) killing innocents to (for example) save a greater number of innocents or (B) not killing those innocents and thereby letting even more innocents die, might actually lead us to a greater contentment with ourselves if we choose (A). Which would then mean that (A) is actually what’s right by Kant’s own reasoning. After all, we can easily imagine willing that to be a universal law. Even on self-interest: our chances of dying are greater if we allow smaller groups to be saved at the expense of larger ones, since statistically we are more likely to find ourselves in the larger group. This has of course been studied scientifically. It is known as the Trolley Problem (incidentally first formulated by, guess who…Philippa Foot). And the findings don’t back up Kant as much as his theory requires. Kant needs everyone to agree on which decision makes them more comfortable with themselves. But that is not what we find. In fact, when faced with the dilemma, most people kill the few to save the many. And those who don’t tend to cite their subjective feeling that inaction is not choosing an outcome, which is objectively false.
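That self-interest point can be made precise with a toy simulation (my own illustrative sketch, not any published study; it assumes you are equally likely to be any one of the six people tied to the tracks):

```python
import random

def survival_chance(save_the_many, trials=200_000, few=1, many=5):
    """Estimate your odds of surviving the trolley scenario if you
    occupy a random position among the few + many people at risk."""
    survived = 0
    for _ in range(trials):
        # With 1 person on one track and 5 on the other, you are on
        # the larger track with probability 5/6.
        on_larger_track = random.random() < many / (few + many)
        if save_the_many:
            survived += on_larger_track      # the few are sacrificed
        else:
            survived += not on_larger_track  # the many are sacrificed
    return survived / trials

print(survival_chance(True))   # ≈ 0.83: survival under "kill the few"
print(survival_chance(False))  # ≈ 0.17: survival under "spare the few"
```

So a purely self-interested agent behind this veil of ignorance would will the "save the many" rule, which is exactly what most experimental subjects in fact choose.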
But this only illustrates that any fully realized consequentialism (and thus any consequentialism that is actually based on all the facts) will be far more complex than simplistic utilitarians have imagined. And thus, deontological thinking has valuably called attention to consequences utilitarians have typically overlooked and thus not accounted for. One example relates to variants of the Trolley Problem in which, covertly, the scenario becomes one in which a Duty of Care has entered in. Duty of Care is a legal term in tort law. But it actually reflects a moral reality, which on the surface is deontological (it is, after all, called a “duty” of care), but in actuality is consequentialist, as revealed when analyzed through the lens of Social Contract Theory (which reduces to Game Theory, but that’s an argument for another time: see [5]).
For example, one of the Trolley variants is the Transplant Problem, developed by Judith Jarvis Thomson:
A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor. Do you support the morality of the doctor killing that tourist and providing his healthy organs to those five dying persons to save their lives?
Superficially this looks just like the Trolley Problem, and it exposes the horrors that deontologists claim consequentialism leads to and deontology avoids. But that’s false: any consequentialist argument for murdering the hapless patient can be reconstructed as a deontological argument to the same conclusion—since what we will to be a universal law is actually based on the consequences we want, so if we wanted the consequences of saving five lives at the cost of one, then killing the patient would even be deontologically correct behavior. The deontologist can only object by appealing to consequences that are worse—thus admitting that they are really consequentialists after all. But in this they would be right. And this is why deontologists have been looking at a piece of consequentialism that self-described consequentialists have been ignoring. Unifying both views is the only way to produce a valid consequentialism. In essence, both groups are looking at the exact same theory from different angles and thus, due to perspective, seeing different things. What they don’t realize is that neither is seeing the whole picture. And if they did, they’d end up completely agreeing.
Here is why. If it were a universal law that single patients attending hospitals can be killed to save five, no one would ever go to hospitals. The social consequence of this would be vastly worse than letting the five patients die. This is why Duty of Care exists as a concept. For social systems to function, we need certain duties to be followed (generally, those that allow people to be assured of their safety in various respects when performing certain actions that are necessary to their lives or happiness or those of others—otherwise, they wouldn’t do those things—or would take socially disruptive measures to do so, e.g. any lone patient entering a hospital will bring guards to hold the doctor at gunpoint to ensure his safety, creating a huge drain on the economy, increased fear, and an increased risk of bad outcomes). This is a greater consequence that consequentialists all too often overlook in their supposedly studious math.
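The math here can be caricatured with a back-of-the-envelope model. Every number below is invented purely for illustration (nothing is being cited); the point is only to show how the duty of care falls out of the consequentialist arithmetic once the systemic effect on hospital attendance is included:

```python
def annual_deaths(harvesting_allowed, population=10_000,
                  p_treatable_fatal=0.05, organ_failure_cases=50):
    """Toy model of one city's yearly deaths under each universal rule.

    Invented illustrative assumptions:
      - 5% of the population develops a condition fatal unless
        treated at a hospital;
      - 50 people per year die waiting for organs that only
        harvesting lone patients could supply;
      - if harvesting is a known universal law, lone patients stop
        attending hospitals, so treatable conditions go untreated.
    """
    treatable = int(population * p_treatable_fatal)
    if harvesting_allowed:
        # Transplant patients are saved, but no one dares visit a
        # hospital alone, so every treatable case dies untreated.
        return treatable
    # Hospitals remain trusted and used; only the transplant
    # patients die for want of organs.
    return organ_failure_cases

print(annual_deaths(True))   # 500 deaths under universal harvesting
print(annual_deaths(False))  # 50 deaths under the duty of care
```

However you tune the invented numbers, as long as the treatable cases outnumber the transplant cases, the "harvest to save five" rule kills more people overall, which is exactly the consequence the deontological duty of care was tracking all along.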
Notably, no such Duty of Care exists in the original Trolley Problem: implicitly, no one in that scenario has any reason to expect their life to be preferred over anyone else’s by the rail switch attendant. Why? Because we are compelled by logical necessity that this be a universal law. There is no way, without self-contradiction, to say one switch position is better than the other based on any duty of care to the smaller group, since the duty of care owed to them is the same as that owed to the larger group; and all things thus being equal, only one factor remains to decide the matter: how many die. We are thus compelled even by deontological reasoning to kill the fewest. Kant did not foresee this.
Besides these, there are many other respects in which a full-fledged consequentialism actually ends up entailing every preferable conclusion of any deontological ethical system. Duties are morally compelling because of the wide social consequences of not obeying them. Consequentialism thus collapses to deontology, in respect to anything deontology ever had to offer. Philosophers ought therefore to be analyzing every deontological conclusion they think is sound so as to expose what consequences actually make it morally preferable to what any incomplete consequentialism seems to entail. Notably, some philosophers have been doing this without even knowing it: it’s called rule utilitarianism. But overall, instead of just saying some deontology entails you do x, do the hard work of asking yourself why you really think doing x is consequentially better. Because really, you do. And it is doing philosophy no service to ignore the consequences you are preferring and why. It is even less of a service if you are also ignoring consequences outright.
And They Both Reduce to Virtue Ethics
If a certain set of behaviors is morally right (as both deontological and consequentialist theories assert), then it is by the same reasoning morally necessary to cultivate those habits of character that will make those behaviors common, consistent, and easy to perform. Any categorical imperative will in turn entail this, as will any consequentialist imperative.
Deontologically, if you will something to be universally performed, you are de facto also willing that people cultivate those virtues that will produce this universal behavior. Because “I would will that everyone behave thus” entails “I would will that everyone cultivate those moral virtues that will cause them to reliably behave thus.” It would be self-contradictory not to. And the categorical imperative rules out self-contradiction. The same reasoning will follow for any coherent deontological system that has any claim to being true. The fact that Kant and later deontologists didn’t think this through so as to notice it is just another example of the same blindness that caused consequentialists to fail to see the kinds of consequences that deontologists didn’t realize they were arguing as more important.
Consequentially, if you want the greatest good, you need to accept those behaviors that produce it, and that must therefore include behaviors that produce the character that produces those behaviors—more commonly, consistently, and easily. Thus, virtue ethics is entailed by consequentialism as well. It is only the more pertinent that science has established the need for this: moral behavior only reliably issues from persons who have fully habituated moral virtues (such as compassion for others, a passion for honesty and reasonableness, etc.). Systems of rules are simply ineffectual unless moral agents feel naturally inclined to follow them. And that requires cultivated virtues. (See Personality, Identity, and Character and MIT’s multi-volume series on Moral Psychology.)
And the reduction goes both ways. Deontological and consequentialist ethics reduce to virtue ethics, as just demonstrated. And virtue ethics reduces to deontology and consequentialism. The justification for virtue ethics (that which motivates anyone to obey it) has always been explicitly consequentialist: the production of personal happiness, or more precisely that state of contentment with oneself and one’s life Aristotle described as eudaimonia (which is egoist, but Aristotle also implied a non-egoist consequentialism: society will function better for everyone if the members of that society live by moral virtues). Deontologically, a justification for virtue ethics arises from the same fact that reduces deontology to virtue ethics: you would will to be a universal law that everyone live by moral virtues. Deontological ethics has long been about what sort of person you become in the act (as opposed to ignoring that and focusing solely on the external consequences of the act), so it is surprising no one realized that “who you become as a person” is quite simply virtue ethics.
Philosophers therefore should abandon an exclusive focus on moral rules and recognize that moral virtues must also be fully integrated into any true moral theory. Virtue ethics can no longer be treated as a side option. It is fully a component of any valid consequentialist or deontological ethics. And not surprisingly both, as they reduce to each other.
And It’s Hypothetical Imperatives All the Way Down
Social Contract Theory emerges from any deontological, consequentialist, or virtue based moral system when placed in contact with the reality of social systems. So given that deontological, consequentialist, and virtue based moral systems are all actually in fact the same one system, just looked at from different angles, it is easier to see that SCT is also an inalienable component of any true moral system. We already saw a taste of that fact in our realization of how prima facie deontological duties of care emerge from consequentialism.
But one other thing you might have noticed by now is how often hypotheses have come up in every part of this discussion, explicitly or implicitly. Kant’s moral philosophy was a system of hypothetical imperatives—not only did we show his categorical imperative was in fact a hypothetical imperative, but all his derivations of moral rules were heavily built out of hypotheses: about what people want, and about what certain behaviors will cause, in them and to others, such as whether a behavior will cause people to be treated as means and not ends, and what the consequences of that will then be to them and to society. The same holds, again, for any deontological system that has any plausible claim to being true. Because, as shown, it’s consequentialism all the way down.
But consequentialism is fundamentally a system of hypothetical imperatives. It’s all about what consequences are to everyone most desirable (the definition of the greatest good), which is the condition component of a hypothetical imperative; and the actions that will produce those desired outcomes are the consequence component. Virtue ethics likewise: if you want above all things eudaimonia, or a greater sense of self-worth, or to live in a just and functional society, or anything else (or a collective of things) that can only be reliably realized through cultivating certain moral virtues, then virtue ethics itself becomes a system of hypothetical imperatives. If you want the outcome, then you ought to do what will produce it. Which in this case means the virtues of character that will more reliably cause in you the behaviors that will generate that most desired outcome.
There is a great deal more to everything, of course. The details of what’s moral, and of what are moral virtues, for example (see The Real Basis of a Moral World and Your Own Moral Reasoning: Some Things to Consider). Or the complexities of different circumstances. And how we integrate unavoidable ignorance into sound decision-making. And pedagogy: what people ought to do, and what will convince them of that, are very different things. Likewise, the consequences to be added up include not just inner worth and external effects, but also states like joy and security and contentment (in yourself, and in others), which often depend on respecting such personal goods as autonomy and privacy. And what we happen to desire most right now could well not be what we would desire most if we reasoned logically from true facts instead of reasoning fallaciously from false beliefs—and moral truth can only follow from factual truth, which means the “greatest desire” from which all moral truth follows is only the second of these desires: the one you would have if you were logical and informed. You therefore ought to be pursuing what that is. Which is a categorically and consequentially true fact about you. It only remains for you to realize it.
-:-
Update: Just a few years after I published this article, much the same argument was made independently from the perspective of moral psychology and neuroscience, maintaining that the disparate “conflict model” of human moral reasoning (which is a descriptive rather than prescriptive theory, about how humans do reason morally rather than how they should), which I cover in my online course on the Science and Philosophy of Moral Reasoning, is a product of a unifying contractualist goal, which would indeed reduce to a system of hypothetical imperatives, subsuming (as they indeed argue) both consequentialist and deontological reasoning approaches: see Sydney Levine et al., “Resource-Rational Contractualism: A Triple Theory of Moral Cognition,” PsyArXiv (2023).
-:-
Notes
[1] That true moral facts exist: Moral propositions are imperatives that supersede all imperatives (by definition and practice, that is what everyone always really means by an imperative being a moral imperative as opposed to some other); hypothetical imperatives are well recognized as verifiable and falsifiable empirical propositions, and many have in fact been proved objectively true (e.g. best practices in surgery, agriculture, engineering, are all systems of hypothetical imperatives, many of which have been proved true by empirical means, refuting the claim that there is an absolute dichotomy between is and ought—there isn’t; get over it; and be thankful, because it’s the only thing that keeps bridges up in an earthquake or you alive in the surgical theatre); hypothetical imperatives are by definition conditional propositions whose condition is a desire for the consequence; greater desires supersede lesser (a material fact); greatest desires exist (a material fact); therefore there are imperatives that supersede all other imperatives, and which are factually true (for each moral agent a greatest desire actually objectively exists, as do best practices to realize the consequences thus desired). All that remains is to determine what the greatest desire is (an empirical question) and what will obtain it (also an empirical question). For the formal syllogisms and peer reviewed defense of this argument see: Richard Carrier, “Moral Facts Naturally Exist (and Science Could Find Them),” The End of Christianity (ed. John Loftus: Prometheus 2011), pp. 333-64, 420-29 (formal proofs: pp. 359-64).
[2] Philippa Foot on moral theory: Philippa Foot, “Morality as a System of Hypothetical Imperatives,” The Philosophical Review 81.3 (July 1972), pp. 305-316, reproduced in Moral Discourse and Practice (ed. Stephen Darwall, Allan Gibbard, and Peter Railton: Oxford University Press, 1997), pp. 313–22. She took a different tack later in life in Natural Goodness (Oxford University Press, 2001), arguing that moral facts are natural facts about Homo sapiens as a species, a view not incompatible with her better-known proposal. There is a general and chapter-by-chapter summary at the publisher’s website, and some thoughtful analysis of this (more fundamental) view by Brook Sadler in Essays in Philosophy 5.2.28 (2008), and by Peter Eichman in “Thoughts as Data, Thoughts as Code: Natural Goodness and a Model of the Will” (2007).
[3] My previous notice that all moral theories are the same: In Sense and Goodness without God: A Defense of Metaphysical Naturalism I developed colloquially and at length the moral theory I more formally and succinctly proved under peer review in The End of Christianity cited in [1]. That treatment in SGG spans pp. 291-348 (Part V), with a pertinent semantic foundation spanning pp. 37-40 (Part II.2.2.3-6). My conclusion sums up the consequences of all that was there demonstrated: that my Goal Theory of Morality (a variant of “Desire” or Preference Utilitarianism) actually unifies all other ethical theories—it unites subjectivist and objectivist accounts of ethics (p. 346); it unites cognitivist and noncognitivist accounts of ethics under a common moral realism that explains how moral intuition is actually a manifestation of implicit cognizable truths (pp. 346-47); it shows how egoism entails behavior that is observationally identical to altruism (pp. 347-48); and it “also unites both deontological and teleological ethics under the umbrella of a virtue-based theory” (p. 347). I illustrate all of this again in my subsequent paper in TEC (pp. 340-43), pointing out that “a theory that can unify all competing theories under one umbrella (and thereby explain and justify them all) has a strong claim to being true” (p. 424 n. 26).
[4] That Kant’s other formulations of the categorical imperative build on the first: His second formulation, “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end,” is more like a second-stage application of the first formulation. Kant believed that when we ask which rules we can will to be universal, we would all will this rule to be universal: that people treat others as ends as well, and not only as a means. As such this formulation is too narrow and cryptic to have as much utility and clarity as the first formulation. It is also a disguised hypothetical (about what everyone would want; indeed, it is only a hypothesis about what he himself would want, because Kant could be wrong: after seeing this rule in action, Kant could well decide he had predicted its effects badly and would no longer will that it be a universal rule). Kant’s third formulation, “Every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends,” is just another way of describing the point of the first formulation, in particular that it has certain (universally desirable) ends as an objective, which is an even clearer (though still inadvertent) admission that even the first formulation is a hypothetical imperative after all.
[5] That Social Contract Theory reduces to Game Theory: Ken Binmore, Game Theory and the Social Contract (MIT Press: Vol. 1, 1994; Vol. 2, 1998). See also Gary Drescher, Good and Real: Demystifying Paradoxes from Physics to Ethics (MIT Press: 2006), pp. 273–320.
If we don’t take a “holier than thou judgmental attitude” but simply allow the phenomena of behavior to appear, it would seem that “Moral Relativism” is a useful descriptor for the foundation of ethics, because it best describes why things like (a) cultural-based cannibalism, and (b) The Romans feeding the Christians to the lions in the arena for the exciting sport of the crowd, and (c) child sacrifice, etc., could occur. From the point of view of our time and culture, these practices are “judged wrong.” But who are we to judge? From the point of view of the people who were committing these acts, they were acting in a perfectly socially acceptable manner. So they are “wrong” from our point of view, but not from theirs. Relativism.
This is false. Who are we to judge? We are the people with more correct facts. We are by that reason authorized to judge ancient people’s beliefs about the universe wrong, about human psychology wrong, about the best governments wrong—likewise about how to do surgery, engineering, agriculture. We are well qualified to judge them wrong—about everything they are in fact wrong about (and we will be wrong about things too—which is why we are obligated to continue study & research, and be careful & self-critical rather than blind & arrogant). This is as true in ethics as any other domain. Most moral beliefs have been based on falsehoods, and are therefore false: fetuses don’t have souls, people don’t go to heaven or hell after they die, gods and spells do not protect the innocent or harm the guilty, honor is not as socially useful as empathy, classism is socially ruinous, racism is false, homosexuality and female sexual liberation do not lead to widespread misery (but attempts to curb them do), and so on.
The reason cultures differ in their morals is the same reason they differed in their belief systems about the universe, the mind, governments, medicine, and everything else: they differed, because they were all wrong about everything. This no more supports moral relativism than it supports epistemic relativism. Geocentrism is not “equally true” to heliocentrism, no matter how many cultures believe it and no matter how fervently certain they are. And likewise slavery and genocide are not “equally good” for the societies that endorse them as the endorsing of their opposite is in societies that reject them.
There is only one sense in which moral relativism is true. And that’s that morals for a species are species dependent. My example of immortal gods illustrates that. I discuss the point in general in TEC, pp. 354-56. But even then, when cognitively competent species have to live together or affect each other, there will usually be a non-relative meta-ethics that governs their exchanges (the exceptions might be wholly sociopathic species—which we would then for that reason be morally warranted to eliminate: SGG, pp. 342-44 = V.2.3.2).
The bottom line is, if you have a greatest desire (that which you would most want out of life, if you were inferring it rationally from true information), and you do (even if you are mistaken about what it is or haven’t ascertained it yet), then there is a set of imperatives that are true for you and supersede all other imperatives. Humans are so similar in core facts of biology, psychology, and social and physical environment (despite their many differences) that those core facts will very likely entail a shared set of core values. The consequence will be a common moral system: imperatives that are true for everyone and supersede all other imperatives (even if those shared imperatives entail different actions from different people: TEC, pp. 352-54).
The only question remaining is to empirically discover what they are.
Richard, do you think that human variation (in the values and desires we hold, and thus what is ultimately conducive to our greatest desire/eudaimonia) is great enough to lead to contradictory moral imperatives for different people, for specific situations?
Yes and no. I discuss how this will likely play out with the toy examples of swimming skill and food allergies in TEC, pp. 352-54. There will be universal covering laws. How they are enacted will always be situational (e.g. whether you jump in the sea to save a drowning person will be a function of whether you are a skilled swimmer; whether you give someone some food to prevent their starving will be a function of whether the food you have is poisonous to them; etc.). So human variation means you will get different actualizations of the common principles. And moral truth resides in those common principles.
I was thinking more in terms of variation in values, where these are perhaps more on the periphery of our value systems. Could the ethics of meat-eating vary between people due to a difference in how much value is placed on animal suffering, even if all the empirical facts are shared between them? E.g., perhaps I just feel worse about animal suffering (due to personal experience and personality/capacity for empathy, etc.) such that my personal eudaimonia would be enhanced by being vegetarian, whereas someone else’s eudaimonia would be diminished more by giving up their favourite meals than it would gain from vegetarianism.
If we meet your condition of “all empirical facts” being agreed upon (note that this is not generally the case in the debate between vegetarians and non-vegetarians), we would be in the position of admitting that this is idiosyncratic and thus not moral. Thus, your being squeamish about meat is just a personal aesthetic preference, not a moral demand on anyone.
At most you would be following the moral covering law of not doing that which makes you feel unnecessarily bad, a moral law everyone is equally following, including those eating meat.
The trick, though, is getting the facts right. Animals don’t commonly suffer more in husbandry than in the wild, or (depending on where you procure them) much more than human laborers do in agriculture (the vegetables you eat are not all that ethically procured either). Death is not suffering but the end of suffering. And not eating them actually doesn’t substantially alleviate their suffering (they still suffer in the wild or in the hands of others anyway), whereas eating them by selectively patronizing businesses that treat them better will substantively reduce their differential suffering (vegetarians are bad economists: businesses have no incentive to treat animals better unless doing so will gain them customers, which means only meat eaters can affect how animals are treated). Etc. These are facts vegetarians often refuse to acknowledge or logically incorporate into their math. So we have to get to a common agreement on the facts before assessing whether what’s left is just aesthetics and no longer a moral question after all. I address even more in my article on this.
P.S. I should also have mentioned that whether and what to do when there are contradictory imperatives is something I formally address in the cited TEC chapter, p. 425 n. 33. That was a different aspect of your original question.
P.P.S. A broader question of the same form is the distinction between people who value their autonomy over their security vs. people who value their security over their autonomy. Assuming this difference still exists after factual agreement is reached (not something that can be presumed), then moral facts will consist in those covering laws that generate a society that respects and accommodates both.
An excellent discussion. I would like to see a few things addressed.
1. I think you need to explain your theory of truth, at least in outline. I assume you do this in your published work, but especially when many philosophers dispute that concepts like “truth” and “fact” apply to the domain of morality like they apply to the sciences, you should clarify your use of terms like “moral truth” and “moral facts” for readers of this article. It seems like you support a kind of correspondence theory, but that can be problematic when applied to morality. Some explanation is necessary, I think.
2. I also think you fail to justify–in consequentialist terms–why a person should *always* prefer true beliefs to false beliefs in the area of morality. You write: “Moral truth cannot follow from a false accounting of the facts. Therefore, moral truth is what follows from the true account of the facts, even if we are not yet aware of what that is.” Later you frame the question in terms of the greatest desire one would have if one were perfectly logical and fully informed. But if this really is a theory of desire-based utilitarianism, you need to face the uncomfortable fact that people are not perfectly logical and fully informed, and that some people might actually prefer it that way. That is, some people might prefer a self-serving delusion to a true account of the facts, especially if the delusion enables them to (falsely) justify certain immoral but profitable actions as moral in their own minds. Studies of criminals, for example, have shown the tremendous depths of rationalization that terrible people can sink to, and your whole theory turns on the expectation that they would prefer to live without committing crimes and opportunistically rationalizing them away. That is an open question you might not like the answer to. I don’t deny that in most cases true beliefs and sound reasoning will produce results that most people would prefer, but that might not be true in all cases, especially in cases where a person’s capacity for self-delusion enables him to enjoy ill-gotten gains without having to confront the horrible reality of what he has done. In those cases, you need deontology. It will not do to say, “if he were logical and informed, he would have another preference.” He isn’t logical and informed, and he’s happier that way.
1. Yes. Indeed. Sense and Goodness without God situates my moral theory in an entire worldview (it begins with semantics and epistemology, then physics and metaphysics, before getting to ethics, then aesthetics and politics). I have also blogged a lot since then on epistemology. At the bottom of my Naturalism as a Worldview page you’ll see a paragraph listing philosophical subjects, linking to my blog lists on them, among which is “epistemology.” In brief, though, I have similar issues with the faulty way philosophers demarcate theories of epistemology as I have with ethics. I think trying to classify a correct theory as either correspondence based or not often overlooks the complexity of how knowledge actually works. But “roughly” yes, mine is a correspondence theory in respect to empirical claims about the world, and all imperative statements are: e.g. “you ought to sterilize your instruments” is a true imperative for a surgeon intending to save a patient’s life. The condition (the desires of the surgeon) is a material fact of the world (thus, correspondence) and the consequence (the behavior necessary to obtain that desire, instrument sterilization in this instance) is a material fact of the world (thus, correspondence). In a previous blog article I have more to say about Moral Ontology.
2. First, truth by definition cannot be false.
So I will assume you mean to ask, “What about people who don’t care about the truth?” That has no bearing on what is true, however. That you don’t care that heliocentrism is true does not make it not true, for example. The truth remains regardless of what you want or care about. So you can’t really say you ought to do what follows from a false belief—someone can always come along and point out that you are not fulfilling your true desires: if you are conditioning on false ones, they are right, and if you are pursuing false means to fulfill true ones, they are right. Once you realize this, you can’t go back. Thus, delusion no more allows you to claim a moral system is true than it allows you to claim geocentrism is true. Once the false beliefs (or fallacies) are corrected, you get a different result. And that result will as a matter of fact more correctly align with what you really actually want out of life and how really actually to best obtain it. Everything else is either identical to that (and thus moot, since it already follows from the truth, so that its following from a falsehood is just another Gettier case) or falls short of that. And by falling short, is always false. Regardless if you don’t know that yet.
Second, I have a specific note on these questions in the TEC article worth reading: “someone may object that perhaps we ought to be irrational and uninformed…” pp. 426-27 n. 36.
The bottom line is twofold:
(1) That we are empirically comparing alternative timelines. Just because someone thinks they are happy in condition x does not mean they would not be happier in condition y. But the truth is whether y is better. Being ignorant that it is does not make x better. The truth remains. (Although satisfaction with self and life is a better metric than “happiness,” which is too vague and is not the greatest desire for anyone unless it is synonymous with satisfaction with self and life; otherwise, everyone wants the latter more.)
(2) Bad epistemologies are self-defeating. Because there is no way to design an epistemology so that it just conveniently only ever leaves you with false beliefs that are harmless or beneficial. You will always collect false beliefs that are harmful (to your own wishes in fact, if you were aware of what the actual effects were on you and others of those false beliefs). Because the only way to distinguish harmless from harmful false beliefs always entails discovering that they are false. And once you’ve done that, you can’t go back. Thus, it is a universal truth that you ought to replace bad epistemologies with better (i.e. epistemologies that defend or accumulate fewer false beliefs). This follows for everyone, even the most delusional, by the point above: they will always be acting contrary to their desires with an array of harmful false beliefs, therefore it is always true that their desires will be more achievable without those harmful false beliefs. And there is no way to purge harmful false beliefs without purging all false beliefs (as the same remedy deployed on the one remedies the other). The upshot is: even when true beliefs set us back in satisfaction, any effort to avoid that set-back (by adopting the bad epistemology necessary to) is worse. Because it always throws you into a condition of accumulating harmful false beliefs undetectable to you. So you will always be haunted by the fact of knowing you are acting contrary to your own desires and not even aware of how or when or how bad the risk you are taking is.
“even when true beliefs set us back in satisfaction, any effort to avoid that set-back (by adopting the bad epistemology necessary to) is worse”
That raises the question: could there be a true belief so detrimental to our satisfaction that we would be better off never having learnt it (i.e. better off with no belief about x, as opposed to a false belief about x)? I suppose we couldn’t know if that were the case until after we had discovered the true belief, so unless we want to give up on ever learning anything new, it’s not worth worrying about.
One could also ask, “Is there a dragon that will eat us as soon as we adopt the best epistemology?”
I’m not being facetious. The analogy is relevant. Does the above proposition have a nonzero probability? Yes. Is it high enough to worry about? No.
So the bottom line is, you can’t just ask whether maybe. You have to actually find an instance of the thing (you need to actually locate the dragon). Until then, the question is moot.
The trick is, as you note, that once you find the dragon, you can no longer pretend it doesn’t exist. And there’s the rub. Even if there were a disastrously true proposition, you can’t ever know it exists until you know it is true. And then it’s too late to not believe it.
So there is no reason to worry about it. Either it is too improbable to matter, or it is inevitable and thus you may as well plan on figuring out how to cope (because, per my original point, you can’t design an epistemology that only gets you useful false beliefs, without already knowing which beliefs are false). And the latter actually indicates the former: we can manage a satisfying enough life under almost any conditions. We already do (facing inevitable personal extinction, a massively unjust world, etc.). And that’s because we either can’t do anything to better things, or are already doing it, while we can secure enough joys in the circumstances actually available to us to make life worth living.
Extremely bizarre scenarios can be imagined wherein learning the truth would warrant suicide. But then, it would actually be imperative that we kill ourselves (even if indirectly, e.g. by acts of defiance against the evil we discovered that just happen to result in our deaths) and so not a loss to know after all. This comes close to the Magic Pill experiment I discuss in Goal Theory Update.
Holy shit. Dr. Carrier, can I steal this? Like, ALL of it?
You’ve expressed things I have felt for years and years and years but was never articulate enough to express, and furthermore proven them nine ways from Sunday, AND expanded on them in ways I wouldn’t have been able to even if I had the words.
Seriously, this is your calling; you could forget all the historical stuff and you’d still be a demigod on moral philosophy alone just from this one entry. You bridged the is-ought gap. Holy shit.
I would love to see the look on Bill Craig’s face when someone hits him with this.
Thanks. A bit effusive. But I understand your point. It is true I have two tracks to my life, my work in philosophy, and my work in history. They sometimes happily benefit each other, as in my philosophy of history, developed to practical ends in Proving History. But I have always been developing work in both areas. My cv lists several examples. And my Naturalism as a Worldview page’s bottom paragraph links to lists of philosophy articles on my blog. Most of it is rougher, some of it incorrect or obsolete, but it’s all gradually working toward articles like this one.
Richard, curious if you think Divine Command Ethics would also ‘collapse’ into Consequentialism. I understand that you reject Divine Command Ethics for other (very good) reasons, but to me, DCE seems (in theory) a more pure form of deontology that might not be contaminated by consequentialism.
Of course it does. I prove DCE is consequentialist in the same TEC article I’ve been talking about here: pp. 335-38. Advocates of DCE deny this, but their arguments have no logical coherence without the same hidden consequentialist appeals as all deontologies (or else they are uttering demonstrable falsehoods: even if God exists and issues commands, if we have no reason to obey those commands, there is no relevant sense in which those imperatives are “true”). Example shown in my treatment of Flannagan, where I find consequentialism in DCE thus:
And this remains true even for theists who abandon hell doctrines. They always still appeal to some consequence that we are supposed to care about, whether it’s “enduring the eternal disappointment or absence of God, or feeling hollow and purposeless and unloved and unliked” or producing self-defeating outcomes or whatever it may be. Remove all consequences, and you remove all truth value to any set of commandments, whether issued by gods or not.
Here’s a question for you though: the Aztecs practiced human sacrifice. Most (not all) victims were volunteers, cheerfully submitting themselves to the knife (and other methods; not everyone had their heart cut out). Human sacrifice in turn boosted the cohesion of Aztec society, and made the population feel secure that the gods would favour them. If this state of affairs could be shown to have contributed to the well-being of the Aztec people to a greater extent than if they had not practiced human sacrifice and had centered their culture on some other, less bloody rite, can we say they were ethically wrong?
I doubt the “volunteers” claim is true (it would be contrary to all scientific and historical precedent that tens of thousands of people voluntarily commit suicide each year in a society of only a million or so; and that authorities said being a victim was “an honor” does not mean the victims and their families shared that opinion). I also doubt it had any “cohesion” effect at all, much less one that would not be even more strongly realized by means less foolishly destructive of human capital (which all scientific and economic precedent establishes would be ruinous to society, not beneficial).
Implausible counterfactuals may be entertaining but can’t substitute for facts. And your counterfactual is as implausible as saying we are actually all in a computer Matrix and can escape this prison in our real, immortal bodies when our “fake” bodies in this world are killed, therefore murder is not only moral, but morally obligatory.
Sure. It would be. But we have no reason to believe we live in that world.
I have a whole section on silly things like this in the TEC article, called “THE MORAL WORRY (OR “CAVEMAN SAY SCIENCE SCARY!”),” pp. 343-47 (using slavery instead of human sacrifice as the example).
The bottom line is, you can’t get to any true conclusion from false facts, so arguing from a premise of false beliefs, cannot get you a true moral system. And unless the Aztec victims gave up their lives literally because they wanted to produce social cohesion among the survivors and for no other reason, and they knew of no better way to produce that same result, and the result was actually produced and they had good empirical evidence of this (every single one of those conditions is extremely unlikely), they could only have been acting on false beliefs. The imperatives that resulted were therefore false as well.
Here’s an important distinction between deontology and consequentialism:
Consequentialism says the morality of an action turns only on the actual consequences of the particular act.
Deontology says the morality of an action turns only on the counterfactual consequences of the act being undertaken by everyone at all times.
That’s just another consequence.
So again, all you are doing is saying the first definition of consequentialism is ignoring a set of consequences. Then saying deontologists have pointed out the consequences the consequentialists were ignoring. The end result is simply a consequentialism that accounts for all consequences.
Hence it’s consequentialism all the way down.
Of course, I disagree with your claim about a “greatest desire”. There may or may not be such a thing, but that is irrelevant.
The best analogy I can come up with is that desires operate like vector forces in physics. There may be a strongest force pushing in a particular direction, but two or more weaker forces in the opposite direction can still push the object in the opposite direction. And every force acting on an object has an influence – even the weakest force.
Similarly, while morality is a system of hypothetical imperatives (I agree with you fully on that claim), all desires have relevance, not just some hypothetical “greatest desire”. It is still only one desire among many, and it might just be pointing in the wrong direction.
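The vector analogy can be made concrete with a quick numeric sketch (the desires and numbers here are purely hypothetical illustrations, not anyone’s actual psychology):

```python
# Treat each desire as a signed "force" along a single decision axis:
# positive values push toward doing X, negative values push away from it.
desires = {
    "strongest single desire (do X)": 5.0,
    "lesser desire A (avoid X)": -3.0,
    "lesser desire B (avoid X)": -3.0,
}

# On the vector model, behavior follows the sum of all desires,
# not the single strongest component.
net = sum(desires.values())

# Here the two weaker desires jointly outweigh the strongest one.
outcome = "avoid X" if net < 0 else "do X"
print(net, outcome)  # -1.0 avoid X
```

On Carrier’s reply below this exchange, such a weighted sum is compatible with his account: the combined vector just is a single resultant desire for that combined outcome.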
Yes, for those who want to see more of the debate between whether there is a greatest desire or not, the Carrier-McKay debate was on precisely that question (and an excellent debate at that, as it was two atheists who weren’t engaged in trying to bullshit the audience or game their opponent, so if anyone wants to see what an honest debate looks like, this is an example). The links for that are in my post-debate article, Goal Theory Update.
We actually disagree less than you make out. When I say there is a greatest desire, I am saying it is the desire that those other desires are aiming at, i.e. we desire the other things, because desiring them fulfills the ultimate desire (such as being satisfied with oneself and life). It therefore only supersedes desires that go contrary to it. Meanwhile, a collection of desires pushing in a vectored direction simply sum to a single desire for that combined outcome and for that reason. This is fully compatible with my account of human desiring.
For example, I actually live by a rule that I prefer to do things that I have more than one reason to do. Dating, for example, allows the pursuit of multiple desires at once, fulfilling any one of which will have been worth it. So there is a collection of desires there. But that collection actually produces a single desire: to date (in general, and a given person in particular). When I ask “why do I want to date?” the answer is that collection of other desires. When I ask of each of those desires “why do I want that?” I end up with a narrower set of even more fundamental desires. When I ask of those why I want those things, I get an even narrower set. When I continue this process, I always end up with only one desire that is the only thing I desire for itself and not for some other reason: satisfaction (with who I am and with my life).
There is nothing I want more than that. And there is nothing I want as much as that, unless it participates in fulfilling that. Therefore, that is my greatest desire. Everything else I desire, I desire for some other reason, and that other reason ultimately reduces to that one. It doesn’t take much analysis of the scientific facts of human nature to see that this is probably true of all human beings and not just me.
I’m glad someone with credibility finally said it. Throughout the last 4 years, while working toward my B.A. in philosophy, I’ve often thought that the conflicting nature of the different groups of moral theories is actually an illusion created by the fact that morality can be thought of on many different levels. The power of a moral system lies in its ability to set rules which, for pragmatic reasons, are expected to be absolute (deontology), which are enforced within moral epistemic communities through narrative (virtue ethics), and the whole system is justified by the overall consequences (consequentialism). Of course, other theories can be thrown in there in different ways as well, but those are the “big three” anyway. These are just different parts of the same machine and any contradictions that exist between them can be explained by the fact that each part appeals to a different set of moral intuitions (each of which probably evolved separately and therefore processes data differently).
As for John MacDonald’s comment, I agree with Dr. Carrier’s reply, but because I tend more toward a sort of pragmatic skepticism, my reason is different than his. I don’t think that we can say with certainty that we have “more correct facts,” but what we do have is a set of knowledge-generating rules (science) that appeals to the most highly conserved intuition (or set of intuitions) that humans possess: reason. While the ability of science to make predictions that can later be verified is philosophically interesting, it’s not that ability itself, but the fact that that ability allows us to develop novel systems for improving the lives of people that makes science valuable.
And in that respect, it’s reflective dialogue between the science narrative (within which we can only generate statements that can be deduced from the scientific axioms) and a larger consequentialist narrative (within which we can only generate statements in respect to how they affect people’s lives) that makes science “right” and other traditionalist or religious systems “wrong”. Within this framework, the traditionalist and religious cultures that give rise to child sacrifice and the like are wrong because they appeal to less conserved intuitions, like emotion, which (while morally significant) can’t generate reflective dialogue the way that science can.
Imagine two languages that use the exact same words, but the words have vastly different meanings between the two languages. Now imagine that two people, one who speaks one of those languages and the other who speaks the other, are attempting to engage in conversation without knowing that they aren’t speaking the same language. They will think that they are talking about the same things, but they really aren’t. No meaningful conversation can be had under these conditions. This example is a caricature of what happens in a debate between religious people of significantly different denominations. They will both appeal to emotion, but emotional intuitions tend to give different people vastly different answers to the same questions, so in effect, the statements that they make will have vastly different meanings to each of the participants.
But when you have a way to ensure that the words have the same meanings (as science has in the fact that it appeals to a highly conserved intuition) and a way to ensure that each member of that epistemic community follows the knowledge-generating rules (as science has in the requirement of peer review), meaningful conversations can be had, not only about the relations of the statements to the particular rules of the particular system, but also about the consequences of the system. In short, science is right not because of its ability to generate true statements, but because of its ability to generate reflective dialogue about the consequences of those statements.
If the religious cultures that gave rise to child sacrifice had allowed such dialogue, they would have soon noticed that child sacrifice had no measurable positive impact on their societies or the lives of their people and they would have done away with it. Science, on the other hand, allows us to talk about these things.
Thank you. I concur.
(I added some paragraph breaks to your comment to ease others in reading it.)
Fascinating read, Richard, thank you.
Have you encountered Martin L Hoffman’s ‘Empathy and Moral Development: Implications for Caring and Justice’? I think it approaches much of what you say, but from the psychological side. It places particular emphasis on internalisation of rules, empathy, and so forth.
It seems to me that the modern Euthyphro only has a problem when he wants to rationally justify saying what the gods declare good is good. The modern Euthyphro doesn’t face a paradox if he denies that he needs rational justification. Which I suppose he could do on the grounds that reason doesn’t seem to be able to justify itself from logically necessary a priori principles. Many or most philosophers are believers, and many or most scientists agree with them. I can’t really agree with the modern Euthyphro, but I can’t really say that I understand the rules of the philosophy game well enough to definitively score one side or another as the victor. Given the continued controversies in philosophy I’m not sure that is my error.
But assuming that any moral realism can be analyzed as a system of hypothetical imperatives about empirical reality, which I must admit does seem quite plausible, I’m still not clear on a couple things. For one, morals seem to refer to the way a group of people treat each other, what you might call the game of life, as opposed to the game of philosophy. That is, moral realism must be inherently collective. It seems that any normativity is inescapably part of being in the game, but that those who don’t play the game are outlaws. For them the question of justifying the normativity is irrelevant, but for those in the group the normativity lies in maintaining the conditions required to play the game. I’m not sure any view of this sort is compatible with moral realism as expounded here.
And for another, it seems that the game of life keeps changing albeit over long periods of time. Which seems to imply that moral realities are historically relative. Which suggests the desirability of projecting improvements in moral realities or at least rationalizing the current state of affairs. I’m not sure how a system of hypothetical imperatives would accept there are such changes.
The thing is, I would think we would need to possess genuine knowledge to fulfill such aspirations. Historically, the way to actually know has been, loosely speaking, science. (I know that many deny that there is such a thing as social science, but please assume there is.) In science broadly considered, foundations are never completely coherent; many necessary facts are unknown and practicably unknowable; many other facts are only imperfectly measured, with errors; many consequences are unpredictable even from perfectly determinate causes (popularly speaking, chaotic); there are always areas with conflicting explanations for currently available data which cannot be ruled out; etc. In short, I guess you could say that sometimes moral science might look more like a five day weather forecast than a graph of a linear equation. And, does moral philosophy really engage with empirical reality in a scientific way?
First, yes, indeed, moral facts are facts about social systems (which are systems of interacting psychological agents). The question of the outlaws becomes: why would you want to be an outlaw? Analyzed factually, you find the reason we create social systems and operate within them is that this brings us countless benefits that cannot be as reliably gained from outlawry. And that’s even before we get to the fact that outlawry, unless isolationist (like, maybe, the Amish), poses a threat to the social system, a threat the social system then entails moral imperatives to respond to. Hence jail. The system of hypothetical imperatives thus includes imperatives not to be an outlaw (in this narrow sense you’ve stated), because it is contrary to self-interest to be one. Joining functional social systems is always better. And then the remaining system of hypothetical imperatives consists of hypotheses made true when the social system, through the agents acting within it, realizes its parameters (or fails to), which either way is an empirical question.
Second, core facts of human biology and of mentality and the laws of physics and social systems are not changing historically because they cannot be changed. As they are universal and eternal (in any universe capable of supporting social systems of mental agents with physical bodies), so are the hypothetical imperatives they entail. Variations only exist at the level of particular realizations of universal laws. See the exchange upthread about this, and the referenced pages of my article in TEC, which treat this issue and explain it. Hypothetical imperatives can easily change with circumstances—just as murder changes into self-defense with the circumstances. In truth they aren’t changing; rather, which ones apply is changing. The imperatives are always true of the hypothetical systems—so once a particular system is realized, so are its imperatives.
In answer to how we deal with ignorance and incomplete knowledge, the answer is the same as how we do that in medicine, engineering, agriculture, and everything else. I discuss the formality of this again in a specific note in the TEC article: p. 424 n. 28 (“In the absence of perfect knowledge, approximate knowledge is optimal, a fact we accept in all domains…”) and p. 425 n. 34 (“Things we want that are unachievable are of course out of account precisely because there is no action we can take to obtain them…”). See also discussion of “inaccessible information” in n. 35.
P.S. “And, does moral philosophy really engage with empirical reality in a scientific way?” — Some philosophers have been, somewhat. But the failure of academic philosophy to do this as thoroughly and competently as it should is a pervasive failing of the field as it is currently being practiced in the academy.
Morality can be no more objective than tastes in food or sexual preferences. What is objective are the facts about what moral, food or sexual preferences a given individual has, but nobody can decide what “objectively” tasty food “an sich” is, without any reference to the individual, although it is objectively tasty for a given person. And that objectivity is neither here, nor there. Some find pineapple on pizza horrifying, others love it, that’s an objective fact, but nobody can say “pineapple pizza is objectively tasty/horrible” without adding “for me”.
And yes, just as there are some biology-based statistical facts about sexual and food preferences, there are facts about humanity as a whole that will make some basic moral choices a statistical reality. But there’s not much “philosophically interesting” about this statistical reality. Sure, you can take a poll of all people, conclude that 90% of people find pineapple pizza horrible, and front-load your definition of objectively tasty food with reference to this majority, but that’s shooting an arrow and drawing a target around it.
A person whose (biologically caused) greatest desire is to molest children may as well find it a moral thing to do, while the rest of us find this horrible. But there is nothing objectively “true” or not “true” about both positions, unless you front-load your definition of morality with references to “society’s greater good” etc., which makes it your private definition which is not very useful for discussing what other people mean by morality, which is not necessarily defined as something that will necessarily lead to psychological, physical or societal well-being.
You are wrong. Food tastes vary, but the need to eat does not. Nor does the desire for a satisfying life. Or the laws of economics, social systems, game theory, etc. Too many core facts of human biology, mentality, and social and physical environment are the same for everyone. Those common facts entail a common set of hypothetical imperatives. See the exchange upthread on this. Individual variation then only varies how those universal rules are to be enacted by individuals, just like any other situational ethics (and all ethical systems end up situational).
Note that there is no “greatest desire” to molest children. So that never being a fact, it has no bearing on what is true in the actual world. Child molesters don’t molest children for no reason. Doing so fulfills other desires for them. Which entails there is a greater desire. This is how we treat the insane: by pointing out how their derivative desires are self-defeating and acting contrary to their greater desires—such as a desire to be satisfied with themselves and life, which entails a desire, even if they don’t know it, to be empathetic, which will in turn entail revulsion at child molesting. This is all at the level of working out the details of why certain virtues are best for everyone to cultivate, which is a level beyond the analysis of the article you are commenting on, which is only about why we need to be investigating that, and not what the investigation will find in every particular. But psychotherapy (e.g. REBT) already has learned a lot about that.
This is also true for “society’s greatest good”: that is not the greatest desire either. People will only want that because it satisfies some other desires. So there is a greater desire than that. And that’s the desire that justifies caring about social good (although not necessarily its “greatest good,” whatever that means). Thus, you have to pay attention to how the system of hypothetical imperatives works, and what a greatest desire actually is (and how we empirically discover it). You have carelessly not done either here.
Of course, it may be that there are unfixably insane people for whom no moral truth obtains (in the TEC article I cite in note 1 I discuss sociopaths; in SGG I discuss sociopathic aliens like depicted in several movies). But if that’s the case (and it is not as obviously the case as people think; see my analysis of sociopaths in SGG), those people then become monsters, for whom we have moral imperatives to segregate or eliminate. That monsters are not bound by moral imperatives does not mean we are not bound by moral imperatives. For we are not monsters.
Thank you for mentioning your debate with Mike McCay. I really wish that you were invited to participate in more intra-atheist debates. I’d love to see you debate someone like Peter Singer for example.
I concur. Although I prefer written debates when matters of that importance are at hand, since one has to be able to check facts claimed in a debate, to get the best debate out of it, and to have time to diagram an argument just made so you can be sure of understanding it and responding correctly to it.
Someone in realspace asked why exceptionless categorical imperatives are logically impossible. I am posting the answer here for convenience:
All words entail exceptions. Otherwise words would have no meaning. So if exceptions are not allowed by categorical imperatives, categorical imperatives cannot exist (because they cannot be formulated without words). “Thou shalt not kill” entails exceptions about who doesn’t count as “thou” and what doesn’t count as “kill,” for example. If it did not, those words would not specify anything, and therefore would signify nothing.
This is more obvious when you formulate categorical imperatives in pure set theory. All imperatives are then a function on a set, f{x}. The function being some action to be or not to be performed on members of that set. The set can be defined any way you want. Because sets are arbitrary. Therefore there is no way to disallow exceptions from categorical imperatives. As exceptions are simply what is excluded from the set.
For example, “Do not kill” has to mean “Do not kill persons” (otherwise, you can’t even kill plants, or bacteria, or viruses, or engines…). But “persons” is simply a set filled with whatever “persons” signifies. We can define our own word, innocersons, thus filling our own set, x, with “innocent persons.” We end up with the exact same logical formula: f{x}. All we have done is change which elements fill x. Which we already did with “persons” so as to allow us to eat vegetables, cure disease, and turn off cars. If we can pull members of the set out to do that, we can pull more members of the set out to replace persons with innocersons.
There is therefore no logically coherent way for exceptions to be excluded from categorical imperatives. As long as they use words, and declare functions on sets, exceptions are intrinsic to every categorical imperative that can ever be uttered.
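The set-theoretic point above can be put in a short sketch (a hedged illustration in Python; the particular sets, the `forbid` action, and the coined word “innocersons” are just the examples from the text, nothing formal): every imperative is the same formula f{x}, and an “exception” is nothing but a change in which elements fill x.

```python
# Sketch of the argument above: every imperative is a function applied
# over a set, f{x}. "Exceptions" never change the logical form; they
# only change which elements fill the set x.

def imperative(action, members):
    """Apply the same action-function f to every member of the set x."""
    return {m: action(m) for m in sorted(members)}

def forbid(m):
    return f"do not kill {m}"

# "Do not kill" with no set restriction forbids absurdities:
everything = {"person", "innocent person", "plant", "bacterium", "engine"}

# The ordinary reading already pulls most elements out of the set:
persons = {"person", "innocent person"}

# "Innocersons" merely shrinks the set once more:
innocersons = {"innocent person"}

rule_all = imperative(forbid, everything)           # f{everything}
rule_persons = imperative(forbid, persons)          # f{persons}
rule_innocersons = imperative(forbid, innocersons)  # f{innocersons}

# Each narrowing is the same formula with members removed from the set:
assert set(rule_innocersons) < set(rule_persons) < set(rule_all)
```

The chained subset assertion is the whole argument in one line: nothing distinguishes an allegedly “exception-free” rule from the others except the membership of its set.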
What about other “commandments”? Can’t a commandment like “Thou shalt remember the Sabbath day, no exceptions” exist? What about “Thou shalt not commit adultery, no exceptions”?
Sure, we can imagine exceptions to those original rules, but the exceptionless versions of these imperatives are viable (unlike “Thou shalt not kill”, which you chose as an example). “Thou” in each commandment certainly excludes non-sapient beings, but we never expect trees or clouds to follow any moral imperative, so I don’t see that as a meaningful exception — we interpret “thou” as “any recipient of this message”. Or (to be a bit facetious) we could do away with the non-sapient exception and just conclude that all clouds and trees are going to Hell for not observing the Sabbath.
Sorry. I cannot fathom your point.
And all western philosophy lacks an understanding of Anutpada.
Anutpada is a semantic error. The concepts it aims to deal with are in fact well understood and analyzed in Western philosophy, particularly in that branch of it called “the sciences.” What it means to come from somewhere or be produced depends on the particular aspect of a thing you are talking about, but is well covered in relativity theory (particularly B theory of time and the distinguishing of places in a coordinate system), and in thermodynamics and quantum mechanics (in the role of energy transformations).
Insofar as one tries to argue that this is all illusion, that hypothesis has been proved highly improbable. As improbable as there being a god. If not more so.
Richard,
I think you are actually (rightly) a moral subjectivist, but are trying to sneak in ‘moral truth’. For you (again, rightly), morality comes down to a) human desires and b) the consequences of action (and how we in turn feel about those consequences). Very often, it’s true, people desire more or less the same thing (this is the basis for societal morality, i.e. behavioural rules/expectations, etc.). But the fact is, people (maybe, Dear Richard, even you!) often have selfish or destructive or asocial desires, and you cannot prove/show that these are morally ‘false’ or ‘incorrect’. How could you show such a thing?
Take, for example, the selfish robber, who steals all your stuff and does not give a shit about how you feel about it. How could Dr Carrier show that the robber is breaking some moral ‘law’? He might point out that the robber has made his victim very upset. But the robber might say, so what, I don’t care about them. Dr Carrier might attempt to convince him that stealing is not in the robber’s own self-interest: i.e. he might go to jail, or he might end up feeling miserable because he has no friends. But all Carrier would be doing is trying to persuade the robber to value something, i.e. his long-term freedom, higher than something else, i.e. short-term $$$. Dr Carrier, with all his intelligence, has done precisely NOTHING to show that either value is OBJECTIVELY or FACTUALLY the morally ‘true’ value to hold.
Welcome to the subjectivist club, Carrier. Don’t worry, you’ll soon find that the moral order does not cave in, a) because most people are deeply deluded – even more so than on the God issue – into thinking that moral laws/facts exist, and b) because, as you point out, we need morality for well-functioning societies…
Thanks
Jonathan.
You need to read my peer reviewed article in The End of Christianity cited in the post above. It refutes what you are attempting to argue. For the full demonstration, you’ll have to go read that. I published it so I wouldn’t have to repeat myself. I’ll only summarize its pertinent findings here:
By showing that they go against greater desires you (the moral agent) already have. There are many respects in which pursuing selfish or asocial desires actually hurts you (by depriving you of goods you could otherwise obtain, goods that in fact you would want more if you were aware of their comparative value to you; or by outright defeating even your own selfish desires). I enumerate some of them in the TEC article. I enumerate more in my section on this in Sense and Goodness without God (also cited in the post above).
Not so. I actually do show (in the resources above) that the moral agent (your robber) already, as a matter of objective fact, has greater desires that these other desires are thwarting. Once the robber realizes this, they will realize their selfish desires are self-defeating.
This is not subjectivism. Because the robber can be ignorant of their own greater desires and thus in error about what they would most want if they were aware of all the options and their value to the robber herself. And because the existence of that greater desire in the robber is already a fact, literally an objective fact of the world, in principle observable to a third party (e.g. by observing the brain’s neural structure that produces it).
The robber can only deny this by being ignorant or irrational. Which means the robber’s own imperatives, the ones they ignorantly and irrationally choose to follow, are literally false. Because they derive from false statements of fact (the robber will claim a desire is her greatest that in actual material fact is not) and fallacious sequences of reasoning (which by definition cannot produce conclusions known to be true, and very frequently produce conclusions that are false, since only by mere accident can they produce conclusions that are true).
P.S. In TEC I use the example of slaveowners rather than robbers. Check that out. It illustrates the point. But elsewhere, not in that chapter, I use the example of rapists, and since it’s not in the morality chapter, I’ll quote it here (also discussed in my last blog post about Divine Command Theory):
Great essay, just brilliant. I should think many reading this have probably had vague intuitions along similar lines, but as with your naturalistic philosophy in general, you really get a big picture in view and articulate it very well.
I particularly like the linking of virtue ethics and consequentialism/deontology with social contract ideas and game theory.
As can be seen from some of the comments above, some people have a resistance to the idea of objective ethics. In a way the Biblical intuition re. the Garden of Eden, and Christian perception of the psychology of “sin” is sort of correct psychologically – we want to “hide” from the fact that our actions have consequences that we may not like, or be embarrassed about, so we make up all sorts of things to avoid the issue, therefore ideas like relativism have appeal.
I think there are two things we need to remember, both to allay this qualm (let’s call it the qualm of the Central Scrutinizer) and to keep an objective ethics liberal (or libertarian in a general sense):-
1) you’re always talking about each individual moving their body into action based on their own, private punt re. the truth, and since (at the present stage of technology) individuals’ control over their bodies is inalienable, then a) you have to allow some elbow room for people to make mistakes about what they believe is true (yet at the same time, they have to accept the consequences of those mistakes) and b) if you want to change peoples’ behaviour without force, you’ve got to appeal to reasons that appeal to them (at whatever stage in their quest for truth they are). It’s really the same rationale for (a certain degree of) economic freedom as for (a certain degree of) social freedom and for (a certain degree of) intellectual freedom (the scientific method). We don’t know a priori what’s right, true, good, etc., so we must arrange social circumstances so that it’s more easily discovered.
2) everyone’s private punt re. truth is based on limited information. Granted the idea that you’ve got to “do right by others” as part of your own personal pursuit of satisfaction, there’s an inherent limitation, a sort of “mandala” (the Hindu/Buddhist tantric image used to represent a king and consort, then their courtiers and govt machinery, then the workers and peasants, then foreigners, etc.). At the centre, we can know our own business, our own particular circumstances, what makes us happy, etc., fairly well, and we’re tolerably able to know our family and friends’ circumstances (this is partly why we don’t mind the kind of interference from friends that we wouldn’t take from strangers – it’s not just that we can feel confident that our friends and family have got our best interests at heart, but that they know our circumstances fairly well, much better than a relative stranger would). But the further we get from ourselves and our small circle of family, friends, etc., then “doing right by” people towards the fringes of our mandala becomes less and less concrete and more and more abstract, more and more a matter of following general rules (i.e. of a commitment to allowing all our actions to be conditioned by certain abstract rules or adverbial qualifications) precisely because our knowledge of their circumstances is becoming less and less concrete and more abstract – and more subject to the possibility of plain error.
This feeds back into the social contract idea too – “ought” implies “can”, and we can’t make general(izable) rules, or expect others to follow general rules, or do our bit to enforce general rules, on the basis of limited information, but we can make rules, and expect others to follow rules, whose content is itself general, abstract, etc. (“act in such a way that …”; or to put it another way, act with an adverbial qualification – “kindly”, “respectfully”, “conscientiously”, etc.) That way, by enacting and encouraging rules that apply both to us and to those at the fringes of our personal mandala equally (abstracting away both our and their concrete circumstances and “reducing” us both to bare agents), we clear the way for those at fringe of our mandala, with their more concrete knowledge of their circumstances, to be able to improve their lives and the lives of those around them without interference, and without interfering with others’ attempts. If they and we both have individual freedom that’s being upheld by whatever society they and we share, and we’re doing what we can to support that general umbrella society, then that’s the best we can do for them without knowledge of their concrete circumstances (of course it’s still open to us to find out more and do more concrete things if we’re moved to do so, but that’s beyond the question of what we must do, what we ought to do).
With these kinds of qualifications in mind, it’s then easier for people to accept that while yes, there is an objective ethics somewhere out there in possibility space (and while we have a good sense of the general outline), we’re not necessarily trying to straightjacket people solely on the basis of our own knowledge (which might yet be wrong) which we’re trying to impose on them – we thereby avoid fear of the Panopticon, of the Central Scrutinizer. We allow elbow room for an overall (Bayesian) iterating social discovery process (of what’s good, true, etc.), and for the fact that we ourselves may at any given moment be mistaken in our punt as to what’s good for others. So everything (so long as it’s within the bounds of a commitment to reasonableness) remains somewhat loose, somewhat tolerant of error.
I concur. I’ve made similar points before, in scattered places. But it’s nice to have a summary here.
“If it were a universal law that single patients attending hospitals can be killed to save five, no one would ever go to hospitals.”
But hypothetically, what if people weren’t smart enough to realize the risks involved in going to hospitals, or if there was some way to completely hide these murders so the public would not know, so the consequences you spell out could be avoided? Then would it be okay to murder one to save five?
No.
Because you can’t ever successfully hide such things long enough for the principle to hold. Disaster is inevitable.
It is also far more expensive in all resources and other consequences to attempt to maintain such a secret. Are you, like Hal 9000, also going to kill everyone who might otherwise expose the secret? So now you need a massive spy agency to detect them, a network of murder squads, etc. … The social and material costs are mounting here.
Moreover, by agreeing to hide such a thing, you immediately must live in constant fear of what else is being hidden from you. You may just as likely be chopped up for someone else’s good somewhere else in society. Thus, you cannot actually tolerate a social system that would be willing to sustain such a secret as you would then be privy to.
And finally, you can’t hide this from yourself, so your conscience will haunt you. Only delusion can escape this consequence, and delusion is bad for you. This relates to the problem with trying to embrace bad epistemologies as some sort of utilitarian solution, as if false beliefs and lies can be useful in conditions where good epistemologies will expose them, requiring the adoption of bad epistemologies, which then in turn have a broader negative consequence that is intolerable, therefore ruling that option out. See my discussion up-thread.
All of this is also why totalitarian states, which do indeed attempt this hiding tactic, become universally miserable. Even the privileged elite in them live with perpetual fear, paranoia, and kinds of deprivation other than material. And such societies always end badly.
-:-
For more on the conscience point, see my discussion of the Magic Pill scenario (an extreme form of what you are proposing) in Goal Theory Update.
As someone much better versed in Kant than I pointed out when we were studying Kant, this is a misinterpretation of what the categorical imperative meant. It doesn’t actually make a claim that you try to universalize the maxim and see if you still like or would want a world where that maxim was universal, but instead makes a logical claim: can you universalize it and not have it be self-defeating? In short, can you universalize it and still have it make sense — logical sense — for it to be a maxim at all? The common example is lying, and I think the logic goes something like this: the purpose of lying is to get people to believe that something false is true. But if you made lying a universal maxim, then not only would everyone lie, but more importantly everyone would KNOW that in that case you’d be lying, and if they know that you’re lying, then they won’t believe you. Thus, there’d be no point in your lying at all, and so no point in having that as a maxim. Thus, lying cannot be moral by that maxim, but telling the truth can be. This also takes care of most of the exceptions, as even in the case where you are confronting a murderer, if it is known that you will lie in that situation, the murderer won’t believe you, again making lying pointless.
In terms of killing, I think the issue here is that you’re reducing the second principle to the first, but the admonishment against killing only follows directly from the second: killing someone else, even in self-defense, is always using them as a means to an end. In the self-defense case, it’s as a means to preserve your life. Thus, if we take the second principle seriously, it DOES mean that killing, even in self-defense, is wrong.
So it seems to me that you reduce Kant’s deontology to consequentialism only by defining it in a way that makes it consequentialist, even though Kant didn’t actually do that. After all, he criticized the STOICS for being too hedonistic, so even if it boils down to consequences it won’t be the sort of consequences that you favour to get to your hypotheticals, but again Kant doesn’t seem to actually do that.
The one always reduces to the other.
Countless systems of laws that are internally coherent can be imagined. So how do we narrow them to just one? By reinserting the consequential context. Thus, you end up doing just what I said: asking what the consequences would be if a rule were universalized.
For example:
Note you just listed a social consequence. A hypothesis about what will happen in a social system. The hypothesis is actually false. We function perfectly well knowing that lying is allowed in certain circumstances. We don’t assume everyone is lying all of the time, and of course no one would propose a universal rule that we do so. Precisely because of the social consequences you enumerate.
In reality, we evaluate when lying is okay and when it is not, and we adjust to the expectation that everyone else will operate that way (and do our best to catch people who operate differently, or protect ourselves from that contingency: risk management being a necessary component of any true system of imperatives).
Your counter-example doesn’t work, since it is not the case that the murderer will know you are lying. At best they simply won’t know if you are lying, or don’t have the information they are asking for. Often, they won’t even know you know their intent, or your opposition to it. That’s why it works. Many a Jew was saved by a liar in Nazi Germany. So the prediction that it wouldn’t work is falsified by reality.
But notice that even supposing that prediction would come true is a prediction of consequences. So even you could not avoid justifying a categorical imperative by appealing to purported consequences.
There is no avoiding it. No matter how much Kant may have wanted to. And that’s the problem I am illustrating and pointing out in this article.
Note that preserving your life treats yourself as an end and not a means, whereas not doing so allows you to be used as a means and not an end (by the murderer), making you an accomplice in the violation of the second form.
We can also do both under Kant’s maxim, of course; he did not rule out using people as a means as long as we also treat them as an end. But self-defense does not treat the murderer as a means (you aren’t using them; you are stopping them). And it can still recognize them as an end in themselves: you can pity them for the fact that they have compelled you to stop them by killing them, so as to treat more people as an end than your not stopping them would.
One can say that’s not what Kant meant, but then you are saying his second formulation contradicts the first. And then his second formulation simply becomes false.
Once again, to imagine that stopping someone is immoral because it doesn’t treat them as an end in themselves entails being against all justice. You can have no prisons, no fines, no punishments of any sort, for any crime. The destruction of society that would ensue is so ridiculous that no such system of imperatives has any possible claim to being true.
Even restricting this point just to death, a society that will never kill, will become enslaved by the horrible tyranny of a society that will, resulting in a total violation of the second formulation, by allowing everyone to be treated as a means and not an end. Such a system of imperatives is self-contradicting, and thus violates the requirement of logical consistency.
So if the second formulation is not synonymous with the first, it becomes false. Whereas if synonymous, it entails stopping harm is morally necessary, because it increases the number of people being treated as an end, relative to the result of not so acting. The murderers thus killed are still to be pitied on this formulation (the most we can treat them as an end in themselves is to have sorrow for the fact that they must die). But the moral necessity of instrumentally stopping them remains.
The problem is, Kant can’t avoid doing that. That’s a flaw in his reasoning: the task he imagines is impossible. As a result, he ended up covertly sneaking in countless references to consequential context when arguing for a rule, as I demonstrated with examples. Not least his own admission that the truth of his categorical imperative stemmed from an egoist consequence directly to the moral agent. Which consequence ends up in actual practice depending on the overall consequences of universalizing certain rules.
Note that the problem I illustrate is that Kant is ignoring consequences in his reasoning even when he logically cannot do that to get a true proposition. Just as the utilitarians are ignoring the consequences Kant was calling attention to. When we include all the consequences, we end up with a unification of Kant and Mill. QED.
Off-list, an Alex wrote:
I do not see this as a relevant distinction. Even apart from the fact that consequences are consequences, and deciding on consequences is consequentialism, regardless. That Kantians are adding up consequences the Utilitarians overlook does not matter to the actual definitions used in philosophy.
But I mean, indeed, even apart from that:
The utilitarian also cannot be assured of their predicted consequences, and in fact many of the things they expect will not transpire, due to unforeseen contingencies. So that the predicted effects are imaginary and only hoped for is also a property of utilitarianism. This has long been a leading criticism of Utilitarianism by the Kantians in fact.
Meanwhile, if it were really true that the Kantian cannot expect anything to be different from their actions, then they can have no justification for engaging in those actions. An imaginary universe is irrelevant to how we ought to behave in the real universe.
Which is why Kant reduced his categorical imperative to the pursuit of a singular internal consequence (and thus to a hypothetical imperative): the consequence to oneself of accepting a certain behavior. Once you factor that in, it’s just egoist consequentialism. And if you remove it, you remove all claim to its imperatives being true (and Kantianism becomes self-evidently false).
But that consequence to oneself requires that the systemic consequences also be true: that encouraging more people to behave that way will make the system better. And that is just rule utilitarianism. Which is a standard consequentialism. Follow the link in the article on rule utilitarianism to understand the point.
That cannot be true. If it were true, Kantianism would be false. Because it would lack any real-world motivation. And imperatives must be motivating to be true. “You ought to do x” requires that it be true for you that you ought to do x.
And as even Kant himself argued, this requires two things to be true: that it actually does improve the agent’s happiness to act this way (an egoist consequence) and that the behavior that does this really will make the whole world better if more people practiced it (a rule utilitarian consequence).
So it is in no sense true that these things don’t matter to the Kantian. They matter fundamentally to the Kantian.
Notice how this is untrue: the informed Kantian most certainly will approve of lying in all circumstances that save an innocent from being murdered. For the exact same reasons the rule utilitarian would. Plus one more reason on top of that, and indeed because of that: as Kant himself said, when you are fully conscious of reality, following this rule will make you feel better about yourself.
Please read my article more carefully. I show, with many examples, that this assumption in philosophy is false. Kant’s own reasoning shows it is false. It is the consequences of a behavior being universalized that determine for him which categorical imperatives are true. And when one follows his own reasoning, you get different results than Kant claimed. Kant simply didn’t follow his own rule.
That’s another false distinction. Those reduce to the same thing in set theory. See my discussion of this point in an earlier comment up-thread.
That’s logically impossible. See linked comment above for why.
Actually, no, it isn’t. It is logically impossible to exclude them. See linked comment above.
Again, the analysis you are attempting here violates logic. See linked comment above. It is logically impossible for Kantian ethics to exclude exceptions. Of either type.
Followup:
This is not relevant to my point. Because my point is that those differences exist only because each system is ignoring consequences the other is calling attention to. When we unify all consequences, the differences vanish. They become the same system. And since it is a fallacy to ignore evidence, and consequences are evidence (that pertain to Kant’s claimed motive for us to obey his imperatives), it is a fallacy to ignore consequences. Unless your objective is to develop a system of Kantian imperatives that has no claim to being true. But that would not be a worthwhile endeavor. We want to know which imperatives are true.
See above: this is not relevant to my point. “Using it two ways” is just saying what I am saying: Kant is ignoring a whole set of consequences; the utilitarians are ignoring a whole set of consequences. When we stop ignoring consequences, the difference vanishes.
Indeed. All consequences. Therefore, a consequentialist who ignores consequences, isn’t getting correct conclusions even on their own hypothesis. When we add the consequences Kantians are talking about (including those you are pointing to as their concern), we get a different—and more correct—result. But this goes both ways: Kantians can’t ignore evidence either. That’s a violation of basic epistemic logic. When the Kantians do what they are supposed to and take into account all the consequences that actually determine whether it is true that we should obey those imperatives (by Kant’s own reasoning as to why we should: his covert declaration of an egoist consequentialism to the agent), we get a different result than Kant claimed.
So, when you don’t violate logic (e.g. by ignoring evidence; by not respecting the requirements for a proposition to be true and not merely coherent; etc.), the two systems become the same system.
Not if we follow the first formulation of the categorical imperative. Certainly, Kant did not do that, so his error resulted in incorrect results. But I am not talking about his mistaken system. I am talking about what happens when you follow his own rule correctly. As in, without logical fallacy, and as stated. When you do that, the categorical imperative logically entails “the morally right action is the one with the best consequences.”
Note the difference: you are talking about what Kant said, which my article explains was often fallacious; I am talking about the logically necessary consequences of what he said (i.e. what happens when we remove his fallacies).
You are confusing historical statements about what Kant said, with the actually logically entailed truths of what Kant said. Those are not the same thing. I am talking about what’s true. Not what Kant said was true.
In the same way I point out that the utilitarians said one thing, but what they said is false, because when you follow their own reasoning, you get a different result than they claimed, e.g. when you take consequences into account, you must include the consequences Kant was talking about, including consequences to the agent (his consequentialist motive for following his imperatives, the only thing capable of making his imperatives true) and consequences of the sort you describe (regarding the way the consequences of adopting rules are assessed).
When you avoid the errors of both Kant and Mill and correctly follow their own reasoning, you end up in the same place: both Kant and Mill were looking at the same moral system, each from a different angle, and neither complete. And thus, neither, alone, correct.
If that were true, then Kantian ethics is false when applied to the real world. And we only live in the real world. Our only interest should therefore be in imperative statements that are true in the real world.
So either you are not concerned with what’s true (in which case, you are missing entirely the point of my article: I’m getting at what imperative system is true, not which ones different people just happened to have proposed), or you aren’t aware of how Kant argued his system to be true. If the latter, read my article more carefully, as I explain what Kant said about that, and why it is crucial, and why it changes everything.
That doesn’t change the fact that he cited an intrinsic motivation that entails concern for extrinsic motivation. He cannot avoid the facts of reality. His attempts to do so were fallacious and produced false results. When we don’t use his fallacies but stick to valid logic, and take correctly into account the actual facts of the world (such as what actually makes people feel the way Kant claimed, or actually can do so), we get different results. This is the whole point of my article. Likewise with his categorical imperative as first stated. Which is, as I just noted and as I explained in the article, a hypothetical imperative after all. There is no such thing as a true, non-hypothetical, categorical imperative. Kant would scoff. But alas. His own attempt to prove otherwise ended up proving the reverse. As even he had to appeal to a hypothetical in the end. He just didn’t notice.
Kant is stuck with two options: admit his categorical imperatives are all false; or admit they are all hypothetical imperatives that reduce to egoist consequentialism (by his own words, as I quoted).
It just so happens that when we follow that egoist consequentialism in applying his first categorical formula, we end up with full consequentialism. Kant did not know that. But alas, it’s what is logically entailed. It cannot be escaped. As my article explains. You do not seem to have paid attention to the article’s actual arguments here.
All of us. We know vastly more now about human psychology and social systems than Kant did. And we also can see his fallacies and thus correct those logical mistakes and restore logical consistency between his claimed motive for his imperatives being true and what is then entailed by his first formulation of the categorical imperative.
By the same token, we know vastly more now about everything than Mill did when he formalized his consequentialist system. We know more facts. We know more about the logical entailments.
You are confusing what Kant said with what is logically entailed by what he said.
I think you seem to think I am writing a piece in history of philosophy. I am not. My article is not about what Kant thought historically long ago. My article is about what is morally true.
Yes, they do. They all become f{x}, a function on a set. As I showed in the linked comment.
No. There is only what is in the set. Period.
Every imperative can be stated as f{x}. Period.
There is no way you can avoid reducing every imperative to some f{x} form.
It does not seem you understand how language works. Or set theory.
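The set-theoretic point can be sketched formally (my gloss; the notation is an illustration of the f{x} claim, not the article’s own formalism). Any rule with exceptions, “do A, except in circumstances C, where instead do B,” is extensionally identical to a single total function on the set of possible situations:

```latex
f(x) =
\begin{cases}
A & \text{if } x \notin C \\
B & \text{if } x \in C
\end{cases}
```

So “a rule plus its exceptions” and “one exceptionless rule” pick out the same function $f$ over the same set; the difference lies only in how the function is described, not in what it maps any situation to.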
I can’t fathom what you are trying to say here about “the set of possible values,” or about words being exception based.
You can’t avoid this.
And, more importantly, neither could Kant. That Kant did not realize his logical mistake is moot now. We do realize it. And when we include the correction, we end up with my result: Kant was looking at consequentialism all along, just from a different angle.
This is the last entry by Alex. He still does not seem to understand what my article is arguing.
That is not an “accident” of language. That’s deliberately what the word means. Consequences are consequences. You can ignore some consequences and pretend you aren’t basing your conclusions on consequences. But that is untrue. Kantians are still adding up consequences. They are just adding up different ones, ones the utilitarians ignore, while largely ignoring the consequences the utilitarians focus on.
Meanwhile, when we try to get Kant’s imperatives to be true, by his own reasoning as to what motivates them (egoist consequences, a fact which you still keep ignoring), we end up discovering we have to include even the consequences that utilitarians concern themselves with.
So you have a choice: either Kantian ethics collapses into and completes standard consequentialist ethics, or Kantian ethics is false.
That’s the argument of my article here, and likewise my article in TEC. You simply aren’t responding to my actual argument.
And my article argues that when you try to get their propositions to be true, they end up having to unify all the consequences.
So, again, take your pick: do you want to talk about false theories of morality, or true ones? I’m only interested in (and only talking about) the latter.
It is, when what you are looking for is the truth. When we fix what is “unsatisfactory” we discover they were both looking at the same system from different angles. That’s what my article shows. You don’t seem interested in addressing the actual argument of my article. You seem obsessed with a tautology that if we only look at what, historically, particular people said, and ignore their failure to actually follow their own logic or use correct facts, and thus ignore any concern for what is actually true, then differences “remain.” Well, duh. That’s why no one has noticed they were talking about the same thing.
This is like saying that when a blind man touches the trunk of an elephant and reports a snake and another touches a leg of that same elephant and reports a tree, we should conclude they are not talking about the same one elephant. I’m the guy pointing out that they are talking about the same one elephant and just touching different parts of it, and that if we follow the procedure through — their own procedure (in the analogy: touching all the way through) — we will discover that that’s the case. It makes no sense to respond to this argument by insisting that snakes and trees are different and therefore we have no right to say it’s an elephant, because these dead guys hundreds of years ago said so.
Nothing here addresses the actual argument of my article.
This is again like insisting that the “snake” and the “tree” are not connected therefore it’s not an elephant, because the two blind men insist it’s not connected and refused to explore their own procedure beyond their single effort. The premise is false (they are connected, even if the two reporting the tree and snake do not see this) and the conclusion is false (it is an elephant). That’s the argument of my article.
I’m discussing what those theories logically entail, that their founders did not think through or realize.
That’s what it means to be concerned with what is true, and not with what certain persons historically and mistakenly said.
This is an example of what I mean: you are confusing history of philosophy with philosophy.
I will repeat this one more time:
I am not writing an article about the history of philosophy.
I will repeat it again so that you know I’m really really serious that this is a really really important and crucial point and you won’t understand any of this conversation until you understand why this point is important:
I am not writing an article about the history of philosophy.
I am writing an article about philosophy. As in, the actual quest for the truth. I am asking what is actually true. Not true about history. True about morality.
Moral theorists have been trying to explain a fact of the world (moral reasoning). They propose theories. Those theories are attempting to explain an actual thing. I am interested in what the actual thing is. Each of these theories, when carried to its actual logical conclusion with correct facts, ends up describing the same actual thing. That’s not what the theorists noticed or said. It’s what is actually the case.
…of the truth about moral facts.
When we correct errors of fact and logic, all these theories collapse into one.
QED.
That’s the argument of the article. You have yet to ever respond to the actual argument of the article.
Your argument seems to be “I am upset that this wasn’t an article about the history of philosophy.” Sorry. Like these theorists were, I’m interested in what the true moral theory is. I’m not interested in the historical contingencies of how these old dead guys missed it. Nor should philosophy as an academic field. Leave history of philosophy to the history department. Philosophy departments should be looking for the true moral theory. Not endlessly repeating past failures to find it.
That’s absurd.
One does not need to time the fall of every apple on earth to determine the value of g.
So, evidently, even you couldn’t fathom what you were trying to say about set theory. Noted.
You never responded to, or even described, my argument about set theory eliminating your claim about exceptions. I have to conclude you never read it or never understood it.
Not everything can be consequentialized. Campbell Brown demonstrated this in his 2011 paper.
[1] Campbell Brown, “Consequentialize This,” Ethics 121.4 (2011), http://www.jstor.org/stable/10.1086/660696
That paper fails for one simple reason: he demonstrates no actually non-consequentialist moral system to be true. As in, a system of imperatives anyone actually has sufficient reason to obey over all other imperatives.
Since we are only interested in true moral imperatives, the result of that paper is of no use.
He also commits other logical errors in that paper; e.g. he falsely claims consequentialism entails only one maximization output, when in fact that’s exactly what Harris’s landscape theory disproves; my traffic systems example also disproves its underlying assumptions. Similarly, his definition of consequentialism is inadequate, excluding many other forms of consequence to consider—in fact, he arbitrarily plays a semantic game by ruling moral relativism and ethical egoism non-consequentialist, which is a perversion of the English language and an insult to philosophy. But all that is moot anyway. Because he generates no true propositions about morality. Indeed, he never even asks what the truth conditions for a moral system are, much less ever applies them. This is a common folly in academic moral philosophy.
Which doesn’t entail they’re untrue, nor does it entail that *all* non-consequentialist moral theories are untrue as you’ve claimed elsewhere. You’ve frequently charged Harris with making this exact mistake against criticisms leveled by other philosophers.
Firstly, Brown doesn’t falsely claim anything. Consequentialism has no rigorous formal definition, so any claims of falsehood are merely wishful thinking.
Brown provides a formal definition while being quite charitable to the consequentialist’s position. If you disagree with the specific properties Brown describes, namely agent neutrality, no moral dilemmas and dominance, then describe which ones are false and why, and provide your own properties formalizing consequentialism.
If instead you just axiomatically assume that anything of interest can be consequentialized, then consequentialism becomes vacuous and of no interest.
Finally, I suggest you read the paper more carefully because Brown already covers traffic rules and such in his derivation of consequentialism, and his justifications for arriving at the definition he does is perfectly reasonable. You have raised no serious objection to this derivation beyond claims that you simply don’t like the outcome.
Because that’s not the point of the paper. The point is to test the claim that any moral theory can be consequentialized, and it answers that question in the negative.
Which doesn’t entail they’re untrue.
If you want to say there is a true morality that isn’t consequentialist, you have to present one. Failing to do so, fails to establish the proposition. That’s how this works.
Consequentialism has no rigorous formal definition, so any claims of falsehood are merely wishful thinking.
If I make decisions according to what the outcome will be, Brown’s definition entails I am not making consequentialist decisions.
So either you choose to speak English, or you play his dishonest shell game that tries to change what things are, by changing what they are called.
I prefer speaking honest English, over games that hide the truth.
Brown provides a formal definition while being quite charitable to the consequentialist’s position. If you disagree with the specific properties Brown describes, namely agent neutrality, no moral dilemmas and dominance, then describe which ones are false and why, and provide your own properties formalizing consequentialism.
I did. They are all either consequentialist, as in, they all base the morally true on some outcome measure (some consequence), or they are not demonstrably true and therefore irrelevant.
This is even admitted in the paper—he just avoids using the word “consequence” to describe the outcome measures, the consequences, that define moral rightness in agent neutrality, for example.
Brown is playing semantic games that hide the consequentialism in his proposed moral systems.
I am speaking English.
Any philosopher who says ethical egoism is not a consequentialist theory is being dishonest.
And only honesty produces truth.
Because that’s not the point of the paper. The point is to test the claim that any moral theory can be consequentialized, and it answers that question in the negative.
But if that’s only true of false moral systems, it’s moot. That’s my point. So to show there is a true moral system that is not consequentialist, you have to actually provide one. You can’t claim gremlins exist, and expect not to ever have to prove any exist.
Hi Dr. Carrier,
I’m afraid you’ve definitely misunderstood Kant here. You wrote:
“Kant’s first formulation of the categorical imperative remains the most familiar: ‘Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.’…In short, the morally right act is that act you would gladly wish everyone perform. But on what basis do you decide what behaviors you would wish to be universal? Well, guess what. Consequences.”
Your ‘in short’ explanation is highly misleading, for the universalization of a maxim has nothing whatsoever to do with what you’d ‘gladly wish’. Rather, it concerns whether universalizing a maxim results in what Korsgaard calls a practical contradiction (or, if no practical contradiction results, a contradiction in the will).
What is a practical contradiction? Suppose you will the end of receiving money as a result of a false promise. If you will the end, you necessarily will the means. Hence, you will the causal efficacy of the means vis-à-vis your end. The question raised by the universalization procedure is, would my means retain their causal efficacy if my maxim were universalized? Here we can easily see the answer is ‘no’. For if everyone accepted the maxim, ‘lie to receive money when you need it’, no one would lend money, since a promise to repay would not be trusted. But then you cannot *both* will your maxim *and* its universalization. For that results in a contradiction: by willing the universalization of the maxim, you will the causal inefficaciousness of the maxim’s action, but by willing the maxim, you will the action’s efficaciousness. Hence, the maxim fails the categorical imperative, and your acting on it is impermissible.
Note that nowhere does this explanation appeal to the consequences of acting on your maxim. For the problem is not that if the maxim were universalized, then such and such would result. Rather, it’s that *you* cannot both will the maxim and its universalization without contradiction. (In other words, in principle another maxim with precisely the same universalization consequences could be permissible to act upon *if it does not result in a contradiction with the original maxim*).
Note that this isn’t some idiosyncratic take on Kant. Rather, Korsgaard’s practical-contradiction understanding of the contradiction (in conception!) that results when a maxim fails the categorical imperative is widely accepted among Kant scholars.
However, since the very first step of your argument relies on the reduction of deontology to consequentialism (a decidedly suspect move, given the sundry varieties of both deontological and consequentialist ethics!), the rest of the argument necessarily fails.
I’m afraid you’ve definitely misunderstood Kant here.
None of the four philosophy professors who peer reviewed my chapter said so. Including an expert on Kant (Erik Wielenberg).
I think I’ll trust their judgment over yours.
In my chapter, I point out that merely not being contradictory is not a truth condition for moral propositions.
But at any rate, Kant was clear that what motivates agreeing to abide by such dictums is the desire of the agent. I have a direct quote in TEC. That ends the argument.
This is true even at the meta-ethical level (the reason Kant gave for all categorical imperatives to be regarded as true for any agent). But it’s also true at the ethical level (how you decide which outcomes are to be preferred over others requires desiring them, e.g. a coherent evil moral system is possible, if you do not regard human suffering as a negative outcome; Kant was thus sneaking desires in even at that stage as well). And even within the system apart from that, you can have a non-contradictory dictum “kill only those attempting to kill you” (fully realizable without practical contradiction), and yet why prefer that over “kill no one”? The latter has more practical contradictory outcomes (e.g. it entails suicidal behavior and thus suicide, which Kant deemed immoral). But how you choose which outcomes “contradict” the dictum always relate to agent desires (e.g. how else could Kant say suicide is bad, or even any kind of killing or being killed at all for that matter?).
For example, as you yourself note:
For if everyone accepted the maxim, ‘lie to receive money when you need it’, no one would lend money…
That’s literally false. Some people would lend money anyway. Others would just vet you first. Which is exactly the system we have (banks and lending agencies always assume you are lying—they check facts instead; or else they charge exorbitant interest to compensate them for the risk, and take steps to increase the likelihood of recovery if you default).
So it isn’t even true that a maxim “lie to receive money” results in any contradiction. The system that results is perfectly coherent.
The only way you can gainsay this is to appeal to agent desires, e.g. if the agent wants a system whereby borrowing money is cheaper and easier, they shouldn’t lie to borrow money. But if the agent is perfectly happy with a lending system that expects lying (and thus compensates for it), in what way is the maxim “wrong”?
And you even realize this but don’t notice it. See your own words…
…by willing the universalization of the maxim, you will the causal inefficaciousness of the maxim’s action, but by willing the maxim, you will the action’s efficaciousness. Hence, the maxim fails the categorical imperative, and your acting on it is impermissible.
“Efficaciousness” is a consequence (indeed it’s a synonym).
That’s exactly my point.
You have to desire that the lending system be “efficacious” to even get a reason to disapprove of the maxim.
But that then gets us all the way back around again to the first point: Even assuming you correctly describe a maxim that is devoid of “practical contradiction,” why care? What reason does anyone have to believe that that maxim is moral, or at all anything they should adhere to? The answer always ends up appealing to agent desires: what the agent would or would not want as the outcome. And Kant fully admitted so. Again, I quote him directly saying so.
Rather, it’s that *you* cannot both will the maxim and its universalization without contradiction.
Note that that is a consequence.
Why prefer that consequence over any other?
The answer is always an appeal to agent desires.