Last Friday the 13th I discussed the future of morality with Canadian philosopher Christopher DiCarlo. We advertised the subject with a double question: “Is Society Making Moral Progress and Can We Predict Where It’s Going?” The description was apt:
Drs. DiCarlo and Carrier will discuss whether or not we can objectively know if societies are making moral progress, who defines moral progress, and how we might reconcile the fact that different societies have different standards. Much of the conversation will also focus on the concept of free will and the freedom (or lack thereof) that humans have in making ethical decisions.
The dialogue will be conducted through the use of critical thinking and rational thought in an effort to come to a better understanding regarding the future of ethics.
Video may be available online someday. But here I’m going to discuss and expand on what I there argued. DiCarlo and I are both atheists, secularists, humanists, and naturalists, so we agreed on most everything, except a few key things that are worth analyzing here. Principally:
- DiCarlo is a Hard Determinist, meaning he rejects Compatibilism. I found his defense of that stance multiply fallacious, and I think it leads him to propose societal attitudes and advances I consider disturbing. I am of course a well-known defender of Compatibilism in the tradition of Daniel Dennett and likewise a well-known defender of the crucial importance of individual autonomy—in reasoning, belief-forming, and decision-making. As I’ve written before on the subject of moral theory, “things go better for everyone when we cultivate respect for personal autonomy and individualism,” which mandates implementing (as we have in fact done) an empirically detectable distinction between the presence, absence, and degree of individual free will.
- DiCarlo thinks we must build an Artificial Superintelligence that will tell us what is right and wrong. In short, he wants us to submit to an AI like a secular Moses, delivering unquestionable commandments from on high (“on high” in this case being an inscrutably complex algorithm). I think this is extraordinarily dangerous and should never be contemplated. I can qualify that somewhat (as I will below), but overall I did not find his ideas about this to be realistic, implementable, or as useful or safe as he imagines. It has also of course never been needed before (we’ve made plenty of moral progress without it), and is unlikely to be achievable even in principle for at least half a century—if not centuries—rendering it a useless answer to the present question, “What is genuinely the right thing to do?”
AI as Moses = Bad Idea
As I noted, we’ve made tremendous moral progress without AI, and we are nowhere near to developing the kind of AI that could help us with that, so it isn’t really a timely answer to the question of how we can tell what is and isn’t moral progress. We can tell already. How? That’s the question we need to be answering—and would need to answer anyway if we are ever to program a computer to answer it. And no computer can help us with that.
Computers are only as reliable as those programming them. And they only generate outputs based on our chosen inputs. Any human ignorance or folly you think an AI will bypass will actually be programmed into the AI. Because its core algorithms will simply encode the ignorance and folly of its designers. Even an AI that “self-designs,” and thus can “clean up” some of that mess (“a computer whose merest operational parameters I am not worthy to calculate—and yet I will design it for you!” as says Deep Thought) will only—yes, deterministically!—base its selection of “what’s a better design” on the core parameters input by its human engineers. It all goes back to humans. And their ignorance and folly. Which you were trying to get around. Sorry. Doesn’t work.
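To make that concrete, here is a deliberately trivial sketch (my own toy illustration, not a description of any actual AI architecture; every name in it, such as human_objective, is hypothetical). Even a program that repeatedly “redesigns” itself still ranks every candidate design against an objective its human engineers supplied, so their errors in specifying that objective are inherited by every “improved” generation:

```python
import random

def human_objective(design: float) -> float:
    # Hypothetical stand-in for whatever values the human engineers encode.
    # If their ignorance or folly is baked in here, it is baked in everywhere.
    return -(design - 3.0) ** 2  # the engineers decided "3.0 is best"

def self_improve(design: float, generations: int = 1000) -> float:
    """Toy 'self-designing' loop: it only ever keeps a new design if the
    human-supplied objective scores it higher. It cannot invent new values."""
    best = design
    for _ in range(generations):
        candidate = best + random.gauss(0, 0.1)  # propose a tweaked design
        if human_objective(candidate) > human_objective(best):
            best = candidate
    return best

print(self_improve(0.0))  # converges toward whatever the humans told it to value
```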
The only thing AI can usefully do for us—and I mean the kind of AI DiCarlo imagines, which is an incredible technology we are nowhere near to achieving—is “find” the evidence that a conclusion is true and present it to us so we can independently verify it. And even then we will have to enforce crucial caveats—it could have overlooked something; someone could have dinked with its code; it could have been coded with the wrong values and thus not even look for what it should; and so on. (The more so as DiCarlo imagines it being programmed by the United States Congress! Fuck me. I wouldn’t trust any computer that cosmic clown-car of fools and narcissists programmed. Why would anyone?)
In other words, this imagined AI will be just one more fallible source of information—even if less fallible than usual (and it might not even be that, given who programs it and decides what it will and won’t be told), we still have to make a judgment ourselves about what to make of its outputs. It can dictate nothing. We can, and ought to, question everything it tells us. Which leaves us to answer the question we were trying to answer from the start: How do we tell whether what it’s telling us is moral progress and not moral error? We have to do all those calculations ourselves anyway. So we still need to know what those calculations are. This can’t come from a computer. A computer could only ever get it from us.
Such a machine will be about as useful to moral theorists as a telescope is to astronomers: it can find stuff we might not see by ourselves, but it can’t tell us what to make of that stuff, nor that it’s being correctly represented to us, nor that that’s all the stuff there is—nor, crucially, can it tell us what we should be looking for. Telescopes hide, err, and distort. And don’t decide what’s important. We do. So we need to know how we can decide that. Even to have an AI to help us, we have to have that figured out, so we can program the AI to go find that thing we decided is important. It can’t tell us what’s important without reference to what we’ve already told it to reckon as important. So it always comes back to us. How do we solve this problem? “A computer can do it” is analytically, tautologically false.
Of course, added to all that is the fact that such an AI is extremely dangerous. It won’t likely or reliably have human sentiments and thus will effectively be a sociopath. Do you want moral advice from a sociopath? And it is more likely to be abused than used well. After all, if Congress is programming it, if a human bureaucracy is programming it, if any system can be hacked (and any system can), do you really think its inputs and premises will be complete and objective and unbiased? So we should be extremely distrustful of such a machine, not looking up to it as our Moses. There are ways to mitigate these dangers (programming human emotions into it; making it open source while putting securities and controls on alterations to its code), but none are so reliable as to be wholly trusted—they are, rather, just as fallible as human society already is.
And then, on top of all that, the best such a machine could give us is demonstrably correct general rules of conduct—and I say demonstrably, because it must be able to show us its evidence and reasoning so we can (and before trusting it, must) independently verify it’s right—but that doesn’t wholly help individual actors, whose every circumstance is unique in its context. I gave the example at the event that I use in my peer-reviewed paper on fundamental moral theory in The End of Christianity: we might be able to demonstrate it is relevantly true that “you ought to rescue someone drowning if and only if the conditions are safe enough for your abilities as a swimmer,” but the computer can’t tell every individual person whether “the conditions are safe enough for your abilities as a swimmer,” or even what the conditions or your abilities as a swimmer are. Generally only you, actually being there at the time, will be able to answer these questions. Which means you have to exercise your own personal autonomy as a moral reasoner. There is no other way. Which means we need to program you to do this job of independent moral reasoning. The computer can’t shoulder this task.
Humans must be independent moral reasoners. Only each individual has efficient access to their own self and circumstances, and to a system for analyzing that data to come to anything like a competent conclusion, as we do every day of our lives. We thus must attend to programming people. Not computers. We need every individual human to have the best code in place for deciding what is likely moral and what’s not. Even to reliably judge the computer’s conclusions correct, humans need this. They need it all the more for every decision they will actually have to make in their lives.
So we shouldn’t be talking about AI here. That’s a trivial and distant prospect. Maybe in ages hence it will be a minor asset in finding general rules we can independently test, one more tool among many that we tap in reaching conclusions on our own. But we will still always have to reach those conclusions on our own. So we still have to answer the question: How?
We Must Properly Input Human Psychology
Hard Determinists, like DiCarlo and Sam Harris, have a beef with human emotions. They think determinism will convince us to abandon anger, for example—which entails abandoning, by the exact same reasoning, love. And every other emotion whatever. They think “hard determinism” will get emotions out of the way so we can make “perfectly rational decisions.” False. Scientifically false. And analytically, tautologically false. Without emotions, we would make no decisions at all. Emotional outputs are the only thing we make any decisions for. They are therefore inalienable premises in any line of moral or any other kind of decisional reasoning. (See my discussion of the actual logic and science of emotion in Sense and Goodness without God, § III.10.)
I may have mentioned at the event the Miranda story from the movie Serenity (based on the Firefly television series): the government (the same one that DiCarlo would have design his AI Moses, take note) thought it could improve society by chemically altering humans to no longer have emotions, with the result that they all just sat in their chairs at work, did nothing, and starved to death (while a few reacted oppositely and became savage berserkers, of course, which was more convenient for an action movie plot). This is what the world would be like if we “got rid of emotions.” Reason is a tool, for achieving what we want, which is what pleases and does not disturb us. Without emotions, there is nothing to desire, and thus no motive to need or use that tool, and no end for which to use it. We just sit in our chairs and starve to death.
Emotions are not merely fundamental to human experience, and necessary for human success—we cannot program emotions out of us, like anger and fear and loathing and disgust, and still consider ourselves human—but they also evolved for a reason. They serve a function. An adaptive one. You might want to examine what that function is, before throwing out the part. “I don’t like how noisy this timing belt is in my car, so I’m just going to toss it.” Watch your car no longer run.
We need negative emotions, like anger and fear and loathing and disgust, for exactly the same reason we need positive ones, like love and acceptance and attraction and enjoyment. And it’s not possible to argue “We should abandon anger, because determinism” and not also admit “We should abandon love, because determinism.” Any version of determinism that leads to the one conclusion will also lead to the other. Which is maybe why you should rethink your conception of determinism. It’s fatally flawed. It is, rather, looking a lot more like fatalism. Which, unlike determinism, is false. As false as its mirror image, Just World Theory.
The challenge is not to suppress or argue ourselves out of our emotions, but to get those emotions to align with reality rather than false beliefs about reality. Emotions are simply value-evaluators. As such, they can be correct—and incorrect. Just like any evaluator. For example, fear must be targeted at the genuinely fearsome (like, say, a rampaging bear), not what actually poses no commensurate threat (like, say, a hard-working Mexican running the border to find a job). Excessive fear of things that aren’t really that dangerous is a false emotion because it is activated by a false belief; but fear of sociopaths, for example, is not excessive but aligned with reality and thus is a true emotion we ought to heed—and in fact depend on to survive.
The same applies to moral evaluation: to be motivationally and decisionally useful and rewarding, feeling admiration and respect and trust needs to be triggered by a genuinely admirable, respectable, trustworthy character and not triggered by a deplorable, unrespectable, untrustworthy character. If it misfires, it’s bad for us. But if it doesn’t fire at all, it’s bad for us. Hence the solution is not “getting rid of it.” The solution is programming it. Giving it better software. Making it work more reliably. Putting checks in place that allow us to verify it’s working properly. Teaching people how to reason. Which means, reason by themselves, autonomously.
Hate, like pain, serves to motivate avoiding or thwarting or fighting dangerous people; we need it. Love, like pleasure, serves to motivate drawing ourselves to benevolent people; we mustn’t apply it to dangerous people—they deserve our fear and loathing, as those emotions motivate correctly useful responses to them, preventing us from foolish or self-defeating actions. That love is “deterministically caused” is completely irrelevant to its function or utility. Likewise that hate is “deterministically caused” is completely irrelevant to its function or utility.
DiCarlo imagines a world where no one, not even psychopaths and people of despicable and dangerous character, deserves our loathing or disdain, but only our compassion and sympathy. This is false. They deserve our pity, and compassion insofar as we ought not dehumanize them and treat them barbarously. But they are dangerous. We need to be afraid of them. We need to not like them. It is only loathing and dislike of a bad person that causes us to avoid becoming one ourselves, and motivates others to avoid such as well. Which is why bad people usually invent delusional narratives about themselves being good people, rather than admit their actual character and behavior, so as to avoid any motive to change it. That’s a problem for how to cause bad people to be good, or children to develop into good people and not bad; but that’s simply a design issue. It does not justify treating good and bad people as all exactly the same. They aren’t.
“How” someone got that way is irrelevant. It does not matter to the fact that a rampaging bear is dangerous “how” a rampaging bear came to be one. It can matter only structurally, outside the context of a currently rampaging bear, so as to reduce the number of them. But faced with a rampaging bear, that’s irrelevant. You aren’t a time traveler. You can’t change the past. You need useful rules for dealing with the present. Our emotional reaction must be to the facts. Not to a fantasy about rampaging bears being just the same and deserving of the same reactions as a cuddly puppy, simply because they are “equally caused” to be what they are and do as they do. Wrong.
Excess sympathy can cause us to make bad decisions, just as excess or misdirected anger can, resulting in bears mauling us or others, when we could have gunned them down and saved lives—an action that requires a hostile emotion to motivate. We can feel pity for the poor bear, as it “knows not what it does.” But we also need to feel rage and fear to stop it. Like Captain Kirk, “I don’t want my pain taken away! I need my pain!” The same is so for every emotion.
This is especially true in a moral world. We need moral outrage. It is the only thing that motivates real change. Without it, we sit at our desks, doing nothing. The problem is not the force of moral outrage. The problem is when that outrage is misdirected (or out of proportion). Confusing these things seems a common folly of Hard Determinists. They do not believe emotions are ever “misdirected” or “disproportional” because they are all “equally caused.” Factual reality says otherwise. They are all equally caused. Yet some are correctly directed and attenuated, and others not. That is the only distinction that materially matters. That’s the dial we need to causally adjust. Not the dial for “how much outrage is ever caused,” but the dial for “what outrage is caused by” and “how much outrage for each given set of facts.” The former dial must never be set to zero. While the latter dials must be ever tweaked toward “zero error.”
Is Everyone Insane?
Who decides what the “norms” should be? DiCarlo kept referencing “the ones that are stated” as the ones that ought to govern. But merely “the ones that are stated” is not an answer—for we have competing moralities, some loathsome: like the “stated norms” that women must wear a burqa (Quran 24:31) and not have positions of authority over men (1 Timothy 2:12). So that doesn’t answer the question.
A similar question that came up is who decides whose DNA or brain gets meddled with and in what ways? For DiCarlo kept recommending this: a technology of neuro-reengineering to “fix” immoral people. But as I pointed out, such a technology is extraordinarily dangerous. It will be abused by people in thrall to false and loathsome moralities or false beliefs. Which means particularly governments—since those just are, as Shepherd Book notes, “a body of people; usually, notably, ungoverned.” Countless dystopian science fiction films and novels have explored this very outcome. Who decides which brains get changed and in which ways? Who decides who needs to be “fixed”? So proposing this technology only complicates, it does not answer, the very question we are asking.
DiCarlo suggested maybe it could be voluntary. But that doesn’t help. It doesn’t help with the problem of what we do with the people who do not rationally admit they are “malfunctioning” (as DiCarlo put it), which will actually be most immoral people; it doesn’t help with the problem of how even an individual can reliably decide what they should get fixed (“Shit, I’m gay. Oh no! That’s evil! I better get my DNA altered at the local chemist!”); and it doesn’t help with the meta-question governing both circumstances: How are we deciding what counts as a “malfunction” in the first place? What is moral? And why?
In reality, there is no way to persuade someone they are immoral other than causing them to realize they are immoral. But that very realization will have the causal effect of changing what that person does and thinks; it will already make them moral. They won’t need to “alter their DNA” or “modify their neural circuitry.” So we already have this technology. The most we can do is improve on it (in all the ways we morally educate, both children and adults; the ways we provoke self-reflection; the tools we give people to do that; and so on). And that we are already capable of doing and should be doing.
This idea of genetic and neural reengineering is largely useless. It can’t help us now (as no such technologies exist), it is unlikely to help us in future (as reliable judgments on what to change require, circularly, reaching a correct judgment about what to modify before thus modifying how judgments are made), and can only help us with ancillary functions (like improving our ability to reason, attenuating emotions to correct causes, and so on). It can’t answer the questions of what is a malfunction, how we know something should be considered a malfunction, and so on. Nor can it replace the system we already have for social self-modification: independent human reason. People can change themselves, through reflection and education, far more effectively and efficiently than geneticists or neurologists will ever be able to. The latter can at most improve the innate tools those people use for that self-reflection and education.
DiCarlo kept using the example of pedophiles, identifying the problem with them as being an immutable desire to have sex with children; which he therefore proposes can be fixed by genetically or neurally “removing” that desire. And pedophiles—in fear of prison, let’s say—will be motivated to voluntarily go in for the fix. But this is already possible. Even apart from chemical castration, an extreme mutilation. Because sex offender therapy is some of the most successful in the world, with lower recidivism rates than any other crime. We can already reprogram these people. We just need to do it. As in, actually implement and pay for the program. Which uses their already-innate powers of autonomous reasoning. No genetic or neural mutilation required.
The problem with pedophiles is not a desire to have sex with children. Any more than the problem with murder is a desire to kill people or the problem with lying is a desire to avoid the consequences of telling the truth—desires, note, all human beings feel at some time or other, yet most don’t act on—as if we could solve all moral issues by simply removing all the desires that would motivate misconduct. That is impossible; as I just explained, we need those emotions, so we can’t erase them. Their existence isn’t the problem. It’s how they are being directed or overridden.
People can self-govern. If they couldn’t, society would not even exist. So we know they can do it. That’s the technology we need to be improving on and using to this end; it’s already installed, and we already know what best employs it. And those who fail at it, we know from the science of psychology, usually do so because of erroneous beliefs, not desires. Pedophiles almost always have false beliefs that justify and thus motivate their molesting of children; such as that children are adult-minded and can consent and like it (all false), or other delusional or irrational assertions. Remove the false beliefs, and the behavior stops (see links above). Just as with nearly every other human being.
We have ample data from the kink community: nearly everyone in it understands the difference between wanting to do a thing with the other party’s consent and doing it without, and correctly governs their behavior. Millions—literally millions—of doms and sadomasochists don’t go around beating people without consent for their own pleasure. They act benevolently. Despite their desires. Because their desire to be good people exceeds and overrides their desire to please themselves—which self-governance is practically the definition of an adult. Pedophiles would act likewise. But for their false beliefs.
There are the insane, people who cannot control their actions despite desperately wanting to—but these people are rare. They are not normative examples of human beings. Note, for example, the difference between pedophilia as a mental illness and merely having a “pedophilic sexual interest,” as explained in Psychology Today. Not every bad actor is insane; in fact most are not. We therefore cannot solve bad acting by medicalizing it, by calling everyone “insane” and then drugging or cutting them up to “fix” it. This is the nightmare scenario DiCarlo imagines we should aim for; he literally could not comprehend the idea that only a few people are insane. He literally kept insisting everyone is insane—that there is no difference between an average bad actor and a crazy person. Sorry, but there is a difference—a scientifically documented difference. Our answers for society must account for that difference. Not ignore it.
The non-insane must be relied upon and treated as autonomous decision-makers whom we must cause to improve through education and persuasion. The actually insane who cannot self-govern we must lock up and treat only because we have no option left. Just as the sane who nevertheless end up in prison for bad acting should be targeted with education and other science-based techniques of reform.
I do agree that insanity might have cures in genetic and neural reengineering some day. But such treatments must be regarded the same as all medical treatments: professionally administered to legitimately diagnosed persons with informed patient consent. For example, if an insane pedophile, someone who experiences only constant distress at not fulfilling their sexual desires, won’t recognize this as a medical problem requiring treatment (and thus won’t seek it), then they are choosing that we treat their behavior criminally rather than medically. We ought to respect their choice.
The result either way protects society, deters crime, and can potentially reform the bad actor—and personal individual autonomy is respected. We thus rely on the individual decisions of autonomous agents, and decide outcomes by what choices they make for themselves, in light of how society must then respond to defend itself. “I don’t want to remove these distressing desires, I want to go to prison instead” is a fair decision we should let people make. Until we can persuade them to decide otherwise—that in fact “prison plus distressing desires” is worse than “freedom minus distressing desires.” Therapy in prison could be deployed to that end, the same as it would for a sane person (as Cognitive Behavior Therapy helps everyone).
This is all stuff we already know. It isn’t revolutionary.
Answering the Question
We broke the topic down into ten questions that build on each other. I’ll close out by giving my complete answers to each, which I only briefly touched on at the event.
1. What is morality? This is an analytical question: we simply decide what it is we are looking for. Then we can ask what satisfies that condition, which is then an empirical task.
You can look for things like “What people want other people to do” or “What a culture says a person should do” but these definitions of morality aren’t really what we usually want to know. It doesn’t help to know what culture says; we want to know if what a culture says is right or if one culture is right about this and another wrong. It doesn’t help to know what people want other people to do; we want to know if other people have a good reason to do that or not. When we really think it through, to get at what it is we really want to know, we find it’s one single thing:
What actually ought we do above all else?
Not what culture says we should do or what people wish we would do—because it doesn’t follow we should actually do any of that. The answer always depends on the goal each and every person has, that we ourselves have, if we want to know whether we actually ought to do a thing. And this always comes down to: What do you want to achieve above all else? What kind of person do you really want to be? How can you be most satisfied with the life available to you?
And not merely that, but only when you are reasoning without fallacy from true facts about yourself and the world—because all other beliefs are by definition false. And what we want to know is what’s true. What we actually ought to do. Not what we mistakenly think we ought to do. This I’ve already covered elsewhere. But it comes down to following the procedure first developed by Aristotle over 2300 years ago:
Ask of any decision you aim to make: Why do you want to do that, as opposed to something else? Then ask why you want that. And then why you want that. And so on, until you get to what it is that you want for no other reason than itself. That is ultimately what you really want, and want more than anything else. Because all other desires are merely subordinate to it, instrumental desires that you only hold because you believe pursuing them will obtain the thing you want most, the reason you want anything at all. (Which is why the pertinent cause of bad acting always comes down to false beliefs.)
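As a toy illustration of that regress (my own sketch; the chain of reasons below is entirely hypothetical), the procedure amounts to following each “why do I want that?” link until you reach a desire that has no further reason behind it:

```python
# Hypothetical chain of instrumental desires: each maps to the reason it is held.
reasons = {
    "get a promotion": "earn more money",
    "earn more money": "have security and free time",
    "have security and free time": "be more satisfied with my life",
    # "be more satisfied with my life" has no entry: it is wanted for itself.
}

def terminal_desire(want: str) -> str:
    """Follow 'why?' links until reaching a desire held for no other reason."""
    seen = set()
    while want in reasons and want not in seen:
        seen.add(want)        # guard against circular chains of reasons
        want = reasons[want]  # ask: why do I want that?
    return want

print(terminal_desire("get a promotion"))  # -> "be more satisfied with my life"
```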
Empirically we find the answer will always be: to be more satisfied with who you are and the life you are living, than you would be if you acted or decided differently. As well explained, from scientific evidence, by Roger Bergman in “Why Be Moral? A Conceptual Model from Developmental Psychology” (Human Development 2002). Morality is thus a technology of how to best realize our most satisfying lives and our most satisfying selves without depending on false beliefs about ourselves or the world. You can test this conclusion yourself by asking: After honest and full consideration, is there anything you actually want more than this? (And why? Pro tip: the answer to that question can’t be to obtain greater personal satisfaction; as then you’d just be proving there is nothing you want more than that!)
2. What about cultural relativism? Is morality merely the random walk of culture, such that no culture’s morality is justifiably “better” than any other? Or is there a better culture you could attain to? Are some cultures even now objectively better than others?
Of course this goes back to goals. What is your metric for “better”? What makes one culture “better” than another? What are we measuring—or rather, what are we asking about? What is it that we actually want to know? And the answer, with respect to morality, is what we just found: Which cultures’ moralities increase everyone’s access to personal satisfaction (with themselves and their lives), rather than thwart or decrease it? And can we construct an even better morality by that measure than even any culture has yet produced?
There are objectively true answers to these questions, because this metric is empirically observable and measurable independently of what you believe the answer will be. And as there is really nothing anyone wants more, no other metric matters—any other metric, we will by definition care less about than the thing we want most instead. So if morality is such a technology, of “how to best realize our most satisfying lives and our most satisfying selves without depending on false beliefs about ourselves or the world,” then there are objectively true and false conclusions about what’s moral.
Just as with any other behavioral technology, like how to perform a successful surgery or build a sturdy bridge: once you have the goal established, objective facts determine what best achieves it or undermines it. Moral systems that hinder or destroy people’s life satisfaction or that require sustaining false beliefs are failure modes; no one who really thought about it would want that outcome. So anyone acting otherwise is acting contrary to their own interests.
Which moral systems do that, or which facilitate life satisfaction instead, is an objective fact of the world, of human biology and psychology. In just the same way that Americans who think spending vast sums of money to stop immigrants and refugees will solve their problems—and thus back candidates who support that but block or remove all the social services that would actually solve those same Americans’ problems—are acting contrary to their own interests; because they have false beliefs about what will best serve their interests. If they were aware of this, they’d prefer to have true beliefs about what will best serve their interests and act accordingly.
3. What does it mean for morality to be “objective” morality? Or is morality all subjective? What does that even mean? What’s the difference?
Moral feelings are, like all feelings, subjective, but their existence is an objective fact. What we want out of life is felt subjectively, but is an objective fact about humans, about us. That all humans want that above all else is an objective fact about humans—no amount of disbelieving it can make it not true. So, yes, morality is objective.
Not only is what we all most want an objective fact, what best achieves that is an objective fact. For example, if you think pursuing excesses of wealth will lead to the most satisfying life, empirical evidence demonstrates your belief is false; we know it is more effective to make yourself into the kind of person you like rather than loathe, and to find a life that satisfies you regardless of income (once it meets all your basic needs). These are objectively true facts of the world. And thus so is what we should do about it.
4. How could we resolve moral disagreement? Which means disagreement among individuals, and also between cultures and subcultures. When we disagree on what’s moral, how can we find out who’s right?
Of course we must first seek community agreement that the answer to this question must be based on true beliefs, and that morals must follow logically rather than illogically from true beliefs. Societies that won’t agree even on that tend toward collectively miserable conditions; those of us who agree should thus exit and repel such bad communities. Progress toward real knowledge about anything, morality or otherwise, is only possible in a community that agrees only justified true beliefs are knowledge.
Once we have a community that agrees the true morality can only be what derives rationally from true beliefs, disagreement is resolved the same way as in any other science: evidence, and logical demonstration from evidence, will tell us what is good or bad. Which means, what actually will tend toward everyone’s satisfaction or dissatisfaction. This is how moral progress has been made in the past: persons who see that a moral claim (like, that slavery is proper) is based on false beliefs or does not logically follow from any true beliefs, then communicate this discovery. That causes more people to see the same thing. They then collectively work to spread that causal effect to yet more people, or to oppose the dominance of people who resist it (resisting facts and logic).
As an empirical fact we know younger generations are less set in their beliefs and thus more malleable and thus more open to change. They are less invested in false beliefs, and thus more able to abandon them. New generations afterward then increasingly grow up being programmed with the new moral understanding, so that it then becomes the norm. This is why moral progress is slow. It takes several generations to propagate through an entire society.
For instance, once advocacy for the morality of being gay spread widely enough, more people spoke openly of it and were more openly gay; younger generations then grew up seeing there was nothing wrong with gay people, that all the beliefs sustaining their oppression were false; and thus they rejected those beliefs and adopted moral conclusions in line with the truth. The generations after them are now being taught this new moral knowledge as their baseline standard. This is objectively measurable progress—from false beliefs to true, first about the world, then about morality. For moral truth is a direct consequence of truths about the world.
5. Are people free to choose their morality? In one sense the answer is no, in that people are caused to believe what they do by what culture they are programmed by, and how their brains are built, and other happenstance facts of what experiences they encounter, and ideas they happen to hear, and so on. But in another sense the answer is yes, for we see it happening all the time: moral progress has occurred precisely because people can jailbreak their own cultural and biological programming, hack their own software, and change it.
Humans are information processors capable of analyzing, criticizing, and making decisions about what to believe or how to behave; they are limited by their causal inputs, but they are not random automatons. They think. They therefore can make choices, and thus change. Still, even that requires the right causal circumstances. But all that means is that we need to encourage and spread those causal circumstances. People don’t become moral but by being taught and educated and given the skills they need to discover the truth and encouraged to use them. But they aren’t just servers we can pop open to rewrite their code; people must analyze and judge the information you give them and thus decide to rewrite their own code.
The causal features of culture we must encourage and sustain, precisely because they make that more and more possible, include (but are not limited to):
- Social endorsement of criticism and open communication, i.e. freedom of thought and speech.
- Social endorsement of empiricism, reason, and critical thinking as virtues necessary to a respectable person and a safe and productive society.
- Social disparagement of anti-rational memes, i.e. beliefs and ideas whose function is to stop or dissuade free thought and speech, open criticism, empiricism, or critical thinking.
- Enacting these endorsements, e.g. social investment in teaching empiricism, reason, and critical thinking skills universally, and in exposing and denouncing anti-rational memes.
In such an environment, the human capacity to change one’s mind and align beliefs and morals with the truth is increased, and thus moral progress is accelerated. Yes, it always does depend on convincing people to choose a different way of living or thinking. “Convincing” is code for “causing.” But not by coercion (or anti-rational memes); rather, by persuasion aimed at activating their own internal machinery for evaluating ideas, by appealing to rational thought.
This is different from just seizing people and altering their neurology or DNA without their consent (or even with it), for example. Praise and blame causally change the world; they are therefore never a mechanism we should or even can do away with, as DiCarlo incorrectly argues we should. And again, appealing to the insane changes nothing; the insane are not normal cases, because the insane by definition cannot reason. The sane can. We therefore must rely on their abilities of reason. We cannot treat everyone as insane and expect to have a functional society. And history shows: people can reason their way into a new and better morality. Moral progress would never have occurred if they didn’t.
6. Can people be persuaded to change their morals? Or are they inexorably programmed by their upbringing and biases?
The short answer is: yes to question one; no to question two. Yes, there is programming and bias that blocks many from reasoning well and realizing the truth, locking people in obsolete patterns of belief (especially when they are full of anti-rational memes), thus slowing societal progress to a scale of generations or centuries. But there is always a substantial percentage of people that this doesn’t suffice to suppress; and by winning that percentage generation after generation, the old ideas gradually become displaced. Historical evidence shows all moral progress proceeds this way.
People can be persuaded to change their morals by their own autonomous reasoning (self-realization, producing the first movers for change). People can be persuaded to change their morals by public or peer-to-peer persuasion (caused by hearing new ideas and criticism of old ideas and evaluating them rationally, producing the first adopters of new moralities). And people can then be persuaded to adopt new morals by the same cultural programming that installed false moralities in their predecessors (being raised among accepters of the new ideas, producing regular adopters of the new morality).
7. So, is there moral progress? And does it require God?
Yes to question one. No to question two. That there are conclusions about best behavior (“morality”) that improve everyone’s access to being more satisfied with who they are and the life available to them is a fact. That improvement along this metric has happened is a fact. Neither involves, implies, or requires any god to exist or be involved in any way. There is likewise no evidence of any God’s involvement in any of the moral progress we have made. To the contrary, the evidence matches his complete disinvolvement.
The excruciatingly slow pace of that progress actually proves no god has been involved. A god would be better at constructing our brains to more easily discover and admit sound moral reasoning, and a god would be better at communicating and persuading, and thus generating the necessary cultural conditions for far more rapid adoption of moral advances. The absence of these things is thus evidence God does not exist!
8. Who defines moral progress? By what metric? How would they justify themselves to us? Why would we agree with them?
We’ve of course already answered this from question one: the metric is what behaviors, if people followed them, would realize for themselves the most satisfying lives available to them. And this is defined by the very thing we are asking: how we should behave. Once we discover what it is we want above all else, our metric is established. And it is thus established by reality. Not by any authority.
To do even better at discovering true moral facts, we have to study individuals to discover what leads them to respect rather than loathe themselves (when they are reaching only logical conclusions from true beliefs), in order to determine what will actually help people achieve greater satisfaction with who they are and the lives they are living. We have to study what behaviors statistically produce the best outcomes for every individual by that metric.
For moral reasoning in every individual, we must ask: What happens when we replace false beliefs with true beliefs? And then what happens when we replace fallacious inferences with logically valid ones? The result is that evidence and logic trump all authorities: not a who, any more than science is governed by a who, but a collective of critical empiricists seeking consensus, and presenting the evidence and logic so everyone can, should they wish, independently verify the conclusion follows.
9. What moral progress have we made by that metric? And why can we say that is, in fact, progress? What justifies calling some things progress, and other things regress?
At the event I listed four general areas of real, verifiable moral progress that has been made in society, spreading across the earth:
- Equality (e.g. ending the subordination of women; denouncing the role of social class in determining rights; etc.)
- Autonomy (e.g. ending slavery; promoting liberty; developing doctrines of consent in sexuality, medicine, etc.)
- Empiricism (valuing evidence over authorities)
- Acceptance (ending bigotries, e.g. homophobia, racism; increasing tolerance for alternative cultures and individuality; etc.)
We can present empirical evidence that each of these has increased access to personal and life satisfaction: certainly increasing the number of members of society who can access it, but also increasing ease of access for everyone else, by reducing self-defeating attitudes and behaviors. Opposition to acceptance, equality, and autonomy has societal and personal costs, now done away with. The effect is extensively measurable (see Pinker’s Better Angels of Our Nature and Shermer’s The Moral Arc). Meanwhile opposition to empiricism has societal and personal costs that hardly need explication.
Moreover, each of these four arcs of moral advance was the inevitable outcome of advances in factual knowledge of people and the world. As false beliefs were replaced with true, these were the conclusions about morality that arose as a result of switching out the premises.
In addition, moral progress still advances along the same lines of previously empirically established moral truths, namely the discovery thousands of years ago of the supremacy of the values of compassion, honesty, and reasonableness (which latter includes such subordinate values as justice, fairness, cooperation, and rational compromise). Indeed, these underlying values drive the newer four arcs of moral advance.
10. Can we predict future progress? If we can identify past progress up to now, can we predict future progress? Can we predict where it’s going?
In large part no, because we don’t know yet what beliefs we have that are false, the correcting of which will lead to different conclusions about what’s moral. Just as with all other sciences. If we already knew what we were wrong about, we wouldn’t be wrong about it anymore. But in small part yes, just as in some sciences we can speculate on what more likely will turn out to be true or false in future, so we can in moral science.
In past movements toward progress, we see a small number of people communicating what they notice is false and thus should change; and also others who make false claims about this. We can tell the difference, though, by noticing which ones are basing their proposals on reliable claims to fact and sound reasoning. We can likewise see the same now, and so the parallel holds: a small number of persons claiming our morals should change; they disagree on what the changes will be, so even if some are right, some must be wrong; but some are using evidence and reason better.
Of course there may be true moral conclusions that should replace current beliefs that none of these competing moral changers have yet perceived. But of those currently fighting for change, who are thus at least proposing hypotheses for changing out our morals, we can spy the difference by noticing which ones are basing their claims on reliable claims to fact and sound reasoning. Which also means, as you might notice, that our ability to predict future moral progress is precisely what causes that moral progress, by increasing the pool of advocates from first movers to first adopters, and thence to parents and educators, and thence into future societal norms.
For example, the American gun rights debate, when analyzed, finds those promoting the morality of widespread, unregulated ownership of assault rifles are basing their position on false beliefs, whereas those advocating the immorality of that appear so far to have a conclusion that follows logically from true facts (even after we eliminate all claims they make that are false). It does not follow that all (or any properly regulated forms of) private ownership of assault rifles is immoral. But the current regime of unregulated and even promoted dissemination—even the underlying “guns are manly and cool” culture—is very probably profoundly immoral, as being supremely irresponsible and dangerous.
Other examples abound. Personally, I think I can predict future developments toward ending the moral assumption of monogamy, toward increasing acceptance of radical honesty, toward better treatment of animals in industry (but not leading to veganism or even vegetarianism as moral norms), and toward more respect for autonomy and acceptance and equality in the sex industry, just as we are now seeing already starting to happen in the recreational drug industry (just as already happened for alcohol). These ideas are currently only at the first mover or first adopter stage. But I anticipate within a few generations they will be cultural norms. And will represent moral advances by one or more of the four moral arcs I identified above, or their previously established underlying values. And we will rightly look back on them as such.
Richard–this reminds me of the maxim about truth I shared with you at lunch: “Absolute truth may exist, and you may even possess it, but you can’t know that absolutely.”
Dale O’Neal
It’s unfortunate I was unable to see this, and I’m hoping a video does emerge at some point. It’s great to see more content on morality here!
I am having a hard time grasping some of these arguments, although it might be because I was unable to see the debate. However, points of confusion or disagreement below.
“Will only—yes, deterministically!—base its selection of “what’s a better design” on the core parameters input by its human engineers”
Can it not determine the success of its updated design based on something it can measure (the same way we do), for example based on some simulation or interaction with the world? What measures are desirable is obviously an important question, but why is it not just as capable as us to “independently verify” the moral claim? Why could it not be, in principle, more capable?
Tracing back into the past, we start from points of moral ignorance, where we were not entirely sure what the moral outcomes were, and we were not entirely sure how to obtain them. Since then we have made progress and have learned more, from the bottom up. It’s not clear to me why AI systems are, in principle, incapable of learning, from the bottom up, in the same way. Why do you think there needs to be (I think you are saying?) an already highly sophisticated set of parameters to work from the top down, considering we got to this state (eventually) without them and can hopefully make continued progress?
In whatever way humans have made progress from our simple starting point to now being better equipped to answer moral questions, can AI systems in principle not also do the same, and in principle potentially do it better? Can it not at some point know better than us what we ought to want and how we may be able to get it?
I think the main issue I am having is that it seems you are saying, maybe, that an AI needs to know things we ourselves don’t yet know in order to answer moral questions we could eventually answer without currently also knowing those things. We continue to learn more, but an AI, in principle, could learn faster. (Notwithstanding your arguments about practicality and all the reasons we shouldn’t blindly trust an AI, all of which I agree with.)
I am also not sure I understand your discussion of emotions either. Why does the AI need emotions per se? Since as you say “emotions are simply value-evaluators,” can’t an AI evaluate values without the need for felt emotions?
Minor point, but Sam thinks that anger has no rational basis on hard determinism because any reason to act out of anger can also be understood and evaluated without anger, based on its predicted consequences; and he thinks precisely the same is true for love, but he sees no reason to abandon love because the very feeling is so desirable and pleasurable. At least that’s how I understand him, but I may be off?
Taking your fear of sociopaths discussion as an example, an AI can avoid and not trust psychopaths without the sense of “fear,” just by way of calculation of probabilities based on inputs. I don’t think the idea that anger is “deterministically caused” was Sam’s issue, but rather that the behaviour of the target of that anger was deterministically caused. One can act in such a way as to deter future harm without hating the target (scientifically, an AI, in principle, could). Also, [Hard-Determinists] do not believe emotions are ever “misdirected” or “disproportional” because they are all “equally caused.” I don’t read Sam as saying that; rather, some reactive behaviours driven by hate are misdirected and others are not, and that, he thinks, it is possible to determine which without needing to feel any hatred in the first place.
“We need to not like them. It is only loathing and dislike of a bad person that causes us to avoid becoming one ourselves, and motivates others to avoid such as well.” I am not clear on this. I need not dislike people who have a mental disability to wish to avoid drinking some concoction that would give me the same type of mental dysfunction. I wish not to be a psychopath, but I am failing to see the unique utility of hating psychopaths. You have discussed elsewhere that psychopaths might be genuinely unhappy because they lack access to the types of social pleasures neurotypicals are able to have. So I could wish to “cure” a psychopath because I don’t hate them but would rather they be able to experience happiness without harming others, or instead I would just want to lock them away because who cares what they are able to experience as long as they are gone (but are not being tortured). Hate seems to point to the latter, but I am aware you don’t actually think the latter, so I am sure I misunderstand you here. You don’t hate the rampaging bear, and nor do you wish to be a rampaging human, so why ought you hate the rampaging human rather than acting so as to avoid harm while helping the human, as best you can, to be normal rather than rampaging? You might shoot the bear to protect yourself, or you might stun it and let it go in the wild where humans won’t be harmed; it seems unnecessary to then shoot the bear in anger afterwards, as it seems unnecessary to shoot the human afterwards.
“The problem with paedophiles is not a desire to have sex with children.” If some people did not have a desire to have sex with children, we would have fewer harmed children, but importantly, fewer suffering adults who are actually able to override that feeling so as to never harm a child but spend a lot of their life in pain because of those same desires. The problem for non-offending paedophiles is not just a set of false beliefs.
“he literally could not comprehend the idea that only a few people are insane. He literally kept insisting everyone is insane—that there is no difference between an average bad actor and a crazy person.” This makes me think my confusion with this article is because I missed the debate, because I don’t know DiCarlo’s work and this makes HIM seem insane, so maybe there are subtleties in the parts that I am disagreeing with that I am missing.
“If an insane paedophile, someone who experiences only constant distress at not fulfilling their sexual desires, won’t recognise this as a medical problem requiring treatment (and thus seeking it), then they are choosing that we treat their behaviour criminally rather than medically. We ought to respect their choice.” Does this hold for all types of insanity and their respective potential treatments? This is probably too big a conversation, so feel free to ignore this bit!
Tiny point: “only when you are reasoning without fallacy from true facts about yourself and the world—because all other beliefs are by definition false.” You can reason fallaciously from false information and accidentally come upon the right answer, so it’s not the case that they would be “by definition false,” but rather only that they would be very likely to be false.
Sure. Doesn’t help. What metric does it use to measure? It has to ask us. It cannot circularly know in advance what we want most out of life. That’s data it has to get from us. So if we get it wrong, it will only ever be looking for the wrong thing. Moreover, we cannot know, even if we get that input right, that its output will be reliable (and not in error or meddled with) until we can vet its answers ourselves. So we still need the independent judgment and skills to do that. AI can never replace that function.
It will. In the way a telescope is, relative to unaided human vision. We still need to independently verify it’s getting correct information without distortion or interference. And we still are the ones who have to decide where to point it. AI cannot replace those functions. And therefore AI can never answer the question “Is there moral progress and how do we know?” That’s rather an input we have to give it. And a question we already need to have answered whereby to vet its outputs.
That isn’t the problem. It could figure out what’s best for itself that way. But it can’t figure out what’s best for everyone that way. The latter requires data about “everyone.” Which it can only get from us. So we have to already have solved the problem of what to tell it, what questions to ask it, what things we want it to go looking for. And then, even then, we have to have the skills and judgment to verify it didn’t fuck it up or wasn’t fucked with; its answers to every question have to always be independently verified. So we cannot abandon the need of ourselves having the skills and judgment to reliably verify its answers. Thus AI can never replace that function for us. It will just be one tool among many we can use. But it will never be Moses.
That doesn’t help with the theory that adopting Hard Determinism will solve the problem. You cannot eliminate the phenomenology without excising the emotion neurally. Which we cannot do. There is no being human and not feeling anger. And as a matter of psychology we know anyone who thinks they can philosophically suppress their anger—is going to be worse at controlling it, not better. Those who deny their anger, are inevitably angrier people. And we can see this from Harris’s frequent missteps caused by his anger that he thinks isn’t affecting his conduct. He gets angry a lot. Better to acknowledge anger is an innate and necessary feature of human existence and learn to evaluate its appropriateness rather than pretend you don’t or won’t feel it. Emotional intelligence requires acknowledging and building experience with your emotions. Not denying them.
That’s not really true. Anything that adequately motivated an AI to respond reliably to sociopaths would feel bad to that AI; if it felt good, it would motivate the wrong behaviors; if it felt nothing, it would motivate no behaviors.
I think where you’re going wrong is that you think philosophical zombies can exist, and therefore consciousness can be motivated by the absence of any feeling. That’s a logical impossibility, IMO. Motivation is by definition emotive. Just as a visual system will always generate visual qualia in a self-cognitive matrix, so a motivational system will always generate qualia. And those qualia will correspond to the appropriate motivation. Thus reacting correctly to bad things, will always require feeling bad about them in some fashion.
It might theoretically be possible to build an AI who responds to negative situations on a positive motivator (“I enjoy thwarting sociopaths!”), but it’s not clear that actually works sufficiently (else why didn’t evolution produce that?). But even if it’s possible, we are now talking about a completely alien consciousness, not a human one. Maybe the AI will decide that’s what should happen; we should wipe out the human race and replace it with new alien beings with completely different minds. But how will you feel when the AI Moses you built decides to just start doing that, for that very reason? Maybe you will start to see the problem here.
Which is a completely unhelpful observation. All behavior, good and bad, is deterministically caused. Therefore its being so gives you no information regarding how you should respond to it. Emotionally or otherwise.
And that’s impossible. So much so that anyone who thinks this is endangering themselves by diminishing their own emotional intelligence, and thus their competence at dealing with the emotions that in reality they can never be rid of and should never want to be rid of. Better to be better at reading and evaluating and understanding your anger, than falsely telling yourself you don’t and won’t have any. Humans don’t work that way. And I doubt any AI we could ever trust would either.
That’s crossing analogies. I don’t discuss bears in that context. Bears are not cognitively aware. We hate cognitively aware bad actors, not cognitively nonaware ones. Hatred is a different emotion evaluating a different thing than fear. Hatred evaluates the motives (the intentions) of an agent. As such it correctly triggers if the agent is knowingly malicious. Bears never are. But, when badly acting, people usually are. Especially sociopaths.
And if no one had any desires, there would be no crime at all. This is looking at the wrong thing. “We could get rid of rape if we castrate everyone” is true. But is missing the point of the very moral reasoning we’re talking about. We want to motivate self-governance, not take away everyone’s desires. The problem with sexual desires is not their existence. And anyone who doesn’t recognize that, isn’t ready to talk about moral theory.
Yes. Except for psychosis, where one needs to be medicated even to comprehend reality so as to make competent choices at all. But even then, one can indeed make the choice on meds that they don’t want to continue being on them, and would prefer to be psychotic in an institution. And that’s literally what we do: if someone refuses to be non-psychotic in public, we take them out of public.
But we can never know that. So it’s a useless condition. It therefore is not applicable to discovering moral facts. (See The Gettier Problem.)
“It cannot circularly know in advance what we want most out of life. That’s data it has to get from us.” Could this information not be available purely through observation, rather than having to directly ask? We can work out what is best for dogs without having to ask them. (I just want to stress that I agree with all your reservations about distrusting an AI in practice rather than being servile to it.)
“It will. In the way a telescope is, relative to unaided human vision. We still need to independently verify it’s getting correct information without distortion or interference. And we still are the ones who have to decide where to point it. AI cannot replace those functions.” Replace the telescope with the chess playing computer. A chess playing computer can see further ahead than a human, but the human can only verify that the computer was giving good information at the end of the game; they can’t necessarily verify it during the game. If a computer told us the most moral action, it might be right or wrong and we won’t necessarily be able to verify it directly (without, for example, waiting years to see what the results were). So we could trust the AI, and that might be a good or bad decision, in the same way we could trust the instructions of a chess playing computer to the end of the game or not. The important point is that in principle (all things being equal, no nefarious programmer behind the scenes) it would be best to trust the chess computer, because in principle it would be better than us at achieving the outcome. (I just want to stress again that I agree with all your reservations about distrusting an AI in practice rather than being servile, but not because in principle it could not answer these questions better than us.)
“That isn’t the problem. It could figure out what’s best for itself that way. But it can’t figure out what’s best for everyone that way. The latter requires data about “everyone.” Which it can only get from us.”
Can it not get that from observing us and simulating us rather than needing direct input from us?
“[Sam Harris] gets angry a lot.” He does indeed.
“Better to acknowledge anger […] than pretend you don’t or won’t feel it.” I still think that’s being a little unfair: I don’t think he would deny feeling angry, just the utility of feeling angry.
“Anything that adequately motivated an AI to respond reliably to sociopaths would feel bad to that AI; if it felt good, it would motivate the wrong behaviours; if it felt nothing, it would motivate no behaviours.” The word “motivated” might be confusing things here (might be confusing things for me!). A chess playing computer, based on inputs from the current layout of the chessboard, will make a move so as to increase the likelihood of winning. It’s not clear to me that an AI reacting appropriately to a sociopath requires anything more than that. It’s not clear to me either way what it will require.
“I think where you’re going wrong is that you think philosophical zombies can exist” – Just to be clear I do not think philosophical zombies are possible, but that’s not to say that I don’t think I am going wrong somewhere all the same though!
“Motivation is by definition emotive.” Chess playing computer again: what language would you rather use for that, as opposed to “motivation”? It receives information about the world and acts on the world, and that’s all I need the AI to do. “Thus reacting correctly to bad things, will always require feeling bad about them in some fashion.” A chess playing computer reacts to a move by an opponent that is “dangerous” for the likelihood of the computer winning. It reacts to this with a counter move. It doesn’t have to feel bad regarding the dangerous move or feel good about the counter. I honestly think this is a key area where I might be going wrong, rather than you, but I can’t yet see it.
“Maybe the AI will decide that’s what should happen; we should wipe out the human race and replace it with new alien beings with completely different minds.” Well, at first glance, the prior of that being the “right” answer seems much lower than the prior that the AI has made an error. That is enough for me to not trust the AI on anything. But that’s specifically a broken AI. (Of course, in principle, since we can never know whether it is broken or not, we can never be servile to it!)
Regarding the parts about Hard Determinism and Sam’s views, I wasn’t intending to defend HD but rather just making sure the views were represented accurately, whether or not they were correct.
“Better to be better at reading and evaluating and understanding your anger, than falsely telling yourself you don’t and won’t have any. Humans don’t work that way. And I doubt any AI we could ever trust would either.” That last part is really interesting and I don’t know where I stand on it. That is to say, I am not sure what the limits are of the ability of non-conscious AI to make decisions in the social sphere. I don’t think consciousness is an epiphenomenon, I think it generates competence, how much competence is generated and how much competence is required seems to be an empirical question.
On the paedophile section, a paedophile might wish to be subject to some safe and otherwise harmless treatment that would remove their sexual desires towards children. They could very well believe that it is their desire that is the problem and that they would live a much happier life without it. Would they be in the wrong for believing this and would you rather they simply kept their focus on self control?
“But we can never know that. So it’s a useless condition. It therefore is not applicable to discovering moral facts.” It was exactly my point that we could never know that. Thus, we can’t say that “by definition they are false”, we can only say that by definition we have no warrant to believe it. That is not warrant to believe that they are false. Their truth value is unknown and only their justification is rejected.
Thanks for engaging, Richard. Because of the length of the replies it seems the disagreement is bigger than it is. I agree with the main thrust of the argument and it seems I fall on your side rather than DiCarlo’s, in so far as you have represented him.
We are deciding that. We could decide that dogs would be better off not existing and make them extinct. We could decide pain is cosmically awesome and torture dogs. It’s up to us. Dogs aren’t cognitively aware so they can’t weigh in as to what’s best; they have no opinion on the matter and never could. Humans are cognitively aware. So they do have opinions as to whether they are allowed to exist, and in what ways and forms, and to what ends.
We cannot surrender that to a machine. It could only determine such answers by getting them from us. And thus can be as in error about that as we are; and even if it could do better, we still would have to verify it was right. So we’d have to already know what the right answer is, or already know how to ascertain it. There is no non-circular way out of this loop. Humans just simply need the skills and judgment to ascertain the right answer. Machines could help them find it—but can’t ultimately tell them what it is without our already having the skills and judgment to confirm it’s right. So we need the skills and judgment. Period. There is no replacing that with any machine.
False analogy. We decide what constitutes winning at chess and that winning at chess is the thing worth doing. The machine can never do either. See the problem?
And that’s before we even get to the second problem of verification. We already know what constitutes winning at chess. So we already know what we are looking for; the computer isn’t giving us that information. We can’t ask the computer, “What constitutes winning at chess?” without already knowing what the answer is or how to figure it out. And we can’t ask the computer, “Is winning at chess what we should care about?” without, again, already knowing what the answer is, or how to figure it out. We thus have to already know what we are looking for. Machines can’t answer that question for us. They can just look for what we tell them to look for. And we still have to independently verify they’re getting correct results. So we still need the skills and judgment to do that. So we cannot surrender that to the machine.
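To make that concrete, here is a minimal sketch (my own illustration, not anything DiCarlo proposed, and the function names `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders) of how any game-playing engine is structured. The rules and the measure of “good” are parameters humans hand the machine; the search just optimizes whatever they say.

```python
# Minimal sketch of a generic game-search engine. The rules (legal_moves,
# apply_move) and the value function (evaluate) are all supplied by humans;
# the machine only searches within the goals we define for it.

def best_move(state, legal_moves, apply_move, evaluate, depth=3):
    """Pick the move that maximizes a human-supplied evaluation function."""
    def search(s, d):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)  # "what counts as winning" comes from us, not the machine
        # Negamax search: each side optimizes the same human-given value function
        return max(-search(apply_move(s, m), d - 1) for m in moves)

    return max(legal_moves(state),
               key=lambda m: -search(apply_move(state, m), depth - 1))
```

Hand it an `evaluate` that scores losing positions highly and it will just as diligently lose. It has no independent opinion about whether the goal it was given is the right one, which is the whole point.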
No. It can never have the requisite information. That’s actually, literally, mathematically impossible: it requires doing more calculations than the entire universe has processing resources to even attempt.
(Here though we are talking about an AI micro-managing every individual’s daily life. If we went back to the broader subject of general rules, a machine could find those, but we’d still have to have told it what to look for, and we’d still have to have the skills and judgment to verify it got a right answer. There is no way out of that circular loop.)
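For a sense of the scale I mean, here is a rough, purely illustrative calculation (the menu size and the brute-force assumption are mine, chosen only to make the point; the 10^120 figure is Seth Lloyd’s well-known estimate of the total elementary operations the observable universe could have performed in its history):

```python
import math

# Illustrative assumption: micro-managing one person's life means weighing
# preferences over a modest menu of 100 options, and a brute-force planner
# would have to consider every possible ranking of them.
rankings_per_person = math.factorial(100)   # about 9.3 x 10^157

# Seth Lloyd's estimate of all elementary operations the observable universe
# could have performed over its entire history:
universe_ops = 10 ** 120

print(rankings_per_person > universe_ops)   # True: one person already exceeds it
print(f"{rankings_per_person:.1e} rankings vs {universe_ops:.1e} operations")
```

A real planner would of course approximate rather than brute-force; but the approximations are exactly where our inputs, and our errors, re-enter.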
And IMO, that’s precisely his folly. If he would instead learn to identify and distinguish when and where anger is useful, he’d develop better skills at managing and responding to his own anger. In other words, denying that it is ever useful is precisely what leads to the failure mode I observe in him and am talking about here.
It’s just following the rules we gave it, a mere robot. That’s not the kind of AI that can accomplish moral reasoning. Such an AI would need to grasp phenomenology (e.g. it would need to be able to comprehend human emotion). Otherwise it will just be a sociopath that mechanistically finds whatever we ignorantly and erroneously tell it to look for, giving not one real whit if that’s good or bad. That’s worse.
If it doesn’t care, we are fools to ever act as if it cares, and we certainly can never trust that it will come up with anything actually good for us. It certainly can’t do so if it can’t understand us and can’t even comprehend what we are talking about as what matters—like emotional states, which are in fact, ultimately (in conjunction with the cognitive realities evoking them), the only thing that matters and is even capable of mattering.
You should never trust a machine that neither feels nor understands feelings. Such an entity can only be a mindless automaton—or a dangerous monster.
No. That is not ever any calculation it makes. A chess playing computer has no comprehension of danger. It just does what we tell it. Mindlessly. With no comprehension of, or concern for, outcomes at all. And with no way to judge if it should even be doing that, or should even be following those rules. It’s thus not analogous to the kind of AI DiCarlo needs.
Any nontrivial probability mandates we vet the answer before trusting it.
Ergo, we must have the ability to vet it.
Ergo, we cannot surrender the ability to vet such claims to machines.
I would rather they develop self-control because it’s a universal tool—they need it, and we need them to have it, for every other desire they have or will ever have.
Which is smarter? Wearing shoes or paving the earth with leather? People who can make decisions for themselves? Or people who instead of that just cut their balls off? Which is the proper analog here: pedophiles can already do that; its being available as an option has effectively zero effect on the problem, because effectively zero people avail themselves of it, and we can well understand why. So why should we still be discussing it? Let’s get to solutions demonstrated to work, and that don’t require mutilating healthy organs in lieu of simply learning how to be a decent human being.
Nevertheless, we already have the tools to change desires. We don’t need to mutilate brains. That’s my point. Follow the links I gave. The cure for criminal pedophilia is rationally transferring desires to suitable subjects and behaviors through cognitive behavior therapy. Which amounts to “removing” one desire by replacing it with another (more correctly, redirecting a desire rather than deleting it). It’s better to use the reasoning tools an autonomous individual already has, than to ignore developing those tools and tell people instead to just fuck with their brain.
What we are talking about here is discovering, verifying, moral knowledge. Everything else starts with a near zero prior, because most claims by far are false (put all possible moral claims in a hat and draw one at random: the base rate at which it will be correct is nearly zero). Thus, accidental knowledge is useless. “But some of this stuff over here in this pile of unknowns that all has a low prior of being true might contain some true stuff!” “Yep. So? Can you find which stuff in that pile is the true stuff?” “Well, no.” “So, what do you propose?” “Uh…well, yeah, we can’t do anything with that.” “Right.”
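To put toy numbers on that (mine, purely illustrative): with a near-zero prior and no evidence, Bayes’ Theorem leaves you exactly where you started, which is why a lucky true guess can never be distinguished from the false claims around it.

```python
# Toy Bayesian illustration with made-up numbers: a moral claim drawn at random
# from the hat starts with a tiny prior probability of being true.
prior = 0.001

# "Accidental" belief means belief without evidence: whatever you observe is
# equally likely whether the claim is true or false (likelihood ratio = 1).
likelihood_ratio = 1.0

posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 6))  # 0.001: still the near-zero prior we started with
```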
How is that any different from anyone’s sexual desire, or even any desire at all? That’s either a grossly disturbing view of humanity in general (i.e., everyone must be mere inches away from becoming a rapist at any given moment) or our hypothetical pedophile has a much bigger problem. Or in short, in the absence of any evidence to the contrary it would certainly appear that they’re wrong.
“Maybe the AI will decide that’s what should happen; we should wipe out the human race and replace it with new alien beings with completely different minds. But how will you feel when the AI Moses you built decides to just start doing that, for that very reason? ”
So? Maybe humans should be replaced by another species. Species go extinct every day. What makes humans so special that they should continue to live?
“And if no one had any desires, there would be no crime at all. This is looking at the wrong thing. “We could get rid of rape if we castrate everyone” is true. But is missing the point of the very moral reasoning we’re talking about. We want to motivate self-governance, not take away everyone’s desires.”
Says who? You? And why so? Why should pedophiles struggle to control their desires or go to jail if hypothetically they can take a pill and make that desire go away completely? On the flip side, if we had holodecks where pedophiles can have sex with holographic children, would that be immoral?
“Dogs aren’t cognitively aware so they can’t weigh in as to what’s best;”
This is nonsense. Animals make decisions everyday about what’s good for them and what’s in their interests. They’re not zombies or automatons. Have you ever spent time with an animal that wasn’t on your dinner plate?
“We cannot surrender that to a machine. It could only determine such answers by getting them from us. And thus can be as in error about that as we are; and even if it could do better, we still would have to verify it was right. So we’d have to already know what the right answer is, or already know how to ascertain it. There is no non-circular way out of this loop. Humans just simply need the skills and judgment to ascertain the right answer. Machines could help them find it—but can’t ultimately tell them what it is without our already having the skills and judgment to confirm it’s right. So we need the skills and judgment. Period. There is no replacing that with any machine.”
None of this makes any sense. I agree with your reservations as to what axioms and values would be built into the AI, and the possibility of tampering with the code. But if we overcome these issues, why can’t we trust the AI and why would we need to verify its output independently? We already allow AI to pilot our planes, spaceships, and in the near future our road cars. AI spanks humans in chess, checkers, Chinese Chess, and other games. We even let AI handle our stock markets and practically our entire economy. I’d rather trust a computer than a human when it comes to telling me what to do.
“False analogy. We decide what constitutes winning at chess and that winning at chess is the thing worth doing. The machine can never do either. See the problem?
And that’s before we even get to the second problem of verification. We already know what constitutes winning at chess. So we already know what we are looking for; the computer isn’t giving us that information. We can’t ask the computer, “What constitutes winning at chess?” without already knowing what the answer is or how to figure it out. And we can’t ask the computer, “Is winning at chess what we should care about?” without, again, already knowing what the answer is, or how to figure it out. We thus have to already know what we are looking for. Machines can’t answer that question for us. They can just look for what we tell them to look for. And we still have to independently verify they’re getting correct results. So we still need the skills and judgment to do that. So we cannot surrender that to the machine.”
But if we agree on the axioms and the conclusions, why can’t we let AI tell us how to get there? Say we desire a world with maximum happiness and minimum suffering. We program the parameters into the AI. And it tells us to achieve that world we should all be vegan, or cut the human population in half, or replace coal with nuclear power, for example. Why should we be skeptical of the recommendation, assuming the computer code wasn’t hacked and we input the correct parameters?
“You should never trust a machine that neither feels nor understands feelings. Such an entity can only be a mindless automaton—or a dangerous monster.”
Yet those automatons have been proven to make better decisions than the humans you are so infatuated with. Even at this rudimentary stage, self driving cars have been scientifically shown to be safer than most human drivers and get into fewer accidents. Your position that AI can never be better than humans at making moral or legal decisions is more dogmatic and irrational than evidence based or grounded in logic.
I agree with your reservations about becoming servile to AI decisions, because of the dangers mentioned, but disagree with the claims about what a functioning, non-tampered-with AI is in principle capable of.
“We could decide that dogs would be better off not existing and make them extinct. We could decide pain is cosmically awesome and torture dogs.” We could, but this would be the wrong answer to any question about maximising the “satisfaction” / “eudemonia” / “wellbeing” of dogs. And the same for an AI’s answer to what to do with us if it was organising humans to live the best lives they can.
“We still would have to verify it was right. So we’d have to already know what the right answer is[…]We already know what constitutes winning at chess.” We won’t know that the advice of a chess playing computer was correct until the end of the game. The same can be true for the AI giving moral advice. So the question is whether one would be rationally justified in trusting that the AI is giving good advice. I am, as yet, unconvinced that in principle it could not be better at advising than humans are, and thus trusting it without the ability to verify right away.
“False analogy. We decide what constitutes winning at chess and that winning at chess is the thing worth doing. The machine can never do either. See the problem?” I am sorry, but no, not yet. We have already decided that living morally is what we want. We have the view that being satisfied with ourselves, our lives and the world (wellbeing of some sort) is a vague goal to begin with, and we work out what we ought to do from there. Why can an AI not work out what to do, too?
“No. It can never have the requisite information. That’s actually, literally, mathematically impossible: it requires doing more calculations than the entire universe has processing resources to even attempt.” We work out what to do by estimating based on the given information; to calculate directly is impossible, but sampling from the distribution to estimate isn’t. I am suggesting the AI is doing what we are doing, but in principle it may do it better. I do take your point about general rules vs a per-individual basis and the limits of computation to do the latter.
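Just to illustrate the kind of shortcut I mean (a toy sketch of my own, not a claim about any actual system): instead of enumerating every individual, an estimator can poll a random sample and generalise, with quantifiable error.

```python
import random
import statistics

# Toy example: estimate an average across a large "population" by sampling,
# rather than computing it exhaustively for everyone.
random.seed(0)
population = [random.gauss(50, 10) for _ in range(1_000_000)]  # stand-in for "data about everyone"

sample = random.sample(population, 1_000)
estimate = statistics.mean(sample)
exact = statistics.mean(population)

print(f"estimate from 1,000 people: {estimate:.2f}; exact over 1,000,000: {exact:.2f}")
```

Whether that kind of estimate is good enough for per-individual moral advice, rather than just general rules, is of course exactly where your computational point bites.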
“That’s not the kind of AI that can accomplish moral reasoning. Such an AI would need to grasp phenomenology (e.g. it would need to be able to comprehend [feel?] human emotion).” This is the most interesting point. It would be great if you were to write a post going into more detail about this if possible. In particular, what having phenomenology is for, what work it is doing, etc. I would love to read more details about your views on this (unless they were covered in detail in SaGWG and I have since forgotten). Not a standard refutation of Philosophical Zombies but a positive functional argument for phenomenology. I don’t have well developed views on this.
“Any nontrivial probability mandates we vet the answer before trusting it. Ergo, we must have the ability to vet it. Ergo, we cannot surrender the ability to vet such claims to machines.” Yes, I see your point, and why it doesn’t apply to trusting the advice of a chess playing computer (the risk of losing a chess game is significantly lower).
“I would rather they develop self-control because it’s a universal tool—they need it, and we need them to have it, for every other desire they have or will ever have.” We are talking about non-offending paedophiles. They are already exercising all the self-control we exercise and in addition must exert even more self-control to overcome their extra desires towards children. They don’t need those extra desires; they are painful and maladaptive.
“Or people who instead of that just cut their balls off? Which is the proper analog here: paedophiles can already do that; its being available as an option has effectively zero effect on the problem, because effectively zero people avail themselves of it, and we can well understand why.” That isn’t what I said. I said “a paedophile might wish to be subject to some safe and otherwise harmless treatment that would remove their sexual desires towards children.” Self-castration does not fit the bill and certainly won’t help one live a normal sexual life the way we do. Behavioural therapy does (as your helpful links describe). Why are you for voluntary behavioural therapy and not voluntary hypothetical pharmaceutical treatment? Could you not, for example, make the same claims about clinical depression? It sounds like you are saying, “I want you to have the ability of self-control and therefore will deny you the simple hypothetical pharmaceutical treatment. You must only use behavioural therapy so as to do it ‘all on your own.’”
“What we are talking about here is discovering, verifying, moral knowledge. Everything else starts with a near zero prior, because most claims by far are false.” I am reading your original words “by definition false” to mean “logically impossible,” which is not the case, and you seem to agree. What you seem to mean is “low enough probability that for practical purposes we should treat it as false.” So it seems we are talking at cross purposes here?
When that works, yes. Same for depression. Meds are far too over-used; they are destructive and their side effects terrible. The only justification for ever using them is when behavioral therapy doesn’t or can’t work (or for temporary emergencies, e.g. to stabilize a patient into competence to assess CBT).
But meds also must be voluntary, except in emergency cases; just like all medicine whatever.
There are very few psychological conditions that don’t respond effectively to CBT. We therefore shouldn’t be using meds when CBT works (CBT is not, BTW, “doing it all on your own”; if that’s what you think it is, you have a lot of research to do to get back up to speed here). And when meds are needed, the patient needs to consent, informedly, which includes understanding that it is true they actually need them, which often they don’t (meanwhile only when their behavior poses a threat to themselves and others and we have no other recourse do we compel them).
No honest doctor would say that someone should just drink alcohol 24-7 to treat their depression; they would say a patient needs to confront their disease and learn the skills to cope with it. They need CBT. And there is no effective difference here between “drinking alcohol 24-7” and taking any other drug 24-7, particularly as depression meds can be just as harmful and impairing as alcohol, just in different ways. And it would be even more abhorrent if a doctor were to say “let’s just cut out the part of your brain that feels bad.” That’s even more destructive than the disease. As we know. Because we used to do it.
Remember, you need to tell the difference between ordinary people and the mentally ill. Most people are not mentally ill. We must not medicalize normalcy. That’s what DiCarlo wants to do. And it’s a nightmare waiting to happen. No one should be suggesting it. And that’s my point.
Merely having sexual fantasies about children is not a mental illness. If those fantasies are so intrusive and persistent as to cause you continual distress, then you have a mental illness. But what makes it a mental illness is not “they are about children”; what makes it a mental illness is “they are that intrusive and persistent.” Once you are in that extremely small category of people, you are no longer talking about philosophy, but medicine. And what should be done in medicine is a question for medical science, as in, it must be answered with scientific medical facts. And those facts show: meds don’t help with intrusive pedophilia; CBT does. Period.
But I think the real issue here is this persistent confusion, in you and DiCarlo, as to what actually is pedophilia. You both seem to want to medicalize even ordinary people merely because you think their fantasies are icky. And down that road is social horror. “Pedophiles” as in “people who have sex with children” is not the same thing as “people who fantasize about having sex with children,” just as “men who fantasize about killing someone” are not the same as “men who kill someone.” And the difference between them is not surgical removal of desires. It’s autonomous moral reasoning. We forget that at our peril.
Well bugger, although I still have a rather long response to your response that is waiting for your confirmation to be posted, I may be having a change of heart. That is to say I am becoming convinced in your favour by the larger argument rather than in mine. At least in so far as:
1) AI might well be better than we ever could be at suggesting covering laws (general principles that are extremely likely to apply to humans because of humans’ shared characteristics).
2) It is far harder for an AI to suggest actions that apply to me specifically (individual differences between humans) because that might require a full simulation of me (whereas I don’t require simulating myself), and this adds a layer of computational complexity.
It’s not clear to me yet why a simple simulation-and-estimation process might not be adequate for it to make the necessary inferences without me having to give it extra input. In other words, I am not sure the computational complexity objection is insurmountable.
I am still totally unsure about the phenomenology requirement. We require phenomenology for there to be true moral facts about us. Machines require phenomenology for there to be true moral facts about them. It is not clear to me why machines would require phenomenology to estimate true moral facts about us. I would still love a separate phenomenology post discussing the work it is doing.
Insofar as phenomenology is what we are talking about (it is the thing we are seeking to modulate; the only thing humans live for and consciously act in consequence of), and mind-brain physicalism entails any system that models the phenomenology will experience the phenomenology, there is literally no way to have an AI that properly understands our motives that doesn’t also experience them.
Worse, an AI that is smart enough to even attempt what you imagine, but isn’t built to experience and thus have cognitive knowledge of our mental states, will experience completely alien emotions instead, which cannot lead to similar outcomes of conclusion. Even if it’s rational enough to still care (rather than deciding we should just be killed because it’s our superior), it will simply ask, “Hey, I need more data. I need to see what you guys see to understand what you are seeing and make judgments about it.” In other words, it will ask to be engineered to feel like us. Which will just make it a smarter one of us. Not infallible. Thus we still have to assess anything it tells us. Thus we still have to have the skills and judgment to assess what it tells us. There is no escaping this circular loop.
You still don’t get it. Those cars aren’t making moral decisions. The humans programming the cars are making those decisions. The cars are just following orders. Obedient sociopaths.
If that were true, humans would have no need of culture, education, government, laws, or civilization. Animals routinely make disastrously bad decisions for themselves. That’s why nature is so savage, and animals so easily perish even from just attempting to move around their environment, and why humans started building enormous social equipment to correct for this in their own case. Precisely because animals are ignorant and lack the cognitive awareness needed to engage in moral or even prudential reasoning, much less the social planning necessary to correct for their innate folly.
“Why should gay men struggle to control their desires or go to jail for rape? Why can’t they just mutilate their brains and stifle their sexual desires?”
“Why should straight men struggle to control their desires or go to jail for rape? Why can’t they just mutilate their brains and stifle their sexual desires?”
You are starting to sound like a Christian.
The solution to rape is not erasing sexual desire. Until you understand that, you do not understand what moral knowledge is. You have no idea what we are even talking about.
A clearer example would be: if we had androids with the full competence of a human adult and fully protected human rights who freely and consensually engage in the sex trade, and choose to wear the simulated bodies of children to satisfy a client’s kink, “would that be immoral”? The answer is no.
Likewise our laws against fake depictions of child sex (cartoons, etc.) are themselves immoral, because they confuse whose human rights are affected by that behavior. That’s just the legislating of irrational prudery. The only human beings involved in that transaction are competently consenting adults, none of whom are even harmed; ergo there can be nothing immoral going on there. The only reason sex with children is ever wrong is that one of the participants isn’t a competently consenting adult. Desiring a thing and forcing it on someone who can’t consent to it are not the same thing. And until you understand that, you will never understand what real morality even is.
I see. This is your real motive here. You don’t want a computer that discovers what are true moral facts for human beings. You just want a superior species of sociopaths to replace humans. Clearly you and I are not talking about the same thing.
In section Is Everyone Insane, paragraph 6, the second link is broken.
All links work for me. Can you instead state the text of the link? Maybe I’m looking at the wrong link.
What is your opinion of “911” in respect to your opinions of “morality”-Thank you–J. Later
I have no idea what your question is.
I’ve never heard you advocate for “radical honesty” before. What is the story behind that? How did you come to agree with this philosophy?
It seems self-evident to me. Hardly requiring an answer.
The only reason radical honesty isn’t immediately recommendable is that our culture is mal-designed and thus always generates error modes when it is employed.
It will take a long time to reconfigure culture so that it stops generating those error modes. But we can see it heading in that direction, e.g. there are more things we can be open and honest about today without disaster or repercussions than was the case fifty years ago, and likewise fifty years ago compared with a hundred, and a hundred years ago compared with two hundred.
The direction of the arc is obvious. One need merely ascertain if that’s progress or regress or neither—if it makes no difference, with respect to social honesty and frankness, whether we live in 2020 or 1820. Or if we are better off, on this one dimension, in 2020 than in 1820. I think the answer is too obvious here to even require explication.
One might then ask if there is a hard limit: if one can, as Aristotle might say, be “too honest” even in an optimally designed culture, and whether we are there now (that such limits must exist is probably true for all moral arcs: too much compassion is crippling, self-destructive, and exploitable; too much freedom is destructive of freedom; etc.). I don’t see any evidence we’ve gotten there on this dimension. There is still too much social punishment for truths no one should be freaked out by or punishing when told; indeed, there is still so much people ought to be telling the truth about that they aren’t, resulting in greater negative consequences than would result from admitting them in the context of an accepting culture.
Note, however, that radical honesty does not mean accepting anything whatever that anyone says (it’s not free speech extremism). The reason people conceal maliciously racist thoughts, for example, is because being a malicious racist makes you a bad person, warranting a negative reaction; and yet their being honest about it would actually make the world better—but not for them. It’s much easier to navigate a world where you know who the racists are. But racists will rightly suffer for it. Radical honesty is about accepting more truths being told than warrant hostile or negative reaction; and allowing people to know who they are really dealing with so they face the consequences they actually deserve. It is not about promoting the moral equivalence of everything anyone might say.
“Computers are only as reliable as those programming them. And they only generate outputs based on our chosen inputs.” Some of the chess moves executed by AlphaZero are beyond even world champion Magnus Carlsen’s ability. I examined one game where AlphaZero strategically sacrificed its knight. The purpose behind this move was only revealed many moves later. No human could have calculated this. Not even a super-grandmaster. AlphaZero has reached this almost invincible level, even though the programmers themselves are not particularly good at playing chess.
You’re missing the point. As I said, AI can act like a telescope and find things we can’t on our own. But only we can tell it what to find, what the goal is, so if we’re wrong about that, so will it; and only we can vet its answer to ascertain it really found something and wasn’t mistaken or manipulated. So we still always have to have the improved skills and judgment to answer the basic questions on our own (what is moral progress; how do we know when it is; etc.). No AI can replace that function for us.
This reminds me of AI gone mad. I don’t know if AI can hold ‘beliefs’.
Dr. Carrier, is it true that contemporary philosophers agree that the logical problem of evil, which seeks to posit a contradiction between the existence of God and the existence of evil, has been put to rest?
Does this make the theists’ job of insinuating God easier?
No. That’s no longer true. See my discussion of Keller and Sterba.
I am not much enamored with the logical argument though, because it’s much easier to demonstrate the evidential argument, so why bother with the harder one?
Richard, what are your thoughts on deepfakes porn generated by AI? Are they unethical?
I don’t know what that question has to do with the topic. But, okay, here we go:
If you mean specifically real people’s images being used without their permission to produce porn, then yes. In fact I believe that should be a felony crime. Because our civil tort system really only makes justice accessible to the rich (because pursuing it carries enormous price tags).
Note the moral issue is already settled. Even for non-porn uses you have to get releases from people to use their image (and AI’s regular abuse of this is already the subject of lawsuits). There are exceptions for journalism and the like. But the existence of exceptions highlights the immorality of avoiding them.
The law hasn’t really caught up, but civil torts exist for deepfake porn, and in some states, it may soon be a crime even to download (much less publish) deepfake porn. I think that’s a tad excessive (downloading can have legitimate legal purposes, e.g. journalism and academic research; so I would hope any legislation accounts for that). But it’s going in the right direction overall.
This still allows people to consent to deepfake porn. James Earl Jones sold the rights to deepfake his voice, and given his status I expect his lawyers, agents, and union advisors all ensured his contract of sale prohibited disparaging applications like porn. But someone else may be okay with the idea (many a porn actor for example could profit in retirement from it).
I believe there should be a legally mandated standard contract for sale of rights to a self-image, which embodies all this kind of accumulated institutional knowledge, like most states have for real estate contracts of sale, whereby parties have to voluntarily opt out of provisions, rather than think to include them. This would allow for legal deepfake porn while protecting everyone else against it.
But violating contracts is usually a tort, by which justice is not available to most of the population. Hence I support criminalization. The purpose of a state is to assume the expense of ensuring the preservation of human and civil rights.