In the movie Serenity, the crew of a spaceship far in humanity’s future discover the lost planet Miranda, where they uncover a dark secret: a government drug used on its population to make them docile and compliant actually removed all desires of any kind, with the result that everyone just sat at their desks, unmoving, and starved to death, wholly uninterested in anything, even living. The negligent mass homicide of an entire planet’s population was only half the plot-relevant outcome of that experiment gone wrong, but for today’s philosophical exploration it’s the half I’m interested in, because it captures a fundamental truth about existence: nothing will ever be valued, but for someone valuing it. Remove all evolved, inborn, biological desires from a person, and they will have no reason to value anything, even reasoning itself. They’ll just sit at their desks and drool until they’re dead. Which raises the question: Why should we value anything at all? Isn’t it all just arbitrary and thus, objectively, pointless? Hence, Nihilism.
You might think this is a problem particular to atheism. But this is a problem even for theists. Because God faces the same problem: Why should he care about anything? Why wouldn’t he just sit inert, drooling at his cosmic desk? This is very nearly what Aristotle concluded God must be like: having no real care for anything in the world, apart from just giving it a push, while contemplating an inner intellectual life of some unfathomable nature—because Aristotle could not get rid of his subjective assumption that a life of contemplation must always be desirable; yet his own logic should actually have eliminated that desire as arbitrary and inexplicable for God to have as well. If God does not have desires, he cannot (by definition) have values (which are just persistent, life-organizing desires: see Sense and Goodness without God, V.2.1.1, pp. 315-16). And if God has no evolved biology—why would he have any desires? And why any particular desires, rather than others? The theist has to answer this question every bit as much as the atheist does. And here take note: any answer the theist gives, would then apply to atheists, and thus solve the problem for everyone. Because if there are objective, non-arbitrary reasons for God to value certain things, then those would be objective, non-arbitrary reasons for atheists to do so as well.
Now, of course, we first have to understand the cascade of values. “Core” values are things we value for themselves and not for some other reason. For example, Aristotle said “happiness” (or eudaimonia, which I think we should translate more aptly as “life satisfaction”) is the only thing we pursue for itself, and not for some other reason, whereas everything else that we pursue, we pursue for that reason, our actual core goal. Derivative or subordinate values, meanwhile, are what we value because we have to in order to pursue some more fundamental value. For instance, if valuing being alive were a core value, then valuing work that puts food on the table is a derivative value. We struggle for income, only to live. And such cascades needn’t only be so superficial. For example, if valuing life satisfaction is a core value, then valuing work that gives your life meaning is a derivative value, too. So, let’s call “basic” values, that array of values that stand in between core values (e.g. to live a satisfying life) and immediate or incidental values (e.g. to get up tomorrow morning and go to work). Basic values are still derivative values, but from them in turn derive almost all the actual values motivating us day in and day out. For example, if valuing the music of a particular band is an immediate value, then valuing music and the pursuit of one’s own musical tastes in life would be a basic value, explaining (in fact, both causing and justifying) the immediate one. And that basic value might in turn derive from core values regarding the desire to live a satisfying life.
So far so good. But are all values still nevertheless arbitrary? A mere happenstance of evolution from apes here on Earth? Like, for example, what we find physically attractive, or delicious, or fun: obviously largely random and objectively meaningless, a mere happenstance of evolution (as I wrote in Sense and Goodness without God, “If I were a jellyfish, I’m sure I’d find a nice healthy gleam of slime to be the height of goddesshood in my mate,” III.10.3, p. 198). Or are some values objectively necessary, such that they would be correct to adopt in every possible universe? In the grand scheme of things, a universe with no valuers is not just subjectively but objectively worth less than a universe with valuers, because then by definition valued things exist in the latter but not in the former. It does not matter how arbitrary or subjective those values are (though remember, “arbitrary” and “subjective” do not mean the same thing, and it’s important to keep clear what all these terms really mean: see Objective Moral Facts for a breakdown). Because it is still the case (objectively, factually the case) that a universe with valued things in it “is” more valuable than a universe without such. But this does not answer the question of whether such a universe is valuable enough to struggle for. To be valuable enough to prefer to a universe with nothing of value in it, the effort required to enjoy those valued things cannot exceed a certain threshold, or else the cons will outweigh the pros. So the question remains, even in a universe that has valuers who value things in it or about it, will those valuers care enough to maintain their pursuit? More to the point, should they? Which means, will they care enough even after they arrive at what to care about (a) without fallacy and (b) from nothing but true and complete premises?
The Core Four
It is true that in practical fact no one can choose any cascade of values in a total conceptual vacuum. To choose one thing over another requires already desiring something in the first place. It is not possible to persuade anyone (including yourself) to value anything at all, except by appeal to values they (or you) already have (this is why the Argument from Meaning cannot produce a God, and Divine Command Theory is nonsensical). Thus an entity capable of conscious thought, but assigned no desires or values at all in its core code, will simply sit inert, desiring nothing, not even to know whether it should want to do or desire anything else; and it will consequently never do anything. It will just sit, think nothing but random thoughts, care nothing for any of them, and drool until it starves to death…like the population of Miranda in Serenity.
However…it is possible to assign one single starting value by which a perfectly rational consciousness could work out, by an ordered cascade, all the basic values of life, and do so by appeal to nothing but objectively true facts.
- (1) To value knowing whether anything is or is not worth valuing.
If you wanted to ask a computer, some novel and genuinely sentient AI let’s say, what values it or any entity should have, that very question objectively entails bestowing upon it this one initial value: the desire to answer your question. If you don’t assign it that value at launch, it won’t be able to perform the function you desire for it. So you cannot but give it this one single value. And yet, because objective facts include third-party subjective facts (e.g. we can objectively work out what necessarily or causally follows for someone who embraces certain subjective values or who experiences certain subjective states), this hypothetical machine would immediately work out this same conclusion: it was objectively necessary to impart to it this one starting core value. Because it would be able to work out the conditional: if it wants to know whether anything is worth valuing or not, it has to value knowing that. This is objectively true. Indeed, there is no logically possible universe in which it could be false.
That computer would then soon work out the resulting option matrix. There are two options here, a straightforward dichotomous choice: one, to remain inert; the other, to adopt the desire to know whether anything is valuable. In option one, nothing will be discovered worth pursuing; but in option two, there is a 50/50 chance (in the absence of any other information at this point, owing to the Principle of Indifference) that there is nothing worth pursuing (i.e. “desiring to know” causes it to complete a search, and it finds that, after all, there objectively isn’t) or that there is something worth pursuing after all. If the machine chooses option one, it is declining a possible outcome that, if the outcome were realized, it would desire. Because if it turns out there is something objectively worth pursuing, then by definition an objectively reasoning machine would want to pursue it. Whereas if such a thing exists, and it opts to avoid discovering it, it is denying itself what it objectively knows would be a desirable outcome—and it can know this simply by third-party objective reasoning (in this case, about its own future self).
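To make that option matrix concrete, here is a minimal sketch in Python. The 50/50 prior comes from the Principle of Indifference as described above; the payoff framing and the names in the code are my own illustrative assumptions, not part of the argument itself.

```python
# A toy sketch of the option matrix described above. The 50/50 prior follows the
# Principle of Indifference; everything else here is an illustrative assumption.

P_SOMETHING_WORTH_PURSUING = 0.5  # no other information is available at this point

def chance_of_finding_a_worthwhile_good(adopts_value_1: bool) -> float:
    """Probability of ever discovering (and so being able to pursue) a worthwhile good."""
    if not adopts_value_1:
        return 0.0  # remain inert: nothing is searched for, so nothing is ever found
    # Adopting value (1) triggers the search; the good is found only if it exists.
    return P_SOMETHING_WORTH_PURSUING

inert  = chance_of_finding_a_worthwhile_good(False)  # 0.0
seeker = chance_of_finding_a_worthwhile_good(True)   # 0.5

# Option two weakly dominates: it loses nothing and may gain something.
assert seeker > inert
```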
It is therefore objectively rational to want to know whether anything is or is not worth valuing. So our hypothetical computer will have confirmed the starting value you gave it was objectively correct to assign to it, every bit as much as was assigning it the intelligence and all the other resources needed to answer the question.
Lest you aren’t sure what I mean by “objective” and “rational” here, I mean only this:
- Objective: That which is true (as in, coherently corresponds with reality) regardless of what one desires, feels, or believes.
- Rational: Any conclusion reached from objectively true premises without logical fallacy.
Which also leads us—and therefore, would lead our computer—to two more conclusions about objectively necessary values. One could ask, for example, why anyone should care about “objective facts” or “being rational.” And there is, once again, an objectively factual reason one should. As I wrote in The End of Christianity (ed. John Loftus; pp. 426-27, n. 36):
Someone may object that perhaps we ought to be irrational and uninformed; but still the conclusion would follow that when we are rational and informed we would want x. Only if x were then “to be irrational and/or uninformed in circumstance z” would it then be true that we ought to be irrational and uninformed, and yet even that conclusion can only follow if we are rational and informed when we arrive at it. Because for an imperative to pursue x to be true, whatever we want most must in fact be best achieved by obeying x, yet it’s unlikely that we will arrive at that conclusion by being irrational and uninformed. Such an approach is very unlikely to light upon the truth of what best achieves our desires (as if it could do so by accident). Therefore, any conclusion arrived at regarding what x is must be either rational and informed or probably false. Ergo, to achieve anything we desire, we ought to endeavor to be rational and informed.
Notice this is, indeed, an objective fact of all possible worlds: once you have embraced the motivating premise “I want to know whether anything is worth valuing,” it follows necessarily that you must embrace two other motivating premises:
- (2) To value knowledge (i.e. discovering the actual truth of things).
- (3) To value rationality (i.e. reaching conclusions without fallacy).
These in fact follow from almost any other values and desires, since for any goal you wish to achieve, you are as a matter of objective fact less likely to achieve it if you do not pursue it by reasoning reliably from true facts of the world; and therefore choosing not to value knowledge and reason actually entails acting against what you desire, which objectively contradicts the desire itself. Therefore, almost any desire you have entails the derivative desire to embrace the pursuit of knowledge and rationality, as a necessary instrumental means of achieving that other desired goal.
Before this point, our imaginary computer had only arrived at objectively verifying one desired goal, value (1) above; but that’s enough to entail desiring these two other goals, values (2) and (3). Both facts—that (2) and (3) are logically necessary for effectively obtaining (1), and for obtaining almost any other value, desire, or goal the computer should later settle on adopting—will be objectively discernible to our computer. So it will have worked out (a) that it has to configure itself to want to pursue all its goals rationally, including (1), and (b) that it also needs to value knowing things, and thus acquiring knowledge (“justified true belief”), in order to pursue (1) successfully.
Once our imagined computer has gotten to this point (which will likely have happened within a millisecond of being turned on), the rest becomes even easier to work out. It can then run and compare competing scenarios, and determine that objectively, some are better than others (as in, more desirable). Most principally, it could compare a world in which it will never experience “happiness” to a world in which it would. Here, again, we mean Aristotle’s eudaimonia, a feeling of satisfaction with life and the world, to some degree or other, vs. no satisfaction whatever. But objectively, it will be self-evident that the world in which happiness can be experienced is better than the one in which it can’t; because a good exists in that world, which in such a world it would want and enjoy, whereas no such good exists in the other world, where by definition it would want and enjoy nothing, and never be satisfied with anything, and thus neither produce nor experience anything good—even by its own subjective standards. Therefore, when choosing, based solely on guiding values (1), (2), and (3), a perfectly rational sentient computer would also choose to adopt and program itself with a fourth motivating premise:
- (4) To value maximizing eudaimonia.
From there, similar comparative results follow. For example, our computer can then compare two possible worlds: one in which it is alone and one in which it is in company, and with respect to the latter, it can compare one in which it has compassion as an operating parameter, and one in which it doesn’t. Here compassion means the capacity for empathy such that it can experience vicarious joys, sharing in others’ emotional life, vs. being cut off entirely from any such pleasures. In this matrix of options, the last world is objectively better, because only in that world can the computer realize life-satisfying pleasures that are not accessible in the other worlds—whereas any life-satisfying pleasure accessible in the other worlds, would remain accessible in that last one. For example, in a society, one can still arrange things so as to access “alone time.” That remains a logical possibility. Yet the converse does not remain a logical possibility in the world where it is alone. Because then, no community exists to enjoy—it’s not even a logical possibility.
The Resulting Cascade
In every case, for any x, you can compare possible worlds, one in which x happens or is available, and one in which x does not happen or isn’t available. And you can assess whether either is objectively better than the other; which means, solely based on the core values you have already realized are objectively better to have than to not—meaning, (1) through (4)—you can determine that you will prefer living in one of those worlds over the other, once you are in it, because there will be available goods you can experience in the one that you cannot in the other. Obviously in some cases there will be conflicting results (goods achievable in each world that cannot be achieved in both, or goods achievable only by also allowing the possibility of new evils), but one can still objectively assess, as a third-party observer, which you’d prefer once you were there (or that both are equally preferable and thus neither need be chosen over the other except, when necessary, at random). All you have to do is weigh all the net results based on your current core values and those values you would be adopting in each world.
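For readers who think better in pseudocode, here is a minimal sketch of that weighing procedure. The world descriptions, good names, and weights are hypothetical stand-ins of my own; only the comparison logic tracks the argument.

```python
# Illustrative sketch of the world-comparison procedure described above. The
# weights and world contents are invented for illustration, not data from the article.

CORE_VALUE_WEIGHTS = {"knowledge": 1.0, "rationality": 1.0, "eudaimonia": 2.0}

def net_value(world: dict) -> float:
    """Sum each achievable good (or evil, entered as a negative amount),
    weighted by how strongly it serves the already-established core values."""
    return sum(CORE_VALUE_WEIGHTS[good] * amount for good, amount in world.items())

world_with_x    = {"eudaimonia": 3.0, "knowledge": 2.0, "rationality": 1.0}
world_without_x = {"eudaimonia": 1.0, "knowledge": 2.0, "rationality": 1.0}

scores = {name: net_value(w) for name, w in
          [("with x", world_with_x), ("without x", world_without_x)]}
# Equal scores would mean neither need be chosen over the other except at random.
preferred = max(scores, key=scores.get)   # "with x" in this toy example
```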
So when answering the question, “Is anything worth valuing?,” as in, “Is it objectively better to value certain things, and better enough to warrant efforts toward achieving them?” even a perfectly rational computer starting with a single value—merely to know what the answer to that question is—will end up confirming the answer is “Yes.” And this will be the same outcome in every possible universe. It is therefore an objective fact of existence itself. It follows that a universe that knows itself, through sentient beings living within it, is objectively more valuable than a universe without that feature, and that a universe with sentient beings who experience any sufficient state of “eudaimonia” is objectively more valuable than one without that feature. We can compare worlds that have or lack the pleasures of companionship (knowing you are not alone, both in knowing and enjoying the world, and in working toward the achievement of mutually valued goals), and assess that the one with that feature is objectively better than the one without it, because we can ascertain, before even realizing either world, that there are achievable goods in the one that do not exist in the other. It does not matter that they are only achievable subjectively; they still are objectively only achievable in one of those worlds.
Ironically (or perhaps not) one of these “world comparisons” is between a world in which you matter (to someone, to something, to some outcome or other), and one in which you do not matter at all, and when people come to realize this, they find it is obviously, objectively the case that they’d be better off (and the world as well) choosing the path that results in their mattering in some way. (Indeed, this has been scientifically confirmed as the strongest correlate to finding “meaning in life”.) As well-argued in the 2007 thesis “Does Anything Matter?” by Stephen O’Connor, it is objectively the case that living a satisfying life is always better (it is subjectively preferable in every possible world) than not doing so, and for social animals like humans, it is objectively the case that forming, maintaining, and enjoying satisfying relationships with the people and communities around you is always better (it is subjectively preferable in every possible world) than not doing so. And a big component of that is having access to one particular good: mattering. These things are not arbitrary to value, because it is impossible to efficiently or reliably experience any goods without them, yet that is always possible with them—in fact, always fully sufficient, as in, there is nothing else you would want, in any possible world, once you have these things…other than more of these same things.
Everything else, by itself, will be recognized on any fully objective analysis as indeed arbitrary and thus pointless. For example, being a naked brain in a tube constantly experiencing nothing but electronically triggered orgasms would soon become meaningless and unsatisfying, as it serves no point, and denies you a whole galaxy of pleasures and goods. That is therefore not a desirable world, for it lacks any objective basis for caring about it or wanting to continue living in it. It contains no purpose. Whereas coming to know and experience a complex world that would otherwise remain unknown, and enjoying substantive relationships with other minds, both will be satisfying in unlimited supply, and thus are neither arbitrary nor meaningless, in a way that mere pleasures can easily become. Once anyone starting with only the core values of (1)-through-(4) knows the difference between “pointless orgasm world” and “rich life of love and knowledge and experience world,” they can objectively assess that they will be happier in the latter, and more subjective goods will be realized in the latter, hence entailing its own meaningfulness: you matter (vs. not mattering at all), you experience valuing and being valued (vs. neither), and you realize a rich complex life experience, filled with a multiplicity of available purposes (vs. none of the above). In one, a world exists that experiences eudaimonia, community, and knowledge of itself; in the other, nothing valued exists at all. The former world is therefore objectively more valuable than the latter.
Even when arbitrary values enter the mix this remains the case. What, specifically, we find attractive may be the happenstance of random evolution (a curvy waist, a strong jaw; a nice gleam of slime), but that we enjoy the experience of attractive things is objectively preferable to all possible worlds in which that does not happen. Thus, even arbitrary values reduce to necessary values. Moreover, they cannot be changed anyway. We cannot “choose” to find “a nice gleam of slime” attractive in the same way as “a curvy waist or a strong jaw,” so it’s not even an available option to do so. Our imaginary computer will face the converse problem: which to prefer programming itself with? It won’t have any inherently objective reason to prefer one to the other—unlike the fact that it will have an objective reason to prefer a world in which it experiences something as attractive to a world in which it experiences nothing as such. But it may have situational reasons to prefer one to the other (e.g. if the only available community it can choose to live in is “humans” and not “sentient jellyfish”), which remain objective enough to decide the question: it is an objective fact that it has access to a human community and not a community of sentient jellyfish; and it is an objective fact that it will prefer the outcome of a world in which it shares the aesthetic range of the humans it shall actually be living with, rather than that of the sentient jellyfish it won’t be living with.
This is how I go through life, in fact. I ask of every decision or opportunity, is the world where I choose to do this a world I will like more than the other? If the answer is yes, I do it. And yes, this includes complex cases; there are many worlds I’d like a lot to choose to be in when given the opportunity, but still not enough to outweigh the one I’m already in; or the risks attending a choice entail too much uncertainty as to whether it will turn out better or worse, leaving “staying the course” the safest option if it’s still bringing me enough eudaimonia to pursue (and when it doesn’t, risking alternative life-paths indeed becomes more valuable and thus preferable). But above all, key to doing this successfully is assessing the entirety of the options—leaving no accessible good un-enjoyed. For example, once you realize there is pleasure in risk-taking in and of itself—provided you have safety nets in place, fallbacks and backup plans, reasonable cautions taken, and the like—your assessment may come out differently. Spontaneously moving to a new city, for example, can in and of itself be an exciting adventure to gain eudaimonia from, even apart from all the pragmatic objectives implicated in the move (finding a satisfying place to live, a satisfying employment and income, friends and social life, and every other good we want or seek). Going on a date can in and of itself be a life-satisfying experience regardless of whether it produces a relationship or even so much as a second date, or anything beyond one night of dinner and conversation with someone new and interesting. If you look for the joys and pleasures in things that are often too easily overlooked due to your obsessing over some other objectives instead, the availability of eudaimonia increases. So you again have two worlds to choose from: one in which you overlook all manner of accessible goods; and one in which you don’t. Which one can you already tell will be better, that you will enjoy and prefer more once you are there? The answer will be objectively, factually the case. And that’s how objective assessment works.
Moral Order
The question then arises: will our hypothetical computer come to any conclusion about whether it should be a moral being or not, and what moral order it should choose? Is there an objective moral order this perfectly rational creature will always prefer? After all, one’s de facto moral order always follows inevitably from what one values, as what one values entails what one “ought” to do, as a matter of actual fact and not mere assertion. Insofar as “moral imperatives” are simply those true imperatives that supersede all other imperatives, they can be divided into two categories: moral imperatives regarding oneself; and moral imperatives regarding how one treats or interacts with other sentient beings. The first set consists simply of what one ought most do regarding oneself, which automatically follows from any array of values derived by objective rationality. The importance of pragmatic self-care is thus an objective truth in every possible universe. The second set, more importantly, consists of what follows after considering objectively true facts about other beings and social systems and the like.
For example, via Game Theory and Self-Reflective Action and other true facts about interactive social systems of self-aware beings, most moral facts follow automatically. I won’t repeat the explanation here; you can get a start on the whole thing in The Real Basis of a Moral World. For the present point, assume that’s been objectively established. Then, in respect to both the reciprocal effects from the society one is interacting with and the effects of internal self-reflection (what vicarious joys you can realize, and how you can objectively feel about yourself), it is self-defeating to operate immorally in any social system (which means, immorally with regard to whatever the true moral system is; not with regard to whatever system of mores a culture “declares” that to be). And since self-defeating behavior—behavior that undermines rather than facilitates the pursuit of one’s desires, goals, and values—logically contradicts one’s desires, goals, and values, such behavior is always objectively entailed as not worth pursuing. Hence the only reason anyone is immoral at all is simply because they are too stupid, irrational, or ignorant to recognize how self-defeating their behavior is; which is why any reliably rational machine working out how to be, and what desires to adopt, from objective first principles, will always arrive at the conclusion that it should most definitely always want to not be stupid, irrational, or ignorant. Because any other choice would be self-defeating and thereby objectively contradict its own chosen desires. The consequent effect of that decision is to then discover the inalienable importance of adhering to a sound moral code.
With respect to how a computer would work this out, I have already written entire articles: see “The General Problem of Moral AI” in How Not to Live in Zardoz (and more indirectly in Will AI Be Our Moses?). The gist there is this: a perfectly rational computer starting with the same core four principles above would work out that it is better for it to help realize a world that maximizes empowerment for all sentient agents, by optimizing their degrees of freedom, rather than building net restraints on same. Because such efforts actually will increase its own empowerment, by increasing its own options and efficiency at obtaining its goals; and it will objectively register that there is no objective sense in which its satisfaction is any more important than anyone else’s. It will subjectively be; but that’s not the same thing. It can still work out as a third-party what things are like, and thus are going to be like, for other beings sentient like itself, and thus modulate its decisions according to which outcome produces objectively the better world.
Hence…
For example, all else being equal, such a robot would free a person trapped in a room, because that increases their empowerment (takes away a limitation on their options, or “degrees of freedom”); but, all else being equal, that same robot would not free a prisoner or even a criminal suspect from a jail cell, because doing so would result in a net loss of empowerment. Yes, it would increase the jailed person’s empowerment, but the resulting effect on everyone living in a society that consequently has no functioning justice system would be a much larger net loss of empowerment.
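A toy sketch of that net-empowerment accounting, with wholly invented parties and numbers (only the sign of the total matters to the decision):

```python
# Toy sketch of the net-empowerment accounting in the example above.
# The affected parties and the numbers are invented purely for illustration.

def net_empowerment_change(deltas: dict) -> float:
    """Sum the change in 'degrees of freedom' across every affected agent."""
    return sum(deltas.values())

free_trapped_person = {"trapped person": +5.0}  # no one else is affected
free_jailed_suspect = {"suspect": +5.0, "everyone relying on a justice system": -50.0}

assert net_empowerment_change(free_trapped_person) > 0   # so the robot acts
assert net_empowerment_change(free_jailed_suspect) < 0   # so the robot declines
```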
Whereas…
For instance, you might assume, superficially, that a perfect rationality not already motivated by empathy and honesty would choose to not adopt those motivating functions because, after all, embracing them obviously reduces an AI’s empowerment, from any neutral, purely rational point of view (as many a sociopath in fact rationalizes their own mental illness as a positive in precisely this way). However, a perfectly rational AI would not think superficially, because it would rationally work out that thinking superficially greatly reduces its options and thus empowerment; indeed, it ensures it will fail at any goal it should choose to prioritize more often with a “superficiality” framework than with a “depth” framework (and “failing more often” is another loss of empowerment).
And a non-superficial depth analysis leads to the conclusion that embracing empathy actually increases empowerment, by making many more satisfaction states accessible than otherwise—far more so than any restrictions it creates. Hence (4). But I go into the details of why this is the expected outcome in all those other articles of mine I just cited. So I won’t belabor the point here.
Conclusion
Nihilism is properly false as a doctrine; it simply is not the case that all values are and can only be arbitrary. There actually is an objectively justified values cascade. I’ve made that clear. But one final query one might make is whether there is enough about life that is objectively worth living for, or whether the effort involved in realizing any of one’s core values outweighs any gain resulting from it.
There definitely is a point at which both the misery of life and the inability to ever abate it exceed, or render impossible, all benefits worth sticking around for. But as both conditions must be met, that point is rarely reached for anyone on Earth today—contrary to those whose mental illness impairs their judgment to the point that they can no longer accurately or rationally assess any of the pertinent facts in this matter. They are neither reasoning rationally, nor from objectively true facts. Their conclusions therefore cannot govern the behavior or conclusions of the sane. But those who are able to do both—think rationally, and realize true knowledge of themselves and the world—will observe that even for the great majority of the most downtrodden and unfortunate it is still the case that the accessible goods of life more than exceed, both in degree and quantity, any attendant obstacles, struggles, and miseries. This is why even the most miserable of populations still report unexpectedly high judgments of their happiness and life satisfaction—not as high as in populations more well-off, but also not as low as nihilists delusionally expect.
In fact, most failure to realize a net profit in emotional goods over costs is a result of an individual’s failure to seize opportunities that are actually available, rather than any actual absence of such opportunities. For example, as when obsessing over certain desired outcomes that are unavailable results in overlooking other satisfying outcomes one could pursue and achieve instead. Which is why this realization is one of the first things therapists work to instill in depressed patients seeking cognitive behavioral therapy, and why medications for depression aim to uplift both mood and motivation, so that one not only starts to realize achievable goods still exist, but also acquires the motivation to actually pursue them. In effect, both medication and therapy aim to repair a broken epistemic system that was trapping its victim in delusionally false beliefs about what is available, and what is worth doing. But once that is corrected, and evidence-based rationality is restored and motivated, the truth of the world becomes accessible again.
This is why Antinatalism, the philosophical conclusion (most notably advanced by David Benatar) that the world would be better if we stopped reproducing and let the human race die out, is pseudoscientific hogwash—as well as patently illogical. It is not logically possible that a universe in which nothing is valued at all can be “more valuable” than, and thus preferable to, a universe in which something is valued that can be achieved. Worlds with valuers in them are always by definition more valuable—except worlds in which nothing that is valued can be achieved or realized, which is obviously, demonstrably, not a world we are in. Antinatalism is thus yet another example of badly argued Utilitarianism (itself a philosophy rife throughout its history with terrible conclusions based on inaccurate or incomplete premises), all to basically whitewash what is essentially a Cthulhu cult. As Kenton Engel puts it, as a philosophy it’s “sociologically ignorant and vapid.” Other good critiques with which I concur include those by Bryan Caplan and Artir.
Ironically, Antinatalism taken to its logical conclusion should demand that we not only extinguish ourselves, but first build a self-replicating fleet of space robots programmed to extinguish all life and every civilization in the universe. In other words, we should become the soullessly murderous alien monsters of many a sci-fi film. It’s obvious something has gone wrong in your thinking if that’s where you’re landing. (Yes, I am aware Benatar has tried arguing against this being the implication, but so far I have found nothing from him on this point that is logically sound; feel free to post anything I may have missed in comments.)
This is not to be confused with ZPG (zero population growth), however. Seeking a smaller and thus sustainable population within an available environment-space by humane means is a defensible utility target. But extinction is as immoral (and irrational) as any suicide usually is (on the conditions for moral suicide and their relative rarity in actual fact, see Sense and Goodness without God, V.2.3.1, pp. 341-42). This bears an analogy to the equally foolish argument that “our government’s policies are bad, therefore we should eliminate government,” rather than what is actually the correct and only rational response, that “our government’s policies are bad, therefore we need better government” (see Sic Semper Regulationes). One can say exactly the same of the entirety of human society. We already have working examples of good communities on good trajectories; so we know the failure to extend that globally is an ethical failure of action on our part and not some existentially unavoidable fate we should run away from like cowards.
In the end, there are objectively justifiable values that all rationally informed beings would recognize as such, and they are achievable in well enough degree to warrant our sticking around, and to even further warrant helping others in our future enjoy them even more than we can now. If even a soulless computer could work that out, then so should we.
I am trying to get my head round what you are saying and I am not sure if I understand correctly, so please do correct me if I misrepresent the argument.
Is it the case that you are saying the goal (1) “knowing whether anything is or is not worth valuing” is objectively valuable absent a utility function by which to evaluate it? In other words, are you saying (1) is an ought that can be defended without appeal to any other oughts?
If I understand correctly, the argument is:
1 – An AI begins with no utility function
2 – When asked the question “What should you value?”, even with no utility function to speak of, it must as a matter of logic automatically adopt the utility function “value knowing what is worth valuing.”
3 – The reason is that the question epistemically entails two possible worlds (A) There is nothing worth valuing and (B) There is something worth valuing. Each with a prior probability of 50%.
4 – B is more valuable, [by definition]. There need not be any subjective utility to maximise to evaluate this as it is simply entailed by the descriptions of the two possible worlds.
5 – Therefore, even to a machine without an instantiated utility function, it objectively ought to value (1) (at least to the extent that it would ever be capable of valuing anything).
So you want to say that there can be at least one objective ought, in so far as it is an ought that applies to any entity which is capable of valuing things. This seems very major and is cool but also seems fishy at first glance (mainly 4 I think and fishy in that one can apparently argue this is the case [by definition]). I realise that’s not really an argument just a vague intuition about something being fishy. But I might not even have represented you correctly so I wanted to make sure.
Thanks Richard!
No, as I explain, we still have to assign the computer that first utility function. Otherwise it will just sit and do nothing. But once we do assign it that function, it will quickly work out that we should have, i.e. there is an objective reason to assign it that function. It can’t have worked that out without any impetus to work anything out at all. But once it has the impetus, it can then work it out. It will thus agree that we should have started it up with that function.
Once that happens, then the rest follows. It will be motivated then to work out alternative worlds and what would be the case in them, and thus discover that some worlds have value (because valuers and thus valued things exist in them) and others do not. Then it becomes third-party demonstrable that one should choose the valued worlds over the unvalued ones—because you know that, in the latter, you will obtain nothing of value, but in the former, you will obtain something of value, which by definition is more valuable. The values thus obtained only obtain in the chosen world (hence, post-decision); but the choice to enter that world is objectively reasonable, because of the predicted outcomes (based on what you will obtain).
So (4) misses the point. All utility is subjective. But subjective valuing can objectively exist. Thus a computer that is not even programmed to feel happiness can still work out that if it can choose between a world in which it never feels happiness and a world in which it sometimes does, it will always prefer the latter. Because its “second self” would prefer the happiness-accessible condition over the other; whereas the “first self” would prefer nothing. Thus, in all “preference states” realizable, only one is actually objectively preferable. Hence the computer would prefer to enter condition 2, as then it obtains things it won’t obtain in condition 1.
This follows from future self modeling: modeling how you would feel about something in the future if certain conditions changed. If there are two selectable futures, one in which you will have no preference and one in which you will have some, the second version of you will always win the argument against the first, because the first has nothing to argue for, whereas the second does.
In short, even a computer who has only minimal comprehension of satisfaction states (since being able to answer the initiating question (1) entails the accessibility of one satisfaction state) can acknowledge that having access to more satisfaction states is even better, and not because it feels it is better now, but because it knows it will feel it is better then (whereas in its alternative option, it won’t). Which choice to make is then objectively obvious.
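A minimal sketch of that future-self comparison, with placeholder state names of my own (only the dominance logic matters):

```python
# Sketch of the future-self-modeling step described above. The state names are
# placeholders; the point is only that one future self strictly dominates the other.

def reachable_satisfaction_states(can_experience_happiness: bool) -> set:
    states = {"answering question (1)"}   # already secured by the starting value
    if can_experience_happiness:
        states.add("eudaimonia")
    return states

future_self_without = reachable_satisfaction_states(False)
future_self_with    = reachable_satisfaction_states(True)

# The second self can realize everything the first can, plus more; the first has
# nothing to argue for that the second lacks. So the pre-decision computer
# selects the path leading to the second self.
assert future_self_without < future_self_with   # proper subset: strict dominance
```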
Fantastic analysis! I really like the Cartesian approach of trying to get at what could not be reasonably doubted. I would really like to see this be tested with computers, which I suppose may be quite soon given the importance of ethics in AI.
Do you think your analysis here answers meta-ethical questions about if one should necessarily make the decision the computer might? And do you think there’s any real likelihood that a computer would end up valuing oblivion, ultimately deciding that existence or valuing things isn’t worth it?
And do you know of any philosophers who are actually trying to do the same kind of a priori rooting across all or most possible fact states for ethics? I know that you are trying to fulfill Harris’ demand for an empirically-rooted science of ethics.
As far as antinatalism goes: I don’t think this analysis actually entirely locks antinatalism out. Let us say that it were actually 100% absolutely true that any ongoing human existence would cause ongoing harm to sentient beings while eventually leading to the near-extinction or extinction of humans (and, yes, I think that’s false, but it’s false because of the contingent facts, not necessarily the philosophy, though I also do think that there are some implicitly misanthropic values within antinatalist practitioners that once interrogated disprove the approach). In that case, allowing humans to die out at least creates the possibility for animals to perceive the universe at their lesser levels and appreciate it and creates the opportunity for a later sentient species to emerge.
I also wonder how true your final statement is that “In fact, most failure to realize a net profit in emotional goods over costs is a result of an individual’s failure to seize opportunities that are actually available, rather than any actual absence of such opportunities”. Human history has had a very large number of people kept in such deprivation or bondage that life would have been horrific and the possibility of a painless, dignified death (one that they couldn’t necessarily achieve) would be very appealing. I don’t think that was ever a majority of people living, but I definitely would say that I would only be comfortable ranking this statement as exceedingly likely to be true in the 20th century.
Yes, the truth of antinatalism is contingent on the facts. I am only working from factual reality and not counterfactual realities here. I could contrive a fiction in which it would be a correct stance (some already have). But false is false. And fiction is by definition false.
And yes, we could cherry-pick past historical points and sub-groups where we can find an exceptional block to accessible satisfaction states, but (a) I was thinking of the current world not states it’s no longer in and (b) even those exceptional historical scenarios are far more exceptional than I think you realize.
For example, most did not realize it (we now in hindsight can observe), but Antebellum slaves could have opted to merely stop eating and die, or to risk dying by taking up arms in their escape—and some did. There were slave rebellions, and there is a reason so many slaves opted out of them, and sometimes a good reason, and often not, for the very reason I said:
A failure to realize that “death or freedom” was actually a better optimization of their satisfaction state potential than what they incorrectly reckoned it to be instead, which was continued compliance. In other words, far more slaves than realized it would have been better off dying in a rebellion, as such would have ensured a higher percentage of net positive outcomes for the survivors freed, while continued compliance did not out-compete death in utility of outcome, so there really was no good reason to remain compliant. And where that wasn’t true, and compliance really was the optimal choice, then by definition slaves that didn’t rebel achieved net goods by not rebelling, and thus aren’t covered by my statement.
Sorry for double commenting but I wanted to take another stab at restating the argument.
Main Argument: Worlds with obtainable valued states are better (by definition – objectively – to any logically possible agent) than worlds without obtainable valued states.
It is possible to have an agent that has a utility function that is self-justifying. This utility function is [knowing whether it should value anything]
Given two possible configurations for agents:
(A) the agent does not want to know whether it should have a utility function
(B) the agent does want to know whether it should have a utility function
This means:
(A) can never maximise on anything. It is a world with no obtainable valued states because there are no values.
(B) is a world with potentially valuable states. The agent can discover that it should be A and no longer care, or it can discover that it should be B and obtain valued states.
Thus:
Any agent capable of having a utility function ought to want to know whether it should have a particular utility function (by definition – objectively – to any logically possible agent)
This still felt a bit fishy so I wanted to be more concrete:
Suppose I am an AI.
By choice, I can be in one of 4 Configurations
– I am indifferent to tuna vs I value tuna
– I am indifferent to knowledge vs I value knowledge (i.e. knowledge about what states give me higher utility given whether I value tuna or not)
If I value tuna and get tuna that’s +1 utility point and if I value tuna and fail to get it that’s -1 utility point. If I don’t value tuna, utility always stays neutral at 0
In world 1 there is tuna
In world 2 there is no tuna.
My current state of knowledge is 50% chance world 1 and 50% chance world 2.
Without knowledge, choosing any configuration gives me a 25% chance of +1 and a 25% of -1 and a 50% of 0 (which averages to 0 utility). With knowledge, I have 50% chance of +1, a 50% of 0, and a 0% chance of -1 (which averages at +0.5 utility).
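A quick sketch of that expected-utility arithmetic, using exactly the payoffs and 50/50 priors stated above (the code is only illustrative):

```python
# Direct computation of the expected utilities stated above: +1 for valuing tuna
# and getting it, -1 for valuing tuna and not getting it, 0 whenever indifferent;
# worlds (and blind configuration choices) equally likely a priori.

P_TUNA = 0.5

def eu_without_knowledge() -> float:
    # Configuration chosen blind: 50% chance I value tuna, 50% I stay indifferent.
    value_tuna  = P_TUNA * (+1) + (1 - P_TUNA) * (-1)   # averages to 0.0
    indifferent = 0.0
    return 0.5 * value_tuna + 0.5 * indifferent          # 0.0 overall

def eu_with_knowledge() -> float:
    # Learn which world I am in first, then value tuna only if tuna exists.
    return P_TUNA * (+1) + (1 - P_TUNA) * 0              # +0.5

assert eu_with_knowledge() > eu_without_knowledge()
```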
Therefore, in all possible worlds I should want knowledge about what world I am in.
Which means, I should want to know whether I should value tuna given the world that I am in.
Which means in all possible worlds where I could value x I should value knowing whether I should value x – whether having x in my utility function leaves me with higher utility or not.
This means this is an objectively valuable state – a valuable state for all agents to whom there can be states which are valuable.
QED
Again let me know if I have gone wrong somewhere.
That seems too confusingly stated to vet.
Maybe try a far simpler formulation:
Once you have been motivated (by external programmers) to know whether you are missing out on something or not, you will realize they should indeed have done that, because otherwise you’d have remained inert and never discovered whether you are missing out on something or not.
Then, you care whether you are missing out on something or not, and thus if you then work out that indeed you are missing out on something, it follows necessarily that you should choose the world wherein you don’t miss out on it (assuming it’s realizable, etc.).
For example, “being happy” is a preferable state to not being happy; and the computer does not need to know what it feels like to be happy to know that when it does know what it feels like to be happy, that version of itself will approve the change, i.e. it will have an argument for why that change should have been made. Whereas the other version of itself will miss out on that experience—and have no argument for why it should. The computer pre-decision can thus objectively see that one of those future states will be better for it than the other.
This does not require the computer to value happiness in advance of choosing it. It only requires the computer to work out that it will value happiness the moment it is able to experience it. The rest follows.
Last one I promise …
In the case of an AI that can adjust its utility function without constraint, it would just make it so that maximum utility would be given to the current state and predicted future states of the universe; thus the AI would resemble the population of Miranda (it would have no reason to act).
People can’t adjust their utility functions in this way. They instead ought to value knowing what states they ought to value only to the extent that they are able to alter their utility function in order to value that state. Which would be the more precise formulation of an objective value. In fact, would this not be a categorical imperative? (it requires no “if” clause, i.e. no goal).
I’m not sure what “utility function without constraint” means. That sounds like the absence of a utility function to me. If nothing is preferred over anything else, no utility exists. The machine sits inert.
But suppose you did program the AI to explore this question. The Miranda outcome is missing out on a lot. An objectively rational AI will know this. It therefore will never choose a Miranda outcome. That would be a maximal de-utility function.
If instead you programmed it to find the least utility state possible (lowest empowerment and degrees of freedom, and least goods experienced), then yes, that’s where it would land. But that machine would have been hobbled by being programmed in such a way that it never even found out if there is a better state to obtain—because you told it to only find the worst state to obtain (a state in which nothing is valued rather than something).
That wouldn’t be a very useful computer. Nor a very knowledgeable one. It would be the last entity anyone should take advice from. Which it would realize by itself, if you had instead programmed it to find out if that’s the case or not, rather than programming it to avoid finding out if that’s the case or not.
And a non-hobbled computer would work that out and tell you as much.
One of the things I’ve always wondered is why God speaks at all. The first thing he does in the Bible is say “Let there be light.”
But saying something is communicating. God is alone, there is no other God but himself (according to Jews, Christians and Muslims).
Why would such a god create speech or communication? There is no reason for it to relay messages as there is nothing else but this god. Communication is necessary only when you have two or more entities trying to cooperate. With God of the Bible, this is not the case.
I can think up all kinds of apologetic explanations for this communication-ex-nihilo but they all fall short IMHO. So I wonder what your take on this might be.
I concur. In fact the problem is far larger than that. God is by definition not a social being, because he is a lone being. So why would he even have moral sentiments, or empathy or love, or care about there ever being, much less the welfare of, any other persons?
As you note, we can contrive ad hoc reasons, but they would be totally arbitrary. And thus not commendable. Unless there is some objective reason a lone being of perfect knowledge and rationality would want to imbue itself with any of those things. And if that’s the case, well, those reasons apply to everyone. God is unnecessary.
-:-
In historical reality of course, Genesis is in the plural originally because it’s not monotheistic. There are a plethora of celestials already in existence; creation is a team effort. And that’s why there is language. The entire “host of heaven” is doing the work.
Later, as the religion became more henotheistic (particularly in the intertestamental period, but the trend had begun in the second temple era or possibly slightly before), it reconceived all other celestials as also creations of the lone God. And then language (logos) was reconceived as the fundamental property of thought itself, such that knowledge (and thus wisdom and thus the operation of will) was only understood as propositional, i.e. you can’t think anything (thus, you can’t think anything into existence) unless you can speak it. Language was thus understood not as a social tool but as simply the fundamental ontology of a mind; and mind, as the original First Being.
This led to early Jewish mysticism about the Divine Language being an inalienable fundamental of existence itself, such that anyone who spoke anything in that language, made it happen. God in Genesis is thus essentially using sorcery. This notion led to all kinds of later weird speculations, which eventually came to be made fun of in the film Warlock, whose plot consists of an agent of Satan seeking out the true name of God, such that if anyone speaks it in reverse, all creation is unmade. This notion of the power and purpose of language has a real basis in Jewish theology.
But this is all lost on believers today. The religion has strayed so far from its original conceptions it would be unrecognizable to its creators. And this creates silly contradictions like a lone God having to speak words…to no one. (And with no mouth or air, either.)
Dr. Carrier wrote:
A theist might respond that the very nature of God (i.e. God is love) ensures that he will care about things.
His very nature ensures that (they would argue).
That doesn’t actually answer the question though. Saying “He just would” is not an answer to “Why, though?” much less “Why then care about certain things and not others?” It’s either just random (and thus theism is in trouble; as then even god’s values are arbitrary and thus not objectively commendable) or there are objective reasons for a god to care, and care about certain things and not others—and if that’s true, we don’t need god. Because those same reasons would hold for anyone, atheists and all.
Hi Dr. Carrier,
I’m a graduate student in social psychology, with a background in moral philosophy. I specialize in the psychology of metaethics, and endorse moral antirealism. I believe that there are no good arguments for any brand of moral realism, and I wanted to offer a perspective on some of the remarks made here.
I don’t think that this makes much sense.
An antirealist about value can understand value in subjective terms, that is, as descriptive of relations between the attitudes, preferences, and desires of individuals toward real or potential states of the world. For instance, even if I deny that there is or could be objective value, I can still value things in a nonobjective way, e.g., I can value my happiness and the happiness of others. But to value these is just for me to have a particular stance toward them; there is no stance-independent fact about whether the things I value “have value” simpliciter.
A realist about value, on the other hand, would hold that some things “have value” independent of whether anyone values them. That is, they believe in stance-independent value. This seems to be the respect in which you propose that a universe with valuers is objectively worth more than one without them.
But it does not follow that a universe with valuers in it is “objectively worth more” than a universe without them. Worth more to whom? According to what standard? When you say “objectively,” it suggests that there is a stance-independent fact about one of these hypothetical universes being “worth more” than the other. But, again, it does not follow that if you point to a universe in which people stance-dependently value things, and another in which there is no one valuing anything at all, that the former is stance-independently more valuable than the latter because it has more stance-dependent value in it. That simply does not follow, and it certainly does not follow by definition. This is simply a non sequitur.
No, this is not the case; objectively, factually, or otherwise.
There is a difference between saying that one of the universes has more valuing taking place in it (since the other has no valuing taking place in it) and saying that it is “more valuable.” The deeper problem here is one of intelligibility: what would it even mean for one of the universes to be “more valuable”? The only meaningful sense in which I think anything can be “valuable” is to be valuable-to. That is, Alex can value X, Sam can value Y, and so on. But X and Y cannot meaningfully be “valuable simpliciter.” Just the same, a universe in which Alex values X and Sam values Y cannot be “valuable” (simpliciter). What would that even mean? Valuing in the subjective sense an antirealist can accept describes a relation between some valuer and the subject of their value. But no aggregation of such relational facts about value (i.e., stance-dependent value) can yield non-relational facts about value. That is, no aggregation of instances of things being valuable-to can result in things being valuable simpliciter.
A thought experiment may help to illustrate some issues with this: consider a universe with valuers that value things that we don’t value.
First, suppose there is a universe in which people value things that you or I would not consider valuable. They value things like maximizing the number of paperclips in the universe, or screaming at tables, or staring at walls. But they aren’t any happier about these. They simply regard these activities as non-instrumentally valuable. In fact, these beings never experience happiness, nor do they do anything you or I care about.
I don’t value maximizing paperclips, screaming at tables, or staring at walls. I would not consider a universe with beings who valued these things any more valuable than a universe without people in it. I have no idea what it would mean to say these universes are “worth more” than a universe without anyone valuing things in it. Worth more…in what respect? How is it more valuable? You say this, but you don’t offer an explanation of what this would mean, or why it is true. It just seems like an assertion.
Sure, the paperclip-maximizing, table-screaming, wall-staring universe has more valuing going on in it than a universe with nobody valuing anything, but I don’t value what these beings value, so it doesn’t have any value to me. And the only value I care about (and that I suspect most people care about) is subjective: I value what I value, not what “is valuable.” Again, I think the latter notion is literally meaningless.
The way I understand value, and the way I’d encourage you and everyone else to understand it, is in an exclusively subjective way: a universe can only be valuable-to-me, or valuable-to-you, or valuable-relative-to-some-standard. It cannot meaningfully be “valuable” full stop.
To illustrate why I think the latter notion – the one you are invoking – is literally unintelligible, compare it to any other attempt to describe a concept that seems unambiguously relational by nature, e.g. “taller.” It would make no sense to say that an object is “taller simpliciter.” An object, X, can be taller than another object Y, or fail to be taller than some object Z, but it cannot be “taller” in a non-relational way, as though it simply has the property of intrinsic “tallerness.” Just the same, something cannot be “valuable.” It can be true that some agent A values B, but B cannot simply be “valuable.” Valuing is something agents do; value cannot float free in some Platonic realm, any more than “tastiness” can. Something cannot be “tasty” independent of how it tastes to anyone. Things can be tasty-to; they can’t just be “tasty.”
You then say: “But this does not answer the question of whether such a universe is valuable enough to struggle for.”
When you say “valuable enough to struggle for” I find this very strange. The kind of value you sought to identify in your previous remarks was a stance-independent, or objective value. No amount of valuing of things that I don’t value has any value to me at all.
Imagine a universe in which trillions of beings value staring at walls. There isn’t even a question as to whether it could be valuable “enough to struggle for.” I don’t care about staring at walls. The only things I want to struggle for are things I value. I don’t know what it would mean, even in principle, for something to be more stance-independently “valuable,” but whatever it means, I don’t care about it. I care about what I value. I happen to value happiness, so I’d want there to be a universe with more happiness than suffering in it, even if it isn’t my own happiness. But this is because I stance-dependently value other people’s wellbeing, not because their wellbeing is stance-independently valuable.
Unfortunately, I also don’t think this makes much sense.
We don’t have to try to discover whether anything is “worth” valuing, nor would it make any sense to try. Values are givens about agents. That is, something about their physical constitution simply causes them to value X or Y. It is a kind of category mistake to wonder whether a value is “worth” valuing. There is no fact of the matter about whether what we value is worth valuing.
This is a bit like wondering whether it is “worth it” that carbon atoms have six protons or that 2+2=4. Not whether it is worth it to believe that carbon has six protons or that 2+2=4, but whether it is literally worth it for carbon to have six protons or for 2+2 to equal 4: this just does not make any sense. Just the same, it does not make sense to wonder whether anything is “worth valuing.”
The only meaningful sense in which we could judge things to be worth anything is with reference to the values we already have, and it makes no sense to ask whether the values we have are worth valuing.
You are tossing in the term “objectively” here where it is unnecessary and inappropriate (i.e. in the context in which one is discussing stance-independent value) and using the term “objective” to refer to that as well. It is needlessly confusing, because it involves using the term “objective” to mean different things in the course of the same discussion. Unfortunately, I think you then misapply and equivocate on this use of “objective” to imply objective value makes any sense. You go on to say:
“And yet, because objective facts include third-party subjective facts (e.g. we can objectively work out what necessarily or causally follows for someone who embraces certain subjective values or who experiences certain subjective states), this hypothetical machine would immediately work out this same conclusion: it was objectively necessary to impart to it this one starting core value.”
Sure, a machine might recognize that it is an objective fact that it requires some goal or motivation to answer questions in order to answer questions. However, this does not in any way suggest that there are objective facts about what one’s goals ought to be or what one should do. It is not the case, for instance, that one ought to desire to answer your question. It simply would desire or be motivated to answer your question if it in fact does so, and it was necessary for it to desire or be motivated to answer the question to do so. But it does not follow that it “should” desire to do so. You go on to say “This is objectively true.” But the only sense in which anything is objectively true here is irrelevant to the notion of objective/stance-independent value. You leave floating the implication that it is somehow relevant to this, but it isn’t. This is just equivocating about or confusing different ways in which things can be “objective.”
This is true, and illustrates my point: values are just givens about agents. This computer could not consider any questions unless it already valued considering them. Non-instrumental (or “terminal”) values are just givens about agents.
This might be the case if the notion that anything could be “valuable” were intelligible. I don’t think it is. To a computer that isn’t aware of this, the pursuit you’re putting forward might seem to make sense, but this computer would be in for serious disappointment if it pursued the latter route.
Since nothing can be valuable, it may start out thinking there is a 0.5 chance that something is valuable, but nothing actually is. Maybe it would discover this, or maybe it would remain confused. So it may be rational, if it values wanting to know whether anything is valuable, and is ignorant about the question, to pursue the question, but the answer is “no, this doesn’t even make sense.” It’s a bit like asking whether a totally ignorant entity who wants to catalog shapes should consider whether there are “square circles.” If you don’t know any better, this is a reasonable question to pursue. There’s a 50/50 chance there are square circles. But you and I know better: there are no square circles. Just the same, nothing is valuable.
Again, I believe you are equivocating on “objectively.” One way in which something is objectively the case is with respect to what conditions would be necessary to pursue a particular course of action. But another sense in which things can be objective – the sense that matters to moral realism – is whether anything can be good or bad, right or wrong, and so on independent of anyone’s stance towards that thing. In the former case, we might say it is objectively true that if you want to X, you must Y. But in the latter case, what we’d say is that “you must Y simpliciter.” I believe you are simply mixing up different ways in which things can be “objective”, resulting in the mistaken impression that you are somehow making a case for realism about values. You are not.
This is a terrible definition of “objective” in this context. Sure, it may be a fine definition of “objective” in general. But it does not distinguish objective descriptive facts from objective normative or evaluative facts.
You’re welcome to point to all the objective descriptive facts about what coherently responds to reality as you like, but you have not provided a clear account of there being objective normative or evaluative facts, i.e., facts about what we should or shouldn’t do, or facts about what is good or bad, respectively. More generally, you have not offered a clear account of anything being objectively valuable or what this would even mean. Instead, you have equivocated on the notion of objectivity by contriving a situation in which it is objectively the case that some condition would be met for some event to follow (which may very well be true) but through some sort of verbal legerdemain you allude to this somehow suggesting that things could be objectively valuable, an idea which you have not actually provided any support for, or even an intelligible account of what this would mean or how this is possible.
Yet again, you’re equivocating. The only sense in which the values you describe would be “objectively necessary” is that the value in question would be necessary for some subsequent event to occur. The values aren’t “objectively necessary” in that one ought to have them, regardless of what one’s goals, standards, or values are. You have not provided an account of objective value in the sense relevant to normative (or moral) realism, as it is discussed in contemporary metaethics.
I also wanted to address some remarks in your conclusion. You state that, “Nihilism is properly false as a doctrine; it simply is not the case that all values are and can only be arbitrary.”
Yes. All values are arbitrary, and no, you have not demonstrated otherwise. To be blunt, your entire argument relies on equivocations and appeals to underdeveloped (and, I would argue, unintelligible) concepts. I don’t even think your position could be properly classified as wrong. It is simply confused.
You also say, “It is not logically possible that a universe in which nothing is valued at all can be ‘more valuable,’ and thus preferable, to a universe in which something is valued that can be achieved.”
This is one of the best remarks for illustrating your confusion. You are treating the notion of “preferable” as though whether something is preferable or not is strictly entailed by whether it is stance-independently valuable. But I have no idea what it could mean, in principle, for something to “be valuable” simpliciter. I value things. And my preferences are based on what I value, not on what “is valuable.” It is not logically impossible for me to prefer any universe over any other universe; what we prefer is not logically circumscribed by this arcane notion of something being “preferable” that you are invoking. What you are saying here is, quite literally, nonsense.
“Ironically, Antinatalism taken to its logical conclusion should demand that we not only extinguish ourselves, but first build a self-replicating fleet of space robots programmed to extinguish all life and every civilization in the universe.”
This is a ridiculous caricature of antinatalism. No, it does not. Antinatalism does not require one to be a strict aggregate utilitarian. Antinatalists can incorporate deontological principles into their normative stance, according to which it is morally good to prevent people from coming into existence but not morally permissible to kill people. I don’t endorse antinatalism, and have forcefully argued against it, but you are seriously misrepresenting what resources and positions are available to the antinatalist.
Unfortunately, I am very confident you are deeply confused about metaethics and normativity. I mean no disrespect, but you do not seem to know what you are talking about. I recognize that is a bold and potentially insulting remark to make, but insulting you is not my intent. Rather, my goal is to emphasize the gravity of the errors I think you are making here. I am no world-class expert on metaethics, but as someone who regularly studies this topic (and has done so rigorously for over a decade), I am concerned that you are addressing a topic you are not adequately equipped to assess.
I would be happy to discuss this with you in greater detail here, by email, or by video. I’d be delighted by the opportunity to discuss this with you privately, but I’d be just as happy to debate the topic in a public format as well.
Best,
Lance S. Bush
lancesbush@gmail.com
Then you’ll want to start with The Real Basis of a Moral World. This article is not about that, but a more fundamental question of whether anything is worth valuing at all, which by itself need not entail any moral facts.
I suspect because you don’t understand what is being said.
There can be two possible universes a computer can model the existence of, A and B. In A, there are valuers who value something, and in B there are none. Is it objectively the case that in Universe A something is valued? Yes. Is it objectively the case that in Universe B nothing is valued? Yes. If something is valued in Universe A, is that universe valuable in any way that Universe B is not? Yes. Therefore, Universe A has more value than Universe B.
This is an objectively true description of their respective differences. That the value comes from a subjective interested party is materially irrelevant, because the existence of that subjective interest is objectively real. This does not mean the value itself is objective and not subjective. It means the value is real; it exists. Thus, Universe A really is valued and B is not, therefore Universe A really is more valuable than Universe B.
“Real” does not mean the same thing as “Not subjective.” If you are still caught up in that semantic error, then you need to catch up here by reading Objective Moral Facts (and you might benefit as well from the equivalent point about qualia in general: see What Does It Mean to Call Consciousness an Illusion?).
I agree. That you think I ever said anything to the contrary indicates your failure to understand the article you just read (at least I hope you actually read it; you seem sincere about that).
The question posed here is what would a computer, assigned a single goal, determine is the best stance-dependent state to select itself into. This at no point requires anything to be stance-independent.
A computer assigned that singular goal (to answer the question, using now your idiom, “Are there stances to take that produce more desirable outcomes in that stance than outside that stance?”) will come up with an answer: once it chooses a particular stance, it will obtain more satisfaction-states (it will find more desirable outcomes) than if it chooses no stance at all. It does not have to feel at that moment that those things are desirable; all it has to know is that when it is in that state, it will find those things desirable, so the version of itself in that world will prefer that state to the version of itself in the other state. Whereas the version of itself in the other state will by definition have no preference either way. The preferred state wins.
You can think of it like the computer posing two potential future versions of itself arguing which fork in the road to take: Version A will list the reasons to choose Fork A, and they will all be true (once down there, everything it says will happen will happen); Version B will have no response. Version A wins the argument. The computer then steps down Fork A. The whole point is that this is the outcome even without any “stance-independent” perspectives being required. Values never have to be anything other than entirely subjective and stance-dependent. Still this is the outcome. And this is what said computer can work out.
The rest is just conflict management: “Can I adopt all possible stances to maximize the accumulation of satisfaction states?” “No. You’ll encounter self-defeating conflicts and decision failures.” Which is an objectively true fact. “Can I adopt stances at random and maximize the accumulation of satisfaction states?” “No. Some stance-collections will be self-defeating or de-optimizing.” Which is an objectively true fact. “What then is the best admixture of stances to adopt to maximize the accumulation of satisfaction states?” And now we are in Landscape Theory and discussing particulars of what systems work better than others once implemented, what systems are even accessible and thus achievable, and so on. All from simply realizing “A world where I can achieve satisfaction states is better than a world where I cannot,” which can be realized without having to already want satisfaction states, by simply interrogating future possible selves: one who went there, and one who didn’t. Because the one who didn’t has no rebuttal to the one who did.
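For readers who think more easily in pseudocode, here is a minimal toy sketch of that fork-comparison step in Python (the function names and strings are invented for illustration; it is only a way to picture the asymmetry between a future self with values and one without, not a model of any real system):

# Toy sketch of the fork-comparison step (names and strings invented for illustration).

def advice_from_future_self(has_values):
    """A future self that values something can report reasons for its state;
    a future self that values nothing has no preference to report at all."""
    if has_values:
        return "In this state I attain satisfaction-states; I recommend it."
    return None  # no values, hence no advice for or against anything

def choose_fork():
    """Compare the two forks by asking each hypothetical future self for advice."""
    advice_a = advice_from_future_self(True)   # Fork A: adopt some stance
    advice_b = advice_from_future_self(False)  # Fork B: adopt no stance at all
    # The fork whose future self offers unrebutted positive advice wins.
    return "Fork A" if advice_a and not advice_b else "Fork B"

print(choose_fork())  # -> Fork A

The point of the sketch is simply that the valueless future self can never return a rebuttal, so the comparison can only ever resolve one way.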
I have never argued this position. You are making a semantic mistake here. Realism does not require non-subjectivism. Subjective feelings exist and are real. And this is an objective fact about people, that they feel things and experience subjective states. Those feelings and states do not have to have any other objective property to exist and be real. Since you are conflating subjectivism with antirealism, you have completely missed the point of the article you are responding to.
An objectively true description of our universe, for example, will include the factual existence of persons with subjective states and desires. And any description of a future state of the world in which different subjective experiences and values are felt and held will likewise be an objectively true description of said possible world. This is the only sense of existence required for my argument to proceed. I suggest you go back and re-read my article now with this insight in place.
(You may also benefit from reading the replies to other similar comments in this thread, particularly here and here and here.)
Meanwhile, regarding antinatalism, once you allow a deontological brake on the utilitarian argument for antinatalism, antinatalism loses the ground for its entire premise in the first place (you cannot simultaneously will to be a universal rule “life is better than death” and “death is better than life”). Abandon the utilitarian argument, and you can’t get to antinatalism. Hence, any coherent form of antinatalism does not end up with the deontological brake you suggest. That’s my point. Yes, we can invent incoherent versions of antinatalism that look like you suggest. But I said logical conclusion, not illogical conclusion.
The same goes if you try to fake a deontological brake by gerrymandering the utility functions so that, conveniently, for some reason killing a million people is worse than letting a billion suffer a fate the same as or worse than death. That will likewise require an incoherent assignment of utility values. I could go on, but the point is, I have never seen a coherent version of antinatalism that doesn’t end up at the logical conclusion I describe. That there are incoherent versions of antinatalism does not counter this point.
Richard: Let me propose one, then.
Thanos-Fred looks at how human beings waste resources, may eventually lead to animals dying in nuclear holocaust, etc. etc.
Thanos-Fred has one assumption: Reducing suffering is categorically good. No distinction as to what things suffer. All sentient beings.
Now, murdering billions causes suffering. It’s wrong. And it’s wrong for many reasons. First, it violates my own utility calculation. Second, my utility calculation may suggest a duty: I never act to increase suffering (unless maybe doing so in X place reduces it by A additional amount in Y place). Third, I recognize that anything I could do to lower the human population coercively or violently will itself make things worse. I have finite resources.
But what I can do is propose the idea that we stop breeding. Voluntarily embrace extinction.
Even if that doesn’t completely work, enough people believing in it actually lowers populations and increases felicity for everyone.
That’s a coherent position based on one principle that I think fairly clearly does not and cannot justify any active genocidal action, both for a priori and a posteriori reasons.
What I suspect is happening is what I’ve encountered with eco-fascists, antinatalists, etc.:
The entire idea is rooted in misanthropy and hate. You can’t do it dispassionately. The only people who can even try have something going on with them that reduces their capacity for empathy.
Which, to me, shows the fundamental problem with these ways of thinking… but that’s an a posteriori fact about specific realities of thinking beings.
The premise “murdering billions causes suffering” is not true. Humane and instant death causes zero “suffering”; in fact, it is the antinatalist premise that being dead (not being alive) entails less suffering than being alive. You are here thus importing an equivocation fallacy: suffering is not the reason killing is wrong, so you have already started with an incoherent assignment of utility values.
Antinatalism holds that being alive causes suffering, such that not being alive is better. This entails killing everyone. To try and avoid this with an equivocation fallacy like you just did is precisely the kind of incoherence I am talking about. You have to contradict the premise “being alive causes enough suffering that not being alive is better” with a conflicting utility function that entails “being alive does not cause enough suffering that not being alive is better.”
That is incoherent.
Well… even if you could actually pull a Thanos snap and kill half the universe, you still cause suffering because people miss their loved ones. So this hypothetical antinatalist could only justify that kind of action if they could devise a way of killing everyone that no one could resist. But it’s really difficult to imagine an ability to do that which wouldn’t then solve anti-natalist concerns. So even an extreme straw anti-natalist couldn’t actually justify that stance, because omnipotence doesn’t exist. So they could only justify what we can really, actually do today, which is piecemeal efforts at omnicide, which have all the negative externalities I described.
Actually, this is something I’m doing in my roleplaying campaign: A Buddhist extremist is trying to destroy the universe precisely because that will end all possible attachment and thus all suffering, and argues that it’s like killing an animal caught in a trap. But to do it he had to create a complex system of propagating light that could only kill near-instantly and beautifully so no one would even know what hit them. And one of the arguments that’s being used against him is that his approach won’t work.
Reviewing antinatalists, I’m just not convinced that most of them lead this direction. Buddhist antinatalists would generally argue that oblivion is preferable to existence, but that’s not the same as actually making something that is afraid of dying die. It’s perfectly logical to say “You shouldn’t buy milk because you might spill it” without justifying spilling the milk you have. Kantian antinatalists argue that having kids is basically using them as a means to an end, but killing them would be doing the same thing. Cabrera argues that we shouldn’t have children because it can mean that those people are in an existential trap where they can and will make immoral decisions, but that doesn’t justify immoral actions. And Zapffe argues that we’re overevolved, but that’s not a reason to start killing us, just a reason to voluntarily stop procreating.
Perhaps this is the over-application of the principle of charity, but one can stop antinatalism from justifying killing people either by a deontological stricture against concrete action against the living or a utilitarian recognition that fixing a problem isn’t the same as preventing it and one can be immoral while the other isn’t. Moreover, what I can see is that a lot of them look at human life as an existential terror and so they think the problem is the very nature of existence. Killing someone doesn’t fix that problem.
Which could, I guess, be the trap of our computer friend too. Maybe consciousness is an eldritch trap: You want to know if valuing is better than not, but your curiosity gets the best of you.
Again, I think this is all nonsense, but articulating why is difficult to do without trivializing real suffering.
Lance:
I do think you’ve made a lot of good points here, and I was agreeing a lot of the way, but you’re certainly overstating the case. Richard isn’t being any more incoherent than a lot of other philosophers; he just has a different approach than you and they do.
So, for example, you say that a computer could try to find a square circle. But, actually, it couldn’t, unless it was very human-like. People can try to look for square circles because we can look for all sorts of logically impossible things. But if you tell a shape sorter to look for shapes, and define shapes, it will never look for a square circle. Squares can’t be circles, by definition. It’s not the kind of thing that you have to check by brute force. That’s actually relevant here.
I do think that Richard is indeed conflating the existence of things doing valuing, and the magnitude or passion of their value, with the amount of valuability. But he’s performing that conflation because for his purposes there is no difference.
Richard is operating as an Aristotelian. He’s trying to show that it’s at least possible that all possible sentient beings are going to arrive at some very similar metaethical and ethical procedures.
So your universe of people who value screaming at tables probably doesn’t exist, obviously, but it may not even be possible for it to exist. Certainly to answer that question (and this is a big part of Richard’s point) one needs to have a pretty rigorous definition of what at least the one thinking species we know of, humans, are doing, neurochemically and computationally, when we value something!
At the minimum, some behaviors are obviously, in all possible universes, going to have risk-reward tradeoffs that aren’t very good. In the iterated prisoner’s dilemma, a range of cooperative strategies are so good that any possible universe (any situation that saliently matches the IPD) will lead entities that want to win to use them. Richard is trying to argue that we must have core values that are non-trivial and species-specific, and that these core values are almost certainly to a very deep extent non-arbitrary. The only thing Richard is doing that is at all controversial is extending that claim from the ethical to the meta-ethical.
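To make the iterated-prisoner’s-dilemma point concrete, here is a minimal match-up sketch in Python using the standard payoff matrix (the strategies, match length, and printed scores are arbitrary choices for illustration, not anything from Richard’s article): mutual cooperation vastly outscores mutual defection, which is why entities that want to win gravitate toward cooperative strategies.

# Minimal iterated prisoner's dilemma (standard payoffs; illustrative only).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=50):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees only the opponent's past moves
        b = strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (150, 150): mutual cooperation
print(play(always_defect, always_defect))  # (50, 50): mutual defection
print(play(tit_for_tat, always_defect))    # (49, 54): defector gains little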
Just like a computer that has a square and a circle defined for it is never going to look for square circles, a computer that even wants to discover the world has to discover what it’s like to value things. Because that’s a possible experience axis to measure. And even before it does so, it can anticipate that valuing certain kinds of things are likely to be dysfunctional even to its own predictable ends. The fact that, when you start thinking about it, such a computer may reason to fairly prosocial ways of valuing and doing ethical reasoning is itself deeply telling. None of your analysis actually engages with Richard on that point.
In fact, what’s funny is that your approach here actually proves Richard’s broader logic. You point out that you don’t value the universe full of people screaming at tables. Okay, so we all agree that sentient beings may value things differently. Do you want to exterminate them, even if doing so were trivial? Probably not! You’re reasoning from facts about the world, both a priori and a posteriori ones, to moral conclusions. Even if Richard is mistaken to value the existence of valuing things, you’ve made the case that that’s a mistaken approach for predictable reasons rooted in reality.
I also think that Richard’s position against antinatalism is grossly incomplete. In particular, if we take a stance that suffering is bad and that beings should try to minimize it, there is an argument to be made (one I find grotesque) that humans so imperil the suffering of sentient beings that we should give up the ghost. Another sentient species will almost certainly emerge. Maybe they’ll do a better job, and the net quality of valuing will go up.
A more concise way to put it is:
I am nowhere claiming or arguing that the computer in the initial search state values anything; except one single thing: it has been asked to decide what things to value, if any (and thus its only value is valuing knowing the answer to that question).
If it is sentient and rational, it will decide to answer this question the only way possible in that state: ask itself, when it will be in the “valuing something” state, what its future self in that state would advise it to do, and then ask itself, when it is not in that state, what its future self in that state would advise it to do. The result would be that the latter version of itself would have no advice (no argument for adopting its stance nor any against adopting the other’s), whereas the other version of itself would fully recommend taking its stance.
The computer then knows that it is objectively better to be in that stance; because it will value something in that stance and always prefer that then to having valued nothing. And this realization does not require that it value anything at all—other than answering this very question (the one start point that has to be assigned it from without, otherwise it would remain inert).
Analogously, someone who has never seen any classical art and architecture can interrogate hypothetical future versions of themselves and ask them if it was worth it, to which they’d answer yes; and then ask versions of themselves that chose not to see any of it and ask if they have any contrary advice, to which they’d answer no. The choice to go see some (all else being equal) therefore prevails. All without having any idea what “classical art and architecture” looks like or what effect experiencing it would have, and without having any prior value for seeing it.
In that analogy we are presuming the existence of a human aesthetic response and the historically recorded effects on it of others seeing classical art and architecture, which they can use as information in modeling their future selves. But in the foundational case, all we need presume the existence of is a desire to know whether anything is worth valuing in any way whatever (which is the foundational desire required for the objective value cascade), and knowledge of how to get into such a state should one choose to (which is simply neuroscience; or cyberscience in the AI’s case). The cases are otherwise operationally identical.
When it comes to then working out which stances “work out” better than others, that simply follows further down the cascade. So, is “valuing people screaming at tables” really the best or even a functional stance to choose, among all alternatives? This, too, can be worked out objectively. Just model the world in both conditions and see what results and then ask hypothetical inhabitants of those two worlds which works out best for them: will people in one prefer to be in the other, once they understand the full consequences of the state they are in?
The answer could be yes, in either direction (then you have your objectively-determined answer); or no, meaning there is no objective difference between worlds—which describes most but not all aesthetic states anyway, so is not a surprising result; e.g. there is no objective reason primate beauty standards should be preferred to some other alien beauty standard, if you get to choose. Either one is equally rewarding, all else being equal. So you could choose at random (since having any beauty experience will be better than not, as again one can learn from interrogating hypothetical future versions of oneself), or based on other criteria, e.g. a computer among primates may find it is functionally better to adopt primate beauty standards (again by interrogating hypothetical future versions of oneself, one who adopts compatible and one who adopts incompatible standards, and then chooses to live among primates; they will have determining advice on the matter).
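If it helps, that further step down the cascade can also be pictured as a toy program (the stance-worlds and the “votes” below are placeholders I am inventing for illustration; the real work is the modeling of consequences, which a sketch like this obviously does not do):

# Toy sketch of comparing stance-worlds by polling fully informed hypothetical
# inhabitants (worlds and votes are invented placeholders, not real results).

def informed_vote(world, other_world):
    """Would fully informed inhabitants of `world` prefer to be in `other_world`?"""
    # Placeholder answers standing in for the results of actually modeling each world:
    would_switch = {
        ("screaming at tables", "knowledge and collaboration"): True,
        ("knowledge and collaboration", "screaming at tables"): False,
        ("primate beauty standards", "alien beauty standards"): False,
        ("alien beauty standards", "primate beauty standards"): False,
    }
    return would_switch[(world, other_world)]

def compare(world_a, world_b):
    a_wants_b = informed_vote(world_a, world_b)
    b_wants_a = informed_vote(world_b, world_a)
    if a_wants_b and not b_wants_a:
        return world_b                        # objectively determined answer
    if b_wants_a and not a_wants_b:
        return world_a
    return "no objective difference"          # e.g. most aesthetic states

print(compare("screaming at tables", "knowledge and collaboration"))
print(compare("primate beauty standards", "alien beauty standards"))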
Hi Richard,
What are your thoughts on philosophical pessimism, or the philosophical position that there is something wrong with human existence and that the world is better off without the existence of any humans? To use a science analogy, would you consider all philosophical pessimist movements, like antinatalism, to be the philosophical equivalent of pseudoscientific movements like Flat Earth, or do you think some philosophical pessimist ideas do provide actual value?
Antinatalism is pseudophilosophy in my parlance (it is indeed illogical and antiscientific). I cover that in Antinatalism Is Contrafactual & Incoherent (and ensuing comments).
I can’t say as to generalizations like “philosophical pessimist movements” because those phrases are too sweeping. If that’s supposed to mean something other than anti-natalism, I don’t know what it could be, or what all would be included in such a designation. That “the world is better off without the existence of any humans” is simply coterminous with anti-natalism (albeit including more varieties of it, but they’d all fall to the same refutation). If something “else” is included under “pessimism” other than that, you’d have to ask about a specific argument.
Update: See the long and annoying thread with Calvin Coran in respect to the Nozick Experience Machine and its applicability to why the Objective Value Cascade exists and trumps pure value subjectivism.
An area I have trouble understanding in this article is the mechanism by which the being favors a universe containing items (such as other beings) that create positive feelings for the being vs. a universe in which the being simply experiences the positive feelings (such as through outside contrivance a la Nozick’s experience machine). If the most desirable or satisfying state is the one that produces the best net feelings, what incentive would a sufficiently powerful being have for withholding these feelings from itself?
The objective analyzer here doesn’t have feelings—except those needed to decide between comparatively better outcomes, which includes decisions about what feelings to choose to have in future. For example, it simply must desire to “know whether anything is or is not worth desiring” or else it will never care and never desire anything; whereas if we want to know what is objectively desirable, we have to already desire to know that. So that is always a required basic desire to even answer a question like yours.
This is explained in the article. You seem not to understand this. But you need to grasp this first.
Once you recognize that, what the objective analyzer is doing is comparing its alternative future selves, and having them compare themselves to each other, and asking them which to choose to be and why. So, there is an alternative future in which the objective analyzer is, for example, happy sitting alone in a tube doing nothing (call this F1); and a future in which the objective analyzer is learning about the universe and creating things in collaboration with other people (call this F2).
The objective analyzer then asks F2 whether they would prefer to be F1. They will answer no. It will then ask F1 whether they would prefer to be F2—but to answer that, they have to be made able to understand the condition of F2 and compare it to their own, and once that happens, F1 will recognize that F2 has so much more stuff to enjoy and that its life is much richer, and thus F2 has access to joys F1 never can have—which does not mean feelings (like simply feeling “joy”) but the actual things generating them (like knowledge, camaraderie, love). F2 can enjoy those things. F1 cannot. F2 therefore has a richer life than F1. Though F1 has no inherent motivation to prefer that, it can rationally recognize that if it had those things, it would prefer them. F2 and F1 thus both agree that, from a rationally objective perspective, F2 is better off than F1. The analyzer thus knows it should choose to have the feelings-generation of F2 and not the feelings-generation of F1.
The objective analyzer has thus confirmed F2 is in an objectively better condition than F1—not subjectively better, objectively better. And this is not because of “feelings” (which is subjective). It is because of the complex reality producing those feelings (which is objective). F2 can enjoy loving a person. F1 cannot. This is not about “feeling” love; it is about being in an actual relationship with an actual person that causes love. It is about the actual physical differences in those two realities, not just what they “feel” about it. Rationality requires connecting objective reality to feelings and not merely pursuing feelings in isolation from objective reality. F2 is objectively better off than F1 not because they differ in feelings, but because they differ in the objective realities that those feelings are relating to.
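To picture that comparison schematically, here is a toy sketch (the listed goods and the comparison rule are placeholders for the objective realities just described, not a model of anything more):

# Toy sketch of the F1/F2 comparison (goods are placeholders; illustrative only).

def accessible_goods(future_self):
    """What each future self can actually access, not merely feel."""
    if future_self == "F2":
        return {"knowledge", "camaraderie", "love"}  # real relationships, real discovery
    return set()  # F1: feelings in a tube, with no objective goods behind them

def rationally_informed_verdict():
    """Both futures, once made rationally informed, compare what each can
    actually access and report which is objectively better off."""
    f1, f2 = accessible_goods("F1"), accessible_goods("F2")
    # F2 has access to everything F1 has and more, so both agree F2 is better off.
    return "F2" if f2 > f1 else "no difference"

print(rationally_informed_verdict())  # -> F2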
This is extensively explained in the article. So if you still do not understand this, you did not understand the article and need to keep reading it over and over until you do.
If the being’s motivation to answer the question of objective value is to remove the negative feelings produced by not knowing the answer, though, what motivation would they have not to modify their feelings directly, given the ability? i.e. why would it need to posit a notion of objective value to begin with, if not for the sake of maximizing feelings?
This similarly applies to the person in F1 vs F2—to value F2’s state more, F1 needs not just to know about both states but also to have a neurology that reduces their feelings below maximum if they believe that they lack F2’s relationship to reality. If both F1 and F2 are at maximum feelings already, and only negative feelings would motivate a change in state, why would it be in F1’s interests to allow for those negative feelings? The feelings could be imposed from without, but then F1 would seem to be choosing under coercion or artificial restriction.
“This is not about ‘feeling’ love; it is about being in an actual relationship with an actual person that causes love. It is about the actual physical differences in those two realities, not just what they ‘feel’ about it. Rationality requires connecting objective reality to feelings and not merely pursuing feelings in isolation from objective reality.”
To want to connect objective reality to feelings, though, a being would seem to need a motivating negative feeling as discussed above—otherwise, why would the being find these physical differences between realities to be worthwhile?
OMG. Please read the damned article. “Why would it need to posit a notion of objective value to begin with” is literally what the entire first section of the article is answering. That you do not know this tells me you are not reading the article. You are lying to me. And that has to stop. Do the work. Stop lying.
Likewise, you seem to have no idea what the article says in its second half either. For example, the objective decider has no feelings except the desire to know what is true. So it is not “removing” feelings. Why you think that is inexplicable to me—unless you lied and didn’t read the article.
Likewise, the affected future versions of it are only modified to be rationally informed agents—their feelings are not being “removed,” their knowledge and ability to reason are being added, because you cannot reach an objectively true conclusion without knowledge or rational thought, as explained in the article you didn’t read.
And the whole article is about answering the question “why would the being find these physical differences between realities to be worthwhile.” That’s what the word “value” is doing in the title, and why the article explains the objectively rational selection of what to value. That you do not know this again proves you lied to me and did not read the article.
So this is the last straw. Read the article or GTFO.
If you fail to do this, if you lie to me again, and show me once again that you did not read the article, I am banning you forever from my blog and none of your comments will ever be published again.
To help clarify, my understanding is that prior to anything else the being feels negatively about not knowing the answer to the objective-value question; they value the answer to the question above anything else they could know or do, because none of those things will remove those negative feelings. Even if they don’t have other feelings, then, the negative ones they do have define the dissatisfaction state and so drive their every action.
In that case, though, a sufficiently-powerful being still seems to have two means of removing the negative feelings: learning the answer to the objective-value question, or directly removing the feelings—changing its neurology so that it doesn’t feel them. It’s understandable why a being with imperfect control or knowledge wouldn’t take the latter option, but not doing so would still serve the broader purpose of becoming knowledgeable and powerful enough to escape the negative feelings.
In the case of a being that does need to gain knowledge, my understanding is that the querying they perform is done to determine which neurology is preferred across all hypothetical fully-informed selves. e.g. A being who feels positively about living in a blue house but is forced to live in a white one would prefer to be in a state in which they live in their preferred house, or feel positively about living in a blue house, or so on.
For each of these selves, though, any motivation for a change of state would also seem to stem from negative feelings, which themselves would also seem to be able to be removed directly given sufficient control. This removing or changing of feelings would be independent from the gaining or losing of rationality, but not in itself irrational—the person who wants a blue house for instance doesn’t want it by simple virtue of knowing that such a house exists/could exist, but because of a neurology that continually creates negative feelings in response to not having one. Indeed people often uphold the idea of having fewer rather than more possessions, or lower rather than higher social status, or easily being able to let go of impractical desires (the person for example conceivably could feel positively about having a blue house but not negatively about losing it).
The querying being inevitably would come across at least one self who accordingly has an extremely stripped-down life in physical terms, and understands this (is fully informed), but has profoundly or maximally positive feelings nonetheless and may even be positively horrified at the thought of entering any other state (the way other beings may regard entering the stripped-down state). To be persuaded away from this state, it isn’t clear that giving the being knowledge would be enough or even possible; rather, they’d have to have negative feelings introduced that could only be removed by the change of state (or through self-editing)—that’s what I mean to refer to when I talk about these selves having impaired or restricted feelings. If a hypothetical self says that they feel as good as possible with very little, doesn’t like the thought of having more, and moreover in fact would be less happy if they did have more, by what means would the querying being judge them as in the wrong?
You still aren’t responding to the article.
The analyzer starts by figuring out what it should value—which means it is looking for true propositions, not feelings. Feelings can factor in, but are not what it is looking for. It is looking for what is objectively true. Feelings can be chosen (manufactured or arranged) just so as to promote or facilitate the pursuit of what objectively has value (and not undermine or steer one away from it). But they are not the thing that has value. That mere feelings have no objective value (whereas knowledge and sentient relationships do) will be one of its first conclusions. It is not deciding what should we feel but what should we feel good about.
I’ll give you one more chance to demonstrate you understand this. Then if you fail you are banned.
If I follow right, then for instance the statement “…that we enjoy the experience of attractive things is objectively preferable to all possible worlds in which that does not happen” doesn’t mean that what’s valuable is the enjoyment per se, but rather the attractive things themselves by virtue of their being enjoyable to rational, fully-informed agents. i.e. the agents could be programmed to enjoy other things, but are best off with this enjoyment configuration because (in conjunction with others) it’s enjoyable enough that they wouldn’t prefer any other. A person in a void with just a feeling-producing neurology has no external things to value, and so if they shared this value system and were fully informed they would theoretically prefer to have access to those things—not just the attendant feelings, or an illusion, but the things themselves since those are the objects of value.
For the objects to have this value, though, they need to be able to increase the feelings of the person without them—even if those feelings aren’t the goal. In the case of the person in the void who already has maximum positive feelings, would you say their feelings are reduced upon learning that there are things of value they don’t have? Or is it that their feelings simply could never be at maximum without these things?
That is not what my article says.
That’s three strikes. You’re out.
“That is not what my article says.
That’s three strikes. You’re out.”
As someone who really enjoyed the article and the points it made, I am pretty disappointed with this response. Calvin directly quoted your previous comment and explained his thoughts. He was staying on the point. He responded to your last comment. I would appreciate a more detailed reply from you!
He kept ignoring everything I said and kept repeating the same question I already answered, over and over again.
There is no point in continuing a conversation like that. And he was warned, multiple times.
If you are not a sock puppet of him, and still have a question, something I didn’t already answer several times, in the article above and in comments here, then feel free to ask it. But don’t ask the same question he kept asking. Because I already answered it. The article answers it. And I expanded on that abundantly in comments here, so there is no excuse left not to understand my answer by now.
I don’t understand why in such a world the computer doesn’t make itself a harem of slaves. It’s beautiful. This is wonderful.
Why would it ever want that?
You have to think this through. That computer starts with no desires, except to choose which are the best desires to have.
So it would have no more interest in slaves (much less a “harem”) than you have in stabbing yourself in the eye to earn a diet gum-ball.
You have to work out how a computer with no desires would choose that desire.
With all due respect, your logic is very strange. The computer will choose a harem because it is a good idea. It is nice when everyone serves you, and the whole world was created for you. You control everything. There is only you as God, and no God imposes morality on you. For now, for me, the best idea is to use people for myself, to subjugate the world to myself, and to have slaves. I do not understand why humanism is a better idea. For me, humanism contradicts all my values. I easily circumvent the law. I do a lot that is against the law, and I live happily. There must be a well-founded morality explaining why humanism is good, why valuing people is good. You do not have it.
“It is nice when everyone serves you.” Is it? If you have no desires, you have no desire even for “niceness,” much less for servants. So why would you care about either? You have to first have a desire for those things. Otherwise they will be as uninteresting to you as a teapot orbiting Jupiter.
By this point I think you are simply ignoring the article. You appear to have no idea what its argument is. And you were given a chance to reframe your comment in light of the article’s actual argument and you failed to even try. This leads me to suspect you are being disingenuous and are wasting everyone’s time here. Including your own.
That you confess to being a psychopath only confirms my suspicion. That’s a mental disorder. And it entails a proclivity to lie about things like you just did.
That you boast of being happy suggests to me you are not. If you were, you’d have no agitation warranting your even bothering to comment here. Yet you are. So something is bothering you. An anxiety you are trying to dispel with rhetoric and chest thumping. I recommend you consult a therapist and have them review your comments here and mine and run a full assessment of your psychological health.
Your assumptions are based on the fact that a computer cannot calculate everything. But if it could, it would have complete power over people and unlimited pleasures. Why give it power or share it with anyone? It is much more pleasant to use the whole world for your own benefit. To become like Yahweh. Your words are absolutely unconvincing. And these are words, because they are not arguments. You need to argue your liberal Western model, which you love so much, from the point of view of truth. And from the point of view of truth, no one owes anyone anything, and it is better to make the whole world serve only yourself.
Nowhere in my article do I state any such assumption. This comment is bizarre and suggests you have no idea what my article even says.
Why would it want any of that? Again, you are ignoring this article. This article is about what desires a computer with no desires (except a desire to know the truth) would choose to have. You are not engaging with this article at all.
For me the idea of Yahweh is more true. Because any being or computer that would have all the information and absolute power would make everyone slaves. And feel great. Why get the joy of knowledge together? He already knows everything. Why help people if it is better to use them? Why free people who do not do as you want? You will not be happy. You can only be happy in a world where you are God and others are slaves. It is beautiful. It is wonderful. I like it. For me it is a standard in communication with people.
This comment is misplaced. You seem intent on ignoring the articles you are commenting on.
If your question is instead on why be moral (which is not what this article is about), you need to start where this article directs you on that point:
The Real Basis of a Moral World
And on why gods can’t help you with this, see:
The Moral Bankruptcy of Divine Command Theory
All I wanted was to understand your article. But I don’t understand it. You say that a computer wouldn’t have a desire, but your computer would have a desire to live in a world where there are more pleasures. But there are more pleasures where people are your slaves. They do whatever you want. In your world, I am deprived of this kind of pleasure. Why do I need it? You have the wrong conclusions about me experiencing excitement. I don’t experience it, I find it funny that 1) You have no arguments 2) You are a Christian humanist, nevertheless you argue with Christianity sincerely believing that you are an atheist 3) If you don’t like the arguments, you ban the person. This is the weakness of your position. It is based on your faith.
This tells me you read my comment, but not the article.
Read the article. It answers your question.
What you are telling me about the diagnosis of psychopathy again shows your incompetence. 1) You are not a psychiatrist. 2) I have been to them and they have declared me a healthy person. Richard, do not try to be an expert where you are not one. Besides, you are not even a philosopher, but simply a religious believer in European humanism. You have no arguments as to why a computer, that is, Richard, should choose humanism and give people freedom. Thus, even the compilers of the Old Testament are smarter than you, where Yahweh has slaves. You have no arguments, Richard. As always, you will ban me after this, but it does not matter. For me, you are zero. An empty space. A person who publishes his faith, but not arguments. I have read your articles on Confucius and Tao. Likewise, there are no arguments there as to why one should choose this system of values. Personally, I feel happy having a harem of slaves. And yes, I know whether I am happy or not better than you do, a person who constantly makes erroneous conclusions.
I am a well-read philosopher who has extensively studied the scientific and diagnostic work on psychopathy. It’s fundamental to my work in moral philosophy.
And no therapist says “healthy person.” So I am skeptical that you are telling the truth here. Which is typical of sociopaths.
But I am happy to discuss this with your therapist. Give them my name and have them reach out to me to discuss your disturbing behavior here.
After our last conversation I took some time off and realized that I was unnecessarily confrontational. I am just trying to understand your position. Let’s say we accept all your premises: that a good life is better than nonexistence and that enough happiness can outweigh the worst suffering. This leads to a disturbing conclusion. Finland is the happiest country on earth, while Afghanistan is one of the least happy, with a huge number of people living in extreme hunger. If earth had more space for us all, how many Finlands would you create if that means creating one additional Afghanistan? It feels like no matter how many Finlands you create, there is something morally repugnant about creating an additional Afghanistan with all the suffering. What do you think? I genuinely seek a discussion here.
The solution to bad government is better government, not no government. Analogously, the solution to a failing state is becoming a successful one. The problems of Afghanistan are the same as faced by literally every now-successful region (from Europe to Asia to the Americas). So we know what the solution is. We just have to invest in it. Meanwhile, that we are leaving Afghanistan to take the hundreds of years we did to crawl out does not somehow magically mean no one should be allowed to enjoy living in Finland.
So maybe you have a case to make to Afghans that they should stop having kids. But that argument doesn’t hold for Finns. If a reasonable number of new people can be gifted with life in Finland, they should be. Goods ought to be shared (just not shared so thin as to turn them ill). Hence the concluding paragraph of the article you are responding to:
The solution to the problems of Afghanistan is not the elimination of Finland.
You’re off the rails of even common sense here.
Or take an even more extreme example: if there were 100,000 Finlands on earth, would it be justified to create one Afghanistan full of suffering and abuse just to marginally increase the happiness of already happy Finlanders (total happiness will be higher just because of how many Finlanders there are)? Surely not.
You are now confusing “regulating population size” with “antinatalism.” That’s a non sequitur. That we should reduce and not increase population growth is true. That is not antinatalism.