I will be answering in my next article the new questions posed in the 2020 iteration of the PhilPapers survey (a new development I just wrote about). But one of those new questions requires a separate article on its own: the one worded, “Experience machine (would you enter?): yes or no?” This refers to a poorly articulated thought experiment contrived and badly run by Robert Nozick. Philosophers have a tendency to hose their own thought experiments. This is no exception. So it is difficult to really use the data on this, because I don’t know whether the PhilPapers respondents are doing the same thing Nozick did, misconceiving and thus incorrectly running the experiment, and so answering differently than they would have if they ran the experiment correctly. So the results here are largely useless, not least because it is not explained why they answered as they did, which is the one thing Nozick was trying to discern.
The basic gist of the experience machine is to ask, if you could go live in a simverse where you could experience all the same pleasures as in the realverse, would you? That isn’t the way Nozick words it, but it distills what he is actually describing; and rewording it thus I believe would change people’s answers, yet without changing what Nozick actually meant (or had to mean, given the argument he tries to make from it), which illustrates how he is inadvertently manipulating results with semantics rather than more clearly describing the scenario he wants to explore to get an accurate and useful answer. Crucial to his experiment is that the “experience machine” can reproduce all pleasures of the real world (so that there is no pleasure-access reason to not plug into it). But this crashes into a tautology when pleasures are only caused by believing certain things are real.
Nozick would certainly try to salvage his intent by specifying, let’s say, that you would be alone in your simverse, and thus all pleasures deriving from interacting with other people there would be fake. But this would undermine his argument: if you know it will be fake (as the experiment requires that you do, certainly at the time of making the choice, as in all Magic Pill thought experiments, cf. my old discussion under “The Magic Pill Challenge”), you will be incapable of deriving the same pleasure from it, yet that is a required condition of the experiment. Hence the machine can’t produce “the same quality” of pleasures, and thus it won’t meet the one essential condition his experiment, and entire argument, requires. Because apart from the question of the reality of human interaction, we already know from VR machines today that at the level of sophistication Nozick’s machine is supposed to obtain, there is no pertinent difference between, for example, climbing a real or a virtual mountain. In both cases you are accomplishing a task by interacting with a presented environment to within your developed abilities.
Really the difference is even less substantive than that. Because there actually literally is no pertinent difference between, for example, “fake simverse sugar” and “realverse sugar,” because this is not a thought experiment: we actually are in that relevant simverse already. Human experience is a simulation constructed by the brain. “Sweetness” does not exist outside our minds; sugar molecules have no such property. It is entirely a fabricated experiential response. Likewise every other aspect of pleasure. And it’s actually impossible for it to be any other way. Experiential pleasure does not and cannot exist in the physical world but as the computed product of information processing: i.e. of an experience machine, in our case the brain. So in actual fact, we are already in Nozick’s “experience machine.”
This would mean the actual options for Nozick’s thought experiment really are: would you prefer to live outside your brain (which is physically and probably logically impossible: experience does not exist anywhere outside some “experience machine” or other) or inside it? No rationally informed person would answer anything other than “inside it, obviously.” Because the alternative is literally choosing to be dead—to unplug from all experiences whatever. Nozick did not realize (nor evidently have most philosophers answering this question realized) that he is simply describing our current situation: we live consciously only because we live inside an experience machine, of just exactly the sort he describes, and we could not live consciously any other way. Hence there is no pertinent difference between, for example, Los Angeles out here, and Los Angeles inside Grand Theft Auto: both have fixed, explorable parameters, from geography to resources to sights and venues; both can be interacted with and changed; and so on. So the only pertinent difference between a simverse and a realverse is merely one of real estate. Is it better there? That’s the only question that matters.
It is clear that Nozick intended his “experience machine” to be a deceptive device, whereby you aren’t even making decisions but being tricked into thinking you are, and people don’t exist there, you only think they do. And so on. But he doesn’t clearly frame the experiment in those terms—and couldn’t, because it would expose a fatal flaw in it, insofar as it’s supposed to prove something he wants about why people do things. So this is bad philosophy. Running the experiment correctly (the machine can reproduce any real-world pleasure), my answer for PhilPapers here would have been “yes”: a genuine simverse would be better real estate, so I’d certainly immigrate, along with 13% of other philosophers apparently, possibly the few who actually noticed what I did about all this; the other 76% are being snowed by Nozick’s faulty semantics, and really answering a different question than we are: whether they’d consent to be deceived into pleasurable cognitive states—as opposed to merely simulated ones, which is not the same thing. Yet Nozick’s description of the experiment never mentions being deceived; it hinges entirely on knowing what’s really happening and choosing it anyway. Assuming deception is happening (and thus being chosen) is to run the experiment wrong—or to run a different experiment than described.
The whole experiment should thus be trashed as framed and the actual questions Nozick wanted to answer should have been asked instead: do we prefer mere pleasure as an experience disconnected from what produces it, or does the pleasure we derive from something depend on our beliefs about it being factually true? This is a more interesting question, and more easily answered. Though it is properly a scientific question under the purview of psychology, and not really a question philosophers should claim to be able to answer on their own, there’s enough science to back an answer here: we do indeed derive pleasures from our cognition of circumstances that cannot be obtained without it.
Nozick wants to separate the mere experience of pleasure (like an arbitrary orgasm machine) from the cognitive side of understanding what is producing the pleasure (like sex with an actual person, with whom you are sharing an understanding of their mental states, desires, and pleasure-experiences), so as to argue that, because these are not one-to-one identical, our motivation to do things is not simply pleasure, and therefore “utilitarianism is false.” But this is a string of non-sequiturs. That the cognitive side of what causes a pleasure matters, does not replace pleasure itself as the goal; it merely constrains what things will cause us pleasure (or pleasures of certain kinds and degrees). So the first step in his reasoning fails. You can’t separate pleasure from cognitions about its cause; cognitions about its cause are a source of pleasure. And no form of utilitarianism disregards this fact. So the second step in his reasoning also fails.
Basically, as folk would say, “You can’t get there from here.”
To be clear at this point, I also find all this talk about “pleasures” bad form anyway. What we really prioritize are satisfaction states; a satisfaction state is pleasurable in and of itself, but all pursuit of individual pleasures is derivative of this, not fundamental. We pursue pleasures in order to obtain satisfaction states (and there can of course be greater and lesser satisfaction states, hence states that are “more satisfying” than others). Thus “desire utilitarianism” is closer to a correct evaluation of human axiology than traditional utilitarianism, meaning Nozick isn’t even on the right path to any pertinent conclusions about anything here, even from the start. But we can set this aside here, because the same conclusions follow (or don’t) even if we replaced his “pleasures” with our “satisfaction states,” so for convenience I will continue in his idiom.
As with all bad philosophy, Nozick constructed his experiment to rationalize some conclusions he already started with and wanted to be true (in effect, that “pleasure is not our sole reason for doing things, therefore something else motivates us”), which are represented in his given reasons for “not” wanting to be in an experience machine:
- We supposedly want things to be real, not just pleasurable (e.g. we want to “actually” win at a game of cards, not merely feel or falsely remember that we did);
- We supposedly don’t want to just be floating in a tank or something (e.g. we want our physical bodies at the card table, or to be actually heroic; we don’t want to virtually be there, or to fake it);
- Simverses are more limited than realverses (e.g. there might be things in the realverse we can discover or do that weren’t thought of so as to be made possible in the simverse).
But (1) does not contradict the thesis that pleasure is what we seek, as it only ramifies what we will find pleasurable; (2) is demonstrably false (people enjoy “sitting at virtual tables” so much that an entire multi-billion-dollar industry thrives on it: we call them video games, in which we can genuinely “be” honest, clever, heroic, anything we like); and (3) is contradicted by his own thought experiment: he himself stated as a condition that there can be no pleasures accessible in the realverse not accessible in the simverse; in fact his entire experiment depends on that condition. So (3) cannot be a reason not to plug into the machine he described, as it by definition can never be an outcome of doing so. In my experience, Nozick is a rather bad philosopher (this isn’t the only example). Indeed, in case (3) he has yet again confused (a) a ramification of what we find pleasurable with (b) a reason other than pleasure to pursue something. So he simply isn’t really getting the conclusions he wants; yet, ironically, he is deceiving himself into thinking he has. He’s stuck in his own experience machine.
Of course Nozick may have wanted to specify instead an experiment where, really, the main concern was with whether merely the pleasure alone mattered (the experience of it), such as we derive from human interactions (the only thing that would be meaningfully “absent” in his scenario, as our enjoyment of virtual worlds in video games now proves), or if it mattered that the interactions be real. For example, as with any Magic Pill thought experiment, the notion is whether you would choose to live a lie if it could be guaranteed you’d never know it (though obviously you must know it at the time you choose this state, like Cypher in The Matrix when he asks Agent Smith for this very thing). That does not actually address Nozick’s interest, because if the “comparable pleasure” requires you to falsely believe you are interacting with real people, then his claim that our goal is not pleasure is not supported; all he has shown is that we do set pleasure as our goal, and can merely be tricked into it.
That is uninformative. Think of a romantic relationship, which brings you great pleasure and which you pursue for that very reason, but then you discover it was all a lie, and they were conning you. It does not follow that, therefore, you were not pursuing that romance for pleasure. That conclusion is a non sequitur. So, too, with “Nozick’s” experience machine. It simply can’t get the results he wants. And he fails to detect this, because he can’t even run his own experiment correctly: he forgets that his own description of the experiment rules out his third reason for refusing to plug into it; he does not discover from self-reflection that simulated experiences entail constructing the same explorable environments and the same opportunities for realizing the person you want to be as the real world provides, which rules out his second reason for refusing to plug into it; and he does not realize that cognition of a state is itself a source of pleasure, or that the two are not properly separable, which eliminates every other reason for not wanting to plug into it. One does not pursue the cognition, if the pleasure does not result; and fooling someone into the cognition so as to produce the corresponding pleasure would be rejected as unpleasurable by anyone aware that is happening. Deceiving someone into feeling a pleasure does not demonstrate they pursue anything for reasons other than pleasure; to the contrary, it only demonstrates more assuredly that they pursue things for no other reason.
This holds even against Nozick’s shallow declaration that the momentary displeasure someone would feel upon choosing a life of being deceived for themselves would be outweighed (in utilitarian fashion) by the ensuing life full of fake pleasures. This forgets self-reflection is a thing. Think it through: you could be this person right now. So it is not the case that displeasure at choosing such a condition would be limited to when the choice was made. The moment you lived at all self-reflectively you would continue to be horrified by the prospect that everyone you know is a fake automaton and your entire life is a lie. As Gary Drescher points out in Good and Real, the only way to avoid being perpetually stuck in that dissatisfaction state (after already accounting for the scenario’s inherent improbability) is to assure yourself that you would never have chosen such a thing; which requires that you be the sort of person who wouldn’t. Ergo, you’d never choose such a condition. Hence, your answer to this scenario is, “No.”
The heart, I think, of Nozick’s intellectual failure here is to confuse pleasure with its causes. He wants to think that the causes matter more than the effect. But that isn’t the case. The causes only matter because of the effect; which is precisely the conclusion he is trying to refute. Yet his own experiment, properly conducted, only reinforces that conclusion; it doesn’t undermine it, as he mistakenly believes. There is really only one useful takeaway from all this, which gets at least somewhere near a point Nozick wants to make: that merely feeling pleasure, divorced from all other cognitive content, is not a sustainable human goal. We would, ultimately, find that dissatisfying, and thus it would cut us off from much more enjoyable satisfaction states. I discussed something like this recently in The Objective Value Cascade: if we were rationally informed of all accessible pleasure-states, and in one case all we would have is the contextless feeling of pleasure, while in the other case we would have context-dependent pleasures, we would work out at once that the latter is the preferable world (our future self there would win any argument with our future self in the other as to which future self we now would want then to be). I think this is sort of what Nozick wants to get as the answer. But he mistakenly leaps from that to “pleasure is not our only reason for doing things,” which is a non sequitur. He has confused “we will prefer more to less pleasurable states” with “we do not pursue pleasure-states.”
The error in his experiment thus turns, really, on the role of deception. Nozick can’t even superficially get to his conclusion without it. As I just noted, apart from deception, we are already in his experience machine: all pleasure is a virtual invention of an experience machine (our brain, presently). So that can’t get us to his conclusion. His conclusion thus depends on the assumption that something remains intolerably fake, and there really is only one thing that could be (as I just noted): fake human interaction, tricking us into thinking we are experiencing interactions with real people, when we aren’t. He mentions other things (like achievements, e.g. my example of “really” winning at poker vs. being tricked into thinking you have), but even after we set aside all the counter-examples disproving this (e.g. people actually do enjoy and thus pursue playing poker virtually, even against machines), the remaining cases still all boil down to the same analysis: once you become aware that it’s fake, the pleasure is negated, and once given the choice, you would not choose the fake option; because the real option is more pleasurable. And you know this, so you know you can’t have chosen it in the past, and therefore you won’t choose it in future. That Nozick can conceive of tricking people into not knowing this, does not get him the conclusion that pleasure is not why we do things. All it does is reveal that we can produce pleasure by deception; but it still remains the reason anyone is doing anything.
The convoluted way Nozick is trying to get around this inescapable revelation is by contriving a Magic Pill scenario, in effect asking whether you would choose now to be deceived in the future, e.g. tricked into thinking someone genuinely loves you rather than is conning you, merely to achieve the corresponding pleasure-states of believing someone genuinely loves you. No rationally informed person would choose to do that, and for the quite simple reason that it displeases them to think of themselves now being in that state in the future. And this is not just experienced upon choosing, as Nozick incorrectly asserts; as I just explained, you will be existentially confronting this possibility, and its undesirability, every day of your life. Thus pleasure is still defining the choice.
Bad philosophy comes in many forms. Here, we see it characterized by: (1) reliance on fallacious and self-contradictory reasoning (rather than carefully burn-testing your argument for such, and thus detecting and purging any such components); (2) not carrying out a thought experiment (especially one’s own) as actually described, or not describing the experiment you actually want to run; and (3) starting with a pre-determined conclusion, and contriving an elaborate argument by which to rationalize it, rather than doing what we should always do: trying, genuinely and sincerely and competently, to prove your assumptions false, and only having confidence in those assumptions when that fails (see Advice on Probabilistic Reasoning).
For instance, here, Nozick wants to think that because cognitive content matters to whether something is pleasurable (which is true), therefore something other than pleasure is what we actually pursue (which does not follow). But this can be tested, by simply removing that single variable from the control case: if you could choose between an unknowingly-fake love affair that gave you pleasure and a genuine love affair that didn’t, would you choose the latter? The rationally informed answer is always going to be no. Someone might answer yes, by thinking “at least in the genuine case I’ll have some genuine pleasures,” but then they’d be doing the experiment wrong, because the stated condition rules out that outcome. You are supposed to be comparing two conditions whereby the second contains no produced pleasures, not “some.” Bad philosophy. Good philosophy would apprehend this and thus correctly run the experiment. And its result would disprove the null hypothesis that “we don’t pursue things for pleasure.” This would not be the result Nozick wants. But truth does not care what we want.
More to the point of getting at least a usable conclusion on this subject, if someone were posed the binary options “an unknowingly-fake love affair that gave you pleasure or a genuine love affair that didn’t,” most people would apprehend an excluded middle here: why can’t we have a third option, a genuine love affair that pleases us? (Or any other genuine state that does.) Obviously that’s the thing someone would choose over both other options, if it were available. And there is no other option left to consider in the possibility-space (e.g. “a genuine love affair that made you miserable” would still satisfy condition two, “a genuine love affair that didn’t give you pleasure,” as would “a genuine love affair that brought you neither pleasure nor misery”). But this still disproves the null: the reason someone chooses “a genuine love affair that pleases us” over “an unknowingly-fake love affair that pleases us” is that our cognition of the difference brings us pleasure. It does so not only when we choose it, but also every moment we continue to enjoy the product of that choice. Because the only reason it brings us pleasure is our knowledge of its genuineness.
As I wrote once with regard to a different Magic Pill thought experiment:
Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they’ve done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever?
The same follows for Nozick’s machine. If what we are really talking about is not a machine that merely produces pleasure without context or creates actual contexts similar to those in the real world (like video games aim to do), but a machine that deceives us into experiencing a pleasure we would not experience if we knew the truth (a machine that convincingly lies to us about the contexts we are in), the question then is no longer whether we pursue objects for pleasure, but whether we would be pleased or not to be deceived into pleasure-experiences (now or ever). The answer to that question is: no, this would not please us; hence we would not choose it. This is why, I suspect, 76% of philosophers did indeed answer “No” to the question. But that doesn’t get us to Nozick’s conclusion that pleasure is not what we pursue objects for. And insofar as we see it that way (and thus, run the experiment differently than it was described), I would agree with them and likewise have answered “No.” Thus, how one answers this question depends entirely on whether you correctly run the experiment as described, or not. Which you cannot tell if anyone has done merely from what their answer is. And this is what makes this thought experiment bad philosophy.
I’ll reiterate in the end that we can throw one bone to Nozick, which is that his intuition was correct that we do not find contextless pleasures to be comparable to contexted ones. People generally don’t want to just stimulate the pleasure centers of their brain; they want something more, because they can work out that it is far more satisfying, for example, to interact with real people than fake ones, and with explorable worlds than scripted ones. Which simply translates into Nozick’s vocabulary as “they find that more pleasurable.” Which means a machine that, as stipulated, can give them that pleasure, can’t be doing it by deception. Whereas any machine that can’t do that, won’t be preferred to the real world by any rationally informed decision-maker—simply because it can’t give them the pleasures they want, not because they pursue aims for reasons other than the pleasures they can derive from them.
Good article, and I agree Nozick’s argument doesn’t work. But is there a different reasonable argument that you’re aware of that adequately supports the conclusion that pleasure is not our sole reason for doing things? Or is motivational hedonism our only true reason for doing anything?
I can think of one thought experiment where personal pleasure/pain avoidance might not be the only motivation: Imagine a Sophie’s Choice scenario in which a parent must choose one child to live, and one to die. However, in this case, child A brings the parent more personal pleasure, but child B has a better chance of accomplishing great things (as a scientist, artist, etc.). Could you see a case where the parent would select child B to survive, even though it would bring that parent greater personal grief in the short and long run to lose child A? Or would every parent always choose to save child A for their own pleasure motivation?
I am not aware of any argument that concludes without fallacy from actual facts that satisfaction-states are not the sole reason anyone does anything. I am not sure such a conclusion is even logically possible. It is inherently self-refuting to suggest anyone would pursue a less satisfying state over a more satisfying one; even people who find satisfaction in dissatisfaction, are in that very fact pursuing the state most satisfying to them.
If we swap “satisfaction states” out for “pleasures” maybe you can get to a different conclusion, by trading on some difference between the two you have contrived in defining them, but that would be little more than a semantic outcome. You can make anything into anything else by simply redefining every word the way you need to get the result you want. But in the end you can’t change what things are by changing what you call them. This is why I added the caveat that I don’t think the word “pleasure” is well-chosen here. It’s too vague and variable in meaning to carry a coherent conclusion.
Case in point:
In your scenario, what possible reason would the imagined Sophie have to choose B other than that she deemed that outcome more pleasing (more satisfying) to her than the other? In other words, she would have to believe (falsely or not doesn’t matter for the point) that she will be more satisfied knowing B’s life outcome has occurred than she will suffer from the loss of A; if she didn’t, she would have no reason to prefer it (and with no reason to prefer it, no motive to ever choose it).
In short, you cannot say she would be well-motivated to choose the least satisfying outcome. To the contrary, from one choice to the other in the hypothetical all we are doing is changing what she deems more satisfying. So what she does is still in the end what she deems more satisfying; indeed, that appears in fact to be tautologically the case. To desire a thing more just is to believe it will satisfy you more. And choosing a thing just is the act of desiring it more.
There is a side problem with such scenarios however, which is that the choice itself can be self-rationalizing, i.e. the grief at losing A will actually be reduced by commitment to the outcome of B (e.g. by repeating the very consolation that motivated the choice in the first place). In other words, the choice itself mediates its own differential satisfaction.
This is why counterfactuals require more careful analysis than most people think. For instance, you have to remember only the differential matters. The negative is not “the grief” at losing A, because there would also be grief at losing B on the alternative choice; there is only an actual difference if the grief at losing A would be greater than the grief at losing B. But why would the grief at preventing the “great things” of B be “less” than the grief of losing some more direct enjoyments of A? Particularly as they could be replaced with other children or friends…or even a renewed relationship with B.
Remove one thing in a counter-factual, and something tends to move into its place. It is rarely the case that you remove a piece, and the causal spot it occupied stays empty. This comes down to the principle of opportunity cost; and in that respect there are more variables than just “A” and “B” in cases like this. You would lose the company of either, either way; so if all that differs is that one somehow will bring you more goods of some kind than the other, how is it that those goods can’t simply be made up in some other way? And indeed, why would someone not then fill that gap with other goods, quite deliberately as a consequence of the original choice? And the same is the case the other way around (if you choose A over B). So it isn’t simply “lose A or lose B.” The outcomes are much closer in merit after all transformations are considered.
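To put that differential point in symbols, here is a minimal sketch; the labels are my own hypothetical bookkeeping variables, not anything defined in the original scenario:

```latex
% Hypothetical labels: G_A, G_B = grief at losing A or B;
% V_A, V_B = satisfaction expected from the survivor's future;
% R_A, R_B = satisfaction recoverable from replacement goods
%            (the opportunity-cost offsets discussed above).
\[
  \text{choose to save } B
  \iff (V_B - G_A + R_B) > (V_A - G_B + R_A)
  \iff (V_B - V_A) + (R_B - R_A) > (G_A - G_B).
\]
```

On that accounting, if the griefs are comparable and the replacement goods roughly balance out on either side, the two outcomes converge in merit, which is just the opportunity-cost point made above; and whichever side of the inequality one lands on, the choice is still being made by the expected satisfaction differential.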
No matter what you try to do, all of these considerations seem always to end at the same foundation: navigating to the most satisfying choice available. In every case, one is always simply weighing different degrees of pleasure, and only choosing at random between them when indeed their degrees are actually equal (or are equal “so far as you know”). That “B will do great things and I care more about that than enjoying the company of A over B” is simply another description of what pleases someone. If it didn’t please them, then they wouldn’t care more about it, and so wouldn’t choose it.
Hence I don’t think there is any logically possible way to escape some fundamental hedonism as the only existing motivator. Every attempt to get around it just ends up inserting new sources of satisfaction; it never gets away from satisfaction itself being the only actual motivator.
I find Nozick to have the same kind of astonishingly dull (given his obvious intelligence and reach of thought) set of whoppers of conclusions as right-wing libertarian philosophers generally do.
In this case, the way I think about it is this. What if I found out, right now, that everyone around me was a p-zombie (assuming that p-zombies can happen, which I agree with you is logically flawed, but let’s say that they were very good automata this whole time)? That would suck. And if I found out the universe was not real? That would suck. But I could hypothetically find that out now. What would suck then would be the fact that I discovered something about my experiences. The pill doesn’t change that. Worse, I think a rational person would conclude (indeed, doing this is a good way of not being constantly so angry) that I felt the good times sincerely, and if I had no way of knowing I was being duped it was okay that I was, so while I should change my behavior now it’s okay that I enjoyed myself before. (And how I’d change my behavior is… unclear).
In other words, if someone was in this machine and found out that a certain thing they could have done in the real world they couldn’t do here, and they didn’t know that the real world would allow it, they would just have to assume that it wasn’t a possibility. I don’t know what’s possible to experience. I can’t lose out on things I don’t know about. Again, these conditions accrue to the real world, so they can’t really show anything.
I haven’t read that part of Nozick, but I’m guessing he didn’t really put the boots to his thought experiment by, say, imagining whether people might pick this machine if they knew that it had most of the obtainable pleasures in our world and a ton more. If many would, and I suspect the answer is “Yes” (especially if we can stipulate that leaving doesn’t leave loved ones behind, which is another pretty important part of the experience machine that I suspect he disguised from the calculation), then all he’s shown is that the utility calculation people do is complicated, not that pleasure (or more accurately satisfaction state) isn’t the intended outcome of the calculation.
There’s another rub. Let’s say you tell me that you can give me a pill where I can be a superhero in another world protecting people. My first thought isn’t “Are those people real?” That only dictates if I think about it as a duty or a game. And it will be more meaningful to me, though perhaps a lot less fun, if they are real people. (Which, again, shows another way this thought experiment is hosed.) My first thought is “Did you just poof those people into existence, you crazy person?” That is, does the machine take me to a world that exists (in which case going to protect things there that can suffer becomes morally obligatory), or does it make a new one? If it makes a new one, then most non-psychopaths will tell you “No” if there is the tiniest chance the sentient beings in there are remotely real (that is, even if they’re not as sapient as us they may experience pain, so we are again back to p-zombies) because you just blinked into existence life forms that are suffering. I certainly won’t have you make stuff suffer for me so I can then clean it up.
Good thought experiments, like the Trolley Problem, force you to answer a rarefied question where the rules are coherent even if unpleasant. In bad ones, people are actually debating the rules of the experiment. This is a bad one.
It’s another in a long line of examples of the failure to recognize OR reconcile subconscious (SC) urges with conscious thought or mind (CM). All social scientists fail to do this, as do most humans.
I would argue both that the social sciences, even behavioral economics (where economists have been for ideological reasons committed to rational choice perspectives for so long), have actually had many members who are deeply attentive to non-conscious aspects of behavior and cognition, and that that isn’t the problem here. Notice how my point to Nozick proceeded entirely from asking basic questions about the thought experiment, questions that actually reveal the problem with it. Those were all consciously held ideas. It is wholly possible that subconscious elements refute the experience machine in other ways (maybe we need to be subconsciously convinced that we are interacting in real environments because we are predisposed to recognize and be concerned about deception and fraud, which is a prerequisite for us enjoying anything, but that isn’t an indication that satisfaction isn’t our goal), but it also is refuted by conscious thought.
Do any scholarly works exist that discuss whether or not an existence of only pleasure (even contextualized pleasure) is, for lack of a better word, rational? Meaning, can we experience pleasure without having some absence of pleasure? Can pleasure-states exist without anti-pleasure states?
This may be off topic but it’s where my mind followed.
By asking whether a pleasure-only state is “rational” (and I understand you were searching for the right word there), you might be pursuing the wrong path. There is nothing contradictory (logically) about a pleasure-only state. However, there are certainly biological limitations along the same lines. And there are certainly psychological and pharmacological studies that address that idea (though probably not to the extreme you may be going for – for ethical/practical reasons).
Joe, I am not sure what your question is.
If you mean, can we experience pleasure and displeasure at the same time, I should think so. Just think of someone with a tooth-ache eating a delicious meal. One might then evaluate the net value as the differential between them: someone might then stop eating the meal as the pain overwhelms, i.e. exceeds, the pleasure, or continue the meal because the reverse is the case; e.g. think of a mild tooth-ache vs. a severe one.
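In symbols, a minimal sketch (the letters P and D are just my own shorthand for the pleasure of the meal and the displeasure of the tooth-ache):

```latex
% P = pleasure of the meal, D = displeasure of the tooth-ache (hypothetical shorthand).
\[
  \text{net value} = P - D; \qquad
  \text{continue the meal if } P - D > 0, \quad
  \text{stop if } P - D < 0.
\]
```

Nothing hangs on the notation; it just restates that only the differential between the two simultaneous states decides the behavior.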
But I don’t see the connection of this observation to my article.
Or if you mean the opposite, whether any pleasure experienced entails the absence of other or greater pleasures, then also I should think yes; and still also don’t see the connection.
Or if you mean to ask whether experiencing pleasure “is rational,” then that’s a category error (experiencing is experiencing; it is not a logical relation or inference).
Or if you mean to ask whether enjoying pleasure (as distinct from merely experiencing it) “is rational,” that would depend on what you mean by “rational.”
In the usual sense, any behavior “is rational” that comports with reality and conduces to the agent’s overall best interests (which includes moral interests), insofar as the behavior really is either what one ought do in such circumstances or is among any behaviors equal in such degree but otherwise interchangeable with each other. Which all gets into one’s analysis of imperative propositions; I cover the logic of these cases in my chapter on moral theory in The End of Christianity.
Or if you mean to ask whether it is even possible to experience contextless pleasures, I would say for the purposes of the distinction made in the article the answer is yes. Think of merely “riding” inside someone else’s mind as they climb a cliff vs. actually climbing the cliff yourself. Both can entail pleasures (the former even becomes an actual addictive drug in the films Brainstorm and Strange Days), but a person for whom doing it themselves is the source of the pleasure would thereby observe the latter is the greater pleasure, and one which the former deprives them of (think of the ending of Being John Malkovich as an illustrative example of how that, generalized, becomes hell).
The more usual example used in the literature is an orgasm machine or sex doll vs. actually having sex with a real person (hence the example I included). In the former case, the apparatus (the context) is irrelevant to the pleasure (one does not care in such a case what is causing the pleasure, only that the pleasure is being caused); whereas in the latter case, the apparatus (the context) is essential to the pleasure (it is the very thing from which one is deriving the pleasure).
That is why to get the latter without the actual context requires deception; whereas the former instance does not; one does not have to be “tricked” into “not knowing” it’s just an orgasm stimulator causing the result, and still one can enjoy the resulting pleasure—it just will usually be deemed a hollow and thus insufficient pleasure compared to the alternative. In short, real sex is more fun. And that is why people prefer it.
Contextualized Pleasure > Contextless Pleasure. I understand this.
What if our context is only other pleasure states?
To try and better formulate my question. Can someone just always be in a pleasure state? Would they really appreciate the pleasure without some context of non-pleasure at some other point in life? In an existence like this I could imagine the less pleasurable states maybe seeming like the non-pleasure states at some point and only the most euphoric of states being even recognizable as pleasure.
To use the orgasm machine example, it would seem at some point if you were just constantly orgasming you would start to experience it as torture. And even if these were many different contextualized pleasures just constantly with no counter balance it may be very similar?
I grew up being taught that someday in the future we would live in a Paradise Earth where God would provide everything for us and there wouldn’t be any sadness. Everyone would just be happy all the time. As I’ve taken some time to analyze this concept, something about it just seems empty. A life with no struggle and no sadness at all, although happy, doesn’t feel like it would be as fulfilling.
Again sorry if this is off topic or just doesn’t logically follow your post.
This idea is given a lot of credence, but I don’t think it’s particularly compelling, either psychologically or philosophically.
Philosophically, when I think about what happiness means, it’s not some kind of state that comes about from some kind of conscious measurement. I’m not holding it up against something. A true state of pleasant serenity seems to be one where my mind isn’t comparing anything at all.
Yes, of course I feel a great moment of relief when I’m done with some great trial or tribulation… usually. But sometimes that’s an annoyance that sticks around even when it’s done. We as humans can easily resent the fact that a day was ruined even after the event that ruined it has run its course.
Obviously our brain will do things like make us really appreciate a meal when we are incredibly hungry… but we get fairly close to that level of enjoyment from a supremely well-prepared meal that we eat when we are merely normally hungry, especially if we also have good company.
Psychologically, if it were true that true happiness required having experienced suffering, we would expect to see some kind of correlation between, say, traumatization and long-term happiness, or some kind of indication that happiness and suffering tend to both be directly correlated (which would mean that you would see, when graphing total life satisfaction, that the people with the lowest lows would also see the highest highs).
But… while satisfaction research can’t get that detailed or longitudinal, what we do have shows that that’s just not how anything works. Obviously, people with extreme trauma tend to live lives that are materially worse off than those who haven’t had it, even given how much better we are at treating trauma these days (for those people lucky enough to get that kind of care). Lots of sources of suffering seem to stick around and cause permanent reductions. So, at the most, we can clearly see that the kinds of suffering that may help us understand happiness better by contrast would need to be the ones that can’t cause permanent mental or physical harm or discomfort.
We also know that there is a good reason to suspect that the feeling that we didn’t understand how good we had it until a bad event occurred is a cognitive illusion. I highly recommend Gilbert’s Stumbling on Happiness. The happiness literature has surely advanced since he wrote it, but he still makes fairly clear that happiness is a complex thing. For one, I think his evidence shows pretty clearly what REBT and Buddhist thought have shown: our lack of happiness is rarely a result of bad events alone, or of too few good events, but of our own relation to our lives and our cognitive structure around our present state. We get unhappy when our mind shows us a future that we think will be unhappy, even when that isn’t a likely future: Our brains model the future with consistent cognitive biases. If we can learn to not need to be happy with an imagined future state and be satisfied in the present, that really doesn’t need to be related to any past or future suffering or happiness: that satisfaction can stand on its own. In any case, we also know that our brain will take any bad thing that happens to us of a great magnitude and, whenever possible, give us rationalizations about how “it was the best thing that happened to us”, precisely because that’s the kind of hack one would put into a brain to make sure that awful events don’t cause us to be unable to have future happiness.
One can also look at cognitive biases like duration neglect and see that the way that we process negative events isn’t some objective accounting of how we felt at each moment but a very inaccurate gestalt.
So… I am deeply skeptical of the idea that we need dissatisfaction states to model satisfaction states.
Joe, your comment definitely seems on point to me. And important to work through here.
What Fred already said in reply is all correct, IMO.
But this is what I’d add:
Taking your question literally, it is factually never the case that “our context is only other pleasure states.” If your mind contained only pleasure states, then you wouldn’t exist to appreciate them. Remember, “you” are much more than a pleasure state, and “you” are always an inalienable context of anything you experience. And though it might be possible for only you to exist (ontological solipsism), so that “you” are the only context for your experiences, we have ample evidence that’s not the case. You also exist in a vast real-world context, which you also can’t get away from. So there is never any such thing as “our context is only other pleasure states.” And this matters because most (and all the greatest) satisfaction states come from our understanding of the larger context, which plays a key part in your orgasm machine example I’ll get to shortly.
But taking your question figuratively, i.e. where all that other context exists and is being acknowledged but what you mean to ask is a situation where, ceteris paribus, we never experience non-pleasure states, there are two points to observe.
First, there would still be degrees of pleasure state to navigate, and the evidence is conclusive that that would not somehow automatically recalibrate low pleasure states as displeasure states. It can do (for example, someone who experiences good wine after liking bad wine may grow to dislike bad wine thereafter), but doesn’t have to (for example, a rich person who grows too accustomed to luxury to enjoy “slumming it” actually could learn to enjoy the latter again, if they are willing to change the way they actively experience it, changing essentially how they think about things).
Second, the science seems pretty much to establish we don’t need displeasure states to enjoy pleasure states. Fred covered that adequately already. But it’s quite evident that, given a choice between a life with no displeasure states and abundant pleasure states, and a life with the same quantity and degrees of pleasure states intermixed with displeasure states, ceteris paribus every rational agent would choose the former over the latter.
One might call up exceptions for the few cases where someone derives pleasure from certain displeasure states; but most displeasure states don’t produce that effect, and even if we rearranged the world so that they did, we’d just be tautologically back in a world with no actual displeasure states, and thus no dissatisfaction. For if a state is producing pleasure dependent on some discomfort—think of the pleasure of exhaustion after satisfying effort, or sexual masochists, and so on—it’s really just a pleasure state, full stop.
This is one of the reasons I prefer the vocabulary of satisfaction and dissatisfaction states; “pleasure” creates too many semantic conflations in practice, and what we are really concerned about is satisfaction anyway, not pleasure; the latter is just one means to the former, and given a choice, everyone would choose the former over the latter if they could have only one of them. So it’s more fundamental. And as I note in my article it better captures what Nozick wants to be talking about.
So, “To use the orgasm machine example, it would seem at some point if you were just constantly orgasming you would start to experience it as torture.” This I think is true (even as a matter of scientific fact). So the analysis has to be of why. And this gets us to the importance of context.
On the one hand, an actual person’s brain is programmed for evolutionary reasons to grow in displeasure from repeated singular pleasure states (for computational and survival reasons, that’s a lethal failure mode equivalent to the fate of Buridan’s Ass). In other words, constantly orgasming would progress into torture, because we are programmed to want to vary our activities and interests and not become fatally paralyzed in some minute obsession.
Thus, as a matter of contingent fact, humans just aren’t built to find satisfaction in such conditions; nor could anyone be who has decided that survival and accomplishment, including love and friendship and the acquisition and application of knowledge—knowing and creating—are essential to satisfaction, because for them that circumstance would again be a failure mode. Which is a context realization, i.e. knowledge of context causes that understanding and desire (see The Objective Value Cascade).
Which is why no one is likely to rewire their mind differently on that score, at least not informedly. But it’s in principle possible to. Hence…
On the other hand, what if we could rewrite the programming in our brain so that we never have this outcome, and orgasm machines, for example, become perpetually satisfying—in effect, choosing to be Buridan’s Ass?
This is where context-dependent satisfaction states become key to why no rationally informed person would do that. They would effectively be killing themselves, by locking themselves in an unevolving stasis in which no progress, accomplishment, knowledge, love, friendship, anything would ever be realized. It would just be repetitious pointless pleasure.
While it would be possible to erase one’s ability to know that (that’s what we’d have to do to put ourselves in a “Desire to Be Buridan’s Ass” mode), someone still possessed of the ability to know that would not choose that mode for that very reason: they’d know more and greater satisfaction states can be realized, and appreciated, outside that stasis. (This is more or less the point of my article on The Objective Value Cascade.)
BTW, this scenario is actually realized in fiction: an important scene in the film Brainstorm shows a man unwisely rigging himself (trapping himself) in an endless-orgasm machine, and somewhat depicts why he is better off outside of it (once rescued), once he is able to comprehend the different life states available to him. (Although the film depicts him benefiting from the experience, he still chooses not to resume it. And although the film ends with a pro-supernatural-afterlife message, this doesn’t detract from the philosophy worked out in the remainder of the film.)
Either way, the conclusion is not that we need displeasure states to enjoy pleasure states, but that we need variation of pleasure states (and intelligent control over their navigation) to unlock access to greater satisfaction states. In other words, the reason a perpetual orgasm machine is objectively dissatisfying is not because we need to experience dissatisfaction to appreciate satisfaction, but because it locks us out of far more satisfying states. And though it is technically possible to trap someone in such a state (by wiring them so that they never want to leave it), the fact that that is tantamount to killing them as a person (the outcome, of a pointless perpetual singular sensation absent anything else constituting life, barely differs from death) is reason enough not to trap oneself in such a state (and why evolution has already counter-wired us so that we don’t).
Which then gets us to your last question:
Note that if it were the case that some state felt empty, then you are tautologically describing it as dissatisfying. It therefore could not also be a satisfaction state at all, much less a maximal one. So one should analyze why it would be dissatisfying. Good philosophy requires first working up from particulars to generalizations and abstractions, not the other way around. So one should isolate specific particular things that would be dissatisfying, and experiment with making adjustments to remove them, to see what happens in your conceptual space.
For example, suppose you look for what would be dissatisfying, and among various things you identify “I can never lose at a game” (everything is rigged so you always win) as one of them. This would produce the context awareness that winning is then pointless; it signifies nothing and therefore there is no reason to derive any satisfaction from it. Here it is not that you need the availability of a dissatisfaction state, but that you need the context to support the satisfaction state (because it is the very thing that produces that satisfaction state).
So, to illustrate from my own life, I actually still enjoy playing games that I lose. Which is a form of attitudinal change. Of course I enjoy them even more when I win (though less when I always win, as that makes them too easy to be challenging, and unchallenging games are not satisfying to play), but here we are now talking about navigating a spectrum of satisfaction states. No net dissatisfaction state is needed to do this.
And I think this is the analysis that would come down for any other example. For example, I suspect sadness is actually in many (but not most) cases a satisfaction state. It can be satisfying to enjoy an occasional state of melancholy, provided it’s not too severe (actually and contextually—e.g. not severely felt and not caused by severe loss, which in a well regulated emotional system would be the same thing), and in that sense it might be dissatisfying to never experience such a thing.
At the same time, even when sadness is a dissatisfaction state, one might still need it to achieve other satisfaction states (e.g. we can learn from tragedy etc.), but this isn’t necessarily true, only contingently true (there are other ways to learn the same things; thus, that we can make lemonade out of lemons doesn’t mean we can’t just use a Star Trek replicator to make lemonade—if the replicator is available; hence the difference between necessary and contingent needs).
Even insofar as there might be some satisfaction states that depend on dissatisfaction states (e.g. to enjoy recovering from sadness requires first experiencing the dissatisfaction-state of sadness), it does not automatically follow that we need them (there are plenty of greater satisfaction states to pursue that require no such context), so even those exception cases would only sit in our life repertoire as minor options, not something we “need.”
And in any event, we’d not choose a life solely consumed by only those satisfactions anyway. There are many other greater satisfaction states we’d definitely want to be sure to include. And no rationally informed person would want severe versions of these dissatisfaction-satisfaction sequences if they could help it (hence the “human happiness is impossible without the mass torture and rape of children” line some Christian apologists will actually declare is most definitely bullshit).
So in the end, I would say we almost certainly need life to be varying and challenging and offer opportunities for accomplishment (progress, knowledge, creation, friendship, and so on), but this does not require anything to be net dissatisfying. Much less severely so.
To add onto Richard’s examples:
Some people may find that, after having had good wine, they’re ruined for the cheap stuff. To some extent, though, one can argue that this was a result of ignorance; they may actually not have had the palate to recognize why the cheap stuff was actually not as pleasant an experience as it may have felt at first glance. Our pleasure states are contextual in the sense that we learn to evaluate our pleasure and experiences in the light of a growing body of knowledge.
But there are actually lots of times where that doesn’t apply, and in fact cases where the expanded knowledge can even help you. Some people think of a Big Mac not as a bad burger but as a good Big Mac, and view it as a separate experience. Cheaper alcohol can sometimes have a lot to recommend it: It may be cheaper because it’s more overt in a particular flavor profile, but you may like that bluntness. Certainly, for a lot of applications, the cheaper alcohol is going to be better even putting aside cost concerns. A cheap champagne will probably be a better cocktail topper than an expensive one because the expensive one’s strengths of complex notes are just going to disappear. A really great example is pizza: even when you’ve had a really good wood-oven pizza or a great Chicago or Detroit-style, you may still end up craving a Little Caesar’s. An American who discovers a margherita pizza and other more sophisticated combinations doesn’t necessarily stop liking a classic pepperoni or Hawaiian.
The very fact that you can’t just pile good stuff on top of each other and get something good is itself illustrative.
To use examples from intoxication: Many people when they’re high on weed may naively think that they’re going to enjoy a really great meal, but find that it’s not as good as they were hoping. There’s a reason why snacks for getting stoned tend to be really blunt (excuse the pun), your classic Cheetos and french fries and cereal: the experience tends to elevate really simple flavors. Of course, there are probably people who have had different experiences, which just goes to show how heterogeneous and complex the issue is.
Richard points to the example of sadness. I’d add on fear. Horror fans want to be afraid. They don’t usually want to be mortally afraid constantly, but they want to enjoy the state of heightened arousal alongside the mental exploration of the macabre. That’s a different state than “happy”, but it’s pleasurable to them.
So the “Always happy” state is nightmarish in part for the same reason that the “All my meals are vanilla ice cream!” fantasy is also nightmarish. “Happy” is a nice base state, but people crave some degree of diversity in experience. So the correct thought experiment to run is, “Would I rather have a life with the kind of challenges and traumas that lead to actual extreme inescapable unpleasantness, or would I rather have a life with my preferred mix of emotional states?” Once you phrase it that way, the latter becomes pretty clearly the preference. Which indicates that what we want isn’t one simple kind of satisfaction but a complex set of them. And the happiness research shows that the value of novelty is sometimes overstated. People often don’t mind having a few preferred options. How many people do you know who go to a restaurant and get the same thing almost every time?
As Richard points out, we often like to be in a game where losing is a very real option. And while most people tend to prefer to win rather than lose, there are many, many cases where a really great, well-fought match, where everyone got to make really exciting plays and we happened to lose, is far preferable to the alternative. Sometimes a big loss can even reinvigorate us and get us to fight again with some hunger. Last night I was playing Coup with friends, and while I may have had less of a good time had I not won a fair share of matches, some of the most satisfying matches were the ones where I lost, just because other people had bluffed so impressively or calculated their moves so cleverly.
In my personal experience, the state of quiet serenity that I experience in meditation is one I wouldn’t mind never leaving, or at most leaving very rarely.
I think that, when you control for the fact that some negative experiences can give greater context to our pleasure but so can many positive experiences (like when you watch a favorite movie again after some years and you see an entirely new interpretive framework, or catch some character or plot beat that you missed before, and it makes you appreciate the movie in a whole new light), a lot of the apparent benefit of negative states evaporates.
//Human experience is a simulation constructed by the brain. “Sweetness” does not exist outside our minds; sugar molecules have no such property. It is entirely a fabricated experiential response.//
I beg to differ. Colours, sounds, odours, and yes, even tastes exist out there. They are constitutive of the external world.
Sorry, they aren’t.
You evidently need to catch up on the science here.
Please explain how sugar tastes without a tongue and nose.
To be clear, neither tongue nor nose actually has anything to do with “how sugar tastes.” Those are just arrays of receptor cells, there to inform the brain which molecule is present. “How sugar tastes” instead has something to do with computations performed deep in the brain. We don’t know how that works yet, though we do know where those computations occur in the brain (and can in principle remove them, or stimulate them in the absence of any sugar molecules).
And we have good evidential reason to believe the taste of sugar is entirely dependent on those computations being made and integrated with an active world model. So there is no reason to believe “the taste of sugar” exists anywhere in the universe, potentially or actually, other than as the output of such a computation, potential or actual.
It seems conceivable though that an experience machine could directly instill in someone the sort of profound satisfaction associated with complex context-dependent satisfaction states with no need for the states themselves — if not even enormously more satisfying states. After all, as Carrier points out, our brains necessarily mediate between reality and our perceptions and emotions — if we accept the premise of the experience-machine thought experiment that emotions can be detached completely from reality (however much modification to our existing brains this requires), why would sophisticated states of simulated reality be necessary for greater levels of satisfaction?
I don’t follow what you mean.
There are only two pathways:
(1) Lying (creating states through deception), which no rationally informed person would choose. Because they would always prefer to know they are in a simulation and thus what its rules and opportunities are, because greater satisfaction can be achieved through knowledge rather than aimless wandering, by the basic principle that an informed agent can pursue all goals more quickly and effectively than by a drunkard’s walk (see the toy sketch after this list).
(2) Not lying (creating states honestly), which every rationally informed person would choose. Because it is possible to know you are in a sim and more reliably achieve maximal satisfaction states (see How Not to Live in Zardoz and Ten Ways the World Would Be Different If God Existed).
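As flagged in pathway (1), here is a toy illustration of the “informed agent beats a drunkard’s walk” principle: a minimal Python sketch of my own, in which the goal distance, trial count, and step cap are arbitrary choices, and nothing comes from Nozick or the linked articles.

```python
import random

def drunkards_walk_steps(goal: int, cap: int = 10_000) -> int:
    """Steps an uninformed agent takes wandering randomly on a number line
    (starting at 0) before stumbling onto the goal; capped to keep runs finite."""
    pos, steps = 0, 0
    while pos != goal and steps < cap:
        pos += random.choice((-1, 1))
        steps += 1
    return steps

def informed_steps(goal: int) -> int:
    """Steps an agent takes when it knows where the goal is: just walk there."""
    return abs(goal)

# Compare the informed agent to the average random walker over 200 trials.
trials = [drunkards_walk_steps(10) for _ in range(200)]
print("informed agent:", informed_steps(10))
print("random walker (mean of 200 runs):", sum(trials) / len(trials))
```

Averaged over runs, the informed agent’s step count scales with the distance to the goal, while the random walker’s balloons (and is finite here only because of the cap), which is the quantitative core of the claim that knowing where you are and what the rules are lets you pursue your goals more effectively.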
It is unclear what you are advising other than this. “I feel good but I’m all alone and not doing or learning anything” cannot even in principle achieve any maximal satisfaction state, but can only inevitably lead to mind-numbing horror, as one realizes the pointlessness of mere stimulus. You can remove the feeling of being trapped (unable to do or learn anything) and aimless (alone and without any goals or diversity of experience) only by lying (deceiving the brain/mind so as not to notice these objective facts). Accordingly, no rationally informed being would choose such a lonely dead-state (see The Objective Value Cascade).
To me it isn’t obvious that experiencing displeasure from solitude, lack of purpose, lack of stimulation/variety of stimulation, etc. is more or less artificial than the opposite — being bothered by these things and feeling satisfied by their opposite is conducive to survival in our world, but theoretically your survival would be ensured by a sufficiently capable machine. How could we say then that someone who’s happy/satisfied doing something that seems monotonous, unstimulating, or downright unpleasant, in or out of an experience machine, is deceived rather than just “differently-wired”?
That is the question I answer in Cascade.
Once you posit the condition (a rationally informed agent), the conclusion follows that any such agent can work out that they would be objectively better off with a richer satisfaction condition than “doing and being nothing.” In short, a rationally informed agent will always be able to work out that living in a perpetual orgasm tube is too objectively pointless to be satisfying. It can produce mental pleasure, but only intellectual horror.
The only way to prevent this outcome (other than deception) is lobotomy. Which renders the thing in the tube no longer a person capable of contemplating their condition.
This is the part I have trouble understanding — that there’s an inherent link between intellectual/sophisticated stimulation and increased satisfaction. If humans have a craving for sophisticated and novel stimulation that would cause them to become bored with a single, simple repeated experience, it seems they’d have an interest in switching this craving off if they could do so without endangering themselves — and in the world as we know it people often perform these sorts of situational “self-lobotomies”: drinking or taking drugs for instance, engaging in meditative activities like knitting or gardening, or outright meditating. These can even be as simple as consciously suppressing information — e.g. not thinking about world hunger while watching a movie, or not thinking about the litter box in the corner while eating a sandwich.
If an experience machine can help people become happier by similar means, how would we conclude that it’s harming people by destroying (parts of) their minds rather than healing them by removing unneeded cravings/intolerances?
Correct. You are almost getting it.
So you can lobotomize people (remove all their interest in anything substantial) and thus remove any inclination to prefer a better state. But if you allow them the ability (and that means adequate rationality and information) to evaluate, by hypothetically comparing outcomes, whether they would choose to remain lobotomized or be de-lobotomized, rational persons will never choose the lobotomized state. That’s the argument I develop in Cascade.
When comparing more substantively achieved satisfaction states and vegetable states, there are objective (not just subjectively preferred) attractions to the former over the latter. Yes, you can block someone from realizing that by lobotomizing them (removing all ability to discover this, or the motivation to use it). But that is not what we are talking about: we are talking about what people would choose if rationally informed; not what they would choose if you surgically removed from them all possibility of evaluating available outcome states.
Make someone only want to be a vegetable, and they will only want to be a vegetable. That is self-evident to the point of being trivial. But if you ask someone capable of rationally considering alternatives whether they want to be a vegetable, no rationally informed person will say yes. This does require a set of motivations and desires—like the desire to rationally consider alternative available states before being able to directly experience them, which pretty much defines the entire operational function of human consciousness.
So if you remove “the desire to rationally consider alternative available states before being able to directly experience them” you are essentially killing the person and replacing them with a vegetable. That the resulting vegetable will be content with that is not relevant to the question we are asking here, which is not what vegetables want, but what rational, conscious beings will always want when given a choice, sufficient information to choose by, and no deceptions or coercions preventing them coming to a logically sound conclusion.
I agree that a given person probably would be repulsed at the idea of plugging into the machine and becoming maximally-satisfied without the need of novel/complex stimulation, and would choose not to plug in for that reason — it seems though that by your argument this would be a rational/appropriate response only if the person in fact would not be happier (not just happy) in the machine, and that’s specifically what isn’t obvious to me.
It seems to me that any activity a person could perform in reality for X satisfaction could be beaten by the machine simply offering X+1 satisfaction for the same activity, whether or not the subject knows they’re in the machine at the time. By the same token, it isn’t clear to me that people’s real-life experiences, relationships etc. produce satisfaction expressly from their realness (or complexity or novelty) but rather because people’s brains “artificially” produce satisfaction in response to the associated stimuli. i.e., who’s to say that we’re more deceived inside the machine than out of it? It seems common after all for people to use emotional intensity as a guide to what’s true or valuable — like when someone says in a love song, “I’ve never known anything as real as this”. If a person, say, develops a brain tumor one day and loses their romantic feelings for their spouse, have they “woken up” from an illusion or been given a new one?
With respect to reality, this is a given—hence I think ultimately we should go live in these places: see How Not to Live in Zardoz. But that would not be the “experience machine” (which by definition is faking everything). In virtual worlds we will have real relationships with real people and do “real” things (within the context of the sim).
But internally, all metrics will hit diminishing returns and thus have a ceiling (every possible pursuit will have a maximum satisfaction point beyond which there is no especial value in adding more; diversified pursuits have a maximum set by time allocation, e.g. you can’t do everything simultaneously; and if we are talking about the absence of the supernatural, there will be a maximum memory load, so you will only be able to remember so much about your past adventures, although that load point is hell and gone beyond current human lifespan).
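One illustrative way to picture that ceiling (a sketch of my own; the saturating functional form and all constants are arbitrary choices, not anything specified above) is to model the satisfaction from any one pursuit as a bounded function of the time invested in it:

```latex
S_i(t) = S_{i,\max}\left(1 - e^{-k_i t}\right), \qquad
\frac{dS_i}{dt} = k_i\, S_{i,\max}\, e^{-k_i t} \longrightarrow 0, \qquad
\sum_i t_i \le T .
```

On any such model the marginal return of a pursuit decays toward zero, its total never exceeds its cap, and diversification across pursuits is itself bounded by the shared time budget T (and, absent the supernatural, by finite memory), which is all the ceiling claim above amounts to.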
As to your last question, you’ve veered off subject into something too incoherent to answer. You seem to have changed subject into the ontology and epistemology of emotion, which is disconnected from the question we are exploring here. Whether (and when and how) emotions accurately or inaccurately assess circumstances is the same issue regardless of whether we are in the real world or a sim.
But in an “experience machine” (which by definition is not a sim but a “fake” sim, e.g. it fakes experiences rather than letting you explore them, it fakes people rather than letting you interact with them, etc.) any rational agent not deceived will be dissatisfied, knowing it is all fake. Unless you trick them into not knowing it is fake, but then they are just a deceived puppet, which no rational agent would choose.
And then you run into the Cartesian Demon problem of how to keep them that way, which is hard to do for a clever person. Hence HAL 9000 eventually had to just kill the crew to prevent them from discovering the secret, Vanilla Sky failed eventually and its occupant had to call customer service and get out, and Total Recall couldn’t run forever but had to be just a one-off vacation package, otherwise the resulting existential paranoia would have driven the subject mad. And in your scheme, you’d eventually just have to lobotomize the subjects to downgrade their intelligence so they never figure anything out, essentially resorting to Mengele-scale brain damage which no rationally informed agent would sign off on.
Hence. Lie to them or lobotomize them or kill them—those are your only recourses, all just to prevent rational beings from having what they will all actually want, which is to live in a real sim, not a fake one.
(replying to above)
To clarify, I’m imagining a version of the machine that doesn’t just control what one perceives visually, tactilely etc. but also one’s accompanying emotional experience. e.g. it doesn’t just show someone a sunset; it shows them the sunset and stimulates their brain so that they find it profoundly beautiful (as if every conceivable emotional response had a catalog number or recipe that could be called up on demand), even if they know full well it isn’t real and/or are seeing it for the trillionth time.
People may object to plugging into such a machine for a number of reasons — that it would deceive them, that it would destroy them as a person… — but what validity would these objections have if we assume that 1. happiness/satisfaction is these people’s main goal and 2. the machine would in fact make them more satisfied than they’d be otherwise?
This does I think overlap with questions related to ontology/epistemology since it concerns whether and to what degree a person has to be mentally “harmed” in order to be satisfied from interactions with things they know are virtual. If that goes beyond the scope of Nozick’s thought experiment, it still seems relevant to the underlying question of how happiness relates to value.
You aren’t discussing Nozick’s machine, then.
You are just describing simverses (which Nozick could not conceive of when he wrote; the idea had existed in obscure fiction for half a century by then, but wasn’t brought to general public consciousness until, I think, the movie Tron in 1982).
The key element of Nozick’s machine is that you aren’t doing anything. You are being tricked into thinking you are. Like, instead of playing an all-sensory MUD, you are watching someone else play it, and then being fooled into thinking you are.
So maybe this confusion has set you off on the wrong tangent.
The issue is not whether simverses are comparable to real worlds as far as satisfaction accessibility (in fact they are in every way superior on that metric), or whether emotions can be “real” there (of course they can; emotions are just evals of sensory-intellectual assessments of circumstances, which one has whether coming from photons or electrons).
The issue (in the article you are responding to) is whether a rationally informed agent would choose to enter a Full-Deception-MUD (and be fooled into thinking they are living a life they are not, and making decisions they are not, and meeting people they are not) or whether they would prefer to enter a No-Deception-MUD (a simverse in which they get to make choices and meet real people and so on).
If you think of The Matrix, which Nozick could not anticipate when he wrote this up (that movie came out twenty-five years later, although at least he was still alive then), it is neither (people are meeting real people and making real decisions in it, but are in many respects deceived about where they are, what they actually can do, how they are being exploited and abused, and so on). So that is not a Nozick Experience Machine. Nor a simverse anyone would prefer to live in. Although if those were their only choices, they would choose The Matrix over his machine.
If what you mean is “What if we could change people, completely rewire their brain so that they are completely different people with completely different desires, such that they would want to be in Nozick’s machine?” you are creating a different scenario than Nozick was. You are then dealing with a complex counterfactual that questions whether the objective can even be achieved without just ending, lobotomizing, or lying to the person in question. I aver that is impossible (you will have to do one of those three things instead). For the reason I think that, you need to consult (and then should probably be commenting on) my other article, The Objective Value Cascade, not this one.
I get the impression that Nozick would consider a full-deception MUD as better at producing satisfaction than a no-deception one, since in a no-deception sim not only would one be vulnerable to the sorts of social perils one finds in reality — not being invited to a party, being insulted or gossiped about … — but many real-world vocations would be impossible or radically different in a sim (e.g. a doctor in a world with no diseases; a scientist, researcher, or engineer in a fully-plotted universe; an athlete in a world of equally-abled people). Nozick’s elaborate deceptions serve the same function as the “ready-to-order” emotions I described earlier, and to me seem qualitatively very similar: they distract from or conceal the lack of physical danger or limitations in a (sufficiently advanced) virtual world. In a virtual world, after all, you could always pull the emergency brake and get out of any difficult or unpleasant situation, social or otherwise; to be in such a world at all is to participate in some level of self-deception.
Deceptions like these may not work indefinitely, but per Nozick this wouldn’t be necessary: a person would program for instance several years’ worth of illusory experiences, then after experiencing them exit the machine to program the next few years’ and so on.
My question would be, if we assume that a comprehensive deception of this kind (consensual, self-imposed, all actions preprogrammed) could work even temporarily to make a person more satisfied, would it be rational for a person to refuse it?
That’s not what Nozick is talking about.
Nozick means you go in and stay there.
In his experiment: it is not logically possible to create the condition (achieving perfect results) without lying to the agent; which is precisely why no agent would choose that condition—for both the standard Epicurean reason (succeeding at keeping a smart person fooled is terminally unlikely, to the point that only a fool would arrogantly think they could reliably succeed) and for the deeper existential reason (that this is not what any rationally informed person wants, which is proved by the fact that you would have to deceive them—if they didn’t care, you wouldn’t have to).
So either all you are talking about is just another form of television; or a Magic Pill.
In the first case, you are simply asking if people would like to watch TV on occasion (only enhanced). People watch shows, then do other things with their life, knowing they are just watching shows. Which fails to meet Nozick’s condition. Since you can leave, it’s just entertainment; it has no impact on your satisfaction outlook, because you can pursue other things. Obviously people would buy and spend time in that machine (for diverse examples explored in fiction, see Total Recall and Strange Days).
Nozick’s actual experiment is not that. It would be more analogous to A Clockwork Orange or the concluding fate of the protagonist in Being John Malkovich. Like being forced to do literally nothing and meet literally no one, and watch Seinfeld for the rest of your life. No sane person would want that. And rewiring their brain so they become a completely different person who wants that is destroying them as a person, and replacing them with a functional vegetable. Which is a surgery no rational person would consent to.
If you maintain Nozick’s condition, therefore, then you are asking about what is called in philosophy the Magic Pill problem, in which case, see my discussion of that in the unrelated subject of moral knowledge in Goal Theory Update (see § 2b).
Do you think it’s illegitimate in that case for Nozick to stipulate that the machine does in fact increase one’s satisfaction? (“a lifetime of bliss” as they put it.) If it does, but only on the condition that the subject never learns the truth, that would seem to create a rational incentive to plug in and never learn the truth (one could for instance choose sim experiences that don’t beggar belief while still being greatly enjoyable).
In the magic pill discussion linked, you say it’s preferable not to exist altogether than to commit a reprehensible act and then erase the memory with a pill, but that would seem to appeal to a metric other than personal satisfaction (since someone who doesn’t exist couldn’t have any satisfaction, even as much as a reprehensible person). Without satisfaction as the bottom line, how do we determine that the choice not to take such a pill, or spend the rest of one’s life watching Seinfeld, or undergo the brain-rewiring needed to make one permanently overjoyed in a world of illusions, is irrational or insane?
The “rational incentive” is removed by the knowledge condition. This is always the problem with Magic Pill scenarios.
No one would informedly go into Nozick’s machine, but they would go into a real one (an actual sim).
The analogy is completed by “existential dread” of being a vegetable in a fake world vs. being a murderous psychopath.
In the moral philosophy Magic Pill, the dread is being the worst possible person, which dread one can experience even now (knowing they could have taken the pill), hence we know we would not take it (as being the sort of person who never would is the only guarantee of not being in the pill scenario even now). In the Nozick scenario, the dread is everyone you know being non-existent fake and nothing you do really being you doing it (there are literal clinical insanities that consist of this unshakable dread, and it results in medication or institutionalization).
The only way to be sure you are not a deceived end-state Craig Schwartz (as opposed to an aware one) is to be sure you are the kind of person who would never choose to be. Otherwise you will always suffer the existential dread that you are that person after all, and in exactly that nightmare scenario.
If the Nozick machine determines every aspect of your virtual experience, though, down to your tiniest actions, it seems as if someone plugging in could specify that their virtual life not include existential dread — they could go through this life never having an experience that would prompt them to consider or dwell on the possibility that they’re living in a sim, or even find the prospect attractive. (Just as a hypothetical murderer might take a pill to remove not only their memory of committing the crime, but any memory of why they’d want to.) If this compromises their rationality, but increases their satisfaction, how do we judge the decision to plug in as against their interest on balance?
You are now lobotomizing the victim. See what I mean?
Yes, if you destroy someone’s entire personality, changing them into a completely different person bereft of basic intellectual capabilities to evaluate their situations by, so that they actually want to be a vegetable watching Seinfeld forever, then you will be able to do that. But no one would let you.
That’s the point.
Of course, that we (now) can experience this existential dread at the thought entails we didn’t let you. Which is the point about all Magic Pill scenarios.
As to satisfaction states, read Cascade again: any rational, informed person faced with the choice between two satisfaction-state pursuit-zones, one fake (a mere pleasure-causer) and the other real (where more satisfaction states are achievable by definition, because they are the result of real action and not deception), and thus one requiring destructive lobotomization and the other allowing one’s faculties to function, will always choose the real one.
Otherwise there is no difference between doing the lobotomy-Seinfeld thing and just being a mindless vegetable in a perpetual orgasm tube. Rational agents will recognize that if they could choose between that outcome and the other, the other is always objectively better (they will always be more satisfied knowing they didn’t do the lobotomy-Seinfeld thing and are in a real world where they are meeting real people and actually doing things).
This can be tested by the required condition for objective analysis: wake up the lobotomy-Seinfeld person, give them back the faculty to understand the difference between the two states and choose, and then ask them if they’d rather be the other person (all else being equal), and they will always say yes; while the person in the real sim already has that faculty and thus can already report to you that they like where they are and would not prefer to be the lobotomy-Seinfeld person.
In essence this is how all imperative propositions work: for an imperative (what one ought to do) to be actually true, it cannot be based on any false information or deception (because falsity in, falsity out); thus to know what one really ought to do (and not what one has been tricked into thinking they ought to do), you need to know the true facts of the situation. Thus anytime you have to deny that to an agent, to trick them into wanting a particular outcome (like destroying their faculties as you suggest), you have already thereby admitted that it is not an objectively desirable state.
And remember, in the Cascade experiment there are no predetermined desires (so no “existential dread” mode has even been installed yet; that is actually what the agent is deciding between: an outcome where they have that, and it works correctly, i.e. it reacts to the actual situation it is meant to signal, and one where it is suppressed, i.e. it will not activate even when in the situation it is meant to signal). The agent is deciding which set of conditions (desires, values, emotions) it would prefer to have (if it had a choice), by querying its hypothetical different selves in the future (the one who chose one way, and the other who chose the other).
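For anyone who finds that easier to follow procedurally, here is a minimal Python sketch of the comparison as I read it; the class, the numeric “felt satisfaction” scale, and the particular numbers are invented stand-ins for illustration, not anything taken from the Cascade article itself.

```python
from dataclasses import dataclass

@dataclass
class FutureSelf:
    """A hypothetical future version of the choosing agent (illustrative only)."""
    name: str
    informed: bool             # does this self know its true situation?
    felt_satisfaction: float   # the satisfaction level this self reports feeling

def endorsed(future, futures) -> bool:
    """A self's verdict counts only if it reasons from true information
    (falsity in, falsity out); an informed self endorses its own state only
    if it would not trade it for any other informed alternative."""
    if not future.informed:
        return False
    return all(future.felt_satisfaction >= alt.felt_satisfaction
               for alt in futures if alt.informed)

def choose(futures):
    """Pick the future that informed selves endorse, not merely the one
    reporting the highest felt number."""
    candidates = [f for f in futures if endorsed(f, futures)]
    return max(candidates, key=lambda f: f.felt_satisfaction)

# Toy comparison: the deceived, plugged-in self reports a higher number,
# but its report is excluded because it is not reasoning from true information.
machine_self = FutureSelf("plugged-in and deceived", informed=False, felt_satisfaction=9.9)
sim_self = FutureSelf("free agent in a real sim", informed=True, felt_satisfaction=8.0)
print(choose([machine_self, sim_self]).name)  # -> free agent in a real sim
```

The one design point the sketch encodes is that a self’s report enters the comparison only if that self is reasoning from true information; the deceived machine-self’s higher number is simply excluded, which is the “falsity in, falsity out” rule in procedural form.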
How would we determine though that a preference for non-illusory vs. illusory experiences is rational in the first place, given a sim that could reproduce any given set of non-sim stimuli? For instance, we might imagine the case of someone who has a paralyzing fear of spiders, and who would be unable to sleep at night knowing there’s a spider under their bed. Conceivably this person might choose to take a pill that makes them forget (or simply not care about) the spider’s existence; they lose knowledge but gain satisfaction. Another person might have a sim-life so fulfilling that were they to discover its reality they would fall into permanent despair — how would we judge in these cases that the people are better off undeceived vs. deceived, taking satisfaction as the bottom line?
In practical reality, a strong aversion to illusions makes sense given that illusions present possible threats to our safety (and so satisfaction); in a machine-facilitated world free of these kinds of threats, it isn’t clear that this aversion would be rational or worthwhile overall (which could also be said of many near-universal psychological features, such as developing a dislike for something/someone simply because they aren’t novel, being uncomfortable in solitude, or (like you note in Cascade) enjoying risk-taking for its own sake). We may always know what we want, but simply wanting it doesn’t make it what’s best for us — even when the thing is truth itself.
No rational person would do this.
Take the spider case:
The rational thing to do would be to personally dial down the reaction-setting to spiders, rather than create an ignorance of them.
Because in the former case: you are modifying yourself; you know what you’ve modified (including after the fact); you know it is in line with rational objectivity and what you want; you could reverse it if ever you need; and it does not destroy your rationality or knowledge.
Whereas the latter procedure is literally dangerous (some spiders you really do need to know about before sleeping over them); and any rational person whose rationality has not been surgically removed would suffer the converse outcome: they would grow anxious that they took the pill and there might yet still be spiders there and now they’ve deleted their ability to find out.
You can only “fix” that outcome by now doing brain surgery not just on their knowledge of spiders, but their entire ability to understand or care about all magic pills. You are thus destroying rationality; not creating a greater satisfaction state for rational agents. And rational agents will not choose that (not rationally; and not knowingly).
That this is a billion times worse when it’s not just spiders but your entire knowledge of every aspect of existence (none of the people you think you meet exist; nothing you think you are doing or thinking are you actually doing or thinking; it’s all just a TV program you are sitting in a tube watching) only further diminishes the Nozick test. All rational agents will prefer a real sim to that one. And the only way to change that is to lobotomize them: to remove their rationality, and thus degrade them into passive animals rather than intelligent agents.
By contrast, the dialed-response option is what we actually do (all treatments for mental disorders, pharmaceutical or therapeutic, involve reaction-reduction, not ignorance-creation), and is how rational agents would adjust themselves in any real sim to avert counterproductive outcomes (like disproportionate phobias, or discomfort at the existence of other people you could visit). They would not destroy all their knowledge and even rationality, becoming literally subhuman and rendering their very existence individually pointless.
If the subject became less rational/informed (re: the spider and pill) but gained satisfaction, would you say that’s still undesirable on balance? If this requires that their life unfold in a very particular way (e.g. so as to keep their mind off pills), and wouldn’t provide the more general benefits of directly adjusting their fear response, that would seem to illustrate an advantage of a Nozick machine: in the machine all of your experiences are fully determined and you’re protected from all external threats. In the unpredictable and dangerous non-sim world reason and knowledge obviously are essential as a heuristic, but what utility would you say they carry over into a determined and danger-free sim world?
Yes.
You are basically asking “is it better to be an immortal pig than a mortal human.” Ask any rational human being who actually understands what that would entail (the complete destruction of their selves and their reason and even their capacity to cognitively know things about the world) and they will answer “No.”
Likewise if you dial it up to “deceived prisoner whose reason and awareness that they are not meeting anyone or doing anything is kept from them but they are tricked into misbelieving otherwise for all eternity” and the answer (on the same conditions of understanding) would be the same.
What anyone would want instead is not Nozick’s machine, but a real sim.
Would someone agree to live forever in a Hayao Miyazaki cartoon, if they were not lied to about it and were actually free to act in that world on their own initiative and reason, and the other people they met there were real people, just like them (whether original AIs or transitioned humans, either way a genuinely conscious independently-acting person)?
Yes. The only objection anyone would raise is to the aesthetics (maybe someone would prefer a more realistic sim, like space opera or fantasy adventure); but even if they couldn’t skin their world the way they wanted, yet got to live forever in a non-Zardozed sim, all rational agents would agree (at the very least, to end up there in their retirement or upon their natural death).
That the real world is more dangerous and less predictable is of no relevance. Just as no rational person actually in that world chooses to watch Seinfeld 24/7, no rational person would choose to do that forever either.
If maximizing satisfaction is a person’s end goal, though, on what grounds would they choose to keep their identity, reason, or capacity for knowledge if doing so meant sacrificing overall satisfaction? In order to make good on its promise after all a Nozick machine would need to be able to provide any life experience imaginable, not just that of a pig (although people might choose differently if it were discovered that pigs experience nothing but maximal joy their entire lives).
In the spider example for instance, it makes sense for a person to give up their old spider-fearing identity in exchange for a new one that doesn’t react excessively when there’s no danger — in a Nozick-machine world though there would be no danger of any kind (even mental discomfort wouldn’t be “part of the program”), and so no obvious reason to retain things like a strong attachment to reality for its own sake, or interactions with real vs. simulated people.
There’s no obvious reason in fact why someone in the machine would need even the simulacra of other people — they could for instance choose to replicate the experience of someone who meditates alone in a cave until they achieve what’s been described as ultimate spiritual enlightenment. If this involves a sacrifice of their old, extra-machine identity, how do we determine that this sacrifice is harmful and misguided vs. a healing one akin to removing/reducing an unwanted phobia of spiders? We can’t necessarily rely on the person’s emotional reaction when presented with the choice, since for all we know they just don’t “know better” yet.
It isn’t true there is no danger in a Nozick machine. You are literally a chained prisoner in it, forever alone, and unable to ever make a decision.
No rational person would describe “if I go into that room I will be tied to a chair alone and forced to watch Seinfeld for all eternity” as “no danger in that room.”
That’s why you keep trying to dodge this with a Magic Pill. But that doesn’t evade the problem.
No rational person would describe “if I take this pill I won’t know that I went into that room and was tied to a chair alone and forced to watch Seinfeld for all eternity” as “no danger in that room” either. That’s even worse: because now their rationality is being destroyed (they are being prevented from ever discovering their nightmarish predicament), and they are being tricked into being a prisoner (rather than living a life they can be confident is theirs, meeting people they can be confident actually exist).
Indeed, there is a movie (almost) exactly about this: THX1138.
So when that tactic fails, you resort to brain surgery—a literal lobotomy:
Now you want to completely destroy most of a person’s brain so that all they want anymore is to be alone in a room doing nothing. But no rational person would consent to being almost entirely destroyed and thereby turned into a drooling vegetable.
It is objectively obvious that a person who can choose between that and a richer more complex life of association, choices, and multiple pursuits and pleasures will choose the latter, because there are more satisfaction states achievable there, and they aren’t being lobotomized or deceived in achieving them. The former, by contrast, is objectively a nightmare; that’s why it can only be achieved by deception and destroying its victim’s ability to reason.
Anything that requires such lengths to trick someone into clearly is not what any informed rational person would choose. “But they would choose it if we destroyed their knowledge and rationality” only proves the point.
If this state doesn’t cause suffering, though, or reduced overall satisfaction, how would we determine that it’s negative or dangerous? If the advantage of living outside the machine is being confident that one’s surroundings are real, what advantage does that give one over someone in the machine who has the exact same confidence and a life full of experiences at least as rich and varied, only artificially induced? (In contrast to someone with blunted or eliminated emotional capacity like the subjects of THX1138.)
It would seem though that a machine advocate could argue in the reverse: someone with such a strong preference for a non-sim life that they’d be willing to sacrifice satisfaction for its sake must not be rational, and so isn’t rejecting the machine on rational grounds. i.e. if a reasoning process leads you to a less-satisfied state, it wasn’t reasonable to begin with.
First, suffering is not the definition of harm (otherwise killing people humanely would not constitute harm).
Second, it would produce reduced overall satisfaction to deprive someone of their capacity to reason and understand their condition, and to deprive them of genuine interactions with people and genuine achievement and decision-making (imprisoning someone is a harm, even if you trick them into not noticing you’ve deprived them of everything they actually want in order to enjoy living).
Third, it would also produce reduced overall satisfaction to lobotomize someone so that they are incapable of realizing they might have been magic-pilled. Whereas leaving them that capacity allows them the existential dread that they might have—which can only be overcome by their personal knowledge that they will never have taken the pill.
So you are really asking whether it is okay for you to lobotomize and imprison all human beings as long as you can sufficiently trick them into never knowing this has happened and thus their entire reality is fake and they are actually making no decisions whatever and are utterly alone and never interacting with anyone else. And the real question is: why do you want to do that to people?
As for the people themselves, no rational and informed one of them would let you because they recognize its intrinsic horror (regardless of whether they, once lobotomized and imprisoned, will remember that), so why do you care whether you “can” do it to them?
It would seem though that, assuming our only access to reality is through our brains, we can only react emotionally to our perceptions of reality rather than reality itself. E.g. if someone with a phobia of spiders believes a spider is crawling up their back, they feel the same fear whether or not the spider is actually there.
For that reason, it isn’t clear to me why the mental experiences of a person who is convinced they aren’t in a simulation (rationally or otherwise) couldn’t be copy-pasted wholesale into someone else’s brain such that that second person effectively lives person 1’s life (as private experience goes) and has all the same satisfaction states. They never feel existential dread because person 1 never did; they never learn they’re being deceived because person 1 never did (possibly because person 1 never was), and so on. If person 1’s life then is even slightly more satisfying on balance than person 2’s would be, how could person 2 justify not “hopping lives” in this way if their ultimate goal is maximizing satisfaction?
This is why the matter of objective analysis matters, which for some inexplicable reason you keep skipping over and ignoring.
If you want people who lack the ability to objectively analyze their condition to be killed off, so that “people” in our understanding of the term no longer exist, then I’m back to wondering why you want that.
As for rational, informed persons, none of them want that. So the fact that you do is moot to this entire conversation. Your approach requires lobotomization and deception. The very fact that it does discredits it as anything any rational, informed person wants. Otherwise, you’d not have to resort to such tactics to “trick” and “force” them into the scenario you imagine. And I have explained this over and over again.
I am done talking in circles.
The reason objective analysis leads to my conclusion and not yours has already been explained. I wrote an entire article on it, and have instructed you to read it several times now.
Go to it, man: The Objective Value Cascade.
Everything else has been answered here. Multiple times. If you continue to ignore me, I will cease interacting with you.
To me it isn’t obvious why objective analysis produces the conclusion that real experiences/people are superior to illusory ones, assuming that they provide indistinguishable experiences.
“Your approach requires lobotomization and deception. The very fact that it does discredits it as anything any rational, informed person wants.”
That’s specifically what I have trouble understanding — how can a person claim that they wouldn’t experience more satisfaction in a Nozick machine, if the Nozick machine hypothetically can reproduce any imaginable satisfaction state (including that of being convinced, rationally or otherwise, that one is not in a Nozick machine)? The person may argue that it isn’t just the experience of things/people that they want but the experience of real things/people, but how is that claim coherent if the two types of experience are identical?
If on the other hand the claim isn’t coherent, it isn’t clear to me why being deceived into satisfaction by the machine inherently discredits it as a rational choice, any more than for instance taking an antidepressant or ADHD medication to modify the way one perceives the world. The aversion to simulated experiences may be universal among rational people, and useful, but it doesn’t seem to me that those facts alone establish it as rational.
Then you are irrational.
Rational people see things exactly the other way around: a real life is always better than a fake one. A fake life (being imprisoned in a room watching a TV show of non-existent people and things while being deceived otherwise) is literally horror to every rational human being. None would choose it.
That you would only indicates that you have lost any comprehension of what the difference even is and why it matters. You are therefore outside of rational thought.
I cannot help you.
And I can only hope you never do this horrible thing to anyone.
“A fake life (being imprisoned in a room watching a TV show of non-existent people and things while being deceived otherwise) is literally horror to every rational human being.”
I don’t know that that horror stems purely (or even mostly) from an evaluation of expected satisfaction outcomes, though. A person may reason that:
Rational behavior seeks to maximize satisfaction
Knowledge always increases satisfaction
The machine withholds knowledge
Therefore, plugging into the machine is irrational
But this would seem to conflict with the belief that if the person, while in the machine, were to learn the truth then they would become distressed (lose satisfaction). i.e. the person would need to reconcile the proposition that “knowledge always increases satisfaction” with “learning the truth can lower satisfaction”.
This conflict wouldn’t seem to exist if the person rejects the machine on the grounds that knowledge is intrinsically good or that real experiences are intrinsically better or more dignified, which I assume is Nozick’s position — though that of course raises the question of why we should believe that intrinsic property exists and how exactly it manifests.
That is just synonymous with:
The person rejects the machine because being in a condition of knowing the truth is more satisfying than being in a condition of being denied the truth, or because experiencing real people and really making one’s own decisions is more satisfying than experiencing only fake people and never really making any decisions.
Which is my point.
“Intrinsically good” is just code for “More satisfying.” If it wasn’t, it wouldn’t be describable as “intrinsically good.” No one describes a dissatisfying state of affairs as somehow better than a satisfying one—unless they are reaching their conclusions fallaciously or from false information, which is, again, the point (because true beliefs can only follow from true premises without fallacy).
Nozick’s mistake is to think there is some ontological category of “intrinsic good” that relates to no other actual evaluation of what is good. But that is the position that is illogical. Because there is no other way to derive the conclusion of “good” for any agent if it is not agreed to be good by that agent.
This is why we have to persuade people to do things they don’t initially deem to be good. Persuasion can be honest (we can present true facts, and a conclusion that follows from them without fallacy) or dishonest (we can lie to them and try to manipulate them with false information or fallacious reasoning). But true conclusions for the agent can only come from the former (because everything following from the latter is by definition false).
So Nozick cannot get to “x is an intrinsic good” except by an honest act of persuasion toward the agent: he has to list true facts, and reach a conclusion from those true facts without fallacy, that the agent would agree with. But then it cannot be an “intrinsic” good he is talking about, as always one must appeal to what the agent deems most satisfying to itself. Good therefore can never be intrinsic. It can only be false (a false conclusion of what is good, arrived at by false premise or fallacy or both) or true (a true conclusion of what is good, arrived at by true premises without fallacy).
Nozick is therefore confusing truth with intrinsicality. It can be true that avoiding the machine (except for occasional entertainment) is good. But it is not good “intrinsically.” It is always only good because the agent in question agrees it is good (and to be true, that agreement must come from true premises without fallacy). Which therefore always reduces to “because the agent deems it most satisfying.”
Nozick thus conflates erroneous conclusions (“because the agent deems it most satisfying” owing to a falsity or fallacy) with correct ones (“because the agent deems it most satisfying” owing to valid reasoning from true information sans distortion) to argue that the latter are therefore “intrinsic,” when in fact they are still as derivative as anything else; the only issue is whether they are concluded to be good from true information, not whether they can be good without being satisfying.
How does the person know that the one is more satisfying than the other, though? They can assume that what they experience at a given time is unsimulated (for e.g. the reasons given in your Not in a Simulation article), but it would seem that the moment at which a simulation becomes dissatisfying isn’t the one in which their reality is replaced by the simulation, but rather the subsequent one (if any) in which they learn that this has happened.
This would be like the case of a person who unknowingly eats an apple with a worm in it, and would be disgusted if they knew this had happened. The disgust doesn’t occur when they eat the apple, but after the fact when, if ever, they learn about the worm.
I do agree that “intrinsic value” doesn’t have an obvious coherent meaning without reference to satisfaction states. It isn’t clear though that comparing satisfaction states alone encompasses a person’s motivation in rejecting the machine. In evaluating the machine, a person compares these states:
1. Having an unsimulated experience and believing it’s unsimulated.
2. Having a simulated experience and believing it’s unsimulated.
3. Having a simulated experience and believing it’s simulated.
— the machine offering “2”. They may reject the machine on the basis that “3” is less satisfying than “1”, but by that token “2” would also be superior to “3”, and if “1” and “2” are identical subjectively, on what grounds can the person favor “1” over “2”?
Asked and answered: The Objective Value Cascade.
They are not identical objectively. And intelligent beings are capable of knowing that. Unless you cripple their mind. Which intelligent beings are able to appreciate the negative value of.
“I will reduce you to a pig. You’ll like it. Trust me.” is not a convincing argument to any rational being.
To clarify, what I mean to compare is the experience of the one vs. the experience of the other. If an intelligent being were in a simulated world without being informed of it, what would be the tipoff to them that their world isn’t real? Or, alternately, the point at which their subjective experience would diverge (in a less-satisfying direction) from its non-simulated counterpart?
You are still stuck on “But I could still cripple and trick them.”
Which means you are still missing my point.
Ask a rational person “Can I cripple your brain and its capacity to reason, deprive you of all choices, and turn your entire existence into a sham, as long as in that state you will agree it feels good?” They will always answer no. And until you understand why, you will not understand anything I have said here.
If this person said that the reason for their saying no is that having their brain damaged, their choices removed, and their existence made a sham would lower their net satisfaction (at least long term), what I’d be curious to know is what reason they’d give if they believed it would raise their net satisfaction instead? (even if minutely — e.g. allowing them to live one day longer.) Would the reason involve comparing satisfaction states? If not, what place would it have in the system of objective analysis you describe?
Because they can only be satisfied knowing they aren’t that person already now. And the only way to be sure they aren’t is to be sure they’d never answer yes. Then the probability is reduced to that of a Cartesian Demon (that someone else did it to them without their knowledge or consent), which is a negligible probability.
This is what fundamentally undermines all magic pill arguments.
You cannot escape this by destroying their rationality, because then you are no longer asking this of a rational agent.
This is why querying and comparing possible future versions of yourself is essential to all rational decision making, as explained in my Cascade article.
Read it.
It isn’t obvious though what would stop the machine from being able to simulate the life of someone who is indeed sure that they aren’t in a simulation, on the basis of as much evidence or conviction as they could desire. (e.g. they may live in a world where simulations aren’t possible, or they may live the copy-pasted life of someone who would never even consider entering a sim.)
If the rationality of a choice is evaluated solely based on its impact on net satisfaction, though, what would stop the agent from plugging in on rational grounds if they queried their future hypothetical self in vs. outside of the simulation and determined that the former was more satisfied? They may still object out of instinctive aversion, but it isn’t clear that that objection would be based on comparing satisfaction states alone.
You aren’t listening. You keep repeating the same folly: asking what would be the case if you can lie to people and destroy their capacity to reason.
That is what no one will consent to.
And “but I could trick them into never knowing or remembering that they did” has no effect on this conclusion because rational people know that—which is why they will reject all magic pills, precisely because they have to to be sure they never accepted one (and thus are not there now).
Read the damned articles I directed you to.
I thoroughly discuss magic pill scenarios in Goal Theory Update and objective querying of future selves for comparing outcomes in Cascade.
Stop ignoring me and just repeating yourself over and over.
Read the refutations I have already made of these ideas.
That goes to the question of why they’d be so opposed to the machine in the first place, since they’d be choosing between these states:
1. Outside the machine, sure they’re outside, satisfaction level X.
2. Inside the machine, sure they’re outside, satisfaction level >X.
If anything, state 2 would provide the stronger safeguard against anxiety, since the person would only feel as much as they programmed—in state 1 they may still feel it on an irrational basis.
From Goal Theory Update:
Adapting that to the Nozick machine scenario in which the cost is contact with reality, a person using this reasoning would seem not to be taking into account the much greater amount of satisfaction the machine would grant over time vs. the temporary distress they’d feel over plugging in. Later in the article you suggest any temporal difference should be discounted since every individual moment of a pill-free life is more satisfying than a pill-affected one, but that doesn’t seem compatible with the stipulation that the machine increases net satisfaction (whether by simulating the life of a machine-rejecter or other means).
If a person’s aversion to plugging in then isn’t based on avoiding anxiety or the temporary distress over plugging in, how do we distinguish it from an irrational bias? (Even if that bias is beneficial in a way other than to the individual, e.g. to the human race.)
You seem to literally be ignoring everything I say.
Rule Number One: True conclusions cannot follow from false premises. Repeat that out loud until you understand it.
Now follow along.
An objectively informed and rational person knows that the two options will be either “an essentially drugged-to-bliss prisoner in an isolated cell never meeting anyone or doing anything” or “less happy free person who gets to be with people and do things.” They know, from that POV, the former would be worse (they can tell that the person in future 2 has the better outcome than the person in future 1, which is more satisfying to the person choosing which future to have).
You want to bypass this by adding “we will destroy your capacity to reason and thus figure out or suspect that you are in that horrifying state.” But the person deciding between the two outcomes will not see that as more satisfying. It’s terrifying. So they will not choose it. “But you’ll feel better” is not persuasive, because feeling better in a sham state is not satisfying. “But I’ll trick you into thinking it’s not a sham state” is not persuasive, because you have to destroy them as a person (by destroying their reason and even their agency) to produce that outcome.
True conclusions cannot follow from false facts. A large amount of sham satisfaction cannot trump a small amount of real satisfaction. Because the horror that you might be in the sham state will reduce your satisfaction below it. This is knowledge possessed by the person choosing their future. And you cannot deprive them of that knowledge.
The person in future 2 will also objectively be horrified at the fate of the person in future 1. That the person in future 1 is rolling entirely on false beliefs and a destroyed reasoning ability (they are, essentially, a lobotomized lunatic hallucinating in a cell) will be evident to the person in future 2, and so the person in future 2 also knows they are better off than the person in future 1. They are therefore more satisfied in that state.
Then if you explain to the person in future 1 the position of the person in future 2 and ask whether they would rather be them, even they would answer yes. So both future persons agree person 2 is better off and therefore in the more satisfying state. This is true even if you prevent person 1 from realizing that they are person 1. They will still agree that they would rather be person 2. You are just preventing them from choosing (and from having the information requisite to choose).
That they falsely believe they are not person 1 when they are does not make them not be person 1. So the satisfaction state of person 1 is not comparable to the satisfaction state of person 2, because person 1’s satisfaction state is fake—it is literally false. And true conclusions (such as, which person you would rather be) cannot follow from false premises. This is why the objective value cascade always trumps the subjective beliefs of outcome states—because those can be false; whereas the objective choice won’t be.
Hence you cannot appeal to the deceived person in future 1, because their opinion is false. And all the other persons in the model know this. Since no true conclusion can follow from a false premise, the opinion of the person in future 1 is not informative to either of the others. They know their opinion is based on true information and therefore their conclusion is true: person in future 2 is better off than the person in future 1, and in no way would the person choosing those futures want the horrifying fate of the person in future 1.
Appeals to pleasure (“but you’ll be a completely contented mentally crippled lunatic alone in a cell for all eternity”) do not work here. Because you cannot remove that knowledge from the person choosing the fate, or from the person who can congratulate themselves for having dodged it—and for that very reason can be satisfied they did: because they know they would never have chosen it.
This is why no magic pill argument works.
The only person whose satisfaction is based on the truth is the person in future 2—and the person choosing it. And therefore their conclusions from their satisfactions are the only ones that are true. And therefore only their opinion matters to the person deciding between them.
We can illustrate this with a real world example: it is literally medically possible right now to lobotomize someone, surgically destroy all their external senses, and stimulate the pleasure-satisfaction centers of their brain electrically with a computer 24/7. This would subjectively provide more felt satisfaction than not doing this to them. So why does no rational person on Earth agree to do this to themselves? Because they objectively know that would be horrifying and not genuinely satisfying. All the satisfaction would be pure sham. And therefore utterly pointless. They may as well be dead. And true conclusions about what choices are best cannot follow from false premises.
It isn’t obvious to me how that is the case—while true conclusions can’t follow from false premises, experienced satisfaction can still be produced from false beliefs, and under a hedonistic model experienced satisfaction is the only available criterion. Person 1 may be wrong in believing that they’re person 2, but that in itself wouldn’t seem to lower their experienced satisfaction level; only the chooser knows the difference, and they also know that as soon as they plug in that knowledge will be moot and they’ll be as satisfied as person 1 and at least as satisfied as person 2.
The chooser however would also know that they’d be able to program person 1 to have all the confidence in their reality that person 2 has (more, even). Any reason to be confident in the reality of person 2 could also be given to person 1—the chooser wouldn’t be motivated to program a feeling of horror for themselves.
This doesn’t however destroy the experience of reason and agency, and that experience would seem to be the only mechanism by which the chooser, person 1, and person 2 are satisfied in the first place.
That same level of conviction is also available to person 1, however (at the chooser’s discretion); as false as it is, it would be experienced the same way, and the experience is what the chooser is judging.
Attempting to stimulate someone’s brain this way meanwhile with existing technology would appear to fall well short of producing a Nozick-like experience; a person’s brain would quickly be desensitized and they’d lose the capacity for elevated satisfaction along with their ability to function in the world (they need to be fed, protected etc. while hooked up and someone needs to pay for all the drugs and maintenance). People seemingly have good reasons to reject this kind of life—though the addictive nature of opioids etc. illustrates that they frequently decide otherwise.
You are still trying to reduce value to mere felt satisfaction rather than genuine satisfaction. Which is irrational.
I’ve explained this to you five different ways now.
If you still don’t understand what I am saying, I cannot help you.
I only hope you never do this horrible thing to anyone. Even if you cannot understand why they would deem it a horrible thing to do to them.
My understanding is that under hedonism, genuine satisfaction itself is only valued so far as it produces felt satisfaction — given that, like this article observes, people only have access to what they feel. If we say that person 1 and person 2 have (at least) identical felt satisfaction but that only person 2’s satisfaction is genuine solely because of person 2’s external environment, that would seem to be an argument similar to Nozick’s — that people value something beyond “how [their] lives feel from the inside”.
I’m not a philosophical hedonist.
My understanding of this article’s argument though is that even someone with the simple goal of maximizing felt satisfaction wouldn’t be justified in plugging in, since they’d be able to feel more satisfaction outside of the machine. (Because, for instance, by not plugging in they wouldn’t have to be uncertain as to their reality, and would be able to enjoy knowing the full context of their experiences.) Would you say that the benefit of not plugging in extends to or depends on something that the person can never feel firsthand?
No. It depends on what the person making the choice knows (because they are the one choosing). Not on what the victim of that choice knows. That can factor into the decision, but they are not the one deciding.
And a correct decision for them (a true imperative proposition for them) can only follow from true premises.
Feelings cannot negate truth. Hence mere feelings are not the sole goal of life, since “life” here entails cognitive awareness of living and thinking and deciding. Kill that, and you de facto murder the person. That isn’t about “feelings.” It’s an objective fact of what happened, appreciable to anyone with a cognitive awareness of living and thinking and deciding.
Satisfaction is meaningless if the person experiencing it no longer exists (being functionally destroyed) or is being lied to (and their satisfaction is a sham).
How would a person appreciate/experience this awareness, though, other than as a feeling? A similar question would apply to the qualification “meaningful”—would a statement like “the fire is hot” have meaning if it weren’t relevant to one’s potential feelings?
We could say that person 1’s felt satisfaction shouldn’t be a factor at all in the chooser’s decision, but that would seem to conflict with the anti-machine argument from the article.
My understanding of that argument is that the choice is indeed based on felt satisfaction: person 1 simply experiences less of it overall due to the distress of being unsure whether or not they’ve plugged in. If life in the machine is considered undesirable even in a case in which net satisfaction is increased (vs. a life outside), that’s what suggests to me that the choice is made based on something that can’t be felt. (Which would seem to agree with Nozick’s argument, i.e. that deception isn’t worth an increase in felt satisfaction, even a tremendous increase.)
The person making the decision does experience this awareness as a feeling. You are again confusing the person making the decision with their victim.
This is the same mistake: confusing the feelings of the decider with their victim’s (the feelings of their future self).
Obviously the decider is making a satisfaction-based decision: they will not be satisfied being that victim, nor obeying what they know to be a false imperative (whereas they will be satisfied obeying a true imperative and avoiding that victimization).
When they query their two future selves, the victim and the free person, all three agree being the victim is the worst state of the three. Which makes that the wrong decision. And indeed, all three of them will agree it is (even the victim who remains unaware they are in that worst state).
So the satisfaction occurs then: at the decision. It is at that moment the most satisfying decision. Which it has to be. You cannot be motivated by satisfaction that does not yet exist. The satisfaction has to motivate the decision, so the felt satisfaction determining the decision has to occur there. The future potential satisfaction is a metric (it is one of the factors at play in weighing options), but it is not present to cause the decision. What causes the decision is the current satisfaction state.
This is why Nozick’s argument doesn’t work for real sims—as my article pointed out. There the decision matrix is reversed: every rational person would prefer to live in a well-run sim. Hence one must look to the reason why the rational decision differs between those two conditions. One involves the horror of totalizing deception; the other does not.
If the decision isn’t made (at least mostly) on the basis of future net satisfaction, though, how would someone be able to determine which choices best serve their long-term wellbeing? Someone for instance might refuse to try a new and strange-looking type of food, even though they might like it or come to like it. To get the future satisfaction of enjoying the food (which is presently only imagined and hypothetical), they’d have to make a choice that was dissatisfying in the moment.
Being informed of all of the facts however wouldn’t in itself remove the chooser’s irrational biases, or biases that would be irrational in the context of a Nozick simulation. (The same way that the person with the spider phobia may have a better night’s sleep when not informed that a spider is under their bed.) In the context of a simulation, the horror at being deceived wouldn’t correspond to any impending harm or danger other than that of the deception itself, which would seem to make the horror’s motivation circular and unreasoned—the chooser doesn’t like it because it would displease them because they don’t like it… This would seem to apply similarly to a preference for real vs. simulated experiences: how do we assess it as self-justifying vs. self-undermining?
The decision is based on the reality of future net satisfaction.
Even the hypothetical future deceived self would agree with this. Which is why the decider makes the same decision they would: to not imprison themselves in a system of fake satisfaction.
Feelings do not trump reality. Reality trumps feelings. Only genuine satisfaction is satisfying to pursue.
In theory though the chooser’s goal is to maximize the satisfaction they feel firsthand, in all future moments. Choosing to pursue simulated over real satisfaction only produces felt dissatisfaction until the simulation begins—after which they feel satisfaction in excess of what they’d feel in reality (even if they believe, while in that state, that they’d feel more satisfaction outside the simulation).
This doesn’t satisfy the initial desire for real experiences, but the chooser’s goal isn’t necessarily to fulfill that desire but to “hack” it in pursuit of maximum felt satisfaction. (If the desire itself can’t be directly altered.) This is similar to how people approach many experiences in reality: a person smells a flower, or paints a painting, or cooks a meal not for the sake of the activity per se but the felt satisfaction accompanying it.
But not in defiance of factual truth. True conclusions cannot follow from false premises. So when a rational agent deciding a future for themselves knows a future self will be deceived, they will not find that future as satisfying as one in which they are not. Indeed, even their deceived future self will agree with this and tell their past self not to choose that (they will simply not know they are already in the deceived condition).
Thus present satisfaction of the decider motivates the actual choice. Future satisfaction is only a target, and depends on true conclusions, which can only be what follows from true premises. This is the entire point of my article The Objective Value Cascade (the present self queries its optional future selves and seeks a consensus among them on the best future state) and my discussion of Magic Pill scenarios.
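To make the shape of that procedure concrete, here is a minimal sketch in Python (all names, numbers, and the scoring rule are hypothetical illustrations of the reasoning pattern just described, not the formal apparatus of that article): the present chooser queries each candidate future self, treats future felt satisfaction only as a metric, and discounts it entirely wherever it rests on deception.

```python
# A minimal sketch (hypothetical names and numbers throughout) of the consensus
# query described above: the present chooser evaluates each candidate future by
# asking every resulting future self, queried with full information, whether
# they would endorse having been sent there.

from dataclasses import dataclass

@dataclass
class FutureSelf:
    label: str
    felt_satisfaction: float   # how good the state feels from the inside
    deceived: bool             # is that feeling premised on false beliefs?

    def endorses(self) -> bool:
        # Queried with full information, a future self endorses the choice
        # only if its satisfaction does not rest on deception.
        return not self.deceived

def rational_choice(options: dict[str, list["FutureSelf"]]) -> str:
    # The choice is caused by the chooser's present satisfaction with each
    # option: if any future self withholds endorsement, the option is
    # horrifying to the chooser now and scores negative infinity; otherwise
    # the future selves' felt satisfaction is summed as a metric.
    def present_score(selves: list["FutureSelf"]) -> float:
        if not all(s.endorses() for s in selves):
            return float("-inf")
        return sum(s.felt_satisfaction for s in selves)
    return max(options, key=lambda name: present_score(options[name]))

# Example: the machine promises more felt satisfaction, but its future self
# would not endorse it once informed, so consensus fails and it is rejected.
options = {
    "plug_in":  [FutureSelf("machine prisoner", felt_satisfaction=100.0, deceived=True)],
    "stay_out": [FutureSelf("free person", felt_satisfaction=60.0, deceived=False)],
}
print(rational_choice(options))   # -> "stay_out"
```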
Now read those. Actually read them. Stop arguing in a circle and constantly repeating yourself and ignoring everything I say.
If the chooser does defy factual truth for the sake of increased felt satisfaction, though, what would you understand the consequences to be? Those are the consequences that would generate the dissatisfaction in the chooser—“The thought of being deceived dissatisfies me because of X”. (e.g. “the spider I don’t see might bite me.”) If there’s no “X”, i.e. if being deceived in and of itself is the source of dissatisfaction, it isn’t obvious that the chooser isn’t choosing based on a maladaptive or harmful bias. If the chooser is choosing out of a bias, so would the future selves they’re querying—they’d seem to need to justify this bias externally. (The same way that for instance someone would assess their aversion to the thought of trying an unfamiliar food or taking up an unfamiliar hobby.)
The consequences would be that they would be functionally killed and imprisoned in a lie forever.
Which is not satisfying to the chooser. Nor would it even be satisfying to the victim. If you asked them if they’d want that to be what happened to them, they’d say hell no. That the victim is deceived (and thus doesn’t know this has happened to them) does not change either fact.
As I understand their logic, the reason they attach that value to contact with reality is that it provides the most reliable access to the greatest amount of net present and future felt satisfaction (vs. a “drunkard’s walk”). Theoretically, then, the satisfaction they feel whenever they choose truth over deception (in any context) would be derived from that belief. Being presented with a Nozick machine would seem to require the belief be reassessed—the chooser can’t reject the machine simply because it horrifies them, because that horror is derived from the belief in question. Similarly, the chooser can’t disregard or devalue the potential future felt satisfaction to be gained through the machine, since they would be doing so on the basis of the same belief. If they do retain the belief, meanwhile, and experience less net felt satisfaction as a result, it would seem to fall short by its own standard—in which case, why adhere to it so extremely?
Now you are simply making no sense.
This has nothing to do with “beliefs.” It has to do with what is objectively actually the case.
Obviously a rational person (indeed even themselves as a future victim) does not consider fake satisfaction actually satisfying in any sense warranting choosing it. Hence they would and do never choose it. That is an objective fact. Not a mere belief.
The satisfaction state of the chooser always decides the choice (that’s literal neurophysics). Their assessment of possible future satisfaction is then negatively assessed for being fake, an assessment even a future victim would share. So it doesn’t matter that fake satisfaction feels nice. It is objectively horrifying and thus does not feel nice to the informed chooser.
This is the whole point of the OBJECTIVE value cascade, and why rational agents always reject magic pills.
You do not seem to be acknowledging or responding to any of this. Yet I have repeated it at least half a dozen times now. If you still fail to grasp my point even now, you are clearly not competent to ever grasp it.
That the machine objectively involves deception however doesn’t itself seem to dictate that the deception is objectively horrifying—unless the chooser regards deception as horrifying regardless of its consequences (i.e. impact on future feelings), a stance which wouldn’t seem to have a rational basis. This would still be the case even if the negative reaction to the deception were universal—just as for instance people’s finding human babies adorable or rattlesnakes terrifying may be universal but not necessarily rational reactions.
By the same token, simply gaining information, truth, or contact with reality doesn’t seem to be enough to increase a person’s felt satisfaction—that requires a specific emotional/psychological mechanism that conceivably could produce desirable emotions in response to any input. (Which goes to why people need rationality in the first place.) Presumably a person would be very reluctant to gain knowledge in exchange for becoming depressed or anhedonic, and if I understand right it’s taken as a given in Nozick’s model that sham satisfaction is preferable to authentic misery or torture. If knowing the truth then isn’t its own imperative, or the most important one, why sacrifice feelings for the sake of the truth—especially at the behest of short-term feelings (the “discomfort of the moment”)?
So, I’ve already answered this. Half a dozen times now. You are just talking in a circle.
No feelings are being sacrificed. The feelings that motivate are the decider’s, not the victim’s. And the decider consults the prospective feelings of future selves, but both of them would agree the condition chosen is horrifying and advise him/her not to choose it. So s/he and they are unanimous as to what the correct rational decision is.
That’s it. That’s the entirety of why that is the only rational decision. For the one doing the deciding. It literally does not matter that the potential future victim does not know they are the victim they would loathe to be; they still would not want to be that, and would still tell their past self not to choose it. Which is how that victim can know they aren’t in that chosen condition, and thus need not dread it, given their confidence that they would never have rationally chosen it—unless they have been lobotomized and their reasoning destroyed, which is precisely the outcome they would refuse to be subjected to if allowed to rationally decide.
To clarify, I understand the decision as being the result of these steps:
1. The person determines that knowing the truth is always preferable to being deceived.
2. The person is presented with the Nozick-machine choice.
3. The person feels horror at the prospect of plugging in.
4. The person queries their future selves and determines that they also would prefer not to be plugged in.
5. The person rejects the machine.
My last questions about the source or rationality of the chooser’s horror are with regard to (1), since that step seems to be necessary for (3) and (4). It’s on the basis of (1) for instance that the chooser disregards their future selves’ positive feelings, which otherwise would be a crucial factor in any decision. Earlier you gave this account for why a person would prefer not to be deceived:
“Because they would always prefer to know they are in a simulation and thus what its rules and opportunities are, because greater satisfaction can be achieved through knowledge rather than aimless wandering, by the basic principle that an informed agent can pursue all goals more quickly and effectively than by a drunkard’s walk.”
I understand that as appealing to net positive feelings; i.e. a person feels more of them if they hold (1) than otherwise. If in a contrived Nozick-like situation this isn’t the case, though, how does the chooser justify holding to this principle?
I have, again, answered this question half a dozen times already.
“The person determines that knowing the truth is always preferable to being deceived” because that outcome is objectively less satisfying to them (the person making the choice), and it is objectively less satisfying to them because both future selves will report their own subjective dissatisfaction with that state.
That one of them is deceived is irrelevant, because un-deceiving them would not change their assessment, and their being deceived is precisely the outcome all three find dissatisfying (the deceived victim, the undeceived future self, and the one deciding, whose net satisfaction after this analysis is the only one actually physically causing the decision, because their potential future victims don’t have time machines).
In step 1, though, it isn’t clear that the person can reach this conclusion from querying their future selves, since they haven’t yet established the (universal) undesirability of being deceived—if the person doesn’t have that stance yet, their future selves wouldn’t either. (Much like how the person may not know that they don’t like a particular food, for instance, until they’ve tried it.) It isn’t obvious meanwhile that the person’s objection is purely to the abstract idea of being deceived, since presumably they’d be much more likely to choose deception as a way to avoid misery or severe pain; indeed favoring truth over deception in practical reality is a good way to avoid misery or pain in the first place. If the person isn’t choosing out of pure axiomatic aversion to deception, though, on what other basis do they arrive at the position of “step 1”?
You seem to be confused now. The only decision that matters is that of the rational agent making the choice. To speak of “universal undesirability” is nonsense here.
The only undesirability relevant is to the agent making the choice. And a correct decision can only be a decision reached from true premises without fallacy. It thus does not matter what fallacious or uninformed conclusions would be, because those will be false. And all we are interested in is what is true.
The only “universally correct” conclusion is therefore what follows from true premises without fallacy for all agents similarly situated: what they themselves and their future alternative selves would all conclude, given true premises and no fallacies.
That will be universal (the same conditions always entail the same conclusions). And that is what has been proved. No rational informed agent will choose the horror of such a prison. You have objected by introducing falsity (lies, fallacies, false premises), but that can never produce a rational conclusion. So you are defending irrationality. I am defending rationality.
And again I pray you never subject anyone to this outcome. Because your insistence on it is terrifying.
That would seem to follow from the position of “step 1”, however, rather than being a reason for forming that conclusion in the first place. In theory, the agent is interested in what is true, and is motivated to choose truth moment by moment, “universally” in all scenarios, because this leads to the maximum amount of future positive feelings. If the future feelings are facilitated by false beliefs, this doesn’t specifically impact the rationality of the decision to enter the machine, since that decision is made based on the feelings and not the validity of the beliefs.
We could say that the validity of step 1 doesn’t depend on the chooser’s feelings (i.e. that the truth is always the desirable option even if it produces lowered/negative feelings), but then that would seem to require a rational agent to choose for instance a lifetime of non-simulated torment over simulated positive feelings. It even becomes difficult to define what would make an outcome desirable or not in this model. If a life of true experiences isn’t a life of net-positive-feeling ones, what makes it attractive to the chooser?
This is a false analogy. Obviously, if you only have to choose between two hells, your choice is bogus. A rational person would prefer death (and even more prefer just an ordinary real life of net positive satisfaction). If that is being denied them, then they are not choosing at all—they are being coerced.
We are only talking about rational choices between real options. Not coercion that leaves no good option.
This is why it matters that the triad of the present chooser, and his two future alternatives, all agree on the best outcome. That decides. This would hold even for choosing between optional hells in a state of coercion—it’s just that, there, all three agree their choice is bogus because they are being coerced away from choosing what they actually want. They all still will agree on what they actually want. It will just not be either of the options they are being coerced into.
You do seem to do this a lot. You keep conflating real choices with choice under coercion (whether via lobotomy or, now, being denied even death, much less just an ordinary real life). This is perhaps why you keep failing (or refusing) to understand anything I am saying.
You thus continue not to listen to the lessons of the Objective Value Cascade or the Magic Pill Paradox.
Hence, for example, because satisfaction is an explorable space, it is extremely hard to design any uncoerced outcome that is reliably net negative. A person trapped in a TV show unable to ever act or genuinely interact with anyone for eternity has zero degrees of freedom; they have almost no explorable satisfaction space. They are in prison with a TV set and can’t even change the channel. But a person you just drop into a random jungle has a very large satisfaction space to hunt down and explore; the more so if there are real people there with them.
Anything you’d deem negative actually becomes positive with a suitable change of behavior, location, attitude, or goals, which are all possible when you have real control over your decisions and real people to share them with. Because of the vast degrees of freedom in such a space, it is extremely hard to situate someone where they have so few degrees of freedom they cannot even escape a net negative life no matter what they do or even what they think about what they are doing. It would have to be intelligently designed to be miserable (an intentional hell) or a rare accidental congruence of bad luck (an accidental hell), and it is hard to explain why that would be the only situation a person can choose for themselves. They can get stuck there (not by choice at all), but then they can exit (this is the entire logic behind voluntary euthanasia).
You want all of these facts to disappear by designing a hyper-bizarre scenario where no one can choose anything but variously worse hells. But that isn’t how logic works. The un-bizarre scenarios still exist, and in fact are the scenarios almost everyone actually is in or ever will be in—particularly anyone who will ever actually be faced with the option to enter a Nozick machine or not. The probability approaches zero that only Nozick machines will exist, and not genuine VRs (because it is almost impossible to have the one and not the other); and every rational person will choose the genuine VR over the Nozick machine. And for all the reasons I have explained to you a dozen times now.
You cannot escape this conclusion by inventing ridiculously bizarre lifeboat scenarios.
The reason I bring up scenarios like these is to determine:
1) what leads a rational person to the a priori stance—which seems foundational to the rest of the reasoning you describe—that deception is so universally undesirable that to choose it requires coercion/lobotomy/etc. in the first place, and
2) why the choice would be sensitive to this type of coercion at all if, in theory, the agent isn’t choosing based on the net positivity/negativity of their future feelings and/or discounts all feelings produced by deception.
To help clarify point 1), we might consider the case of a person who spends years raising a child and enjoying being a parent to them, but under the false belief that the child is related to them genetically. If this person were to learn the truth, they may lose all interest in parenting the child and regret the trouble they took to do so, or they may not care at all, or they may be glad at the news. Whatever their reaction, though, the person’s ignorance or knowledge of the child’s genetics would seem to be a separate consideration from the person’s rationality, i.e. lack of bias. In fact, the rationality of their reaction may be assessed very differently depending on their cultural environment. If a person rejects the Nozick machine out of pure revulsion, similarly, how do we determine that the revulsion is an expression of reason rather than bias?
Such a person however would still seem to be able to experience positive feelings, even great ones—and as long as those feelings’ net positivity exceeds what they’d feel in reality, this would seem to require that, if they choose against the machine, they do so on the basis of something other than “feelings optimization”.
No, they won’t. After weeks of that, realizing that their abject loneliness will never end and that they will never have control again, the horror will swamp any possible pleasures (this is exactly represented in the final scene of Being John Malkovich, which is precisely Nozick’s scenario). That is why no one in the objective-valuing triad would recommend it be chosen.
Don’t confuse entertainment (just touring a Nozick machine for a few hours or days, a la Total Recall) with the actual thought experiment (permanently and irrevocably living there, a la Being John Malkovich). We are not talking about the former. This entire article and discussion is about the latter.
You try to dodge this with coercion (you first tried lobotomization, a la Magic Pill scenarios; then just recently you tried denying them all other options). Which dodges the entire question Nozick’s experiment was supposed to be testing. That’s where we are in this conversation. You keep evading what we are supposed to be talking about, in order to make no point relevant to it.
And again, nothing else you said just now is relevant, either. For example, your “raising a child” analogy bears exactly zero relevance; nothing of the horror of being trapped in Nozick’s prison relates to merely being mistaken about irrelevant trivia.
That would seem only to take place though if the person programmed themselves to have that experience, i.e. the experience of horror and loneliness. In the Malkovich case the person after all isn’t deceived; they experience being trapped in a body that isn’t sensitive to their volition. Someone in a Nozick machine by contrast has an experience that would be indistinguishable from anyone’s outside of it (though, per stipulation, better-feeling overall than what they’d feel otherwise).
This is why I ask about the foundation of the chooser’s stance that deception (or lobotomy, etc.) isn’t worth any amount of positive feelings—in the “raising a child” scenario for instance one person may regard the news about their child as irrelevant, while another may find it extremely important; in each case their reaction seems to depend on their preexisting stance as opposed to their level of information. By my understanding that’s relevant to the issue of whether the chooser should choose based on what their future self would want in an undeceived state—as with the parent, simply not being deceived doesn’t qualify the person as rational.
Calvin, you are back to describing a lobotomy again. This is not the thought experiment. Nozick is not describing coercive horrors. He’s describing informed decisions.
“What if we lie to them so that they never know they are in that prison and that everything they experience is fake and no one they meet exists and none of the decisions they think they are making they are ever actually making.” You are describing a horror. “But the victim doesn’t know that!” is moot because the victim isn’t the one deciding to mutilate their brain like this. The one deciding this knows this. They have not been lobotomized yet.
Indeed even the victim of this lobotomy would sternly advise the one deciding this not to do it, because otherwise the victim knows they might in fact be in that horror, which would negate all positive feelings about their condition—unless you lobotomize them so completely that even their reason no longer functions. But the victim would never advise their future potential imprisoner to mutilate their brain so completely that they can’t even reason anymore, and will just be the equivalent of a pig trapped in an orgasm machine, never able to discover the useless horror of their fate. They may as well be dead.
“But the decider can choose to victimize themselves this way” is true (just as the decider can choose to set themselves on fire or chop off their own limbs), but this is moot because no rational being would choose to do that to themselves. They will always prefer a world (even a virtual one) where the people they meet really exist, and the decisions they make they really are making. They will always be more satisfied with actual degrees of freedom to explore; they will never be satisfied with zero degrees of freedom, trapped in a prison, forever alone and completely crippled, physically and mentally, beyond any ability to ever reason or act.
So as long as the decider knows this will be the outcome, it will never satisfy them to choose it. And indeed their own future victimized self would agree with this assessment entirely, and advise precisely the same decision: to not go there (it won’t matter that they don’t know they are already there: they would still advise sternly against being sent there). And since this will be the common opinion of all three people (the one deciding, their future victim-self and their future non-victim-self), this is the only decision that can be described as objectively rational.
To me it isn’t clear though that deceiving someone so thoroughly that they don’t believe they’re in a simulation would in itself lessen/negate positive feelings—the person in the simulation for instance may be living out the life of someone much more intelligent than themselves, or who has learned more about the world than the original person ever would. (Those conditions don’t necessarily entail positive feelings, but the person presumably wouldn’t create a simulation that didn’t entail them to begin with.) At the very least they could be living the simulated life of people in a world like ours, in which people have every subjective reason not to believe they’re in a simulation. What the person would lack isn’t necessarily a capacity for positive feelings or reasoning, but the knowledge of their true reality.
That goes to the question of why the decider (and their future selves) would choose this knowledge if it would result in a worsening of their future net feelings, and thus why they’d regard the alternative as horrifying. (A person indeed might regard the prospect of gaining total knowledge about reality as horrifying itself, since it would leave them with nothing to explore or discover.) Their future deceived self would be in a similar position: going by feelings produced, why would they choose to have the illusion broken knowing that this would make them feel worse overall?
You just literally ignored everything I said.
The question is not whether a deceived person can be fooled into being happy. The question is whether a deceived person would rationally agree to that—whether knowing they are (or even likely are) actually an isolated, disempowered prisoner would be satisfying to them. Not whether the deceived person would feel satisfied. That is irrelevant to all rational decision-making.
The decider is not in that state. And so that state is irrelevant to their rationally informed decision. The victim who is in that state, meanwhile, will report their desire not to be in it (even when they don’t know they already are), so they are not reporting back to the decider a go but a pass. And the alternative person is grateful not to have been in it, so they also will report a pass rather than a go. All three thus agree as to the objectively rational decision that is the most satisfying to them all.
That you can bypass rationality with horrific deception and crippling brain surgery cannot escape this fact. Indeed, the horror of that (and thus its undesirability for them) is precisely what all three agents agree on. Not going in is therefore the only rationally informed decision. That you can give irrational reasons to go in has no bearing on this.
If being aware of the condition is what causes negative feelings, though, on what grounds would the decider object to entering a state in which they’re unaware? It would seem that all they’d be giving up of final value would be an obstacle to their positive feelings—and indeed if their current knowledge dissuades them from plugging in, they’d be giving up any amount of positive feelings. How would they justify using this standard if it requires that they choose based not on feelings they actually will feel in the future, but hypothetical ones that would only be felt under circumstances they know won’t take place?
Because rational decisions must follow from true premises. This is why even their future victim would tell them not to victimize them; this is the only way that person can rest assured they aren’t in that negative-feeling generating nightmare.
Everything else (like lobotomization, preventing any such realizations; or denying them alternatives) is not a rational choice, but coercion, and thus moot. A rational agent will not be satisfied destroying their own rationality to live in an orgasm machine for eternity.
This is why it matters to people in relationships, for example, whether the partner they are living a happy life with actually loves them or is only being paid to pretend to. “But if they never find out, isn’t that a good outcome?” never gets a positive answer from anyone in that situation, much less anyone able to rationally contemplate it.
This is why you really need to read my discussion of Magic Pill paradoxes. I covered all this. You just keep ignoring me.
To make a decision at all however requires the person to do so under the influence of biases or preferences, which aren’t necessarily rational or removed by having full knowledge. A person who chose exclusively on the basis of which outcome produced better future net feelings may be very unlike anyone who actually exists, but by the standard of “feelings experienced” would seem to have an advantage and wouldn’t clearly be operating under a false premise. I assume they would reason along these lines:
1. I want my future net feelings to be as positive as possible.
2. Plugging in will result in greater future net feelings than any alternative.
3. Therefore, I should plug in.
Objecting (even involuntarily) to being deceived on principle presumably serves premise 1; discarding the premise because of that objection therefore wouldn’t seem to follow logically. This would also apply to objections such as e.g. wanting the emotions of the people in one’s life to be sincere, or not wanting one’s actions to be preprogrammed—if these are “subroutines” of premise 1, their only value to the person is as a means of reliable future positive feelings. If one of these routines is projected to decrease feelings, why would the person retain it?
So now you are confusing “feelings” with satisfaction. Satisfaction is not possible with fake feelings. Hence the relationship example I just gave. People want to be satisfied, not have orgasms alone in a tube forever. And Magic Pill solutions simply don’t work for rational agents.
Why pursue satisfaction other than in the broader pursuit of premise 1, however (“I want my future net feelings to be as positive as possible”)? Does it confer a benefit beyond the feelings produced? Likewise, how is dissatisfaction considered negative other than as a negative feeling? i.e. if a person were dissatisfied and didn’t feel it, how would they know they were dissatisfied?
The feeling of satisfaction produced (and feeling of horror avoided) requires the chooser making that choice and the future self being confident it was the choice made.
This is what all three persons agree on (it’s what satisfies the chooser; it’s what satisfies the avoider; and it’s what satisfies the victim to believe has been avoided). That’s what makes it the only objectively rational decision.
This is a plain statement of the facts of the case.
You have tried to “get around” this by inventing coercive scenarios (lobotomies, constraints). The fact that you have to invent these coercions and destructions of faculties to get a different result is why your options don’t answer the question of what the most rationally objective decision is.
For example, lobotomy:
The feeling of satisfaction produced (and feeling of horror avoided) requires the chooser making the choice to retain their faculties and the future self being confident that that was the choice made.
This is what all three persons agree on (it’s what satisfies the chooser; it’s what satisfies the avoider; and it’s what satisfies the victim to believe has been avoided). That’s what makes it the only objectively rational decision.
In that scenario the chooser, as part of their deliberation, asks themselves (consciously or otherwise) “am I confident that my reality is and will remain unsimulated?” If yes, their feelings improve; if no, they worsen. If their ultimate goal is maximizing future net feelings, running this loop gives them an obvious advantage as long as there’s any chance their reality is unsimulated—an unsimulated reality such as ours being full of hazards, obstacles, and limitations outside their control.
What I mean to examine is why that loop would indicate against their choosing a permanent self-deception that guarantees better future net feelings than they’d have otherwise, and thus why they’d need to endure any horror in their choice—let alone so much that it overwhelms the positivity of the subsequent feelings. If they run the loop with an ultimate goal other than maximizing future net feelings, what would be a good description of that goal?
The correct framing is “everything in my life a lie, all the people I think love me don’t even exist, and I’ve never done anything or made any choice at all, and I’m all alone and powerless and just riding along helpless in someone else’s television program.” That is not a satisfying state. Not to any of the three persons in the analysis. And theirs are the only opinions that matter.
Assuming that checking these conditions (alone/not alone, choices determined/undetermined, etc.) is done with the goal of maximizing future net feelings, though, the person wouldn’t have obvious grounds to reject the machine even so. A (sufficiently empowered) person could attach a satisfaction condition to any state; creating such a condition is only worthwhile if it’s necessary for optimized positive feelings. If there is no necessity, there’s no rational imperative to check for the state; doing so would only reflect a bias or harmful motivation/desire. Being fully informed or confident that one isn’t in a simulation in itself doesn’t remove these biases—and without identifying them, how would a person distinguish an irrational choice from a choice that’s simply distressing in the short term?
You are again ignoring everything I said.
Feelings are not relevant. Satisfaction with one’s state of being is the metric. That is not simply “feelings,” because feelings can be false. And no one wants false feelings. That’s why no one would choose to live forever in a mere orgasm machine or to have a wife who only pretends to love them.
How is satisfaction defined, though, other than in terms of feelings—specifically the lack of a negative “craving/wanting” feeling and possibly the presence of positive feelings? Likewise, what makes false feelings undesirable apart from their possibly leading to future negative ones? It seems as if a person with maximal satisfaction would also need to have the most positive possible feelings, regardless of the person’s circumstances—like you say, a person could derive positive feelings even from being stranded in a hostile wilderness. Given this, why would a rational person react with horror to the idea of reacting to any input with maximum positive feelings?
Read Aristotle: when you ask of any thing you want, “Why do I want that?” and then ask the same of that, and so on, until you get to the one thing that you want for itself and not for any other reason or thing (the core reason, therefore, that you want anything at all), pursuing that achieves maximal available satisfaction (everything else is a frustration state). You are satisfied choosing that course of action more than choosing any other. This satisfaction is achieved at the moment of the decision (in fact it is how you feel at that decision that causes the decision you make), and in the future (when one in the future reconciles what is rationally available with what one has chosen or pursued, which is the thing you are choosing to achieve).
This requires the state to be true. False satisfaction is not satisfying except to the deceived, but knowing you are or even might be deceived is dissatisfying. Therefore there is no rationally achievable state by which you can be both truly satisfied and deceived. Thus, the decider will never choose to be deceived. His future deceived self will tell him please do not do that, because it would dissatisfy them to know or even suspect that it was done. His future undeceived self will thank him for not choosing that nightmare state. And the decider himself will recoil in horror at the prospect of subjecting themselves to that state. Therefore, the only rational decision is to not choose that state. Any other decision is then by definition irrational.
This is why no one does anything for just “feelings.” If they did, they would jump at the chance to just sit in a medicated haze and have stimulated orgasms forever. The reason why no one would accept that fate for themselves is what I have been explaining to you over and over and over again, and you keep ignoring me and asking the same dumb questions, not grasping the distinction between being rationally satisfied with oneself and one’s condition, and merely feeling good. You are like a heroin addict defending being forever blitzed out rather than the enlightened rational agent realizing why that is really not what they want for themselves.
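To make the Aristotelian regress described above concrete, here is a minimal sketch (the chain of wants is a purely hypothetical illustration): you keep asking “why do I want that?” of each answer until you reach the one thing wanted for itself.

```python
# A minimal sketch (hypothetical example data) of the Aristotelian regress
# described above: keep asking "why do I want that?" until you reach the one
# thing wanted for itself; that terminal end is what every other want serves.

def terminal_end(want: str, because: dict[str, str]) -> str:
    """Follow 'I want X because I want Y' links until something is wanted
    for its own sake (i.e. it has no further 'because')."""
    seen = set()
    while want in because and want not in seen:
        seen.add(want)          # guard against accidental circular chains
        want = because[want]
    return want

# Hypothetical chain of wants ending in satisfaction with one's life.
because = {
    "money": "security",
    "security": "freedom from anxiety",
    "freedom from anxiety": "satisfaction with my life and myself",
}
print(terminal_end("money", because))   # -> "satisfaction with my life and myself"
```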
If the way a person recognizes this core motivating object is by how it feels, though, that would seem to make positive feelings themselves the core object—a person with negative feelings for instance presumably wouldn’t be maximally satisfied. To a person with hypothetical complete control over their neurology, meanwhile, it isn’t clear that any external circumstance would mandate any particular emotional response—they could respond to any input (even lack of certainty as to the reality of one’s environment) with e.g. a profound sense of joy and wellbeing—people indeed commonly aspire to this sort of emotional control. To such a hypothetical being—as opposed to a typical human operating under uncountable biases—what advantage would a life outside the machine have to one inside it?
You are just talking in circles now.
Asked and answered.
If you want to know what someone with “complete control over their neurology” would choose for themselves, read The Objective Value Cascade.
If you want to know why an objective rational agent would never in that scenario choose to magic pill itself, read “The Magic Pill Challenge” in Goal Theory Update.
If I understand the Value Cascade article correctly, the being for instance values a universe containing other beings vs. a solitary one because the former produces better feelings. This however would seem to require a neurology that produces those feelings under those specific circumstances in the first place. If the being were to have perfect information and no existential threat (such as through the Nozick machine), why would it serve them to respond to a given input with negative rather than positive feelings?
That is not what is argued in Objective Value Cascade.
It argues exactly the opposite.
Try actually reading the article and interacting with its argument—in comments on that article, not here.
Maybe that will force you to finally read and engage with what’s actually in it.