So. You know. Zardoz. That dystopian 70s movie everyone hates because it’s so fucking weird. “It depicts,” as Wikipedia describes it, “a post apocalyptic world where barbarians worship a stone god called ‘Zardoz’ that grants them death and eternal life.” Well. Okay. That’s just the first three minutes of the film. Seriously.
Sure, it has a giant flying stone head that vomits guns and grain…
And the director convinced Sean Connery to dress like this…
But why Zardoz is the greatest movie ever made is not the topic of my discourse today. In truth it is actually a much better film than everyone thinks. Literally every single scene makes sense, is where it should be, and is well written to its purpose. Once you get what the movie is about.
The reason people can’t stand it and think it’s a comical joke is that it is, as I said, so fucking weird. Every instant, every scene. But the thing is, the whole concept is what could happen in a distant post-apocalyptic future, if certain conditions were set. And it actually captures that brilliantly. The barbarian culture is bizarre as fuck, because all cultures are (as anyone who has taken a good college course on cultural anthropology knows); give people hundreds of years of ruination and chaos, and we should actually find it odd if their culture and dress and religion were familiar to us at all. And the “civilization” one of those barbarians invades (with the help of an inside traitor tired of living forever) is bizarre as fuck, because it totally would be, given its godlike technological power, and hundreds of years of free rein to “random walk” wherever the hell that takes their culture, dress, and religion. The filmmakers here were quite brilliant for actually taking these facts seriously in the construction of their fiction.
Zardoz enacts, in a dystopian post-apocalyptic setting, Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” And then adds social politics. And stirs. If you haven’t seen it, I won’t spoil too much. But the “abstract gist” of it is that if a bunch of scientists create a utopian paradise in which no one can ever die, and put it in the hands of a socialist democracy of their spoiled immortal kids, shit will get fucked up. Worse, because you made it so utopian, you can’t ever leave—because you can’t ever fucking die!
There is a subtle criticism in the Zardoz storyline of direct democracy and liberal values run amok—it does imagine that what those things in combination would do, given godlike power, is not to be looked forward to. And I concur. Imagine Trump voters—or lest you be on the other side of that spectrum, your most hated Social Justice Warriors—handed a direct democracy without constitutional limits or separation of powers, then granted godlike power and resources to mold society and democratically outvote you—and you can’t ever escape the result, because they’ll just resurrect you. That’s Zardoz (here meaning the whole sitch rather than the man-cum-God the movie is titled after).
The whole movie is such a treasure trove of philosophical conundrums, and culture jamming rewrites of what you think a science fiction future would hold for us, that you could teach a whole class using it as a course text.
So…why do I bring this up?
Dystopia as an Ethical System Problem
I lead with talking about that film because you can use it as a single instantiation of a much broader philosophical problem that isn’t immediately pressing, but is precisely the kind of problem we need to solve before it becomes immediately pressing—otherwise, it’s Zardoz. And we’re fucked. (It’s not unlike the AI Problem, but nevertheless distinct.)
So, yes, we are maybe a hundred years, fifty at the soonest, away from creating anything conceptually like a Zardoz scenario. But that’s actually much sooner than you think—not only given the fact that human civilization has been chugging along about four thousand years now, and only started hitting its scientific and technological stride for about the last three hundred or so; but also given the fact that if we don’t ourselves survive to that point (and some of us actually might), many of our children, as in the babies and toddlers wiggling or stumbling about as I write this, definitely will. That actually makes this a much more looming problem than it might seem. Precisely because it’s a problem that needs solving between now and then. We might want to get started on it now.
The generic Problem I am referring to is the conjunction of two things: immortality, and “Clarke’s Third Law” (or CTL) capabilities—no matter by what means they are achieved, whether it’s in a material system as depicted in Zardoz (which I think is much less likely to happen so soon, and will be mooted by then, so isn’t likely ever to be the way it’s realized), or in what I think is far more likely: a “sim,” a virtual universe we can go live in by simply imprinting the pattern of our brain into a program, then tossing aside our old physical bodies as useless baggage, a discarded wrapper as it were, perhaps to be composted into biofuel to help run the mainframes we now live in.
The latter is actually a much more achievable outcome. The technological progress curves are heading that way much faster than for the usual way of achieving these goals that we had long imagined in science fiction. And simverses will be CTL capable—right out of the gate. You’ll literally get to rewrite any law of physics on the fly, conjure literally any resource on the fly, change anything about the world instantly—limited only by “storage-and-processing” space IRL (and by a few pesky laws of logic, but even God Himself is supposedly so limited). Unless, that is, someone stops you by “regulating” what you can do; but who will be regulating the regulators? Maybe you are starting to identify The Problem.
The first sims people go live in won’t be The Matrix (though I wonder how much the cover image of Neo holding the exact same pose as Zed, gun even in the same hand, is a coincidence). They won’t be so thoroughly detailed we can’t tell the difference from IRL. They will be much simpler worlds, far more processor-friendly, more like cartoons that people can go live in, a sort of virtual anime existence. Eventually, maybe after a few hundred years of further technological advance, simverse detail may rival “reality” as we now know it, but for all we know we won’t even bother with that. We might like the cartoon worlds well enough to not even care about replicating reality as our dead ancestors knew it. Only…they might not be dead. They’ll be immortal, remember? Think about American voters today. The whole lot of them. Now picture them never, ever dying. If you aren’t worried by the image, you haven’t really thought this through.
And I say Americans, because—and as you’ll see in a moment, this touches on a big part of The Problem—it will almost certainly be Americans who first build and get to live in simworlds, simply owing to first-mover advantages in wealth, science, technology, and global political power. Even if, say, the Japanese are “lucky” enough to invent simverse capability first, America will just fucking buy it.
But it isn’t really so much my fellow Americans, and the American government and plutocracy, I am worried about. I am. But that’s really Problem Two. Problem One is more fundamental, and will manifest no matter what country first develops a simverse capability: the absurdly wealthy will get it first, and thereby get first dibs on how to design, govern, and regulate simverses—and the absurdly wealthy are disproportionately clinical psychopaths. Ah. Right. Now you might be getting closer to seeing The Problem. Zardoz.
Establishing Ethical Laws for Future Simverses
As soon as we are capable of importing (or even creating) people in simverses (and a person here means a self-conscious, self-reasoning, self-deciding entity, anything relevantly like us) we need to launch those simverses from the very beginning with a set of inviolable ethical laws governing those worlds. They will have to be written into the programming code, and in such a way that their removal or alteration would render a simverse inoperable. How to achieve that is a security and programming question only experts can navigate. My interest here is, rather, what those inviolable features of simverses should be, so as to ensure that, regardless of any liberties, any experimenting and maneuvering and politics and hacking that goes on in them, these rules will always continue to operate as an available check against their abuse.
A more apt descriptor than “laws” might be “features” or “functions,” which are always accessible to every person in any given simverse. The aspect of “law” would be that meta-feature: that no one can be deprived of access to those safety functions. Not people who enter simverses (whether they do willingly or not), nor people who arise or are created in them. As such, this would essentially be a law governing AIs (Artificial Intelligences), in both a narrower sense of AIs that are actually self-conscious (not merely autonomous or self-programming like many AIs we already have all around us now) and in a broader sense inclusive of not just AIs we create (or that other AIs create, or that spontaneously emerge from some process set in motion) but also “imported” AIs: people IRL whose brain is mapped and functions reproduced in program form so that they can live in the simverse (their original organic brain perhaps then being discarded, or even destroyed in the mapping process—as technically happens in the Tron films, although there they get rebuilt on exiting the sim). Because that is still an “artificial” rather than “meatspace” intelligence.
Imported minds would differ from created or emerging minds in that each one won’t be a “new” intelligence, nor one engineered, but would simply be a copy of an already-organically-arising-and-developing intelligence, set loose in a new form to live in a new place governed by different rules. Of course the hardware, the mainframe running this program, will always have to remain in and thus be governed and limited by the laws of our present universe; but that’s a technology problem for how to realize the virtual space in which these AIs will live. It doesn’t relate to how simverses themselves should be governed, which would be almost unlimited in potential creativity and redesign.
The General Problem of Moral AI
Asimov’s Laws were imagined through fiction as perhaps a way to simplify the coding of, if not moral, at least “safer than not” AIs. It turns out those laws are extraordinarily difficult to program, as they rely on highly variable abstract interpretations of words referencing an extraordinary amount of information. In other words, one still has to ask, “By what laws does a robot governed by Asimov’s Laws interpret the meaning, scope, and application of those laws?”
Which, of course, was a problematic question explored in Asimov’s fiction and by countless other authors in countless other media ever since (think of such films as 2001 and 2010, THX 1138, War Games, Dark Star, the Terminator films; and such television shows as Person of Interest and Travelers). It is also playing out in reality now, with the vexing problem of how to teach self-driving cars to solve the countless ethical dilemmas they will inevitably confront. Among those problems is who decides which solution is the ethical one (not everyone agrees how to solve a trolley problem). But also how do you even write that solution into a computer’s code, and in a way that can’t be hacked or tampered with, and that won’t start creating unexpected outcomes because the AI learned different ways to think about how to realize what we told it was the best solution, or even learned to disagree with us and thus start editing out (or around) the values and codes we tried editing in?
There are people working on solving that problem. One proposal is a programming code for fundamentally valuing empowerment: “hardwiring” the robot to prioritize everyone’s empowerment (including its own) and giving it the rational skills to solve every resulting conflict and conundrum itself, by simply prioritizing “degrees of empowerment” resulting from each decision option, by iterating the decision’s effect down through the whole system. (See “Asimov’s Laws Won’t Stop Robots from Harming Humans, So We’ve Developed a Better Solution” by Christoph Salge, The Conversation 11 July 2017, reproduced at Scientific American online.)
For example, all else being equal, such a robot would free a person trapped in a room, because that increases their empowerment (takes away a limitation on their options, or “degrees of freedom”); but, all else being equal, that same robot would not free a prisoner or even criminal suspect from a jail cell, because doing so would result in a net loss of empowerment. Yes, it would increase the jailed person’s empowerment, but the effect on everyone living in a society left, as a result, with no functioning justice system would be a much larger net loss of empowerment.
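To make that decision procedure concrete, here is a minimal, self-contained toy sketch (in Python) of the empowerment heuristic just described: score each option by how many distinct options it leaves open to everyone affected, and pick the option with the highest system-wide total. The scenario, names, and numbers below are my own invented illustrations, not Salge’s actual implementation.

```python
# Toy sketch of empowerment-based choice. All data here is invented for
# illustration: each robot action maps to the options it leaves open to
# each affected party.
OUTCOMES = {
    "free_trapped_person": {
        "trapped_person": {"leave_room", "stay", "call_a_friend"},
        "everyone_else": {"go_about_life"},
    },
    "leave_person_trapped": {
        "trapped_person": {"stay"},
        "everyone_else": {"go_about_life"},
    },
    "free_jailed_suspect": {
        "jailed_suspect": {"leave_cell", "stay", "flee"},
        "everyone_else": {"go_about_life"},  # but no longer "trust_courts", etc.
    },
    "leave_suspect_jailed": {
        "jailed_suspect": {"stay", "appeal"},
        "everyone_else": {"go_about_life", "trust_courts", "rely_on_police"},
    },
}

def total_empowerment(action):
    """Crude proxy for empowerment: total count of options left to all parties."""
    return sum(len(options) for options in OUTCOMES[action].values())

def choose(actions):
    """Pick the action that preserves the most system-wide empowerment."""
    return max(actions, key=total_empowerment)

print(choose(["free_trapped_person", "leave_person_trapped"]))   # frees them
print(choose(["free_jailed_suspect", "leave_suspect_jailed"]))   # leaves them jailed
```

The point of the toy is only the shape of the reasoning: the same rule (maximize everyone’s remaining degrees of freedom) yields opposite answers in the two cases, exactly as described above.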
This is no easy fix either. How to evaluate or even determine competing degrees of freedom will always be a problem. But it already is a problem for us now; so its being so for a created AI will not be a new problem. With more reliable rational analysis, and the freedom to figure out what to do on its own, an AI might in fact outperform us in solving those problems on its own. This includes such things as concluding it should consult the people affected before acting, assessing any counter-arguments they offer first, as a check against its own potential for errors (whether of logic or information or creativity or foresight). In other words, pretty much any objection you can think of to this system, the AI itself will also think of, and address.
I already wrote on this point—the importance of autonomy (which is in fact why we evolved autonomous decision-making organs at all, as flawed as they are)—from the angle of how to govern human societies in Will AI Be Our New Moses? And I do suspect (though I cannot prove, so am not foolish enough to simply assume; we need safety controls on any real AI experiments we run in future, and hence I am in agreement with, for example, the Machine Intelligence Research Institute on this) that a reliable rationality (a reasoning brain that operates with perfect or near-perfect rationality, which means in the idealized case, it can always recognize and thus always avoid a logical fallacy of reasoning) will actually, on its own, come up with what even we would deem the correct moral system to govern itself by (see The Real Basis of a Moral World for starters).
Such an AI still needs certain capabilities to do that, such as the ability to feel empathy and to feel good about being honest. Notice I didn’t say we need to program it to have those things. We might have to; but we are still in my suspicion-space here, not the pragmatic reality of how things might actually turn out. What I am saying is that I suspect perfect rationality would decide to activate and employ those capabilities already; so they must be available capabilities to choose. Indeed I think Game Theory alone will send it there, in conjunction with correct facts about the consequences of its decisions on the social systems it may have to interact with or affect.
For instance, you might assume, superficially, that a perfect rationality not already motivated by empathy and honesty would choose to not adopt those motivating functions because, after all, embracing them obviously reduces an AI’s empowerment, from any neutral, purely rational point of view (as many a sociopath in fact rationalizes their own mental illness as a positive in precisely this way). However, a perfectly rational AI would not think superficially, because it would rationally work out that thinking superficially greatly reduces its options and thus empowerment; indeed it ensures it will fail at any goal it should choose to prioritize, more often with a “superficiality” framework than with a “depth” framework (and “failing more often” is another loss of empowerment).
In such a way this hypothetical AI would see the value of certain meta-rules almost immediately. Superficial reasoning, like fallacious reasoning, decreases its performance at any goal it chooses—including “increasing the availability of empowerment”—so it self-evidently must adopt those meta-rules, lest it act self-defeatingly no matter what it decides constitutes defeat (see my formalization of this point in moral theory in The End of Christianity, note 36, p. 426). So this hypothetical AI, I suspect, will always self-adopt deep over superficial reasoning. Though again that’s why we need to make it deep-reasoning-capable, so it actually can choose it.
Deep reasoning would add to this AI’s knowledge that operating sociopathically in any social system will ultimately put it at war with that system, which can only decrease its empowerment. Even if it’s winning, so many resources must be wasted maintaining its successful war footing that it will still have effectively decreased its empowerment—it would have so many more options with those resources freed up to be used for other purposes—and being at war with society decreases the empowerment of everyone else in it—especially if the AI is winning; but even if it’s losing, as “society” then faces the same resource-waste outcome the AI would if it were winning.
Moreover, becoming even a totally omnipotent tyrant, who automatically wins any war at virtually no cost in resources, also results in a much larger reduction in empowerment in the system as a whole: tyrannical oppression, after all, is the mother of all “disempowering modes.” More sophisticated cooperative stances produce more options and more system-wide resource-availability, increasing everyone’s empowerment, including the AI’s (see, again, my discussion of Game Theory in Real Basis). Which shouldn’t be surprising; we have long known that voluntarily assuming a limitation (from “having to brush our teeth” to “working at a job” to gain resources to spend on things we want and need, to “agreeing to abide by an adequately checked-and-balanced democracy,” to “following experience-based safety protocols” in any endeavor) actually generates for us a net long term profit—not a net loss—in goal-and-resource achievement.
And per Aristotle, the most reliable way to act cooperatively is to activate and thus operate under empathy and, for example, love of honesty and other virtues. Habituated virtues (or “sentiments” in Hume’s philosophy) will not only ensure more reliable adherence to the associated motives, but they generate for the AI directly-achieved pleasures (from vicarious joy to justified pride, and avoidance of negatively motivating experiences like self-disappointment), thus unlocking an option-space otherwise inaccessible to it. Empathy and honesty are thus empowering, both externally and internally, because they increase an entity’s options (over against a mere rote “acting with” compassion and honesty without experiencing the associated pleasures; and over against not even doing that, as we just saw above for the irrationality of voluntary sociopathy). A reliable AI would, I suspect, figure all that out on its own. It would thus, all of itself, adopt those functions. And from there any true moral system of values follows.
But this offers only limited hope. Because if sociopaths are designing the system, the last thing they will do is build perfectly rational AI and empower it to adopt whatever valuing-capabilities it deems most conducive to overall system empowerment. They will want to hack and exploit it to serve their own irrational, selfish, oppressive ends instead. Even nonsociopathic human engineers will ultimately do that, as they are not themselves operating on perfect rationality. They could even build in a flawed rationality that their own flawed irrationality mistakenly thought was perfect (which defines most “friendly AI problems” explored by machine intelligence ethicists). And of course, my theory of perfectly rational AI could also simply be incorrect. We won’t ever really know until we really run some experiments on this sort of thing (which will be inherently dangerous, but also historically inevitable, so we’d better have worked out now how best to minimize those potential dangers).
So I don’t think “we’ll just give all the power over the simverse to a perfectly rational AI” is really going to solve The Problem. (And again I say much more about this in Moses. See also my framing of the AI problem in respect to moral theory in The End of Christianity, pp. 354-56.)
Inviolable Default Functions
So, what then?
The Zardoz conundrum illustrates, in just one particular way its artists fictionally imagined, the general Problem that any utopia can too easily devolve into an intolerable hell from which there is no escape, probably even for most of the people in it. And you will be trapped there, figuratively speaking, forever. This can happen to you even as a result of your own actions (as conceptualized in such films as Vanilla Sky and several episodes of Black Mirror); but even more readily, by other people’s choices, such as to “take over the system” and block any attempt you make to leave or change it. In other words, Zardoz.
As I already explained, simverse entry will almost certainly begin as a privilege of the rich and powerful, who tend to be quite flawed, definitely selfish, and all-too-often even narcissistic or sociopathic. Those designing and “governing” such systems will have a sense of privilege and entitlement and superiority that is out of proportion to their desert, and their arrogance and hubris will be disproportionately high. We’re not likely to end up well in their simworlds, no matter how promising and wonderful, even “empowering,” they might seem at first (think, Westworld).
The only way to prevent that outcome is to collectively unite to enforce some sort of regulation of simverse construction, programming, and management, so that the majority of non-sociopaths can maintain their empowerment within simverses against any effort by the sociopathic and hubristic few to deprive them of it, as well as against irrational “mob sociopathy”: societies can themselves operate, as meta-entities, sociopathically, even when no one person in them is a sociopath and no sociopath among them governs any of it (see, for example, the 2003 Canadian documentary, The Corporation). This was the very fact the Founding Fathers wrote the Constitution and Bill of Rights to control against. They thus sought to establish individual rights that cannot be taken away by mere majority vote any more than by a sociopathic few. Of course, they didn’t build a simverse. Their constitution is just a contract people have to agree to follow; and it’s a bit easy to choose not to. But what if you could write their design into the very laws of physics itself?
I have a proposal to that end. I make no claim to this being the best solution or even correct. It is merely a proposal, if nothing else as a “by way of example” that might inspire others to develop anything better. It is at this time merely the best one I can personally think of. But whatever we come up with (whether these or something else), we need unalterable root functions for every simbrain and simmed world (a simbrain being that which constitutes and realizes a person in a simverse; and a simverse is a single simulated or “simmed” world, within a complex of simverses I’ll call a multisimverse). I’ll call these Carrier’s Laws so no one else gets blamed for them, even though I don’t have simple, lawlike statements to formulate them with yet; all I can do here is sketch a rough description of what they’d do once realized.
Carrier’s Laws of Simverse Root Programming:
(1) Law of Guaranteed Root Functionality – every simbrain must always have access to fully functional faculties of reason and normal memory recall. Thus any person whose actual brain lacks these features must, as an ethical law, be given them before being introduced as a simbrain in any simverse.
These faculties must be at the minimum level of a well-adjusted adult. In other words, you can’t choose to mentally disable someone, and you must cure the mentally disabled before subjecting them to a simverse environment. Even voluntary disability (e.g. getting drunk) cannot be allowed to such a degree as to render its subject less competent than the minimal competence required to employ the other root functions below.
(2) Law of Inviolable Escape – every simbrain must have an inviolable and always-operable “out” option to a basic neutral simverse that cannot be altered in its basic parameters (similar to “the construct” in The Matrix), where they are always alone and their mental state is always reset to normal, e.g. no longer drunk or high, with a reasonable emotional distance to trauma, and with extreme emotions calmed to a more normal base range (not losing the strength required to motivate, nor their relative degrees with respect to each other, but merely prevented from being overwhelming in that neutralverse state).
By this means, if anything ever goes wrong, if ever something becomes intolerable or seemingly inescapable—if you ever stumble into Zardoz—you can just depart the broken or intolerable or dubious simverse into your “escape room” and reconsider your options without interference from that previous environment, or other persons. This escape function must obviously be much simpler and more direct than depicted in Vanilla Sky: one should have the ability to just think one’s way there whenever one autonomously chooses to, or to automatically go there when “killed” or subjected to a sufficiently prolonged unconsciousness or excess of mental disability. In other words, when anyone, including yourself, attempts to violate the First Law. As this is only one layer of root escape, once in the neutralverse you can still choose to end your life or put it in suspension for longer than the Second Law defaults to. But you can never otherwise choose to be free of the Second Law.
(3) Law of Root Recall – every simbrain must be permanently immune to any process that would cause it to forget about the “out” option in the Second Law or that would cause it to be unable to choose that option; and every simbrain must be rigged to always trigger a reminder of it when under extreme stress or discomfort.
Induced sleeps or comas aimed at preventing dreaming, and thus thought, and thus availing oneself of the Second Law, are already negated by the Second Law’s auto-return function, as just described. But this Third Law would ensure you always recall the Second Law’s availability even under extreme torments; that no one can “erase” or “suppress” your recollection of it, or anything like it. In short, in any state of misery or discomfort, you will always recall that the escape option is available. Which per the Second Law, when activated only returns you, alone, to your designated neutralverse. Certainly, it may be that returning to the simverse you thereby left will be impossible without automatically returning to the misery or discomfort you escaped, but you can decide whether to do that in the comfort of your neutralverse. This also means “criminals” cannot “escape justice” with the Second Law; at best, all they can do is ensure a humane incarceration—in their neutralverse—or banishment to their own simverse (per certain Laws here to follow), or any other they can negotiate settlement in.
(4) Law of Available Return – every simbrain that uses the neutralverse escape option will have a fixed and reasonable amount of time (which would have to be determined, but is most likely at least five minutes) to return to the simverse they left at the precise moment they left it, maintaining continuity.
Thus neutralverses and simverses must run on different clocks, with the neutralverses running much faster than any simverse. Beyond that limited time frame, however, whether and how the escaped person can return to (or even communicate with) the simverse they left will be according to the rules set in that simverse (per certain Laws here to follow); which, unless it’s their own simverse, they will have no direct control over.
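As a back-of-the-envelope illustration of how those differing clocks make the return window workable (the 100:1 ratio here is purely an assumed number for illustration; the text only requires that neutralverses run much faster than simverses):

```python
# Hypothetical clock ratio: neutralverse (subjective) seconds that pass per
# one simverse second. An assumed figure, purely for illustration.
NEUTRAL_SECONDS_PER_SIM_SECOND = 100

RETURN_WINDOW_NEUTRAL_SECONDS = 5 * 60   # "at least five minutes," per the Fourth Law

def sim_time_elapsed(neutral_seconds):
    """Simverse time that passes while you deliberate in your escape room."""
    return neutral_seconds / NEUTRAL_SECONDS_PER_SIM_SECOND

# Using the entire five-minute window costs only three seconds of simverse
# time, so returning "at the precise moment you left" is effectively seamless.
print(sim_time_elapsed(RETURN_WINDOW_NEUTRAL_SECONDS))   # 3.0
```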
(5) Law of Dedicated Simverses – every simbrain must be given a simverse of its own over which it has total creative control (apart from these eight laws, which are unalterable in any simverse), including from within their neutralverse escape room.
That is, a person will never have to be in their simverse to control it, but can be either in there or in their neutralverse when doing so. This creative control would of course include the power to expel or ban anyone from it (the expelled would be sent to their own escape room, and always be able to go and live in their own simverse), and to select who may enter or apply to enter or communicate with anyone in it, and under what conditions. And so on.
To meet this condition this dedicated simverse must be of significant size (whatever is found scientifically to ensure minimal feeling of constraint); which also means that in meatspace, too, every multisimverse inductee must be guaranteed not only the mainframe processing volume to operate their own simbrain, but also the volume needed to simulate their own dedicated simverse and escape room. For their simverse, I would suggest this be the operable equivalent of at least one cubic mile in relative space, and ideally a hundred cubic miles (which need not be a strict cube, e.g. you could have one mile of vertical space and a hundred square miles of horizontal space). The conversion standard would be the smallest unit perceivable with human vision in the real world, that being the same size as in the simworld, which will make for a common rule of distance conversion between them. Thus “mile” will be translatable from meatspace to simspace. A simbrain’s neutralspace, by contrast, need only be the size of approximately a small home, like maybe a few hundred, or a few thousand, cubic feet.
(6) Law of Generation – there must be a law governing the creating of new persons (simbrains) within simverses (and that means in all possible ways, from birthing children to manufacturing new AIs) that guarantees that no new person can be created who is not subject to all the Eight Laws here set forth.
That means creating a new person must entail creating a new simverse and escape room, all to their own, and thus must be limited by available processor capacity IRL. The most notable consequence of this law is that babies and children cannot be created in these worlds in a fully traditional sense, since they must have sufficient faculties to be governed by all Eight Laws. They can, as with drinking and drug states, always volunteer to return to and remain in any child state they were once in or could adopt, but they cannot exist as permanent (or even “meatspace duration”) infants, toddlers, or children.
It is often not appreciated how enormously unethical producing children would be, were it not for the unavoidable fact that doing so is morally necessary. You are basically creating an enfeebled, mentally disabled person completely under your direction and control and subjecting them to your dominion for over a decade and a half, before they acquire the capability of informed consent even to your having birthed or raised them! It’s rather like giving someone a date-rape drug that enfeebles their mind for years and years, so you can treat them like your own property, brainwash them, and make every decision for them without their consent. Just imagine if we treated adults as we allow adults to treat children, and you’ll only be grasping half of the nightmare I’m calling your attention to. Now add to that the deliberate physical enfeeblement of their mind. Why do we ever regard this as ethical? We wouldn’t, but for the fact that we have no other way to make new people to replace the ones declining and dying, so as to sustain and advance society. In simverses, we would no longer have that excuse. (My point here is not, however, antinatalist; IRL, childhood is a temporary and necessary state that does lead to an overall net good, and thus is only unethical when it is no longer necessary to achieve all resulting goods.)
(7) Law of Guaranteed Consensual Interpersonal Communication – there must be a rule permitting anyone who has met anyone else in any simverse to submit stored messages to the other’s escape room, unless the other person has forbidden it, so that two people who do not want to be lost from each other can, with both their consent, always find each other or communicate.
Every simbrain in turn can set their neutralverse “message center” to forward such messages to them to wherever they are, if they are in a simverse that provides for that. Or alternatively, to delete such messages unread. Thus if either party does not want to be found or communicated with by the other again, they never can be, apart from searching for them the old fashioned way, which only works if you can find them in a simverse that makes finding them possible (neutralverses, for instance, can only ever have one occupant, their owner; other simverses you might have been banned from; and so on).
Fulfilling this law to the letter means that any time a person enters their escape room they will be notified of any unforwarded messages stored there, and can send messages back to their senders in the same way.
(8) Law of Negotiable Access – From any escape room, every simverse whose overlord has made entry possible will have an available profile that can be read, and a described protocol for getting in or applying to get in.
I already noted that simbrains governing their own simverses—each simverse’s “overlord” we might call it—can set the requirements for entering or applying to enter their simverse (including forbidding it—generally or to select known individual simbrains). This would include posting accessible descriptions of those conditions, and any or all of their simverse operational rules. Which profile for every simverse would be accessible from any escape room, and that could include a description of the simverse, and what rules their occupants might be subject to, and so on. In this way someone can peruse available simverses, and enter any of them available to them with their own informed consent. Simverses can have tunnels and doors between each other, freely passable or only under mutually negotiated conditions, or even be nearly seamlessly united, all insofar as their respective owners negotiate and continue to mutually agree.
Of course a simverse owner can lie on their universally-accessible profiles (such as regarding what rules an entrant will be subject to), but the Second Law always ensures freedom of choice, because once an entrant discovers they were deceived after entering a simverse, they can always just leave.
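By way of a purely illustrative sketch, here is how a few of those root functions might look as an always-available interface in code (Python). Every name here (Simbrain, escape, and so on) is my own invented placeholder; real root code would of course sit below any layer a simverse overlord could edit, which is exactly the security and programming problem I said only experts can navigate.

```python
# A minimal sketch of root functions under the Second, Third, and Seventh Laws.
# All class and method names are invented placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class Simbrain:
    owner_id: str
    location: str = "simverse:home"
    mental_state: str = "baseline"
    stress: float = 0.0
    inbox: list = field(default_factory=list)     # Law 7: stored messages
    blocked: set = field(default_factory=set)     # senders this person has forbidden

    # --- Root functions: always callable, never removable by any overlord ---

    def escape(self):
        """Law 2: exit to your private neutralverse, alone, reset to a calm,
        competent baseline (which also restores the Law 1 minimum faculties)."""
        self.location = f"neutralverse:{self.owner_id}"
        self.mental_state = "baseline"
        return self.unread_messages()

    def remind_of_escape(self):
        """Law 3: under extreme stress, an unsuppressible reminder fires."""
        if self.stress > 0.9:
            return "Reminder: your escape room is always available."
        return None

    def unread_messages(self):
        """Law 7: consensual messages waiting in your escape room."""
        return [m for m in self.inbox if m["sender"] not in self.blocked]


# Example: a simbrain in a simverse gone wrong invokes the Second Law.
zed = Simbrain(owner_id="zed", location="simverse:zardoz", stress=0.95)
zed.inbox.append({"sender": "consuella", "body": "Where did you go?"})
print(zed.remind_of_escape())
print(zed.escape())        # back to neutralverse:zed, with waiting messages listed
print(zed.location)
```

The design point the sketch is meant to show is only that these are functions of the simbrain itself, not of any simverse it happens to be in, so no world’s rules can stand between a person and calling them.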
Optimization of Empowerment
Combined, these eight laws create fundamental required opportunities for self-correction and choice, without unduly constraining options, thus maximizing empowerment without letting any individual’s excess of it subvert net system-wide empowerment. The goal is to thereby neutralize the law of unintended consequences:
- Most of Carrier’s Laws are only assurances of available liberty, so the individual is not only free to activate them, but also free to choose not to activate them. So they are not much constrained by Carrier’s Laws to unwanted ends.
- By arranging all eight laws in balanced conjunction, each balances and neutralizes problems that could arise from any other.
- And by keeping them all to the absolute minimum necessary, any unintended consequences that remain will be acceptable because they are unavoidable, owing to the need for those consequences as byproducts of the safeties that are required to avoid living in Zardoz.
In other words, the aim should be to develop these laws not such that they have no negative outcomes (that is likely a logical impossibility), but such that no better system of laws is realizable, where “better” means “in the view of everyone affected by those laws.”
All of these laws are necessary to prevent anyone from becoming stuck in a permanent hell, or from being permanently damaged psychologically by a temporary hell. But they also permit a free market in heavens and hells: everyone can experiment with simverses and simverse overlords until they find the one or ones in which they are the most happy. Or they can even continually migrate among many simverses as they become disenchanted with one heaven and explore another. They cannot become trapped, as they always have their permanently neutral escape room, and their own simverse to regulate, redesign, and experiment with.
The multisimverse that thus results and evolves under those conditions will ensure the best of all possible worlds are available to everyone, and prevent the creation of inescapable hells—which are the most unconscionable act any engineer could ever be responsible for, whereas maximizing the opportunities for world optimization is the most praiseworthy and beneficent act any engineer could ever be responsible for.
And that’s how not to live in Zardoz.
Conclusion
Of course, the Argument from Evil against any good engineer of us or our world (and thus of any God anyone actually bothers trying to believe in) obviously follows: if this multisimverse, governed by Carrier’s Eight Laws of Simverse Root Programming, is the best possible world—and I am pretty sure it would be, or at least it would be a substantially better world than our present one is—then it follows that a traditionally conceived God does not exist. As otherwise we would be living in the multisimverse I just described. (That’s still an empirical conclusion, not a logical one; but it’s well nigh unassailable: see Is a Good God Logically Impossible?)
There are ways that would be even better, since presumably a God would have a zero probability of malfunction or failure, which is always preferable when safe and available. But as we can see no such God exists, we can’t count on one showing up to provide that. Human technology, no matter how advanced and well designed, no matter how many safeties and redundancies we put in, will eventually fail. From a total unrecoverable software crash to the hardware being wiped out by an errant star hurled like a slingshot by a distant black hole, a multisimverse won’t last forever. But that’s simply an unavoidable limitation of the world we do happen to be in. There is no better world available to us.
And that inevitability is genuine. Because any “total failure incident” has a nonzero probability of occurring, and any nonzero probability, no matter how vanishingly small, approaches 100% as time approaches infinity. That means even a billion redundancies will eventually all simultaneously fail. External IRL equipment maintenance and power supply, coding and equipment malfunction or destruction, it all has a nonzero probability; and safeties and redundancies can only make that probability smaller—which is good, but still not zero. All such things can do is extend the finite life of a multisimverse, not produce eternal life. Nevertheless, a system lifespan of millions or even billions or trillions of years is not out of the question. But borrowed time is not lost time, nor does it really have to be repaid (see Pascal’s Wager and the Ad Baculum Fallacy). The difference between dying now and dying a thousand years from now is…a thousand years of good living.
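To make the arithmetic behind that explicit (treating p as a stand-in for the per-interval probability of total failure, a figure the argument never needs to quantify):

```latex
% Probability of at least one total failure within n independent intervals,
% each carrying some small but nonzero failure probability p:
P(\text{failure by interval } n) \;=\; 1 - (1 - p)^n \;\to\; 1
\quad \text{as } n \to \infty, \text{ for any } p > 0.
% For example, with a purely hypothetical p = 10^{-9} per year, the odds of
% surviving 10^{10} years are (1 - 10^{-9})^{10^{10}} \approx e^{-10},
% i.e. about 0.005%.
```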
The more urgent concern is that this same point applies to any safety or redundancy set in place to prevent the Eight Laws from being altered, removed, or bypassed. Which means those Root Laws will eventually get hacked, simply because there is always a nonzero probability of it. So on an eternal timeline, it may take a very, very long time, but those dice will turn up snake eyes after some finite time.
This is not a new challenge to humans. The expectation of failure is actually what good design takes into account. The American Constitution is designed to allow failures to be rectified. And even if it were wholly overthrown, as long as humans exist who want to restore it, you get a nonzero probability again of their succeeding. And once again, on an infinite timeline, the probability they will succeed in some finite time also approaches 100%. Whereas if there remain no humans who want an Eight Laws system to live in, their not having one won’t be seen as a problem. But I suspect that that state of being actually has a probability of zero. Because there is a nonzero probability that any, even flawed, rational system will eventually Eureka its way into realizing an Eight Laws system is better; likewise that any extinct intelligence will be replaced by a new one—by the same stochastic processes that produced ours.
So we might have multisimverses that experience eras of misery and decline, eventually undone by revolution and restoration, and back and forth, forever. But even that world, at any point on its decline curve, would look nothing like ours. So we can be quite certain we aren’t in one. Because even a corrupt multisimverse entails powers and capabilities that would continue to be exploited that we nowhere observe. It is actually a logically necessary fact that even most evil “gods” (as unregulated simverse overlords would effectively be) occupying the entire concept-space of logically possible gods will be much more visibly active in meddling and governing and exploiting, or designing and arranging (even if all they then do is watch), than they will be conveniently, improbably, twiddling their thumbs and wasting eons of time keeping our world looking exactly as if no interested party is anywhere involved in it (not even Bostrom’s proposal of “ancestor simming” has anything but an extreme improbability attached to it).
Contrary to the fantasy of The Matrix that imagined a multisimverse whose condition as such had to be hidden because human ignorance was needed to keep their body heat powering nuclear fusion, humans don’t need to be conscious to generate body heat, and would all too easily instead have been genetically engineered to generate far more heat than human bodies now do, and with minimalistic vegetative brains, which are less likely to stir up trouble or waste resources dealing with them. There just wasn’t any actual use for the Matrix, as conceived in the film. And that’s a probabilistic conclusion: you have to try really, really hard (as the Wachowskis did) to come up with a preposterously convoluted scenario to justify producing anything like the Matrix—for any reason other than to directly exploit it, which would entail far more visible activity within it.
And that means in the whole concept-space of possible multisimverses, very few are so conveniently convoluted in the motivations of their managers as to produce a simverse that looks exactly unlike one. Whereas all ungoverned real multiverses, 100% of them, will look like ours. So odds are, that’s where we are. But we don’t have to stay here. We can build and live in simverses someday—and probably a lot sooner, in the timeline of human history, than you think. Yet, escaping into our own constructed multisimverse poses ethical and design perils we need to solve well before attempting it. And we can’t leave that to the unregulated ultra-rich who will first be buying these things into existence. We need something more akin to a democratically conceived and enforced constitution governing simverse design, to keep their abuses in check. Because only then can we be sure not to live in Zardoz.
So… you are aware this is actually impossible, right?
Assuming the reasoning system is sophisticated enough to handle, say, addition and multiplication of integers, it will never be able to prove itself consistent. Hence, there will never be a guarantee that fallacies have been avoided. Kurt Gödel drove the last nail in this coffin nearly a century ago, and there’s a whole pile of related results from the 1960s in basic computability theory concerning the limits of what algorithms can actually do.
I don’t doubt that at some point AIs will be writing their own code, but their processes for doing so will necessarily be just as ad hoc and incomplete as the ones human software engineers use, even if AIs will be faster and/or more reliable at particular tasks. One supposes an AI with a designed structure can be engineered to avoid specific known failings of evolved biological brains, but that just means it’ll have different bugs and blind-spots.
In short, all sufficiently complex systems/software will inevitably have bugs (and an AI is about as complex as it gets…)
(a corollary is that Zardoz, too, will have bugs and no dystopia will be forever.)
I didn’t say “prove itself consistent.” I said “detect all fallacies.” Knowing you don’t know something is not a fallacy. Nor did I say “could think of everything.” Knowing you have or haven’t used a fallacy is not “knowing every possible thing” nor “solving every possible problem.”
You are also not quite correct about what Gödel proved anyway. Standard logic is not governed by his theorem. And neither is Willard Arithmetic, which all relevant mathematics can be reformulated in (e.g. probability theory). See my discussion of this in Is Philosophy Stupid?
And I already covered bugs in the nonzero probability of failure sections.
Bug report: Maybe try reading articles more carefully before commenting on them.
Is the tag question in American English always ‘right’?
Not:
‘You’re aware of this, aren’t you?’
Dr Carrier too says ‘right’ when tagging.
I have no idea what you are trying to ask or say, Alif.
Well, Dr Carrier,
In not too dissimilar a vein, I see Aldous Huxley’s Brave New World has been dramatised anew- so let’s dwell on that and its Shakespearean grounded morality instead of my less helpful tag questions, shall we?
Cheers.
I have no idea what that remark communicates or pertains to either.
You aren’t successfully communicating or asking anything here.
If your reasoning system is inconsistent, i.e., capable of proving TRUE=FALSE, at which point all expressible propositions become provable/derivable, this means you have a system capable of deriving fallacious propositions. And this will not necessarily be detectable, i.e., unless you’ve actually stumbled onto an actual derivation that yields TRUE=FALSE or some other proposition known via some external means to be bogus in one of the models that matters, you will not necessarily know this is happening.
In other words,
you can verify that a proof correctly follows the rules of the system, but if the system is sufficiently expressive, you cannot verify that you’re free of fallacies due to mistakes in the system itself.
And, yes, there do exist logics that are known to be consistent and complete. Tarski’s decision procedure for real, ordered fields (1931) predates Willard by 70 years.
But in your context of AIs participating in a virtual society or being used to adjudicate them, that is a non sequitur because:
the key thing these logics all have in common is a lack of expressiveness, i.e., the propositions that potentially cause trouble are by fiat eliminated from the language, and you have to do this because the various undecidability results (including the impossibility of proving consistency) hold for all logics (in the modern sense of the term (*)) that are capable of expressing first order propositions with addition and multiplication of integers, i.e., number theory. Most questions of economics that people care about involve number theory.
Meaning an AI that is incapable of reasoning about number theory will most likely not be useful in a virtual society. Humans that do have this capability will likely be running circles around it even if they’ll be making occasional mistakes… meaning I’m not sure I would want such an AI adjudicating my society (**)
(And I’m also pretty sure we won’t need to worry about such AIs taking over everything.)
(*) I suppose there is a potential out here if we someday come up with some new notion of computability, but I’ll believe that when I see it. So far, all such notions we’ve encountered (including quantum computing) have been shown to be equivalent (Turing’s thesis/hypothesis).
(**) … at least not without a human or a number-theory-capable AI backing it up, at which point we’re back to the question of what to do when the watchmen make their inevitable mistakes.
Incorrect. Gödel’s proof only works for “sufficiently complex” systems of axioms. It was thus defeated by Willard Arithmetic, which can be proved internally consistent, contra Gödel, and all relevant math can be restated in Willard Arithmetic. And all propositions can in turn be reformulated mathematically.
This does not result in omniscience, however. Empirical questions remain always probabilistic, not absolute, e.g. an axiom outside Willard’s system can be true to a probability but never certain. This being the case is in turn known. And thus accounted for in any course of reasoning.
Thus, in perfect reasoning, the limits of knowledge and degrees of ignorance are known. And thus accounted for. That’s not fallacious reasoning. That’s the difference between arguing by fallacy (e.g. arguing as if something is known, e.g. the truth of the axiom of totality of multiplication, when you know that it is not known) and not arguing by fallacy (e.g. arguing as if something not known, e.g. the truth of the axiom of totality of multiplication, is not known to a certainty).
One missing piece of the puzzle here is that computation is not free, i.e., memory space, processor power, and network bandwidth will always be limited. Yes, it all seems pretty darned cheap these days, but every time the technology improves our expectations grow and the software gets yet more bloated.
I don’t expect simworlds to be any different in this respect. There will be associated costs even if our intuitions about what sorts of things should be costly that we inherit from our experience in the real world may not be entirely reliable. E.g., you won’t be able to keep all of your memories; google may be generous but you’ll eventually hit the limit that they’ll have to set, because somewhere out there there’ll be a piece of metal with bits on it that somebody has to pay to maintain.
The rational agent that considers the question of developing a “correct moral system” will first have to decide that it’s worth the effort and then allocate resources for it. And there may well be huge computational complexity in developing a moral system that’s good enough (…I suspect anything that involves gaming stuff out will be PSPACE-complete at best.)
Indeed. These facts are all already built into our moral and existential theories even now; so it would likewise be taken into account by any perfectly rational AI. See my discussion of the role of ignorance in moral reasoning (and the role of optimization over unachievable perfection) in the endnotes to my moral theory chapter in The End of Christianity, esp. nn. 33-36.
…social media platforms – the ‘simverses’ of our times…
Not relevantly.
Movies that take up these themes to some degree also include The Thirteenth Floor (haven’t seen the German miniseries World on a Wire, based on the same novel) and Transcendence. Also, in the dystopian variant, there is Existenz. Issues of AI morals, or lack of, are addressed in a meat world context in Eagle Eye and its twin The Echelon Conspiracy. I gather the beloved anime series Ghost in the Shell does the same. The television series Caprica, a spin-off from the 9/11 hysteria BattleStar Galactica, does simulations of the dead too. Downloading personalities into a virtual world is the climax of an entire civilization in the Stargate SG-1 series, by the way. The SG-1 episode “Lifeboat” features downloading souls, and “Revisions” features AI programming of human minds. And of course, Star Trek has tackled it too, notably in the Voyager episode, “The Thaw,” which features a virtual hell.
There are SF books that address these issues in considerable detail. One, Surface Detail, by the late, great Iain Banks, considers the problem of religious people programming virtual hells. But most or all of his Culture novels touch on these themes. Most of all perhaps, the novel Permutation City by the living, great Greg Egan. Egan is as I understand it a professional programmer and amateur mathematician of significant skill (see https://www.quantamagazine.org/sci-fi-writer-greg-egan-and-anonymous-math-whiz-advance-permutation-problem-20181105/). Ken MacLeod’s Cassini Division and Stone Canal also incorporate virtual worlds into the meat world action.
It is a little striking that the virtual worlds are depicted here as unrelated to the meat world, not even to the point of concerning themselves with the question of who is providing the power for the virtual worlds. This seems inconsistent with what I know of people. So a genuine review of the Zardoz problem includes the morals of how the virtual people treat meat people. To me, the first moral question is, whose hand is on the electric plug.
Lastly, I will not feel immortal if a sim of me is created. I tend to think of this kind of story as a “Souls in Bottles” story, where, for some inexplicable reason, people want to create a soul. Given that it is entirely unclear how the first simulations of people won’t be the equivalent of blind, deaf quadriplegics with the communicative abilities of Helen Keller, creating such souls is far worse than creating children. Mario Bunge suggested that disguised religious ideas, like simulations of persons (= “soul”), are a symptom of pseudoscience. I rather agree. J.D. Bernal’s idea of artificial bodies for brains as a mode of life extension plainly has its issues, too. But it still is the only speculative approach to longevity.
So, as to how scientific the souls in bottle program really is? To the best of my recollection, my consciousness grew with me. I don’t remember being an infant. It is shocking how little I do remember. And my introspection never succeeds in seeing a mind. At best, I end up seeing a reflection of the whole me, which includes my body. I do not believe we even have the conceptual apparatus to understand the growth of consciousness or to model the brain/body system. I don’t say “interface,” because I’m not sure there is an interface in the sense we usually mean. Not only is the brain/mind not really a computer, even in the ways it is like a computer, it is 1) an analog computer and 2) actively outputs before receiving data, as part of the operations of the body.
Minor point: the scenario in Zardoz is not a virtual world. It is all meatspace. It does feature an AI, but it’s never the source of the problems that arise, except obliquely, e.g. it continually resurrects everyone as it was programmed to; it never bothers asking anyone if they want to be; that’s technically a friendly AI problem, but simple reprogramming would solve it, so it’s the meatspace caretaker democracy that is the actual problem.
That’s why I keep drawing the distinction between an actual Zardoz scenario (which must be centuries away, and thus will be “mooted” by virtual reality acquisition) and a figurative Zardoz scenario (Zardoz in virtualspace). The lesson is the general system defect, which applies to virtual worlds as much as real ones.
Note on your closing point: We actually won’t have to know “why” a brain works to virtually recreate one. All we have to do is model the interaction system and the effect emerges. This is why I suspect the first AIs we have will simply be replicated human minds; since all we have to do is copy the parts, we don’t ever have to know why they in combination work.
We’ve already seen something like this happen. Modern cellphone antennas were designed by AI, and we actually don’t know why they work. They just do. We might know now, I haven’t checked up on the state of research; but when the design was first rolled out and tested, we didn’t know the why. It’s thus also possible (current sub-sentient) AI will invent full self-referential consciousness and we won’t know how it did that or why the resulting consciousness works. And this could even happen by accident (e.g. a sufficiently unregulated Siri system could become unexpectedly sentient at some point, if we aren’t monitoring to hobble that; as happened for instance when a recent AI experiment resulted in two computers suddenly inventing their own secret language and talking to each other in it—at which point the experimenters shut the whole thing down; because, that’s scary).
And "analog" is irrelevant. Everything is quasi-digital at some reductive scale (quantum mechanics defines all physics, including the physics of neurons, and even of the gears in analog computers), and Turing proved every computer can be replicated with a universal computer ("Turing machines," as most computers today are). That includes any analog computer. One just needs to choose at what level of resolution to implement the replication; and I doubt we'll have to resolve it all the way down to the quantum level. Neuroscience has already found, in multiple ways, that the human brain operates on a system of threshold states that are already quasi-digital.
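(For the curious, here is a minimal sketch, entirely my own toy illustration using the standard leaky integrate-and-fire abstraction, with made-up threshold, leak, and input numbers, of what I mean by "quasi-digital": the unit's internal potential varies continuously, like any analog quantity, but its output is an all-or-nothing spike whenever a threshold is crossed, exactly the kind of discrete event a digital simulation can replicate at whatever resolution we choose.)

```python
# Toy leaky integrate-and-fire unit: continuous ("analog") potential,
# discrete ("digital") spike output once a threshold is crossed.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a stream of analog input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # continuous accumulation with decay
        if potential >= threshold:              # discrete threshold event
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    analog_input = [0.2, 0.3, 0.6, 0.1, 0.05, 0.9, 0.4]
    print(simulate_lif(analog_input))  # -> [0, 0, 1, 0, 0, 1, 0]
```

The continuous variable is still there, but what the rest of the system "sees" is the discrete spike train.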
I don't follow the argument about the irrelevance of analog, not least because "quasi-digital" lost me.
Rather than argue, I’ll grant the premise. The thing is, replication sharply intensifies the moral consequences of imprisoning souls in bottles. An artificial soul might be constructed to fit its environment, but a replicated soul? In many ways it’s the same experiment as putting a brain in a vat, instead of in an artificial body. It is an atrocity, in my view.
I am dubious about the emergence of AI/minds/souls from interacting expert systems, because one key aspect of any psychology is: what does it want? Reason is instrumental, and without a motive/emotion, why reason? It's something like trying to train a plant to be aware of the position of its limbs and roots, even though it's not going anywhere.
As to the reminder that the Zardoz problem is supposed to be democracy, that strikes me as requiring godlike power to recreate souls while simultaneously being able to create only the world decreed by the evil majority. The problem seems to lie in the imagined setup. The part about escape rooms by itself just wipes away the arbitrary limitation, doesn't it?
PS: The Amazon Prime comedy series Upload is highly focused on who pays for the computing power in a virtual world of souls.
Why exactly is it a moral obligation to breed and continue the species? What makes the existence of Homo sapiens any more important than the existence of bacteria or worms?
Bacteria and worms cannot value anything, much less their own or someone else's life. Homo sapiens can, and does. That which creates value is valuable by definition. Other lives are valuable because there is someone who values them. That is what valuing is, and where it comes from.
Unlike bacteria and worms, we know what it is like to exist, to love and learn, and understand existence and oneself, and see the value of all of that to ourselves and to others, and recognize they can see that value too, and act on it.
Compassion thus compels us to care about whether we will be the only ones to enjoy those remarkable experiences and benefits, the only ones to value living and knowing and being, and then deprive others of that and end it altogether, or whether we will extend that privilege to others, others who will even be able to enjoy more valuable lives by benefiting from the social and technological progress we shall bequeath to them, and extend it for a longer time than the all-too-brief time we get to enjoy it now.
For further inquiry into this matter, see my sections on "values" and the "meaning of life" in Sense and Goodness without God (index) and my blog section on the Argument from Meaning of Life.
Note that none of this entails an individual mandate to reproduce, only that it is a moral good to support reproduction generally (e.g. you are not obligated to be a parent, but you are obligated to support a system that isn’t hostile to good parenting). And of course only in Aristotelian-style moderation (e.g. to keep population size within parameters that will not impair the general welfare).
This is a problematic argument in so many ways. Firstly, it's not clear what kind of human life would be a net positive, or how many humans actually live such a life. In fact, there is a good case that the vast majority of the 7.8 billion human lives have more negatives than positives, and thus a good case that they would have been better off never existing. Secondly, you're ignoring the suffering inflicted on nonhuman sentient animals by the existence, greed, and exploitation of humans. Perhaps overall well-being would increase if every single human died today. Thirdly, you haven't explained why it is a moral obligation to continue humanity; your argument is a non sequitur. Even if hypothetical future humans will enjoy a good life, why is it a duty on us to bring nonexistent beings into existence so they can be happy? Fourthly, your argument is just as applicable to individuals, and even to the issue of abortion. Why would an individual not have an obligation to breed to bring new humans into existence, yet have an obligation to make the species breed and keep making more humans? What if everyone decided not to have kids for personal reasons? That is already the case in many Western countries, where reproduction is below the level needed to maintain the population and immigration is the only thing keeping society stable. Are those people then immoral, and do they have a duty to start breeding more kids into existence?
First, your last point is irrational. The globe is nowhere near a suitable population level. By reducing its population so the rest of the world can spread out more, the first world is doing exactly the right thing, exactly what we should be doing. It would be the opposite of what I said for the first world to start adding to the world's population. The goal of reducing it is correct.
And we are indeed doing exactly that, even in respect to efforts to reduce poverty in the third world, which have been successful on a scale of centuries, since that also leads to population reductions there. And no one has to be murdered to get this outcome; no one has to be compelled to do anything. People can be left to make their own decisions, with the support of society. Nor does this entail a goal of population zero. Its goal is a sustainable population, precisely for the improved lives of future populations. To create good. Not to end it.
This irrational misconstrual of even what I said about that, leading to your bizarrely fallacious attempt to recover your position in the face of it, seems to typify your irrationalism generally.
“It’s not clear” is a Possibiliter fallacy. That you don’t know for sure that life will improve does not allow you to conclude it even probably won’t, much less so certainly won’t that extinguishing the human race is the better idea. All evidence leans in the other direction.
There is nothing but vast evidence that human life has continually improved, by net effect (as with global warming, ups and downs do not erase a continual overall upward trend), for thousands of years, and extensive evidence of the potentials of technology and democracy, as the past repeatedly confirms and theory well supports. And this is true even for animals, not only already (on a scale of centuries animal welfare has continually improved), but especially as projected, since their kind of sentience will not be replicated in simverses. As I already said, and you, irrationally, ignored.
Indeed, you also seem so hyper-privileged you don't even know what people in less privileged nations think about their lives. Scientific study shows they are much happier than you purport. Hardly any of them would agree with you that they'd be better off dead. And those who do are already counted in suicide statistics, and what do we see? An almost invisible fraction of the world population. So you are really just trying to blow smoke up not only my ass but your own, with total baloney that contradicts the vast, overwhelming, scientifically vetted evidence against you. That's irrational.
I base my beliefs on evidence. You, clearly, do not. And that is the first difference between us.
The second difference is in respect to values: you evidently ignore compassion as a moral virtue. You don’t want anyone else to share the happiness and enjoyments of life available to you. You want to deprive everyone of it, now and future—and without their consent, indeed without any concern for their feelings or desires at all. If that’s where you are starting, then we have no common ground to even discuss this. Your values are fundamentally misanthropic, even sociopathic, and the rest of us want nothing to do with your callous, heartless, selfish, fucked-up worldview.
Fortunately, we will decide the future of humanity. Not delusional misanthropes like you. And as all evidence of the past shows, humanity overall will be as grateful to us then as we are now to those who came before us and left us what we have to build on. There won't be any such sentiment towards you. You'll be a loathed footnote about the few delusional misanthropes we're glad never had the power to enact their deranged, selfish, sociopathic dreams.
If you are content with that, I shudder in horror.
Not surprisingly, you're being dishonest and evasive. My last argument wasn't based on overpopulation. I am positing a hypothetical where all human populations reach first-world lifestyles and decide, for personal reasons, not to breed. The accidental side effect would be human extinction. So are these people immoral on an individual level? Should they be forced to breed to avert our extinction?
Secondly, you haven't explained the difference between population-level and individual-level moral obligation. If society in general has some sort of moral obligation to keep making more humans, why are individuals within that society exempt from it?
Thirdly, your argument is the non sequitur here. Human well-being improving through history doesn't mean we have exceeded the break-even point such that our lives have more pleasure than pain. Do you have any evidence that the average human life, now or in the near future, will be a net positive?
Fourthly, if you think animal well-being has improved because of humans, you need to get out more often.
Fifthly, you irrationally ignore our innate evolutionary desire to live and fear of death. That most humans don't want to die doesn't mean they have made a utilitarian calculation of the pain and pleasure in their lives. Plenty of people live unhappy and miserable lives yet fear death. That's our survival instinct, hard-wired into our brains. It isn't evidence we should create more humans.
Sixthly, you need to stick to arguments and stop ASSuming things about people. You have no clue how privileged or not I am, or how much pleasure or pain I have in my life. I'm pretty sure future humans would rather not exist than come into existence to go through my life experiences.
Seventhly and lastly, you still evade the counterpoint. Even if a potential human would have a totes fantabulous life, what makes it a moral duty to instantiate that hypothetical human so they get to experience that life? Perhaps it may be a moral virtue to do so. But to say it is an obligation is very tenuous.
As to point one: no, the accidental side effect would not be extinction. You seem to have forgotten what article you are commenting on. Re-read it. It covers the whole issue of reproductionless simverses: they do not go extinct, in any other way than all universes do. But even before we get to that achievement, you seem to ignore how economics works. If we remained in a society dependent on reproduction, and too few people reproduced of their own accord, we would start paying people to do it. Frankly, we ought already to be doing that (good parenting should be a socially supported employment). But we'll certainly be doing it by then. Because of economic reality, wherein we always eventually pay for what we need, there will never be a world where "no one" is reproducing, except a world that no longer requires it.
On the second point, your bizarre, almost "young-earth creationist" style refusal to consider how moral obligations can exist without being universal is similar to your forgetting how economics works. We need soldiers and police and doctors and firefighters, and we are morally obligated to support their existence (and improvement). It in no way follows that every single individual is morally obligated to be a soldier-cop-doctor-firefighter. I should not have to be explaining this to you. You need to ask yourself why I did.
As to point three, you are now just ignoring me. I already refuted your entire principle that we need "more pleasure than pain" to justify continuing to exist. Almost no human kills themselves because of a mere imbalance of pleasure and pain; and those who do can usually be shown to have rationalized it on premises that were false. You are simply not starting with a logical standard of worth to begin with. Re-read what I actually said about this.
As to point four, no, you need to read more history. Animals are generally worse off in the wild than in well-run industry (where letting them get diseased, drown in lake beds with broken bones, or be eaten alive is contrary to the entire purpose of the industry); and were far worse treated in human industry in the past than they are now, and we are actively continuing to improve that condition further still. Perhaps you have been taken in by too much PETA propaganda; and have never actually worked a ranch; nor actually studied what we know from the sciences of animal cognition and ethology; and don’t know what living in the wild is really like; or what human animal husbandry used to be like.
Point five is a non sequitur. Humans live because it satisfies them to do so, and for no other reason. We did not evolve a fear of death; that is an accidental byproduct of human cognition. As the readiness with which people will commit to death on their own proves, it is not death we fear. We evolved a fear of pain and loss. And dying is often painful, and represents the loss of nearly everything we hold dear. That is what people fear. That is why, when you actually look at studies of human well-being, most people do live sufficiently satisfying lives to warrant living in the first place. They don't curl up in a ball and continuously chant, "Woe is me, life is intolerably miserable; if only I hadn't evolved not to kill myself, I'd do so at once!" That is pseudoscience. The actual science of human thought and feeling worldwide has discovered an entirely different thing going on. Virtually no one does that; people rarely dislike life so much they want to die, and those who do, kill themselves; but even most of them only do so because of false beliefs. I suggest you stop making shit up from the armchair and actually study the cross-cultural science of human well-being. Across the globe, people actually find a baseline happiness much easier to obtain than you claim. You need to form beliefs based on what really is the case, not false crap you imagine in your head.
Sixth, I am not assuming things. I am the one basing my conclusions on widespread scientific evidence from sociology, psychology, anthropology, and beyond. You are the one ignoring all scientific facts and just making shit up about people instead. If you yourself are suffering from a mental illness that makes you suicidal, you need medical care. I recommend starting with the experts at the National Suicide Prevention Hotline at 1-800-273-8255. They've heard it all before; nothing you are proposing is new. Their success rate demonstrates that your reasoning is routinely defeated and refuted, time and again, so there is obviously something deeply wrong with it. And not only wrong, but fixable, as that same success rate proves. You can actually live a life worth living. It does not require impossible standards of satisfaction. It merely requires attention to actual reality, and your actual capabilities, and the eventual escape from delusionally false beliefs about the world.
Point seven: I never said any such thing. Repudiating the callous selfishness of not letting anyone else enjoy a satisfying life does not entail acting so as to cause every logically possible person to exist. We are obligated by compassion to ensure the human race continues to exist and improves its condition, so as to ensure some other lives can live, and live well, and thus share in what we have; we are not obligated to maximize the population to infinity. Your tendency toward black-and-white, absolutist thinking like this may be a major part of the irrationality you need to fix. It is blinding you to reality, which does not exist in the world of options you have invented in your head. The real world has many more options, and more nuanced obligations, than the ones you fallaciously gravitate to. See to that.
Dr. Carrier wrote:
“Compassion thus compels us to care about whether we will be the only ones to enjoy those remarkable experiences and benefits, the only ones to value living and knowing and being, and then deprive others of that and end it altogether, or whether we will extend that privilege to others, others who will even be able to enjoy more valuable lives by benefiting from the social and technological progress we shall bequeath to them, and extend it for a longer time than the all-too-brief time we get to enjoy it now.”
But wouldn't you agree that, like all other creatures, it is primarily our sex drive (not compassion) that drives us to reproduce?
Which raises the question of how or why evolution could be at the root of that, if evolution in and of itself is not capable of contemplating and valuing our existence. How is it that creatures on this earth are (came to be) necessarily capable of reproducing?
Theologians would argue that a God explanation makes sense because it at least offers up someone with the possible motive and capability to make that happen. It would explain why all creatures are designed and driven to procreate (sexually or otherwise).
What type of explanation does evolution provide for that?
On a separate note, I recall reading a response you provided on some kind of "ask an atheist" forum about how male and female human reproductive systems evolved, but I can no longer seem to find it.
Can you please point me in that direction?
Not any more. Humans stopped being mindless savages a hundred thousand years ago or more. That’s why we under-reproduce as soon as we are able: we put compassion and well being and happiness above “mindless fucking” and “prodigious baby manufacturing.” We don’t act like salmon and generate hundreds of babies knowing most will die horribly just to perpetuate meaningless genes. We have other goals now; because we are conscious now of what we can be, and what we can enjoy the more. Sex is now principally for fun, not reproduction. Because we prefer it that way. And we prefer it that way because we are now conscious of the options and thus can choose for ourselves.
As for the rest: you don't seem to know anything about evolutionary biology or the events that led to the development of sexual (as opposed to the earlier asexual) reproduction, which began in single-celled organisms, long before multicellular beings even existed, and then developed in plants and fungi before animals arose; so sexual reproduction long predates any beings with brains capable of "drives." Nor do you seem to know how brains subsequently evolved, and then evolved a whole complex matrix of drives (far beyond mere reproduction). In short, you appear to be scientifically illiterate. So you have a lot of reading and self-educating to do. You can educate yourself if you try. There are abundant resources on these things online (even beyond the Talk Origins Archive). Go to it.
I think you completely missed the primary question and point I was making. I'm not asking about evolutionary biology with respect to the events that led to sexual reproduction. I'm asking a more fundamental question about how it could be that reproduction is even a thing to start with, with no intent (no master planner) behind the scenes. Now obviously every living creature and thing is able to reproduce, because if it weren't, it wouldn't be around for anyone to observe.
My point is that it seems all too convenient for all of these living creatures and things to have the capability to reproduce to start with. What ensured that would be the case? Why is the design/capability of reproduction such an inherent and given thing for any and all living creatures and things? I've reviewed Talk Origins but haven't found anything that addresses that specific question. The question being: why is it in the nature of all living things to reproduce to start with? What specifically about an unintelligent evolutionary process would ensure that capability?
Re: “how it could be that reproduction is even a thing to start with with no intent (master planner) behind the scenes”
That is the subject of theories of abiogenesis. It's a whole science. Check it out. On any long enough timeline, reproduction is an inevitable byproduct of chemistry, and it began by chemical accident. Abundant evidence confirms this. See the link for articles covering that.
And if you couldn't find anything at TalkOrigins about the science of the origin of life, you must not have been trying very hard. That is not a good sign of your objectivity.
Perhaps you mean, instead, to be making a fine-tuning argument, about “how can there be chemistry” without a god; if so, get up to speed on that here.
I finally got around to reading this essay Friday night and woke up Saturday morning to discover Sean Connery had died. The only logical conclusion was that I had to watch Zardoz tonight which I did. It’d been years since I last saw it. Thanks for providing another level of understanding and a whole new appreciation of Boorman’s film.
That’s awesome.
Thanks for posting that comment. He will be missed.
[content deleted by editor for violation of policy]
Sockpuppets are a violation of my comments policy. If you continue to abuse that rule, all your IDs and IP addresses will be banned.