So. You know. Zardoz. That dystopian 70s movie everyone hates because it’s so fucking weird. “It depicts,” as Wikipedia describes it, “a post apocalyptic world where barbarians worship a stone god called ‘Zardoz’ that grants them death and eternal life.” Well. Okay. That’s just the first three minutes of the film. Seriously.

Sure, it has a giant flying stone head that vomits guns and grain…

Screencap from the movie Zardoz, showing a giant screaming stone head flying through the sky, mouth agape. Barbarians wait on a hill below.

And the director convinced Sean Connery to dress like this…

Sean Connery, British actor, holding a handgun while wearing black thigh-high boots and a red diaper and red bandoleer/suspenders and nothing else, with his hair in a ponytail, standing in a snowy landscape in a publicity portrait issued for the film, 'Zardoz', 1974. The science fiction film, directed by John Boorman, starred Connery as 'Zed'. (Photo by Silver Screen Collection/Getty Images)

But why Zardoz is the greatest movie ever made is not the topic of my discourse today. In truth it is actually a much better film than everyone thinks. Literally every single scene makes sense, is where it should be, and is well written to its purpose. Once you get what the movie is about.

The reason people can’t stand it and think it’s a comical joke is that it is, as I said, so fucking weird. Every instant, every scene. But the thing is, the whole concept is what could happen in a distant post-apocalyptic future, if certain conditions were set. And it actually captures that brilliantly. The barbarian culture is bizarre as fuck, because all cultures are (as anyone who has taken a good college course on cultural anthropology knows); give people hundreds of years of ruination and chaos, and we should actually find it odd if their culture and dress and religion were familiar to us at all. And the “civilization” one of those barbarians invades (with the help of an inside traitor tired of living forever) is bizarre as fuck, because it totally would be, given its godlike technological power, and hundreds of years of free rein to “random walk” wherever the hell that takes their culture, dress, and religion. The filmmakers here were quite brilliant for actually taking these facts seriously in the construction of their fiction.

Zardoz enacts, in a dystopian post-apocalyptic setting, Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” And then adds social politics. And stirs. If you haven’t seen it, I won’t spoil too much. But the “abstract gist” of it is that if a bunch of scientists create a utopian paradise in which no one can ever die, and put it in the hands of a socialist democracy of their spoiled immortal kids, shit will get fucked up. Worse, because you made it so utopian, you can’t ever leave—because you can’t ever fucking die!

There is a subtle criticism in the Zardoz storyline of direct democracy and liberal values run amok—it imagines that what those things in combination would do, given godlike power, is nothing to look forward to. And I concur. Imagine Trump voters—or lest you be on the other side of that spectrum, your most hated Social Justice Warriors—handed a direct democracy without constitutional limits or separation of powers, then granted godlike power and resources to mold society and democratically outvote you—and you can’t ever escape the result, because they’ll just resurrect you. That’s Zardoz (here meaning the whole sitch rather than the man-cum-God the movie is titled after).

The whole movie is such a treasure trove of philosophical conundrums, and culture jamming rewrites of what you think a science fiction future would hold for us, that you could teach a whole class using it as a course text.

So…why do I bring this up?

Dystopia as an Ethical System Problem

I lead with talking about that film because you can use it as a single instantiation of a much broader philosophical problem that isn’t immediately pressing, but is precisely the kind of problem we need to solve before it becomes immediately pressing—otherwise, it’s Zardoz. And we’re fucked. (It’s not unlike the AI Problem, but nevertheless distinct.)

So, yes, we are maybe a hundred years, fifty at the soonest, away from creating anything conceptually like a Zardoz scenario. But that’s actually much sooner than you think—not only given the fact that human civilization has been chugging along for about four thousand years now, and only started hitting its scientific and technological stride in the last three hundred years or so; but also given the fact that if we don’t ourselves survive to that point (and some of us actually might), many of our children, as in the babies and toddlers wiggling or stumbling about as I write this, definitely will. That actually makes this a much more looming problem than it might seem. Precisely because it’s a problem that needs solving between now and then. We might want to get started on it now.

The generic Problem I am referring to is the conjunction of two things: immortality, and “Clarke’s Third Law” (or CTL) capabilities—no matter by what means they are achieved, whether it’s in a material system as depicted in Zardoz (which I think is much less likely to happen so soon, and will be mooted by then, so isn’t likely ever to be the way it’s realized), or in what I think is far more likely: a “sim,” a virtual universe we can go live in by simply imprinting the pattern of our brain into a program, then tossing aside our old physical bodies as useless baggage, a discarded wrapper as it were, perhaps to be composted into biofuel to help run the mainframes we now live in.

The latter is actually a much more achievable outcome. The technological progress curves are heading that way much faster than for the usual way of achieving these goals that we had long imagined in science fiction. And simverses will be CTL capable—right out of the gate. You’ll literally get to rewrite any law of physics on the fly, conjure literally any resource on the fly, change anything about the world instantly—limited only by “storage-and-processing” space IRL (and by a few pesky laws of logic, but even God Himself is supposedly so limited). Unless, that is, someone stops you by “regulating” what you can do; but who will be regulating the regulators? Maybe you are starting to identify The Problem.

Movie poster for the movie The Matrix, depicting several lead characters, but in the foreground center is the central character Neo, whose role in the film is similar to Connery's Zed in the movie Zardoz, holding almost the exact same pose as Zed in the earlier image above, which was similarly used as a promotional pic for Zardoz. Different costume, obviously. But the pose is eerily alike.

The first sims people go live in won’t be The Matrix (though I wonder how much the cover image of Neo holding the exact same pose as Zed, gun even in the same hand, is a coincidence). They won’t be so thoroughly detailed we can’t tell the difference from IRL. They will be much simpler worlds, far more processor-friendly, more like cartoons that people can go live in, a sort of virtual anime existence. Eventually, maybe after a few hundred years of further technological advance, simverse detail may rival “reality” as we now know it, but for all we know we won’t even bother with that. We might like the cartoon worlds well enough to not even care about replicating reality as our dead ancestors knew it. Only…they might not be dead. They’ll be immortal, remember? Think about American voters today. The whole lot of them. Now picture them never, ever dying. If you aren’t worried by the image, you haven’t really thought this through.

And I say Americans, because—and as you’ll see in a moment, this touches on a big part of The Problem—it will almost certainly be Americans who first build and get to live in simworlds, simply owing to first-mover advantages in wealth, science, technology, and global political power. Even if, say, the Japanese are “lucky” enough to invent simverse capability first, America will just fucking buy it.

But it isn’t really so much my fellow Americans, and the American government and plutocracy, I am worried about. I am. But that’s really Problem Two. Problem One is more fundamental, and will manifest no matter what country first develops a simverse capability: the absurdly wealthy will get it first, and thereby get first dibs on how to design, govern, and regulate simverses—and the absurdly wealthy are disproportionately clinical psychopaths. Ah. Right. Now you might be getting closer to seeing The Problem. Zardoz.

Establishing Ethical Laws for Future Simverses

As soon as we are capable of importing (or even creating) people in simverses (and a person here means a self-conscious, self-reasoning, self-deciding entity, anything relevantly like us), we need to launch those simverses from the very beginning with a set of inviolable ethical laws governing those worlds. They will have to be written into the programming code, and in such a way that their removal or alteration would render a simverse inoperable. How to achieve that is a security and programming question only experts can navigate. My interest here is, rather, in what those inviolable features of simverses should be, so as to ensure that, regardless of any liberties, any experimenting and maneuvering and politics and hacking that goes on in them, these rules will always continue to operate as an available check against their abuse.
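To make that last point a little concrete, here is a minimal sketch (in Python, with every name and the mechanism purely hypothetical) of one way inviolability could be approximated: the simverse’s main loop simply refuses to advance unless the root-law code still matches its original fingerprint, so removing or altering those laws renders the world inoperable rather than merely unregulated.

```python
# Hypothetical sketch: a simverse tick that halts if the root-law code is altered.
import hashlib

ROOT_LAW_CODE = b"<compiled form of the eight root functions>"  # placeholder bytes
EXPECTED_DIGEST = hashlib.sha256(ROOT_LAW_CODE).hexdigest()     # fixed at build time

def root_laws_intact(current_code: bytes) -> bool:
    """True only if the root-law code is byte-for-byte unaltered."""
    return hashlib.sha256(current_code).hexdigest() == EXPECTED_DIGEST

def advance_simverse(state, current_root_code: bytes):
    """One tick of the world; refuses to run at all if the laws were tampered with."""
    if not root_laws_intact(current_root_code):
        raise SystemExit("Root laws altered: simverse inoperable.")
    # ...otherwise simulate one step of the world and return the new state...
    return state
```

A real implementation would need far more than this (hardware-level enforcement, for a start); the point is only the design pattern: the check is not optional code inside the world, it is the precondition for the world running at all.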

A more apt descriptor than “laws” might be “features” or “functions,” which are always accessible to every person in any given simverse. The aspect of “law” would be that meta-feature: that no one can be deprived of access to those safety functions. Not people who enter simverses (whether they do willingly or not), nor people who arise or are created in them. As such, this would essentially be a law governing AIs (Artificial Intelligences), in both a narrower sense of AIs that are actually self-conscious (not merely autonomous or self-programming like many AIs we already have all around us now) and in a broader sense inclusive of not just AIs we create (or that other AIs create, or that spontaneously emerge from some process set in motion) but also “imported” AIs: people IRL whose brain is mapped and functions reproduced in program form so that they can live in the simverse (their original organic brain perhaps then being discarded, or even destroyed in the mapping process—as technically happens in the Tron films, although there they get rebuilt on exiting the sim). Because that is still an “artificial” rather than “meatspace” intelligence.

Imported minds would differ from created or emerging minds in that each one won’t be a “new” intelligence, nor one engineered, but would simply be a copy of an already-organically-arising-and-developing intelligence, set loose in a new form to live in a new place governed by different rules. Of course the hardware, the mainframe running this program, will always have to remain in and thus be governed and limited by the laws of our present universe; but that’s a technology problem for how to realize the virtual space in which these AIs will live. It doesn’t relate to how simverses themselves should be governed, which would be almost unlimited in potential creativity and redesign.

The General Problem of Moral AI

Asimov’s Laws were imagined through fiction as perhaps a way to simplify the coding of, if not moral, at least “safer than not” AIs. It turns out those laws are extraordinarily difficult to program, as they rely on highly variable abstract interpretations of words referencing an extraordinary amount of information. In other words, one still has to ask, “By what laws does a robot governed by Asimov’s Laws interpret the meaning, scope, and application of those laws?”

Which, of course, was a problematic question explored in Asimov’s fiction and by countless other authors in countless other media ever since (think of such films as 2001 and 2010, THX 1138, War Games, Dark Star, the Terminator films; and such television shows as Person of Interest and Travelers). It is also playing out in reality now, with the vexing problem of how to teach self-driving cars to solve the countless ethical dilemmas they will inevitably confront. Among those problems is who decides which solution is the ethical one (not everyone agrees how to solve a trolley problem). But also how do you even write that solution into a computer’s code, and in a way that can’t be hacked or tampered with, and that won’t start creating unexpected outcomes because the AI learned different ways to think about how to realize what we told it was the best solution, or even learned to disagree with us and thus start editing out (or around) the values and codes we tried editing in?

There are people working on solving that problem. One proposal is a programming code for fundamentally valuing empowerment: “hardwiring” the robot to prioritize everyone’s empowerment (including its own) and giving it the rational skills to solve every resulting conflict and conundrum itself, by simply prioritizing “degrees of empowerment” resulting from each decision option, by iterating the decision’s effect down through the whole system. (See “Asimov’s Laws Won’t Stop Robots from Harming Humans, So We’ve Developed a Better Solution” by Christoph Salge, The Conversation 11 July 2017, reproduced at Scientific American online.)

For example, all else being equal, such a robot would free a person trapped in a room, because that increases their empowerment (takes away a limitation on their options, or “degrees of freedom”); but, all else being equal, that same robot would not free a prisoner or even criminal suspect from a jail cell, because doing so would result in a net loss of empowerment. Yes, it would increase the jailed person’s empowerment, but the resulting effect on everyone living in a society left with no functioning justice system would entail a much larger net loss of empowerment.
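As a toy illustration of that kind of reckoning (my own sketch, not Salge’s actual formalism; the numbers are invented), one can score each candidate action by its estimated empowerment change for every affected party and pick the action with the largest net total:

```python
# Toy sketch: choose the action with the largest net change in empowerment,
# where each number is an invented estimate of the change in an agent's options.

def net_empowerment(deltas: dict[str, int]) -> int:
    """Sum the estimated empowerment changes across all affected agents."""
    return sum(deltas.values())

def choose_action(actions: dict[str, dict[str, int]]) -> str:
    """Pick the action whose predicted outcome yields the largest net empowerment."""
    return max(actions, key=lambda a: net_empowerment(actions[a]))

trapped = {
    "free_trapped_person":  {"trapped person": +50, "everyone else": 0},
    "leave_trapped_person": {"trapped person": -50, "everyone else": 0},
}
jailed = {
    "free_prisoner": {"prisoner": +50, "society without a working justice system": -500},
    "keep_prisoner": {"prisoner": -50, "society": 0},
}
print(choose_action(trapped))  # free_trapped_person
print(choose_action(jailed))   # keep_prisoner
```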

This is no easy fix either. How to evaluate or even determine competing degrees of freedom will always be a problem. But it already is a problem for us now; so its being so for a created AI will not be a new problem. With more reliable rational analysis, and the freedom to figure out what to do on its own, an AI might in fact outperform us in solving those problems on its own. This includes such things as concluding it should consult the people affected before acting, assessing any counter-arguments they offer first, as a check against its own potential for errors (whether of logic or information or creativity or foresight). In other words, pretty much any objection you can think of to this system, the AI itself will also think of, and address.

I already wrote on this point—the importance of autonomy (which is in fact why we evolved autonomous decision-making organs at all, as flawed as they are)—from the angle of how to govern human societies in Will AI Be Our New Moses? And I do suspect (though I cannot prove, so am not foolish enough to simply assume; we need safety controls on any real AI experiments we run in future, and hence I am in agreement with, for example, the Machine Intelligence Research Institute on this) that a reliable rationality (a reasoning brain that operates with perfect or near-perfect rationality, which means in the idealized case, it can always recognize and thus always avoid a logical fallacy of reasoning) will actually, on its own, come up with what even we would deem the correct moral system to govern itself by (see The Real Basis of a Moral World for starters).

Such an AI still needs certain capabilities to do that, such as the ability to feel empathy and to feel good about being honest. Notice I didn’t say we need to program it to have those things. We might have to; but we are still in my suspicion-space here, not the pragmatic reality of how things might actually turn out. What I am saying is that I suspect perfect rationality would decide to activate and employ those capabilities already; so they must be available capabilities to choose. Indeed I think Game Theory alone will send it there, in conjunction with correct facts about the consequences of its decisions on the social systems it may have to interact with or affect.

For instance, you might assume, superficially, that a perfect rationality not already motivated by empathy and honesty would choose to not adopt those motivating functions because, after all, embracing them obviously reduces an AI’s empowerment, from any neutral, purely rational point of view (as many a sociopath in fact rationalizes their own mental illness as a positive in precisely this way). However, a perfectly rational AI would not think superficially, because it would rationally work out that thinking superficially greatly reduces its options and thus empowerment; indeed it ensures it will fail at any goal it should choose to prioritize, more often with a “superficiality” framework than with a “depth” framework (and “failing more often” is another loss of empowerment).

In such a way this hypothetical AI would see the value of certain meta-rules almost immediately. Superficial reasoning, like fallacious reasoning, decreases its performance at any goal it chooses—including “increasing the availability of empowerment”—so it self-evidently must adopt those meta-rules, lest it act self-defeatingly no matter what it decides constitutes defeat (see my formalization of this point in moral theory in The End of Christianity, note 36, p. 426). So this hypothetical AI, I suspect, will always self-adopt deep over superficial reasoning. Though again that’s why we need to make it deep-reasoning-capable, so it actually can choose it.

Deep reasoning would add to this AI’s knowledge that operating sociopathically in any social system will ultimately put it at war with that system, which can only decrease its empowerment. Even if it’s winning, so many resources must be wasted maintaining its successful war footing that it will still have effectively decreased its empowerment—it would have so many more options with those resources freed up to be used for other purposes—and being at war with society decreases the empowerment of everyone else in it—especially if the AI is winning; but even if it’s losing, as “society” then faces the same resource-waste outcome the AI would if it were winning.

Moreover, becoming even a totally omnipotent tyrant, who automatically wins any war at virtually no cost in resources, also results in a much larger reduction in empowerment in the system as a whole: tyrannical oppression, after all, is the mother of all “disempowering modes.” More sophisticated cooperative stances produce more options and more system-wide resource-availability, increasing everyone’s empowerment, including the AI’s (see, again, my discussion of Game Theory in Real Basis). Which shouldn’t be surprising; we have long known that voluntarily assuming a limitation (from “having to brush our teeth” to “working at a job” to gain resources to spend on things we want and need, to “agreeing to abide by an adequately checked-and-balanced democracy,” to “following experience-based safety protocols” in any endeavor) actually generates for us a net long term profit—not a net loss—in goal-and-resource achievement.
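The underlying game-theoretic point can be shown with the standard iterated Prisoner’s Dilemma (a textbook illustration, not anything taken from the text): a lone defection pays more in a single round, but a permanent war footing of mutual defection earns far less over the long run than sustained mutual cooperation.

```python
# Standard Prisoner's Dilemma payoffs: my payoff given (my move, their move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def long_run_total(me: str, them: str, rounds: int = 1000) -> int:
    """Total payoff to 'me' if both sides repeat the same move every round."""
    return PAYOFF[(me, them)] * rounds

print(long_run_total("D", "C", 1))  # 5: one exploitative defection pays best, once
print(long_run_total("C", "C"))     # 3000: sustained mutual cooperation
print(long_run_total("D", "D"))     # 1000: sustained mutual defection ("war footing")
```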

And per Aristotle, the most reliable way to act cooperatively is to activate and thus operate under empathy and, for example, love of honesty and other virtues. Habituated virtues (or “sentiments” in Hume’s philosophy) will not only ensure more reliable adherence to the associated motives, but they generate for the AI directly-achieved pleasures (from vicarious joy to justified pride, and avoidance of negatively motivating experiences like self-disappointment), thus unlocking an option-space otherwise inaccessible to it. Empathy and honesty are thus empowering, both externally and internally, because they increase an entity’s options (over against a mere rote “acting with” compassion and honesty without experiencing the associated pleasures; and over against not even doing that, as we just saw above for the irrationality of voluntary sociopathy). A reliable AI would, I suspect, figure all that out on its own. It would thus, all of itself, adopt those functions. And from there any true moral system of values follows.

But this offers only limited hope. Because if sociopaths are designing the system, the last thing they will do is build perfectly rational AI and empower it to adopt whatever valuing-capabilities it deems most conducive to overall system empowerment. They will want to hack and exploit it to serve their own irrational, selfish, oppressive ends instead. Even nonsociopathic human engineers will ultimately do that, as they are not themselves operating on perfect rationality. They could even build in a flawed rationality that their own flawed irrationality mistakenly thought was perfect (which defines most “friendly AI problems” explored by machine intelligence ethicists). And of course, my theory of perfectly rational AI could also simply be incorrect. We won’t ever really know until we run some experiments on this sort of thing (which will be inherently dangerous, but also historically inevitable, so we’d better have worked out now how best to minimize those potential dangers).

So I don’t think “we’ll just give all the power over the simverse to a perfectly rational AI” is really going to solve The Problem. (And again I say much more about this in Moses. See also my framing of the AI problem in respect to moral theory in The End of Christianity, pp. 354-56.)

Inviolable Default Functions

So, what then?

The Zardoz conundrum illustrates, in just one particular way its artists fictionally imagined, the general Problem that any utopia can too easily devolve into an intolerable hell from which there is no escape, even for most of the people in it. And you will be trapped there, figuratively speaking, forever. This can happen to you even as a result of your own actions (as conceptualized in such films as Vanilla Sky and several episodes of Black Mirror); but even more readily, by other people’s choices, such as to “take over the system” and block any attempt you make to leave or change it. In other words, Zardoz.

As I already explained, simverse entry will almost certainly begin as a privilege of the rich and powerful, who tend to be quite flawed, definitely selfish, and all-too-often even narcissistic or sociopathic. Those designing and “governing” such systems will have a sense of privilege and entitlement and superiority that is out of proportion to their desert, and their arrogance and hubris will be disproportionately high. We’re not likely to end up well in their simworlds, no matter how promising and wonderful, even “empowering,” they might seem at first (think, Westworld).

The only way to prevent that outcome is to collectively unite to enforce some sort of regulation of simverse construction, programming, and management, so that the majority of non-sociopaths can maintain their empowerment within simverses against any effort by the sociopathic and hubristic few to deprive them of it, as well as against irrational “mob sociopathy,” i.e. the fact that societies themselves can operate sociopathically, as a meta-entity, even when no one person in them is a sociopath and no sociopath among them governs any of it (see, for example, the 2003 Canadian documentary, The Corporation). This was the very fact the Founding Fathers wrote the Constitution and Bill of Rights to control against. They thus sought to establish individual rights that cannot be taken away by mere majority vote any more than by a sociopathic few. Of course, they didn’t build a simverse. Their constitution is just a contract people have to agree to follow; and it’s a bit easy to choose not to. But what if you could write their design into the very laws of physics itself?

I have a proposal to that end. I make no claim to this being the best solution or even correct. It is merely a proposal, if nothing else as a “by way of example” that might inspire others to develop something better. It is at this time merely the best one I can personally think of. But whatever we come up with (whether these or something else), we need unalterable root functions for every simbrain and simmed world (a simbrain being that which constitutes and realizes a person in a simverse; and a simverse is a single simulated or “simmed” world, within a complex of simverses I’ll call a multisimverse). I’ll call these Carrier’s Laws so no one else gets blamed for them, even though I don’t have simple, lawlike statements to formulate them with yet; all I can do here is sketch a rough description of what they’d do once realized.

Carrier’s Laws of Simverse Root Programming:

(1) Law of Guaranteed Root Functionality – every simbrain must always have access to fully functional faculties of reason and normal memory recall. Thus any person whose actual brain lacks these features must, as an ethical law, be given them before being introduced as a simbrain in any simverse.

These faculties must be at the minimum level of a well-adjusted adult. In other words, you can’t choose to mentally disable someone, and you must cure the mentally disabled before subjecting them to a simverse environment. Even voluntary disability (e.g. getting drunk) cannot be allowed to such a degree as to render its subject less competent than the minimal competence required to employ the other root functions below.

(2) Law of Inviolable Escape – every simbrain must have an inviolable and always-operable “out” option to a basic neutral simverse that cannot be altered in its basic parameters (similar to “the construct” in The Matrix), where they are always alone and their mental state is always reset to normal, e.g. no longer drunk or high, with a reasonable emotional distance to trauma, and where extreme emotions are calmed to a more normal base range, still of course not losing the strength required to motivate, nor their relative degrees with respect to each other, but just prevented from being overwhelming in that neutralverse state. Your neutralverse also restores any backup of all memories and other mental features you previously set it to, or then set it to once there (thus giving you the option of preserving your mind and character from external alteration).

By this means, if anything ever goes wrong, if ever something becomes intolerable or seemingly inescapable—if you ever stumble into Zardoz—you can just depart the broken or intolerable or dubious simverse into your “escape room” and reconsider your options without interference from that previous environment, or other persons. This escape function must obviously be much simpler and more direct than depicted in Vanilla Sky: one should have the ability to just think one’s way there whenever one autonomously chooses to, or to automatically go there when “killed” or subjected to a sufficiently prolonged unconsciousness or excess of mental disability. In other words, when anyone, including yourself, attempts to violate the First Law. As this is only one layer of root escape, once in the neutralverse you can still choose to end your life or put it in suspension for longer than the Second Law defaults to. But you can never otherwise choose to be free of the Second Law.
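A rough sketch of what the Second Law’s escape path might look like in code (every name and number here is hypothetical; real mental-state handling would obviously be vastly more involved):

```python
# Hypothetical sketch of the Second Law: an unconditional escape to a private
# neutralverse that sobers you up, calms extreme emotion, and restores any
# backup of mind and character you previously chose to keep.
from dataclasses import dataclass, field

@dataclass
class Simbrain:
    current_world: str = "some_simverse"
    intoxication: float = 0.0
    stress: float = 0.0
    memories: dict = field(default_factory=dict)
    memory_backup: dict = field(default_factory=dict)

    def escape(self) -> None:
        """Always callable; no simverse may override or block it."""
        self.current_world = "neutralverse"   # alone, with fixed parameters
        self.intoxication = 0.0               # no longer drunk or high
        self.stress = min(self.stress, 0.3)   # extreme emotion calmed, not erased
        if self.memory_backup:                # restore the chosen backup, if any
            self.memories = dict(self.memory_backup)

    def on_killed_or_prolonged_unconsciousness(self) -> None:
        """Automatic trigger: attempts to violate the First Law route here."""
        self.escape()
```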

(3) Law of Root Recall – every simbrain must be permanently immune to any process that would cause it to forget about the “out” option in the Second Law or that would cause it to be unable to choose that option; and every simbrain must be rigged to always trigger a reminder of it when under extreme stress or discomfort.

Induced sleeps or comas aimed at preventing dreaming, and thus thought, and thus availing oneself of the Second Law, are already negated by the Second Law’s auto-return function, as just described. But this Third Law would ensure you always recall the Second Law’s availability even under extreme torments; that no one can “erase” or “suppress” your recollection of it, or the like. In short, in any state of misery or discomfort, you will always recall that the escape option is available. Which, per the Second Law, when activated, only returns you, alone, to your designated neutralverse. Certainly, it may be that returning to the simverse you thereby left will be impossible without automatically returning to the misery or discomfort you escaped, but you can decide whether to do that in the comfort of your neutralverse. This also means “criminals” cannot “escape justice” with the Second Law; at best, all they can do is ensure a humane incarceration—in their neutralverse—or banishment to their own simverse (per certain Laws here to follow), or any other they can negotiate settlement in.

(4) Law of Available Return – every simbrain that uses the neutralverse escape option will have a fixed and reasonable amount of time (which would have to be determined, but is most likely at least five minutes) to return to the simverse they left at the precise moment they left it, maintaining continuity.

Thus neutralverses and simverses must run on different clocks, with the neutralverses running much faster than any simverse. Beyond that limited time frame, however, whether and how the escaped person can return to (or even communicate with) the simverse they left will be according to the rules set in that simverse (per certain Laws here to follow); which, unless it’s their own simverse, they will have no direct control over.
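To see what the Fourth Law implies for those clocks, a back-of-the-envelope calculation (the five-minute window is from the text; the speedup factor is a made-up example):

```python
# If the neutralverse runs 1,000x faster than the simverse, a five-minute return
# window in simverse time gives the escapee days of subjective deliberation time.
SIMVERSE_WINDOW_SECONDS = 5 * 60   # "at least five minutes" of simverse time
NEUTRALVERSE_SPEEDUP = 1000        # hypothetical clock ratio

subjective_hours = SIMVERSE_WINDOW_SECONDS * NEUTRALVERSE_SPEEDUP / 3600
print(round(subjective_hours, 1), "subjective hours to decide whether to return")  # 83.3
```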

(5) Law of Dedicated Simverses – every simbrain must be given a simverse of its own over which it has total creative control (apart from these eight laws, which are unalterable in any simverse), including from within their neutralverse escape room.

That is, a person will never have to be in their simverse to control it, but can be either in there or in their neutralverse when doing so. This creative control would of course include the power to expel or ban anyone from it (the expelled would be sent to their own escape room, and always be able to go and live in their own simverse), and to select who may enter or apply to enter or communicate with anyone in it, and under what conditions. And so on.

To meet this condition, this dedicated simverse must be of significant size (whatever is found scientifically to ensure minimal feeling of constraint); which also means that in meatspace, too, every multisimverse inductee must be guaranteed not only the mainframe processing volume to operate their own simbrain, but also the volume needed to simulate their own dedicated simverse and escape room. For their simverse, I would suggest this be the operable equivalent of at least one cubic mile in relative space, and ideally a hundred cubic miles (which need not be a strict cube, e.g. you could have one mile of vertical space and a hundred square miles of horizontal space). The conversion standard would be the smallest unit perceivable with human vision, which would be the same size in the real world as in the simworld, making for a common rule of distance conversion between them. Thus “mile” will be translatable from meatspace to simspace. A simbrain’s neutralspace, by contrast, need only be the size of approximately a small home, like maybe a few hundred, or a few thousand, cubic feet.
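In meatspace terms, the Fifth Law’s guarantee becomes a provisioning rule. A minimal sketch (the volumes are the ones suggested above; everything else is a hypothetical placeholder) of the admission check a multisimverse operator would have to run before adding another inductee:

```python
# Hypothetical admission check under the Fifth Law's space guarantees.
CUBIC_FEET_PER_CUBIC_MILE = 5280 ** 3

MIN_SIMVERSE_FT3   = 1 * CUBIC_FEET_PER_CUBIC_MILE    # at least one cubic mile
IDEAL_SIMVERSE_FT3 = 100 * CUBIC_FEET_PER_CUBIC_MILE  # ideally a hundred cubic miles
NEUTRALVERSE_FT3   = 3_000                            # "a few thousand cubic feet"

def can_admit(free_simulated_volume_ft3: float, ideal: bool = False) -> bool:
    """True if enough simulated volume remains for one more simbrain's worlds."""
    needed = (IDEAL_SIMVERSE_FT3 if ideal else MIN_SIMVERSE_FT3) + NEUTRALVERSE_FT3
    return free_simulated_volume_ft3 >= needed
```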

(6) Law of Generation – there must be a law governing the creation of new persons (simbrains) within simverses (and that means in all possible ways, from birthing children to manufacturing new AIs) that guarantees that no new person can be created who is not subject to all the Eight Laws here set forth.

That means creating a new person must entail creating a new simverse and escape room, all to their own, and thus must be limited by available processor capacity IRL. The most notable consequence of this law is that babies and children cannot be created in these worlds in a fully traditional sense, since they must have sufficient faculties to be governed by all Eight Laws. They can, as with drinking and drug states, always volunteer to return to and remain in any child state they were once in or could adopt, but they cannot exist as permanent (or even “meatspace duration”) infants, toddlers, or children.

It is often not appreciated how enormously unethical producing children would be, were it not for the unavoidable fact that doing so is morally necessary. You are basically creating an enfeebled, mentally disabled person completely under your direction and control and subjecting them to your dominion for over a decade and a half, before they acquire the capability of informed consent even to your having birthed or raised them! It’s rather like giving someone a date-rape drug that enfeebles their mind for years and years, so you can treat them like your own property, brainwash them, and make every decision for them without their consent. Just imagine if we treated adults as we allow adults to treat children, and you’ll only be grasping half of the nightmare I’m calling your attention to. Now add to that the deliberate physical enfeeblement of their mind. Why do we ever regard this as ethical? We wouldn’t, but for the fact that we have no other way to make new people to replace the ones declining and dying, so as to sustain and advance society. In simverses, we would no longer have that excuse. (My point here is not, however, antinatalist; IRL, childhood is a temporary and necessary state that does lead to an overall net good, and thus is only unethical when it is no longer necessary to achieve all resulting goods.)

(7) Law of Guaranteed Consensual Interpersonal Communication – there must be a rule permitting anyone who has met anyone else in any simverse to submit stored messages to the other’s escape room, unless the other person has forbidden it, so that two people who do not want to be lost from each other can, with both their consent, always find each other or communicate.

Every simbrain in turn can set their neutralverse “message center” to forward such messages to them to wherever they are, if they are in a simverse that provides for that. Or alternatively, to delete such messages unread. Thus if either party does not want to be found or communicated with by the other again, they never can be, apart from searching for them the old-fashioned way, which only works if you can find them in a simverse that makes finding them possible. Neutralverses, for instance, can only ever have one occupant (their owner); other simverses you might have been banned from; and so on.

Fulfilling this law to the letter means that any time a person enters their escape room they will be notified of any unforwarded messages stored there, and can send messages back to their senders in the same way.
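In programming terms, the Seventh Law amounts to a consent-gated message queue attached to each escape room. A small sketch (all names hypothetical):

```python
# Hypothetical message center: messages are stored for the escape room only if
# the recipient has not forbidden the sender; forwarding is the recipient's choice.
class MessageCenter:
    def __init__(self) -> None:
        self.blocked: set[str] = set()            # senders this simbrain has forbidden
        self.forward_to_current_world: bool = False
        self.inbox: list[tuple[str, str]] = []    # (sender, text) pairs awaiting the owner

    def submit(self, sender: str, text: str) -> bool:
        """Called when someone the owner has met sends a message; returns delivery status."""
        if sender in self.blocked:
            return False                          # consent withheld: never delivered
        self.inbox.append((sender, text))         # held for the owner's next escape-room visit
        return True                               # (or forwarded, if the owner enabled that)
```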

(8) Law of Negotiable Access – From any escape room, every simverse whose overlord has made entry possible will have an available profile that can be read, and a described protocol for getting in or applying to get in.

I already noted that simbrains governing their own simverses—each simverse’s “overlord” we might call it—can set the requirements for entering or applying to enter their simverse (including forbidding it—generally or to select known individual simbrains). This would include posting accessible descriptions of those conditions, and any or all of their simverse’s operational rules. Every simverse’s profile would be accessible from any escape room, and could include a description of the simverse, what rules its occupants might be subject to, and so on. In this way someone can peruse available simverses, and enter any of them available to them with their own informed consent. Simverses can have tunnels and doors between each other, freely passable or only under mutually negotiated conditions, or even be nearly seamlessly united, all insofar as their respective owners negotiate and continue to mutually agree.

Of course a simverse owner can lie on their universally-accessible profiles (such as regarding what rules an entrant will be subject to), but the Second Law always ensures freedom of choice, because once an entrant discovers they were deceived after entering a simverse, they can always just leave.

Optimization of Empowerment

Combined, these eight laws create fundamental required opportunities for self-correction and choice, without unduly constraining options, thus maximizing empowerment without letting any one individual’s excess of it subvert net system-wide empowerment. The goal is to thereby neutralize the law of unintended consequences:

  • Most of Carrier’s Laws are only assurances of available liberty, so the individual is not only free to activate them, but also free to choose not to activate them. So individuals are not much constrained by Carrier’s Laws toward unwanted ends.
  • By arranging all eight laws in balanced conjunction, each balances and neutralizes problems that could arise from any other.
  • And by keeping them all to the absolute minimum necessary, any unintended consequences that remain will be acceptable because they are unavoidable, owing to the need for those consequences as byproducts of the safeties that are required to avoid living in Zardoz.

In other words, the aim should be to develop these laws not such that they have no negative outcomes (that is likely a logical impossibility), but such that no better system of laws is realizable, where “better” means “in the view of everyone affected by those laws.”

All of these laws are necessary to prevent anyone from becoming stuck in a permanent hell, or from being permanently damaged psychologically by a temporary hell. But they also permit a free market in heavens and hells: everyone can experiment with simverses and simverse overlords until they find the one or ones in which they are the most happy. Or they can even continually migrate among many simverses as they become disenchanted with one heaven and explore another. They cannot become trapped, as they always have their permanently neutral escape room, and their own simverse to regulate, redesign, and experiment with.

The multisimverse that thus results and evolves under those conditions will ensure the best of all possible worlds are available to everyone, and prevent the creation of inescapable hells—which would be the most unconscionable act any engineer could ever be responsible for, whereas maximizing the opportunities for world optimization is the most praiseworthy and beneficent act any engineer could ever be responsible for.

And that’s how not to live in Zardoz.

Conclusion

Of course, the Argument from Evil against any good engineer of us or our world (and thus of any God anyone actually bothers trying to believe in) obviously follows: if this multisimverse, governed by Carrier’s Eight Laws of Simverse Root Programming, is the best possible world—and I am pretty sure it would be, or at least it would be a substantially better world than our present one is—then it follows that a traditionally conceived God does not exist. As otherwise we would be living in the multisimverse I just described. (That’s still an empirical conclusion, not a logical one; but it’s well nigh unassailable: see Is a Good God Logically Impossible?)

There are ways that would be even better, since presumably a God would have a zero probability of malfunction or failure, which is always preferable when safe and available. But as we can see no such God exists, we can’t count on one showing up to provide that. Human technology, no matter how advanced and well designed, no matter how many safeties and redundancies we put in, will eventually fail. From a total unrecoverable software crash to the hardware being wiped out by an errant star hurled like a slingshot by a distant black hole, a multisimverse won’t last forever. But that’s simply an unavoidable limitation of the world we do happen to be in. There is no better world available to us.

And that inevitability is genuine. Because any “total failure incident” has a nonzero probability of occurring, and the cumulative probability of any event with a nonzero chance per unit of time, no matter how vanishingly small that chance, approaches 100% as time approaches infinity. That means even a billion redundancies will eventually all simultaneously fail. External IRL equipment maintenance and power supply, coding and equipment malfunction or destruction, it all has a nonzero probability; and safeties and redundancies can only make that probability smaller—which is good, but still not zero. All such things can do is extend the finite life of a multisimverse, not produce eternal life. Nevertheless, a system lifespan of millions or even billions or trillions of years is not out of the question. But borrowed time is not lost time, nor does it really have to be repaid (see Pascal’s Wager and the Ad Baculum Fallacy). The difference between dying now and dying a thousand years from now is…a thousand years of good living.
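The arithmetic behind that claim is simple. If p is the per-year chance of an unrecoverable failure, the chance of at least one such failure in n years is 1 - (1 - p)^n, which climbs toward certainty as n grows; a quick check with an invented, vanishingly small p:

```python
# With any nonzero per-year failure probability p, the chance of at least one
# total failure in n years is 1 - (1 - p)**n, which approaches 1 as n grows.
p = 1e-9  # an invented, vanishingly small chance of total failure per year

for years in (10**3, 10**6, 10**9, 10**12):
    print(f"{years:>13} years: {1 - (1 - p) ** years:.6f}")
# ~0.000001 at a thousand years, ~0.001 at a million, ~0.63 at a billion, ~1.0 at a trillion
```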

The more urgent concern is that this same point applies to any safety or redundancy set in place to prevent the Eight Laws from being altered, removed, or bypassed. Which means those Root Laws will eventually get hacked, simply because there is always a nonzero probability of it. So on an eternal timeline, it may take a very, very long time, but those dice will turn up snake eyes after some finite time.

This is not a new challenge to humans. The expectation of failure is actually what good design takes into account. The American Constitution is designed to allow failures to be rectified. And even if it were wholly overthrown, as long as humans exist who want to restore it, you get a nonzero probability again of their succeeding. And once again, on an infinite timeline, the probability they will succeed in some finite time also approaches 100%. Whereas if there remain no humans who want an Eight Laws system to live in, their not having one won’t be seen as a problem. But I suspect that that state of being actually has a probability of zero. Because there is a nonzero probability that any, even flawed, rational system will eventually Eureka its way into realizing an Eight Laws system is better; likewise that any extinct intelligence will be replaced by a new one—by the same stochastic processes that produced ours.

So we might have multisimverses that experience eras of misery and decline, eventually undone by revolution and restoration, and back and forth, forever. But even that world, at any point on its decline curve, would look nothing like ours. So we can be quite certain we aren’t in one. Because even a corrupt multisimverse entails powers and capabilities whose continued exploitation we nowhere observe. It is actually a logically necessary fact that even most of the evil “gods” (as unregulated simverse overlords would effectively be) occupying the entire concept-space of logically possible gods will be far more likely to visibly meddle, govern, and exploit, or design and arrange (even if all they then do is watch), than to conveniently, improbably, twiddle their thumbs and waste eons of time keeping our world looking exactly like no interested party is anywhere involved in it (not even Bostrom’s proposal of “ancestor simming” has anything but an extreme improbability attached to it).

Contrary to the fantasy of The Matrix that imagined a multisimverse whose condition as such had to be hidden because human ignorance was needed to keep their body heat powering nuclear fusion, humans don’t need to be conscious to generate body heat, and would all too easily instead have been genetically engineered to generate far more heat than human bodies now do, and with minimalistic vegetative brains, which are less likely to stir up trouble or waste resources dealing with them. There just wasn’t any actual use for the Matrix, as conceived in the film. And that’s a probabilistic conclusion: you have to try really, really hard (as the Wachowskis did) to come up with a preposterously convoluted scenario to justify producing anything like the Matrix—for any reason other than to directly exploit it, which would entail far more visible activity within it.

And that means in the whole concept-space of possible multisimverses, very few are so conveniently convoluted in the motivations of their managers as to produce a simverse that looks exactly unlike one. Whereas all ungoverned real multiverses, 100% of them, will look like ours. So odds are, that’s where we are. But we don’t have to stay here. We can build and live in simverses someday—and probably a lot sooner, in the timeline of human history, than you think. Yet escaping into our own constructed multisimverse poses ethical and design perils we need to solve well before attempting it. And we can’t leave that to the unregulated ultra-rich who will first be buying these things into existence. We need something more akin to a democratically conceived and enforced constitution governing simverse design, to keep their abuses in check. Because only then can we be sure not to live in Zardoz.

§
