In 2020, Christian philosophers Kenneth Boyce and Philip Swenson presented a thesis at a conference that has yet to appear in peer review (though a version is under review), arguing that “fine tuning” is actually evidence against a multiverse. This is strange, because it is most definitely the other way around. Consequently, a patron funded my research attention on this, as they wanted to know how these guys reached the wrong conclusion. Boyce and Swenson’s presentation paper was titled The Fine-Tuning Argument Against the Multiverse; and Boyce blogged a popular summary of it for the American Philosophical Association under the same title. He was also interviewed on it at Capturing Christianity. I reviewed all these materials to ensure I correctly understood what they are trying to argue.
Background
My readers will probably be well familiar with the concept of “fine tuning” as a reference to apparent “coincidences” in the fundamental “settings” of the universe (the so-called “physical constants”) that allow life to be a likely random outcome of it. They might also know how multiverse theory would render this inevitable: with countless randomly selected universes actually existing, the probability that at least one would exhibit apparent fine tuning and thus produce life is as near to 100% as makes no difference. Thus, as an explanatory hypothesis, it does away with any need for intelligent design. In fact these are arguably the two most popular hypotheses to explain “fine tuning”: the (often vague and hand-wavy) idea of intelligent design, and multiverse theory. There are other explanations (see A Hidden Fallacy in the Fine Tuning Argument). But the question today is the relative probability of these two more popular ones. So, to keep things simple, I will pretend here as if these are the only contenders.
In actual science, conducted by real scientists in the field of cosmology, intelligent design is a fringe position at best. It has no foundation (unlike the foundations of multiverse theory, there is zero science establishing that the required supernatural or even preternatural powers or entities even exist). And it has never produced a successful peer-reviewed cosmological model (it only gets bandied about in philosophy; whereas it never succeeds as an actual explanation of cosmological observations). By contrast, multiverse theory is the leading position among actual scientists for exactly the opposite reasons: most of the leading theories of cosmology, which actually explain a large number of bizarre observations and rely on well-established findings of science (like quantum mechanics and nonlinear dynamics), happen to entail or fail to prevent multiverse solutions (see Six Arguments That a Multiverse Is More Probable Than a God). That is too unlikely an epistemic coincidence. It suggests that’s the real explanation for fine tuning.
But even from a mere philosophical perspective, we can treat Intelligent Design (ID) and Multiverse Theory (MT) as competing hypotheses, and ask (as we should) what they each predict differently than the other if they were true, and then go and look and see which observations bear out. In other words, we should ask, and answer, the question of what caused fine tuning with the scientific method, rather than the armchair “wishful thinking” methodologies of theology. Because epistemically it simply comes down to: which hypothesis makes our actual observations more likely. One might concede MT wins this battle over ID, and try to fight a battle over their relative prior probabilities instead, but theism performs poorly there once you start treating it correctly (see, again, A Hidden Fallacy in the Fine Tuning Argument). But the Boyce-Swenson thesis is that something about fine tuning as evidence makes MT less likely than ID, and that’s not an argument over priors, but likelihoods: they are claiming the evidence is less likely on MT than on ID. And their premier evidence here is “fine tuning” (or FT) all by itself.
Which gets us to what’s strange about this. Or maybe it’s not strange, given that Boyce and Swenson’s approach commits a typical error of almost all Christian apologetics: they leave evidence out that, when you put it back in, completely reverses their conclusion.
The Boyce-Swenson Argument and Its Critics
But we’ll get to that. First, let’s summarize their argument. Let’s set aside their ancillary claim that “If the [ID] hypothesis is false and there is only one universe, then it seems extraordinarily improbable that the fundamental constants of nature would just so happen to fall within the life-permitting windows.” That’s true but moot, as using that as an argument commits the usual Fallacy of Fine-Tuning Arguments: ignoring the deeply problematic question of the vanishingly small prior probability of a God (or any requisitely convenient Being), which no evidence makes any more likely than a single instance of random “lucky” fine tuning. Getting a God requires even more luck than that. So while in the SU model (a chance-incident single-universe model) the Likelihood Ratio will favor ID, the Prior Odds would not, and these would cancel out, leaving us none the wiser whether ID or SU is the more epistemically probable. These probabilities are both simply inscrutably small. Boyce and Swenson aren’t resting their argument on that, though, and I’ve already addressed this problem elsewhere. Moreover, MT eliminates even the likelihood disparity here, and thus is an explanatorily superior hypothesis to SU, particularly given that it now rests on solid scientific foundations and is not just some possibility posited out of hand. So let’s stick with that here.
Boyce and Swenson lean on the prior arguments of philosophers Ian Hacking and Roger White, who decades ago maintained that this reasoning was an “inverse gambler’s fallacy” (a concept invented by Hacking). As Boyce and Swenson admit, this argument only gets to a “you can’t tell either way” conclusion, and not a conclusion actually supporting ID. They want to push it further. But that’s a problem if the Hacking-White argument isn’t even applicable here—then there isn’t even a stool left for the rest of the Boyce-Swenson argument to stand on. And lo, Hacking’s application of this notion to MT has already been refuted: in an online paper written up by Darren Bradley, “A Defence of the Fine-Tuning Argument for the Multiverse” (May 2005), and in a peer-reviewed paper by John Leslie, “No Inverse Gambler’s Fallacy in Cosmology,” Mind 97.386 (April 1988), pp. 269–272. Both present formal proofs of the inapplicability of Hacking’s argument. But the gist is simply that Hacking (as also White) is ignoring the selection effect present in the cosmology case that isn’t present in their analogies—such as Hacking’s example of a gambler watching their first-ever roll of dice. As Bradley puts it, “a condition must be satisfied for an inverse gambler’s fallacy to be made that is not satisfied in the cosmology case.”
Boyce-Swenson’s Fatal Mistake
Their error is analogous to the mistake people make in the Monty Hall Problem, when they forget to account for the selection effect (Monty Hall had limited choices as to which door to open, thus rearranging the probabilities), which adds information to the scenario, and it’s that information that changes your conclusion. Leslie provided the most salient analogy:
You catch a fish of 12.2539 inches. Does this specially need to be explained? Seemingly not. Every fish must be of some length! But you next discover that your fishing apparatus could catch only fish of this length, to within one part in ten thousand. It can now become attractive to theorize that there were many fish in the lake, fish swimming past your apparatus until along came just the right one.
“No Inverse,” p. 270
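Leslie’s analogy is easy to quantify. A minimal sketch, using his one-in-ten-thousand catch window (the fish counts are my own illustrative numbers, not his):

```python
# Leslie's fishing apparatus only catches fish of one exact length, to
# one part in ten thousand: a selection effect. Whether a catch is
# *likely at all* then depends on how many fish swam past.
CATCH_WINDOW = 1e-4  # fraction of fish the apparatus can catch

def p_catch(n_fish: int) -> float:
    """Probability that at least one of n_fish is catchable."""
    return 1 - (1 - CATCH_WINDOW) ** n_fish

print(f"1 fish:       {p_catch(1):.4%}")
print(f"100,000 fish: {p_catch(100_000):.1%}")
```

With one fish, a catch is a one-in-ten-thousand fluke; with a hundred thousand fish, a catch is all but guaranteed. So a catch is far better explained by “many fish” than by “one lucky fish.”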
By contrast, Hacking-White are talking about a gambler rolling two standard dice, seeing them come up boxcars, and then inferring there must have been a lot of prior rolls of those dice—an obvious fallacy. Ironically, this is actually more of a problem for the theist, though that was not noticed by Leslie or Bradley: suppose the universe will go on for infinite years; that would mean any event of any probability will inevitably occur; but if that’s the case, the most improbable event could have occurred as likely at the beginning as anywhere else; consequently a single universe makes fine tuning by chance perfectly likely again, and you can only get away from that realization by resorting to Hacking’s “inverse gambler’s fallacy,” forgetting that a chance twelve on the dice is as likely to be the first roll as any other in a running series of rolls. So, really, it’s the theists who are committing the inverse gambler’s fallacy. They are using it to conclude someone cheated; but for exactly the reasons Hacking and White explain, we actually can’t tell from the first roll being lucky that anyone cheated (I’ve pointed this out before, in The End of Christianity, p. 293). Because that roll was just as likely to be lucky as any other.
But that isn’t the problem Leslie and Bradley point out. They are focused on the misapplication of the fallacy to MT. The difference is between a gambler rolling dice (or watching them rolled) and then explaining the result, and a gambler staring at a hundred rooms and being told they will only be brought into one of those rooms to see what was rolled if the dice roll twelve. The latter gambler has information that the former gambler does not: the selection that guarantees they will only get to see twelves. This actually allows the inference that other dice have been rolled, owing to a simple rule of probability: you are always more likely to be typical than exceptional. Which is actually a tautology—it just restates the fact that more probable things are more probable (more frequent) than less probable things. But that tautology has consequences here.
This is why a gambler rolling dice can’t say it is “more typical” for twelves to be rolled first, or last or anywhere in particular. But a gambler can say it is “more typical” for someone to win a lottery if that gambler wins a lottery. For example, if the gambler wins a lottery they know to be millions to one against, and has no other information than that, they can rightly conclude it is more likely that there are lots of lottery players and that people win all the time, than that that lottery has only been played once and only by them, and yet they won. That would make them exceptional (rare), not typical (commonplace). And they are more likely to be commonplace than exceptional. And of course evidence bears this out: pop the hatch and look around, and lo, there are thousands of people who have won lotteries; “millions to one” lotteries are won every month—it’s routine, not exceptional. While the external evidence could have reversed that conclusion (we could pop the hatch and see we were the only player all along), before we get to check that evidence it is (from our epistemic perspective) unlikely that’s what the evidence will turn out to be; because it is, literally, the least likely (the most uncommon) situation we could be in (of all the possible situations we could be in).
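The lottery inference can be put in explicitly Bayesian terms. A sketch with made-up numbers (the per-ticket odds and player count are purely illustrative):

```python
# Observing "a win is reported", under a selection effect where only
# wins ever get reported to us, compare two hypotheses:
#   H1: I was the only player ever.
#   H2: there are many players.
# All numbers here are illustrative.

p_win = 1e-6          # "millions to one" per ticket
n_players = 10_000_000

# Likelihood of a reported win under each hypothesis:
like_only_me = p_win                      # I personally had to win
like_many = 1 - (1 - p_win) ** n_players  # someone among many wins

bayes_factor = like_many / like_only_me
print(f"Bayes factor favoring 'many players': {bayes_factor:,.0f}")
```

The evidence “a win was observed” favors “many players” by a factor of roughly a million here; that is what “you are more likely to be typical than exceptional” cashes out to.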
So the Hacking-White fallacy does not apply to fine tuning, because fine tuning adds information: a selection effect guaranteeing we will only ever see a win—that, in effect, we will only ever reel in fish precisely 12.2539 inches long. In that case, it simply is far more likely that we are the victim of a selection effect than that we were “just lucky.” (I already pointed this problem out in my chapter on design arguments over a decade ago in The End of Christianity, pp. 196–98, with the corresponding notes on pp. 411–12. I find that theists, even mathematicians among them, have a very hard time understanding this point. Which may go to explain why they remain theists.)
In the open scenario, where a gambler encounters dice for the first time and then rolls them, the gambler has no information by which to decide whether that was the first ever roll of dice or a late roll of the dice—unless they have outside evidence regarding this, which is another problem with the Hacking-White argument: it ignores the fact that we have such evidence. We are not just waking up in The Matrix and tasked with explaining how we got there. We have a lot of external evidence bearing on the question, just as in the real world a gambler knows dice have been rolled a gazillion times already and are being rolled a gazillion times around the world as they themselves roll theirs. In the real world (not the fictional fantasy world Hacking-White invented), gamblers know things—they have evidence regarding how often dice have been and are being rolled, and even regarding the physics of dice. Just as we now do regarding fine tuning—we have a lot of external evidence bearing on the question of what caused it, by which we can comparatively test explanations like ID or MT. And test them against that evidence we must. But the point here is that in addition to all that other evidence (which I’ll get to), fine tuning itself already gives us a crucial piece of evidence: that our observation has already been selected by the thing we are trying to explain. We are the gambler in the closed lottery scenario (with the many closed rooms), or Leslie’s fishing scenario. We are not the gambler in Hacking-White’s scenario. We have more information than they do—already. And we get even more when we pop the hatch and look around. Which we have.
So already the Boyce-Swenson argument is screwed. Their reliance on the invalid Hacking-White thesis collapses their entire case before it even gets started. As Bradley puts it (my emphasis):
The Inverse Gambler’s Fallacy is only committed if the specific evidence refers to a trial that has the same probability of existing in any relevant possible world. [But o]ur universe is more likely to exist given that there is a Multiverse rather than a Universe. So this objection to the Fine-Tuning argument for the Multiverse does not work.
And this inference follows from the information given to us by the inherent selection effect of “fine tuning.” Since, on MT, we will only ever observe a finely tuned world (since no life arises in other worlds), this fact gives us information about how we got here, such that, if we have independent reasons to doubt ID (as we do: see, for example, The Argument from Specified Complexity against Supernaturalism and Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them), then fine tuning is evidence for MT. Others have noted it’s even worse than that, that FT is in fact evidence against ID (a surprising but unavoidable fact, which I’m getting to). But the point here is that FT by itself makes MT more likely than SU, and therefore is evidence for MT. This is not an inverse gambler’s fallacy.
At most one can say here, “But ID also makes FT likely; so FT alone can’t help us distinguish between ID and MT,” which is true (if we grant the premise, the conclusion follows), just as one can say, “But ‘someone cheated’ also makes my first roll being twelve likely; so my first roll being twelve can’t help me distinguish between ‘someone cheated’ and my just being lucky.” But that is not the situation we are in. We have information bearing on whether someone cheated, and that information renders it unlikely—but even with no information, we’d still need a lot more evidence than “my first roll was a twelve” to get us to “someone cheated.” Analogously, we have information bearing on whether ID caused FT, and that information renders it unlikely—we’d need a lot more evidence than FT to get us to “ID caused it.” And as it happens, when we go looking, all the evidence goes the other way. ID simply doesn’t pan out as likely. This is another common error in apologetics: Misunderstanding the Burden of Proof. And that’s even granting the premise—when, actually, the premise that “ID makes FT likely” happens to be false. God has no need of FT in exactly the same sense as God has no need of a starship.
The Boyce-Swenson argument ignores this and tries to build on the Hacking-White thesis such that if we grant that ID is a more probable cause of FT than MT, then this being the case makes MT less likely than SU. But this is a circular argument. You have to start by concluding you don’t need MT to explain FT in order to end up concluding you don’t need MT to explain FT. This is unsound logic. Whether ID is a more probable cause of FT than MT is precisely the thing we are supposed to be asking, not presuming. And at any rate, it is tautologically obvious that if you prove God made our world, it’s then unlikely he used a Multiverse to do it. They grant the possibility he did; their argument is simply that it’s not the most likely conclusion in that case, which is fair enough. One might challenge that, but I have no interest in doing so. Because this is a very uninteresting conclusion. And the fact is, they never establish this tautological premise anyway; it is in fact false. And they never address any of the scholarship that already proved it false.
So, in the end, Boyce and Swenson’s argument proceeds as follows: they presume ID renders FT likely whereas, owing to Hacking-White’s inverse gambler’s fallacy, the uncertainty of MT supposedly renders FT a 50/50 prospect at best, an “unknown” rather than a dead certainty (in effect saying “you can’t tell from the dice coming up twelve the first time that there were any other rolls of the dice,” which means it’s 50/50 at best that there were); and this then allows their conclusion that FT more likely correlates with ID than MT, which entails MT is therefore not likely. But this is wrong seven ways from Sunday. There is no evidence FT is likely on ID. Whereas there is evidence that is unlikely on ID but likely on MT. And since MT does not implicate Hacking-White’s inverse gambler’s fallacy, the conclusion that MT explaining FT is at best “50/50” is simply false.
To the contrary, given MT, P(FT) is effectively 100%: there will then be, to a certainty near 1, at least one life-containing universe and therefore at least one FT observation; and indeed, per the Hacking-White fallacy, we are as likely to be it as any. The most they could try to get is, given ID, P(FT) is also effectively 100%, leaving us at a wash between ID and MT in explaining FT. They can’t even get to that, though, owing to the fact that the premise “ID entails FT” is false. But assume they nevertheless could. That still disallows the Boyce-Swenson thesis. Because you can’t get from “ID and MT are equally likely given FT” to “ID is more likely than MT,” and therefore you can’t get to “MT is unlikely.” It all collapses like a house of cards—once you realize the error they made at the very beginning.
Formal Identification of the Error
This error formally appears when Boyce and Swenson claim P(L|¬T&S&K) = P(L|¬T&¬S&K), or P(Life|not-Theism, and Single Universe, and everything else we know) = P(Life|not-Theism, and not-Single Universe, and everything else we know), i.e. that the probability of life without ID is the same whether a single universe exists or a multiverse. In fact it is false that a single godless universe is as likely to generate life as a multiverse. That’s like saying a single draw of poker is as likely to generate a royal flush as a million draws of poker. Sorry, but, no. They have screwed up here, by an equivocation fallacy, confusing “this universe” with “a universe.”
Boyce and Swenson are actually saying “the probability that this universe would be life permitting is the same” on SU and MU, which is a tautology (the isolated—as opposed to total—improbability of this is the same whether SU or MU; just as the probability of drawing a royal flush right off the top of a deck of cards is always the same), and then confusing that with, “the probability that there would be any life-permitting universe is the same.” Which is false—owing to their misapplication of the inverse gambler’s fallacy. Instead, as we all know, the probability of someone drawing a royal flush is increased by the number of draws, so add in the selection effect (you only get to look at what was drawn if it is a royal flush), and then “there were a million draws” is always more likely (by far) than that there was only one.
This destroys their argument, because it absolutely depends on this false premise, that the probability of life without ID is ‘the same’ whether a single universe exists or not (Boyce & Swenson, pp. 9–10). This mistake is the same as saying the probability of a royal flush is always the same, therefore it is the same whether there were a million draws or only one, which is false. The inverse gambler’s fallacy requires that there be no selection effect, such that the gambler cannot know whether they are at the first draw of a series or a later one. But with fine tuning, there is a selection effect. Our pole only catches fish exactly 12.2539 inches long. And thus, our catching a fish is simply far more likely if there are many fish of varying length. So, too, for a multiverse explanation of our existence.
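The card analogy can be made exact. The per-draw probability is fixed (the tautology), while the probability that a royal flush occurs anywhere in a series of draws grows with the number of draws (the thing conflated with it). A quick check:

```python
from math import comb

# Exact per-draw probability: 4 royal flushes / C(52,5) five-card hands.
p_flush = 4 / comb(52, 5)

def p_any_flush(draws: int) -> float:
    """Probability at least one of `draws` independent draws is a royal flush."""
    return 1 - (1 - p_flush) ** draws

# The per-draw probability never changes, no matter how many draws occur...
print(f"per draw:           {p_flush:.2e}")
# ...but the total probability of one occurring somewhere does:
print(f"in one draw:        {p_any_flush(1):.2e}")
print(f"in a million draws: {p_any_flush(10**6):.2f}")
```

Conditioning only on royal flushes ever being shown to you, a seen royal flush is far better explained by many draws than by one, even though each individual draw is equally improbable.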
The Genuine Bayes Factor
In any genuine (and not sham) Bayesian argument, the final (or “posterior”) probability of a hypothesis is only correctly describable as P(h|e&b), where h is the hypothesis, the symbol “|” means “given that,” and e is all the evidence one can adduce for or against h (“evidence”), and b is all other human-available knowledge (“background data”). The notable thing here is that e and b must be exhaustive of all available knowledge: you cannot leave anything out. If you know something, it goes in. Apologetics always operates by leaving things out (see Bayesian Counter-Apologetics).
If you derive a P(h|e&b) but e and b were not complete (or you ignored the effect of their contents on P, which amounts to the same thing), your conclusion does not describe reality. It describes a fictional alternate reality in which what you left out doesn’t exist. But it does exist. So the only way you can claim your P(h|e&b) applies to the real world we actually live in is if you complete e and b and properly derive P therefrom. This never goes well for the apologist. Which is why they always avoid doing it, and hope you don’t notice (and you probably won’t notice, unless you study how all empirical arguments are Bayesian—which is why understanding this is absolutely crucial to modern critical thinking now).
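The effect of leaving evidence out of e is easy to quantify with Bayes’ theorem in odds form. A sketch with hypothetical likelihood ratios (the numbers are mine, purely for illustration of the mechanism):

```python
def posterior(prior: float, *likelihood_ratios: float) -> float:
    """Posterior P(h|e&b) via odds form: prior odds times one LR per item of evidence."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior_h = 0.5
lr_kept = 10.0      # the one piece of evidence kept in e (favors h 10:1)
lr_omitted = 0.001  # the evidence left out of e (disfavors h 1000:1)

print(f"with e incomplete: {posterior(prior_h, lr_kept):.3f}")
print(f"with e complete:   {posterior(prior_h, lr_kept, lr_omitted):.3f}")
```

Omitting one unfavorable item flips a ~91% posterior into a ~1% posterior. The incomplete calculation describes a world where that evidence doesn’t exist; it does.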
When asking what predictions ID and MT differentially make—what observations they predict differently from each other—we get entirely different results than Boyce and Swenson. First, of course, ID predicts far more intelligent and values-based design and governance in the world than we observe. The Argument from Evil is actually a contra-design argument: it is pointing out what ID predicts but fails to show up; whereas the actual availability and distribution of evils in the world is exactly as expected on MT: indifferent to any intelligent arrangement whatsoever, apart from entirely human intervention. But we will set that aside today (though it is relevant evidence: it has to go into e in any argument to design). Let’s more impersonally ask what we’d expect. Because when we do that, we still get very crucial observational differences.
For example, if MT caused FT, then we would expect to observe some peculiar things to a very high probability—indeed an arbitrarily high probability, as only Boltzmann effects could get a different result, at probabilities so small as to be beyond absurd, and thus not at all possible to expect. Because given MT (and “not ID”) life can only arise by a highly improbable chance accident, and therefore can only be expected if there has been a vast scale of random trials. And this entails we should expect to see a universe of vast size and vast age that is almost entirely lethal to life. Which is what we observe: the universe is dozens of billions of lightyears in size (at least), over a dozen billion years old (again, at least; as this is only the age of the locally observed portion of the cosmos), and almost entirely a lethal radiation-filled vacuum. Even “places to exist” here are almost entirely life-killing stars and black holes, while even places “not that” are almost entirely lifeless perches—frozen, irradiated or violent—incapable of habitation. The scant few places life even could arise are self-evidently the product of random mixing of variables, like a planet or moon’s distance from a star, its size, chemical composition, and the happenstances of its local astrophysical history—variables we indeed see scattered in random variation across the universe.
This randomness, and those indicators of randomness as a cause, are what we almost certainly will observe if MT is what caused FT. Whereas none of this is predicted or expected if ID is true. In fact, ID sooner predicts the opposite (a young, small, well-arranged, and highly hospitable universe). But even if you are too gullible to accept that, you cannot rationally deny the fact that this is not what ID predicts (you have to gerrymander ID to get this result, with a bunch of ad hoc emendations that lack any inherent probability)—but it is exactly, and peculiarly, what MT predicts (no ad hoc emendations needed). Hence Why the Fine Tuning Argument Proves God Does Not Exist.
Ultimately, you can’t escape the mathematical consequences of this observation with gerrymandering. You might want to fabricate a convoluted “just so” story to explain why your hypothesized designer wanted to engineer the universe to look, weirdly, exactly like a universe would be expected to look if no such designer existed, but that just moves the improbability of your hypothesis around inside the equation, it can’t get rid of it (see The Cost of Making Excuses). The posterior probability ends up the same: disfavoring ID as an explanation. God can just make universes work. Because Gods are magic. They aren’t constrained by local physics. So they don’t need FT. Whereas, without God, FT is the only kind of universe that can contain life. So, if ¬ID, then the only way life can ever observe itself existing is if FT, which means P(FT|Life&¬God) = 1. But because Gods don’t need FT to make universes life-hospitable, necessarily, P(FT|Life&God) < 1, and therefore P(FT|Life&God) < P(FT|Life&¬God). FT is therefore evidence against ID, and thus for MT; not the other way around.
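That inequality is all the math needed. In odds form (using an illustrative value for P(FT|Life&God), since the argument only requires that it be below 1):

```python
# P(FT | Life & no-God) = 1: without a god, observers can only exist in FT worlds.
# P(FT | Life & God) < 1: a god has other ways to host life; 0.5 is illustrative.
p_ft_no_god = 1.0
p_ft_god = 0.5

# Likelihood ratio from the evidence FT, favoring no-God.
# It exceeds 1 for ANY p_ft_god < 1, however close to 1 you set it.
lr = p_ft_no_god / p_ft_god

def updated(prior_no_god: float) -> float:
    """Posterior probability of no-God after conditioning on FT."""
    odds = prior_no_god / (1 - prior_no_god) * lr
    return odds / (1 + odds)

print(lr)            # 2.0
print(updated(0.5))  # FT shifts even odds to ~0.667 against ID
```

Whatever value below 1 you assign P(FT|Life&God), the update always moves toward ¬ID; only the size of the shift changes.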
Boyce and Swenson can only get the opposite result, first, by screwing up the probability of L on MT (with their erroneous nonsense about inverse gambler’s fallacies), and then by leaving out all the evidence actually pertinent to telling apart MT and ID as causes of FT. They are thus not only screwing up the math, they are also “rigging the evidence,” hiding Oz behind a curtain (“Do not look behind the curtain!”). Which is not legitimate. The result is that their conclusion simply does not apply to reality. It only applies to a fictional, non-existent world they have invented in their heads, one that doesn’t have all this differential evidence for MT and against ID as a cause of FT.
Indeed this is true even without MT: even on SU, FT is always evidence against ID. The most one can get is more evidence for ID, or a higher prior for ID, to counteract the dis-favorable evidence of FT; but theists never have that. Theists are thus stuck needing to establish ID despite FT being a manifestly weird choice by a Creator, making the universe look exactly like a universe would have to look if there was no Creator. Which is an embarrassing position for them to be in. And this was all proved years ago: see Elliott Sober, “The Design Argument” (2004), the latest version of which is now in The Design Argument (Cambridge University Press 2018); and Michael Ikeda and Bill Jefferys, “The Anthropic Principle Does Not Support Supernaturalism” (2006), an earlier version of which appeared in The Improbability of God (Prometheus 2006); all of which are summarized in my article on Why the Fine Tuning Argument Proves God Does Not Exist.
Because of all this, MT is a much better explanation of the observed facts than both ID and SU.
Conclusion
There is in fact a lot of evidence supporting MT that does not support ID, even apart from what I mentioned here, as again I relate in Six Arguments That a Multiverse Is More Probable Than a God. It’s also possible to derive MT from an initial state of absolutely nothing (see The Problem with Nothing). Because not appreciating the power of randomness to inevitably generate order is a common failing among theists (see Three Common Confusions of Creationists and The Myth That Science Needs Christianity). Theists also tend to get everything wrong about the actual facts and what they really entail (see Justin Brierley on the Science of Existence).
This happens even in Boyce’s blog, when he says “the degree of fine-tuning required for the cosmological constant to have a life-permitting value has been estimated, for example, to be 1 part in 10^120!” That isn’t true. Its being untrue is of no importance, since Boyce and Swenson don’t rely on this claim in their conference paper, and one can replace this mistake with real examples (like the value of the alpha constant). No one doubts apparent FT. But the cosmological constant isn’t an example of it. The figure Boyce cites is, in fact, “The discrepancy between theorized vacuum energy from quantum field theory and observed vacuum energy from cosmology” (emphasis mine). In other words, that number only exists because our theory of quantum mechanics fails, predicting a wildly different result. This means our theory is wrong. It does not mean anyone “tuned” the universe that far away from our theory. In practical fact, the vacuum pressure this constant measures can vary a great deal (by a factor of even a hundred or more) and still generate life (even more so when you allow other constants to vary). And even that constraint already depends on an untenable assumption: that you can change the average vacuum energy of a universe without changing anything else. That seems unlikely. But this gets into all the problems with actually nailing down what even has been tuned or could be, what is even possible or likely in the first place (see my discussions here and here).
The formal presentation of Boyce and Swenson avoids this problem by simply punting to other publications for the fact of FT. So getting FT wrong isn’t what’s wrong with their argument. What’s wrong with their argument is ineptly thinking that concluding the probability of FT on MT is high is an inverse gambler’s fallacy. It’s not. It’s a straightforward likelihood: given MT, there will be FT (to an arbitrarily high probability, and hence effectively a probability of 1). The question of whether FT is then evidence of MT depends on the probability of FT on ~MT, which we all agree is low on ~ID and which Boyce and Swenson insist is high on ID. But they are wrong about that. Since ID has no need of FT, and in fact using FT to get life is a really weird and unnecessary thing for ID, indeed wholly unexpected (as it makes the universe look exactly like a universe with no ID in it, an even stranger choice for ID), P(FT|MT) is logically necessarily always higher than P(FT|ID). FT is therefore always evidence for MT over ID.
This follows even if you build out a model of ID that also entails its own MT (as Boyce and Swenson consider), because FT remains unnecessary and peculiar even in that scenario. A God who made a bunch of universes would make them all look like wonderlands governed by his magical will; he would not bother with the callous, clunky, and absurdly bizarre tinkering of trivialities like the mass of the top quark. God has no need of quarks, much less maintaining hyper-specific masses for them. Only godless universes need these sorts of things, as without God, only these sorts of things can generate life. The existence of quarks alone is thus evidence against ID, in just the same way the Argument from Evil is. Hence what makes MT entail FT is the absence of ID.
You can test this with a hypothetical array: imagine all logically possible universes in the set of all universes lacking ID; how many observers in that set will observe themselves in an FT universe? All of them. Because in ~FT worlds, observers never arise (whereas they could, indeed even likely would, if ID is true). Thus, the probability of that observation (FT), on that condition (~ID), is 100%. That only leaves the question of the prior probability of FT on the assumption of ~ID, which is again high on MT, arguably low on ~MT (i.e. on SU). Sure, ID also has an excruciatingly low prior (that’s the point of A Hidden Fallacy in the Fine Tuning Argument). But that’s a separate tack. Here the point is: when you are using FT as evidence, and not trying to argue over its relative priors, then FT is always evidence against ID. Because it’s just always more likely to be observed on ~ID. This cannot be gotten around, least of all by falsely pretending this is an inverse gambler’s fallacy.
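To make the likelihood reasoning concrete, here is a minimal Bayes-factor sketch in Python. All the probability values are illustrative assumptions chosen for the example, not measured quantities; the only thing the argument itself fixes is that P(FT|MT) is effectively 1 and P(FT|ID) is below it.

```python
# Minimal Bayes-factor sketch of the likelihood argument above.
# All probability values here are illustrative assumptions.

def bayes_factor(p_e_given_h1, p_e_given_h2):
    """Likelihood ratio P(E|H1) / P(E|H2): how strongly evidence E
    favors H1 over H2, independently of their prior probabilities."""
    return p_e_given_h1 / p_e_given_h2

# Selection effect: observers in a godless multiverse always find
# themselves in an apparently fine-tuned universe.
p_FT_given_MT = 1.0
# On ID, fine tuning is unnecessary and strange (assumed value).
p_FT_given_ID = 0.1

print(bayes_factor(p_FT_given_MT, p_FT_given_ID))  # 10.0

# The direction of support holds for any P(FT|ID) below 1:
assert all(bayes_factor(1.0, p) > 1 for p in (0.9, 0.5, 0.01))
```

However one sets P(FT|ID), so long as it is below 1 (and the argument above says it must be), the ratio exceeds 1; that is, observing FT always shifts the odds toward MT and away from ID, whatever the priors are.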
Could you explain whether the MT in this argument is either: A. A universe that exists like a bubble in a carbonated beverage, emerging inside what we could call the “Super-Universe” i.e. everything in total that exists, or B. An “Everett’s Many Worlds” universe, or C. both of these, i.e. any other universe in the multiverse like ours (with similar QM) also has “Everett’s Many Worlds”?
I don’t agree with some of the conclusions you made in the 27Jan2018 post regarding simulation theory, and I’m trying to wrap my head around whether this argument for a multiverse vs. FT/SU applies to simulation theory in the same way. After listening to or reading guys like Bostrom and Tegmark, I found myself thinking, “You sound like a creationist (again).”
You wrote in that blog: Bostrom himself claims all three must be equally likely, so far as we know.
I just listened to him on a StarTalk episode: Neil deGrasse Tyson, who had believed in ST and then been convinced against it, had Bostrom on the show.
Ending conclusion: Tyson was back to accepting, “yeah, we probably are in a construct.”
So, perhaps in 2018, Bostrom had a 1/3 prior on each of the elements of his argument, but it doesn’t seem to be the case now. He “punted” on the question of “what odds do you put?” but it’s obvious from his discussion that he places posterior odds on simulation theory very high. I’d speculate he’s close to 1, but that’s just my inference based on how he evades the question.
You wrote then: There actually isn’t any use for “ancestor” sims; and they are unthinkable to any species ethical enough not to destroy itself before acquiring the ability to make them.
This is, seemingly, conjecture, and seems to dismiss the fact that in a multiverse there are as many infinite varieties of evolved beings that reach the ability to “compute” as there are infinite variations of “fine tuned” universes capable of producing some kind of life that can get to “I think; therefore I am.”
Certainly in some universes, felines are the top evolved being, and have no problem creating universes to torture mice.
But never mind that: there’s actual proof this idea is wrong: children.
I have three kids (to be honest, only one was “planned”). In intentionally producing offspring, I did exactly what you’re claiming nobody would do. My children are subject to theft, murder, rape, torture, war, disease, and these are first world kids. People in the third world intentionally bring children into the world every day knowing full well their lives will likely be hellish and short, such that, had they been given the choice, they might have said, “yeah, no thanks.”
The ultimate question here is: “Would you cease to exist if given the choice?” If not, then you’d also not tell a simulating intelligence not to make you in the first place. You’d accept the risk of life axiomatically since anyone not choosing suicide is agreeing to play the game.
Going to Bostrom as you’ve quoted in the 2018 blog:
(1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
In a multiverse this would be essentially impossible, as argued here. There would necessarily be infinite numbers of posthuman stage civs.
(2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
This seems impossible, given the fact that humans purposefully have children (beings they care more about than strangers). Even if such interest were very rare, to postulate it would never happen seems strange: it requires imagining all advanced civilizations in all possible universes having the same ethics. It seems axiomatic that that fraction can’t be anywhere close to zero.
Another way to ask this: let’s imagine tomorrow, every human has a popup screen appear in the air in front of them that declares “You’re in a simulation. You can end the simulation now if you’d like, but it also ends the entire universe.” Of course we’d all cease to exist, for certainly someone would hit “exit.” But what if it were a majority vote? Or what if ending the universe required a supermajority?
Undoubtedly, there is a non-zero number of us who would elect to continue in this life, even knowing that we’re going to witness a lot of suffering among others and quite likely suffer ourselves.
(3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
Seems a reasonable conclusion, and one that is, seemingly, accepted by a lot of smart non-theists.
To come full circle, I realize the simulation theory (which I’m really leaning toward believing is true, not that it matters) makes me sound like a creationist (I say again, as I used to be a YEC).
And this bothers me.
But so does Everett’s Many Worlds.
I went back and re-read one of Robin Hanson’s pieces about how we’re evolving into something different (i.e. humans today live in a world that would be unrecognizable to earlier humans, and thus our descendants will be unrecognizable to us in a similar way). Coupled with Geoffrey Hinton’s (a godfather of AI) statement that we’re probably just a stepping stone in evolution to a post-physical (or post-carbon-based) intelligent race, it seems that in the next hundreds (or thousands) of years AI will likely replace and/or co-evolve with humanity; and in such a world, it’s not a stretch to think they (or a government, corporation, or religion) would create simulations.
Since we have the Mormons as an example of people who believe the good ones in their religion will become gods and make new planets to populate (not much of a difference, ethically, between that and simulation theory), it seems to me that postulating this has happened isn’t too much of a stretch (going back to Bostrom’s argument for ST). And I can further see that, in both MT and Everett’s Many Worlds, there’d be universes created by beings who did interfere and universes created by beings who did not (our universe, as an example).
It seems axiomatic that the future dominant species on earth will be very different from us. Sam Harris says, “We’re creating a god…” and Judea Pearl says, “We’re creating a new species.”
Why assume this new species, or “god” if you will, is going to reflect or have our ethics and values?
Why assume that it won’t see primitive humans the way we see our fish in our aquariums or our animals in zoos?
And further, why postulate this superintelligence is the first to exist in this universe or any universe?
And none of this is an argument for theism. I think throwing “compute” into any Bayesian calculation makes Christianity’s likelihood so close to zero we can declare it basically zero. But this wouldn’t preclude a universe existing with a tinkering god (or gods). And if we model such a universe, well, my experience is that I live in a world with either a sadistic creator or one that created the circumstances for good and bad outcomes. Which brings me back to the same thought I had when I chose to have kids: yeah, life can suck, but it can be good; and at the end of the day, would you rather exist or not exist?
If you answer, “sure, I’ll take existence,” then that’s axiomatically a logical reason why creating universes isn’t something an advanced society would necessarily and by default deem unethical.
Michael: While Richard has written against simulation theory, it’s definitely far more serious than theistic intelligent design, such that theists choosing their ID over sim theory is an indication of ideologically motivated reasoning. That’s because it more closely matches what we see. We don’t expect software engineers anywhere to be perfect, or to necessarily make heavens. They would be constrained by their own naturalistic parameters.
By the way, that actually leads us to one problem with simulation theory. We naively would expect outright glitches. Like we see in every MMO ever made. “But the sim designer is perfect” is effectively “But God is omnipotent”.
But, seriously? You’re going to just argue that ancestor sims have no downsides? Well…
1) All sims computationally must be run in a computer that is at least as large as the sim itself, if the sim is perfect. And it’s utterly against all evidence we’ve seen that there would be civilizations that would just throw away universe-sized computers for…
2) Pointless sims, because that is what an ancestor sim is. Software designers that could make a universe wouldn’t need the data from such a universe. Maybe they’d be running some weird counter-factual, but that’s a highly unusual circumstance, especially since…
3) An ancestor sim is an act of such colossal and pointless evil that any civilization that had people who could throw away entire universes worth of material to build a sim computer and no one to stop them would probably have killed themselves.
Now, can you imagine some specific weird society that can throw away resources on an ancestor sim and is willing to do so? Sure, but the God theorist can specify a God that would want to make this specific universe. That’s when the sim theory does become just theism or a Cartesian demon again: When people are trying to save the theory from falsification.
So when you ask why assume that an intelligent civilization wouldn’t look at us like fish… no one is. The sim theorist is the one assuming that, implicitly. If you stop assuming it, then you have to ask, “How likely is that?” And even if you think most future civilizations would be that sociopathic and callous (and, remember, pointlessly so, since all of the monumental evil is for at-best barely-useful data), surely not all of them are. Which means that alone puts the probability of ancestor sims explaining the evidence nowhere near 100%. Whereas this is exactly what a multiverse or single-universe naturalist universe would look like.
But the issue, of course, is that the assumption contradicts the evidence we do have. (And, yes, that evidence is only the one sapient species we have to extrapolate from, but you don’t just get to ignore that). You mention fish, but… I would actually feel quite bad about pointlessly creating a universe where a bunch of fish were created and died, and endured pain. That strikes me as pretty basic empathy. And we’re not fish. No advanced civilization could fail to see that. Maybe their cognitive ability would vastly outpace ours, but they would still be able to see entities that have autonomy and a sense of self. It still would be pointless cruelty.
See, the classic issue with any ID approach is that the horrible evil in this universe is random and pointless. Compare it to the kid drowning an ant-hill. At least then the kid gets immediate satisfaction. But even a sadist would build a universe that would be far more cruel than ours, and get their rocks off immediately. This is a universe where you wait billions of years to get that action. (And “they don’t care” doesn’t cut it).
Now, even in the universe as we have it, if sim theory created really interesting and novel predictions that were fulfilled, it would be worth taking seriously. But so far it hasn’t. And the idea being interesting to think about doesn’t make it valid. Basically, the theory can put up or shut up.
A minor correction to your overall point: I don’t think end-stage sim tech would exhibit detectable glitches. If you notice, apart from mechanical damage, the human brain is not subject to software crashes, and what count as “glitches” are simply normed into the sim (cognitive and optical illusions, memory revision, etc.), and this is due to its design.
The brain is running a sim (of our world and of our selves) on a vast neural net, such that a glitch might affect a few neurons, but the system as a whole will compensate by maintaining a consistent simulation with the inputs available. I think by the time we can build simverses, they will be run similarly, such that what we call “glitches” will be functionally invisible to those inside the sim, and ultimately self-correcting.
Rather, what is more unavoidable is that designers would actually design things. And this is what is missing from the evidence: any indication of intelligent design (any plan, governance, anything), even by hypothetical alien programmers. All the evidence corresponds to chance accident as being foundational rather than design (the universe is random and capricious; it’s unnecessarily large and old; it’s inefficiently chock full of useless junk).
This would have to be deliberate, the designers choosing to make our universe look like a completely undesigned one, which requires some convoluted esoteric reasons, which makes sim theory just another conspiracy theory. It’s no different than positing lizard people secretly run the governments and corporations of Earth. Sure. It’s possible. But odds don’t favor it. Not even by a longshot.
Simverse proponents are thus just like theists: they know their theory is contradicted by all the evidence, so they have to engage in elaborate, ad hoc, “just so” storytelling to keep their theory alive.
Richard: That’s possible, I suppose. I’m thinking about it from the perspective of software engineering, though. I think it’s dubious to think that end-stage software engineering would have absolutely perfect coding. I think there’s a strong argument to be made that that differentiates naturalist universes from simmed ones: Simmed ones have to be made by beings who can always, informationally, make an error.
Certainly the presence of a glitch like, say, a galaxy that just suddenly seemed to appear as if there was some kind of draw distance glitch, or was compressed due to some spawning or memory retention glitch, would be decisive evidence for a sim.
Aside from that, yes, I agree with you wholly that this is a really boring universe from the perspective of a designer with unlimited ability to design, and so the only possible motive to make a universe like this (especially one where it is visibly true that we’re being left alone and some Saint’s Row psycho isn’t running around the street with god mode cheats on) would be as a model.
Michael: My motives are moot. I agree, if I found out I was in a sim I wouldn’t try to commit mass suicide. I would try to find a way of breaking out and punishing, or at least talking to, the asshole who made a universe full of pain when they didn’t have to. The objection isn’t to my motives, it’s to the sim designer’s, and my objection isn’t just moral, it’s also probabilistic. I think it’s exceedingly unlikely that someone would engage in such rampant pointless sadism with such a high cost.
Yes, one can argue that an advanced sim maker would be beyond our comprehension… if you want to just assume that they have to be like that and not merely technologically advanced but still running on finite intellect hardware as conscious instantiations (which is logically possible and thus takes up probability space). But that’s just an admission that the theory is unfalsifiable, and thus not worth considering. If one wants to take it seriously, one has to look at the existing evidence, which means looking at the conscious entities we have to work from. It’s not reasonable to imagine wildly different entities evolving in some precursor universe that manage to be that evil while also maintaining a civilization. It’s possible, I suppose, but it’s inherently unlikely.
We’re probably not alone. Does that mean that we’re a sim? No. It means there’s probably other sapient species somewhere in other galaxies.
And we got to making rockets decades ago, and yet we’re still not on Mars. It’s a challenge even to go to our Moon. “Technological progress was smooth and easy in a lot of areas”, even if true (and it wasn’t, scientific progress is complex), doesn’t extrapolate infinitely any more than extrapolating from our behavior necessarily does. In particular, hitting sharp limits on the resources you’ve got to get out and try to find more seems to be a very real problem. It’s wholly possible that the kind of planets that support life can never be big enough to give civilizations enough resources to get to anything like interstellar travel. Which is a prerequisite for an ancestor sim running our universe, or even our solar system.
Similarly, saying “Once AGI takes off [a thing that may not actually be possible yet because we’re extrapolating], it’ll be impossible to predict what to do with it” not only is false (It’s not impossible to say that civilizations won’t throw away universes worth of materials to run pointless sims, because that’s saying that they have the rationality of a teenager), but is also moot. You don’t know what you don’t know. What indications we have of the universe being a sim aren’t dispositive enough to make it a good theory. If someone can start getting a sim theory that works, in that it makes falsifiable predictions that turn out verified rather than falsified, great.
We actually don’t run amazing simulations in our brains. Seriously, try it. Try free associating indefinitely. I’m a GM and so I’ve tried “playing” my own characters, even if just for content. It doesn’t work. I can create a lot, but I see patterns, I see my own handiwork behind the curtain. This is why we invented collaborative storytelling, and tabletop roleplaying, and writer’s rooms, and fan fiction. Because in fact individuals can only create so much. And our models are not internally consistent. That’s why there are these things called “plot holes”. We run impressive simulations on kludged hardware. We don’t run actual universes with all the actual computation.
Which we know because, when we end up doing it for the universe we actually live in, we need to brute force the three body problem, and do a lot of iterative calculation and projection, and deal with chaos theory. The fact that our brains did not intuitively arrive at any of that shows that our brains aren’t doing a mathematically-valid sim.
Regarding #2: You cannot simultaneously argue that this is a species with immense quantum computing which has clearly cracked the secrets of the universe and also that it somehow has a mysterious need to run a model of a universe it knows how to run because it already did. This is just God speak in disguise. “You can’t apply human behavior to God”. There’s just no plausible range of sapient species that would have evolved that would be simultaneously smart enough to be able to run such a sim and also dumb and evil enough to actually do it. It’s as inconsistent as a loving god that makes hell.
So do I know that such people have to be good? No. So let’s take a probability analysis. How many random sapient species capable of building an ancestor sim would be evil enough to do so? Think it’s 10%? 1%? It’s clearly not 100%; I’d argue it’s vanishingly unlikely. But however unlikely it is, what we see is 100% expected on there being no design.
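The probability analysis being gestured at here can be made concrete with a toy model. Every number below is an assumption chosen purely for illustration, not an estimate of anything real; the function and its parameters are hypothetical constructs for the sketch.

```python
# Toy model of the probability analysis above. All numbers are
# illustrative assumptions, not estimates of anything real.

def sim_share(frac_running, sims_per_civ, pop_ratio):
    """Expected fraction of all observers who are simulated, given:
    frac_running: fraction of sim-capable civs that run ancestor sims
    sims_per_civ: ancestor sims each such civ runs
    pop_ratio:    simulated population per sim / natural population per civ
    """
    simulated = frac_running * sims_per_civ * pop_ratio
    return simulated / (simulated + 1.0)

# Bostrom-style picture: nearly every capable civ runs many sims.
print(sim_share(1.0, 1000, 1.0))   # ~0.999: most observers simulated

# If running ancestor sims is a rare aberration, the share collapses.
print(sim_share(0.001, 10, 1.0))   # ~0.0099: almost no one simulated
```

The point of the sketch: the conclusion that we are probably simulated requires the "runs ancestor sims" behavior to be near-universal among surviving capable civilizations; make it a fringe behavior and the posterior collapses toward zero.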
This is, yet again, saying “Because we don’t know, my theory is right (or even worth considering)”. If you don’t know, you don’t know. “It’s possible they have some motive totally beyond what the data we have would predict” is an admission that the theory isn’t probable and so not worth considering.
And, yeah, I can say Sam or Judea Pearl are wrong. By your own admission, how could they be right? They don’t know the future post-singularity either. And yet they’re happy to speculate. So why can’t I? Right, because this is an argument from authority. Seriously.
Again, if supercivs see us like we see cows, they would not make us. I would not make billions of cows not to eat them but just to sit out in a pasture somewhere and die, pointlessly. Your own analogy proves the point.
So, yeah, you just didn’t engage with my points. Again, a sim theory isn’t impossible. It’s just not very good. And “Well, maybe there’s some reason all the evidence makes my theory look unlikely that we can’t anticipate right now” is the excuse only a bad theory needs to offer. It didn’t cut it for string theorists who had an infinitely better case, it doesn’t cut it for sim theory.
I agree. What I am saying is that P(glitch-observation|sim) is too low to be useful in an argument. Our sim is run on such a small bit scale (the Planck scale alone is dozens of orders of magnitude smaller than a meter of distance or a second of time) that glitches won’t be massive coordinated events like that. They will be drowned out by the noise of the rest of the sim, too small or fleeting to notice. A neuralnet sim will be more like the rotation of the Earth: you can wobble it, but it’s pretty darned stable no matter what crashes into it. And that’s a physical crash we are talking about, not some mere programming confusion.
Hence my analogy to the human mind was not to suggest our minds do exactly what a simverse server would be doing. As you note, we are not math machines (we’re pretty bad at that naturally and have to install software patches just to be able to do much of it at all). Rather, the analogy is that the human mind is an extraordinarily stable OS. Apart from literal physical blows (and comparable mechanical kicks like drugs), it keeps running coherently no matter what goes wrong inside it from a software perspective. We don’t even notice 90% of our visual field is fake. And this is because of a design feature any simverse server would share: neuralnet processing—on an absurdly vast scale. Thus, glitches at a scale to be noticed would require so many coordinated errors as to beggar probability. That’s why the probability of our seeing one is functionally nil—even if we are in a simverse (unless we were in a pretty bad one).
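The "coordinated errors" point reduces to simple arithmetic. A minimal sketch, assuming a noticeable glitch requires some number k of independent micro-errors to coincide, each with a small per-element probability p (both values are illustrative assumptions, not measurements of anything):

```python
# Why observable glitches would be functionally impossible on a vast
# error-correcting substrate. Illustrative numbers only: if a visible
# glitch requires k independent micro-errors to coincide, each with
# probability p, the chance of the coincidence is p**k, which falls
# exponentially in k.

p = 1e-6  # assumed per-element error rate per tick

for k in (1, 5, 10):
    print(k, p ** k)

# At k = 10 the probability is already around 1e-60 per tick:
# functionally nil on any timescale observers could sample.
```

So even granting a generous per-element error rate, the probability of enough errors lining up to produce a glitch an inhabitant could notice is driven to effectively zero, which is the sense in which glitch observations cannot serve as evidence either way.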
But that’s my only disagreement. Everything else you said (before and now) I concur with.
But, seriously? You’re going to just argue that ancestor sims have no downsides? Well…
No, I’m arguing that if you found out you were in a sim, you’d not suicide or plot to exterminate humanity to maximally end suffering.
My argument is that if an advanced civ (of superintelligent AGIs, perhaps co-existing with a naturally evolved carbon-based species) exists anywhere in the multiverse, then postulating why it would or would not do something is just pure speculation on our part.
It seems axiomatic that we’re not alone (in this universe or the multiverse), as to speculate otherwise would put us on par with theists. We’re not that special. Since we’ve gone from hunter-gatherers to flying to the moon in an eye-blink of time, it seems only likely that other civs have also developed tech and that AGI already exists; or perhaps, at best, we’re the first in this universe, but others can’t be far behind.
Once AGI (what Sam Harris, after in part talking to Yudkowsky, refers to as “gods”) is up and running, all bets are off. We simply cannot know for sure what its evolution will be, how its reward functions will operate, or even whether it’ll tolerate humans or work with us symbiotically. It’s all speculation, as alien to us as Facebook and the Space Shuttle would be to first-century Jews and Romans.
1) All sims computationally must be run in a computer that is at least as large as the sim itself, if the sim is perfect. And it’s utterly against all evidence we’ve seen that there would be civilizations that would just throw away universe-sized computers for…
Not true. We run amazing simulations in our (relatively) small brains. In an MMORPG, it isn’t a requirement that the entire universe be rendered to all characters at all times, only what is observed. And that information could be fed into a “brain,” biological or not; we already know this is possible. You’re reading this, but you have no conscious awareness of my surroundings, nor I of yours: they’re being run in our brains, which, all things considered on the scale of a universe, are tiny and use only a small amount of power to run.
Also, once AGI is up and running, scarcity starts to not matter so much. There’s no logical reason a civ couldn’t harness their sun’s power, then so on and so on. There’s a lot of energy in the universe we live in (and we can’t know how much energy there is in others, but probably a lot).
2) Pointless sims, because that is what an ancestor sim is. Software designers that could make a universe wouldn’t need the data from such a universe. Maybe they’d be running some weird counter-factual, but that’s a highly unusual circumstance, especially since…
You’re postulating they’re pointless, but you’re not accounting for the fact that you’re postulating this as a human set in a certain culture at a certain time. We can see that quantum computing and superintelligent AGI are likely possible, and perhaps it’s axiomatically and factually certain that they happen in this or some universe; to speculate otherwise would be to say every civ that gets to “compute” automatically destroys itself (or is somehow destroyed).
That doesn’t make sense to me, why postulate this? Are you saying you think you know the mind and philosophy of a super intelligent artificial intelligence? How can you know this?
I guess you can say you don’t believe Sam Harris is right, that we’re not creating a god, and you can say Judea Pearl is wrong, that we’re not creating a new species, but that would be as highly speculative as anything else. It seems the handwriting is on the wall: we are creating it as we speak. Be it a hundred, a thousand, or ten thousand years, it’s coming (barring some reason it’s not, but that would be speculation on par with believing Jesus is returning to stop it).
Once you concede that superintelligent artificial intelligences are not only possible, but likely (in this universe or some universe) all bets are off, how can you possibly know what these beings will do? Or think? Or find morally acceptable?
3) An ancestor sim is an act of such colossal and pointless evil that any civilization that had people who could throw away entire universes worth of material to build a sim computer and no one to stop them would probably have killed themselves.
You’re postulating “people” here by anthropomorphizing the future sentient life (in this universe or in the multiverse). That’s obviously wrong for reasons pointed out above. We cannot comprehend what’s going to happen with AGI, that’s the big debate right now by all the experts and pundits in the field. Any number of their ideas could be totally wrong, of course, but all of them? Are they all wrong? We’ll fail at building superintelligences? I find that spurious, especially since in order to postulate they don’t (or won’t) exist, you have to postulate that all civs in all universes fail at supercomputing and building AGIs. To me, that’s on par with believing in a God.
Once we accept intelligence will exist (or does exist) that is a product of compute and math (at the hands of highly evolved apes or some other carbon-based life form) then speculating how they’ll spend their resources seems like trying to speculate what God’s favorite ice cream flavor is, how can we possibly know?
If superintelligences see us the way we see chickens, beef cows, and heck, even dogs and cats, then arguing that simulations would be “colossal and pointless evil” is a non sequitur.
I think Facebook is a colossal and pointless evil. It is destructive on many levels and extracts resources from humans for the “pleasure” of wasting time and all kinds of negative things, a lot of which they’re blind to. Why does Facebook exist? Or network television, for that matter? It seems that humans, after society evolved to the point where hunter-gatherer societies couldn’t compete and agrarian societies evolved to industrial and tech societies, got bored. We bore easily. We invent things for pleasure, some of which require a lot of pain, cost, and sacrifice (how many idiots have died on Everest?).
To imagine that a superintelligent sentient life would do things to endlessly entertain itself (or themselves) doesn’t seem a stretch.
In fact, imagine this….your brain is scanned. You can live forever now. What do you want to do?
Live in paradise forever?
Or maybe, just maybe, you’d elect to subject yourself to randomness and the unboring consequences of risk.
Like in a simulation.
Like as a human.
Michael, you do realize you sound exactly like a Christian apologist, right?
Yes. We absolutely can predict the behavior of rational agents. It is not “mere speculation.” It is evidence-based reasoning. You are the one who is trying to elevate an extremely unlikely and convoluted speculation to the status of a believable claim. You are the one doing that. All we are doing is pointing that out.
“I think I’ll recreate the Holocaust and Black Death for kicks” is not morally or even rationally comparable to “I will use those resources instead to make a better place for me and mine to live and play.”
Anyone who refuses to accept that is an apologist seduced by a delusional faith-doctrine. Just like Christians.
The argument in this case uses MT as a global covering set (both in their paper and accordingly in my response), so it is inclusive of all possible versions of MT. That would mean not only the ones you mention, but also Smolin cosmogenesis, serial cosmogenesis (e.g. Penrose conformalism), Tegmark necessitarianism, and so on. So, yes, it would also include Bostrom overreplicated simverse scenarios (as I discussed in 2018). Some of these have better evidence for them than others; some require fewer ad hoc assumptions than others; etc.
It does, and indeed I apply it there. Bostrom’s error is the same error as the Boltzmann brains enthusiasts make, which I discuss a bit here and in more detail in respect to Brierley (as linked above).
The question is not whether or how many universes of a certain kind there will be. What matters is the ratio of those to other kinds there will be (within the set of universes producing observers in this case). So, natural universes will vastly outnumber Boltzmann solar systems for much the same reason that they will vastly outnumber Bostrom universes.
Bostrom’s singular error is in thinking Ancient Aliens will repurpose the entire universe into “ancestor sims” when in fact no one ever would; if they repurposed it all at all, it would be for games and paradises. Thus the number of “ancestor sims” will be minuscule compared to “natural universes” (or rather, as what he is counting, natural civilizations in natural universes). I give the reasons for this in the article you referred to.
That has always been the case. His paper was more judicious, I am sure, because peer reviewers required it. But it’s a Motte and Bailey: once freed from the constraints of objective review, he can go back to his irrational enthusiasms. Bostrom is like a cat with string: he wants this to be true. He invents apologetics, complete with fallacies and handwaving, to make it true. We just have to see through it.
No. This is based on extensive observation and logic. To think the contrary (like Bostrom) is conjecture. It’s the same as if he was going around insisting all civilizations would convert their moons into cheese. You need evidence to believe that is even remotely likely. He has none. You can’t say “well, it’s only conjecture that all aliens wouldn’t use their resources to do that.” It’s the other way around. We have lots of data on how rational beings employ their resources when able. And his argument requires not a fringe exception, but a consistent rational behavior universal to all capable civilizations. That’s as bonkers as the moons into cheese argument.
But why? They already have “I think; therefore I am.” So why do they need to go full Hitler and produce a menagerie of horrors to “randomly” make more, when they are rational enough to know how to simply make good universes from the start? Even evil species would not waste their time on that. They’d make games and paradises. They could gain nothing from pointless ancestor sims that torture uncountable trillions of people. And most rational civilizations won’t be evil enough to even try. Remember, these are civilizations that didn’t wipe themselves out. So they are the least likely to be universal sociopaths.
Indeed, they couldn’t even replicate their own history that way (they lack access to the complete array of initial conditions, and that’s even assuming quantum mechanical randomness does not already make it impossible even with the initial conditions), and even if they could, they only need to do that once (and then they can run different scenarios on their own phase of history to predict outcomes). And, again, even to do that, they’d have to be irredeemably evil. We can very much doubt that’s a “norm” across rational civilizations that kept themselves alive long enough to have that ability.
Bostrom’s position requires too many wildly implausible assumptions about universal “norms” of surviving civs.
That won’t help Bostrom. A fringe weird case does not get his conclusion. He needs this to be the case for most civilizations—and not “just” civs, but civs that survived themselves (nuclear eras, eras with even deadlier existential-threat weapons, and so on). “Felines torturing mice” do not even make a civilization, much less one that can stay alive all that way to godlike power. And even then, they’d just fill the universe with “mice torture sims,” not the “ancestor sims” Bostrom needs. See what I mean? His premise is wildly implausible, to the point of being almost self-contradictory. It’s nonsense on stilts.
We don’t randomly throw children into death camps. Not as a normal behavior anyway. If we could make new worlds for our kids, we’d give them games and paradises, not the Holocaust, or the Indonesian Tsunami, or the Black Death.
So no, children do not prove Bostrom’s point. They disprove it. His premise requires radically evil and irrational behavior toward “children” (indeed, untold trillions of them), which is demonstrated to be exactly opposite the norm for rational beings, and indeed exactly opposite the norm of rational beings capable of not wiping themselves out with their own reckless inhumanity even after being handed weapons capable of erasing planets (as that is the scale of tech required here).
Bostrom’s premise simply makes no sense when you think it through.
We need children. We don’t need ancestor sims. We make children because we believe we can govern their world enough to make their lives worthwhile within the resource constraints we have. Bostrom is talking about creating countless children we will never meet, and doing nothing for their welfare, even though we have no resource constraints requiring that anything bad threaten them at all.
This is self-contradictory. If you had the power the civs Bostrom is talking about had, you’d behave in a particular way toward not only your children but all children, which would not track what we observe around us down here. That’s the point. If you could secure all kids in a just paradise, you would. So why would you waste resources on nightmare worlds? You wouldn’t have to. That’s why Bostrom’s premise is bunk.
Indeed, it’s worse, because all the resources redirected to build any nightmare world could have been repurposed to build a paradise instead. So who would do that? Who would choose the Holocaust over Eden? Who would tie up any resources on that? Even evil beings would not waste their time. They want games and paradises. They aren’t going to burn server space on useless nightmare worlds. And even if they did, they’d be game worlds (and thus obviously directed by evil intelligences), not indifferent messes like ours. That requires positing an extraordinarily improbable conspiracy theory, just like the lizard people nonsense (indeed, exactly like). And the thing about “improbable” is, it means “infrequent.” Most civs simply aren’t going to build such elaborate conspiracies; indeed, almost none will bother. Yet Bostrom needs most to do this. It’s simply not credible.
And if you could prevent that, you would. You would not recreate the Holocaust, or the Indonesian Tsunami, or the Black Death, or the global Vatican child-rape industry. The civs Bostrom is talking about have that capability. Hence, they’d never recreate the Holocaust, or the Indonesian Tsunami, or the Black Death, or the global Vatican child-rape industry—at all, much less trillions of trillions of times. They’d create games and paradises. Just as you would. Think this through.
There is, of course, the other problem (the kind of use of space Bostrom imagines is impossible on computation theory: you cannot create even one ancestor sim for our universe inside it, much less the trillions of trillions his argument requires; it’s computationally impossible). But you seem to be eliding that one. Though I spent two paragraphs on it for a reason. But this is a problem for all sim args. What is a particular problem for Bostrom is his requirement of universal irrationality, in civs that cannot possibly have survived were they like that.
I agree empirically. But I don’t agree as a matter of logic. That is, you are fallaciously assuming an infinite array entails every configuration. It could (and in some MU models it would). But that doesn’t follow as a matter of logic. You can have an infinite variety of oranges, but that means you have zero apples. And not all MUs are infinite. On the Everett model, the explored space is finitely countable. It’s entirely possible it never walks into the scenario needed (just as depicted, more or less, in the recent Battlestar Galactica series: people just keep repeating the same mistakes, even when allowed to explore all choices over and over again, because of the same aggregate outcome effects that turn quantum physics into classical physics at scale).
That said, though, I do believe empirically it’s more likely than not that even the Everett model would, as would the others. So I doubt this premise very much. But that’s not the same as saying it is impossible. I think it is all but inevitable that even we will reach this stage, much more so countless other civs (we are already, IMO, a century to a thousand years away from it; and it is far more difficult to actually kill a civilization than usually claimed).
As noted above, this is a false analogy. Throwing trillions of children deliberately into ovens when you don’t have to is not analogous to what people who have children are doing or would ever do. At all. We would use our resources for them far more rationally.
I think the mistake you are making is confusing us now (not a posthuman civ), where we bring children into a rough world because we have no choice, and gamble on our being able to protect them simply because we have no choice, and us then (a posthuman civ, Bostrom’s required condition), where we are literally gods who can control even the laws of physics, much more so climate, justice, etc.
In the condition Bostrom imagines, we could literally do away with all the threats to our children (see, for example, my exploration of the option-space in How Not to Live in Zardoz). So, why would we devote any of our resources, much less all of them, to not doing that? It makes no sense.
Even evil beings (cf. again How Not to Live in Zardoz) would be obvious governors of any simverse they ran. They would not waste time on pointless “ancestor sims.” We observe we don’t live in a governed universe at all. So we know we aren’t in some corporate psychopath’s dreamworld. Much less what would be far more common: the games and paradises most people would end up redirecting all the matter of a universe toward.
(If they even would. I am skeptical of even that premise, that we’d waste time converting “the entire universe” into sims, as Bostrom requires; I think we would settle on locally isolated sims, probably deep in intergalactic space where threats are minimal, and not bother going any further. It would be tedious, and pointless, to try and “convert the entire universe” into sims, so I am doubtful even of the idea that “every” civilization would do that, too.)
Because they are enthusiasts for the idea. Not because they have any rational argument or evidence for it. Their attraction is emotional, not sensible. Just notice how they drop all logic and scientific method and critical thought when exploring it. It’s the new theism. Which also counts a lot of “smart” people in its ranks. Ad Populum and Ab Auctoritate are fallacies for a reason.
Thanks for the response.
A couple thoughts:
Neil deGrasse Tyson said he was losing sleep over ST and eventually came to believe he was wrong, but then was convinced again. So I’m not sure whether postulating “he believes because he wants to believe” works for him.
But, obviously, smart people are wrong about things all the time.
The idea that “they’d only make games and paradises” seems to ignore that we might be in a game, and paradises would be (seemingly) boring.
Finally, it appears to be a logical conclusion that all advanced civs, having evolved naturally with a desire to survive, replicate, and gain resources for such activities, will eventually become “god-like” at least from our perspective, and integrated with machines/AI, or they’ll be annihilated (from within or from without). So I’d argue that any discussion about “what an advanced civ would do” is kind of like postulating what kind of government the chimpanzees will establish after they colonize Mars. We don’t have the unknown unknowns in our sphere of knowledge.
Greg Egan’s Permutation City (hard sci-fi that postulates the ability to mind-scan) has an explanation of an expanding computing universe (I don’t get all the quantum mechanics and such, but the guy seems brilliant and knowledgeable). Inside this virtual world, which is limitless and ever expanding, there is an autoverse, created by a computer scientist, that runs for billions of years (inside the universe, but only a few thousand years outside it). When the dominant species becomes sentient and figures out it evolved naturally, the humans who programmed the universe decide it’s only ethical to enter the world and correct the one thing the species had gotten wrong: they actually live in a construct. All the evolution that produced them was allowed to run “naturally” via the rules of the autoverse.
They reject the idea. “There can be no gods.”
In any case, I don’t “believe” or “not believe” we live in a construct (and if we did, so what?), but I do believe humans and AGIs (which are coming soon, it seems) will be able to build highly realistic simulations. What that means… hopefully I’ll live long enough to discover, although I suspect there’s a high probability Yudkowsky is correct and we’ll be exterminated by the next dominant species on this planet.
I don’t think you are taking any of these words seriously. You are starting to sound like a Christian apologist. “The world couldn’t be any better than it is now, or else it would be too boring” is easily refuted poppycock. Honestly.
And it’s games that would not be as boring as this world. A game would be self-evidently a game. Believe me. You would know if you were just an extra in the latest edition of Resident Evil. And sure, just like an apologist, you can build convoluted conspiracy theories to cling to this vain hope (“Maybe we are extras in an evil game built by evil aliens, and all the aliens choked on their food in a freak accident and died, but the game-AI just kept running the game waiting for them to log back in…”). But all that does is expose how absurdly improbable the belief is you are trying to defend.
To tack on to Richard’s point:
How many people happily abandon their children?
Some, sure.
How many would even have kids if they didn’t have to for sex?
Almost none, right? There’s maybe the occasional just-awful psychopath who wants to have tons of kids and then not take care of them, even with easy contraception available (and in this case we are talking literally instantaneous contraception for the analogy to hold). Which is why it is actually remarkable when you hear about, say, an OB-GYN who used their own sperm for in vitro. And even then, that was usually not arrogance and evil, but rather just a need to keep up supply.
How many people like living in civilizations with people who routinely abandon kids, and do nothing about it and support doing nothing about it?
Again, virtually none. The kind of species that would be okay with that would never evolve because they wouldn’t protect their own children.
You’re talking about someone making billions of children, who only appear after billions of years (instead of instantly, like you would do if you could make a perfect sim and wanted simulated kids, a thing that is already dubious at best to happen en masse), and then doing nothing to help them or to hurt them. Never once indulging in either being a loving or an abusive parent.
That’s vanishingly unlikely. But on a non-sim theory, it’s exactly what is expected.
Every time you make an analogy to human experience, it doesn’t hold. That should tell you that the theory isn’t defensible.
Indeed.
And I’ll reiterate, this is still even true for an evil civilization. While the probability of a non-evil civ doing something so horrific (because it wouldn’t have to; it could play God without the evil results) is no better than trillions to one against, the probability of an evil civ doing it will still only be thousands of times higher, not trillions, and thus the probability remains no better than billions to one against.
This is because, while evil civs have pathways to the result that non-evil civs don’t, and thus “could get there,” they almost certainly won’t bother, because they are evil. Being evil, they would have no use or care for ancestor sims. That’s an objective waste of resources even from a completely coldhearted Machiavellian perspective. Just as they would have no use for converting all moons in the universe into cheese. “But they could! You don’t know! You are just speculating!” is not a logically effective argument against our point here.
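The odds arithmetic here can be sketched in a few lines. This is a minimal illustration using the post’s own rough order-of-magnitude figures (“trillions to one,” “thousands of times higher”); the specific numbers are illustrative placeholders, not measured values:

```python
# Illustrative odds arithmetic: multiplying a tiny base rate by a
# "thousands of times" factor still leaves billions-to-one odds.
p_non_evil = 1e-12   # "trillions to one against" a non-evil civ building nightmare sims
evil_factor = 1e3    # an evil civ is "thousands of times" more likely to bother

p_evil = p_non_evil * evil_factor
print(f"{p_evil:.0e}")  # prints 1e-09, i.e. still "billions to one against"
```

The point of the sketch: scaling an astronomically small probability by a merely large factor does not rescue it.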
Why would a selfish being burn server time on useless ancestor sims (much less trillions of galactic-mass-scale server arrays), when they could use that server time for orgies and Doom trolls? There is no rational answer to this question. You have to spin out ever more ridiculous and convoluted conspiracy theories to get a different conclusion, which is identical to claiming Satan planted all the fossils, and that before the Fall inaugurated carnivorism, dinosaur teeth were for cracking coconuts.
That is not a legit rebuttal to the evidence presented.
And remember, this is for wasting even one or two servers on an ancestor sim. Bostrom needs almost all aliens to burn almost all their server space across their entire universes on redundant ancestor sims that they won’t even get to live or play in—and that can never inform their own history, as a simmed universe must always be smaller than the universe simming it and thus can never be identical to it; and even if by some magic they overcome that problem, they still can’t possibly ever have the requisite initial-conditions data to produce a matching result to their own.
That simply has zero plausibility. They’d sooner turn all moons into cheese.
While I do not agree with the conclusion of their argument, it is not entirely foolish either. It has some similarity with the following argument: If 1 in a googol universes is life-permitting, then the probability that I am a first-person observer cannot be larger than 1 in a googol. And that argument shows the wishful thinking of the naturalists, who simply equate the probability that some third-person observer exists, somewhere in the multiverse, with the probability that I am a first-person observer. That cannot be true on naturalism. I am not you, nor any other person, so, on naturalism, there is a lot of room for not being a first-person observer. Consequently, the more life-hostile emptiness you create in the multiverse, the smaller the odds become that any person is a first-person observer, and therefore that the naturalist worldview is tenable.
You really do suck at math, Ward. It is astonishing. I think you might not be mentally well, as I’ve told you before. But I have to address your nonsense anyway. So here goes:
No, Ward. Nothing you just said is true.
First, there is no such thing as a third person observer who is not also a first person observer. By the very definition of observer. So the frequency of them cannot deviate at all.
Second, the probability of an observer finding themselves in the 1 in a googol universe they can be in is literally 100%. It therefore cannot be “less” than 100%.
Third, the number of life-killing worlds has zero effect on the existence of observers, because observers don’t arise in those worlds. So it matters not how many or few there are. All that matters is how many life-permitting worlds there are, and that frequency always goes up with tries. This is a basic law of probability: the probability that an outcome occurs at least once increases with every additional random trial; and all such probabilities approach 100% as trials approach infinity.
Fourth, the number of life-permitting worlds that are mostly life-hostile is always going to be vastly larger than the number of life-permitting worlds that are mostly life-friendly. Because the latter requires vastly higher specified complexity. It’s exactly the difference between rolling a seven on two dice vs. a twelve: the twelve can be made by only one configuration and thus is rarer, whereas the seven can be realized by six configurations and thus is more common. Thus, if life is a random outcome, in most cases by far it will find itself in a vast system of random tries, i.e. a mostly life-hostile world (and an extremely large and old one at that).
Thus, if you selected observers at random from among all logically possible configurations of universes, the probability is extremely high that you will select one in a largely hostile (and ancient and vast) universe, because very few life-generating universes will not be like that; almost all will be like that.
This is why observing ourselves in a vast system of failed tries is evidence for our having arisen by a random process rather than design. Design would skip the wasted space and tries, because that is literally what intelligence makes possible—indeed, it is the only thing that makes any difference between intelligence and chance: the ability to skip all the failed tries and go straight to the designed outcome. I covered this last time in How the New Wong-Hazen Proposal Refutes Theism.
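Both probability points above (the at-least-once law, and the dice-configuration counting) can be checked in a few lines. This is a minimal sketch; the 1-in-a-million figure is a purely hypothetical example, not a real cosmological estimate:

```python
# Sketch of two claims: (1) the chance of at least one success rises
# toward 1 as independent tries accumulate; (2) a seven on two dice
# has more configurations than a twelve.
from itertools import product

def at_least_once(p, n):
    """Chance of at least one success in n independent trials."""
    return 1 - (1 - p) ** n

p = 1e-6  # hypothetical chance that any one random universe is life-permitting
for n in (10**6, 10**7, 10**8):
    print(n, at_least_once(p, n))  # climbs toward 1.0 as n grows

# Count the ways each two-dice total can occur.
ways = {total: 0 for total in range(2, 13)}
for a, b in product(range(1, 7), repeat=2):
    ways[a + b] += 1
print(ways[7], ways[12])  # prints: 6 1
```

So with a million tries at one-in-a-million odds the cumulative probability is already about 63%, and by a hundred million tries it is indistinguishable from 100%; and a seven is six times likelier than a twelve simply by counting configurations.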
The difference between first- and third-person observers is that there can be only one first person observer, while there can be arbitrarily many third person observers. On naturalism, there cannot be any mechanism that distributes the odds on which third-person observer becomes the first-person observer. What mechanism would you propose? Is every brain neuron equiprobable? Every brain? Do CPUs also count? You have no answers on these questions. And that’s because naturalism and the reality of the external world cannot provide any.
That’s literally insane, Ward.
It is logically impossible for there to be a third person observer who isn’t at their locus a first person observer. Otherwise they wouldn’t be an observer at all. So the number of third and first person observers is always equal. It literally could never be otherwise.
As for what makes the difference between sub-cognitive consciousness (a mere animal, whose status as an observer is not pertinent here, as this is a discussion about cognitive observers, not noncognitive observers) and a cognitive consciousness (an “observer” in the relevant sense here), we have an extensive array of data on that: what is needed is a high degree of integration of complex information in the construction of a world-model and a self-model, requiring a rather elaborate and sophisticated physical apparatus, activating both meta-cognition and semantical cognition. CPUs could in principle achieve this status, and probably will someday, but we are nowhere near that achievement yet in this star system. It will take probably many more decades of engineering, if not centuries.
Why are you so proud of your mental sanity, Richard? Mental sanity is favored by Darwinian evolution, right? But people with this kind of mind are less likely to believe they are in a simulation, that they observed a miracle, that they will be judged in an afterlife, that they are the only first-person observer, etc. All of this is merely a protection mechanism that is the result of Darwinian evolution. The fact is that the truth is just too big for sane Darwinian brains. So if you admit to having a sane Darwinian brain, you are confessing a severe problem with everything you say here.
Is your specified complexity the current state of physics? I thought that physics made predictions on the basis of repeatable observations. This story about specified complexity only serves to explain one single observation: that you find yourself as the first-person consciousness of a human being. This will never be explicable by physicists.
To Fred B-C: I am only trying to make sense of Richard’s vague proposal of specified complexity. Of course Richard didn’t say anything of that. Because then it would be clear that specified complexity is in fact a supernaturalistic proposal.
Translation: “Crazy person advocates everyone should be crazy. Then forgets all science refuting him…even after having it pointed out to him repeatedly.”
I rest my case.
Ward: I know the anthropic principle always feels unintuitive and like a cheat, but it’s just not.
“What’s the probability I find a royal flush?” is never, ever dictated by the likelihood that the house I’m in happens to have a deck of cards. If decks of cards are impossible, then there are no real royal flushes to be had. But as long as they are possible, once I have a deck of cards, a royal flush is inevitable if I keep dealing out cards.
In the multiverse, there are decks of cards and dice and coins galore. That is, there are ranges of universes where life just isn’t possible, and ones where it could be (such that, say, a multidimensional invader could come in and find plenty of real estate) but it doesn’t emerge (though those seem pretty low-probability), and then ones where life does exist. It’s inevitable. The probability of life given a multiverse is one. And so, unless you have some special reason to wonder why you’re alive (like, say, you’re living on the surface of a sun rather than on a temperate planet), it’s not a mystery, under the multiverse, that you are.
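The royal-flush analogy can be made exact with standard combinatorics (nothing here is estimated; these are the textbook poker numbers):

```python
# Exact single-deal odds of a royal flush, and the at-least-once chance
# over many independent deals.
from math import comb

hands = comb(52, 5)       # 2,598,960 possible 5-card hands
p_flush = 4 / hands       # four royal flushes, one per suit (~1.54e-6)

def seen_by(n):
    """Chance of at least one royal flush after n independent deals."""
    return 1 - (1 - p_flush) ** n

print(hands)                   # prints 2598960
print(seen_by(10**7) > 0.999)  # prints True: keep dealing and it becomes inevitable
```

Even at roughly 1-in-650,000 odds per deal, ten million deals make at least one royal flush a near certainty. That is the anthropic point: given enough tries, the rare outcome is guaranteed somewhere.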
Good to see that you are aware of the problem to some degree, Richard. But honestly, your proposal is rather vague and does not lead to any new insights in physics. It follows from your proposal that there should be some vague, undefined process or computational program that scans a googol universes, finds neurological structures, distinguishes animal brains from human brains, and then creates an independent reality that is reduced to the evolving brain of that human being. I dare to claim that your badly defined process is in fact supernatural, because there is nothing in the natural world that does such a job.
I am not here proposing “new insights in physics,” Ward. I am describing the current state of physics.
And I never propose nor require any of the ridiculous things you are attributing to me here.
If you want to actually know (though I do not think you do, because I am quite certain you are insane; and I have linked you to some of these before, so that you “forgot” them all is yet another ping of the evidence that you are insane) what my proposals are for ontological grounding and why they are “natural” and not “supernatural,” read:
Superstring Theory as Metaphysical Atheism
The Argument from Specified Complexity against Supernaturalism
The God Impossible
The Argument to the Ontological Whatsit
This reply from Ward really suggests that he doesn’t realize other people don’t share his bonkers assumptions. He doesn’t bother to explain any of that, either why it would be necessary as an assumption for Richard’s position or where he ever saw Richard say any of that.
Blondé is genuinely insane. You can tell because he literally forgets entire conversations you just had with him.
Surely you’re trolling Ward.
You might be behind the curve, LOL. Ward has been selling his crazy on my blog for years. You are perhaps jumping in at the end, after the slow build of an entire exasperation mountain that lies behind what we are saying to him now. This has been a long time coming. It isn’t a sudden reaction to a newbie.
It’s no wonder why my favorite game of all time was Populous. 😉
I just saw the comment about the limit on nesting threads. I thought, as the other commenter did, that at some point you just say to yourself, “enough already…”
You mentioned to her this was the proper workaround, so I’m using it.
I really only wanted to say, yes, I understand the simulation theory makes people (myself perhaps) sound like a born-again fundie Christian. It was the realization that I was sounding like a creationist (again) that was bothering me in the first place.
So, first off, I’m not “advocating” we’re in a simulation, nor do I “believe” we are, although my posterior odds are self-evidently higher than yours, which seem to be close to zero.
Why am I north of zero?
Well, besides guys like Bostrom, I read your blog on how to avoid Zardoz (I also watched the movie).
In that, you give a list of rules you think are a good step in the right direction to avoid a horrible simulated universe like Zardoz.
Well, you don’t go around writing posts like “How to Become a Christian,” you know, just in case we notice the rapture happening and need to get saved ASAP.
But you do write a post based on the idea that rich sociopaths in the future could actually build a horrible simulation.
If you write such a post, I’m going to assume, even if it’s one in a million, you give it some posterior odds higher than “it’s impossible.” And that’s what I was arguing in the first place: it’s possible that this thing you’re concerned to prevent has already happened.
Not that we do or likely live in a simulation; merely that, if we agree that rich sociopaths do sick things, without compassion or empathy for others, and might continue to do them (and we also have no idea about other civs, or even an AI-hybrid civ that might exist), then the probability that we’re currently in a simulation is better than billions to one against. And I believe it’s orders of magnitude higher than the probability of Christianity turning out to be true.
Since I read guys like Greg Egan, I tend to be more open to, and more interested, in potential ideas like simulations and weird things that are possible with simulations and virtual worlds. In Diaspora, which I just started reading again,
https://en.wikipedia.org/wiki/Diaspora_(novel)
the opening of the book starts off with an explanation of the rules surrounding making a “child” in the VR world, with its associated ethics.
Now, all that said: on the GE live stream the other night, I was the guy who asked about a way to properly “weigh” and ask/answer questions, or think about an issue, regarding statements like “the majority of scholars believe X.” And while my example was about Christian scholars, it can apply here in physics, I think.
How do we weigh these (you mentioned on the live stream it was a complicated question without an easy answer) answers when it comes to these types of questions and claims?
If the majority of “New Testament Scholars” believe Jesus resurrected, we discount that because the “majority” includes a group that’s biased and presups the answer.
I feel like (I could be wrong) some of these questions in Quantum Mechanics (and all the other various things: ST, the Multiverse/Everett’s Many Worlds) land in this same territory.
When Everett’s Many Worlds was first talked about, most people didn’t pay attention and dismissed it as nonsense. It’s not dismissed today…
I think Jesus Mythicism might end up being the same (maybe 10 more years…maybe less).
I wonder if ST might be viewed differently after another few years of AI development…stranger things…
And, just to end this: no, I’m not saying I “believe” we’re in a simulation, nor am I “advocating” such a belief. I just think its probability is farther from zero than yours; how far, I don’t know…
I do think it’s important to avoid Zardoz, but if Zardoz worlds are impossible, then what’s the point of discussing how to avoid them?
LOL. No. It’s just physically impossible to squeeze another indented paragraph to smaller than a couple of words wide, so threading stops. It’s a software design limitation. The workaround, as you’ve now used well, is to jump up an indent or two in the threading and reply there.
I wouldn’t set my odds “at” zero. Just very low. You could say “arbitrarily close to zero” though; but yes, still far higher than supernatural worlds. And for the reasons I outline. Even in scenarios absent the supernatural, like sim theory, too many ad hoc and extremely improbable premises have to be “stacked” to get the result. Which calls into question why anyone is motivated to build those stacks, much less lean on them.
It’s only worse for supernaturalism (which requires taller and even less probable stacks), but apart from scale of improbability, there is no effective difference. Whether it’s devils or aliens, it’s the same convoluted and completely unevidenced construct.
Actually, I do.
I have also pointed out how Christianity is wildly less probable than even alien invaders (ergo likewise sim theory, which is just another “secret alien invaders” theory), as the latter at least rest on established possibilities (whereas the supernatural cannot claim that).
It would still be self-evident to its inhabitants that it is a simulation. Just like Zardoz was self-evidently a built world and not some accidental natural one. Hence my point: even evil sims won’t be useless “ancestor sims” and won’t look at all like the pointless world we live in, because even evil people act completely differently than that. Their worlds will look like sims, literal playworlds for psychos. Not random, ungoverned worlds, inherently purposeless, and massively wasteful of data resources.
As even just one example: their villains would be immortal and all powerful. Ours are neither. That is simply improbable on sim theory. It is, rather, what we expect if sim theory is false.
And that’s again just one example of what I mean. Designed worlds would be radically different from undesigned garbage heaps like ours, in thousands of ways.
Because different powers are implicated: literal gods exist in your scenario, and thus everything the existence of gods entails will be true in sims, evil gods or otherwise. And different limitations are implicated: sims have a resource limit in terms of processing space, and thus designs will be efficient in processing, not wasteful. Hence we would not have granularity down to the Planck scale, or trillions of dead galaxies and vast interstellar vacuums, as all that processing power could be repurposed more efficiently to the function of the sim.
For example, instead of a trillion useless galaxies, there would be a trillion inhabited Matrix sims or whatever—and honestly, they’d be more like Tron than the Matrix, as the latter requires far more stacks of convoluted premises to motivate their construction and maintenance. Notice the difference in granularity: Tron is a vastly more efficient design than even the Matrix, and yet the Matrix is vastly more efficient than our world, as it doesn’t have to do oodles of math on useless quark-gluon solutions just to make a cup of coffee.
Hence I used the Matrix example in my baseline critique of sim theory, because sim theory is just another Cartesian Demon theory. Look at how convoluted and full of plot holes the writers of that fiction had to get to make even it work.
Any sim theory has to implicate conspiracies to conceal the truth, and posit whole other complicated universes, making for trillions of ad hoc epicycles, including alien conspirators complete with bizarre motives (and bizarre means unusual, unusual means infrequent, and infrequent means improbable). Like the motive to make massively inefficient, totally ungoverned sims, with no design features of any use. Moons made of cheese.
It’s just another “the Devil planted all the fossils” (indeed, almost indistinguishably; you are just subbing “aliens” in for “the Devil”).
And everything I just said and have been saying is obvious to anyone who thinks about it. So my question is…why didn’t you think of any of this? Why has Bostrom not? There is something going on here, some sort of motivated reasoning that needs addressing.
Indeed, I have. I mean, I’ve read your posts, and I’ve listened to others (Michio Kaku as an example) who don’t think it’s likely/possible that we’re in a simulation.
There’s this guy, George Hotz. As a teenager, he was the first person to publicly crack the iPhone; now he’s building kits to make self-driving cars and also speaks about AI. He seems pretty freaking smart with programming and gaming.
https://youtu.be/_SpptYg_0Rs?si=NgP7D8r2D_d9NpJG
He believes we’re in a simulation as do many computer and AI experts, so the idea that it’s impossible due to the physics seems in dispute. George Hotz in that short clip above says, “Yes, but it may be unfalsifiable.”
So, yeah, I see how it’s on par with religion, perhaps, but the difference is, we have the technology being built now that leads these guys to believe that virtual worlds are possible.
That leads to all the other philosophical reasons you reject ST. Okay, fine, but that’s philosophy, not hard science regarding the possibility of the technology.
It would still be self-evident to its inhabitants that it is a simulation.
No, this is wrong.
https://youtu.be/ESXOAJRdcwQ?si=UwMMRwPkizezn0K9
Hotz gave this talk at SXSW about 4 years ago, “Jailbreaking the Simulation” and one thing he said was “NPCs don’t know they’re in a simulation.”
Players know, yes, which is what you’re saying above, “it would be self-evidently” to those in the simulation, but that’s not true for the NPCs, those that are programmed to not know they’re in one.
Now why believe that?
Well, I listened to this recent talk:
https://youtu.be/mSWJmzMoTyY?si=8K0zPCPqW8QPMF2-
Robert Sapolsky on Lawrence Krauss’s YT podcast.
He claims that free-will is an illusion.
Okay, so we have no free-will…what’s that imply?
If we’re in a simulation, and we’re NPCs, then it stands to reason:
A. We don’t have free-will
B. We won’t know we’re in a simulation
C. We won’t be able to falsify this idea
So, yeah, look, it’s all, likely, mental masturbation to talk about this stuff, I get that. I just think the arguments against ST that rely on emotional philosophical reasons (like “no advanced society would do something so evil”) are all just assertions, not evidence.
Perhaps we’re in the base world. Okay, Elon Musk says the odds of that are billions to one against; perhaps he’s a crappy Bayesian, but he’s not stupid about computers.
But imagine…we’re in the base world, and we’re building things that can build simulations, virtual worlds which, to the “characters” inside them, might not be noticeable. What ethics must we consider?
That was the point of your “Richard’s Rules” correct? To think about it.
So I think, thinking about the “what if we’re in a simulation” helps us think about, “what ethics must we consider when we build simulations and AIs?”
So that’s one of my reasons for thinking about this stuff, just to reiterate, I’m not “advocating” this belief or saying, “I believe” we’re in a simulation.
But I do think it’s something to think about as, even if we’re not in a video game as NPCs, we’ll be building them soon (by “we” I mean human society).
The ethics around this get murky; I think we agree on that….
Just like the Lizard People Rule Earth hypothesis.
This is motivated reasoning to a conspiracy theory, not rational evidence-based reasoning.
False analogy. The NPCs he is talking about are not sentient. If they were, trust me, they’d figure it out pretty quickly.
Indeed, this is one of the possible pathways to general AI: an NPC designed so well it actually starts thinking for itself and assessing its situation. This was literally the plotline to the recent Westworld series.
The measures they had to take there to keep the “NPCs” from figuring things out are a perfect example of the convoluted conspiracy theory you have to cling to in order to keep claiming we are in Westworld.
Bad philosophers always confuse what free will is. The real thing is not an illusion, only the ivory tower fake thing that bad philosophers mistakenly think is what everyone is talking about.
See the “free will” category in my right margin dropdown menu.
Dr. Carrier, some days ago I was reading Chapter 3 from “OHJ” and …
EDITOR: This comment does not belong here. Please post comments on relevant articles. I have done you the courtesy of reposting your comment in my Open Thread On the Historicity of Jesus (you can find it there soon; I am working on the switchover, and it’s arduous). But I will not do this normally. More and more commenters are breaking this rule, and I have to start clamping down. My comments policy requires that comments be posted where they are relevant. Please do follow that rule from now on. Thank you.
P.S. Sorry, my WordPress Theme has finally crashed for good. That’s a five hour fix job so I can’t sort it tonight. I’ll try to get it done tomorrow. Then I’ll get your comment moved and posted where it should be and you should be able to submit replies without nonce or verification errors.
Okay! I finally got it sorted. Your comment, followed by my reply, is now here.