Comments on: Ten Ways the World Would Be Different If God Existed
https://www.richardcarrier.info/archives/26502

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40237 Sat, 15 Mar 2025 18:29:39 +0000 In reply to X.

Exactly. That’s super dumb. And completely scientifically illiterate. Just as I said.

Anyone who studies qualia theory in cognitive science knows that “B could be replaced with anything else” is false. It’s fantastically false. And brain science has already established not only that it is false, but why it is false.

I discuss examples of why that is in my articles on qualitative consciousness.

Start with, again, Touch, All the Way Down: Qualia as Computational Discrimination, but more focused examples are covered in my articles on The Evolution of Awe and Memory Realism (see also my discussion of this in the context of the theories of Dennett and Churchland and the even dumber Christian Mind Radio Theory).

But all of these touch on a key example: color.

It is actually true that colors can be anything, because (scientists have already established and agree) colors are made up. They don’t exist outside our minds. But they do have to manifest as a color, since that is the computational function: to distinguish geometric planes by photon predominance. There is no physical or even logical way to do that but by inventing some kind of color to fill the space. Because that is what the computer is doing: telling spaces apart by color. It uses color as an index for the photon predominance for a given plane or space. So any color will do (and some people indeed have inverted qualia and see reds as greens and vice versa; and there is no way they can know because there is no way to experientially check who is seeing colors which way). But you can’t just substitute a pickle for a color. That would make no sense computationally.
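To sketch the point in code (a toy illustration of my own; the wavelength bands and color names are arbitrary placeholders, not a model of real neural circuitry): the computational job is only to assign distinct labels to distinct photon predominances, so any consistent palette performs identically, which is why inverted qualia are behaviorally undetectable:

```python
# Toy illustration: color qualia as arbitrary indexes for photon predominance.
# The discriminator only needs distinct labels for distinct inputs; which
# labels get used is irrelevant, so "inverted" observers behave identically.

def discriminate(bands, palette):
    """Label each surface region by the palette entry for its dominant band."""
    return [palette[band] for band in bands]

# Dominant wavelength bands for three adjacent surfaces (e.g., bark, leaf, sky):
regions = ["long", "medium", "short"]

observer_a = {"long": "red", "medium": "green", "short": "blue"}
observer_b = {"long": "green", "medium": "red", "short": "blue"}  # red/green inverted

a = discriminate(regions, observer_a)
b = discriminate(regions, observer_b)

# Both observers tell the same regions apart equally well; nothing in their
# behavior reveals which palette is running "inside":
print(a, b, len(set(a)) == len(set(b)) == len(set(regions)))
```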

Neuroscientists know that the brain constructs perceptions by integrating pieces of information it is modeling about the external environment (and also internal, in the case of personal qualia rather than just sensory qualia). So the reason you cannot look at a tree and see a pickle is that constructing a model of the shape and composition of the thing in front of you requires making exactly that distinction (between a tree and, say, a pickle): that is what the physical computer is doing.

To get an image of a pickle, you’d have to activate the circuits arranged to generate a model of a pickle, and there is no reason why a computer would evolve to do that when the input data match a tree instead. The way the brain processes things like shape and size doesn’t even work like this (there is no “pickle qualia activator,” but a system of activators for irreducible qualia like size, shape, color, which have all been located in specific areas of the brain and are combined to model and then pattern-match a pickle: pickles are constructed models of complex qualia, not fundamental qualia).

So, for example, bark reflects photons in a frequency our brain catalogs as dark yellow; but pickles, as dark green, because different photon frequencies are involved. So the brain needs to use a different color for pickles to signal that different photons are coming in on the detector; likewise as to shape and size (even function, since we have specific physical circuits dedicated to modeling the function of modeled objects, like food vs. tools, or even faces): the eyes are transmitting data for the computer to model (guess at) the size and shape of what it is looking at, so it can use or interact with it correctly. There is simply no logical way this system could ever evolve to cause you to see a pickle when you are looking at a tree (short of dropping acid).

That would be absurd. And it is massively illiterate to even think that this could happen, much less confidently assert that scientists “don’t know why” it doesn’t. Basic brain science has already explained why it doesn’t and couldn’t and never would.

By: X https://www.richardcarrier.info/archives/26502#comment-40227 Thu, 13 Mar 2025 03:29:11 +0000 In reply to Richard Carrier.

If you’re curious, Bentham elaborates on psychophysical harmony here:

https://substack.com/home/post/p-157685958

Here’s the relevant bit:

“The core insight behind psychophysical harmony: there are three stages in a conscious perception. First, there’s some brain state. Next, there’s some conscious state. Lastly, there’s some physical response. For instance, you might have a brain state A (C fibers firing), which gives rise to a conscious state B (being in pain), which gives rise to a physical state C (pulling your hand away and saying “please stop blowtorching my hand—that’s not a nice thing to do. It’s not considerate at all. It violates the categorical imperative. Now you might object…”)

The core insight behind psychophysical harmony is that it’s conceivable that B could be replaced with anything else. You could replace B with D—having the experience of eating a pickle. Or with E—having the experience of skydiving. Or with F—having the experience of pleasure. So long as you keep A and C the same, B is totally irrelevant. It’s effectively an epiphenomenon, even if epiphenomenalism is false—even if B causes stuff, it’s perfectly conceivable that some other mental state would do the same causing.

(If you’re thinking “no, if physicalism is right this isn’t really imaginable,” I’d suggest you read philarchive.org/archive…, as this worry is addressed at length).

In light of this, it’s utterly surprising that B involves a state that fits harmoniously with A and C. In other words, it’s surprising that the mental state produced by A involves feeling pain, rather than one of the other infinite conceivable experiences.”

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40191 Mon, 10 Mar 2025 14:54:26 +0000 In reply to X.

That still is not a formal publication. Just an essay from the “Messiah University” website. There are countless crappy apologetics essays with bad arguments and pseudoscience. That is not any better. I don’t see the need to spend time on it. The kinds of old arguments it cobbles together have already been refuted here. See the “fine tuning” category in my drop down menu at right.

If you can find anything new that hasn’t already been refuted, please quote and describe that new argument. Otherwise, there is no reason to waste time reading countless wordwalls of old apologetics that have already long been debunked.

By: X https://www.richardcarrier.info/archives/26502#comment-40190 Sun, 09 Mar 2025 23:40:06 +0000 In reply to Richard Carrier.

Did you ever manage to take a look at it, Dr. Carrier?

What do you make of this article, BTW?

https://spot.colorado.edu/~heathwoo/Phil383/collins.htm

Also, what do you make of the many-worlds multiverse hypothesis?

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40123 Thu, 27 Feb 2025 16:06:12 +0000 In reply to X.

Lol. That’s getting too cerebral to be helpful. I think if someone can’t follow our simpler analogies, they’ll never follow that one.

It’s enough to just show that the reasoning is fallacious, which we can do with even a six-sided die and six people. Run it a thousand times, and it becomes clear everyone assuming SIA will almost always be wrong, which indicates the inference model is epistemically defective and therefore should not be adopted.

Just as with the Monty Hall problem. Run it a thousand times (indeed even just ten), and it becomes clear that everyone assuming “the probability of your pick being the win remains the same after Monty shows you an empty door” is wrong two times out of three, which indicates the inference model is epistemically defective and therefore should not be adopted. You should always switch doors, because that doubles your odds of a win. No matter how much your intuition screams otherwise.
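For anyone who wants to run that check by brute force rather than intuition, here is a minimal sketch (my illustration, not from the post) that plays the game many times and simply counts the frequencies:

```python
import random

# Minimal brute-force check of the Monty Hall problem: play the game many
# times and count how often "stay" vs. "switch" wins.

def trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens an empty door that is not the player's pick:
    opened = random.choice([d for d in doors if d != pick and d != prize])
    final = next(d for d in doors if d != pick and d != opened) if switch else pick
    return final == prize

runs = 10_000
print("stay:", sum(trial(False) for _ in range(runs)) / runs)   # ~0.333
print("switch:", sum(trial(True) for _ in range(runs)) / runs)  # ~0.667
```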

By: X https://www.richardcarrier.info/archives/26502#comment-40119 Thu, 27 Feb 2025 05:10:19 +0000

Here’s another scenario: You have a googolplex-sided die. If it lands on one, a googolplex people are created. If it lands on any other number, only one person is created.

If you roll it a googol times, then based on the SIA you should expect googolplex × googol people to be created, but in reality, you’ll only get about googolplex + googol people, which suggests that the SIA massively exaggerates the number of observers!

The SIA might cause us to conclude that we’re in an observer-rich universe, or at least in an observer-rich part of our universe, but it implies nothing about just how widespread observer-rich universes and observer-rich parts of our universe actually are. For all we know, most universes and parts of our universe can be extremely observer-poor, with our own part of our universe being a massive exception to this rule!

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40115 Tue, 25 Feb 2025 18:33:21 +0000 In reply to X.

“Let’s roll a 1 trillion-sided die: If it rolls 1, infinity people wake up.”

This is a good example. Well worth comment…

“If any other number gets rolled, only one person wakes up.”

And indeed, make the number rolled be the single numbered person who is awoken.

Also, as a side-model, imagine that if it rolls a 1, you win a thousand dollars but are handed only one dollar (and you aren’t allowed to check whether the other $999 is really there), while if any other number is rolled, you win only one dollar, and are likewise handed one dollar.

“If you wake up, you should presume that 1 was rolled. Ditto for every other person who was woken up alone in a different trial of this. But this would mean that we should presume that 1 was always rolled on this die, while in reality, this die only rolled 1 once out of every 1 trillion times!”

In the side model: you get one dollar no matter what was rolled, and are now tasked with deciding whether you really won the whole $1000 and just have to go look for the other $999. Should you conclude you won the whole thousand because you were handed one dollar? Why? You have no information either way (the result is a dollar in your hand either way; ergo, by analogy, the result is someone awake wondering if they are alone either way).
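The point can be put as a one-line Bayes update (a sketch, with a trillion-to-one prior assumed to match the die): the dollar in your hand occurs with probability 1 on either hypothesis, so the likelihood ratio is 1 and the posterior just equals the prior:

```python
# Sketch of the dollar side-model as a Bayes update. Being handed one dollar
# happens with certainty on either outcome, so it carries zero information.

p_thousand = 1 / 10**12            # prior: a 1 was rolled (you won the full $1000)
p_one = 1 - p_thousand             # prior: any other number (you won only $1)

likelihood_thousand = 1.0          # you are handed one dollar either way
likelihood_one = 1.0

posterior = (likelihood_thousand * p_thousand) / (
    likelihood_thousand * p_thousand + likelihood_one * p_one
)
print(posterior == p_thousand)     # True: being handed a dollar changed nothing
```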

“If you’re an individual observer, you should assume that 1 was rolled in the die scenario above. But that doesn’t say anything about the frequency with which 1 was rolled, and indeed observers in almost all simulations (specifically those with one observer/few observers) would mistakenly conclude that 1 was rolled. This suggests that SIA would provide a distorted version of reality to observers in observer-poor scenarios regardless of just how widespread observer-poor scenarios actually are, no?”

In the sense I just outlined with the money and dice, yes.

Note that trying to determine what the odds are from the observation is entirely different than deciding what selection occurred knowing in advance what the odds are. That is, if you know already it’s a fair coin, you know both options are 50%. If you know it’s a fair trillion-sided die, you know the big-win option is one in a trillion. And so on. But what if you don’t know how big “the die” is? Can you guess from waking up?

Um. No. You have zero information about that. You don’t know if it was a foregone conclusion (no die or coin, but just a 100% chance you’d be awoken, regardless whether anyone else was), or a 50/50, or a trillion to one, or an umtillion to one.

So how can we get these larger numbers? Bentham has to work from the pool of all logically possible people, not an actual number of people put to sleep, to get his transfinite results, but that is a circular argument: it presumes its conclusion (that there are infinite people) in its premise (that there are infinite people). But there is no way to know how many people there are from observing only one of them.

This illustrates how he uses multiple different arguments and they are all bollocks. But they are each bollocks in different ways and for different reasons. Which is another red flag for crankery. They don’t have one well-stated and well-vetted argument to a conclusion. They have, instead, a whole Gish gallop of bad arguments to that conclusion, and then get angry when we point this out.

“If you were to look at all simulations (instead of just individual observers), almost all observers in single-observer worlds are misled by SIA.”

If that is what ChatGPT told you, I’m impressed.

This is another good way to check Bentham’s work: just brute-force the sims (run them all; or, say, tens of thousands of them) and just physically count to get your frequencies (hence probabilities). This is what teaches people how the Monty Hall Problem works when they refuse to believe it.

And here, indeed, it becomes obvious after 10,000 sims of the trillion-die experiment that a person being awake should not lead them to conclude a 1 was rolled, because in almost none of those runs will that be the case. Even if we pick one person and ignore all “other person” rolls, it ends up 50/50 whether that one person awoke because of a 1 or because of the other specific roll of the die that would have selected them.

For example, perhaps they were person 1,987,623,008. The die has a one in a trillion chance of rolling that or of rolling 1, and since both are identical odds, that reduces to 1/1 i.e. 50/50: it’s equally likely, between those two rolls, which will have been rolled.
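Here is a minimal brute-force sketch of that count (a small die stands in for the trillion-sided one, and the tracked person number is arbitrary; the logic is the same, it just runs in reasonable time):

```python
import random

# Brute-force sketch of the die experiment: condition on one specific person
# being awake, and count how often a 1 was actually rolled.

N = 1000                 # die faces; person k is awoken alone by a roll of k
j = 42                   # the one person we track (any number 2..N works)

rolled_one = awake = 0
for _ in range(5_000_000):
    roll = random.randint(1, N)
    if roll == 1 or roll == j:   # the only rolls on which person j is awake
        awake += 1
        rolled_one += (roll == 1)

print(rolled_one / awake)  # ~0.5: given j is awake, "1 was rolled" is a mere 50/50
```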

Note that another useful tool is to run Monty Hall conceptually with a trillion doors and have Monty open all but two of them (your door and one other). That readily reveals how his action gives you a ton of new information that you should then act on (and thus why you should always change your pick of door afterward). So the method you got ChatGPT to use does work. Kudos.
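A sketch of that many-door variant (again just an illustration, with a thousand doors standing in for a trillion): with N doors, switching wins (N − 1)/N of the time, which makes the information gain vivid:

```python
import random

# The trillion-door version in miniature: Monty opens all but your pick and
# one other door; switching then wins (N - 1) / N of the time.

def trial(n, switch):
    prize = random.randrange(n)
    pick = random.randrange(n)
    # The one other door Monty leaves closed is the prize door, unless you
    # already picked the prize, in which case it is a random losing door:
    other = prize if prize != pick else random.choice(
        [d for d in range(n) if d != pick])
    return (other if switch else pick) == prize

runs = 10_000
print(sum(trial(1000, True) for _ in range(runs)) / runs)  # ~0.999: switch wins
```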

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40112 Tue, 25 Feb 2025 17:48:37 +0000 In reply to X.

In the “configuration of the stars” fallacy the same mistake occurs: assuming the entire probability space is split between “God arranged the stars” and “the specific configuration of stars we observe was a random accident,” where the likelihood on the latter is obviously absurdly low, yet the likelihood on the former is 100% (since if God wanted that configuration, he would produce it 100% of the time, whereas a random process would not).

The error is forgetting that a large part of that probability space is occupied by all the other possible configurations of the stars that random chance could have produced. Every configuration gets a probability on “chance,” and it is the sum of all those probabilities that equals the general condition “chosen by accident.” In other words, “chosen by accident” does not entail a specific configuration, but any of countless configurations.

The only way to get a differential probability favoring design is if you are looking at a configuration that is less likely than any random configuration is expected to be. For example, if all the stars in the universe were arranged to form a cross as viewed from Earth: the probability of that is vastly less than the randomized expectancy and thus is less probable as an accident than by design. But there are countless configurations that are far more expected outcomes of random selection than that, and so when we observe one of those, design no longer competes as an explanation. The likelihoods are effectively the same in that case; so the probability reduces to the prior. And an informed prior would not favor design.
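A toy numerical version of that comparison (all numbers assumed purely for illustration) shows how the posterior reduces to the prior for a typical configuration, while a design-preferred pattern like the cross would actually move it:

```python
# Toy numbers for the probability space. On chance, every specific
# configuration has likelihood 1/M. A design hypothesis that is not
# gerrymandered to the observed outcome spreads its probability over
# configurations too, so for a typical configuration the likelihoods match.

M = 10**6                  # possible star configurations (toy value)
prior_design = 0.01        # illustrative prior for design

like_chance = 1 / M        # a typical configuration, on chance
like_design = 1 / M        # honest design does no better on a typical one

post = (like_design * prior_design) / (
    like_design * prior_design + like_chance * (1 - prior_design)
)
print(post)                # 0.01: identical to the prior; no evidence either way

# Only a pattern design would specifically prefer (the "cross") moves anything:
like_cross_design = 0.5    # suppose design picks the cross half the time
post_cross = (like_cross_design * prior_design) / (
    like_cross_design * prior_design + like_chance * (1 - prior_design)
)
print(post_cross)          # ~0.9998: far-less-likely-than-random patterns do favor design
```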

If we added the hypothesis “design + other universes” we would have to include a portion of the probability space for “design without other universes” and “other universes without design,” and even then we would not get a leg up, because there is no causal relationship between a random selection and the existence of other selections. If we pull a bead out of a bowl of a hundred and one beads, whichever bead we pull will have been drawn against odds of a hundred to one, but that does not mean there must therefore be a hundred other bowls, each with a different bead being pulled. Since there is no inherent causal relationship, we cannot infer from the improbability of a draw from one bowl that other bowls exist.

So, too, the configuration of the stars, which is just a random draw from a bowl. That there is a random draw from a bowl entails, to 100% certainty, that an improbable result will be drawn. There is therefore no way to argue from the mere “improbability” of the result that other bowls (other universes) exist: that inference would require the probability of getting an improbable draw to be below 100%, but it sits at 100%, and no probability can be higher than that, so no other theory can generate a higher likelihood so as to be favored over it.
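A tiny simulation makes that certainty concrete (a sketch; the bowl size is taken from the example above):

```python
import random

# Sketch of the bowl-of-beads point: every specific bead is a ~100-to-1 draw,
# yet some such improbable result occurs on every single trial.

beads = list(range(101))
p = {b: 1 / len(beads) for b in beads}   # each bead: ~0.0099, a hundred to one against

trials = 10_000
improbable = sum(1 for _ in range(trials) if p[random.choice(beads)] < 0.01)
print(improbable / trials)  # 1.0: an improbable draw is guaranteed, so it is no evidence
```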

This is why the mere configuration of the stars can never give us any evidence that a multiverse exists. For an example of a fact about the stars that could give us such evidence, see Six Arguments That a Multiverse Is More Probable Than a God, and in particular the argument from scaling: that there are a plethora of planets around a plethora of stars in a plethora of galaxies forming a plethora of galaxy clusters, is background knowledge that indicates a trend toward there being a plethora of universes (the existence of scale-steps increases the expectancy that the steps continue, and the next scale-step is “universe”).

By contrast, Aristotle’s / The Bible’s universe, with a single Earth at the center and a close fit of spheres around it ending at a plane of stars where supreme perfection is reached by scale-steps, does not suggest a continuation of the scale-stepping, and so would not be evidence for a multiverse (there could be other evidence for that, but we’re just discussing this one item of evidence).

So in the present world, the scale-steps are nonrandom and indicate an uncompleted upward trend that should naturally end in a plethora of universes (unless something exists to stop that, which we have no evidence of).

That is still weak evidence. The likelihood ratio is not large, as there are many ways the scale-stepping could still end at galaxy clusters. The only thing we have is that, on this gradient, there are more ways for the stepping to continue into other universes than there are on the scale-stepping of the Aristotelian world; so in the latter we have no evidence for a multiverse, while in the former we have a slight amount. And this arises from the observed results not being random and distinctly pointing to one trend rather than another. So this is not an argument from the mere configuration of the stars (which gives us no information), but from the specific peculiar ordering of the stars (which gives us information), similar in that respect to the “arranged like a cross” case above.

By: Richard Carrier https://www.richardcarrier.info/archives/26502#comment-40111 Tue, 25 Feb 2025 17:23:30 +0000 In reply to X.

Yes, there are also problems with his handling of transfinite arithmetic, as I hinted at before. But the core error is the one I’ve just outlined.

And yes, there is kinship with the Monty Hall problem. A different thing is happening there, but it’s the same kind of mistake: our intuition locks with certainty on a conclusion we are sure cannot possibly be correct, yet is. All because something is being left out.

In the Monty Hall case, what is being left out is that negative information is information, and thus getting that information changes the scenario, so we should update our expectancies accordingly (this becomes clear if you run the problem yourself with cups and balls until you start to see, physically, what that change in information is).

In Bentham’s case (as in all fallacies of neglected total probability) what is being left out is the rest of the contrary hypothesis. He (and we, when we fall for it) mistakes the probability space as split between “probability I awoke alone” and “probability everyone awoke,” but that leaves out all the other people who might have awoken instead of us. When you put their probabilities back in, the outcome does not turn out as we thought when we were following a false dichotomy that excluded all those middle possibilities. Once we realize we are doing that and correctly fix the error, we see what’s really the case.
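Putting the neglected hypotheses back in can even be done exactly (a sketch using exact fractions; the person number is arbitrary):

```python
from fractions import Fraction

# Restore the full probability space for the die experiment. A roll of 1
# wakes everyone; a roll of k wakes only person k. For a given person j, the
# live hypotheses are "roll = 1" and "roll = j", not the false dichotomy
# "I awoke alone" vs. "everyone awoke".

N = 10**12                     # faces on the die
j = 7                          # any particular numbered person (arbitrary)

p_roll = Fraction(1, N)        # each face is equally likely
p_j_awake = p_roll + p_roll    # j is awake only on a roll of 1 or a roll of j

print(p_roll / p_j_awake)      # 1/2: exactly the 50/50 from the brute-force count
```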

For completeness I’ll run the same point conceptually for the “configuration of the stars” fallacy…

By: X https://www.richardcarrier.info/archives/26502#comment-40107 Tue, 25 Feb 2025 07:12:10 +0000 In reply to Richard Carrier.

BTW, I’ve developed an interesting (purely hypothetical) thought experiment:

Let’s roll a 1 trillion-sided die: If it rolls 1, infinity people wake up. If any other number gets rolled, only one person wakes up. If you wake up, you should presume that 1 was rolled. Ditto for every other person who was woken up alone in a different trial of this. But this would mean that we should presume that 1 was always rolled on this die, while in reality, this die only rolled 1 once out of every 1 trillion times!

If you’re an individual observer, you should assume that 1 was rolled in the die scenario above. But that doesn’t say anything about the frequency with which 1 was rolled, and indeed observers in almost all simulations (specifically those with one observer/few observers) would mistakenly conclude that 1 was rolled. This suggests that SIA would provide a distorted version of reality to observers in observer-poor scenarios regardless of just how widespread observer-poor scenarios actually are, no?

Here’s what ChatGPT subsequently said in response to all of this:

“Yes, exactly! SIA systematically distorts reality for observers in observer-poor scenarios, regardless of how common those scenarios actually are.

Breaking Down the Problem:

1. Individual Observer’s Perspective

If you wake up in the trillion-sided die experiment, SIA tells you to assume that the die rolled 1 (because most awakened observers exist in the “1 was rolled” scenario).

This makes sense from a self-centered, conditional probability standpoint—given that you’re an observer, you’re more likely to be in a world where there are many observers.

2. Reality Across Simulations

In reality, the die lands on 1 only once every trillion rolls, meaning that in the vast majority of trials, the one awakened observer is completely wrong in assuming that 1 was rolled.

If you were to look at all simulations (instead of just individual observers), almost all observers in single-observer worlds are misled by SIA.

Thus, SIA consistently produces incorrect conclusions in observer-poor worlds.

3. Distortion of Reality

This means that SIA does not accurately reflect the frequency of different worlds, just the experience of observers within them.

Observers in observer-poor scenarios will always conclude that they are in observer-rich scenarios—even when they are not.

So even if observer-poor worlds vastly outnumber observer-rich ones, the individuals in them will still believe that they are in observer-rich ones.

The Core Issue:

SIA only tells you what you should expect if you’re an observer, not what is actually happening in the broader reality.

In other words, SIA creates a selection bias that skews observers’ perceptions of reality—especially in worlds where observers are rare.

If you were to ask, “How often is 1 actually rolled?”, SIA doesn’t answer that correctly. It only tells you what you should believe as an observer, not what actually happens across all simulations.

The Big Implication:

This suggests that SIA cannot be trusted to provide an accurate picture of the overall world structure—only of an observer’s experience within a world. So it systematically misleads people in observer-poor scenarios and distorts the perception of reality.

Does this summary align with your concerns?”
