A new law of nature has been fleshed out and proposed in a research paper by Wong, Cleland, Arend, and Hazen, which I will label for convenience the Wong-Hazen thesis. To read the original paper, see “On the Roles of Function and Selection in Evolving Systems,” PNAS 120 (8 July 2023). To read a good lay summary, see Will Dunham’s treatment for Reuters in “Scientists Propose Sweeping New Law of Nature, Expanding on Evolution.”

The debate over theology isn’t at all on their radar. But it’s important to explain how what they are proposing closes a gap that atheists have long suspected, and even argued, is filled by something like this, and why that makes theism even more improbable than it already was. Curiously, it hits upon topics that have recently been fashionable: the origin and causes of order and organization in the universe; and what “the first cause” (in whatever sense you want) is starting to look like from an empirical point of view (the point of view theists are loath to adopt, because it goes badly for them).

Some Recent Context

This all reminds me of the fundamental difference between the foundational methodologies of science and theology, and why science discovers knowledge and makes real progress while theology doesn’t: science is empirical, objective, and integrative, and measures confidence by survival of serious falsification tests; whereas theology is armchair, anecdotal, and cherry-picking, and evades real falsification tests. This distinction was recently spelled out in another paper, by Nieminen, Loikkanen, Ryökäs, and Mustonen, “Nature of Evidence in Religion and Natural Science,” Theology and Science 18.3 (2020), which I’ll refer to for convenience as the Nieminen-Mustonen thesis. Ironically, an upset theist who tried to refute them in the next volume (19.3; 2021) ended up validating their thesis: they deployed every tactic the paper had just identified as invalid procedure relative to the real operations of science, all in a vain effort to emulate the appearance of objective procedure (“Bias in the Science and Religion Dialogue? A Critique of ‘Nature of Evidence in Religion and Natural Science’”).

The distinction, and its importance, will become apparent by the end of my analysis here. But this all relates to a recent “debate” (if you can call it that) between German philosopher Volker Dittmar and inept internet theist Jose Lameiro on Germany’s Quora, where Dittmar answered the question “Kann das Universum ohne eine höhere Intelligenz entstanden sein?” (“Can the universe have come into being without a higher intelligence?”). I won’t fisk these authors’ remarks. They are a mixed bag, but Dittmar is the one who comes out as mostly correct: his point, which I have long made myself (and which Lameiro’s reply avoids answering, by throwing up a hodgepodge of apologetical tropes instead), is that we observe complex things always arise from simpler things, a trend nixing God (who is maximally complex) from being the “first cause,” or even a cause at all. Once we clear aside all the side-points, distractions, and gobbledygook that Lameiro bombs his comments with, the most Lameiro has to say as far as actually addressing Dittmar’s argument is that information cannot arise, it must come from somewhere, i.e. the universe can never be more complex than its first cause, necessitating a “God cause.” Because you can never get out more than was already put in, as he puts it.

This is all false. And the Wong-Hazen thesis demonstrates why it is false. It thus refutes the only pertinent argument Lameiro made against Dittmar’s point. The rest of Lameiro’s argument is boilerplate, and thus already refuted in my debate with a far more competent theist, Wallace Marshall, in respect to cosmology, such as Lameiro’s inept understanding of the mathematics of infinity, or the science of cosmology, or why God is not simple but informationally complex. See also my recent discussions of many of the same kinds of points Lameiro (albeit confusedly) makes in Why Nothing Remains a Problem and Is Science Impossible without God? and The Argument to the Ontological Whatsit.

But here I will address the central point of Volker Dittmar: that, empirically, non-intelligent complexity only ever arises from ever-greater simplicity; and lo, that is what we observe in the universe. Apart from human action (which can intelligently skip steps), all observed complexity is built out of ever-simpler things (and that, ultimately, even includes human action: as our intelligence also arose out of ever-simpler things). This makes God the least likely explanation of the universe—because a god is informationally complex, and, being God, would have skipped steps. He also would have required steps to “build” him in the first place, just as was required for us; but I’ll set that aside today and focus instead on the steps he would skip, and why that matters. Because, instead, what we see is exactly what we’d expect to see if there is no God, which would be a strange choice for a god to make. This is a universal problem for theism. And Wong-Hazen now gives us a rigorous scientific reason why.

The Wong-Hazen Proposal

Wong et al. propose a new law of nature, which they dub the “Law of Increasing Functional Information.” They give it a mathematical formalism. And they show how it explains such disparate phenomena as the evolution of life, the evolution of stars, even the crystallization of minerals. In fact, it can be shown to explain all natural increases in complexity, from the condensation of atoms after the Big Bang to the distillation of all the elements on the Periodic Table. It might even play a linchpin role in explaining the Big Bang itself, as it would certainly apply to most cosmological theories yet in play, although these authors do not directly propose this. Of course, this is all now just at the proposal stage; whether it gets struck down or bolstered and thus becomes an established law of physics awaits further scientific research. But it has immediate empirical verification, and elegantly explains quite a lot in a rather straightforward way.

The gist of this new law is, as I will colloquially put it, “Chaos + Natural Selection + Time = Order,” which can even be put as “Chaos + Death + Time = Order,” since what Wong et al. formalize as “selection for function” (my “natural selection”) means simply that nonfunctional outcomes die out, leaving only functional outcomes; and this death is inevitable, by virtue of simply being nonfunctional, because they define function as, in effect, that which has the capacity to stick around (and thus “not die out”). And since this is itself inevitable—in all possible worlds, some organizations of things will, by their own inherent attributes in interaction with their environment, stick around, while others will fail or die out—we don’t need to specify “death” in the equation, as that is entailed by “time.” So really, the Wong-Hazen law is “Chaos + Time = Order.” They use different terms and more formal definitions and equations and metrics than I am using here; I’m just translating the gist of things in ready-to-grasp language.

This means that their Law will govern all possible worlds, just like Thermodynamics and Mathematics (see All the Laws of Thermodynamics Are Inevitable and All Godless Universes Are Mathematical), and therefore is a necessary property of any existence, and therefore requires no further explanation. You don’t need to posit a God, or any cause at all, for why these laws manifest and govern the worlds they do; just as you don’t need anything to explain why spacetimes obey the laws of geometry: they cannot not obey the laws of geometry; that’s a logical impossibility. And that means these kinds of laws are a necessary being, in the subordinate sense that, though the worlds they describe might not be necessary beings, they themselves are necessary beings once any spacetime world exists (see, again, The Argument to the Ontological Whatsit). They are therefore not brute facts or properties begging explanation. They are self-explaining; they are caused by existence itself. Once there is a spacetime, necessarily there is geometry, and the Laws of Thermodynamics, and the Wong-Hazen Law (just as Victor Stenger suspected for all laws of physics, as articulated in The Comprehensible Cosmos).

Thus, the Wong-Hazen effect couples with, for example, the Second Law of Thermodynamics, errantly regarded as a law stating that everything tends toward disorder (that’s not really how that law operates), to produce an opposite effect, a natural tendency toward order. Which also requires no mysterious design or forces to explain; it literally follows automatically from any system of any objects in any spacetime (see, again, All the Laws of Thermodynamics Are Inevitable). The actual Second Law states that the net entropy (disorder) of a (closed) system can never decrease and will tend to increase; but because this is a statement about the system as a whole, not the subordinate parts of it (hence the relevance of a system being “closed” in its equation), that law does not say anything about what kind of organization and order can arise within the system. And in fact we now know dissipative systems trade entropy for order: order and organization arise naturally in a system by “burning entropy” to produce it. The result is that while pockets of order and complexity naturally increase within the system, the total system becomes more disordered, e.g. crystals form naturally (order), but the amount of available heat dissipates away into the background (disorder). More, and more complex, things arise naturally by the Wong-Hazen Law, as more of the total energy of the system becomes dissipated and unusable by the Second Law. The two laws are complementary and in fact fuel each other. Inevitably.

As Wong et al. put it:

The manifest tendency of evolving systems—especially nonliving ones, such as those involved in stellar, mineral, and atmospheric evolution … —to become increasingly ordered with the passage of time seems to stand in contrast to the [similar] temporally asymmetrical character of the second law of thermodynamics, which characterizes natural phenomena as becoming increasingly disordered with the passage of time. One of the distinguishing features of our proposal is formulating a universal law for both living and nonliving evolving systems that is consistent with the second law of thermodynamics but may not follow inevitably from it.

They are thus discovering the missing other side of the coin, as it were.

How This Works

A lot of this has to do with the power of randomness, the opposite of order or intelligence. I think theists fail to grasp how powerful randomness is as a creator, possibly because they have a naive understanding of probability theory (see Three Common Confusions of Creationists). They think, like many people erroneously do, that “randomness” means “uniform disorder,” but “uniform disorder” would require a highly non-random distribution. Randomness is actually extremely powerful: it is chock full of information (in statistics, random sampling from a system produces acute knowledge about that system; the digits of pi, read with (say) base-ten ASCII, include all possible books that have ever and even could ever be written); and it is an inherent ordering force (the more random events there are, the more likely any conceivable organization of them will arise). Randomness is what powers the Problem with Nothing and causes the Laws of Thermodynamics. It’s what created us out of primordial soup. It’s what built our minds. Randomness is powerful. And it drives the Wong-Hazen process.

Imagine a six-sided die rolled a thousand times, and it never rolls the same number until all numbers have been rolled; for example, 1, 3, 5, 6, 2, 4, 4, 5, 3, 6, 1, 2, etc. That looks random, but in fact it is not: it is highly ordered. Some “law” or “force” would have to be preventing other results, like 1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3 (and I actually just rolled that sequence right now on my desk). The naive think seemingly ordered sequences like the one I just randomly rolled must be nonrandom, when in fact they are more random than the forced sequence I wrote down earlier, where the die can’t roll a number again until all its numbers have been rolled. That requires an ordering force (something “stopping” the die from rolling certain numbers until certain conditions are met). Whereas a truly random process will inevitably generate order (notice the run of 5’s in my random sequence); and in fact ever more complex order, the larger the number of randomized events. This is how life started. This is probably what explains observed fine-tuning. Lots of random events entails the emergence of order.
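We can check this intuition with a quick simulation (a toy of my own devising, not anything from the paper): generate runs of six die rolls at random and count how often they violate the “no number repeats until all numbers have been rolled” pattern. Genuinely random runs almost never obey it.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def breaks_no_repeat_rule(rolls):
    """True if some face repeats before all six faces have appeared,
    i.e. the run fails the artificially 'ordered' pattern in the text."""
    seen = set()
    for r in rolls:
        if r in seen and len(seen) < 6:
            return True
        seen.add(r)
    return len(seen) < 6  # never showing all six faces also breaks the rule

trials = 100_000
violations = sum(
    breaks_no_repeat_rule([random.randint(1, 6) for _ in range(6)])
    for _ in range(trials)
)
print(f"fraction of random six-roll runs breaking the pattern: {violations / trials:.3f}")
```

Only about 1.5% of random six-roll runs (6!/6^6) hit each face exactly once, so roughly 98.5% break the “orderly” pattern: repeats and runs are the signature of chance, and the no-repeat sequence is the signature of a constraint.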

Add to this fact leveraging—the ability of order, once arising at random, to build on itself over time, creating even more organized complexity—and you have Wong-Hazen. And all you need to get leveraging is selective death: randomly organized processes that by chance have the properties ensuring their endurance long enough to build on or build something out of (like, say, most molecules), will be “selected” by that very fact to stick around, while other outcomes die off, leaving room (and material) for continued evolution, by that same process (e.g. the off-cast detritus of stars, a.k.a. heavy elements, becomes planets). Natural selection thus explains everything, not just life. The only thing particular to life is that it sticks around now as a stable complex molecule (DNA) that is able to preserve a considerable amount of information across time, while randomly tinkering with it (mutation). Thus, life can build from a primeval proto-cell into a mammoth or a man. But even inorganic objects can preserve some information over time so as to leverage up into greater and greater complexity: the first stars burned helium and hydrogen into heavier elements; which by sticking around carried information on into other processes, like the building of planets, and eventually life itself.
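Here is a minimal sketch of that leveraging dynamic (a toy model I made up for illustration; the entities, stability values, and rates are all arbitrary assumptions, not anything from Wong et al.): random “stuff” appears, whatever is unstable dies, and surviving pairs occasionally combine into something more complex that inherits their endurance. Average complexity climbs with no one steering.

```python
import random

random.seed(1)

def step(pop, influx=50, cap=500):
    # Selective death: each entity persists with probability equal to its
    # (randomly assigned) stability. Nothing "chooses"; the unstable just die.
    survivors = [e for e in pop if random.random() < e["stability"]]
    # Leveraging: surviving pairs occasionally combine into a more complex
    # entity that inherits the better stability of its parts.
    for a, b in zip(survivors[::2], survivors[1::2]):
        if random.random() < 0.3:
            survivors.append({
                "complexity": a["complexity"] + b["complexity"],
                "stability": max(a["stability"], b["stability"]),
            })
    # Fresh simple material keeps arriving at random.
    survivors += [{"complexity": 1, "stability": random.random()}
                  for _ in range(influx)]
    # Finite material: the "world" only holds so much stuff at once.
    return random.sample(survivors, cap) if len(survivors) > cap else survivors

pop = [{"complexity": 1, "stability": random.random()} for _ in range(200)]
for _ in range(100):
    pop = step(pop)

mean_complexity = sum(e["complexity"] for e in pop) / len(pop)
print(f"mean complexity after 100 rounds of death + leveraging: {mean_complexity:.1f}")
```

Every input is complexity 1; only death and recombination act; yet mean complexity ends well above 1, because what endures gets built upon.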

Information is preserved in the form of environments, for example: when we say natural selection produced the evolution of life on Earth, what we mean is that information from the environment (the Earth’s atmosphere and geology and climates, and eventually its ecosystems) gets transferred into living organisms, by selection: what “can survive” is based on information in the environment itself. Because the environment inevitably kills things, that information gets transferred into DNA (via what is selected to stick around). No explanation is needed for how or why this happens: death (killing what can’t survive) is an inevitable outcome of the system. There is no mysterious force involved in choosing what dies; what dies is chosen by what’s lethal. The environment itself does this automatically, without any intelligent intervention. What Wong et al. have found is that this applies to every other leveraged increase in complexity in cosmic history, from stars and galaxies to the periodic table and Earth itself.

And the origin of all this information is randomness. No organized intelligence required.

To see what I mean, follow their analysis of stellar nucleosynthesis.

The Big Bang erupts randomly, leaving a disordered but ultra-hot soup (where most of its low entropy comes from the mere heat density of that soup and not its organization; it actually lacks much in the way of nonrandom organization). As that soup expands and cools (its heat thereby becoming dissipated, increasing its entropy), that random chaos leads to some pockets of matter being denser than others (by simple chance; cast density around completely at random, and inevitably some pockets will be denser than others), thereby collapsing inevitably into stars. All the information here comes from the random distribution itself. It does not come from a mind.
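The density-fluctuation point is easy to verify numerically (a toy illustration of my own, not the authors’ math): scatter matter into regions of space purely at random, and some regions inevitably come out denser than others.

```python
import random

random.seed(3)

# Scatter 10,000 units of matter into 100 regions of space completely at
# random (no plan, no design), then compare the densest and sparsest regions.
regions = [0] * 100
for _ in range(10_000):
    regions[random.randrange(100)] += 1

mean = sum(regions) / len(regions)
print(f"mean density: {mean:.0f}, densest: {max(regions)}, sparsest: {min(regions)}")
```

Random scatter virtually guarantees overdense pockets (Poisson fluctuations of roughly sqrt(100) = 10 around the mean of 100 here); a perfectly uniform spread, by contrast, is the outcome that would require a constraint to arrange.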

Stars then stick around for a while because (and for as long as) they happen to fall into stable processes; while less stable “collapsing” events don’t stick around. Stars are thus not intelligently planned or designed; they are just the accidental outcome of being stable. Some things by chance accident just have the property of being able to endure, against time and other forces that would destroy them. Again, this is expected by accident: cast matter into random forms, and just by chance some (like stars) will be more enduring than others (like mere shapes in intergalactic dust clouds). Stars thus form randomly (not by design), and then are “naturally selected” to endure by their accidental properties (like density and atomic interactions), while everything else gets “killed off” (intergalactic dust clouds being, in effect, the corpses of failed, aged-out, killed, or never-were stars). Then these stars inevitably burn light elements into heavier elements, again increasing complexity in the universe, all while at the same time increasing entropy, by burning concentrated heat off into space.

Why do heavy elements stick around? Because they can. Those that can’t, don’t. Think not only of the artificial elements on the Periodic Table that can’t form naturally or wouldn’t survive long enough to do much even if they did, but also of all the different configurations of quarks that have a disastrously short lifespan. The reason all matter is made today out of electrons, protons, and neutrons, is that those are what by chance accident had the randomly-selected properties of being stable and sticking around. We aren’t made of pions simply because they are selected away (killed off) by lacking the properties needed to stick around. There are tons of randomly assembled forms of matter; by chance accident we expect some to be surviving and others not. Most are not. But inevitably, some will be. And so that’s what we and our planet and household goods are made of. All because of randomness.

As Wong et al. put it:

Thus, stellar evolution leads to new configurations of countless interacting nuclear particles. Inexorably, the system evolves from a small number of elements and isotopes to the diversity of atomic building blocks we see in the universe today.

And hence on to, as well, an inevitably resulting geological evolution…

[T]he average chemical and structural information contained in minerals increases systematically through billions of years of planetary evolution. Thus, as with stellar nucleosynthesis, mineral evolution occurs as a sequence of processes that increase the system’s diversity in stages, each building on the ones that came before.

When information is preserved across a random process, by merely possessing the randomly-selected capacity to survive and stick around, it can accumulate, and thus inevitably increase order, diversity, and complexity. Inevitably. No intelligence required. To the contrary, like biological evolution, you would need an intelligence to intervene to stop this from happening, not to produce it.

As again Wong et al. put it (emphasis mine):

Life, though distinct in the specifics of its evolutionary mechanisms, can be conceptualized as equivalent to the previous examples of nucleosynthesis and mineral evolution in the following way: Whether viewed at the scale of interacting molecules, cells, individuals, or ecosystems, biological systems have the potential to occur in numerous configurations, many different configurations are generated, and natural selection preferentially retains configurations with effective functions.

When you have a ton of random stuff, “the potential” logically necessarily exists for it to end up in numerous configurations. Add time, and random action will therefore logically necessarily produce “many different configurations” of that stuff. Throw those configurations all in together in a giant random mess, and some will by chance have the right properties to “stick around,” while the others won’t; indeed most things won’t: the vast majority of “stuff” and its random configurations will be destroyed or broken up or otherwise “killed off” (which is why almost all the contents of our universe are lethal, not conducive, to life; it’s mostly empty or scattered junk). This sifting effect thus results in the emergence of order, organization, and complexity (stars from primordial matter, planets from stellar dust, organic molecules from random chemical interactions, self-replicating molecules from a lot of that random mixing, then natural selection on up the ladder of life), amidst an ever-expanding background of volatile chaos (a vast and growing dust- and detritus- and radiation-filled vacuum). Wong-Hazen predicts this will occur in nearly every possible world where there is “enough stuff” for this randomization to have such effects.

And it is indeed randomness that ensures this law takes effect and produces the predicted observations (emphasis again mine):

These three evolving natural systems differ significantly in detail. Stellar nucleosynthesis depends on the selection of stable configurations of protons and neutrons. Mineral evolution relies on selection of new, locally stable arrangements of chemical elements. Biological evolution occurs through natural selection of advantageous heritable traits. Nevertheless, we conjecture that these examples (and many others) are conceptually equivalent in three important respects:

  1. Each system is formed from numerous interacting units (e.g., nuclear particles, chemical elements, organic molecules, or cells) that result in combinatorially large numbers of possible configurations.
  2. In each of these systems, ongoing processes generate large numbers of different configurations.
  3. Some configurations, by virtue of their stability or other “competitive” advantage, are more likely to persist owing to selection for function.

In other words, each system evolves via the selection of advantageous configurations with respect to systemic persistence.

And this suggests an actual natural law is at work here, explaining the fact that evolving systems exist across the entire cosmos and at all scales, and the fact that “evolving systems are asymmetrical with respect to time” and “they display temporal increases in diversity, distribution, and/or patterned behavior.” Hence “these three characteristics—component diversity, configurational exploration, and selection,” which they “conjecture represent conceptual equivalences for all evolving natural systems,” might well “be sufficient to articulate a qualitative law-like statement that is not implicit in the classical laws of physics.”

In other words, once you have those three things, you have that leveraged increase in complexity all the way up over time. And random chance can explain all three. Component diversity, if selected at random, will always be high—and for, ironically, the same reason as the Law of Entropy: there are far more randomly selectable states that are highly component-diverse than pervasively simple. Add time, and configurational exploration is then the statistically inevitable product of what is called random walk. And selection is the inevitable outcome of any randomly configured system: vastly more randomly accessible configurations will be selective in what they kill and allow to survive, than configurations that allow everything or nothing to stick around.
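A random walk’s power of “configurational exploration” can be shown in a few lines (my own toy example; the bit-string “configuration space” is an arbitrary stand-in for any system’s possible states): one random tweak per tick, and the number of distinct configurations visited keeps growing with time.

```python
import random

random.seed(11)

def explored(steps, n_bits=12):
    """Count distinct configurations visited by a random walk that flips
    one randomly chosen bit of an n_bits-long configuration per step."""
    config = [0] * n_bits
    seen = {tuple(config)}
    for _ in range(steps):
        config[random.randrange(n_bits)] ^= 1  # one random tweak per tick
        seen.add(tuple(config))
    return len(seen)

for steps in (100, 1_000, 10_000):
    print(f"{steps:>6} steps -> {explored(steps)} distinct configurations explored")
```

No search strategy, no goal, no memory beyond the current state; mere time plus randomness suffices to explore the space.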

Their most concise statement of this new law of nature is:

  • “Systems of many interacting agents display an increase in diversity, distribution, and/or patterned behavior when numerous configurations of the system are subject to selective pressure.”

Or more formally:

  • “The functional information of a system will increase (i.e., the system will evolve) if many different configurations of the system are subjected to selection for one or more functions.”

Which, I observe, will happen to describe most randomly selected systems.
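For the curious, “functional information” in roughly the sense Hazen and colleagues use it is I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible configurations achieving at least the degree of function Ex. Here is a toy computation (the 16-bit “system” and its “function,” the longest run of 1s, are arbitrary choices of mine, purely for illustration): the rarer a degree of function, the more functional information it represents.

```python
import math

# Toy instance of functional information: I(Ex) = -log2(F(Ex)), where F(Ex)
# is the fraction of all configurations whose function is at least Ex.
# The "system" is a 16-bit string; the (arbitrary) "function" of a
# configuration is the length of its longest run of 1s.

N = 16

def function(config):
    best = run = 0
    for bit in config:
        run = run + 1 if bit else 0
        best = max(best, run)
    return best

def functional_information(ex):
    total = 2 ** N
    achieving = sum(
        1 for i in range(total)
        if function([(i >> b) & 1 for b in range(N)]) >= ex
    )
    return -math.log2(achieving / total)

for ex in (1, 4, 8):
    print(f"I(Ex={ex}) = {functional_information(ex):.2f} bits")
```

Demanding a longer run of 1s shrinks the fraction of configurations that qualify, so the functional information rises monotonically with the demanded degree of function, which is exactly the quantity their law says selection drives upward.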

The origin of order, organization, and complexity in nature therefore has no need of a God to explain it. The question of cosmogenesis, i.e. where “all the stuff” comes from, meanwhile, can be explained in much the same way: eternal inflation, for example, essentially describes an evolving Wong-Hazen process on an extra-cosmic scale; and if we need any more fundamental an explanation (such as for what started or manifests that eternal inflation, as opposed to something else), we have other random-selection hypotheses that work better, and are more in evidence, than theism (see, for example, The Problem with Nothing and Six Arguments That a Multiverse Is More Probable Than a God and Why the Fine Tuning Argument Proves God Does Not Exist).

You may be familiar with arguments for the inevitability of the Law of Entropy from statistical mechanics, whereby randomly mixing atoms results in ever-more-probable states being selected. Every state, every organization, of atoms of gas in a tank has the same probability as every other; but most of those states are a chaos, very few are orderly; therefore, as you keep “rolling that die” it gets more and more disordered over time, as more disordered states are far more likely outcomes with each roll. Now imagine adding something into that tank, a “disruptor” or what Wong et al. call “frustrators,” things that cause selection to occur: rather than all possible states being equally likely, some now become very unlikely, by being “killed off,” leaving other states, which avoid dying out, to become unequally far more probable over time. Let’s say, suddenly, atoms of that gas become sticky. Now clumps of those atoms have a “survival” advantage over scattered chaoses of them—because they “stick around” (pun intended). This creates escalating order within the tank, even as the total entropy still keeps going up as well (since that clumping comes at a cost of increasing disorder everywhere else in the system). If such “frustrators” can arise by accident, due to a randomization of properties within the tank, then no intelligence is required for this resulting evolution of specified complexity within that tank. It will just happen on its own.
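That sticky-gas thought experiment is simple enough to simulate directly (a toy model; the one-dimensional “tank” and the stickiness rule are my own contrivances): run identical random walks, except that in one run atoms meeting in the same cell fuse into clumps that then “stick around” together.

```python
import random

random.seed(7)

SIZE, N_ATOMS = 100, 200  # cells in a 1-D "tank", and atoms in it

def simulate(sticky, steps=200):
    """Random-walk the atoms; if sticky, atoms landing in the same cell
    fuse into one clump (they 'stick around' together). Returns clump
    sizes, largest first."""
    clumps = [{"pos": random.randrange(SIZE), "size": 1} for _ in range(N_ATOMS)]
    for _ in range(steps):
        for c in clumps:
            c["pos"] = (c["pos"] + random.choice((-1, 1))) % SIZE
        if sticky:
            by_pos, merged = {}, []
            for c in clumps:
                if c["pos"] in by_pos:
                    by_pos[c["pos"]]["size"] += c["size"]  # fuse clumps
                else:
                    by_pos[c["pos"]] = c
                    merged.append(c)
            clumps = merged
    return sorted((c["size"] for c in clumps), reverse=True)

print("non-sticky, five largest clumps:", simulate(False)[:5])
print("sticky,     five largest clumps:", simulate(True)[:5])
```

Without stickiness every “clump” stays size 1 forever; with it, large clumps form from the very same random motion. The “frustrator” (stickiness) converts blind randomness into escalating order, with no selector in sight.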

This might be hard for a theist to understand, again because of their naive understanding of probability and randomness. As Wong et al. explain:

The wide diversity of materials in our universe are a result of these barriers [to a rapid dissolution of the whole]. The elements of the periodic table exist because light nuclei do not easily fuse to form iron, and many heavy nuclei are stable and do not decay. A visible photon does not by itself transform into many thermal photons. Minerals forged at the pressure and temperature conditions of Earth’s mantle can persist on the surface due to kinetic stability. Similarly, organic matter does not spontaneously combust in an oxygen atmosphere due to the high activation energy of combustion. We owe our existence to all of these metastable features of our universe.

A theist might hear that and say “all these barriers require intelligent selection.” But that would be like looking at that nonrandom sequence I wrote down earlier (1, 3, 5, 6, 2, 4, 4, 5, 3, 6, 1, 2…) and claiming that is random, while looking at the actually random sequence (1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3…) and claiming that requires intelligent design. They have it exactly backwards. A universe with no barriers to the rapid dissipation of its contents would have to be very acutely selected; not a universe with many such barriers. Because, of all randomly selectable universes, vastly most will randomly contain such barriers; you’d have to get extremely selective to choose out one that “just happened” to have none, like “just happening” to roll 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, and claiming “that’s” expected on random chance. It isn’t. Randomly selected worlds will randomly contain some kind of barriers (Wong-Hazen “frustrators”) that activate the Wong-Hazen Law. Just like randomly distributed matter after the Big Bang: that will far more likely lead to stars than “perfectly distributed matter” preventing star formation. Perfectly uniform distribution is like rolling 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1. Random distribution looks more like 1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3. It more likely contains everything and the kitchen sink.

Wong et al. also state a point that applies as well to cosmological “fine tuning” arguments: “Even if we could analyze a specific instance where one configuration enables a function, we cannot generally know whether other solutions of equal or greater function might exist in configuration space,” i.e. it is impossible to work out how many life-producing worlds there are in the configurable space of possible physical constants, because there are potentially infinite constants which can all vary at random. Theists usually hold all constants fixed but one, and calculate from there the number of viable worlds. But this is not how a randomization of constants-space would ever operate. There are, in fact, infinitely many values for the strength of gravity that exist in the same ratio as ours with some value for the strength of electromagnetism. Because, remember, the theist assumes both values can vary at random; and in any actual random selection, they will. There are also constants that have a value of zero in our universe that could have a nonzero value in another universe, rendering a different ideal ratio of gravitational to electromagnetic force strength than ours. And so on. We cannot calculate, much less explore, an infinitely variable configuration space. The fine-tuning argument thus can’t even get off the ground. (And that’s even apart from the fact that the fine-tuning argument is already self-refuting.)
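The point about jointly varying constants can be made concrete with a toy Monte Carlo (every number here is invented for illustration; this is not a real physical model): suppose “viability” requires the ratio of two constants to fall in a narrow band. Slicing along one constant makes viable worlds look rare; letting both vary reveals a whole band of viable pairs.

```python
import random

random.seed(5)

def viable(g, e):
    # Toy criterion (invented for illustration): "life" requires the ratio
    # of two constants to fall in a narrow band around 1.
    return 0.99 < g / e < 1.01

TRIALS = 100_000

# Theist-style estimate: hold the second constant fixed, vary only the first.
one_dim = sum(viable(random.uniform(0, 10), 1.0) for _ in range(TRIALS)) / TRIALS

# Joint randomization: both constants vary; every value of e has its own
# matching band of viable g values, so viable pairs occur everywhere.
two_dim = sum(viable(random.uniform(0, 10), random.uniform(0.1, 10))
              for _ in range(TRIALS)) / TRIALS

print(f"viable fraction varying one constant: {one_dim:.4f}")
print(f"viable fraction varying both:         {two_dim:.4f}")
```

Even in this crude two-constant toy, the one-dimensional slice understates the measure of viable worlds, and with infinitely many constants (including ones set to zero in our universe) the full configuration space is not even calculable, which is the text’s point.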

All we can say for sure is that in any random selection of possible existence-states (like configurations of atoms of gas in a tank), most by far will be quasi-infinite multiverses that explore that information space for us, rendering the existence of life-producing configurations near certain. No God required. Or likely. Because a “single universe” (or none) is an extremely narrow selection from among all possible options, and thus in fact the least likely to occur by chance; just like a configuration of gas in a tank whereby all the atoms randomly collapse into a singularity. Even if this exploration of possibility space takes place over time, that would conform to the Wong-Hazen law: because more tries = higher rates of escalating complexity; indeed, eternal inflation can itself be a random-walk pathway inevitably to our universe.

But more important than all this is how all this relates to the evidence. Because it is, in the end, evidence that decides the probability of theism vs. atheism…

The Dittmar Argument

Now to bring this all back to the Dittmar-Lameiro debate. To give you a clearer sense of why Dittmar’s argument is immune to Lameiro’s rebuttal, you need to understand two things: Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them and The Argument from Specified Complexity against Supernaturalism. The overwhelming empirical trend has been that mindless natural causes underlie everything; never once have we found a supernatural cause of anything. This keeps the prior probability extremely high in favor of a Wong-Hazen-style explanation of observed complexity and order in nature. Likewise the overwhelming empirical trend has been that every complexity arises out of the assembly and interaction of ever-simpler things, never the reverse, exactly as Wong-Hazen predicts.

Lameiro has no response to this. He avoids it with semantic legerdemain instead. For example, he wants to insist God is “simple,” but to get there he uses bogus measures of simplicity (like counting God’s geometric parts!), ignoring the fact that only one kind of simplicity matters here: informational. And God is informationally maximally complex; not simple. The mind of a worm is far simpler than God’s. And a stone is far simpler than a worm. And an electron is far simpler than a stone. What underlies the electron, therefore, should be simpler still, and not suddenly, inexplicably, maximally complex (it’s thus significant to note, by the way: nothing can be simpler than absolutely nothing).

This is Dittmar’s point, and Lameiro never really addresses it, except with false assertions (like “God is simple”), arrived at by bogus arguments (like choosing an irrelevant measure of complexity and hiding from the pertinent one). This all illustrates the difference in methodologies between science and theology outlined in the Nieminen-Mustonen thesis: Dittmar is formulating a falsifiable hypothesis, and demonstrating it is empirically confirmed; Lameiro is formulating an unfalsifiable hypothesis, and in its defense avoiding all pertinent evidence. Lameiro also appeals to his own inexpert intuitions and rests on a litany of undefended assertions as if they were established facts; while Dittmar appeals to competently and independently confirmed scientific facts, holding strictly to what is actually in evidence, rather than conjectures masquerading as facts.

The hypothetico-deductive method is erosive of all bullshit like the god-hypothesis. Rather than making excuses for why your theory can still be true despite all the evidence against it, a genuinely truth-finding procedure is to sincerely ask what your theory predicts (what it directly entails) and what its best competitor predicts, and then go and look and see which one’s predictions come to pass. This procedure must have a real and substantive chance of falsifying your theory (and the other); it can’t be weak or gamed or corralled with defensive excuses. There must be something meaningfully different between what your theory predicts and what the alternative does; and the only alternative that matters here is the steel man of your competition, not its straw man. Theology flees in terror to the straw men, finds them wanting, and praises Jesus. Science has the courage to face the steel man, head on; and that is why it alone (and methods akin) discovers the truth.
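
The logic of such a differential test can be put in Bayesian terms: what matters is the ratio of how strongly each hypothesis predicts the evidence we actually observe. A minimal sketch, with illustrative likelihoods of my own choosing (not figures from any of the papers discussed):

```python
# Posterior odds via the Bayes factor:
#   posterior odds = prior odds x P(evidence | H1) / P(evidence | H2)
def posterior_odds(prior_odds, p_evidence_given_h1, p_evidence_given_h2):
    return prior_odds * (p_evidence_given_h1 / p_evidence_given_h2)

# Hypothetical numbers: if mindless assembly predicts the observed record
# strongly (0.9) and intelligent design predicts it weakly (0.01), then
# even starting from generous even prior odds of 1:1, the posterior odds
# end up roughly 90 to 1 in favor of mindless assembly.
print(posterior_odds(1.0, 0.9, 0.01))
```

The point is procedural, not numerical: a theory that forbids nothing (because every outcome can be excused) has a likelihood gamed toward 1 for any evidence, which is exactly the move the hypothetico-deductive method disallows.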

And here in the Wong-Hazen proposal we have a definite differential prediction between intelligent and nonintelligent organizing forces. Wong-Hazen predicts that all observed complexities in the universe will correspond to “Chaos + Death + Time = Order.” So, we should expect complexity to slowly evolve from simpler components over extremely long periods of time and through graduated steps of destructive environmental selection; we will see inevitable selection (“killing off” what can’t survive) and leveraging (subsequent steps of survival building on prior steps of survival); and we should expect to see an extraordinarily large, messy, and random configuration-space being explored, with vastly more death and failure than successful complexity development. In their words, we should expect “component diversity, configurational exploration, and selection,” and “large numbers” of all three.
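
That three-part recipe (diversity, exploration, selection) is easy to demonstrate in miniature. Below is a toy model of my own devising, not the paper’s actual mathematics: genomes are random bit-strings, “function” is just the count of 1-bits, mutation explores configuration-space, and culling the less functional half each generation plays the role of “death.” Blind variation plus blind selection ratchets function upward with no designer anywhere in the loop:

```python
import random

def step(population, mutation_rate, rng):
    """One Wong-Hazen-style cycle: explore configurations, then cull."""
    # Configurational exploration: every genome spawns a mutated copy.
    offspring = []
    for genome in population:
        child = [bit ^ 1 if rng.random() < mutation_rate else bit
                 for bit in genome]
        offspring.append(child)
    # Selection ("death"): only the fitter half of parents + offspring survive.
    pool = population + offspring
    pool.sort(key=sum, reverse=True)   # "function" here = count of 1-bits
    return pool[:len(population)]

def run(n_genomes=30, genome_len=40, generations=150, seed=1):
    rng = random.Random(seed)
    # Component diversity: start from random, low-function configurations.
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(n_genomes)]
    best = [max(sum(g) for g in population)]
    for _ in range(generations):
        population = step(population, mutation_rate=0.02, rng=rng)
        best.append(max(sum(g) for g in population))
    return best

history = run()
print(history[0], history[-1])   # blind selection ratchets function upward
```

Note the two signatures the text predicts: vastly more death than success (half of every generation is discarded), and leveraging (each generation builds on the survivors of the last, so peak function never declines).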

And lo, as Dittmar points out, this is what we observe: the components and contents of the universe began capacious but simple, as a single, simple inflaton decayed into subatomic particles that began to diversify until atoms could form, then mostly just hydrogen and helium; but slowly, through random distribution and exploration of possibility-space, stars formed and leveraged these elements into more and more complex elements; then slowly, through random distribution and exploration of possibility-space, the resulting detritus formed swirls of chaotic dust that slowly, through random distribution and exploration of possibility-space, formed countless different configurations of planets and moons; which by sheer numbers inevitably, randomly, included habitable environs; and then slowly, through random distribution and exploration of possibility-space, self-replicating molecules were struck upon; and then slowly, through random distribution and exploration of possibility-space, life evolved; and thence into us.

This is not how intelligent design operates. This is conspicuously how things proceed in the absence of intelligent design. Because the one thing that distinguishes ID from mindless natural processes is the ability to explore options in mental space: most of the exploration of possibility-space, and most of the selection, takes place in thought, and then an outcome is realized. So, if an intelligent agent were to make a universe, they’d just make one. There’d be no long random exploration of options in physical space, letting natural selection kill off its way to sustainable results. Instead of eons of slow stellar development with multiple birth-and-death stages of building ever larger elements to get the heavy elements necessary for life, an intelligent agent would just make the elements needed. You’d just get a sun. You’d just get Earth. Life would happen immediately; no evolution. Designers simply make the things they want, having done all the messy exploration and selection shit in their minds. That is, in fact, the only meaningful difference between intelligent and nonintelligent creation.

This is why a literal reading of the Genesis account of creation is closer to what a God hypothesis predicts than what we actually found—and thus why that’s what early theists thought of. They were correctly deducing that an intelligent designer just makes things. Per Paley, you just “get a watch” rather than watching one grow slowly through chaotic eons of natural selection; you just “get stars,” rather than watching them emerge inevitably from eons of chaotic building processes; you just “get complex molecules,” rather than watching them take eons to slowly evolve through stellar synthesis; you just “get a planet,” rather than watching it emerge from blind forces acting on an accretion disk over chaotic eons, amidst untold trillions of failed planets emerging randomly from that same process. And so on.

Thus, with ID you get an instant sun, Earth, and biosphere, all to plan. Not billions of years of random fucking around with a hodgepodge of stars and planets, and billions of years of random fucking around with bacteria and algae. There would simply be people. And we’d have confirmed that by now. There are a dozen different physical ways we could confirm our planet and the rest of the universe to be basically six thousand years old—if it actually were. Likewise we could have confirmed that all life arose at pretty much the same time, had it done so; instead we found the opposite. We could also prove everyone was descended from the same single man and woman, were that actually the case. It wasn’t. Sure, maybe the authors of Genesis were idiots, and figured wrong what exactly God would do. But whatever a God would do, it would correspond to the same functional model: immediate creation, skipping steps—all the steps God used his mind (his intelligence) to skip. If God skipped no steps, if he just randomly made things and watched what popped out, he did not intelligently design anything.

The fact that all the evidence conforms to the predictions of the mindless Wong-Hazen process, and thus matches what we expect if existence were random and not designed, falsifies the god hypothesis. It’s done for. And there is no way “around” this. You can’t claim “but God wanted to make the universe look exactly like it would look if there was no God,” because (a) you don’t know that (you just made it up; it’s an excuse you are inventing to avoid rather than address the evidence; it is, literally, bullshit—and that is the difference between theology and actual methods of discovering the truth: one depends on mountains of bullshit; the other clears them away), (b) it’s not honestly likely (so you just made your hypothesis less rather than more likely), and (c) it is self-contradictory to claim that the absence of any intelligence being involved (zero skipped steps = zero intelligent involvement) is evidence of intelligence being involved. You are at that point just trying to bullshit your way out of the consequences of the evidence, rather than following that evidence to the truth.

The same follows for every other apologetic tactic you might want to cling to in desperation. It’s all just excuse-making. None of it empirical. None of it capable of confirming your hypothesis against all the evidence refuting it. Gods are not simple; they are maximally complex. Gods are not plausible; supernatural powers make next to no sense. And Gods are not in evidence; they are, to the contrary, conspicuously absent everywhere we look. But what does explain everything—eerily well, even the weirdest shit about existence—is randomness.

Randomly selected universes, with randomly distributed shit—including randomly distributed “frustrators”—are guaranteed to produce a universe relevantly like ours. This explains the universe’s bizarre age, bizarre size, its bizarre scale of lethality, its bizarre scale of wasted content: it all conforms to the prediction of the Wong-Hazen Law. A large, randomized, eternally explored configuration-space will not only generate our world, it will explain all its weird features. This cannot be escaped by contradictorily insisting God used none of his intelligence when “intelligently” designing everything. The far more probable fact is: God had nothing to do with it.

§
