A new law of nature has been fleshed out and proposed in a research paper by Wong, Cleland, Arend, and Hazen, a theory I will label for convenience the Wong-Hazen thesis. To read the original paper, see “On the Roles of Function and Selection in Evolving Systems,” PNAS 120 (8 July 2023). To read a good lay summary, see Will Dunham’s treatment for Reuters in “Scientists Propose Sweeping New Law of Nature, Expanding on Evolution.”
The debate over theology isn’t at all on their radar. But it’s important to explain how what they are proposing closes a gap that atheists have long suspected, and even argued, is filled by something like this, and why that makes theism even more improbable than it already was. Curiously, it hits upon topics that have recently been fashionable: the origin and causes of order and organization in the universe; and what “the first cause” (in whatever sense you want) is starting to look like from an empirical point of view (the point of view theists are loath to adopt, because it goes badly for them).
Some Recent Context
This all reminds me of the fundamental difference between the foundational methodologies of science vs. theology, and why science discovers knowledge and makes real progress, and theology doesn’t: science is empirical, objective, integrative, and measures confidence in respect to surviving serious falsification tests; whereas theology is armchair, anecdotal, and cherry-picking, and evades real falsification tests. This distinction was recently spelled out in another paper, by Nieminen, Loikkanen, Ryökäs, and Mustonen, “Nature of Evidence in Religion and Natural Science,” Theology and Science 18.3 (2020), which I’ll refer to for convenience as the Nieminen-Mustonen thesis. Ironically, an upset theist who tried refuting them in the next volume (19.3; 2021) ended up validating their thesis by using every tactic they had just criticized as invalid procedure relative to the real operations of science, all in a vain effort to emulate the appearance of adopting objective procedures (“Bias in the Science and Religion Dialogue? A Critique of ‘Nature of Evidence in Religion and Natural Science'”).
The distinction, and its importance, will become apparent by the end of my analysis here. But this all relates to a recent “debate” (if you can call it that) between German philosopher Volker Dittmar and inept internet theist Jose Lameiro on Germany’s Quora, where Dittmar answered the question “Kann das Universum ohne eine höhere Intelligenz entstanden sein?” (“Can the universe have come into being without a higher intelligence?”). I won’t fisk these authors’ remarks. They are a mixed bag, but Dittmar is the one who comes out as mostly correct: his point, which I have long made myself (and which Lameiro’s reply avoids answering, by throwing up a hodgepodge of apologetical tropes instead), is that we observe complex things always arise from simpler things, a trend nixing God (who is maximally complex) from being the “first cause,” or even a cause at all. Once we clear aside all the side-points, distractions, and gobbledygook that Lameiro bombs his comments with, the most Lameiro has to say as far as actually addressing Dittmar’s argument is that information cannot simply arise but must come from somewhere, i.e. the universe can never be more complex than its first cause, necessitating a “God cause.” Because you can never get out more than was already put in, as he puts it.
This is all false. And the Wong-Hazen thesis demonstrates why it is false. It thus refutes the only pertinent argument Lameiro made against Dittmar’s point. The rest of Lameiro’s argument is boilerplate, and thus already refuted in my debate with a far more competent theist, Wallace Marshall, in respect to cosmology, such as Lameiro’s inept understanding of the mathematics of infinity, or the science of cosmology, or why God is not simple but informationally complex. See also my recent discussions of many of the same kinds of points Lameiro (albeit confusedly) makes in Why Nothing Remains a Problem and Is Science Impossible without God? and The Argument to the Ontological Whatsit.
But here I will address the central point of Volker Dittmar: that, empirically, non-intelligent complexity only ever arises from ever-greater simplicity; and lo, that is what we observe in the universe. Apart from human action (which can intelligently skip steps), all observed complexity is built out of ever-simpler things (and that, ultimately, even includes human action: as our intelligence also arose out of ever-simpler things). This makes God the least likely explanation of the universe—because a god is informationally complex, and, being God, would have skipped steps. He also would have required steps to “build” him in the first place, just as was required for us; but I’ll set that aside today and focus instead on the steps he would skip, and why that matters. Because, instead, what we see is exactly what we’d expect to see if there is no God, which would be a strange choice for a god to make. This is a universal problem for theism. And Wong-Hazen now gives us a rigorous scientific reason why.
The Wong-Hazen Proposal
Wong et al. propose a new law of nature, which they dub the “Law of Increasing Functional Information.” They give it a mathematical formalism. And they show how it explains such disparate phenomena as the evolution of life, the evolution of stars, even the crystallization of minerals. In fact, it can be shown to explain all natural increases in complexity, from the condensation of atoms after the Big Bang to the distillation of all the elements on the Periodic Table. It might even play a linchpin role in explaining the Big Bang itself, as it would certainly apply to most cosmological theories yet in play, although these authors do not directly propose this. Of course, this is all now just at the proposal stage; whether it gets struck down or bolstered and thus becomes an established law of physics awaits further scientific research. But it has immediate empirical verification, and elegantly explains quite a lot in a rather straightforward way.
The gist of this new law is, as I will colloquially put it, “Chaos + Natural Selection + Time = Order,” which can even be put as “Chaos + Death + Time = Order,” since what Wong et al. formalize as “selection for function” (my “natural selection”) means simply that nonfunctional outcomes die out, leaving only functional outcomes; and this death is inevitable, by virtue of simply being nonfunctional, because they define function as, in effect, that which has the capacity to stick around (and thus “not die out”). And since this is itself inevitable—in all possible worlds, some organizations of things will, by their own inherent attributes in interaction with their environment, stick around, while others will fail or die out—we don’t need to specify “death” in the equation, as that is entailed by “time.” So really, the Wong-Hazen law is “Chaos + Time = Order.” They use different terms and more formal definitions and equations and metrics than I am using here; I’m just translating the gist of things in ready-to-grasp language.
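To make that gist concrete, here is a minimal toy simulation of “Chaos + Death + Time = Order” (my own illustration, not the authors’ formalism): random configurations keep being generated, a crude stability score of my own devising stands in for “function,” anything not stable enough simply fails to persist, and the survivors, randomly tinkered with, seed the next round. Every function name and parameter here is an arbitrary choice for the sketch.

```python
import random

# A toy rendering of "Chaos + Death + Time = Order" (my gloss, not the paper's formalism).
# "Function" is stood in for by a crude stability score (the longest run of identical
# symbols); configurations that are not stable enough fail to persist ("die"), and the
# survivors, randomly tinkered with, seed the next round (leveraging).

def stability(config):
    best = run = 1
    for a, b in zip(config, config[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def mutate(config, rate=0.1):
    return ''.join(random.choice('01') if random.random() < rate else c for c in config)

def random_config(length=30):
    return ''.join(random.choice('01') for _ in range(length))

random.seed(1)
survivors, threshold = [], 7
for t in range(12):                                            # Time
    pool = [random_config() for _ in range(2000)]              # Chaos
    pool += [mutate(s) for s in survivors for _ in range(5)]
    survivors = [c for c in pool if stability(c) >= threshold] # Death
    mean_chaos = sum(map(stability, pool)) / len(pool)
    mean_kept = sum(map(stability, survivors)) / len(survivors)
    print(f"round {t:2d}: pool mean stability {mean_chaos:.1f} -> survivor mean {mean_kept:.1f}")
```

Run it and the surviving pool is always far more “ordered” than the raw chaos it was sifted from, even though nothing in the code orders anything except persistence itself.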
This means that their Law will govern all possible worlds, just like Thermodynamics and Mathematics (see All the Laws of Thermodynamics Are Inevitable and All Godless Universes Are Mathematical), and therefore is a necessary property of any existence, and therefore requires no further explanation. You don’t need to posit a God, or any cause at all, for why these laws manifest and govern the worlds they do; just as you don’t need anything to explain why spacetimes obey the laws of geometry: they cannot not obey the laws of geometry; that’s a logical impossibility. And that means these kinds of laws are a necessary being, in the subordinate sense that, though the worlds they describe might not be necessary beings, they themselves are necessary beings once any spacetime world exists (see, again, The Argument to the Ontological Whatsit). They are therefore not brute facts or properties begging explanation. They are self-explaining; they are caused by existence itself. Once there is a spacetime, necessarily there is geometry—and the Laws of Thermodynamics, and the Wong-Hazen Law (just as Victor Stenger suspected for all laws of physics, as articulated in The Comprehensible Cosmos).
Thus, the Wong-Hazen effect couples with, for example, the Second Law of Thermodynamics, errantly regarded as a law stating that everything tends toward disorder (that’s not really how that law operates), to produce an opposite effect, a natural tendency toward order. Which also requires no mysterious design or forces to explain; it literally follows automatically from any system of any objects in any spacetime (see, again, All the Laws of Thermodynamics Are Inevitable). The actual Second Law states that the net entropy (disorder) of a (closed) system can never decrease and will tend to increase; but because this is a statement about the system as a whole, not the subordinate parts of it (hence the relevance of a system being “closed” in its equation), that law does not say anything about what kind of organization and order can arise within the system. And in fact we now know dissipative systems trade entropy for order: order and organization arise naturally in a system by “burning entropy” to produce it. The result is that while pockets of order and complexity naturally increase within the system, the total system becomes more disordered, e.g. crystals form naturally (order), but the amount of available heat dissipates away into the background (disorder). More, and more complex, things arise naturally by the Wong-Hazen Law, as more of the total energy of the system becomes dissipated and unusable by the Second Law. The two laws are complementary and in fact fuel each other. Inevitably.
As Wong et al. put it:
The manifest tendency of evolving systems—especially nonliving ones, such as those involved in stellar, mineral, and atmospheric evolution … —to become increasingly ordered with the passage of time seems to stand in contrast to the [similar] temporally asymmetrical character of the second law of thermodynamics, which characterizes natural phenomena as becoming increasingly disordered with the passage of time. One of the distinguishing features of our proposal is formulating a universal law for both living and nonliving evolving systems that is consistent with the second law of thermodynamics but may not follow inevitably from it.
They are thus discovering the missing other side of the coin, as it were.
How This Works
A lot of this has to do with the power of randomness, the opposite of order or intelligence. I think theists fail to grasp how powerful randomness is as a creator. Possibly because they have a naive understanding of probability theory (see Three Common Confusions of Creationists). They think, like many people erroneously do, that “randomness” means “uniform disorder,” but “uniform disorder” would require a highly non-random distribution. Randomness is actually extremely powerful: it is chock full of information (in statistics, random sampling from a system produces acute knowledge about that system; the digits of pi, read with (say) base-ten ASCII, very probably include all possible books that have ever and even could ever be written, assuming, as is strongly conjectured but not yet proven, that pi is a normal number); and it is an inherent ordering force (the more random events there are, the more likely any conceivable organization of them will arise). Randomness is what powers the Problem with Nothing and causes the Laws of Thermodynamics. It’s what created us out of primordial soup. It’s what built our minds. Randomness is powerful. And it drives the Wong-Hazen process.
Imagine a six-sided die rolled a thousand times, and it never rolls the same number until all numbers have been rolled; for example, 1, 3, 5, 6, 2, 4, 4, 5, 3, 6, 1, 2, etc. That looks random, but in fact it is not: it is highly ordered. Some “law” or “force” would have to be preventing other results, like 1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3 (and I actually just rolled that sequence right now on my desk). The naive think seemingly ordered sequences like the one I just randomly rolled must be nonrandom, when in fact they are more random than the forced sequence I wrote down earlier, where the die can’t roll a number again until all its numbers have been rolled. That requires an ordering force (something “stopping” the die from rolling certain numbers until certain conditions are met). Whereas a truly random process will inevitably generate order (notice the run of 5’s in my random sequence); and in fact ever more complex order, the larger the number of randomized events. This is how life started. This is probably what explains observed fine-tuning. Lots of random events entails the emergence of order.
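You can check this intuition with a few lines of code (my own quick illustration, with arbitrary numbers): simulate lots of twelve-roll sequences and count how often genuinely random rolls contain a little run of repeats, versus how often they hit all six numbers before any repeat.

```python
import random

# A quick check of the point above (my own illustration, arbitrary numbers): in genuinely
# random die rolls, repeats and little runs are the norm, while "every number appears
# before any number repeats" is the rare, special-looking pattern.

def has_run(rolls, length=3):
    run = 1
    for a, b in zip(rolls, rolls[1:]):
        run = run + 1 if a == b else 1
        if run >= length:
            return True
    return False

random.seed(2)
trials, runs, tidy = 100_000, 0, 0
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(12)]
    runs += has_run(rolls)            # a run of 3+ of the same number
    tidy += len(set(rolls[:6])) == 6  # all six numbers appear before any repeat

print(f"contains a run of three or more: {runs / trials:.1%}")  # common (about 1 in 5)
print(f"all six numbers before a repeat: {tidy / trials:.1%}")  # rare (about 1.5%)
```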
Add to this fact leveraging—the ability of order, once arising at random, to build on itself over time, creating even more organized complexity—and you have Wong-Hazen. And all you need to get leveraging is selective death: randomly organized processes that by chance have the properties ensuring their endurance long enough to build on or build something out of (like, say, most molecules), will be “selected” by that very fact to stick around, while other outcomes die off, leaving room (and material) for continued evolution, by that same process (e.g. the off-cast detritus of stars, a.k.a. heavy elements, becomes planets). Natural selection thus explains everything, not just life. The only thing particular to life is that it sticks around now as a stable complex molecule (DNA) that is able to preserve a considerable amount of information across time, while randomly tinkering with it (mutation). Thus, life can build from a primeval proto-cell into a mammoth or a man. But even inorganic objects can preserve some information over time so as to leverage up into greater and greater complexity: the first stars burned helium and hydrogen into heavier elements; which by sticking around carried information on into other processes, like the building of planets, and eventually life itself.
Information is preserved in the form of environments, for example: when we say natural selection produced evolution of life on Earth, what we mean is that information from the environment (the Earth’s atmosphere and geology and climates, and eventually its ecosystems) gets transferred into living organisms, by selection: what “can survive” is based on information in the environment itself. By inevitably killing things, that information gets transferred into DNA (by being selected to stick around). No explanation is needed for how or why this happens: death (killing what can’t survive) is an inevitable outcome of the system. There is no mysterious force involved in choosing what dies; what dies is chosen by what’s lethal. The environment itself does this automatically, without any intelligent intervention. What Wong et al. have found is that this applies to every other leveraged increase in complexity in cosmic history, from stars and galaxies to the periodic table and Earth itself.
And the origin of all this information is randomness. No organized intelligence required.
To see what I mean, follow their analysis of stellar nucleosynthesis.
The Big Bang erupts randomly, leaving a disordered but ultra-hot soup (where most of its low entropy comes from the mere heat density of that soup and not its organization; it actually lacks much in the way of nonrandom organization). As that soup expands and cools (its heat thereby becoming dissipated, increasing its entropy), that random chaos leads to some pockets of matter being denser than others (by simple chance; cast density around completely at random, and inevitably some pockets will be denser than others), thereby collapsing inevitably into stars. All the information here comes from the random distribution itself. It does not come from a mind.
Stars then stick around for a while because (and for as long as) they happen to fall into stable processes; while less stable “collapsing” events don’t stick around. Stars are thus not intelligently planned or designed; they are just the accidental outcome of being stable. Some things by chance accident just have the property of being able to endure, against time and other forces that would destroy them. Again, this is expected by accident: cast matter into random forms, and just by chance some (like stars) will be more enduring than others (like mere shapes in intergalactic dust clouds). Stars thus form randomly (not by design), and then are “naturally selected” to endure by their accidental properties (like density and atomic interactions), while everything else gets “killed off” (intergalactic dust clouds being, in effect, the corpses of failed, aged-out, killed, or never-were stars). Then these stars inevitably burn light elements into heavier elements, again increasing complexity in the universe, all while at the same time increasing entropy, by burning concentrated heat off into space.
Why do heavy elements stick around? Because they can. Those that can’t, don’t. Think not only of the artificial elements on the Periodic Table that can’t form naturally or wouldn’t survive long enough to do much even if they did, but also of all the different configurations of quarks that have a disastrously short lifespan. The reason all matter is made today out of electrons, protons, and neutrons, is that those are what by chance accident had the randomly-selected properties of being stable and sticking around. We aren’t made of pions simply because they are selected away (killed off) by lacking the properties needed to stick around. There are tons of randomly assembled forms of matter; by chance accident we expect some to be surviving and others not. Most are not. But inevitably, some will be. And so that’s what we and our planet and household goods are made of. All because of randomness.
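A throwaway simulation (mine, not the paper’s) makes the same point about stability: hand random decay lifetimes to a zoo of made-up “species,” let time pass, and the census of what remains is automatically dominated by whatever happened to be stable, with nothing choosing the survivors except survival itself.

```python
import math
import random

# Throwaway illustration (mine, not the paper's): random "species" get random decay
# lifetimes; after enough time, what remains is automatically dominated by whatever
# happened to be stable. Nothing selects the survivors except survival itself.

random.seed(3)
species = [
    {"name": f"species-{i}", "lifetime": 10 ** random.uniform(-6, 6), "count": 1_000_000}
    for i in range(50)
]

t = 1000.0  # elapsed "time," in the same arbitrary units as the lifetimes
for s in species:
    s["count"] = int(s["count"] * math.exp(-t / s["lifetime"]))  # simple exponential decay

survivors = sorted((s for s in species if s["count"] > 0), key=lambda s: -s["count"])
for s in survivors[:5]:
    print(f"{s['name']}: lifetime {s['lifetime']:.2g}, remaining {s['count']}")
print(f"{len(survivors)} of {len(species)} species are still around at t = {t:g}")
```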
As Wong et al. put it:
Thus, stellar evolution leads to new configurations of countless interacting nuclear particles. Inexorably, the system evolves from a small number of elements and isotopes to the diversity of atomic building blocks we see in the universe today.
And hence on to, as well, an inevitably resulting geological evolution…
[T]he average chemical and structural information contained in minerals increases systematically through billions of years of planetary evolution. Thus, as with stellar nucleosynthesis, mineral evolution occurs as a sequence of processes that increase the system’s diversity in stages, each building on the ones that came before.
When information is preserved across a random process, by merely possessing the randomly-selected capacity to survive and stick around, it can accumulate, and thus inevitably increase order, diversity, and complexity. Inevitably. No intelligence required. To the contrary, as with biological evolution, you would need an intelligence to intervene to stop this from happening, not to produce it.
As again Wong et al. put it (emphasis mine):
Life, though distinct in the specifics of its evolutionary mechanisms, can be conceptualized as equivalent to the previous examples of nucleosynthesis and mineral evolution in the following way: Whether viewed at the scale of interacting molecules, cells, individuals, or ecosystems, biological systems have the potential to occur in numerous configurations, many different configurations are generated, and natural selection preferentially retains configurations with effective functions.
When you have a ton of random stuff, “the potential” logically necessarily exists for it to end up in numerous configurations. Add time, and random action will therefore logically necessarily produce “many different configurations” of that stuff. Throw those configurations all in together in a giant random mess, and some will by chance have the right properties to “stick around,” while the others won’t; indeed most things won’t: the vast majority of “stuff” and its random configurations will be destroyed or broken up or otherwise “killed off” (which is why almost all the contents of our universe are lethal, not conducive, to life; it’s mostly empty or scattered junk). This sifting effect thus results in the emergence of order, organization, and complexity (stars from primordial matter, planets from stellar dust, organic molecules from random chemical interactions, self-replicating molecules from a lot of that random mixing, then natural selection on up the ladder of life), amidst an ever-expanding background of volatile chaos (a vast and growing dust- and detritus- and radiation-filled vacuum). Wong-Hazen predicts this will occur in nearly every possible world where there is “enough stuff” for this randomization to have such effects.
And it is indeed randomness that ensures this law takes effect and produces the predicted observations (emphasis again mine):
These three evolving natural systems differ significantly in detail. Stellar nucleosynthesis depends on the selection of stable configurations of protons and neutrons. Mineral evolution relies on selection of new, locally stable arrangements of chemical elements. Biological evolution occurs through natural selection of advantageous heritable traits. Nevertheless, we conjecture that these examples (and many others) are conceptually equivalent in three important respects:
- Each system is formed from numerous interacting units (e.g., nuclear particles, chemical elements, organic molecules, or cells) that result in combinatorially large numbers of possible configurations.
- In each of these systems, ongoing processes generate large numbers of different configurations.
- Some configurations, by virtue of their stability or other “competitive” advantage, are more likely to persist owing to selection for function.
In other words, each system evolves via the selection of advantageous configurations with respect to systemic persistence.
And this suggests an actual natural law is at work here, explaining the fact that evolving systems exist across the entire cosmos and at all scales, and the fact that “evolving systems are asymmetrical with respect to time” and “they display temporal increases in diversity, distribution, and/or patterned behavior.” Hence “these three characteristics—component diversity, configurational exploration, and selection,” which they “conjecture represent conceptual equivalences for all evolving natural systems,” might well “be sufficient to articulate a qualitative law-like statement that is not implicit in the classical laws of physics.”
In other words, once you have those three things, you have that leveraged increase in complexity all the way up over time. And random chance can explain all three. Component diversity, if selected at random, will always be high—and for, ironically, the same reason as the Law of Entropy: there are far more randomly selectable states that are highly component-diverse than pervasively simple. Add time, and configurational exploration is then the statistically inevitable product of what is called a random walk. And selection is the inevitable outcome of any randomly configured system: vastly more randomly accessible configurations will be selective in what they kill and allow to survive, than configurations that allow everything or nothing to stick around.
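The “configurational exploration” part is easy to picture in code (again my own toy, not the paper’s math): let a configuration of a few binary components change one component at random per tick, and just count how many distinct configurations have ever been visited. The tally only grows with time.

```python
import random

# A crude picture of configurational exploration by random walk (my own toy): a
# configuration of N binary components flips one randomly chosen component per tick;
# the set of distinct configurations ever visited can only grow as time passes.

random.seed(4)
N = 16
config = [0] * N
visited = {tuple(config)}
for t in range(1, 50_001):
    config[random.randrange(N)] ^= 1  # one random change to one component
    visited.add(tuple(config))
    if t % 10_000 == 0:
        print(f"after {t:6d} steps: {len(visited):6d} of {2 ** N} configurations visited")
```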
Their most concise statements of this new law of nature are:
- “Systems of many interacting agents display an increase in diversity, distribution, and/or patterned behavior when numerous configurations of the system are subject to selective pressure.”
Or more formally:
- “The functional information of a system will increase (i.e., the system will evolve) if many different configurations of the system are subjected to selection for one or more functions.”
Which, I observe, will happen to describe most randomly selected systems.
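For the quantitative side, Wong et al. build on the “functional information” measure from Hazen and colleagues’ earlier work, which (as I understand it) is just the negative base-2 logarithm of the fraction of all possible configurations that achieve at least a given degree of function. Here is a back-of-envelope sketch of that measure, using an arbitrary toy “function” of my own (the longest run of identical bits) purely for illustration.

```python
import math
from itertools import product

# Back-of-envelope sketch of functional information (per Hazen et al., as I understand it):
# I(Ex) = -log2 of the fraction of all possible configurations whose degree of function is
# at least Ex. The "function" used here (longest run of identical bits) is an arbitrary
# stand-in of my own, purely to make the numbers computable.

def function_degree(config):
    best = run = 1
    for a, b in zip(config, config[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def functional_information(threshold, length=16):
    configs = list(product('01', repeat=length))
    passing = sum(function_degree(c) >= threshold for c in configs)
    return -math.log2(passing / len(configs)) if passing else float('inf')

for ex in range(2, 13, 2):
    print(f"function >= {ex:2d}: I = {functional_information(ex):5.2f} bits")
```

The more demanding the function, the rarer the configurations that achieve it, and so the more functional information a system embodies by doing it; selection is what walks a system up that scale.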
The origin of order, organization, and complexity in nature therefore has no need of a God to explain it. The question of cosmogenesis, i.e. where “all the stuff” comes from, meanwhile, can be explained in much the same way: eternal inflation, for example, essentially describes an evolving Wong-Hazen process on an extra-cosmic scale; and if we need any more fundamental an explanation (such as for what started or manifests that eternal inflation, as opposed to something else), we have other random-selection hypotheses that work better, and are more in evidence, than theism (see, for example, The Problem with Nothing and Six Arguments That a Multiverse Is More Probable Than a God and Why the Fine Tuning Argument Proves God Does Not Exist).
If you are familiar with the arguments for the inevitability of the Law of Entropy from statistical mechanics, this will look familiar: randomly mixing atoms results in ever-more-probable states being selected. Every state, every organization, of atoms of gas in a tank has the same probability as every other; but most of those states are a chaos, very few are orderly; therefore, as you keep “rolling that die” it gets more and more disordered over time, as more disordered states are far more likely outcomes with each roll. Now imagine adding something into that tank, a “disruptor” or what Wong et al. call “frustrators,” things that cause selection to occur: rather than all possible states being equally likely, some now become very unlikely, by being “killed off,” leaving other states, which avoid dying out, to become unequally far more probable over time. Let’s say atoms of that gas suddenly become sticky. Now clumps of those atoms have a “survival” advantage over scattered chaoses of them—because they “stick around” (pun intended). This creates escalating order within the tank, even as the total entropy still keeps going up as well (since that clumping comes at a cost of increasing disorder everywhere else in the system). If such “frustrators” can arise by accident, due to a randomization of properties within the tank, then no intelligence is required for this resulting evolution of specified complexity within that tank. It will just happen of its own accord.
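That sticky-gas tank is easy to play with in code (my own sketch, with a made-up sticking rule standing in for the “frustrator”): particles wander a small grid at random, and any clumps that meet stick together and move as one from then on. Clumping escalates even though every individual move is random, simply because stuck configurations persist while scattered ones keep getting reshuffled.

```python
import random

# A sketch of the "sticky gas" thought experiment (my own toy rule, not the paper's):
# each clump takes one random step per tick; clumps that land on overlapping sites stick
# and move as one thereafter. Order (clumping) escalates with no ordering force in the
# code except persistence: stuck configurations survive contact, scattered ones don't.

random.seed(5)
SIZE, N = 25, 120
clumps = [{(random.randrange(SIZE), random.randrange(SIZE))} for _ in range(N)]

def step(clumps):
    moved = []
    for clump in clumps:
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        moved.append({((x + dx) % SIZE, (y + dy) % SIZE) for (x, y) in clump})
    merged = []
    for sites in moved:
        hit = next((m for m in merged if m & sites), None)
        if hit is not None:
            hit |= sites  # stuck: the two clumps now move as one
        else:
            merged.append(sites)
    return merged

for t in range(1001):
    if t % 200 == 0:
        print(f"t = {t:4d}: {len(clumps):3d} clumps, largest covers "
              f"{max(len(c) for c in clumps)} sites")
    clumps = step(clumps)
```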
This might be hard for a theist to understand, again because of their naive understanding of probability and randomness. As Wong et al. explain:
The wide diversity of materials in our universe are a result of these barriers [to a rapid dissolution of the whole]. The elements of the periodic table exist because light nuclei do not easily fuse to form iron, and many heavy nuclei are stable and do not decay. A visible photon does not by itself transform into many thermal photons. Minerals forged at the pressure and temperature conditions of Earth’s mantle can persist on the surface due to kinetic stability. Similarly, organic matter does not spontaneously combust in an oxygen atmosphere due to the high activation energy of combustion. We owe our existence to all of these metastable features of our universe.
A theist might hear that and say “all these barriers require intelligent selection.” But that would be like looking at that nonrandom sequence I wrote down earlier (1, 3, 5, 6, 2, 4, 4, 5, 3, 6, 1, 2…) and claiming that is random, while looking at the actually random sequence (1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3…) and claiming that requires intelligent design. They have it exactly backwards. A universe with no barriers to the rapid dissipation of its contents would have to be very acutely selected; not a universe with many such barriers. Because, of all randomly selectable universes, vastly most will randomly contain such barriers; you’d have to get extremely selective to choose out one that “just happened” to have none, like “just happening” to roll 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, and claiming “that’s” expected on random chance. It isn’t. Randomly selected worlds will randomly contain some kind of barriers (Wong-Hazen “frustrators”) that activate the Wong-Hazen Law. Just like randomly distributed matter after the Big Bang: that will far more likely lead to stars than “perfectly distributed matter” preventing star formation. Perfectly uniform distribution is like rolling 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1. Random distribution looks more like 1, 5, 5, 5, 5, 2, 2, 1, 2, 5, 6, 3. It more likely contains everything and the kitchen sink.
Wong et al. also state a point that applies as well to cosmological “fine tuning” arguments: “Even if we could analyze a specific instance where one configuration enables a function, we cannot generally know whether other solutions of equal or greater function might exist in configuration space,” i.e. it is impossible to work out how many life-producing worlds there are in the configurable space of possible physical constants, because there are potentially infinitely many constants, which can all vary at random. Theists usually hold all constants fixed but one, and calculate from there the number of viable worlds. But this is not how a randomization of constants-space would ever operate. There are, in fact, infinitely many values for the strength of gravity that exist in the same ratio as ours with some value for the strength of electromagnetism. Because, remember, the theist assumes both values can vary at random; and in any actual random selection, they will. There are also constants that have a value of zero in our universe that could have a nonzero value in another universe, rendering a different ideal ratio of gravitational to electromagnetic force strength than ours. And so on. We cannot calculate, much less explore, an infinitely variable configuration space. The fine-tuning argument thus can’t even get off the ground. (And that’s even apart from the fact that the fine-tuning argument is already self-refuting.)
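The co-varying-constants point is easy to see numerically with a toy Monte Carlo (all the numbers, ranges, and the “viability” rule below are made up by me purely for illustration): if what matters is only the ratio of two constants, then viable pairs turn up across wildly different absolute values of either constant, so there is no single “tuned” value to marvel at, only a relation that either constant can satisfy by co-varying with the other.

```python
import random

# Toy Monte Carlo rendering of the co-varying constants point (all numbers made up).
# Suppose "life-permitting" only constrains the RATIO of two constants, g and e, to lie
# near some target. Scanning g alone with e held fixed makes g look exquisitely tuned;
# sampling both shows viable pairs strewn across many orders of magnitude of g.

random.seed(6)
TARGET, TOL = 1e-36, 0.05  # pretend target ratio g/e and tolerance (made-up values)

viable_g = []
for _ in range(200_000):
    g = 10 ** random.uniform(-45, -25)  # both "constants" drawn over huge made-up ranges
    e = 10 ** random.uniform(-9, 9)
    if abs(g / e - TARGET) <= TOL * TARGET:
        viable_g.append(g)

print(f"viable pairs found : {len(viable_g)}")
print(f"smallest viable g  : {min(viable_g):.3g}")
print(f"largest viable g   : {max(viable_g):.3g}")
# The viable g values span many orders of magnitude: the "tuning" is a relation between
# constants, not a special value of any one of them.
```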
All we can say for sure is that in any random selection of possible existence-states (like configurations of atoms of gas in a tank), most by far will be quasi-infinite multiverses that explore that information space for us, rendering the existence of life-producing configurations near certain. No God required. Or likely. Because a “single universe” (or none) is an extremely narrow selection from among all possible options, and thus in fact the least likely to occur by chance; just like a configuration of gas in a tank whereby all the atoms randomly collapse into a singularity. Even if this exploration of possibility space takes place over time, that would conform to the Wong-Hazen law: because more tries = higher rates of escalating complexity; indeed, eternal inflation can itself be a random-walk pathway inevitably to our universe.
But more important than all this is how all this relates to the evidence. Because it is, in the end, evidence that decides the probability of theism vs. atheism…
The Dittmar Argument
Now to bring this all back to the Dittmar-Lameiro debate. To give you a clearer sense of why Dittmar’s argument is immune to Lameiro’s rebuttal, you need to understand two things: Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them and The Argument from Specified Complexity against Supernaturalism. The overwhelming empirical trend has been that mindless natural causes underlie everything; never once have we found a supernatural cause of anything. This keeps the prior probability extremely high in favor of a Wong-Hazen-style explanation of observed complexity and order in nature. Likewise the overwhelming empirical trend has been that every complexity arises out of the assembly and interaction of ever-simpler things, never the reverse, exactly as Wong-Hazen predicts.
Lameiro has no response to this. He avoids it with semantical legerdemain instead. For example, he wants to insist God is “simple,” but to get there he uses bogus measures of simplicity (like counting God’s geometric parts!), ignoring the fact that only one kind of simplicity matters here: informational. And God is informationally maximally complex; not simple. The mind of a worm is far simpler than God’s. And a stone is far simpler than a worm. And an electron is far simpler than a stone. What underlies the electron, therefore, should be simpler still, and not suddenly, inexplicably, maximally complex (it’s thus significant to note, by the way: nothing can be simpler than absolutely nothing).
This is Dittmar’s point, and Lameiro never really addresses it, except with false assertions (like “God is simple”), arrived at by bogus arguments (like choosing an irrelevant measure of complexity and hiding from the pertinent one). This all illustrates the difference in methodologies between science and theology outlined in the Nieminen-Mustonen thesis: Dittmar is formulating a falsifiable hypothesis, and demonstrating it is empirically confirmed; Lameiro is formulating an unfalsifiable hypothesis, and in its defense avoiding all pertinent evidence. Lameiro also appeals to his own inexpert intuitions and rests on a litany of undefended assertions as if they were established facts; while Dittmar appeals to competently and independently confirmed scientific facts, holding strictly to what is actually in evidence, rather than conjectures masquerading as facts.
The hypothetico-deductive method is erosive of all bullshit like the god-hypothesis. Rather than making excuses for why your theory can still be true despite all the evidence against it, a genuinely truth-finding procedure is to sincerely ask what your theory predicts (what it directly entails) and what its best competitor predicts, and then go and look and see which one’s predictions come to pass. This procedure must have a real and substantive chance of falsifying your theory (and the other); it can’t be weak or gamed or corralled with defensive excuses. There must be something meaningfully different between what your theory predicts and what the alternative does; and the only alternative that matters here is the steel man of your competition, not its straw man. Theology flees in terror to the straw men, finds them wanting, and praises Jesus. Science has the courage to face the steel man, head on; and that is why it alone (and methods akin) discovers the truth.
And here in the Wong-Hazen proposal we have a definite differential prediction between intelligent and nonintelligent organizing-forces. Wong-Hazen predicts that all observed complexities in the universe will correspond to “Chaos + Death + Time = Order.” So, we should expect complexity to slowly evolve from simpler components over extremely long periods of time and through graduated steps of destructive environmental selection; we will see inevitable selection (“killing off” what can’t survive), and leveraging (subsequent steps of survival building on prior steps of survival); and we should expect to see an extraordinarily large, messy, and random configuration-space being explored, with vastly more death and failure than successful complexity development—in their words, we should expect “component diversity, configurational exploration, and selection,” and “large numbers” of all three.
And lo, as Dittmar points out, this is what we observe: the components and contents of the universe began capacious but simple, as a single, simple inflaton decayed into subatomic particles that began to diversify until atoms could form, then mostly just hydrogen and helium; but slowly, through random distribution and exploration of possibility-space, stars formed and leveraged these elements into more and more complex elements; then slowly, through random distribution and exploration of possibility-space, the resulting detritus formed swirls of chaotic dust that slowly, through random distribution and exploration of possibility-space, formed countless different configurations of planets and moons; which by sheer numbers inevitably, randomly, included habitable environs; and then slowly, through random distribution and exploration of possibility-space, self-replicating molecules were struck upon; and then slowly, through random distribution and exploration of possibility-space, life evolved; and thence into us.
This is not how intelligent design operates. This is conspicuously how things proceed in the absence of intelligent design. Because the one thing that distinguishes ID from mindless natural processes is the ability to explore options in mental space; and thus most of the exploration of possibility-space and selection forces take place in thought, and then an outcome is realized. So, if an intelligent agent were to make a universe, they’d just make one. There’d be no long random exploration of options in physical space, letting natural selection kill off the way to sustainable results. Instead of eons of slow stellar development with multiple birth-and-death stages of building ever larger elements to get heavy elements necessary for life, an intelligent agent would just make the elements needed. You’d just get a sun. You’d just get Earth. Life would happen immediately; no evolution. Designers simply make the things they want, having done all the messy exploration and selection shit in their minds. That is, in fact, the only meaningful difference between intelligent and nonintelligent creation.
This is why a literal reading of the Genesis account of creation is closer to what a God hypothesis predicts than what we actually found—and thus why that’s what early theists thought of. They were correctly deducing that an intelligent designer just makes things. Per Paley, you just “get a watch” rather than watching one grow slowly through chaotic eons of natural selection; you just “get stars,” rather than watching them emerge inevitably from eons of chaotic building processes; you just “get complex molecules,” rather than watching them take eons to slowly evolve through stellar synthesis; you just “get a planet,” rather than watching it emerge from blind forces acting on an accretion disk over chaotic eons, amidst untold trillions of failed planets emerging randomly from that same process. And so on.
Thus, with ID you get an instant sun, Earth, and biosphere, all to plan. Not billions of years of random fucking around with a hodgepodge of stars and planets, and billions of years of random fucking around with bacteria and algae. There would simply be people. And we’d have confirmed that by now. There are a dozen different physical ways we could confirm our planet and the rest of the universe to be basically six thousand years old—if it actually were. Likewise that all life arose pretty much at the same time, rather than what we did find. We could also prove everyone was descended from the same single man and woman, were that actually the case. It wasn’t. Sure, maybe the authors of Genesis were idiots, and figured wrong what exactly God would do. But whatever a God would do, it would correspond to the same functional model: immediate creation, skipping steps—all the steps God used his mind (his intelligence) to skip. If God skipped no steps, if he just randomly made things and watched what popped out, he did not intelligently design anything.
The fact that all the evidence conforms to the predictions of the mindless Wong-Hazen process, and thus matches what we expect if existence were random and not designed, falsifies the god hypothesis. It’s done for. And there is no way “around” this. You can’t claim “but God wanted to make the universe look exactly like it would look if there was no God,” because, (a) you don’t know that (you just made it up; it’s an excuse you are inventing to avoid rather than address the evidence; it is, literally, bullshit—and that is the difference between theology and actual methods of discovering the truth: one depends on mountains of bullshit; the other clears them away), and (b) it’s not honestly likely (so you just made your hypothesis less rather than more likely), and (c) it is self-contradictory to claim that the absence of any intelligence being involved (zero skipped steps = zero intelligent involvement) is evidence of intelligence being involved. You are at that point just trying to bullshit your way out of the consequences of the evidence, rather than following that evidence to the truth.
The same follows for every other apologetic tactic you might want to cling to in desperation. It’s all just excuse-making. None of it empirical. None of it capable of confirming your hypothesis against all the evidence refuting it. Gods are not simple; they are maximally complex. Gods are not plausible; supernatural powers make next to no sense. And Gods are not in evidence; they are, to the contrary, conspicuously absent everywhere we look. But what does explain everything—eerily well, even the weirdest shit about existence—is randomness.
Randomly selected universes, with randomly distributed shit—including randomly distributed “frustrators”—are guaranteed to produce a universe relevantly like ours. This explains the universe’s bizarre age, bizarre size, its bizarre scale of lethality, its bizarre scale of wasted content: it all conforms to the prediction of the Wong-Hazen Law. A large, randomized, eternally explored configuration-space will not only generate our world, it will explain all its weird features. This cannot be escaped by contradictorily insisting God used none of his intelligence when “intelligently” designing everything. The far more probable fact is: God had nothing to do with it.
Packing for a move and very selfishly wishing you made recordings of these blog entries so I can listen while cleaning out and defrosting the deep freeze. Will circle back for what I’ve missed in a couple weeks. 😉 (I realize this is OT and will take no umbrage if it disappears.)
I sympathize.
Alas, I lack the time and equipment to do that, although there is the possibility a volunteer will in future. I’ll announce that if ever that does happen!
Until then, all we have are robots. You can try speak-text options. It’s not my voice, nor great, but it might suit?
I was just wishing I could read these on Kindle. I just finished a 1200 page “book” which was composed of Yudkowsky’s LessWrong blog posts. So much easier to read…. Well, if Richard is interested, I’d be happy to help create this. As for audio, in no time at all AI programs will have natural sounding voices down pat and converting things to audio will be cheap and easy.
Note:
You can read any blog article on kindle.
Also, some mobile devices have readers built in. For example, using iPad with Safari, to the left of the URL line is a button labeled “aA” which either automatically activates Reader Mode or supplies a dropdown menu with Reader as an option. This presents the text in a clean, more readable format, like reading a document. There are even settings you can adjust. Other devices and browser setups usually have some accessible equivalent.
And there are also RSS feed readers that essentially do the same sort of thing for entire websites.
Yes, understood, I do read these blogs on my Kindle Fire, but it’s awkward compared to reading a book (i.e. tabs and having to navigate).
For reading the 1200 page “book” of Yudkowsky’s LessWrong blog posts (whose introduction to Bayesian principles, in a weird way, is what led me here), it was much easier for a few reasons:
The blog posts were categorized, so there was no navigating around (although you still can, via the table of contents).
No wifi is required once the book is downloaded.
There were editorial introductions to the different sections explaining why they were grouped as they were. When I use my Fire to read your posts, I end up with a dozen open tabs so I can navigate to the next likely thing to read. Granted, using footnotes in Kindle leads to the same thing, but it’s still easier to navigate, save notes, etc.
With the Kindle, I can use the dictionary, something very necessary when reading your work (I’m not edumakated like y’all) and when you highlight a word in Kindle, not only does the dictionary pop up, the wiki article, if there is one, pops up, AND I can make highlights and notes for myself that get stored on the Kindle (I can also email them to myself).
The Fire allows me to read in the same font settings and such as I use for reading books.
Finally, the author gets paid, maybe not much, but more than zero… and it’s more visibility anyway. I guess if your KU book, Hitler Homer, doesn’t generate much KU money, then it’s likely not important, at least from a revenue standpoint. However, if it generates even a little, and makes it easier, and gives you more visibility, then it seems like something to consider, especially considering it would be the natural launching point for a smooth transition to an audio book of the same material (and an AI can reproduce your voice even today pretty well, so in a short time you could authorize yourself to be your own voice in an AI-produced book based on all these blogs).
Well, just thoughts. I continue to read the blog posts on my computer and reader, but having them together and grouped by subject would be a real advantage (at least to me).
Thank you. That’s unlikely to happen (return on investment is too low even just in time, for material already free to the world). But I appreciate the enthusiasm. In lieu of that…
My category dropdown menu (right margin) and of course search box can help a little with that. I also have some summary articles designed for the purpose that are worth bookmarking (e.g. An Ongoing List of Updates to the Arguments and Evidence in On the Historicity of Jesus).
And that Read feature I mentioned allows controlling font and size etc. There are also extensions that do other of those things for various browsers (e.g., Google Dictionary does that dictionary thing for any webpage).
Book anthologies need value-added, so I only do them when they make expensive things more affordable. For example, Hitler Homer Bible Christ includes several hundred dollars worth of articles (that’s what you’d have to pay to get access to them all independently) alongside some hard-to-find research papers, as well as a couple otherwise-already-free blog articles that suit that ensemble—and all for just twenty bucks or so (less on Kindle). When I accumulate enough peer-reviewed philosophy articles for the purpose, I will develop and publish a companion to Hitler Homer on that subject, following the same kind of selection.
As far as tipping the author as a reward for their work, do note we often have ways to do that! I’m not alone (any author might have ways). But for my own part, see How to Help for the various options. Indeed, patron support like that is what keeps my blog articles free and clean of intrusive ads.
I tried the Push to Kindle option outlined in the article Richard mentioned above and it works great! You have to allow the Push to Kindle email in your Kindle permissions, but other than that it’s quite easy to set up. You can turn off images so it’s just text too, without the graphics in the side bars. This is how I’m reading this blog from now on.
Jere, thank you for reporting. I often won’t know how well or if these things really work without someone using them giving me feedback. I appreciate it.
Yudkowsky’s work is interesting. Do you have a link to this “book”?
Rationality: From AI to Zombies
https://www.amazon.com/Rationality-AI-Zombies-Eliezer-Yudkowsky-ebook/dp/B00ULP6EW2/ref=sr_1_3?crid=2G7J9A80U4X2N&keywords=AI+to+zombies&qid=1698333774&sprefix=ai+to+zombies%2Caps%2C130&sr=8-3
OH, I was wrong, it’s 2000+ pages, not 1200, no wonder it took me a couple months to finish it.
Jason, that is well worth exploring. Yudkowsky’s site (to which he isn’t the lone contributor; all the content is typically good) is Less Wrong. They have a dedicated page for e-book reading of its content. But Michael might have a more specific link to offer for the version he means.
Mind blown by this article, as I’d just read a journalistic piece on this and it was, as usual, from a journalist, so I was doubting its veracity and thinking it would turn out like that room-temp superconductor thing a few months ago.
Glad to read someone I trust vet the idea as it made sense to me when I read the short piece about it.
I once had a theory that certain poker players were “unnaturally” lucky. Yeah, sure, this friend of mine, who was a good skillful player, who always seemed to win could be just playing to my confirmation bias (and my bitterness) but it still seemed wrong to me. He’d just river me too often to be “random” in my mind.
But this idea that actual randomness would mean everyone gets their AA busted at the same frequency is obviously wrong; no, in fact, with real randomness we’d expect (like the 5, 5, 5, 5 dice sequence) certain players to have their AA hold up more often.
Not in a million billion games, obviously, but perhaps over a season (or a few years or maybe even their lifetime…could explain why certain guys win so many WSOP bracelets, they’re good and skilled, yes, but also selected “lucky” ones, just because randomness doesn’t and can’t make “everyone” lucky).
Which makes me think of the Christian idea that “of course Christianity is true” because “look at its success early on, when nobody would, obviously,” they say, “continue to worship a savior they knew was really dead.”
But, it seems, “randomness” comes in clumps (this does seem axiomatic when we think about it, at least to anyone who’s played poker even recreationally for more than a minute), so it stands to reason that some religion would necessarily be successful, just like Taleb showed in “Fooled by Randomness” that some stock brokers are superstars every year (or even some for many years) just because… that’s how randomness works.
It is counterintuitive, ain’t it?
Indeed, if you think of what are in fact the relatively few times you play or observe a specific opponent (it won’t likely even be thousands, possibly not even hundreds, often mere dozens), then indeed stochastic runs of success or failure have a higher probability, particularly situating yourself among a billion or so other players each in their own observation bubble.
This is an example of where probability theory clashes with human intuition, quite a lot.
There are a large number of cognitive biases involving probability (search the Wikipedia list for “probab” and there are eighteen hits; and not all such biases will be described with that letter string). And most superstition and much erroneous belief is caused by probability error (see Stuart Vyse, Believing in Magic, for example).
But I also survey some particular examples relative to religious thinking in Everything You Need to Know about Coincidences.
This has particular consequences for politics and economics. The “hot hand” fallacy produces most of the arrogance of the rich and their admirers. Because when it comes to financial success, in large systems left to their own devices, luck matters more than talent. But everyone “reads” it (by fallacy of affirming the consequent) as evidence of superior talent. Because everyone is bad at probability.
In the poker case, statistically there will always be some chance-run lucky actors who do well at poker regardless of skill. Because there are millions of players. Someone always wins the lottery. They did not “manifest” that outcome. There are just a lot of people playing.
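A quick simulation (mine, with arbitrary numbers) makes the point vivid: give thousands of players identical 50/50 odds per session, so that nobody is “luckier” than anyone else by construction, and the single luckiest player in the crowd will still rack up a winning streak that looks unnatural from inside their own observation bubble.

```python
import random

# Quick illustration (arbitrary numbers of my own): thousands of players with identical
# 50/50 odds per session. No one is "luckier" by construction, yet the best streak in
# the crowd is always long, and whoever holds it will look unnaturally lucky up close.

def longest_win_streak(sessions, p_win=0.5):
    best = run = 0
    for _ in range(sessions):
        run = run + 1 if random.random() < p_win else 0
        best = max(best, run)
    return best

random.seed(7)
players, sessions = 10_000, 200
streaks = sorted(longest_win_streak(sessions) for _ in range(players))
print(f"typical player's best streak : {streaks[len(streaks) // 2]}")
print(f"luckiest player's best streak: {streaks[-1]}")
```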
Careful. To my knowledge we don’t actually know this.
Yes, irrational numbers expand as non-repeating decimals, but non-repeating is not the same as “contains every possible finite sequence of digits”; there are plenty of examples of irrational numbers that do not.
(0.11010010001000010000010000001… is irrational and does not contain any sequence that includes a 2…)
To be sure, it has been conjectured that π is normal (a stronger notion: that all sequences occur, and with the “expected” frequency), but, again, this is not proven, and it will probably be big news if somebody manages it.
https://en.wikipedia.org/wiki/Normal_number
Good point. Worth noting this as a footnote, thank you.
I am of course operating on the conclusion that pi’s normality is highly probable. That makes it an empirical fact, not yet a mathematical fact. But yes, a formal proof would up the probability to 100% minus epsilon.
This is all nonsense. Everything militates for the existence of God. Anti-theists are a small dying contingent of religious zealots who deny the obvious pervasive evidence for design and historicity of the Christian faith. (Atheism is defined by the Supreme Court as a religion.) [Neo-pagan humanistic earth worship cults, scientism, magic and fideism].
The founding of science and the holding of the majority of relevant Nobels prove the misotheists have no valid appeal to intellectually superior arguments. It’s a collection of opinions that disregard the more established science.
Examples?
The opposite is the case.
I have bad news for you.
You people keep claiming this evidence exists. But when we check, you never have any. I think you might have a problem understanding what “evidence” is.
Only when you define “religion” neutrally as “things you believe” and only in the context of human rights (because there can be no freedom of religion without freedom from religion).
I can list random words, too. [Neo-expressionist origami cults, gerrymandering, socks and mixology].
Identify a single Nobel prize won for proving any religion true or any god exists.
If none, then where are all these supposedly “intellectually superior arguments” for them?
God hypotheses have consistently failed to pass peer review in every genuine science journal for longer than the entirety of my life.
You evidently forgot to check.
I am not sure that you are totally on the right track here, vis-à-vis the underlying physics. For a quick review of entropy and information, I’d direct you to Sean M. Carroll’s Biggest Ideas videos.
This is of interest because of Carroll’s review of the fundamental objections to the concept of the second law from Loschmidt and Zermelo. These objections would be fatal to Boltzmann’s law without another constraint, viz., the past hypothesis. This postulates that the initial entropy at the time of the Big Bang was extremely low.
Roger Penrose has suggested (in The Emperor’s New Mind) that the likelihood of our universe starting with the entropy that it did is 1 in 10^10^123, an enormously small probability. If you are interested in a detour, consider the response of Laura Mersini-Houghton speaking at the Royal Institution.
James Clerk Maxwell noted the core principle of defeating the second law of thermodynamics in his “demon.” Carroll discusses this model and explains how the demon ties together the concepts of information, entropy, and erasure. For several decades now, researchers in non-linear thermodynamics and information theory have shown how systems of increasing complexity can arise by trading entropy flow for information storage. My personal favorites in this arena are Stuart Kauffman’s Origins of Order:
https://www.amazon.com/Origins-Order-Self-Organization-Selection-Evolution/dp/0195079515/
and Nick Lane’s Transformer:
https://www.amazon.com/Transformer-Deep-Chemistry-Life-Death/dp/0393651487/
There is also Friston’s Free Energy Principle (see https://en.m.wikipedia.org/wiki/Free_energy_principle)
with its close ties to Bayesian inference. As well as E. T. Jaynes on maximum entropy, which I believe you have referenced in the past.
https://bayes.wustl.edu/etj/articles/stand.on.entropy.pdf
In short, I do not believe that the Wong-Hazen proposal adds anything to the existing theory on the arrow of time, evolution, complexity, et al. As Carroll has explained, the forward evolution of our universe depends upon two competing dynamics: particle physics and gravity. In the absence of gravity, the Big Bang would have just been a bag of gas and it would have quickly reached thermal equilibrium, without creating galaxies, stars, planets, black holes, etc.
However, your basic observation about what theists propose, viz. that one needs an Intelligent Designer with more complexity than it imparts to the universe that it creates, is spot on. This contrasts with the present cosmological picture, as I just stated.
I think this crystallizes the two views in a nutshell: Can order arise ex nihilo, or not? Quantum field theory shows how, in our universe, a vacuum is in principle impossible because of the Heisenberg Uncertainty Principle: ΔE · Δt ≥ ħ/2. In other words, “empty space” is filled with quantum fields that must have a non-zero ground energy level. Put this together with gravity and you get a universe from nothing.
I want to make this point very forcefully. As Kauffman asserts, order arises naturally in systems poised at the edge of chaos. If a system is too simple, one gets repeating patterns without information. If a system is overly complex, it produces pure randomness. However, at the edge of chaos, systems will generate signals within their parts, communicate these signals to other component parts, and evolve to maximize adaptation to an external environment.
All this without an “Intelligent” Designer…
You will have to specify what in that video pertains here, and the timestamp where it says it.
I don’t see anything here that pertains to my article. I address the “extremely low” entropy vs. complexity distinction already. It’s right there in the article. Nothing I am saying contradicts any of what you are mentioning. And indeed the study’s authors address this as well (albeit not in the context of cosmology).
That actually has a total probability of ~100% given a random selection of the options from 0 to infinity. That’s the Problem with Nothing. You’ll note Penrose agrees with the random walk hypothesis to that entropy state; his multiverse theory produces it inevitably, and by a process that conforms to the Wong-Hazen process. So do other popular theories (like chaotic eternal inflation).
Note that all his number is measuring is how small the universe was relative to its energy content, i.e. density and temperature. He is not measuring organization or order or structure, all of which were, in fact, random. Most cosmologies explain that initial high density, high temperature state as inevitable and not a random selection from existing states, so Penrose’s calculation is as irrelevant as calculating the random chance of getting a habitable planet by randomly nuking the moon. That simply isn’t how planets are made. So the calculated probability is of no relevance to the probability of planets; likewise, universes (and multiverses).
This is literally what my article said. Particularly in the article it repeatedly directed you to on this very point: All the Laws of Thermodynamics Are Inevitable. This has no bearing on Wong-Hazen. They fully account for all of this.
Again, none of this has any effect on any point I made in my article.
Then read the study. They actually spend several paragraphs explaining what it adds and thus why it needs to be specified and formalized independently.
This is just a specific example of the Wong-Hazen law. That is, in fact, Wong et al.’s point. So you should read it.
That does not actually address the theist’s point. They are asking why those fields exist and why they have those specific powers and properties and not others. That still requires explaining. The only thing they are wrong about is that the explanation is at all likely to be a Fantastical Space Ghost. Dittmar’s point is that the entire evidence trendline goes in the opposite direction, toward something vastly simpler. Wong-Hazen merely confirms his point in formal scientific terms.
That is literally the intuition that Wong-Hazen is formalizing. They explain why it needed a formalization, which indeed until now it had lacked.
Let me try again and start with Sean Carroll on entropy. That video introduced five distinct definitions of entropy, beginning with Clausius, who coined the term "entropy." Clausius defined entropy change as ΔS = ΔQ/T, where ΔS is the change in entropy, ΔQ is the change in heat, and T is the temperature.
With this definition, the second law came down to the net change in S around a closed loop in a pressure-volume diagram (a heat engine): ΔS = 0 if every transition was reversible, and ΔS > 0 if any transition was irreversible. Don't worry what this all means. I don't expect anyone who hasn't done at least a four-year course in physics to grok all my BS.
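For concreteness, a minimal worked example of the Clausius formula with standard textbook values (my own illustration, not Carroll's):

```python
# Worked example of the Clausius definition dS = dQ/T (standard textbook numbers):
# melting 1 kg of ice reversibly at its melting point.
latent_heat_fusion = 3.34e5   # J/kg for water ice
mass = 1.0                    # kg
T_melt = 273.15               # K (constant during the phase change)

dQ = mass * latent_heat_fusion
dS = dQ / T_melt
print(f"dS = {dS:.0f} J/K")   # ~1223 J/K of entropy gained by the water
```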
The second definition was due to Boltzmann. His definition was S = k ln(W), where k is Boltzmann's constant, ln is the natural logarithm, and W is the number of ways to configure a system (the count of microstates compatible with a given macrostate). An implicit assumption was that the ways being counted were equally likely. Boltzmann thus gave a much broader definition that could be applied to any system of atoms or molecules, without Clausius's restriction to heat-engine efficiency.
With this definition, Boltzmann claimed that entropy would always increase, toward macroscopic states whose microscopic configurations were more numerous and hence more likely. This claim was challenged by both Loschmidt and Zermelo. Both challenges are fundamentally correct, and Boltzmann's claim of entropy increase fails without an additional hypothesis, often called the Past Hypothesis.
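A minimal sketch of what that counting looks like, for a toy two-state spin system of my own choosing (not Boltzmann's gas):

```python
import math

# Boltzmann's S = k * ln(W): count microstates W for a toy macrostate.
# Example: N two-state "spins" with exactly half pointing up (illustrative toy system).
k_B = 1.380649e-23  # J/K

N = 100
W = math.comb(N, N // 2)          # number of equally likely configurations
S = k_B * math.log(W)
print(f"W = {W:.3e} microstates, S = {S:.3e} J/K")
# The half-up macrostate dwarfs, say, the all-up macrostate (W = 1, S = 0),
# which is why systems drift toward it: there are more ways to be there.
```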
The third definition was due to Gibbs. His definition was S = -k Sum[ Prob(p,x) ln(Prob(p,x)) ], with k and ln as above, where Prob(p,x) is the probability of some small volume in a "phase space" comprising the momenta, p, and positions, x, of all the particles in the system, and the summation runs over all of phase space. This notion of phase space becomes central in quantum mechanics, where particles are no longer allowed sharply defined momentum and position simultaneously under Heisenberg's Uncertainty Principle, but I digress.
With this definition, Gibbs's concept could be applied to a vast array of thermodynamic systems comprising distinct energy storage mechanisms: chemical, electrical, magnetic, kinetic, and so on. The calculation did involve "coarse-graining," in which phase space is divided into finite cells so that no impossible amount of information is required. Many physicists rejected the subjectivity of this approach, preferring good old Clausius, even into the 1970s.
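And a minimal sketch of the Gibbs sum over coarse-grained cells, with probabilities I have simply made up to show the behaviour:

```python
import math

k_B = 1.380649e-23  # J/K

def gibbs_entropy(probs):
    """S = -k * sum(p * ln p) over coarse-grained cells of phase space."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# Illustrative (made-up) distributions over 8 coarse-grained cells:
uniform = [1 / 8] * 8                 # maximal ignorance about the system
peaked  = [0.93] + [0.01] * 7         # tightly constrained distribution
print(gibbs_entropy(uniform) / k_B)   # ln 8 ~ 2.079 (in units of k)
print(gibbs_entropy(peaked) / k_B)    # ~0.39: far less entropy
```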
The fourth definition was due to Shannon. His definition was H = -Sum[ Prob(a) * log_2(Prob(a)) ], where Prob(a) is the probability of a symbol, "a," in a signaling alphabet occurring in a message, and log_2 is the logarithm base 2. We now have an expression given in terms of binary digits (bits) instead of nats.
Shannon's definition explicitly applied to communications systems. He stated that "H" measured information as opposed to thermodynamic entropy. He estimated the entropy of English text broken down into one-, two-, and three-symbol statistics. E. T. Jaynes explicitly asked whose information Shannon was considering. (See that paper by Jaynes that I referenced.) It was not the sender of any specific message: if it were, it would be a piece of cake to store the entire message at the receiver in advance and reproduce it without error when the sender sent it. Jaynes concluded that the information in question was that of the system designer, who had to design a system that could transmit any possible message within the constraints of the "language," e.g., English text, spoken English, video, or audio. Hold that thought.
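A minimal sketch of the Shannon formula on a few toy messages of my own:

```python
import math
from collections import Counter

def shannon_entropy_bits(message):
    """H = -sum(p(a) * log2 p(a)) over the symbols actually seen in the message."""
    counts = Counter(message)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy_bits("AAAAAAAA"))                  # 0.0 bits/symbol: no surprise
print(shannon_entropy_bits("ABABABAB"))                  # 1.0 bit/symbol
print(shannon_entropy_bits("the quick brown fox jumps")) # a few bits/symbol
# A uniform 26-letter alphabet would max out at log2(26) ~ 4.7 bits/symbol,
# which is the sense in which "maximum information" lives in uniform distributions.
```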
The fifth definition was due to von Neumann and captured entanglement entropy in quantum mechanical systems. His definition was S = -Tr[ rho ln(rho) ], where rho is an N×N density matrix for a QM system that entails various mixed and pure states, and the "trace" function is the sum of the elements on the main diagonal.
You can think of a pure state as comprising uncertainty only in terms of observation, whereas a mixed state entails some traditional uncertainty in the state. For example, a stream of electrons emitted from a hot electrode in a cathode ray tube (think old TV) is in a mixed state regarding their spins. Capture one of these and measure its spin on an up-down axis; it is now in a pure state, either spin-up or spin-down. For the first observation, there is traditional uncertainty in the result: up or down, 50-50. Once the observation has been done, any subsequent spin measurement for that electron on an up-down axis will produce the same result, e.g., up. However, all the bets are off if that “up” electron is measured on a right-left axis. It will show right or left with equal probability. Then, if you try to measure the electron again on an up-down axis, it will show up or down once more with 50-50 chances. This is your intro to Fun with Qubits.
With this definition, S = 0 for a pure state, while S is maximal for a maximally mixed state, where it equals ln(N), N being the dimensionality of the Hilbert space containing the system.
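A minimal sketch of that computation for the spin example above (pure spin-up versus the 50-50 mixture off the hot cathode):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of the density matrix."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # convention: 0 * ln 0 -> 0
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

pure_up = np.array([[1.0, 0.0],
                    [0.0, 0.0]])        # electron known to be spin-up
maximally_mixed = np.eye(2) / 2         # 50-50 up/down straight off the hot cathode

print(von_neumann_entropy(pure_up))          # 0.0
print(von_neumann_entropy(maximally_mixed))  # ln 2 ~ 0.693 = ln(N) for N = 2
```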
This definition is applicable to, for example, black hole entropy (leading to the black hole information paradox), computed by Bekenstein and Hawking as S_BH = k * Area_EventHorizon / (4 * length_Planck^2), where k is the Boltzmann constant, Area_EventHorizon is the area of the event horizon, and length_Planck is the Planck length, about 10^-35 meters. This says that information is entangled on the surface of the black hole, and its amount is given by the area of the event horizon divided up into minuscule cells a Planck length on a side.
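To give a feel for the magnitude, here is my own back-of-envelope evaluation of that formula for an assumed one-solar-mass black hole (standard constants):

```python
import math

# S_BH / k = Area_horizon / (4 * l_Planck^2), for an assumed one-solar-mass black hole.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

r_s = 2 * G * M_sun / c**2                 # Schwarzschild radius, ~2.95 km
area = 4 * math.pi * r_s**2                # event horizon area
l_planck_sq = hbar * G / c**3              # Planck length squared, ~2.6e-70 m^2

S_over_k = area / (4 * l_planck_sq)
print(f"{S_over_k:.2e}")                   # ~1e77: an enormous entropy in units of k
```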
So what? I hear you ask. Each of these definitions of entropy yields a different concept of information, and some of them flatly contradict one another. As Carroll explains, if a scientist sees a tightly constrained probability distribution, they conclude that they have a lot of information about a system. In contrast, Shannon says there is almost no information being conveyed; he would say maximum information resides in a system with a uniform probability distribution.
Let's get back to Wong-Hazen. They define functional information as I(E_x) = -log_2[ F(E_x) ], where E_x is some measure of fitness within a system parameterized by a variable, x, and F(E_x) is the fraction of the (countable) configurations of the system whose fitness satisfies some constraint, e.g. E_x ≥ E_threshold.
This definition is reminiscent of Boltzmann's in that a number of ways of achieving some configuration are counted. It differs in that it's given not by the absolute number of ways but by a ratio to the total count of possible configurations; hence we need the negative of the log to get positive information. One explicit example they give is a finite (countable) RNA sequence over the four letters of the coding alphabet, so two bits per A, C, G, or U. For the subset, M, of such sequences that satisfy some fitness threshold for the species with that RNA, we get I(E_x) = -log_2[ M(E_x) / N ], where N = 4^n and n is the RNA length.
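A minimal sketch of that formula with hypothetical numbers (the functional count M below is invented purely for illustration; the paper supplies no such values):

```python
import math

def functional_information_bits(functional_count, total_count):
    """Wong et al.'s functional information: I = -log2(fraction of configurations
    that meet the functional threshold)."""
    return -math.log2(functional_count / total_count)

# RNA example with hypothetical numbers: sequences of length n over {A, C, G, U},
# of which M are assumed to clear the fitness threshold (M is made up for illustration).
n = 100
N = 4 ** n          # all possible sequences (2 bits per letter, 200 bits total)
M = 10 ** 30        # assumed number of "functional enough" sequences

print(f"I = {functional_information_bits(M, N):.1f} bits")   # ~100.3 bits
# The rarer the function, the smaller M/N and the higher the functional information.
```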
Of course, in practice, an RNA sequence comprises codon sequences of three symbols that yield given amino acids, plus non-coding strings. Coding strings are further organized into genes, effectively "on" or "off" depending upon epigenetic state (e.g., methylation), which determines whether a gene is available for transcription and translation into protein. Additionally, combinations of gene status are known to influence other genes: for example, if genes A and B are both "on," then gene C is forced "off." In my humble opinion, even this simple case in Wong-Hazen misses the complexity of defining how a biological system achieves fitness.
In contrast, Kauffman (The Origins of Order) spends pages 29-279 in Part I discussing genetic evolution and adaptation to rugged fitness landscapes in terms of populations where gene status (on, off, and mutual logical circuitry) is considered in detail. This volume was published in 1993, fully 30 years ago. His exposition is far more complete, and while Wong-Hazen references a few papers from Kauffman, it omits The Origins of Order.
Equally troubling is the introduction of stellar evolution. Nuclear fusion (the strong force doing the binding, with the weak force mediating the necessary proton-to-neutron conversions) builds elements of higher atomic number from lower ones (say, helium from hydrogen). Not only are isotopes of higher atomic number being produced within a stellar interior, but many are unstable, with limited half-lives, and quickly decay into lighter elements along with various types of radiation. The cascades of these processes continue and depend upon the generation of the star in question. For example, our Sun is a third-generation star, which is why its accretion disc could produce a rocky planet with an iron core, like Earth.
But if one were to compute a functional information value for stellar evolution, one would be compelled to use a von Neumann entropy formulation. Recall that the necessary density matrices have the dimension of the Hilbert space needed to describe any star under study. To estimate this dimension, a component calculation would give the information content of the star, just like the black hole entropy I refer to above. A decent estimate would come from working out the event horizon of the black hole that the star would collapse into and then computing the number of qubits based on the area of that horizon. Without giving any details, suffice it to say that for a solar-mass star we are talking about 10^77 or so. My fourth-year theoretical physics prof (Dr. Ranga Sreenivasan, see for example https://ieeexplore.ieee.org/abstract/document/8152595/authors#authors ) was an expert in stellar interiors. I can only imagine his response to the concept of functional fitness in a stellar interior. How to assess the subspace of a Hilbert space that would satisfy any arbitrary fitness criterion, relative to the whole space, seems ill-defined.
Of course, first-generation stars are believed to have formed from the hydrogen gas created when protons and electrons recombined around 370,000 years after the Big Bang. This leads to consideration of the Past Hypothesis, discussed in that Sean Carroll video (and elsewhere). While the universe's initial conditions were relatively uniform (seemingly high entropy), two factors intervened: quantum fluctuations and gravity. Minor density fluctuations, of order 10^-4, are seen in the cosmic microwave background. Under the influence of gravity, these minuscule variations were amplified, pulling the gas together until pressures and temperatures rose enough to ignite fusion in stellar interiors. Hence, the picture of initial entropy is transformed once gravitation is introduced into the physics.
But we don't have to venture into stellar interiors to encounter the entanglement entropy roadblock. Consider biological detection of good old visible photons. Photosynthesis is an excellent example since it achieves nearly 100% quantum efficiency, meaning virtually every candidate photon received at a chloroplast yields an electron at the reaction center for energy conversion. This unusual efficiency level (aka fitness for a function) has been studied for decades. In contrast, the quantum efficiency of most optical sensors in digital cameras is about 25-35%. At issue is the transport of a quantum composite particle, an "exciton," created in a chlorophyll molecule (around its central magnesium atom) by an incoming photon. You can think of an exciton as a coupled pair of fermions (which cannot be in the same quantum state) acting like a boson (which loves to be in the same quantum state). An example of this behaviour is superconductivity, where pairs of electrons of opposing spin states are coupled. When such a state of excitons is created, it is called a Bose-Einstein condensate, and it typically happens only at very low temperatures.
However, current research suggests that a Bose-Einstein condensate performs energy transport through a superposition of pathways in a chloroplast with nearly superconducting efficiency even at ordinary temperatures. (See https://journals.aps.org/prxenergy/abstract/10.1103/PRXEnergy.2.023002) Related hypotheses invoke coherent exciton transport through a quantum superposition of pathways to explain the efficiency. I invite you to explore this research yourself; just google photosynthesis efficiency. There are adjacent examples: a frog's eyes can detect a single photon, while human eyes need seven or so.
My main point is that if one attempted to construct a functional information equation for photosynthesis, one would be forced to use a von Neumann entropy. To make the problem more explicit: a classical bit is either "0" OR "1," but a qubit can be "0" AND "1" simultaneously. For example, excitons are transported through path 1 AND path 2 AND path 3 … AND path N simultaneously. The APS article I referenced explicitly calculates the quantum state for an N-qubit superposition (equation 3, page 023002-5) for the photosynthesis problem. From this, one could generate an N×N density matrix and, hence, an entanglement entropy.
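To make that last step concrete, here is a toy sketch of my own, not the calculation in the PRX Energy paper: build the density matrix for an equal superposition over N assumed pathways and compare its von Neumann entropy with that of the fully dephased (classical) mixture.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), via the eigenvalues of the density matrix."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log(vals)))

N = 8                                      # assumed number of transport pathways (toy value)
psi = np.ones(N) / np.sqrt(N)              # equal coherent superposition over the N paths

rho_pure = np.outer(psi, psi.conj())       # "path 1 AND path 2 AND ... AND path N"
rho_dephased = np.diag(np.full(N, 1 / N))  # coherence destroyed: classical either/or mixture

print(von_neumann_entropy(rho_pure))       # ~0: a pure superposition has zero entropy
print(von_neumann_entropy(rho_dephased))   # ln N ~ 2.079 for N = 8
```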
Anyway, I've gone on for too long, and I'm just boring everyone (as usual). But I should be more transparent about why I was grousing about Wong-Hazen. (I almost never succeed in being more transparent.) They don't have a bad idea. I just think their formulation can't capture sufficient detail in all the areas they're aiming at. People have a habit of taking "entropy" and "information" from statistical mechanics, quantum mechanics, and communication theory and plonking them down into new territory, I think, without a clear idea of whether they fit.
But back to your primary thesis, which is that complexity in our universe arises from simple initial conditions. One does not need a complex deity to perform Intelligent Design. I would argue that the correct concept here is “complexity” instead of functional information. If you want to fall off the deep end of the pool in modern theoretical physics, try
Or google “Susskind quantum complexity” and be prepared to hold your breath for a while.
None of this relates to anything I have said. None of this affects my point or that of Wong et al. Indeed, you are largely just repeating things I already said (as for example in my article on the laws of thermodynamics).
You need to go back and start over: read what my article actually says (and its linked articles where appropriate). Do not assume things. Just read what it says and take it at what it says. Then regroup and respond with something that actually responds to something I actually said. It might help to specify which words of mine you are responding to. That will at least put controls on your imagination and focus the discussion.
Case in point:
First, you did not show this. Second, they are not even attempting that.
Their formulation is highly general and thus subsumes most other models (e.g. they make no assumptions at all about which model of measurement is being used; their formula works with all of them). And they talk specifically about how precision is impossible owing to unavailable information, and that all that can be discussed is directionality effects, not precise measures. Conclusions unaffected by any of the distinctions you go on about.
Another example:
Where in their study do they do this? And how would their conclusions be changed by any different approach? Quote the study where pertinent. Then give an example of how their conclusion would be altered with a different set of assumptions.
Then you need to quote their study’s argument to the contrary and why you are right and they are wrong. Because they actually refute this statement of yours, quite elegantly.
You may be missing this because you “assume” they are working with some specific definition of information. They are not. They suggest one, but their conclusions do not require it. Any definition works with their theory, because their theory only predicts transformations in the information-state that will be expected on all measurement models. And they explain why. That’s their whole study’s point.
I actually rely on Susskind in several of my articles here.
I must admit confusion at this point.
If I look at Wong-Hazen, they make three points, listed in the abstract; viz., (1) systems with large numbers of interacting components can achieve huge numbers of configurations; (2) processes exist that can generate huge numbers of configurations; and (3) configurations are selected on the basis of function. From these, they posit a law of increasing functional information. Within the paper, they offer a definition of functional information. I challenged that definition because, IMHO, it would not capture the structure of entropy/information in many of the systems that they take as exemplary.
If I read your review of their paper, you state their law as expressing a continuing increase in functional information. You add
“Thus, the Wong-Hazen effect couples with, for example, the Second Law of Thermodynamics, errantly regarded as a law stating that everything tends toward disorder (that’s not really how that law operates), to produce an opposite effect, a natural tendency toward order.”
Let's take the very big picture of cosmology. The universe begins at the Big Bang, and, ignoring many details, light (photons) begins to flow through the newly transparent space about 370,000 years later. At this time, there were minor temperature/density fluctuations of the order of 100 parts per million. Now, if the only mechanisms were typical kinetic particle collisions, thermal equilibrium would have been established, per the second law, and nothing more would have happened. Ever.
But there was gravity. And instead of damping out these tiny fluctuations, gravity amplified them. In the course of the next 13.7 billion years, gravity gives us galactic clusters, quasars, stars, planets, and most importantly, black holes.
The early universe had an entropy of about k×10^88. Currently, all the stars and interstellar gas have much less, about k×10^82. Neutrinos and photons contribute about k×10^90. But the dominant supply is already in black holes: here we find about 3k×10^104 (of entanglement entropy). So entropy has been increasing, and after less than 14 billion years it has increased by a factor of over 10^16. But this has nothing to do with life or anything interesting or functional. It's just all sorts of stuff falling into black holes.
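Just to make the arithmetic explicit, using the round figures quoted above:

```python
# Quick check of the stated growth factor, using the figures quoted above (in units of k).
S_early = 1e88          # just after the Big Bang
S_black_holes = 3e104   # dominant contribution today (entanglement entropy)

print(f"growth factor ~ {S_black_holes / S_early:.0e}")   # ~3e16, i.e. over 10^16
```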
In the future, all matter will collapse into black holes, which will then radiate their mass into space as Hawking radiation over immense timescales (up to roughly 10^100 years for the largest). All that will be left then is a thin gruel of radiation at nearly absolute zero. This is the heat death of the universe. There will be no functionality whatsoever.
Sean Carroll has a fine analogy for all the fun we are having now. If you look at cream mixing into a glass of coffee, there is an intermediate point in mixing where the cream is swirling around creating fractal-like shapes in a turbulent flow. Then, all the cream and coffee are mixed and everything interesting is done and gone.
Life has evolved in the universe, but the long-run prognosis in the current cosmological story is not much different from the heat-death picture physicists already had in the 1890s. So yeah, maybe someone can write a science fiction story about capturing all of the information back out of all the black holes in the universe before any life-forms are sucked in.
I just want to point out that while complexity is presently increasing, our luck is going to run out, unless you want to count being sucked into a black hole and smeared across its event horizon as a bag of entangled qubits. And just like no Intelligent Designer started the ball rolling, ain't no ID going to save our collective bacon in the End Times a-waitin' on down the road.
Be careful of the error of only reading the abstract and not the arguments and formalisms in the study.
But yes, in a nutshell, those are their three points.
First, they do not do this. They never specify which of the systems of measuring information you delineate, for example. The most they do is suggest one indirectly (in the cited RNA combinatorics studies, mainly from the thought of Szostak).
Second, you have not given any example of this supposed conflict. So far as I can see, you can substitute any definition and get their same results.
This is literally Wong et al.’s point. They even mention this as a “frustrator” (s. “frustrate”) essential to their law.
They explicitly say this: in a model without such things, “systems smoothly march toward states of higher entropy without generating any long-lived pockets of low entropy, for example, because of an absence of attractive forces (gravity, electrostatics) or universal overriding repulsive forces.”
So you are not signaling to me that you have read their paper.
This is actually false, BTW. Read my article on Thermodynamics: probability entails eventual rare entropy-reversals even in heat death conditions because every entropy reversing state (classical or quantum) has a nonzero probability, and on an infinite timeline all probabilities approach 100%.
But that has nothing to do with Wong-Hazen. They are not questioning the scenario you describe. And it has no effect on their proposed law.
He also generated the statistics I reference in my previous point: that a new Big Bang is an inevitable outcome of even a heat-death state of our future universe. He calculated its quantum mechanical probability.
But again, none of this has any bearing on Wong-Hazen.
It may be that the universe will at some future point lack sufficient randomization and frustrators to inevitably produce entropy-reversals sufficient to drive complexity any further. That vindicates rather than refutes Wong-Hazen. Because their entire point is to illustrate what conditions are needed to observe the effect. Take those conditions away and the effect goes away: that’s a scientific confirmation of the law.
But Carroll would correct you here anyway: there will never be a future state of the universe in which randomization stops; and randomization plus an infinite timeline equals all configurations of the space will eventually be explored. That then entails entropy reversals sufficient to drive the Wong-Hazen process again.
Again, this is not Wong-Hazen’s concern. They don’t make any declarations about the deep future. They are only explaining observations in the present: given the conditions they state, the effect will be observed. This is not affected by a possible future state lacking those conditions.
Au contraire, mon frère. I've read through their paper three or four times. I've also checked out the references that deal with functional information. I just have a problem with the thesis. I think this problem is finally coming to the fore in our discussion.
I do not think that gravity is a "frustrator" of entropy growth. It is of the essence. Note that if there were no Einsteinian General Relativity in the universe, the conditions after the Big Bang would quickly have led to temperature equilibrium, just like a gas in an enclosure, per Clausius, Boltzmann, and Gibbs. I pointed out an estimate for the entropy after the Big Bang, and I referenced the current entropy of the universe. The difference is almost entirely due to the influence of General Relativity together with quantum mechanics (GR+QM). Mass and energy fall into black holes, and entanglement entropy appears on their event horizons. The current entropy of the visible universe is about 10^16 times higher than just post Big Bang. This is not gravity frustrating the growth of entropy; rather, it is GR's warping of space-time pulling even massless particles like photons into black holes.
If someone has a grand theory that
(1) prevents, say, Sagittarius A* (the BH at the center of the Milky Way) from continuing its project of consuming our galaxy, merging with the BH within Andromeda, and the pair then consuming both galaxies; and
(2) prevents the rest of the visible universe from receding faster than c because of vacuum energy; and
(3) prevents the one BH left in this patch of the visible universe from gradually evaporating via Hawking radiation into a local buzz of particles,
then they should publish and defend it.
I have heard various theorists propose other outcomes that might arise at this point. As you mention in your article on thermo, perhaps another Big Bang, however unlikely, will kick off from the spent ashes of this universe. Roger Penrose likes a recurring cyclic cosmology. Mersini-Houghton favors a multiverse spinning out of the string theory landscape.
All of these hypotheses are rather speculative to say the least. I will admit that Mersini-Houghton and collaborators have some hard evidence for their string theory conjectures.
Back to life on Earth: I think the mechanism is that the Sun in the sky is a concentrated, low-entropy supply of energy. It sends us photons from its surface at about 6,000 K. Since the average Earth surface temperature is relatively stable at about 290 K, the surface radiates about as much heat as it receives (neglecting the growth of upper-atmosphere CO2, damn it), but it radiates roughly 20 photons for each one it receives. Life is living off that entropy growth. And that point source of low-entropy energy in the sky is due to GR+QM.
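The arithmetic behind that photon count, with standard round numbers (my own back-of-envelope):

```python
# Rough photon bookkeeping (standard round numbers): Earth re-radiates the energy it
# absorbs, but at a much lower temperature, so each incoming photon becomes roughly
# T_sun / T_earth outgoing photons (typical photon energy scales with temperature).
T_sun_surface = 5800.0   # K
T_earth_surface = 290.0  # K

photons_out_per_photon_in = T_sun_surface / T_earth_surface
print(f"~{photons_out_per_photon_in:.0f} photons out per photon in")   # ~20
# Same energy in as out, spread over ~20x as many photons: a large entropy increase
# that life can feed on.
```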
To suggest that gravity is a frustrator of this process misses the point that, without this GR+QM combination, we would not have a sun, or a rocky planet with elements suitable for biology, at all.
I think that I should probably take my further comments over to your article on thermo. Perhaps my problems with Wong-Hazen have more to do with your interpretation of statmech than the concept of functional information.
As far as the big picture concerning theists, I am in great agreement with you. Back in 1977, Ilya Prigogine won the Nobel Prize in Chemistry for showing how entropy flow through non-linear systems can yield complex structures. I concur completely that complexity arises out of entropy flow through non-linear dynamics (per Prigogine, Kauffman, Lane, et al.) and there is no need for any god. I just step off at the suggestion that randomness, in and of itself (e.g., in a simple linear system), can yield a stable increase in complexity.
Sigh. That is not what Wong-Hazen say.
How can you have read their article and still not understand what they said? It isn’t obscure. Their English style is actually remarkably clear for science articles.
How do you not know what a Wong-Hazen frustrator is?
How did you come to think it is something that causes “entropy growth”?
This is bonkers.
They never say any such thing.
They very clearly say a frustrator is an essential component of any process that would naturally generate ordered complexity, not entropy.
They also don’t say “it alone” can do this. So “just having gravity” would not suffice to activate the Wong-Hazen process.
Please go back, re-read everything I and they said, and start over. Because you have clearly gone completely off the rails pretty far back.
“…the digits of pi, read with (say) base-ten ASCII, include all possible books that have ever and even could ever be written.”
and:
“Colorless green ideas sleep furiously”
are for me both functionally nonsense, I’m afraid. An example of chaos arising in your order, perhaps? 🙂
Reading this piece was as if I were making myself of no reputation and taking upon myself the form of a slave-boy in Socratic dialogue. Elegant and aesthetically pleasing. It simply follows and is implied from what is already known in that eternal “Duh!” moment that Truth lives in.
“DEATH APPROVES THIS CONTENT.”
“EEK!” – Death of Rats seconds this emotion.
(Of course THEy do, the preternatural good looks of forever about thirty imply either foul magicks and an ageing portrait in an attic, or DEATH has flipped an hour-glass at least once! ;-))
Death as necessity for eternal, if sequential and consecutive, life; stick that in your thurible and smoke it!
This post was a blast, a maximum dopamine hit and I’ve the silliest smile atm.
I didn’t actually come here to read this post though, but to ask if you’d come across and read this:
Romans 1:3 and the Celestial Jesus: A Rebuttal to Revisionist Interpretations of Jesus’s Descendance from David in Paul.
https://www.researchgate.net/publication/369830801_Romans_13_and_the_Celestial_Jesus_A_Rebuttal_to_Revisionist_Interpretations_of_Jesus's_Descendance_from_David_in_Paul
just summarised by Litwa. I can see a couple of things wrong straight up, but my minimum mythicism depends on elsewhere in “Paul”; and yours and Doherty’s writings are as Hillel’s “The rest is commentary” was to his “That is the whole of the Law” stood on one leg; so such merely holes my sail, (and never mind the hull!) even if correct.
That Hansen article was already refuted before it was even written (as often happens with these people), in Empirical Logic and Romans 1:3. The best starting point for addressing all of Hansen’s propaganda and rhetoric is my article Chrissy Hansen on the Pre-Existent Jesus which actually covers every article Hansen has ever written, one way or another.
In general, just FYI, you should post questions like that under blog articles closer to the topic. You can use the search box or the category drop-down menu to find one (right margin).
Tah. Chrissy and Christopher are the same person? I wouldn’t know and wouldn’t assume.
The journal for Wong et al. is PNAS (Proceedings of the National Academy of Sciences of the USA); "Biophysics and Computational Biology" is a section of the journal, not the journal title.
Good catch. Thank you. Fixed!
Yet another example of the fundamental problem of all Christian cosmological and metaphysical arguments.
They're all based on intuitions, which we all agree (yes, even the theists) cannot be trusted when it comes to the beginning of things.
“There just is an answer”, “It’s logically necessary” or “It’s eternal, so there’s no point at which it starts” all feel unsatisfying… because what we’re talking about is not anything we have any reason to expect our intuitions will apply to.
Theists disingenuously end their analysis at the point where they can invoke God, but God is just a yet-greater mystery. How does God work? How does a disembodied mind operate or think? If it’s a trinity, how the heck does that work?
The hidden premise in the theist position is that "the beginning of the physical universe must be explained, while the actual cause of the universe need not be." But even that doesn't cut it because, as Bogardus complained, if you don't have an explanation for a cause, you're not really done. "The universe happened because magic" is not an explanation; it's handwaving. So they don't even have an actual explanation for the beginning of the universe. They are conflating their theory's seemingly good parameters with the mechanism it lacks: "Our theory explains the appearance of design by mandating design" is not a response to "Your theory doesn't explain, probably even in principle, what mechanism acted to make the universe."
Once we all accept that it is likely that aspects of fundamental reality aren’t going to be intuitive compared to day-to-day life, all of the objections of theists essentially evaporate.
Stephen Jay Gould has some interesting things to say about how living evolutionary systems select for complexity and excellence in his book Full House.
That has long been a staple concept in evolutionary biology. Wong-Hazen are expanding the observation to the entirety of physics.
thanks… i went looking for my copy of Full House to check my memory but it’s gone missing… i’m gonna keep working on this blog….
I'd like to hear a follower of Spinoza speak on this. It seems to fit quite well with his theology (which, granted, is only different from atheism in subtle ways).