In 2004 I composed for The Secular Web a detailed Critical Review of Victor Reppert’s Defense of the Argument from Reason. I still reference it whenever the “Argument from Reason” comes up. But anyone who visits it will notice it’s quite long. It had to be, in order to be thorough and not be accused of “leaving out” or “ignoring” any of Reppert’s arguments, or of glossing over key issues without a full explanation; still, a shorter summary would be helpful. This year Graham Oppy produced for the 70th volume of Annals of Philosophy a brief summary article of his own debunking of Arguments from Reason (see “Anti-Naturalistic Arguments from Reason”). Though Oppy’s article is too brief to satisfy, it has inspired me to compress my critique of Reppert into a summary of key points, incorporating some of the observations now made by Oppy. This article can then serve as a more convenient précis, one you can refer people to, of why the Argument from Reason (or AfR) doesn’t work. (They can still dive into further details in my original article if they wish.)
The Gist of the AfR
The Argument from Reason comes in many forms but the basic idea is this: natural selection, or indeed even physics, cannot explain how humans reason as they do; whereas a God who installed that as a supernatural power in us would explain it entirely; therefore, the fact that humans reason as they do proves God exists (and arranged our minds to exist as they do). Neither premise of this argument is true. And I do not merely mean to say that the apologist has failed to prove either premise is true (which alone would be enough to render the argument unsound, as premises not known to be true can only produce conclusions not known to be true). I am saying both premises are provably false. By which I only mean, of course, that both premises are demonstrably too improbable to credit, not that they are logically impossible; but that a claim about reality is too improbable to credit is what we all actually mean by saying a claim “is false.” So, false it is.
Like all apologetics, this argument completely reverses into an argument against the existence of God as soon as you “put back in” all the pertinent evidence the apologist has left out. I already summarized this for the Argument from Reason in my article on Bayesian Counter-Apologetics:
If God did not design us, our innate reasoning abilities should be shoddy and ad hoc and only ever improved upon by what are in essence culturally (not biologically) installed software patches (like the scientific method, logic and mathematics, and so on), which corrected our reasoning abilities only after thousands of years of humans trying out different fixes, fixes that were only discovered through human trial and error, and not communicated in any divine revelation or scripture. But if God did design us, our brains should have worked properly from the start and required no software patches, much less software patches that took thousands of years to figure out, and are completely missing from all supposed communications from God.
Thus, observation confirms that the actual evidence of human reason is far more probable if God did not exist than if he does. Thus, even the Christian’s own Argument from Reason argues that God does not exist, rather than that he does. Because once again, when we bring in all the evidence, the Bayes Factor strongly supports atheism.
Which point I further supported in The End of Christianity:
[The human ability to reason] derives from our evolved capacity to use symbolic language (which is of inestimable value to survival yet entails the ability to learn and use any language—including logic and mathematics, which are just languages, with words and rules like any other language) and from our evolved capacity to solve problems and predict behaviors (through hypothesis formation and testing, and the abilities of learning and improvisation, which are all of inestimable value to survival yet entail the ability to do the same things in any domain of knowledge, not just in the directly useful domains of resource acquisition, threat avoidance, and social system management).
So we don’t have anything we need to explain. As to how humans could achieve the heights of modern science and logical reasoning through other cognitive skills (language and problem-solving) that were acquired through blind natural selection, or why we can trust the conclusions of these new invented procedures over those original cognitive skills, that’s all an inevitable outcome of underlying and predictable steps of evolution. That we acquired these capacities in that way—and why we preferred the ones we invented to the ones we inherited—is proved by the fact that our innate reasoning abilities are in fact quite poor and flawed, and by the fact that our achievements in logic and science took hundreds of thousands of years for us to even figure out, and have observably been far more successful, yet continue to require considerable years and expense to “install” in each and every new human being born. And still we are prone to not using those tools (even if we’ve had them installed), and to forming beliefs instead based on our flawed natural means, thereby even today producing a plethora of self-defeating and wildly false beliefs, religions, ideologies, and worldviews.
This is what we expect if the human capacity to reason effectively was a merely human invention using naturally evolved talents. It is not at all what we expect if it were installed magically by God. Worse, any God worth any respect at all would never tolerate this result. We have a vast quantity of evidence regarding how competent and considerate engineers behave, and from that evidence we can confidently predict that any (literally any) such engineer would have built our minds to reason reliably from day one. Therefore, that that isn’t the case proves there can be no competent or considerate engineer responsible. When we look around for the most likely culprit, random and capricious natural selection is abundantly in evidence as responsible instead—confirmed not only by predictive models of cognitive evolution, but by the actual accumulated historical evidence of the evolution of brains themselves, from worms to lizards to mice to apes to us, and by anatomical and physiological—and social, historical, and psychological—study of how our brains produce the reasoning they then and now do.
There is a great deal else about how our minds actually did arise and actually do work that quite soundly refutes the existence of any worthwhile god (see my Argument from Mind-Brain Dysteleology). But here my focus will be solely on the human capacity to reason. Where did it come from? Why does it work? Why should we trust it?
The Importance of Causal Reduction
It is usually said the AfR was invented by C.S. Lewis—and then soundly trounced by Elizabeth Anscombe. That’s mostly true, although Graham Oppy traces the history of the AfR even further, finding versions of it presented by several earlier thinkers, even as far back as the late 19th century. It has also found variants in the hands of subsequent authors as Oppy documents, most famously in the form of Alvin Plantinga’s Evolutionary Argument Against Naturalism, which I’ve refuted before in a single paragraph:
If our faculties evolved by natural selection, Plantinga says, then we could never trust them. This is false. Because his argument for this conclusion, “What evolution requires is that our behavior have survival value, not necessarily that our beliefs be true,” is false. Or at least, not relevantly true. Because beliefs evolved to regulate our behavior in ways that would be conducive to our survival. The more capable you are of forming true beliefs about the world, the more adaptable to that world you will be. Certainly, evolution would not ensure our beliefs are always true; but lo and behold, that’s exactly what we observe: our cognitive systems fail in systematic ways that are fully explicable on their having been naturally selected, but inexplicable on their having been intelligently designed. Thus, the evidence here, once again, argues against the existence of a god, not for one.
So much for that nonsense (for more on that, see Why Plantinga’s Tiger Is Pseudoscience).
By contrast, Victor Reppert makes probably the best effort to make the AfR “work,” in his book C. S. Lewis’s Dangerous Idea (and a few subsequent articles and chapters in later anthologies), assigning himself the task of “fixing” all of C.S. Lewis’s boner mistakes in formulating it. Reppert’s basic approach is to say that “if some purposive or intentional explanation can be given” for any phenomenon, like human reasoning, “and no further analysis can be given in non-purposive and nonrational terms” (p. 51), then purpose or intention must somehow be basic properties of the universe. But “purposive basic explanations” cannot “be admitted into a naturalist’s worldview” (p. 53). Therefore, if one such phenomenon should turn out to be the formation of logical inferences, then “reason must be viewed as a fundamental cause in the universe,” which cannot be true on naturalism (p. 51).
As formulated, this argument is formally valid: if it is the case that the basic explanation of human reason is inherently purposive or intentional (or indeed, in any sense mental), then it is the case that naturalism is false. This is basically a tautology, since the absence of such explanations is precisely what defines naturalism against supernaturalism (see Defining the Supernatural). It’s just that the condition isn’t met: there are no basic explanations of that kind. Every purposive or intentional explanation is reducible to non-purposive, non-intentional, physical-mechanical causes (in the case of the human brain, that would be the electrochemical interaction of neurons). That’s why we’re sure naturalism is true (see Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them; Why A Neo-Aristotelian Naturalism Is Probably True; and The Mind Is a Process Not an Object).
You might notice, though, that even if the key premise here were true, it still wouldn’t get us to God as a conclusion. So one more step is needed. Reppert fills that gap by arguing that if we rule out naturalism (if it is falsified by sufficient evidence—such as by proving the stated condition is true), then theism becomes the next best explanation. Unfortunately for Reppert, and unlike that first step in his argument, this isn’t a valid inference. Nontheistic supernaturalism would actually fare better for a number of reasons, not least being that “God” as an explanatory entity possesses maximal (indeed, patently absurd) specified complexity and will thus always be the least likely explanation of anything that hasn’t also been ruled out (see The Argument from Specified Complexity against Supernaturalism; A Hidden Fallacy in the Fine Tuning Argument; Bayesian Counter-Apologetics; and The Argument to the Ontological Whatsit).
But that’s already a well-established problem for theists across the board. So let’s assume for the sake of argument that someone found a way to solve that particular problem, and thus could actually rule out with evidence all the far simpler supernaturalist worldviews without gods in them, thus making Reppert’s final step also “work” (thousands of years to date and no one has done that, but still). That leaves us to address the first step: to even so much as get naturalism off the board, an “argument from reason” must demonstrate the causal irreducibility of some purposive or intentional event—and the events it focuses on in that category are rational events, namely logical inferences (deductive and inductive). AfR proponents are in effect claiming that it is impossible either to have or to evolve any capacity for rational inference that is reducible to nonrational causal steps; therefore naturalism, by claiming to be arrived at by rational inference, is self-refuting. Which is a claim that seduces many a gullible person unaware of its pseudoscientific basis (and for an example of that, see my recent treatment of this point in respect to Justin Brierley).
But the approach to the world they are embracing here is ignorant and illogical. It’s like trying to argue that bricks, being just bricks, can never comprise a house. Obviously, a house can be reduced to mere bricks, none of which has doors or windows or a living-space inside. Yet those bricks can be organized so as to produce such a functionally different thing—a thing that can exist in no other way except as an assembly of simpler things that are not themselves a house. Or like someone going around insisting a wheel has to be composed of parts that are themselves “in the last analysis” round. Yet the wheel can roll, even when its parts cannot. Causal properties thus arise from the organization of a material, not just from the material itself. A gold ring will roll down an incline, but a gold block will not—despite these objects being made of nothing whatsoever but the very same gold. Obviously, in relevantly the same way, a reasoning system can arise from the organization of simpler nonreasoning systems. And you have to be pretty darned foolish not to realize this.
It’s important to note here that Graham Oppy’s response to this is, I think, confused. He argues that “by the lights of identity theorists” such as himself “if naturalism is true, then it is not the case that all beliefs can be fully explained in terms of non-rational causes” (p. 25), but then he simply provides an explanation of beliefs that consists entirely of non-rational causes, namely, the electrochemical interaction of neurons. Exactly as he should. I can only assume there is some confusion here (either on Oppy’s part or that of the philosophers he is responding to) over what it means to reduce a rational process to solely non-rational causes: whether it means that the process as a whole cannot be rational, or only that each fundamental part of the process is non-rational, even if the aggregation of those parts is rational.
It seems clear Oppy’s position is actually the latter: that any rational thought process experienced in the mind is identical to a causal sequence of electrochemical events in the neural network of the human brain. And that is what makes that process a rational one. But this is still entirely reducible to non-rational events: take it apart into its individual components (one electron, caused to take one synaptic pathway, by some other cascade of events, each of which individually is nonrational), and no component is, of itself, “rational.” What is rational is the whole system together; not its individual parts. I don’t think Oppy disagrees here (and scientifically, he most definitely can’t). And I think this is what his opponents are trying to deny. But their denial is contrary to scientific fact.
I think Oppy might be stumbling over a common confusion of reductionism with fallacies of division and composition. I took Justin Brierley to task for the same confusion recently; there the subject was “love” being “just” a “bunch of atoms in motion.” But that is to miss a key component of reductionism: it does not say there is no difference between a brain and a tank full of randomly moving gas, for example (or no physical difference even between a brain experiencing love and a brain experiencing hate). Both are just a bunch of atoms in motion (even brains experiencing different things are made of all the same parts). But the difference between them exists in the way they are arranged. Which is also an entirely physical, reductive fact.
That a network of synapses is arranged differently than a tank of randomly moving gas, and that this arrangement has causal effects distinct from it, is a fully reductive fact. Thus, it is not enough, when reducing a thing or process to its component parts, to simply list the parts. Their physical (indeed, outright geometrical) arrangement also has to be on that list. (See my discussion of reductionism in Sense and Goodness without God, index.) No one of these things is “rational.” A single atom in a neuron can’t think. Nor can a geometric pattern for interconnecting a bunch of neurons think—without the neurons. You need both: the neurons, and their pattern of arrangement (and to have the neurons, you need the atoms; and to have the atoms, you need the quarks and leptons and the physical laws they obey; and so on). No one part of this “is rational.” But all the parts together can generate (and thus be) a rational thought process.
So, contrary to what Oppy says, if naturalism is true, then it is the case that all beliefs can be fully explained in terms of non-rational causes, those non-rational causes being a bunch of isolated electrochemical neural events, and the physical pattern into which they have been causally arranged. This is more obvious in the case of solid-state computers, where unlike the human brain, we know every single causal step occurring, and why it has the outcome it does; so we can actually verify there is nothing else going on but purely physical causation. You can take a bunch of transistor logic gates and wire them together (arrange them) in a random way, which will not produce any coherent or rational calculation. Or you can take those very same transistors and wire them up in a particular way that exactly replicates a rational inference in Boolean logic. Not a single one of those transistors is “a rational inference in Boolean logic,” nor is the idea of their arrangement without the transistors (as a mere diagram of a circuit will not compute anything, just as an absolutely complete description of a heart still won’t pump blood; an error we find in the infamously failed Mary the Scientist thought experiment). Only the combination of all these things is a rational inference in Boolean logic.
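To make the point concrete, here is a minimal sketch of my own (in Python rather than silicon) showing how the very same primitive gate yields a valid rule of inference or mere noise depending entirely on how it is wired together:

```python
# A minimal sketch of the point about arrangement: the same primitive
# gate (NAND, which a pair of transistors physically implements) yields
# a valid rule of inference or noise, depending purely on the wiring.
def NAND(a, b):            # the only primitive; everything else is wiring
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

def IMPLIES(a, b):         # "if A then B" as a wired-up circuit
    return OR(NOT(a), b)

# Wired one way, the gates replicate modus ponens: whenever both premises
# (A -> B) and A hold, the circuit guarantees B. A truth-preserving inference.
for A in (0, 1):
    for B in (0, 1):
        premises_hold = AND(IMPLIES(A, B), A)
        assert not premises_hold or B == 1

# Wired another, arbitrary way, the very same gates compute nothing
# rational: e.g. NAND(NAND(A, A), B) tracks no valid inference at all.
```

No single gate here is “a rational inference”; the inference exists only in the assembled arrangement, which is the whole point.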
Causal reduction therefore requires accounting not just for the parts (the transistors; the neurons) but also how they are causally connected (the exact arrangement of a circuit; the exact arrangement of neurons). Hence Oppy is right when he says, “By the lights of identity theorists, given that mental events just are neural events, and given that it is unproblematic that neural events have causal consequences, it is not true that mental events are invisible to evolutionary selection” as some AfR proponents claim (p. 26). Because the functioning neural system and the mental events that it generates are one and the same thing. The idea that naturalists have to just “assume” as a remarkable unexplained coincidence that mental content (qualia) and neural content (causal modeling) are always consistently correlated “is one that identity theorists will quite properly dismiss,” because for identity theorists, those two things are identical. Which means the probability of their correlation is logically necessarily 100%. No further explanation is needed.
Therefore, as Oppy puts it, “we do not need a commitment to” any “pre-established harmony” between mental and physical content “to underwrite the claim that mental states and processes are neural states and processes” (p. 27). There is simply no evidence against (and some evidence for) the conclusion that you can’t have two identical neural states and end up with different (or no) qualia being experienced at the same time. The physical information processing of those neurons just is the experiencing of those qualia. And as that’s the case, there is no sense in which we can’t causally reduce all rational thought to the interaction of individually nonrational parts. To the contrary, that we can do that (as has been conclusively proved in the case of solid-state circuitry) is a decisive refutation of the AfR proponent’s claim that that can’t be done.
Take literally any rational inference—literally any whatever—and we can write up a purely physical circuit of transistors that will replicate it exactly; and then build it, and run it, and observe that it works. And we can fully explain why it works with nothing more than standard physics (electrons, configurable resistors, known laws of physics). We have thus already proved no “extra” anything is needed for rational inferences to occur in a purely physical universe. All logic is reducible to nonrational physical operations. All. There has never been an exception.
One might then try to insist that, well, okay, you got us, clearly rational thought is reducible to physical-causal interactive circuits; but the brain isn’t made out of solid-state transistors, so maybe we can’t explain how the brain does this. But that doesn’t succeed as an argument, because we do know enough about how the brain is constructed to know that it operates essentially in the same way as a circuit of transistors. A signal comes into a neuron (an electrochemical pulse, playing the same functional role as the electrons in a transistor circuit; modulating signals in the brain also include chemicals, which have a comparable physical-causal effect), and some chemistry happens in the neuron that determines which signals (if any) leave that neuron, and along which synaptic pathway(s). Input + I/O protocol = output. And this works in exactly the same way as logic gates: two inputs along two particular synapses can reliably cause a different output signal than one input alone would do. And so on. The result is a physical computation of information, to ever-increasing complexity corresponding to the physical complexity of the computing circuit. We have even replicated some of the structural physics of this, emulating the neuralnet structure (the physical arrangement) of synapses in the human brain in the physical or virtual circuits of computers, and then demonstrated that those arrangements of circuits behave in many ways similar to and distinctive of the human brain’s arrangement of synapses.
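As an illustration of that “two inputs versus one” behavior, here is a toy threshold unit (the textbook McCulloch-Pitts idealization; no claim about biochemical detail) showing how it implements logic gates:

```python
# A toy threshold neuron: fires (1) only if the weighted sum of incoming
# signals reaches the threshold. A crude model of synaptic integration.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With threshold 2, the unit is an AND gate: both synapses must fire.
assert neuron([1, 1], [1, 1], threshold=2) == 1
assert neuron([1, 0], [1, 1], threshold=2) == 0

# Lower the threshold (a purely physical change in the arrangement) and
# the very same unit becomes an OR gate: either input alone suffices.
assert neuron([1, 0], [1, 1], threshold=1) == 1
```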
So there is no basis for claiming the brain isn’t running rational inferences in much the same way as computers do. Not identically; but with enough demonstrated functional similarity as to leave no need of any further explanation of how our brains can reason. So when AfR proponents insist we can’t replicate a “ground-and-consequent relationship” in a “physical cause-and-effect system,” they are simply wrong as to the science. “If A, then B; A, therefore B” is a fairly simple “ground-and-consequent relationship,” which we have replicated endlessly in “physical cause-and-effect systems.” Every computer and computerized device on Earth uses this inference model to make decisions by. So the claim that it can’t be done is false. And all the evidence we have already points to our brains doing it in much the same way.
So What’s the Secret Sauce?
A naive AfR proponent will say that on naturalism all false beliefs are the product of causal-deterministic systems; therefore naturalism cannot explain how we could tell the difference between a false belief and a true one, since true beliefs are also the product of causal-deterministic systems. But this is the fallacy of affirming the consequent. Simply because it just so happens that all false beliefs are formed causally, it does not follow that all causally formed beliefs are false. The fact of a belief being causally formed is, after all, a Red Herring: what makes a belief dubious is not that it was causally formed, but that it was formed by a process that was not significantly truth-finding. And whether a process is significantly truth-finding cannot be ascertained by examining whether it is a causal process or not.
Indeed, rational and irrational thought must both be just as causally deterministic even if supernatural souls are doing it. So the AfR proponent taking this tack is shooting themselves in the foot, to be honest. Any rational thought process deterministically always has the same output from the same input (once you have the premises, only one conclusion follows). Thus, it cannot be said that rational inference cannot be a causal process simply because non-rational inferences are. All rational inferences must necessarily be a completely deterministic causal process too, even on supernaturalism. So you have to instead analyze whether the process in question is reliably truth-finding or not. That’s the only difference. Just as a brain works because it is physically arranged differently than a tank full of random gas, so does a truth-finding algorithm work because it is physically arranged differently than any other algorithm.
“If A, then B; A, therefore B” works. And we know this because we can observe it working. Countless iterations; always works. But we can also know this because we can model the process in our thoughts and by observing that model see exactly why it always works. Likewise the converse. Affirming the Consequent we know to be a fallacy not only because when we try out the inference model “If A, then B; B, therefore A” enough times, we see it fail—a lot—but also because we can model it and observe in that working model why it will fail a lot: there are many ways to get B without A; so obviously we can’t count on the model “If A, then B; B, therefore A” to be reliably truth-finding. To the contrary, we can predict it will lead us to a lot of failure, because of the inconsistent correlation of B with A.
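That “model it and see why” procedure can itself be made mechanical. Here is a minimal sketch of my own that enumerates the whole possibility-space and shows why the one pattern never fails and the other cannot be relied on:

```python
# Enumerate every possible state of affairs (truth assignment) and count
# the ways each inference pattern can lead from true premises to a false
# conclusion.
from itertools import product

def implies(a, b):
    return (not a) or b

mp_failures = ac_failures = 0
for A, B in product([True, False], repeat=2):
    # Modus ponens: premises (A -> B) and A; conclusion B.
    if implies(A, B) and A and not B:
        mp_failures += 1
    # Affirming the consequent: premises (A -> B) and B; conclusion A.
    if implies(A, B) and B and not A:
        ac_failures += 1

print(mp_failures)  # 0: "A without B" is an empty set, so it never fails
print(ac_failures)  # 1: the case of B without A, so it cannot be relied on
```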
And we can predict this in essentially the same way that we can predict that trying to hammer a nail with a lemon won’t work: all by simply modeling and examining the process in our computational mind; an obviously advantageous, evolved capacity possessed even by most mammals and some birds. But when there is a consistent correlation of A with B, then that is a correlation we know we can count on: because we can observe that when we model the statement “If A, then B,” there is no possible way to have A without B; that is, in fact, the very circumstance that that statement is describing. So its truth is always assured; and observably assured.
And we have indeed tested all this with computers and robots: the systems of inference that work, prove themselves to work; and their results generate feedback that reinforces trust in that inference model over others that perform poorly by comparison; and searches of the possibility-space confirm why (since “A, then B” is declaring “A without B” an empty set, it is fully observably explicable why we will never observe anything in that set—unless “A, then B” is false, which we will then be able to confirm by observing exactly that: we’ll observe “A without B” isn’t an empty set). With these principles encoded in their circuit designs (hardware or software, it makes no difference to the point), robots can reliably teach themselves, on their own, what the shape and capabilities of their own bodies are, what the shape and contents of the room they are in are, and how to move around in that space with that body to accomplish any assigned task in it.
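That feedback process is easy to simulate. Here is a toy sketch of my own (every number arbitrary) of an agent that reinforces whichever inference pattern keeps paying off, with no logic built into it in advance:

```python
# An agent tries two inference patterns against randomly generated
# situations and reinforces whichever one's predictions keep coming true.
import random

random.seed(1)
trust = {"modus ponens": 0.5, "affirming the consequent": 0.5}

for _ in range(1000):
    # A world in which "if rain, then wet ground" happens to hold,
    # but the ground is often wet for other reasons too.
    rain = random.random() < 0.2
    wet = rain or (random.random() < 0.5)

    if rain:  # modus ponens predicts: wet
        trust["modus ponens"] += 0.01 if wet else -0.01
    if wet:   # affirming the consequent predicts: rain
        trust["affirming the consequent"] += 0.01 if rain else -0.01

print(trust)  # trust in modus ponens only climbs; the fallacy's erodes
```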
We’ve even built robots now that can, all on their own, invent, test, and verify or falsify hypotheses about physics. As I wrote in Ten Years to the Robot Apocalypse:
Eureqa is a program developed [that all by itself] figures out how external systems work by observing them [and] building mathematical models that predict the behavior of those systems. Which turn out to exactly match the laws of nature. Laws we humans figured out by watching those same systems and doing the same thing. One of Eureqa’s first triumphs: discovering Newton’s laws of motion. Those took us over two thousand years of scientific pondering to figure out. Eureqa did it in a couple of days.
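Eureqa’s own algorithms are far more sophisticated, but the basic idea (fit candidate mathematical models to observed data and keep whichever predicts best) is simple enough to sketch in a toy example of my own:

```python
# "Observe" a falling object, then search candidate formulas for the
# one that best predicts the data. A toy version of model discovery.
import random

random.seed(0)
g = 9.8
data = [(t, 0.5 * g * t**2 + random.gauss(0, 0.05))
        for t in [i / 10 for i in range(1, 30)]]

# Candidate laws the searcher can entertain (hypothesized models).
candidates = {
    "d = c*t":    lambda c, t: c * t,
    "d = c*t**2": lambda c, t: c * t**2,
    "d = c*t**3": lambda c, t: c * t**3,
}

def error(f, c):
    # Sum of squared prediction errors: how badly a model fits the data.
    return sum((f(c, t) - d) ** 2 for t, d in data)

best = None
for name, f in candidates.items():
    # Fit the constant by brute-force search, then score the model.
    c = min((x * 0.01 for x in range(1, 2000)), key=lambda c: error(f, c))
    if best is None or error(f, c) < best[2]:
        best = (name, c, error(f, c))

print(best[0], round(best[1], 2))  # -> d = c*t**2, c = 4.9 (half of g):
# Galileo's law of falling bodies, recovered blindly from the data.
```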
And that’s not the only example. Failing to pay attention to all the pertinent scientific evidence like this is typical of Christian apologetics generally, but it is especially typical of proponents of the AfR. They don’t even think to ask if we’ve already built purely physical deterministic systems that do all of the reasoning humans do. They don’t even think to ask if we have already isolated physical features of the human brain that replicate functional physical computation like this. They don’t even think to ask if we have already built coherent, evidence-based evolutionary models that explain why brains do this and have gotten better at doing it over evolutionary time. The AfR-claim that physical systems “can’t” do any of this has been as empirically refuted as any claim could ever be. The claim is done. Dusted. Sorry guys, you lost this argument. Time to move on.
Victor Reppert in particular is guilty of paying absolutely no attention to any scientific facts relevant to his argument—which was fatal to his case twenty years ago (when I pointed out a long list of scientific work he ignored that we already had all the way back then). His case is even deader than a doornail now, with all the advancements we have since made in neuralnet computing, robotics, and neuroscience. As per standard practice in Christian apologetics, Reppert simply ignores all the naturalist and scientific literature on a subject, and declares an issue unresolved that in fact has been resolved many times in many different ways. So when he declares that naturalists “invariably fail to explain reason naturalistically” he is always asserting what he hasn’t even come close to proving, and which is invariably found to be false when you actually check.
Even twenty years ago there were many works discussing the physiology of intelligence, such as Raymond Cattell’s Intelligence: Its Structure, Growth and Action (1987). Even then this was somewhat dated, yet it already covers details Reppert never seems aware of. For example, in “The Physiological and Neurological Bases of Intelligence” (pp. 213-54), Cattell discusses early studies of failures in different aspects of logical reasoning in relation to known brain damage, which is one important basis for concluding that reason is a physical function. Indeed, any theory of a reasoning brain (including Reppert’s) must account for the peculiar facts at hand linking different rational functions with different brain centers. Because the way the brain fails tells us a lot about how it works. Reppert does not address this. He doesn’t even seem aware of it. Cattell also discusses evidence that the more abstract a concept or logical relation, the more physical brain resources it consumes when we entertain it—and, conversely, losses in brain matter degrade such capacity—which again conforms more to a mind-brain physicalist account of human reason than Reppert’s supernatural dualism. And Cattell 1987 is now over thirty years old. A huge amount of progress has been made in these areas since.
For example, fifteen years after Cattell we get Relational Frame Theory: A Post-Skinnerian Account of Human Language and Cognition (2001), which outlines a neo-behaviorist theory of mind and rational cognition, all rooted in scientific evidence. Therein: Steven Hayes, et al., “Relational Frame Theory: A Précis” (pp. 141-54) outlines a physicalist theory of rational cognition; Ian Stewart, et al., “Relations among Relations: Analogies, Metaphors, and Stories” (pp. 73-86) discusses the scientific evidence regarding the neurophysiology of rational cognition; Steven Hayes, et al., “Thinking, Problem-Solving, and Pragmatic Verbal Analysis” (pp. 87-101) discusses scientific evidence regarding the role of language and natural reasoning, and their evolutionary advantages; Dermot Barnes-Holmes, et al., “Understanding and Verbal Regulation” (pp. 103-17) discusses logic as computational rule-following, and the production of intentionality from relational frame perception; and ibid., “Self and Self-Directed Rules” (pp. 119-39) discusses consciousness as a perceptual construct of a self. And that’s just using relational frame theory, which isn’t the only theoretical treatment of the evidence out there. AfR proponents ignore literally all of it.
And it’s only gotten worse for them. Just now I searched Google Scholar for all work published on “the computational physics of rational inference in the brain” since 2018 and got over 15,000 hits, including “Bayesian Modeling of the Mind: From Norms to Neurons,” which includes a section on the “neural implementation of Bayesian inference” in the human brain; “Hidden Computational Power Found in the Arms of Neurons,” which summarizes research on the actual logical operations performed in and by human neurons as well as synaptic networks; “Brain Computation by Assemblies of Neurons,” which describes the building of plausible physical models of human inference operations; “Neural Foundations of Logical and Mathematical Cognition,” which anchors the physics of human reasoning to a physical mapping of brain operations and “visuospatial cognition, language, executive functions and emotion”; and “Resource-Rational Analysis: Understanding Human Cognition as the Optimal Use of Limited Computational Resources,” which builds a higher-level model of the same idea, showing how physical steps of computation can replicate human reasoning behavior (both sound and unsound) and therefore, since we have already observed analogous features in the human brain’s structure and operation, we can infer this is indeed how the brain more or less operates, too. And so on.
And all of this has coherent evolutionary explanations—just see, for example:
- “A Natural History of the Human Mind: Tracing Evolutionary Changes in Brain and Cognition” (2008)
- “Architecture of Explanatory Inference in the Human Prefrontal Cortex” (2011)
- “Convergent Evolution of Complex Brains and High Intelligence” (2015)
- “Rational Inference of Beliefs and Desires From Emotional Expressions” (2017)
- “A Mind Selected by Needs: Explaining Logical Animals by Evolution” (2020)
- … and so on.
As Oppy sharply puts it, “The obvious point to make is that friends of evolutionary naturalism have a perfectly good story to tell about how our senses get to be reliable conduits of information about our environment even though there is no purposive agent who arranges for their reliability,” and the same can be said of all our processes of rational inference, both deductive and inductive, such that “There is not even the slightest hint of a consideration here that should cause evolutionary naturalists to lose sleep” (p. 21). For example:
If—perhaps per impossibile—your kind is disposed to perceive large things as small and small things as large whereas my kind is disposed to accurately perceive the relative sizes of things, and all else is equal, then there are all kinds of ways in which your kind will be relatively hampered in its pursuit of the four Fs. Your kind will make systematic errors—about which things to fight, which things to flee, which things to feed upon, and which things with which to try to reproduce—that my kind will not make. All else being equal, your kind is ahead of mine in line for the exit door. (p. 22)
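Oppy’s point can even be simulated directly. Below is a toy population model of my own (every parameter invented for illustration) in which perceivers who invert relative sizes are outcompeted by accurate perceivers, with no designer anywhere in the loop:

```python
# Two kinds of perceiver face the same threats; only their perception
# differs. Selection does the rest. All numbers are arbitrary.
import random

random.seed(2)
population = {"accurate": 100, "inverted": 100}

for generation in range(50):
    for kind in population:
        offspring = 0
        for _ in range(population[kind]):
            threat_is_big = random.random() < 0.5
            # Accurate perceivers see big threats as big and flee them;
            # inverted perceivers see them as small and stand and fight.
            perceived_big = threat_is_big if kind == "accurate" else not threat_is_big
            survives = perceived_big or not threat_is_big
            if survives and random.random() < 0.6:  # then tries to reproduce
                offspring += 2
        population[kind] = min(offspring, 200)  # finite resources

print(population)  # e.g. {'accurate': 200, 'inverted': 0}
```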
And this is why the Argument from Reason is simply a dead letter. Like much of apologetics, it can try to bootstrap itself with convenient confusions, but once those confusions are corrected, it always falls flat.
For example, most AfR proponents consistently conflate sensory and perceptual reliability with cognitive reliability—confusing, for example, evolved capabilities like sight, tool use, and problem solving with our nonnatural (intelligently designed) firmware updates of critical reasoning, formal logics and maths, and the scientific method. They are quite right that we couldn’t have evolved the latter—because we observably didn’t. It took us eons to figure them out (and if we don’t keep teaching them to ensuing generations, we’ll forget them almost instantly in evolutionary time). And we were only able to figure them out by using what we did evolve a capacity for: sight, tool use, problem solving, etc. When you correct this confusion, and get all the facts and distinctions right, then their argument becomes self-refuting: by their own premises, God would have installed in us critical reasoning, formal logics and maths, and the scientific method. Which means the fact that no such firmware was naturally installed is actually evidence against the existence or involvement of God. Oops. (See, again, Why Plantinga’s Tiger Is Pseudoscience; but also the excellent treatment by Aron Lucas in The Argument from Cognitive Biases.)
As I told Justin Brierley this April, the brain is not a random jumble of atoms. And how atoms are arranged is the only thing that can make the difference between a causal process that is truth-finding and a causal process that’s not. As long as atoms are arranged so as to compute information (as computers now indisputably do, and human brains have been well enough demonstrated to do), and as long as that system has a way to check its outputs against reality (like, say, the basic human senses; as well as success or failure at avoiding danger and eating, and other like necessities), then suitably arranged systems of atoms can indeed tell the difference between true and false propositions. And in fact the entire evolution of brains, from their start in mere worms all the way to mammals and thence humans, is a story of an increasing attunement of these computers toward an ever-increasing reliability in ascertaining true facts about the world. Because complex animals have to navigate that world successfully, and doing so better than competitors at an affordable resource cost is always an advantage.
“But who built the computer?” is no response to this, because we already know how evolution by natural selection blindly yet smartly builds all sorts of sophisticated machines that aren’t random jumbles of atoms. Cells. Hearts. Livers. Hands. Kidneys. That among the things it has built over the last half billion years are truth-honing computers, otherwise known as “brains,” is established science. And that procedures like logic, math, science, and critical thought are even better at getting at the truth than any of our inherited equipment is confirmed in plain observation. That’s why we’ve learned to trust them. If they hadn’t worked so much better, they wouldn’t have gotten filed under that category of “rational thought” that AfR proponents are trying to claim physical machines can’t engage in. But that claim has simply been proved false.
Natural selection has selected truth-finding over truth-avoiding computational circuits, building ever-more-reliable belief-forming brains; and memetic selection (arising from creative human intelligence, tested against its directly observed feedback in results) has since done this even better and faster. There is nothing left to explain.
But What about Intentionality?
So rational inference obviously has reductively physical correlates. Any logic is just a step-by-step process; and any computer can carry out a step-by-step process; and human brains have all the expected physical parts to be doing that with (and this has been proved by all six convergent lines of evidence: subtractive, stimulative, comparative, observational, anatomical, and experimental; see Sense and Goodness without God, Part III.6, esp. 6.6, and III.9). Stymied by this fact, AfR proponents like Victor Reppert will retreat to insisting that, at least, some aspect of rational thought can’t be replicated by a computer. The one they like to press the most is “intentionality,” a philosophical term for the “aboutness” of a thought. That my thought of my uncle’s face is “about” my uncle (an actual person elsewhere in the world) is a property of the thought that seems, to them, too mysterious to be physically reducible. But we solved this decades ago. Purely physical computers routinely exhibit intentionality now, being able to relate computations they are making to external objects and events, and even reason about that correlation.
Cognitive science has established that the brain is a computer that constructs and runs virtual models—of things in the world, or things only in the mind, or of the mind itself (a “self” being simply an internal perceptual model of what’s going on inside one’s brain, as distinct from modeling the world outside one’s brain: see What Does It Mean to Call Consciousness an Illusion? and The Mind Is a Process Not an Object). All conscious states of mind consist of or connect with one or more virtual models like this. The relation these virtual models have to the world is that of corresponding or not corresponding to actual systems in the world. What philosophers call “intentionality” is an assignment (verbal or attentional) of a relation between those virtual models and the (hypothesized) real systems. This assignment of relation is a decision (conscious or not), and such decisions, as well as virtual models and actual systems, and patterns of correspondence between them, all can and do exist on physicalism; yet these four things are all that are needed for the philosophical idea of “intentionality” to exist in the world.
Then, from an analysis of data, a brain computes varying degrees of confidence that a virtual model does or does not correspond to a real system. Having confidence in such a correlation is what we call a belief that something is true, while having confidence that there isn’t such a correspondence is what we mean by a belief that something is false. Hence, for example, if there actually is no such correspondence between the virtual model and reality, then having confidence that there is such a correspondence is a false belief, but having confidence that there isn’t such a correspondence would be a true belief. Thus, for thoughts to have the property of being true or false only requires the existence or absence of correspondence and confidence, all of which can and do exist on physicalism. Confidence is computed physically by the strength and intensity of neural pathways and signals. The model being assigned a confidence is physically computed (in long-term memory even physically represented) in the neural network of the brain. And the correspondence (or lack of correspondence) between that model and reality is an ontological fact that can be observed by testing that model against physical interactions with the real world.
So when C.S. Lewis, the worst of philosophers, dumbly says, “To talk of one bit of matter being true about another bit of matter seems to me to be nonsense,” he just wasn’t thinking through what those words mean. “This bit of matter is true about that bit of matter” literally translates as “This system contains a pattern corresponding to a pattern in that system, in such a way that computations performed on this system will match and predict the behavior of that system.” This is not nonsense. It’s just a description of physical relations in a physical computer: neural model (System A), actual reality (System B), and a confidence (signal intensity / synaptic quantity) that the two correlate, or don’t. Hence confidence that System A correlates with System B is what we mean by “believing it’s true,” and confidence that System A doesn’t correlate with System B is what we mean by “believing it’s false.” That’s what it means to say that computations run on System A are “about” System B.
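And that translation is easy to model. Here is a minimal sketch (illustrative names and numbers of my own; not anyone’s published code) of a System A whose “aboutness” just is its physically maintained, confidence-weighted link to a System B:

```python
# System A (a model) is "about" System B (a real system) in that A's
# pattern is used to predict B's behavior, with a physically stored
# confidence that the two correlate.
class Belief:
    def __init__(self, model, referent):
        self.model = model        # System A: a predictive pattern
        self.referent = referent  # System B: what the model is "about"
        self.confidence = 0.5     # signal strength of the A-B link

    def test(self, situation):
        # Physically check System A's prediction against System B, then
        # reinforce or weaken the link: the analog of coming to
        # "believe it's true" or "believe it's false."
        match = self.model(situation) == self.referent(situation)
        self.confidence = min(1.0, max(0.0, self.confidence + (0.05 if match else -0.05)))

# "This bit of matter is true about that bit of matter": the pattern in
# the model matches and predicts the pattern in the system it tracks.
belief = Belief(model=lambda x: 2 * x, referent=lambda x: x + x)
for x in range(20):
    belief.test(x)
print(belief.confidence)  # -> 1.0: high confidence, i.e. "believed true"
```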
And this is as true of fiction as of fact. The lifespan of Yoda is a fictional characteristic of a fictional person. Though there is no Yoda, and thus no Yoda lifespan, there is a fiction of a Yoda, and that fiction has been related to a Yoda lifespan. The fiction was invented by one or more persons at some point, but since then has become a part of millions of minds, and remains recorded in documents and physical records of all varieties. And this is how there is “a fact of the matter” whether Yoda lived to around 800 years of age. It is not that there really is a Yoda who lived 800 years, but that there is a fiction of a Yoda who lived 800 years, and there is an accepted authority on the matter: those who invented or control the copyright and the material (like “guidebooks” to the Star Wars universe). To disagree whether Yoda lived so long is a debate over whose authority to follow in establishing the “official” Yoda lore. It is not a debate over the actual age of any real person. But there is still a real physical thing being debated: all that actual mental and physical content generated about the fictional character of Yoda (“officially,” and not), which is physically represented in sentences recorded in various books and other media, and in the equivalent of a relational database in human brains (including ours, once we learn of this, and think at all about it).
So, what is a thought about Yoda really about if there is no Yoda? Well, it is about a physically existing lore about a fictional character. And the existence of a fiction of a character is itself a physical reality. The “intention” of a Yoda proposition can and often does encompass certain assumptions or choices regarding which physically-existing authority “counts” as far as establishing what should be accepted as “true” about Yoda. For example, it is a physical fact that there are pornographic sex-stories involving Yoda—but one can still assert that Yoda never had sex by appealing to an accepted authority who is mutually accepted as having the right to establish what shall be true about the fiction of Yoda. And so on. In every case, even with fiction, there is something that physically exists in the world, and in human brains, that thoughts of Yoda are “about,” and per my analysis of intention, we choose that connection, that assignment of intention, and in so doing, we physically create a corresponding connection among recorded data about Yoda in our brain. (For more on the ontology of fiction see my related comparisons in Moral Ontology. Which may better inform your understanding of why Christian essentialism is false.)
The same likewise goes for false propositions. A proposition is still about what we assign it (or are told to assign it or conclude we should assign it) to be about, regardless of whether it states something true or false about that thing. So it is not correct that false propositions “do not correspond to anything in the material world.” It is correct to say that the entire content of such propositions does not so correspond, because that’s the meaning of saying something is false, but that is not what is referred to by the “intention” of a statement. What is referred to by its “intention” is whether a proposition is hypothesized about a particular set of data or a particular system in the real world. And that can be true even if that hypothesis is false. Even when the intentional object is a hypothesized or declared reality that doesn’t exist at all (like “The God Xenu lives in a star system just one light-year from the Sun”), even that model corresponds to something real (the contents of a radius of space one light-year from the Sun). And when we ditch even that (e.g. “The God Xenu lives in the mystical realm of Xanadu”), we still are talking about a physical model in the mind of whoever is formulating that idea (and thus whatever it is they think they mean by “Xanadu,” and so on), and whether it physically correlates with anything outside their mind (where, supposedly, “is” Xanadu?).
Thus, no matter how you cut the cake, propositions are always about something that physically exists—either as potential things, or untrue descriptions of actual things, or socially-constructed fictions, or physically imagined things, or simply, just, actual things. And when we assign an intention to any proposition, all we are doing is physically assigning a referent for it in the relational database of our brain. There is no “star system just one light-year from the Sun,” but there is such a location; so you can model what I mean when I claim that, and go and check the whole radius of a light-year from the Sun and confirm no such star is there. That you know to do that is because you have correctly decoded my sentence’s intentional content, and are thereby caused to physically relate those two pieces of information in your brain: “the existence of a star one light-year from the Sun” and the actual one-light-year radius of space around the actual Sun.
Even digital computers do this all the time now: they can build models of a System B with their own System A, and determine what confidence they should have that that model matches reality, and act accordingly. And this is fully, 100% explained by nothing more than simple physical-causal events in their circuitry. No extra magic required. They don’t get all confused as to which System B their System A is “about.” They keep the two physically linked in computational registers. Similarly, that I can keep distinct what my uncle Bob’s face looks like from what my uncle Joe’s face looks like is a product of the way my neural synapses literally, physically connect each face to each separate set of data (one on Joe; one on Bob); like, for example, their addresses, which I can then test by going and seeing which guy is in which location; and the more I verify that link, the stronger it gets in physical synaptic reinforcement, which my brain sensibly reads as increasing confidence that that link is correct, and thus “true.” There really isn’t anything mysterious here. But that Christian apologists routinely lack sufficient imagination to even comprehend the things they are talking about is a common reason they remain Christians; as is their complete failure to even find out what the relevant science on any subject already is.
Consider, as but one example, and already an old one, Daniel Dennett’s discussion in Consciousness Explained (1991: 85-95) of an actual robot named Shakey who could, all on its own, figure out from incomplete data whether an object before it was a box or a ramp, all by purely mechanical, inductive procedures. No human, no intelligence, told it whether any objects it encountered were boxes or ramps—it figured that out on its own, using rational induction. Reppert might say that humans built Shakey to perform inductive reasoning, but that’s moot. The question is not how the ability was caused, but what the ontology is of the ability itself. And the ontology here is completely physical. There is nothing going on in Shakey except purely physical, step-by-step causation. This can even be proven mathematically by a computer scientist: nothing more is needed for these procedures to be engaged and succeed except the mindless physical parts arranged just so; the circuits and software; electrons on a grid. So Reppert cannot claim that completely physical machines cannot engage in rational thought. We’ve proved they do. So he retreats to claiming that evolution cannot produce such a machine. But all evidence is against him there, too.
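Shakey’s actual program was of course more involved, but the logical character of the feat is easy to sketch: a purely mechanical induction over incomplete sensor data (all likelihood numbers below are invented for illustration):

```python
# Classify an object as "box" or "ramp" from noisy slope readings by
# purely mechanical induction: each observation shifts the odds.
def classify(readings):
    p_sloped = {"box": 0.1, "ramp": 0.8}  # how often each kind reads as sloped
    p = {"box": 0.5, "ramp": 0.5}         # prior: the robot is told nothing
    for slope in readings:
        sloped = slope > 0.2
        # Bayes' rule, as a mechanical update on stored numbers.
        p = {h: p[h] * (p_sloped[h] if sloped else 1 - p_sloped[h]) for h in p}
        total = sum(p.values())
        p = {h: v / total for h, v in p.items()}
    return max(p, key=p.get), p

label, posterior = classify([0.05, 0.31, 0.02, 0.04])  # mostly flat readings
print(label, posterior)  # -> "box", inferred from the data, not told
```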
The key point is this: a computer that performs deduction or induction, or any rational inference of any kind, and a computer that does not are always physically different. Likewise a computer that assigns some information, like a map of a room, to a hypothesized external reality, like an actual room it then must navigate, and thus trusts that it has correctly mapped, is physically different from a computer that doesn’t assign that map to that hypothesized room. Their output is always observably, physically different given the same inputs; but so is their internal, physical structure. And all the assignments of intentionality (what any computation is about) are part of that physical structure. Nothing more is needed for the system to work, and thus for intentionality to exist and be used as data, even by an insentient robot.
If you physically arrange a computer’s components so that it produces the same outputs as a rational deductive or inductive thinker, then you have a physical machine that performs deduction or induction. And if you arrange its components so that it physically associates (and thus “assigns”) some data to one category and other data to another category, it is also physically producing and tracking intentionality. Period. There is nothing more to it. No Platonic realm or any kind of explanatory dualism need be appealed to at all. No need of gods, souls, faeries, angels, magic spells, or any other mumbo jumbo. And this has been well-known to science for nearly my entire lifetime. How did Reppert miss the memo?
For Lewis, you could excuse him for just being unimaginative and not very smart (though all the computational physics I’m talking about was already sufficiently known even then, even if more obscurely). But Reppert has no excuse here. Nor do any others who still try to push this argument. Even twenty years ago, the Stanford Encyclopedia of Philosophy would have told him (if he’d bother to read it) that “among philosophers attracted to a physicalist ontology, few have accepted the outright eliminativist materialist denial of the reality of beliefs and desires” and that in fact “a significant number of physicalist philosophers subscribe to the task of reconciling the existence of intentionality with a physicalist ontology” (see Intentionality and Consciousness and Intentionality). Daniel Dennett, for example, had already presented a fully coherent physicalist theory of intentionality in Kinds of Minds (1997; pp. 19-56, 81-118). Nothing has since changed. Christian apologists are just ignoring everyone, and pretending their questions haven’t already been amply answered, and claiming to their flock of dupes that “no one has answered their questions.” Which is typical of Christian apologetics; but absolute shit as a method of understanding reality.
But What about Propositions?
So, hosed on their claims about physics, hosed on their claims about intentionality, they’ll move the whack-a-mole game to “propositions,” and insist those are too mysterious and nebulous to be explicable or exist on physicalism, “therefore God.” Or some variant thereof, such as that, well, okay, maybe we can explain the existence of propositions, but then we can’t explain how they can have this mysterious property of being “true” or “false.” To push this line they might straw man physicalism by pretending all physicalists are eliminativists, because that helps them pretend they have a point. But almost no physicalists are eliminativists; and even the few actual eliminativists don’t claim what Christians think.
Eliminativists aren’t really eliminating anything; they just redefine everything. So “truth” and “beliefs” and “desires,” even “propositions” still exist in every eliminativist’s worldview; they just call them something else, or claim they work differently than commonly thought. But apologists, being lazy, make no effort to understand anything eliminativists actually are saying. Anyone actually responsible and attentive will realize eliminativism offers them no harbor. But physicalists don’t have to resort to the semantic trickery of eliminativism anyway. They can just admit that redefining “truth” and “beliefs” and “desires” and “propositions,” or claiming they work differently than “commonly thought,” actually isn’t “eliminating” them; and then just get back to doing philosophy on them. So let’s do that.
In philosophical parlance a proposition is the content shared by every possible sentence of identical meaning. So, if I write the sentence, “cats have ears,” that is not itself the proposition, but an instantiation of the proposition in the form of a sentence—and only one particular sentence. That same proposition can be instantiated in infinitely many other sentences, all of which mean exactly the same thing. Like, “felines possess organs of hearing” or “Katzen haben Ohren.” Each of these is a sentence. But “the proposition” is the thing, whatever it is, that all these sentences are saying. So in that sense propositions never physically exist in the same way a sentence does. The proposition is the mere concept of the idea of what those sentences affirm. It is the shared content of those sentences. Which can indeed prompt one to ask: in what sense, then, do propositions exist?
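In computational terms the idea is trivial to represent: many sentences, one content. Here is a toy illustration, using a hypothetical encoding of my own:

```python
# Many sentences, one proposition. The "proposition" here is just the
# canonical content all the sentences decode to; no string is it.
SENTENCE_TO_CONTENT = {
    "cats have ears":                    ("cat", "has-part", "ear"),
    "felines possess organs of hearing": ("cat", "has-part", "ear"),
    "Katzen haben Ohren":                ("cat", "has-part", "ear"),
}

# Two sentences instantiate the same proposition iff they decode to the
# same content; the shared content, not any particular string, is "it."
assert SENTENCE_TO_CONTENT["cats have ears"] == SENTENCE_TO_CONTENT["Katzen haben Ohren"]
```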
Every meaningful proposition is the content or output of a virtual model. Or to be more precise: actual propositions, of actual models; and potential propositions, of potential models (I have explained elsewhere how all potential things always exist, as potentials; and the sum of all actual things and all potential things exhausts all possible things). As such, propositions are not the primary driver of human cognition. They are a secondary output of what actually is that primary driver: cognitive modeling. Propositions get formulated in a language as an aid to computation, but when they are not formulated, they merely describe the content of a nonlinguistic computation of a virtual model (in the human mind, or a digital computer). But in either case, a brain computes degrees of confidence in any given “proposition” (cognitive model) by running that virtual model and comparing its content and output with observational data, or the output of other computations.
Thus, when I say I “accept” Proposition A (like, something about cats and ears) this means that my brain computes a high degree of confidence that Virtual Model A (conventionally describable by the sentence “Katzen haben Ohren”) corresponds to a system in the real world (or another system in our own or another’s brain; or in stored models in books, as the case may be). Whereas if I “reject” A, then I am saying I have a high degree of confidence that A does not so correspond. And if I “suspend judgment,” then I have a low degree of confidence either way. This is all the view of most informed cognitive philosophers today (from Daniel Dennett to Paul and Patricia Churchland; and many of the scientists whose books and papers I have cited up to now).
Likewise, in the human brain, as in any computer whatever, the output of one computation (including the output of a confidence level) is often physically the input of another ensuing computation, which thereby has a causal effect on that other computation’s output. Every conscious computation in the brain is the computation of either a virtual model or data physically connected to or computed from a virtual model (such as a confidence level). Since a proposition literally is the content or output of a virtual model, propositional content therefore literally has a physical-causal effect on further computation that relies on that virtual-model computation (which literally is the “proposition” in question)—up to and including causing the formulation of a “sentence” to record or communicate that proposition. Which a receiver of that sentence can then decode—for example, by building, as the words instruct, and connected background knowledge implies, a cognitive model of “cats” “having” “ears.”
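To make that concrete, here is a minimal toy sketch in Python of the causal chain just described. Every function name, feature set, and threshold here is my own hypothetical illustration, not anything drawn from Reppert or from the cognitive-science literature: a virtual model is run against observational data, and its output (a confidence level) then physically becomes the input to a further computation.

```python
# A toy sketch of the causal chain described above (all names hypothetical):
# a "virtual model" is compared against observations; the resulting output
# (a degree of confidence) then serves as the physical input to a further
# computation, which determines the final verdict on the proposition.

def run_model(predicted_features: set, observed_features: set) -> float:
    """Compare a virtual model's predicted pattern against observed data,
    returning a degree of confidence that the model corresponds to reality."""
    if not predicted_features:
        return 0.0
    matches = len(predicted_features & observed_features)
    return matches / len(predicted_features)

def decide(confidence: float) -> str:
    """A downstream computation whose output causally depends on the
    upstream output: accept, reject, or suspend judgment."""
    if confidence > 0.8:
        return "accept"
    if confidence < 0.2:
        return "reject"
    return "suspend judgment"

# The proposition "cats have ears," treated as the content of a model:
cat_model = {"fur", "ears", "whiskers"}
observation = {"fur", "ears", "whiskers", "tail"}

confidence = run_model(cat_model, observation)  # one computation's output...
verdict = decide(confidence)                    # ...is the next one's input
print(verdict)  # -> "accept"
```

The toy is only structural, of course. The point is that the model’s content makes a physical difference to what gets computed next: change the model, and the downstream output changes with it.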
So physicalists can account for what propositions are, and how they “exist,” and how they have physical-causal effects in human reasoning. The question then is, how do they physically possess a property of being “true” or “false”? With respect to propositions about “logic” (or any form of reliable rational inference), how logic exists, why it governs all rational thought, and how we came to discover and make use of this fact, I already covered earlier this month (see The Ontology of Logic). And the answers there could easily be adapted to any other kind of proposition about anything. But let’s spell that out.
Take Victor Reppert’s formulation: “If naturalism is true, then no event can cause another event in virtue of its propositional content” (p. 78). In this context Reppert only discusses deductive reasoning, which is strange, since inductive reasoning is more important: inductive reasoning could (and in fact does) explain the “discovery” of deductive reasoning through adaptive exploration (in both East and West, logic was discovered by experimentally exploring persuasion-space, in the form of Rhetoric in Greece and Semantics in China). But let’s take deductive reasoning as our example. Appealing to standard Aristotelian syllogistic logic, Reppert correctly says that for “rational inference” to be “possible,” we must come to believe a syllogism’s conclusion is true by our “being in the state of entertaining and accepting” the major premise and the minor premise, and this state of being must somehow “cause” us to entertain and accept the conclusion. The key move here is that for this to be true, mental events must cause other mental events “in virtue of the propositional content of those events.”
That much we agree on. Where we disagree is when Reppert declares that it “might” be the case “that the propositional content of these brain states is irrelevant to the way in which they [causally] succeed one another in the brain” (p. 78). Of course, saying “might” here destroys his own argument. If its central premise only “might” be true, then the argument’s conclusion only “might” be true, and that’s useless. We want to know whether his conclusion is true; not just whether it “might” be. But his premise is false anyway. So he can’t even honestly say it “might” be true. It simply isn’t true. Reppert says “if all causation is physical” then “it might be asked how the content of a mental state could possibly be relevant to what causes what in the world” (p. 79). And we can say, “Sure.” It might be asked. But that doesn’t mean there isn’t an answer.
Every meaningful proposition corresponds to the content of a virtual model or the output of such a model. And again, actual propositions obtain from actual models, while potential propositions obtain from potential models. Hence there is an infinity of “potential” propositions that are not now and probably never will be actualized in thought by any computer or brain, just as there is an infinity of “constellations” that exist in the terrestrial star field that will never be traced and named. Even though we will never name all possible shapes that could be found by connecting those dots in the sky, nevertheless those shapes really are there, since they physically exist as a direct consequence of the physical existence and arrangement of those stars. There doesn’t have to be some nonphysical Platonic “realm” where those shapes all exist. The physical facts alone are sufficient to establish the existence of that infinity of shapes—whether any mind notices them or not. So, also, for all propositions that “could” potentially be thought.
Reppert rightly notes that “if physics is a closed system, then it seems impossible for abstract entities, even if they exist, to make any difference in how beliefs are caused” (p. 54). Worded that way, I agree. But I follow Aristotle: I do not believe there are any such things as abstract “entities,” only “abstractions,” which are essentially just human labels for repeating or repeatable (and thus learnable and recognizable) patterns in sensory or conceptual experience (see Why A Neo-Aristotelian Naturalism Is Probably True). For example, we can say “triangularity” exists because of the physical fact that a shape with three sides is always physically possible, and actually is physically manifest in many places, and the pattern of arrangement in question (the having of three sides) is the same in every one of those cases, and thus has the same consequences in all. All that is required for that to be true is the existence of places and sides, which are physical facts, not Platonic or supernatural ones.
What the human brain does with this (and the brain of many other animals as well) is physically “detect” repeating patterns like that, and remember them, so it can detect where that pattern repeats itself, and take advantage of that information; or model them virtually so as to make predictions about them when they are encountered. Humans, of course, can assign code-words for those repeating or repeatable patterns that their brains detect or construct, and those words are called “abstractions.” Reifying them into “entities” or “objects,” as Plato did, is a fallacy. But abstract words still do refer to real things: repeatable patterns. Whether these physical patterns are repeated in the physical universe outside our minds or only in our minds, or actually repeated anywhere or only potentially repeatable, does not matter for the point. Because human minds are physical machines and part of the physical universe; and the potential for matter-energy in space-time to be reshaped in any way is always an actual fact of all matter and energy across all space and time.
Once we recognize all this, then it becomes clear that the content of a mental state is literally and physically the content of a virtual model computation, which in turn produces a computational output that physically causes a subsequent computation to produce a certain output by providing the physical input for it, and so on. This is how propositions can physically cause progressions of thought, and thus operate as physical causes in any rational inference. Since propositions simply are the content of models, and models simply are physical computations in a physical machine (whether a human brain or a digital one), their content will obviously have distinct causal effects on what is then computed.
When I run a model of “there is a cat in my house,” I experience seeing an animal with blood circulating in its body, and thus can infer that “there is a cat in my house” includes (and thus, we say, “entails”) “there is blood in my house.” But what is happening physically to produce that experience is an actual physical, computational model nesting blood within the category of cat, which physical fact causes the output: a recognition that “cat in my house” entails “blood in my house.” And we can program a desktop computer to run the same physical computation, without any conscious experience being involved or required—because that is a product, not a cause, of the computation being run in human brains. Consciousness is a secondary output; not a primary input. The process runs perfectly fine, from premise to conclusion, in any computer without it. And this is how propositional content physically causes rational inferences in any computer, whether of meat or wire.
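And indeed, anyone can run this on a desktop machine today. Here is a minimal toy sketch (again with hypothetical names of my own devising, not any actual cognitive architecture) of that nesting relation: the physical fact that “blood” is stored inside the “cat” category is what causes the machine to output the entailment, no consciousness required.

```python
# A toy sketch of category nesting (all names hypothetical): the physical
# storage of "blood" inside the "cat" model is what causes the machine to
# compute the entailment, without any conscious experience involved.

CATEGORIES = {
    "cat": {"ears", "fur", "blood"},   # the model nests blood within cat
    "rock": {"mass", "hardness"},
}

def entails(subject: str, feature: str) -> bool:
    """Compute whether "<subject> in my house" entails "<feature> in my
    house," purely from what the subject's stored model contains."""
    return feature in CATEGORIES.get(subject, set())

print(entails("cat", "blood"))   # -> True
print(entails("rock", "blood"))  # -> False
```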
Indeed, on physicalism, a mental state cannot have propositional content without the physical presence in the brain of a model of the very pattern that proposition defines. Without that physical pattern in the computer of the human brain, that brain would never produce any corresponding thought. Conversely, the fact that that physical pattern, rather than some other pattern, is in the computer of the brain is precisely what causes that brain to run its computation in one direction rather than another. Since the pattern must physically exist for the thought (the “proposition”) to exist (at least actually; rather than potentially), and since a physical pattern of brain activity obviously has a causal effect on the future course of that brain’s physical activity—an effect that will differ from that of a different pattern, in precisely the respect that matters here—it simply makes no sense to claim that naturalism entails “no event can cause another event in virtue of its propositional content.” Even on the purest of physicalism, that is the only way mental events can causally operate.
Likewise, if “truth” is the degree to which the pattern of a virtual model computed by a brain corresponds to the pattern of an actual system in the real world that that virtual model is being physically associated with, and “knowledge” (justified true belief) is the possession of a rationally calculated belief (“having confidence”) that such a correspondence exists when in fact it does, then “truth” can obviously exist in a purely physical world-system. For that correspondence (between the virtual and actual model), on which the reality of truth depends, is a physical fact, as is the confidence (physically generated in synaptic activity), on which the reality of “knowledge” depends.
Because how we know a theory is true is not the same thing as what it means for a theory “to be” true—these two things are only related operationally. The meaning of “true” is model-to-world pattern correspondence. The “how” of truth is our observation of feedback in response to our behavior—all the ways we “test” whether there is a model-to-world pattern correspondence. But nothing here is nonphysical. The virtual model, physically computed. The actual model, a physical fact apart from any mind. The confidence that they match, physically computed. The reality of whether they match, a physical fact apart from any mind. Propositions are cognitive models, and are true to the degree they correspond to the real systems they are hypothesized to model, and are known to be true to the degree we can rationally infer from physical interactions that they do so correspond. No magic needed.
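As a toy illustration of those definitions (the names and data here are hypothetical, and no claim about actual neural coding is intended), “truth” on this account is just a computable degree of pattern correspondence between a virtual model and the system it is hypothesized to model.

```python
# A toy sketch of "truth" as model-to-world pattern correspondence (all
# names hypothetical): the degree of match is a fact about the two patterns,
# whether or not any mind ever computes it.

def degree_of_truth(virtual_model: dict, actual_system: dict) -> float:
    """Fraction of the model's claimed feature-values that match the world."""
    if not virtual_model:
        return 0.0
    hits = sum(1 for key, value in virtual_model.items()
               if actual_system.get(key) == value)
    return hits / len(virtual_model)

world = {"has_ears": True, "has_blood": True, "is_reptile": False}
belief = {"has_ears": True, "is_reptile": False}  # a proposition, as a model

print(degree_of_truth(belief, world))  # -> 1.0 (full correspondence)
```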
The Evolution of Reason
So that approach fails. That leaves just one last desperate move: a naive apologist might try to concede all that and admit brains could in principle have and use a purely physical machinery of rational inference-making, but then insist (despite all we have said already) that there is no way natural selection could have produced such an organ. This is always ridiculous (see TalkOrigins standard entries CB400.1 and CB402 and all the papers on point I cited above). But they try it anyway. So let’s talk about that.
For example, Christians will ask how we could evolve the ability to think of propositions, when almost none of the propositions that can be thought are directly, or even at all, relevant to survival—and thus, supposedly, an ability to do this can’t have been selected by evolution. But that’s answered by the fact that any cognitive skill that is good at discovering useful true propositions will inevitably also be able to discover useless true propositions—because there is no way to know “in advance” which truths will be useful or not. So a truth-finding engine will simply discover true things; only then can it work out if they are useful. For this same reason, it is also not possible to build a belief-finding engine that reliably produces useful false beliefs—because it would have to first ascertain what is true before it can ascertain what is useful (which is, after all, just another question about what is true).
As Victor Reppert put the concern, “we could effectively go through our daily life without knowing, or needing to know, that physical reality has a molecular and an atomic structure” (p. 85), which is true, but not relevant to the question at hand, which is, as Reppert puts it, “would naturalistic evolution give us mostly true beliefs, or merely just those falsehoods that are useful for survival?” (pp. 84-85). The answer to that is easily worked out: almost no falsehoods are actually useful for survival (that Christians don’t notice this, because they suck at imagining the consequences of their own models, I explain in Plantinga’s Tiger and Review of Reppert), and it is much harder to build a belief-finding engine that successfully locates those—because any engine that can do that, will also tend to discover they are false. Quite simply, the only easy and survival-reliable engine to have is one that zeroes in on true beliefs, and then works out what uses that has. To quest for useful false beliefs requires a kind of bizarre Cartesian Demon that evolution could never have produced. I further explain this epistemic point in What’s the Harm? Why Religious Belief Is Always Bad.
Reppert’s assumption here is as fallacious a basis for the AfR as the assertion that we (as animals) could survive and flourish without an opposable thumb (which is true), therefore nature would never have selected for it (which is not true). This is the same as claiming evolution can never explain how we can play complex musical instruments like violins, when that has no survival value. The reality is the other way around: evolution selects for complex tool use (hence opposable thumbs are one expected development to see realized); yet there is no way to be good at complex tool use and not be good at using complex tools—like musical instruments. Our capacity to explore an infinite proposition-space is exactly the same: the cognitive skill of exploring proposition-space is incredibly useful to survival; it just also has the inevitable consequence of making us able to explore any part of proposition-space as well, even those parts that are “useless to survival.” The only way it could be otherwise is to have some Cartesian Demon gerrymandering our belief-engine in just such a bizarre way as to only discover useful truths (or even more bizarrely, useful falsehoods). That would require intelligent design. Natural selection would get us, instead, what we observe: simply a capacity for discovering the truth about things, without any gerrymandered sequestering of what that capacity then finds.
Like opposable thumbs, you just can’t get the survival advantage without all the “useless” capability that comes with it as well—like playing musical instruments and writing science fiction…and figuring out that physical reality has a molecular and an atomic structure. Which, contra Reppert, has actually had tremendous survival value, as almost our entire civilization now is built on that revelation. But it is still true that our ability to discover that wasn’t selected because of that fact. That ability evolved long before we figured out how to repurpose it to do science. To the contrary, the ability we are talking about here (principally, language and problem solving, hence creative cognitive modeling) was selected because it could get us fire and wheels and huts and diplomacy and trade. It’s just that anything that can do that can unavoidably get to atoms, vaccines, and landing on the moon—it’s just a matter of time.
Logic follows necessarily from language and problem solving, and language and problem solving are a profoundly unbeatable advantage for coping with any environment, yet once in place make possible the discovery and pursuit of any and every science (see my quote earlier from The End of Christianity). Human evolution is typified by what is called “evolved generalization of functions,” the most notable examples being our teeth (which combine a diverse selection of biting tactics to suit omnivorism—with its little canines and molars, our mouth is a jack of all trades, master of none) and our fleshy, delicate fingers with weak, stubby nails. This development abandons the advantages most animals reap through the specialization of limb function (think of claws, paws, fangs, fins or wings), in favor of the advantages of generalized function. This produces “innate adaptability,” the ability to adapt to changing environments almost immediately, without the prolonged (and haphazard) process of reproduction, mutation, and selection. This is actually the evolutionary advantage Homo sapiens struck upon and exploited and developed across the board.
Literally any organ that provides an increase in innate adaptability will be advantageous—provided it does not come at too great a cost. For instance, some extinct species of hominid evolved brains so large that they probably resulted in enough birth complications to overwhelm all the advantages such brains would have afforded. The species that survives today has the second largest brain-size in comparison with all other hominids; so it obviously found the ideal balance between fatal brain-size and the advantages of large-brain function. And because the basic ability to reason provides an obvious increase in innate adaptability (hence fire, the wheel, shaped weapons—or even farther back in evolutionary history: deception, strategic hunting, planning, negotiation), there will certainly be evolutionary pressure toward the development of reasoning in the right conditions.
And that only required an available ecological niche that allowed the ensuing advantages to exceed the concomitant disadvantages that arise from the loss of specialized function, and other setbacks, like a large brain that complicates birth, is more vulnerable to injury, takes years to train up through a vulnerable childhood, and continually consumes an inordinate amount of oxygen and nutrients. And when we look at the archaeological evidence of when and where humans evolved, we see it happened in just such a niche, right where a pathway like this finally became available, for a species (the great apes we evolved from) well-enough advanced to start exploiting a rapidly changing environment requiring superior prediction, flexibility, and cooperation to successfully navigate (see “The Emergence of Cooperation by Evolutionary Generalization,” “Evolutionary Function of Conscious Information Processing,” “Climate Effects on Human Evolution,” and so on).
The actual biological functions we now use to discover the atomic structure of matter are identical to those that allowed the discovery of vital technologies (like fire, weapons, clothes—or even canteens, e.g. hollowed gourds for carrying water) and the development of advantageous behaviors (like out-thinking stalked prey, deceiving enemies, forming contracts and alliances with friendlies, preserving food and carrying water with you). Both of those skills (inventing adaptive technologies and techniques) prevented our early extinction and led to our planetary supremacy even before we discovered, for example, the principles of deductive logic or the scientific method. And since the early uses of natural reason certainly advanced our ability to cope and increased our differential reproductive success in countless ways, as soon as means and opportunity arose, there was an obvious evolutionary pressure to develop it.
And still, none of the basic functions of this natural reason allow much room for the “useful falsehoods” that Reppert or anyone requires for their AfR to get off the ground (hence Why Plantinga’s Tiger Is Pseudoscience). There are extraordinarily few “useful falsehoods” that could be “believed true” by any organ capable of perceiving better procedures for accomplishing novel tasks, identifying and correcting errors, counteracting and employing deception, and enhancing efficiency. An organ capable of these things can certainly generate falsehoods, and there is no doubt ours does—nor should we expect a flawless organ from natural selection, although (undermining Reppert’s and Plantinga’s dream of theism) we should expect a flawless organ, or at least a far superior one to what we actually have, if our reasoning ability came instead from a flawless engineer who wanted us to reason as well as possible (as any benevolent engineer would). But any organ that does all those other things, will also inevitably be able to detect even the falsity of useful beliefs. Because there is no power around to prevent that use of those skills.
Nature selected for computers (brains) in animals precisely because the making of “correct inferences” aids survival. Animals show an increasing ability to do that as brains evolved in size and complexity over time, from worms to mice to apes to us. That means these inferences are not true because they are true in anyone’s “perception” but because they are simply and literally true, whether any consciousness is ever around to “notice” or not. If the process did not produce true conclusions (at least conclusions more true, and more often true, than lesser cognitive organs would produce), then brains would not produce the correct responses, and, more importantly, would not be able to learn from mistakes, or even from success, nor innovate new solutions to problems. Therefore, brains have to produce a greater balance of true inferences for them to be of any use to an animal. What we see in humans is the inevitable apex of this development, directly in line with our other tendencies toward adaptive generalization of function (in teeth and hands).
Hence, when Victor Reppert claims, “One can pursue effective manipulation of the world, or reproductive fitness, or fame, or glory, or sainthood, but these goals are distinct from the purely epistemic goal of gathering as many (or as high of quality) truths as possible” (p. 77), anyone who’s not a lazy thinker can readily see that these being distinct does not mean they’re unconnected. Every single one of these goals can be better achieved—faster, more thoroughly, more surely, more safely, and more efficiently—if one has on hand a generic truth-finding tool to aid them in that pursuit. Therefore, such a tool would definitely confer a survival advantage on anyone who had it regardless of their goals. Because it would confer an advantage for the achievement of almost every conceivable goal; just as human teeth facilitate chewing almost any food, and human hands facilitate handling almost any tool. Imagine two animals: one that has a tool that helps it in almost every conceivable endeavor, and one that lacks such a tool. Which animal has the advantage? That’s a no-brainer. Pun intended.
So there can be no doubt that a truth-finding engine is valuable in and of itself and, if hit upon, would not likely fail to survive and develop. Thus, when apologists like Reppert and Plantinga suppose that an animal can survive “just as well” without some capacity for reason, they are simply wrong. Yes, an animal can do well without reasoning. But they cannot do anywhere near as well as animals who have reasoning. That is why humans are the only species in the history of this planet who have been able to sustain themselves at a population many times beyond the natural ecological capacity of their environment, and who have the means to escape nearly every catastrophe that has driven other species extinct. We now have the ability (even if not yet the will) to avoid destruction at the hands of diseases and asteroids, for example—and have long had the ability to escape all standard primitive catastrophes such as fire, flood, famine, drought, and plague.
Thus, Patricia Churchland is right that, “boiled down to essentials, a nervous system enables the organism to succeed” in “feeding, fleeing, fighting, and reproducing,” but this does not mean that “truth, whatever that is, takes the hindmost” place in importance. For that would be missing the forest for the trees: a truth-finding organ aids “feeding, fleeing, fighting, and reproducing” in a way no other advancement or even array of advancements can ever come close to matching. Thus, it cannot be said that truth takes the hindmost. That is only true for animals who have not developed a truth-finding organ. For animals who have developed such an organ, truth is as vital as the opposable thumb. For it is precisely by being able to construct virtual models of the world and “play them out” in our brains that we can come to understand and predict that world, and make use of that understanding and prediction to benefit our quest for “differential reproductive success.” But this advantage is only gained if we are able to construct, more often than chance, models that match the real world or that approach such a correspondence. And that is what “truth” physically is.
Evolution is still an imperfect engineer. That is why it is no longer the case that the “differential reproductive success” of human genomes drives any advances in science or reason, or that it matters much at all anymore. Genetic evolution accounts for the development of a crude truth-finding organ in the human brain. But “Reason,” as AfR proponents often intend the term, encompasses the formal rules and procedures of various logics, including the logic of the scientific method. And these have not arisen from genetics. They did not evolve by natural selection. Yes, they do describe and refine computational processes naturally present in the human brain; but they greatly improve the accuracy and greatly reduce the errors of that mechanism, by restructuring the brain memetically—which means, through intelligent experiment and environmental learning, not natural selection. We did evolve some handy tools for getting at truths of the world; but eventually we used those tools to figure out that in fact we have to replace them with far more competent and reliable ones, such as logic, math, and science. These teach us, for example, that human intuitive reasoning is deeply unreliable; yet it can be made far more reliable by adopting a few simple procedures in its place.
Most of what humans “are” is not genetic but memetic. For example, our self-consciousness is constructed over time, and is therefore an assembly of computed and acquired memes, not an assembly of genes. Genes do define and limit and create tendencies in how the brain can respond to, compute, and assimilate memes (“concepts,” the components or content of virtual cognitive models). But the mind itself, our “identities” as persons, from memories to values, is largely a memetic system. We are made, not born. It is thus the case now that the “differential reproductive success” of memetic systems (ideologies, ideas, techniques, technologies, languages, procedures, etc.) matters far more than that of genetic systems. So any account of the role of evolution in the development of human reason must address the role of the memetic ecology even more than the biological.
The AfR is thus based on confusing two completely different things: the cognitive abilities innate to the human brain, which did biologically evolve, and the procedures of reasoning that those abilities were able eventually to discover, which did not. The one is the function of a biological organ, and is deeply (but still not fatally) flawed. The other is a technology, and was intelligently designed to be more reliable than the reasoning skills we inherited biologically—but they were intelligently designed by us, not gods, and not in some immediate flash of supernatural genius, but only after eons of trial and error and guessing and experimentation. And once you start realizing this, the AfR doesn’t just fail—it actually transforms into an argument against the existence of any God. Because the facts as they actually are, are fully expected and predicted by the absence of any gods; yet are not at all expected, nor in any way even plausible, on the premise that God had anything to do with either our naturally constructed minds or our innovated intellectual skills.
Conclusion
The Argument from Reason attempts to go from the claim that rational thought, or some aspect of it, cannot be produced by purely physical machines (whether transistors or neurons), to the conclusion that it must therefore come from God. Which inference isn’t even valid—you actually can’t get to God from that premise even were it true. But the premise is also bollocks. We not only can explain how all observed aspects of rational thought are produced by physical machines, we have explained it, and demonstrated those explanations probable with vast quantities of empirical and scientific evidence.
One might concede this, and abandon the AfR, and try to retreat instead to a completely different argument for God, the AfC, or Argument from Consciousness, insisting that the one thing we haven’t fully explained yet—why the operation of certain physical circuits produces certain qualitative experiences, like why the “redness circuit” generates what we experience as the color red, rather than green, or instead a smell, or “the feeling of a logical relation,” or whatever—therefore requires a God. Which is a standard God of the Gaps fallacy. It’s also not the AfR. The AfR is a well-beaten dead horse now.
But even the AfC isn’t going to help you.
As I wrote in The End of Christianity (p. 300):
Scientifically speaking, the God hypothesis is not likely to fare well in the future; after all, we can already deduce from known scientific facts and the presumed absence of [intelligent design] many features of qualitative experience (such as why we don’t normally smell in color or why we see the specific colors we do and not others—including colors we “see” but that don’t really exist as specific frequencies of light, like magenta), whereas we could never have predicted those things from [any theory of intelligent design] and still cannot. So far, every cause of mental phenomena discovered has not been [a product of intelligent design], so the prior probability that any remaining phenomena [of the mind] will be explained by [it] is continually shrinking.
But even setting that aside, we have no knowledge … that renders qualia any more likely on [intelligent design] than on its absence, so qualia make no difference to [any] calculation [here]. There is no more evidence to show that qualia are impossible on the absence of [intelligent design] than that qualia are inevitable on the absence of [it]. We can at best split the difference and say it’s 50 percent. But we must say the same for [intelligent design], because we can only get, for example, “god experiences qualia, too” or “god wants us to experience qualia,” by assuming that’s the case ad hoc (since we don’t actually have any evidence of the fact), which halves the prior probability (since so far as we honestly know, there is at best a 50-50 chance that “a very powerful self-existent being who creates things by design” does either, much less both), and if we don’t assume either theoretical element ad hoc, then the probability of qualia on [a theory of intelligent design] is still only 50 percent. Either way, the math comes out the same. Qualia simply do not argue for or against [intelligent design].
In short, even in the as-yet-unexplained case of experiential qualia, all evidence points toward the explanation more likely turning out to be physical rather than supernatural; no evidence points toward God having anything to do with it. You can’t get an argument for God out of that. Nor can you get one out of the Argument from Reason. The AfR doesn’t even rebut physicalism, much less atheism—which doesn’t require physicalism. That just happens also to be most probably true, thereby rendering atheism all the more certain. And there just is no honest or competent apologetic that can escape that fact.
I’m confused on this point:
“A naive AfR proponent will say that on naturalism all false beliefs are the product of causal-deterministic systems; therefore naturalism cannot explain how we could tell the difference between a false belief and a true one, since true beliefs are also the product of causal-deterministic systems. But this is the fallacy of affirming the consequent. Simply because it just so happens that all false beliefs are formed causally, it does not follow that all causally formed beliefs are false.”
All false beliefs are the product of causal-deterministic systems.
True beliefs are also the product of causal-deterministic systems.
Therefore naturalism cannot explain how we could tell the difference between a false belief and a true one.
Your response doesn’t make sense. The point you’ve just stated (in the list above) is that “naturalism cannot explain how we could tell the difference between a false belief and a true one”, yet you proceed by responding “Simply because it just so happens that all false beliefs are formed causally, it does not follow that all causally formed beliefs are false” — but, that was clearly not a premise. Nobody said “all causally formed beliefs are false” at all. What was asserted is that “naturalism cannot explain HOW WE COULD TELL THE DIFFERENCE” between a false or true belief.
Your response, as written, looks more like a smokescreen than anything else.
Comments?
I’m not sure what you are missing here. The syllogism you present is a non sequitur (“Therefore naturalism cannot explain” does not follow from the given premises preceding). That is my point.
Perhaps you mean to clarify that their position is inductive (probabilistic), not deductive, i.e. the falsely affirmed consequent is not that all causally formed beliefs are false, but that a causally formed belief is unlikely to be true.
In other words, in a deductive fallacy, it would be “if there is a false belief, then it was caused; therefore, if a belief was caused, it is a false belief,” but an inductive version of that fallacy would be “if there is a belief unlikely to be true, then it was caused; therefore if a belief was caused, it is unlikely to be true.” Which is still illogical.
As to how we tell the difference between a causal process that is reliable and a causal process that is not, this is what the entire article you are commenting on is about: how we tell the difference (and not just we, but even blind natural selection) is by observing the congruence of the output of the process and reality (by observational testing or simple death-or-survival outcomes). The role of causation alone has no bearing on this distinction. The likelihood that a causally produced belief is true can only be ascertained from what is causing it (such as a rational or a nonrational process), not merely from the fact that it is caused.
The syllogism you just made here is false and obviously false.
“All good meals are the product of causal-deterministic systems.
Bad meals are also the product of causal-deterministic systems.
Therefore naturalism cannot tell the difference between good and bad meals”.
Is this sensible in any way? No. The fact that X can produce Y or Z obviously does not mean that you can’t tell when X will produce Y instead of Z, or why. That’s sort of the entire point of Richard’s article: For example, when he says, “As long as atoms are arranged so as to compute information (as computers now indisputably do, and human brains have been well enough demonstrated to do), and as long as that system has a way to check its outputs against reality (like, say, the basic human senses; as well as success or failure at avoiding danger and eating, and other like necessities), then suitably arranged systems of atoms can indeed tell the difference between true and false propositions.”
Thus not only can naturalism explain how people can differentiate between true and false beliefs, but it itself is capable of doing so.
You say that Richard’s article is a smokescreen, but it facially addressed this point.
Worse, how the heck does theism help? Theism very demonstrably does a very bad job of differentiating between true and false beliefs. Theists can look at the exact same text in the same language and derive vastly different information.
The problem with the theistic explanation for reason is that, as with all theistic arguments, it makes predictions that are falsified, and that theists virtually never defend, because their goal is to defend theism, not explain phenomena. If our reason is the product of a god, it should be consistent and accurate. Our intuitions should be perfect, we should have no psychological biases, and whatever built-in information we have about the world should be perfectly accurate in all contexts.
Figuring out why we are capable of achieving the reason we can is a complex process. Theism is a non-explanation.
Hello, Dr. Carrier. I am not a scholar nor am I a person with halfway decent intelligence. Even though I do a lot of reading, you are probably the best non-biased scholar that I have come across and so I wanted to ask your opinion on the Moral Argument for God’s Existence.
The Moral Argument I am speaking of is the one Dr. Craig uses in his debates. Now, since I am sure you are aware of the premises of this logical argument, I am curious how you personally would respond to it. The reason I ask is that it seems to me the first premise is false, considering objective morality is logically compatible with naturalism. Furthermore, in my studies of theists and their apologetics, there seems to be no evidential reason to accept premise two.
If you’ll notice, most of the reasons that theists give for accepting premise two are nothing more than emotional reasons. To give an example, I had a theist try to discuss this matter with me who claimed that the second premise was obvious because “we all know that killing children is wrong.” The problem I see with this is that even if this is granted, it doesn’t follow that objective morality exists. So, how would you personally deal with the Moral Argument, considering you are more academically qualified than I am?
Indeed, I have written quite a lot on this, some even under peer review (e.g. my chapter on it in The End of Christianity).
For basic overviews see Section 8 of my Bayesian Counter-Apologetics, then my treatment of the Moral Argument of Alvin Plantinga, then my treatment of it in the hands of Wallace Marshall, and then my discussion recently of its use by Justin Brierley. Combined, those entries link to other work of mine on the subject.
The gist is as you suspect: the objective moral facts that can be proven to exist (and there are some) would never and could never require a god to be true; whereas any moral facts that would require a god to be true are never in any evidence as even being true.
2 blog posts within a few days? Nice
I wouldn’t want to mislead though. Publication doesn’t necessarily correspond with production. Both articles were two weeks in development and just launched around the same time due to conveniences of scheduling. Often an article will be completed but need just a touch of proofing and additional fact-checking that I might not be able to get to for a while, delaying publication.
Sometimes I am working on an article a little at a time for months, and when it finally launches may be fairly random. For example, I have three articles in Drafts that have been there since 2021 half-completed and are awaiting opportunities to spend some days at a library, which has been hard to arrange. So, one of those could end up someday getting published a day after another article, but it wouldn’t mean I wrote it in a day.
I like this sneak peek into your process. 🙂
This is a deep dive into AfR. Far more than I expected to find anywhere.
A thought occurred to me when I was reading the part about propositions:
Do we think in words/images or do we think in propositions?
Apart from some visualizations, I usually form sentences in my head when thinking and not always in the same language, which is kind of funny.
I suppose sentences like that are useful as inputs into the next stage of thinking.
Now I wonder, are propositions the basis of thinking that get externalized as words/visuals?
Anyway, the main thing I wanted to post here is a part of the discussion that goes on when debating the AfR which is omitted from this article.
The omitted part is about justification of reasoning process in the first place.
You have in many ways, with great detail, explained how truth-detection capabilities vastly improve chances of survival and give evolutionary edge to organisms.
However, when explaining this, you are using the very reasoning faculty that is being put into question.
In other words, you are using reason to justify reason, which is circular.
Theists may say that given naturalism, you are on very shaky grounds.
Whereas, they say, the reliability of our cognitive faculties stands on firmer ground given supernaturalism, since by definition God is perfectly rational. That’s the argument.
However, I see the same problem not being solved. They also need to presuppose reason to get to that conclusion.
As far as I see, there is no way out of this. Reasoning has to be taken as a fact, and it cannot be justified because there are simply no tools to do this. As believers would like to say, you have to have faith in reason.
Otherwise, we are just paralyzed from the start, and any discussion is impossible.
I’m really interested; what are your thoughts on this issue?
I did address that in this article, explaining why we now trust reason, and certain improved models of it (like formal logics and mathematics, and the scientific method).
It’s not circular because it operates on an external check. Some methods are confirmed to work because they get material results; others are shown not to work by not as reliably doing that.
So, for example, the scientific method is known to be successful by contact with reality; if it weren’t working, we couldn’t have sent men to the moon or bred genetically modified grain that grows in a drought. We couldn’t even successfully navigate our way to the bathroom if our natural faculties weren’t at least roughly reliably contacting a reality apart from our minds.
You cannot explain this in any other way than appealing to a Cartesian Demon, which cannot survive any defensible review (see my article on our not being in a simulation for why).
As for whether we think in words or propositions, it’s both. Words are a computational process we build on top of the inherent one, which is propositional, when we define propositions (as distinct from sentences encoding them in words) as models of reality. The brain operates as a model builder. The models it builds are what we mean by propositions. And we innately think by navigating those models, and testing whether the model then plays out in reality or not, through contact with reality, as our external check.