In a recent issue of Philosophy Now, Christian philosopher Grant Bartley argues “Why Physicalism is Wrong.” In it he exemplifies why it is the critics of physicalism who are wrong, because Bartley commits basic fallacies in understanding the issue. And they are common fallacies, especially among Christians. Here’s why Bartley is wrong. And why it matters…
What Is Mind-Brain Physicalism?
Mind-brain physicalism is the theory that “states and processes of the mind are identical to states and processes of the brain.” Without remainder. Meaning, once you have all the physical parts in place, and set them in motion, every phenomenon of mind is produced. No “extra stuff” has to be added to make it work. In Sense and Goodness without God I give several reasons why this theory, though not yet proven scientifically, is almost certainly correct (pp. 135-60). Since then, good defenses of it have been published by Melnyk and Churchland. And even some Christians now are starting to concede the point.
One of the most famous and popular ways to argue over this is a thought experiment about zombies. Not flesh-eating walking corpses. But the conceptual possibility of a person who has all the working parts of a brain identical to yours and who behaves in every way identically to you—yet experiences no phenomena of consciousness. They experience nothing. If such a person is logically possible, then what we call qualia (the peculiar quality of “what it is like” to experience things, e.g. what the color red “looks like” being the common example) cannot be explained by physics. Rather, some “extra thing” must exist or operate, to let us experience things, and thus “be conscious” in the sense we commonly mean (rather than merely act as if we were conscious).
Christians of course want this “extra thing” to be the soul—combined with the created laws of God (“thou shalt experience a color when a certain bundle of photons agitates your eyeball”). But those guesses are explanatorily useless (they predict nothing and are wholly untestable), probably incoherent (it’s not clear how either souls or gods actually solve the problem of explaining why qualia exist and manifest only in certain ways), and contrary to precedent (everything about the mind we thought couldn’t be physical yet have been able to test, has so far always turned out to be physical). I dismantled the Argument from Qualia (“Qualia; therefore God”) in The End of Christianity. And I summarized that already in my Reply to Plantinga. So I won’t bother with it now. Here I’m only concerned with the competing theories of mind: physicalism vs. ensoulment (or some other variety of explanatory “dualism”). Not with whether any of this argues for or against God (though really, the evidence argues against God…once we put back in all the evidence that Christians leave out).
But there’s a kink in thought experiments. Because they are conceptual in result, they must be conceptually consistent. You are failing to conduct a thought experiment correctly if you don’t do what the experiment actually tells you to do. Searle’s infamous Chinese Room is an example of a philosopher failing to conduct the actual experiment he himself described, and thereby getting a completely bogus result out of it. Pro-tip: the man in the room is only analogous to the circulatory system…and that circulatory systems aren’t conscious, is not a revelation—whereas how we must conceive of the book in the room to meet Searle’s own terms, ends up making the book conscious, proving nothing about consciousness…other than that books can be conscious! (See my discussion of Searle’s fatal mistakes here in Sense and Goodness without God, pp. 139-44.) Another is Mary’s Room, in which the usual mistake is to forget that if Mary has all propositional knowledge, then she already has a complete set of instructions for how to install and activate whatever neurons in her brain are required for her to experience any color she wants. The thought experiment, as usually carried out—incorrectly—confuses process with description, and cognitive with noncognitive knowledge (again see Sense and Goodness, pp. 33, 179, etc.). Not all knowledge is propositional. That does not mean non-propositional knowledge can’t be reductively physical.
Philosophers will make the same mistakes with the zombies thing.
As I wrote in my Reply to Plantinga:
This is similar to why philosophical zombies are logically impossible. To be one, a person must be neurophysically identical to a nonzombie, yet not experience anything when thinking and perceiving (they see no “color red” and hear no voice when asked a question and so on), and yet always behave in exactly the same way. Those three conditions cannot logically cohere. Ever. For example, if you ask the zombie to describe the qualia of its experience (“Do you see the color red? What does it look like? Do you hear my voice? What does my voice sound like?”), it either has to behave differently (by reporting that it doesn’t), or it has to lie (by claiming it does, when in fact it doesn’t), which is also behaving differently, but more importantly, entails a different neurophysical activity: because the deception-centers of the brain have to be activated (and that will be observable on a brain-scan of suitable resolution); but also, their brain has to be structured to be a liar in that circumstance, which will physically differ from a person whose brain is structured to tell the truth when asked the same questions (and those structural differences will be physically observable to anyone with instruments of sufficient precision). To which one might say, “Well, maybe the zombie will lie and not know it’s lying.” Right. And how do you know that is not exactly what you are doing? If you genuinely (yet falsely) believe you are seeing the color red, how is that any different from just actually seeing the color red? In the end, there is no difference between you and your philosophical zombie counterpart […].
This point was illustrated by one of the most important papers yet written on the subject, “Sniffing the Camembert: On the Conceivability of Zombies” by Allin Cottrell, published in the Journal of Consciousness Studies 6.1 (1999): 4-12. He forces the reader to actually conduct the experiment. And when you really do, taking into account everything you must to meet the actual terms of the experiment, the answer seems to be that zombies are impossible, and thus no evidence against physicalism. Qualia appear to be an unavoidable and inalienable product of a certain type of information processing. You can’t make a machine that behaves consciously (and thus is capable of all the remarkable things consciousness allows an animal to do), that doesn’t qualitatively experience what it is processing. The very notion is incoherent. “My hand is in pain and I feel nothing” is simply not an intelligible sentence.
The significance is clear. Apart from the whole gods and worldviews thing—can physics explain everything, or do we need the supernatural?—it matters simply in respect to the scientific understanding of ourselves, of other animals, and of the general AI we will inevitably create. Psychic powers? Telepathy? Reincarnation? Life after death? You’d better have a physical model that we can test. Otherwise, nope. And it matters in respect to the future virtual worlds we will inevitably be able to live in—what colors can we program ourselves to see then, and what emotions can we program ourselves to feel? And how will we program that? What qualia can we then enjoy, that were impossible in our present brains, and why? And it does matter for deciding what research we should be aiming at to solve the scientific question (one of the last great questions science has to answer) of why consciousness exists, and why it has the specific properties it does, instead of others.
Why, after all, does red look “red”? Why do we “see” red instead of taste it? Why do we smell cinnamon instead of hear it? Why does cinnamon smell like cinnamon and not like fish? We already know some things about this. For example, for some people, we know red doesn’t look red. It looks green. And they don’t know the difference. They are qualia inverted: people with genes for both versions of color blindness (a statistical inevitability) will have their red cones wired to their green circuits, and vice versa (see Martine Nida-Rümelin, “Pseudonormal Vision: An Actual Case of Qualia Inversion?” in Philosophical Studies 82.2 (May 1996): 145-57). But because they will only ever have heard us call green things red, they don’t know they are actually experiencing a different color than we are when we both say we are seeing “red.” We also know lots of people have differently wired qualia responses (seeing sounds, hearing colors, tasting shapes, and so on). It’s called synesthesia. And of course animals have sensory systems, and sensory ranges, that we don’t—they must experience qualia wholly alien to us. So could we. If we were physically wired differently.
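The pseudonormal-vision case can be made vivid with a toy simulation (entirely my illustration; the names `NORMAL_WIRING`, `report`, and so on are hypothetical, not anything from Nida-Rümelin’s paper): two observers whose internal “experience codes” for red and green are swapped, but who each learned color words by pointing at the same public objects, give identical verbal reports, so the inversion is undetectable from outside.

```python
# Toy model of qualia inversion. Each observer's cones trigger an internal
# "experience code"; the inverted observer has the codes swapped.
NORMAL_WIRING = {"red_cone": "experience_A", "green_cone": "experience_B"}
INVERTED_WIRING = {"red_cone": "experience_B", "green_cone": "experience_A"}

def learn_labels(wiring):
    """Each observer names whatever experience their red cones cause 'red'."""
    return {wiring["red_cone"]: "red", wiring["green_cone"]: "green"}

def report(wiring, labels, stimulus):
    """A stimulus hits a cone, produces an internal experience, gets labeled."""
    experience = wiring[stimulus]
    return labels[experience]

for wiring in (NORMAL_WIRING, INVERTED_WIRING):
    labels = learn_labels(wiring)
    print(report(wiring, labels, "red_cone"), report(wiring, labels, "green_cone"))
# Both observers report "red green", despite having different internal experiences.
```

Language here can only compare public labels, never the internal codes themselves, which is exactly the point.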
Why Haven’t We Solved This Yet?
But if physicalism is true, shouldn’t science have proved it by now?
No. That we haven’t done that, is not because physicalism is false. It’s because we don’t have the means to get there yet. In short, the evidence that we haven’t gotten there yet, is 100% expected on both theories: that physicalism is true; and that physicalism is false. It’s therefore not evidence of either.
What we need to answer these questions is better instruments. Just as we couldn’t learn of the Big Bang without better instruments allowing us to see more detail in the cosmos farther out and in more ways (e.g. spectrum analysis; radio telescopes), we can’t really understand consciousness without instruments capable of resolving brain activity at the nearly atomic scale. Active brain scans (like functional MRI) have nowhere near the required resolution. They can’t even see neurons, much less observe the electrical activity across specific synapses, even less observe any chemical activity involved in the processing—for example, do neurons, to encode a memory, add methyl groups to their nuclear DNA, thereby changing the computational physics of the neuron? Needless to say, we are nowhere near being able to see even the physical synaptic structure of whole brains, much less know what the input and output signals are in every neuron or neural circuit, even less what physical structures compute the output from that input. Our brains aren’t digital electric computers. They are chemo-electric. They operate on analog principles, and combine chemical computation with electrical signaling. Brains are therefore not Turing machines; although a Turing machine should be able to replicate the same information process, if we ever figure out what it is (Searle’s attempt to disprove this with his Chinese Room was a fallacious flop).
More likely we’ll get there first through AI. Which will be built in a completely different way from human brains. But we will be able to analyze every component of its processing and thus explain what specific processes generate what specific qualia. Because we will be able to configure its circuits however we want, and then ask it what it experiences (BTW, I hope we do this ethically by actually asking its permission and ensuring the experiments aren’t a torment; because such computers will be people in every moral sense of the term, so we should treat these AI the same way we now do all human test subjects in the sciences). Because building AI is, frankly, easier to conceive than inventing a scanning instrument capable of harmlessly observing the movement of every molecule and electron in a live human brain.
But let’s pretend for a moment we just invented that very instrument. What would we be able to do with it to start making headway on the qualia problem? First, we would be able to catalog what the physical difference is between different neural circuits and circuit networks that correlates with every distinct quale. We’ll know why one circuit makes us experience the color red, why another green; and we’ll know why one circuit makes us experience a smell instead of a color. It’s fairly certain this will be a structural difference (everything else we’ve discovered about how the mind works has turned out to be, continually now for a century). It’s even more certain it will be a difference in information processing. In other words, one circuit will process information differently than the other, and that difference will cause a smell instead of a color, or seeing red instead of green. And it’s quite likely all smell circuits will share some structure in common, that makes them different from color circuits. We will then be able to peg what information process generates smells in general vs. colors, and then within that general difference, what variations of that information process distinguish different smells and colors from each other.
We’ll then know what information processes (what circuit structures) we could theoretically build (that aren’t in human brains) and thus explore the entire domain of all possible qualia (we’ll know if there is a finite number of colors experienceable, for example, or if the domain of possible color experience is literally boundless)—though likely we could only know what those “other” qualia are actually like by literally installing the circuits in someone’s brain and asking that subject what they then experience. Yet we will know some things about them: you could show us an alien circuit, and we could tell you it would produce a quale of smell and not a color. Or vice versa. Because we’d know what structural features smell circuits share that color circuits don’t.
So you might see how we’d then be able to start building a physical theory of qualia.
Dreams of a Complete Theory?
Could that process carry all the way to individual qualia? Could we get to a point where we understand the structural causes of qualitative experience for computational circuits well enough that if you show us an alien circuit, we can not only tell you it will produce a smell and not a color, but even what specific smell? Certainly for smells we know. But what about alien smells? Possibly, but it will take a good while to get there—because we cannot transmit qualia information propositionally, other than the same way we transmit things like how to ride a bicycle. Because qualia are a process. Like riding a bicycle is. I can give you a complete set of instructions for how to ride a bicycle. Every true proposition about it that could ever exist in the cosmos even. But you will not be able to ride a bike after hearing them. You would have to follow those instructions, and thus develop the skill. Then you’d know how to ride a bike.
The process of riding a bike is not cognitive knowledge. It’s noncognitive. We can encode it in a set of instructions and send it to your brain. But that won’t cause the wires in your brain to reorganize themselves into all the kinesiological circuits needed to ride. Even a complete set of instructions “to your brain” on how to do that, won’t do that. Because your brain doesn’t know how to follow such instructions. Our brains aren’t built to process sentences that way. Maybe someday we can. Like in The Matrix, Trinity’s team could rewire the neurons in her brain at a keystroke, so she instantly has all the neural structures needed to fly a helicopter. But right now, we aren’t built that way. Language is an add-on; not fundamental to how our brains work.
But notice even in that hypothetical Matrix example, they had to rewire Trinity’s neurons. Knowing how to fly a helicopter is not a set of sentences in a language. It’s a set of circuit structures that convert sensory inputs into muscular outputs. It looks like it may be logically impossible to convert noncognitive knowledge (flying a copter; riding a bike; seeing red; smelling cinnamon) into cognitive (propositional) knowledge. Yes, we can convert it in the sense of building a complete description, leaving no information out about how to physically realize the knowledge (so no ghosts or magic or gods or souls is needed to make it work). But a description of a heart, no matter how complete, will not pump blood. You have to actually build the thing. And run it. So, too, perhaps, qualitative knowledge. Knowing what a color looks like, requires building the circuit, plugging it into your cognition unit, and running it. A complete description of that circuit can no more tell you what the experience of it will be like, than a description of a heart will pump blood. But who knows. When we are able to tell at a glance the difference between a smell circuit and a color circuit, who knows what else we’ll be able to infer.
The other possibility, though, does mean there is some knowledge that can never be described in any language—that it is impossible to do so. Language is therefore limited. But that is not evidence against physicalism. That language can’t pump blood, is not proof hearts have magical powers. Hearts are still nothing other than physics, particles and fields, all the way down. The same follows for qualia.
It may be that all language can ever do is communicate a reference to something already available to the recipient: you and I can agree we will mean by some set of words x, some experience we share (as in, something we each experience separately, but agree is alike); and that’s simply all language ever does. Which is why you can never describe any experience to someone that they have never themselves had (hence the entire epistemology I lay out in Sense and Goodness). Unless that experience is composed of experiences they have had, that you can then refer to, by having them assemble it in their imagination out of their own component experiences. For example, everyone has felt pain, and what it’s like to increase pain, and that different kinds of pain feel different, and so on; therefore any pain can be “described” to someone at least in some limited sense, even pains they have not themselves yet experienced, though always some of the information is necessarily going to be lost.
This is why language can never help someone with qualia-inverted vision discover that what they think is red, is actually what you think is green. All language can do is reference what we’ve agreed is a like experience; fire trucks and stop signs are “red” simply means “stop signs are the same color as fire trucks.” The quale we each use to determine that, is not communicable. It’s only configurable. We can build a heart that will pump your blood. And we can wire your brain so you can see what we see. But that’s the only way to transmit the information to you of what it is like to be a brain experiencing that. Language just doesn’t operate that way. And even if it did, it only would, by actually rewiring your brain in the requisite way. This in no way contradicts the conclusion that all that’s going on is physics. Any more for experience (the function of a mind), than for pumping blood (the function of a heart). But maybe we can do more, and someday articulate why red looks red.
Why Bartley’s Critique Flies off the Rails
With all that understood, you can understand what’s going wrong with Bartley’s article in Philosophy Now.
I won’t bother with his completely inaccurate description of “eliminative materialism” (whose conclusions he gets totally wrong). Instead I’ll cut right to Bartley’s key mistake: he declares that “experiences must be defined as not being brain activity” because “experience content is only specifiable through properties that are distinctly different from brains and brain activity.” “Indeed,” he says, “if the mind were not distinctly different from the brain, we could never have come up with the distinct concept of ‘mind’.” Here Bartley makes the common error of confusing an object with a process, form with function. It’s a category fallacy. A mind is not a brain; a mind is what a brain does. He is acting like someone who pulled open his computer and, not finding chess pieces inside it, declared on that basis that it makes no sense to say his computer can beat him at chess. Or like someone who says that because his drive to Ohio is obviously not identical with his car, therefore magic, and not his car, drove him to Ohio. That’s just silly.
“Can it mean anything meaningful to say that the contents of democracies are physical?” Yep. And yet it’s just atoms moving around. “Can it mean anything meaningful to say that the contents of conversations are physical?” Yep. And yet it’s just waves of sound or light transferring information from one computer to another. When you account for the structure of the process, yes. It’s just physics all the way down. And yet conversations and democracies exist and are fully explained. So, too, will thoughts and experiences be. “But what does a democracy weigh?” is simply a category error. Democracy is not an object. It’s a process. Likewise, a mind is not an object. It’s a process. Bartley almost seems to understand that when he lists “physical processes” as an example of what a “physical thing” is; but it seems like he doesn’t know the difference. He writes “physical thing” and thinks “object.” Oops. No, Mr. Bartley. Wrong category of “thing” there.
Bartley probably should have read the first paragraph on this in the Stanford Encyclopedia of Philosophy:
Idiomatically we do use ‘She has a good mind’ and ‘She has a good brain’ interchangeably but we would hardly say ‘Her mind weighs fifty ounces’. Here I take identifying mind and brain as being a matter of identifying processes and perhaps states of the mind and brain. Consider an experience of pain, or of seeing something, or of having a mental image. The identity theory of mind is to the effect that these experiences just are brain processes…
Brain processes. Not the brain. In discussing this, someone said to me, but surely, “‘mind’ is typically synonymous with ‘brain’ for the physicalist” and “‘mind’ … is not a verb.” But neither “my tour of Ohio” nor “the Presidential election” is a verb. Yet they are also not physical objects. They are processes. Actions brought about by, and properties of, complex systems of objects. But not identical to the objects themselves. My car can drive me to Ohio, with nothing required but physics; but my car is not therefore “my drive to Ohio.”
Bartley says “you do not conceive your experience of the sounds you hear as being the same sort of thing as…the activity of brain cells responsible for generating the sound experience.” But that’s exactly what we conceive it as. Imagine saying “your drive to Ohio can’t be physical, because you do not conceive of it as being the same sort of thing as rotating gears and pounding explosions inside a metal box.” That would be a dumb argument. And also obviously false. Of course my drive to Ohio is in fact identical with rotating gears and pounding explosions inside a metal box. But for that, there would be no drive. The other particulars (like the directions in which that metal box rolled me, hence “to Ohio”) complete the equation, but are just more physical facts.
What Bartley wants to say is that experiences and neurons are “distinctly different properties” of existence. Which is true. The warmth of a stop sign is a different property than its shape or color or what’s written on it. That it is a mostly red octagon from one point of view, and nothing but a thin white line from a completely different point of view (when seen on edge), does not argue that it can’t be the same thing. Information processing in your computer can be described as just electrons moving around some wires. Or it can be described as an elaborate video game in which you are driving to Ohio in a silver corvette. Same exact thing. It’s all just a matter of from which angle, which perspective, you are looking at it. Yet it’s all just physics, all the way down. There is no godly voodoo magic that materializes your silver corvette or that moves it around a map. It’s really just those electrons and wires. Experiences are what a brain process looks like from inside the process; just as a white line is what a stop sign looks like from the side; and a silver corvette is what that electron process looks like on the display screen. That in no way means stop signs aren’t octagonal, or that video games or experiences aren’t physical processes.
Likewise when Bartley says “experiences are not properties of brains in the same sort of way that the physical properties of brains are properties of brains” he’s just begging the question. Yes, experiences don’t weigh anything or have a length and width, just as democracies and video games don’t weigh anything or have a length and width (yet are clearly physical things). But by that same reasoning, weight does not have a length or width, either; so “weight is not a property of brains in the same sort of way that the physical properties of brains are properties of brains.” But, you’d say, weight is a physical property! Well, yeah. So is thought. Oh I see what you did there. That’s what a circular argument looks like! All properties are different from other properties. That’s why we call them different properties. That doesn’t tell us anything about whether they are physical or not.
Hence when Bartley says “Mind is not just another part of the brain,” he is slipping into that same mistake again: thinking a process is an object. Mind is not “part” of a brain. It’s the operation of a brain. It’s a different kind of property than weight or length because it’s a process. But we well know processes can be physical. So that being the case, is no argument here. “The substance of experience is experience” is a nonsense statement. That’s like saying “the substance of the video game is the video game, therefore video games are magical nonphysical beings.” Democracy is not a “substance.” Neither are video games…or minds. Yet democracies and video games are clearly physical systems, realized in physical media. So why can’t minds be?
To ask what form of matter “qualia” are made out of is as nonsensical as asking what form of matter “the video game” or “American democracy” or “my drive to Ohio” are made of. These are not things. They are made of stuff…the drive to Ohio is made of tarmac and metal machinery and kinetic energy…the video game is made of electrons, wires, and transistors…democracy is made of buildings, and books, and people. But there isn’t any sense in which these things are those objects. They are what those objects are doing. That’s why democracy doesn’t have a “weight.” What would you weigh? The people? Their property? The buildings? The books it’s encoded in? Even the video game has no intelligible “weight” because which transistors and electrons it consists of changes from one moment to the next, and in any event the game is not simply the sum of those parts, but their arrangement. And arrangements don’t have a weight. Nor do actions and events.
How It Actually Works
The mind is to the brain, as the output of a software program is to the microchip it runs on. Note I said the output of the program; not the program by itself. The microchip is not the program. But even the program is not the output of the program. My word processing software is not the novel I wrote with it. These are different things. Mind (experience) is the output. Not the program. Nor even the hardware the program is running on. But the program and hardware are entirely physical and are all that is needed to generate the output, which is “the experience.” You need them to get that. And you need nothing else to get that. But that is not identity. It’s causality.
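That three-way distinction (hardware, program, output) can be sketched in a few lines (my illustration only, not anything from Bartley or the literature): the runtime plays the role of the hardware, a function plays the role of the program, and its return value is the output, which is identical to neither.

```python
# The "hardware" (the Python runtime) runs the "program" (a function);
# the "output" exists only when the program actually runs.
program = lambda words: " ".join(words).capitalize() + "."   # the word processor
manuscript = program(["call", "me", "ishmael"])              # the novel

print(manuscript)             # Call me ishmael.
print(manuscript == program)  # False: the novel is not the word processor
```

Delete the output and the program remains; delete the program and the output can never again be produced: causation without identity.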
For example, we now know we are not conscious of spans of time smaller than about a twentieth of a second. Which is why movies work: we don’t see the individual frames flicker by, one after the other, because they fly past at 24 frames per second, so we only perceive a continuous moving picture. That means if you “zoom in” to a thirtieth of a second, during that whole span of time, consciousness doesn’t exist. It only exists as an event extended over time—a time span longer than 33 milliseconds. A thing that doesn’t even exist except over a span of time? That’s a process. No process, no thought. No thought, no mind.
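A quick check of the arithmetic behind that claim (using the article’s rough 1/20-second figure; the true threshold varies by measure and individual, so treat these numbers as approximate):

```python
# Film frames arrive faster than the ~50 ms window of conscious resolution,
# so no individual frame is ever perceived; the sequence fuses into motion.
perception_threshold_ms = 1000 / 20   # ~50 ms: shortest span we consciously resolve
frame_interval_ms = 1000 / 24         # ~41.7 ms between frames at 24 fps

print(frame_interval_ms < perception_threshold_ms)  # True
```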
You can have storage of a mind…when you are unconscious, the information stays stored there in the brain, but you aren’t conscious. So your mind isn’t doing anything. It’s turned off. Indeed, to pull off that trick, you need long term memory storage (one of the many things our brains do for us). But long term memory can’t even be formed to be stored, without first existing in short term memory…but short term memory is a process, not a storage system. That’s why if you take enough of a drug (like alcohol) that interferes with the ability of your brain to store a memory, you can still operate in short term memory but none of it gets recorded. Short term memory (hence experience, hence qualia, hence everything Bartley is saying a mind is) is a process, something the brain is doing, not something the brain is; it’s not a stored physical structure in the brain. Hence mind as experience is a process, not an object. Just as your car is not your drive to Ohio.
The same goes all the way up the chain of abstraction. Social constructions, for example (like what words mean, what things to assume, what standards are applied) are analogous to the “operating system” on your computer. That can be actually present in a culture, or just potentially waiting to be, e.g. as when encoded in a book, in which case it’s atomically there in the patterns of ink on paper, for example; but then the meaning of the patterns has to be socially extant somewhere or else it’s a dead and indecipherable language like Linear A. But when actually present in a culture, the social construct exists atomically as arrangements of interconnected neurons in brains, in the same way iOS exists in multiple iPhones—only there, instead of neurons, it’s electrons and transistor gates. We call this a social construct when the same pattern is shared across brains comprising a given culture. And indeed that’s how we define and distinguish one culture from another. Otherwise it’s an individual construct—or a group construct, though that starts to look like a sub-culture (and indeed, when we call it a full-blown culture is kind of arbitrary, or pragmatically determined, like how we decide to name a hill a mountain).
Though of course it’s messier for humans than for iPhones, because cultures overlap, or even nest within other cultures, and cultures continually change and evolve, and are distributed across a society along a bell curve of intensities among individuals, the same way genes are. And so on. But otherwise the analogy holds. The pattern of neurons in a brain entails an activation sequence, a circuit. Every time a certain idea is thought about, the same or sufficiently similar outputs are generated in every brain that thinks about it. The output will be further ideas or even behaviors (and indeed, thinking is just a category of behavior). Just like pattern recognition software, and decision software. It can all be described in terms of nothing more than a physical causal chain of events—just like in a computer, or a system of computers (such as “the internet”). All without ever mentioning anything more abstract. We create the abstraction only to make thought and communication more efficient.
Hence “social construct” is a useful code for a massive complex of stuff. But it’s really just a massive complex of stuff. All physical. And we can know this because of two converging reasons: we observe that if we remove the physical components, then the social construct vanishes; and: we observe that nothing needs to be added to the physical system, to explain the social system that results. No extra “stuff” has to be added to neural circuits, to get a neural circuit to cause certain outputs to arise from certain inputs, or to get a neural circuit in one brain to match a neural circuit in another brain in respect to its input-output profile. It’s the same reason we don’t include “gremlins” among the causes of airplane crashes. There is no evidence that we have any need of that hypothesis to explain any crashes. Likewise mind-brain physicalism, even when networked into a social system of interacting physical brains. Social constructs are just what happens when you add more brains. Nothing more is needed to explain that, than the adding of more brains.
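Those two observations can be put in a toy model (entirely my illustration; `greeting_norm`, `Brain`, and `culture` are hypothetical names): a “construct” that is nothing over and above the same input-output pattern instantiated in several brains, and that vanishes when the physical instances are removed.

```python
# Toy model: a "social construct" as nothing but the same input-output
# pattern physically instantiated in many brains, like one OS on many phones.
def greeting_norm(situation):
    """A shared circuit: a convention mapping situations to behavior."""
    return "shake hands" if situation == "meeting" else "wave"

class Brain:
    def __init__(self, circuit):
        self.circuit = circuit          # the pattern, physically instantiated
    def act(self, situation):
        return self.circuit(situation)

culture = [Brain(greeting_norm) for _ in range(3)]
print({b.act("meeting") for b in culture})   # {'shake hands'}: the shared construct

# Remove the physical instances and nothing of the construct remains:
culture.clear()
print(len(culture))                          # 0
```

Nothing had to be added to the individual brains to get the shared pattern; the construct just is the agreement among their input-output profiles.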
So, too, each individual brain. Which is just a system of smaller brains (neurons and neural circuits), producing individual constructs, which together comprise a mind. Bartley wants there to be something else going on. But we don’t have any evidence anything else is. Nor any need of that hypothesis. He grants that brains may physically cause minds to exist, but insists that in so doing they are creating a whole new ontological thing, called “mind” or “qualia.” Maybe. But why think that? It’s not needed to explain anything. No additional energy need be devoted to creating any new object. And therefore no additional substance is needed to realize any new object. Qualia are not objects. Nor are minds. They are events. And as such it is a category error to think they need to be “made of” anything at all, other than what produces them: a churn of meat, chemicals, and electricity.
Bartley says “to say ‘experiences are physical’ would be to say that these particular so-called ‘physical’ things exist entirely to minds!” And he’s right. Experiences are unique to one particular arrangement and activity of matter. Arrangement isn’t enough (an unconscious mind experiences nothing). You also need the activation of it, the process of it. But not every process generates “experiences.” Experiences are an output unique to only one kind of physical process: a mental process. Just as “jogging across the street” is unique to the existence and motion of legs. Outside poetic metaphor, your salary doesn’t jog across the street, nor does your car, or your coffee. Is jogging therefore a supernatural phenomenon that requires some new magical substance to exist? Obviously not. Neither does your mind. Only certain arrangements produce certain outcomes. That is an obvious fact of physics. It’s not evidence against physics!
And…Please Know Your Science
Science is philosophy with better data. So philosophers had better know the science of what they are talking about. But Bartley betrays his ignorance of modern science with a bunch of silly statements throughout his screed. I’ll just give three examples to illustrate what I mean:
- (1) Bartley says “whitewashing the mind/brain distinction could eliminate the difference for practitioners between whether a psychological problem is physically-originated due to a brain dysfunction or brain damage, or mentally-sourced due to traumatic experience.”
No such confusion follows from physicalism. Every therapist already knows that a traumatic experience can only be producing a psychological problem by being physically encoded in the brain; and that the only fix is something that bypasses or rearranges that physical circuit, so as to ensure a different output from the input. Talk therapy can do that. But only by physically changing the brain. We all know there is a difference between, for example, genetic or surgical causes of brain organization, and experiential and environmental causes of brain organization. But both are physical causes. Both produce physical rearrangements of the brain. Both respond to the same kinds of therapies. Knowing the distinct cause can be helpful to tailoring treatment, but that in no way requires knowing when the cause is “not physical.” Because none are. And this is a known fact of science. All changes in a mind correspond to changes in the brain. All of them. We’ve never observed an exception.
- (2) Bartley says that because, for example, an actual ball we are tracking as it rolls behind something else is different from our mental experience of the ball, experience therefore can’t be physical. Literally, “These ideas all rely on the idea that physical things exist independent of minds. So by definition, a physical object is not only or purely what is in the contents of experience. This means, conversely, that anything that is purely in a mind, is not physical by definition!”
That’s wild nonsense. Obviously the actual ball outside our mind is a different physical thing than the ball in our mind. Just as a computer simulation of the airspace a plane is flying through is completely different from the actual airspace it’s flying through. Does that mean airplane radar readouts therefore cannot be physical systems? This is incoherent nonsense. There is no sense in which a simulation is “by definition” not a physical system. No more in human minds, than in avionic computers.
- (3) Bartley says “we know…that some events at a subatomic level are affected by whether there is an observing mind.”
No. That’s not what we’ve discovered. All we have observed is that when you meddle with an experiment—and any observation requires doing that, e.g. sticking a probe into it, bouncing a particle off it—you affect its outcome. That would be true even if minds didn’t exist. It’s not as if unseen stars aren’t quantum mechanically burning when we aren’t looking at them. Or that we magically created the entire past history of the universe the first moment we looked up at the sky.
These are some pretty big fails in science literacy. And anyone who is this ignorant of basic science can’t have any credible opinion in an advanced subject like mind-brain physicalism. But this does explain a lot about why Bartley goes so far off the rails and gets all of it wrong.
Conclusion
Bartley is right to ask “Why do brains in particular have these mental properties?” But we already know the general answer to that question, from comparative neurology and psychology across the animal kingdom and in modern electronics and brain science: these are the properties of information processing; therefore only information processors can generate them; and, we observe, only information processors of enormous complexity and particular organization. Organize them differently, and you get a different output. The internet is complex enough to generate consciousness, but is not at all organized in the way required to do that. If we knew what the required organization was, we could make the internet conscious. But not knowing what arrangement to put the system in to get that output, is not evidence of there being no such arrangement it can be put in.
I’m inclined to see the most promise in explaining consciousness in something like (but not identical to) Integrated Information Theory (minus all the speculation and woo that its proponents stack atop it; plus it probably needs to be integrated with some form of functionalism—see discussion in Wikipedia and the Internet Encyclopedia of Philosophy). But we won’t really crack the qualia problem until either we have active brain scanning instruments of extraordinary resolution—allowing us to construct complete and accurate computational circuit diagrams of the human brain—or we develop a general AI capable of helping us do that, using its otherwise alien brain construction to get at the problem from a different but more accessible direction. Might there one day be a complete physical theory that explains why one information processing circuit produces an experience of the color red, rather than green, or a smell or sound? Yes. I think that’s likely. We can’t conceive of it yet, because we don’t know anything about the underlying computational physics that’s causing it. And that physics is surely going to be extremely complex. Even a single neuron is mind bogglingly complex, in terms of its computational organization. It’s the end result of literally billions of years of evolution. Which puts it way the hell ahead of us in design capabilities.
So will we someday have a sound physical theory of qualia? As the Magic 8 Ball of history tells us: “Signs point to yes.” The scientifically illiterate fallacies of Christian apologists notwithstanding.
This argument looks even more ridiculous if you substitute any other bodily organ and its function for brain and mind, e.g.:
“Breathing must be defined as not being lung activity” because “breathing is only specifiable through properties that are distinctly different from lungs and lung activity.” “Indeed,” he says, “if breathing were not distinctly different from the lungs, we could never have come up with the distinct concept of ‘breathing’.”
Nice analogy, but like all comparisons, not entirely correct. It should read “if breathing were not distinctly different from the lungs, oxygen couldn’t be transported into our system”, oxygen transport being a function of the lungs as “coming up with a distinct concept” is a function of the brain.
But the program and hardware are entirely physical and are all that is needed to generate the output, which is “the experience.” You need them to get that. And you need nothing else to get that. But that is not identity. It’s causality.
The problem here is ignoring that the OUTPUT is being READ by us. The reader is what is missing from any physical explanation, not the output. We are dreaming… our dreams seem to be altered by what we believe to be external data… when we say we are awake, our dream seems to be constructed from sensory data, yet it is still a dream, as it is not identical with what our senses seem to be sending us… for example, the eyes send upside-down, curved, flat images and we dream of a right-side-up four-dimensional world. Understanding the dreamer is where the big problem lies… what is dreaming? There seem to be three basic possibilities: 1. the dreamer is an emergent property of a complex brain, having properties not found in any of the individual parts; or 2. consciousness is a property of reality itself and irreducible; or 3. something completely alien to anything we are capable of currently imagining…
One overlooked element is that everything we are basing our theories upon exists within our dreaming. We actually have no basis to say it matches anything about the reality that causes the dreamer other than consistency and stability. If the dreamer is alien to anything it dreams, then these two factors really don’t point to reality at all.
I would go for possibility 1 for all experiments I know of tend to give evidence that point to that direction and no evidence exists for possibilities 2 & 3.
Caveat: possibility 1 should read “dreaming” instead of “the dreamer”.
There is no actual evidence for 1 either… and if it is 3, there will not be any evidence found within the dream for it. As for 2, there is at least one scientist who says he can derive quantum field equations starting with consciousness as primary… and as far as I know no one has ever found any way to do the reverse.
“there is at least one scientist who says he can derive quantum field equations starting with consciousness as primary” — in a peer reviewed journal? Do please provide the citation here.
But there IS evidence for possibility 1, and I don’t even have to refer to scientific publications. All you need to do is to compare the EEGs of someone who’s conscious vs. unconscious. For 3 you agree that there will be no evidence found, so we can dismiss this possibility flat out, as we cannot go beyond our “dreams”. (BTW if all of us dream the same thing, and we agree that we all dream the same thing, why not call it “reality”? It’s just so much more convenient!)
And possibility 2, even if the first part were true: “Consciousness is a property of reality itself” (although I know of NO breakthrough in cosmology or quantum theory to back that), the 2nd part, “and is irreducible”, is highly probably NOT true, as ALL properties of reality that we know so far CAN be reduced to physical phenomena.
I am of course inclined to agree. But to avoid a straw man, I’ll put some qualifications up:
(1) IIT theorists do typically argue (in legit peer reviewed science papers) that “consciousness is a property of reality itself” and that it is irreducible. Their actual theory (which has some merit, IMO) does not really entail either proposition, though, so I think that’s just speculation or semantics gone awry. It’s fair to say that no one has proved either proposition with any real scientific experiment or observation.
(2) There is a difference between epistemic and causal reductionism, and this may lead to semantic conflation (an equivocation fallacy) in the hands of those inexpert at philosophical distinctions (which sadly includes most scientists even). I agree qualia are irreducible epistemically (I discuss this in Sense and Goodness without God, pp. 30-32). But that does not mean they are irreducible causally. For example, all emergent phenomena are irreducible in a sense (take a car: if you zero in far enough, you no longer see a car; and if you take away enough parts, it’s no longer a car). But that’s not what scientists mean by causally irreducible. Obviously a car is just an arrangement of atoms. It’s reducible to the parts and their arrangement. Without the parts, there can be no arrangement, and hence no car. But nothing additional has to be added to explain the car: just the parts, and their arrangement. Qualia are the same way: take away enough parts of the computer generating the output, and you no longer get the output. So the output is epistemically irreducible. But not causally irreducible. Not any more than a car is. Or anything else.
The same equivocation fallacy validates even statements like “Consciousness is a property of reality itself.” Of course that’s true in a trivial sense: everything that exists is by definition a property of “reality itself.” Including cars, apples, hurricanes, even joy. But in that sense, the statement is vacuous. It sounds like it’s saying something deep, but really it’s just a fancy way of saying something exists. You can try to redefine the terms so that the statement asserts something other than that, but then usually the statement becomes false.
There is a vast, obvious difference between learning pretty words, including how to turn a phrase just right, and intimately understanding the topic which is being conveyed. Unfortunately, recognizing the difference requires knowledge. This is a problem because the people chasing those lovely, impassioned words rarely care about education. They are seeking that which props up their biases, obviously. So, if it sounds good, it must be right.
Anyway, this post reminded me of the time I spent as a corporate trainer for a telecommunications company in Portland, OR. One of my jobs was working with software developers, engineers, to design new in-house products. There was a significant disconnect between how the engineers viewed success, and how end users viewed success. The engineers created programs to function as well as possible, to meet the project requirements. What they presented made sense to them as they were able to see and understand the entire process, down to the base coding. The ways in which this might not be ideal to end users is fairly evident. Yes, users want the program to work properly, but their minds are not wired the same way. To them, functionality means something quite different. What is the point of a perfectly functional program that is cumbersome and counter-intuitive to use?
What is your point exactly (with regards to this article) ?
Two things. First, that learning how to speak well is not an indicator of intellect or understanding.
Second, that it is difficult for people to move past their inherent biases to perceive the difference between their immediate interpretations, and the deeper, truer processes at work.
In my analogy, the engineers deal more with the brain. The users, with the mind – they only see the results of the processes. Shifting between the two perspectives is challenging for many, mostly because it never occurs to them to try, in the first place.
I think physicalism must be wrong indeed, especially when combined with naturalism. In particular, it cannot account for the question why I was born at all, or why I was born as me and not as some other conscious mind. According to physicalist naturalists, the volume or mass of conscious brain matter must be an incredibly small fraction of all the physical matter that exists in the universe or multiverse. Say 0,00…(500 zeros)…01 percent. And this is an optimistic guess, according to multiverse experts. According to Leonard Susskind, it might just as well be 10^100 zeros. Without any supernatural guidance to bring my consciousness to the interesting bits in the physical realm, there is no satisfactory answer to why I was born at all.
Compare this physicalism to the situation in which my consciousness was video-recorded and reprocessed a fantastically infinite number of times by supernatural agents. In this case the probability that I was born as me can be equal to one. And, personally, I am not satisfied with any probability less than one. For if it were only 0.9, I would still wonder why the dice or agents that decided that I would be born as me, would somehow ‘not exist’. In other words, only a deterministic solipsism can be a satisfactory theory of everything, and this runs counter to a physicalist naturalism.
You need to get more acquainted with the Law of Large Numbers, the size and time scale of the universe and life on earth, and evolution by natural selection. Because those all explain everything you seem mystified about. And since they explain far more than that, they are the far more probable explanations.
You also need to get better at math. You are doing it wrong. By analogy, you seem to be confusing the probability of winning a lottery, with the probability of a lottery being won. By your reasoning, every lottery win must be designed by God, and not just a chance accident, because each potential lottery winner is so unique. That’s not how probability works.
Perhaps some help here: Everything You Need to Know about Coincidences, Bayesian Counter-Apologetics: Design Arguments, and Statistics and Biogenesis. Just for starters. Likewise, on the silliness of your solipsism argument: Why We Aren’t Living in a Sim. Either way you are proposing a Cartesian Demon. Always the least probable explanation. When you do the math right.
In addition, you are mixing different scales. Granted, the chance of the sub-atomic particles of your brain being arranged in that particular order is infinitesimally small. But biologically, the chance of you being born either male or female is about 50%, and being born with a reasonably functioning brain is about 99%. The question WHY you were born depends on the physical and psychological situation of your parents 9 months before your birth, and the socio-economic and cultural context they were living in. (All of this explainable on physicalism.) The question why YOU were born can be rephrased as “Why do I experience myself as being me”, and I guess evolution could say a great deal about that, e.g. knowing to separate “you” from the environment is a first great step toward protecting “you” from its potential dangers.
And notice how that response actually predicts the data we actually see.
If Ward’s prediction were right, there’d be no more connection between your genes, or the culture you were born in, and aspects such as your values or personality types than between a building’s foundation and the color it’s painted on the outside.
But it’s not. While in sociology and psychology we’re vastly far from a robust understanding of every detail, it’s pretty clear that who you are derives from a ton of biological and social-experiential factors. It’s not like the people who grow up into adults with PTSD are chosen at random: they experience some real traumatic event, and they do it in this world, not in some alternative reality.
For Ward to know that the true, Platonic or magical source he’s being beamed from would produce him and only him, he’d have to not only ignore that the person he is can change quite a bit when he’s tired or cranky or hasn’t had his coffee, but also assume
1) That there’s no possible way that a mind being projected to seemingly occupy his body could be anyone but him, rather than one of the many other infinite potential grab bags of minds you could expect (in fact, his theory explodes the probability space; if Ward were right, otherkin should actually be totally real, because some people would, for whatever cosmic plan or mysterious reason, be projected into human bodies despite having the brain of a dragon, or a golem, or a Vulcan)
2) That there’s no possible way that that transmission can be corrupted
3) That there is a possible way for his actual body to get information from this magical afterplane and send it back
These mechanisms are magic. I don’t know if you can assess the probability of things that have no precedent, no known function, and don’t even seem to be internally coherent. I don’t know how I can assess how likely it is that unknown actors using unknown tools are doing things I don’t understand.
So, Ward, why do you think that you happen to live in a world that seems to be full of a bunch of other people who all happen to act just like the limited brains of evolved apes, with predictable rates of mental malfunction like mental illnesses and cognitive biases, a range of intelligence that seems pretty firmly within some range (with a pretty good amount of research into how that range operates), and mental experiences that all seem to be based on pretty similar mental inputs? Why do you never encounter (apparent) minds composing ultrasonic symphonies, or talking about how they made love to their clam wife last night? And remember: whatever answer you give to hack around the fact that your theory predicts a panoply of possible minds that we never encounter is going to be an ad hoc excuse with no independent evidence for it.
Dear Frederic,
Thanks for your interest in my theory. I’ll explain how it works:
1. Everything that can possibly exist, exists in reality, including Vulcans and devils. (logic = reality)
2. Everything that exists, exists infinitely many times. (because of set-theoretic recombination)
3. Everything is well-ordered: for every pair of things, one of the pair exists infinitely many times more often than the other thing. (axiom of choice in set theory)
Main conclusion:
Even though every eternal life exists, there is exactly one that wins over all the others in self-reproductive capacity: the eternal, deterministic, solipsistic consciousness. You are irrational if you do not believe it to be your life.
Relevance to this topic:
With regard to the topic of brain-mind dualism, it follows that if the thing with the highest multiplicity was a non-conscious thing instead of an intelligible life, I (the solipsistic consciousness) would not even have been born.
Your questions:
With regard to your question (How come life is intelligible and makes complete sense, if it is not produced by the world we observe?) the answer is a transfinite evolutionary conservation of cosmological natural selection: the empirical sciences have become a vital, evolutionarily conserved part of the reproduction process of our observable universe. Therefore these sciences have stopped changing, in spite of the laws of physics changing infinitely many times. Infinitely evolved laws of physics that support simple consciously experienced lives can rightfully be framed as a divine video that replays these lives all the time. You can Google my papers on “eternal life cosmology” and “benevolent metaphysics”.
And do not forget: there is no evidence that things exist that do not exist. Nobody has ever seen such a thing, and it makes no logical sense either. So do not reverse the burden of proof here.
Best regards,
Ward
I’m not aware of any evidence any of those propositions is actually true, as in, reified. Conceptual truths are only potential realities, not thereby actual realities. So the axioms of set theory only describe possibilities, not factual actualities.
Nor do I see any coherent way to deduce from those propositions, even if they were true, that solipsism is ever true (literally ever; even far less so, true for us). I see no logical syllogism here, and no way to get one. And the evidence seems in fact to be completely the contrary. I’ve actually covered this before, in The God Impossible. Solipsism indeed requires the most implausible and complicated Cartesian Demon theory conceivable; we have absolutely no reason to believe it’s true in our case.
Ward, I did a little background research on you and your “paper” on “eternal life cosmology”. First, it was a shock to me that you actually reside in my hometown in Belgium! At first glance your scribbling looks impressive, with charts and references and whatnot… But when I googled the title, NO reference whatsoever came up, in Wikipedia or anything else… Must be that your “theory” (a hypothesis really, at best) didn’t pick up much attention from the scientific or philosophical world, did it?
Next time, try to publish your paper in a real philosophical journal instead of a shoddy one, founded by an even shoddier institute rooted in Ukraine… And no, “peer reviewed” by a bunch of fellow-cuckoo-philosophers all with Russian names does NOT mean you are following the scientific method!
Like we say in Ghent: “mee alle Chinezen, moar nie met den dezen!” (roughly: “with all the Chinese, but not with this one!”, i.e. you won’t fool me).
This is actually for Ward, below: which logic do you intend to use, and why? (I have yet to encounter someone who wants to reify logic who has any way to answer this. :))
A side thought, regarding consciousness:
You have never known a moment when you did not exist and you never shall.
Tautologies are fun that way.
Not a tautology, but an overlooked fact. If you persist beyond your body, you won’t be dead. If you cease to exist, you won’t be there to know it.
No, that’s just a tautology. People who don’t exist don’t exist = people who don’t experience anything don’t experience anything. It’s not empirically but logically impossible for someone who doesn’t experience anything, to experience something. Hence, tautology.
You write: “That means if you “zoom in” to a thirtieth of a second, during that whole span of time, consciousness doesn’t exist.”
I think you’re confusing the sense-data capacities of humans with consciousness. I cannot hear all sound frequencies that exist in the world (humans only hear from 20Hz to 20kHz); and likewise, you can’t deduce that because my eyes can detect no more than 24 frames per second, I can only exist in there. If I close my eyes, I’ll still be there. Like you said before, brain activity is at the atomic scale, so we must be part of an infinity and not just a quantized world. As we experience it, physicalism is in the infinite.
I really agree that mind is a process, though for me a hierarchical process, one that requires multiple selves to reach some form of consciousness. Consciousness comes in grades.
Excellent article and thoughts, best regards.
You are confusing consciousness with identity. A person is a stored collection of data; they remain a person even when totally unconscious. We are not talking about that. Bartley and I are talking about conscious experience. Not what we are conscious of. Your identity (who you are, your memories, skills, desires, personality, etc.) is among what you can be conscious of; but it is not your consciousness.
Consciousness does not exist at spans of time below 30 milliseconds. It is a complex computational process that requires at least 30 milliseconds to generate (and for many features, e.g. the self-model, far more: average time to assemble, 500 milliseconds, as discovered by Libet).
But you exist even when you aren’t conscious. Obviously. Dreamless sleep, even a coma, does not disintegrate you. It just shuts down your mind, so you cannot experience anything, because your brain isn’t generating any experiences. That would be a p-zombie. Except that it also doesn’t do anything (it can’t talk, think, manifest qualia). Hence failing the thought experiment.
“[B]rain activity is at the atomic scale, so we must be part of an infinity and not just a quantized world.” That sentence makes no logical or scientific sense. I have no idea what you even mean, much less on what basis you can assert any such thing.
Yes, you’re right about me confusing identity and consciousness. Although I don’t think one could be independent of the other; there isn’t any narrative identity in an unconscious world. But I understand they’re two different processes.
You said: “Consciousness does not exist at spans of time below 30 milliseconds”. What? I know Libet’s experiments suggest that unconscious processes in the brain are the true initiators of volitional acts, but I have never read anything like that. Could you please give a link or a book to confirm your statement that consciousness doesn’t exist at spans below 30ms?
best regards
I found Benjamin Libet’s latest book, Mind Time: The Temporal Factor in Consciousness. What you are presenting as scientific evidence is instead a hypothesis:
“… the awareness of the skin stimulus is in fact delayed in its appearance until the end of the roughly 500 msec of appropriate brain activities. But then, there is a subjective referral of the timing for that experience back to the time of the primary EP response! The primary EP response of the cortex begins only about 10–30 msec after the skin stimulus, depending on how far the stimulated skin is from the brain. This delay of 10–30 msec is not sufficient to be experienced consciously. The experience or awareness of the skin pulse would thus be antedated (referred backward in time) subjectively to the timing signal provided by the primary EP response. The skin-induced sensation appears subjectively as if there were no delay, even though it did not actually appear until after the 500 msec required for neuronal adequacy to elicit that sensory experience.”
or
“This brings up an important general question about how different stimuli that are actually delivered synchronously can be consciously perceived as being synchronous. With stimuli in the same somatosensory modality, there are different conduction times in the sensory pathways, depending on the different distances between the stimulus locations on the body. The time for the arrival of the fastest sensory messages varies between 5–10 msec (for stimuli at the head) to 30–40 msec (for stimuli to the feet). Because synchronous stimuli to these two areas are subjectively perceived as synchronous, we can only assume that a time difference of 30 msec or so is not subjectively meaningful.”
I don’t see how you can logically deduce from “synchronous stimuli (..) are subjectively perceived as synchronous, we can only assume that a time difference of 30 msec or so is not subjectively meaningful” that, ergo, as you said, consciousness doesn’t exist at spans below 30 msec.
I think you are confusing the time that is lost between unconscious and conscious processing (the time the sense-data signals take to arrive at subjective experience); but consciousness could STILL be a process that is not quantized at spans of 30ms. They are two different processes.
Or am I wrong?
You are confusing Libet’s measure of signaling time (the time it takes for a stimulated nerve to get a signal to the brain) with Libet’s measure of consciousness delay (which came to about 500ms of time it takes for the brain to assemble a conscious impression of what a person reasoned out and thought), and neither is a measure of the smallest time-unit of consciousness (Libet wasn’t working on that). The smallest unit of time perceivable in consciousness is established by such experiments as the cinemascope, employed most commonly in subliminal signal studies, for example. But as I note in my article, we’ve found the average to be around twice the standard signal length used in many subliminal signal studies, which is why film projection and television work at 24 frames a second (1/24 of a second ≈ 42ms, which is greater than 30ms).
Yes, the brain must generate consciousness, to build a narrative memory, and thus an identity model. But the generation remains a time-consuming process (spanning a minimum of 30ms). The stored results are not the same thing as the process that generates the results to be stored.
As to the time-span being greater than 30ms (c. 1/30th of a second), I already discuss that in the article you are commenting on (with respect to why film works on us). Please read the article you are commenting on.
How is it that we are here? I’m not referring to a cosmic event, then generations of stars, and finally humans. We have sensory perceptions arising from the processes of the mind, but then? The self is also a process, to allow us to interface with our environment. Is consciousness being filtered/experienced through thought? What is a mind doing when it is not thinking? Is there an awareness in the mind that continuous thinking is preventing? In the way you describe knowledge, it appears analogous to the Buddhist teaching that truth is not knowledge, including non-cognitive knowledge. Admittedly most of what is presented here is beyond my capacity; at the same time I am well aware of the global lack of intelligence. Intelligence, in my opinion, is not manifesting, because the only true sign of it is the absence of conflict. Thank you for your work.
How we would answer your questions, would depend on what you mean by them (what exactly you are asking). But whichever meaning you choose, they all have fairly well established answers in cognitive science now. Buddhist philosophy, just like Western philosophy (e.g. Aristotle), anticipated many things about what the truth would turn out to be regarding human consciousness. But also got many things wrong. That’s why we needed to get science properly on the task.
I find it amusing that people still (more or less) use the “but ‘mind’ is a noun!” argument. Sheesh. Brains mind, lungs breathe, skin transpires, etc.
And that’s why I don’t (and never did) call myself an “analytic philosopher”. (I am of course not a “continental” one either.)
Just because many people are bad analytic philosophers, doesn’t mean there is anything wrong with analytic philosophy. Just as with logic (many are bad at it), science (many are bad at it), mathematics (many are bad at it), philosophy as a whole (most are really bad at it), and literally anything else.
True, but the idea of using ordinary language as a guide to reality is such a bad methodological decision that I figured I’d ignore it.
At first I thought that’s something I’d expect from a bot or a troll. “I shall ignore language and all the rules of language, but still try to use it to communicate with people.” That’s kind of funny.
But I think you mean, reality is more complicated than commonplace words? (i.e. a single word denotes a rather complex set of possibilities and structures.) In which case, yes.
This response and Dr. Carrier’s reply to it make me curious… what tools other than language would you use to describe reality or possible realities? Mathematics and the graphs derived from it can only be a partial answer… For instance, how would a patient describe his pain to a medical doctor without language? (And sign language is STILL a language!)
I think the issue is technical vs. ordinary language; and using isolated words (which are vague) instead of complete descriptions (which are more accurate). And things of that nature.
Hmm, a comment on my own remark. If we define “language” as a system of symbols (oral or visual) conveying meaning, then mathematics is also a language. The question then becomes: what language to use? And the answer is: the best possible one in any given situation. That is mathematics in physics, chemical symbols to describe the composition of medicines, medical jargon to diagnose and remedy an ailment, and a common language for the patient to describe the pain.
My question remains, however: what language would one use in philosophy?
The idea is that some people think there is a direct correspondence between reality and the language used to reflect it. There were debates in the philosophy of perception ~50 years ago that were around exactly this. At the time I was an undergraduate (~20 years ago) this was still echoing through the field.
As another, less global, example, Storrs McCall (McGill logician/philosopher of law/philosopher of space and time) in class briefly suggested there was a profound ontological difference somehow reflected in mass nouns vs. count nouns. A classmate pointed out (correctly) that the distinction is language relative (and is perhaps not even a linguistic universal).
Both “commonplace words” and reasoning from grammar to metaphysics are thus fraught with difficulty.
One should use language coupled with appropriate exact tools (which are semi-linguistic).
I think the only other “exact tools (which are semi-linguistic)” that exist are actually just languages. Not semi. Full on. Math and symbolic logic, for example, are just languages. The only thing that distinguishes them is component simplicity and a forced removal of ambiguity (see Sense and Goodness without God, pp. 31, 126).
Words in those languages are not allowed to be ambiguous but must be precise, which is both an advantage and a disadvantage; and it is an arbitrary choice we humans made, i.e. we built those languages specifically to have a non-ambiguous mode of description. Then we found out it’s extremely hard to reliably or efficiently describe anything that way, because actual things are fuzzy and complicated far beyond the capacity of math or logic to capture in a single-component way, making math-logic descriptions far more difficult and complicated. But we pay that cost in exchange for precision and analytical reducibility.
Thank you for getting back to me. I apologize for the lack of clarity in my question. I’ll try this: let’s take away our sensory perceptions one at a time, and then sections of cognitive awareness (which seems somewhat vague). At what point would a person stop “being”? I look inside and honestly cannot be definitive as to just how it is that we, I, exist beyond the obvious. We’ve all heard this: live in the now. Yet we can’t help but live in the now, so obviously this “now” is a state of consciousness. So envision you are living in the now: what is the mind doing? If you’re in the now, you’re not projecting images. Can a mind be silent? Thank you again, I appreciate your work, and we need to find a way to reach the people who need to hear what you are saying. One more thing, please: have you read any of David Bohm’s work? E.g., “Thought as a System,” “The Implicate Order,” etc. Respectfully, Bill Rogan
A person is a stored set of data, not consciousness. Consciousness is only the model your brain builds of you (sometimes inaccurately we now know). You are what you are conscious of; you are not your consciousness. And the more you remove (in stored data or mechanical functions), the more you slide a person into a mere animal and then animals of lower and lower cognitive capacity (e.g. a “vegetative state”). A person only exists when a certain accumulation of data and active potential is achieved. That’s why most animals never become persons.
Mind cannot be silent because [mind] = [experiencing sensation or thought] is a tautology; but that’s if you mean by mind only the operation of a brain (active model building, i.e. thinking and experiencing). If you mean by mind the stored information (e.g. the sense in which one still has a mind even when completely unconscious) then you are not talking about consciousness anymore. You’re just talking about the potential capabilities of a brain; not their activation.
Thank you again, I feel there is more and I of course could be wrong. Please think about the following.
“In considering the relationship between the finite and the infinite, we are led to observe that the whole field of the finite is inherently limited, in that it has no independent existence. It has the appearance of independent existence, but that appearance is merely the result of an abstraction of our thought. We can see this dependent nature of the finite from the fact that every finite thing is transient.
Our ordinary view holds that the field of the finite is all that there is. But if the finite has no independent existence, it cannot be all that is. We are in this way led to propose that the true ground of all being is the infinite, the unlimited; and that the infinite contains and includes the finite. In this view, the finite, with its transient nature, can only be understood as held suspended, as it were, beyond time and space, within the infinite.
The field of the finite is all that we can see, hear, touch, remember and describe. This field is basically that which is manifest, or tangible. The essential quality of the infinite, by contrast, is its subtlety, its intangibility. This quality is conveyed in the word spirit, whose root meaning is “wind or breath.” This suggests an invisible but pervasive energy, to which the manifest world of the finite responds. This energy, or spirit, infuses all living beings, and without it any organism must fall apart into its constituent elements. That which is truly alive in the living being is this energy of spirit, and this is never born and never dies.”
This is from a eulogy written by David Bohm, a brilliant physicist/philosopher and writer; he worked on the Manhattan Project. His ultimate goal was to combine science and the search for meaning.
That’s all just gobbledygook.
It’s late and you are tired.
I’m not sure I get this… “You are what you are conscious of”… Does that mean a person has an identity because he is conscious? That can’t be right… If I’m asleep and dreaming, in my dream I am still me (perhaps twisted and warped, but still me). And other persons who see me asleep will also see me as me… I must have missed something, but I don’t know what…
“Does that mean a person has an identity because he is conscious?”
No. Identity remains even when unconscious. Identity is a physical pattern of arrangement, currently of neurons (the stored data of the person, comprising the person: memories, inclinations, personality, skills, etc.). And the causal history related thereto (a person’s continuity over time, continuous but with change, is a function of causal history).
Consciousness is being conscious of the person you are. Consciousness is not the person itself.
The only thing I know that separates us from computers and furniture, is that unlike them we care about what will happen to us… I found there is no meaning OF life, but rather meaning to us derives from our awareness that our next move or lack thereof will have a consequence for us… we know that pain, pleasure and contentment will be a result of our next gamble. That seems intimately involved in our consciousness.
Computers can be programmed to care about what will happen to them. So that’s not a difference. But yes, being consciously aware of a self, and a narrative history of that self, and making decisions to affect the future based on that awareness, is what distinguishes us—even from most other animals.
Except that plenty of other animals also do that as well.
“A person only exists when a certain accumulation of data and active potential is achieved. That’s why most animals never become persons.”
Why can’t animals be persons too? What is the limit of accumulated data and potential required to be considered a person? It’s quite arbitrary, and a lot of people consider a human foetus or neonate to be a person.
A person requires a narrative history of a self, or the active building of one. And to do that they have to be able to build a self-model, that includes meta-cognition (the ability to compare what they are thinking, to what they infer others are thinking, and think about what they themselves are thinking).
No animals have that capacity except a very few we almost never encounter (e.g. elephants, certain omnivorous birds, cetaceans, great apes) and even that may be challenged in some cases (though the evidence is intriguing) and is extremely primitive. For example, it’s unclear if African greys are self-aware enough to actually build a narrative history of themselves and compare their thoughts to others’ thoughts…the best case-study was ambiguous on some of these points, and was a one-off yet to be replicated.
A “person” does not exist, if there is no cognitive self-model, or none being built. Because a cognitive self-model is ultimately what a person is. And this is not arbitrary. It’s simply what we mean by a person, and all inferences we draw from being a person (the ability to enter social contracts, to have self-describable desires, to think about oneself and make decisions based on that self-knowledge, etc.) follow only from that property and no other (so we could not arbitrarily change the definition of a person, without destroying all those inferences, and thus eliminating any significance to the word; “person” would then cease to mean anything relevantly different from “thing”).
Fetuses are not persons. They are capable of becoming persons, they have the machinery, and in the third trimester are already running the assembly program for it (which creates another key difference: between having the ability to someday have the machinery to become a person, and actually having the machinery to become a person), but the actual attributes of personhood (e.g. the ability to meta-cognate the difference between self and others) do not actually arise until even many months after birth (we are hindered in testing this by communication limits, but communication skills are sufficient to verify metacognition already before age 2; and other related skills are booting up well before that).
In the U.S., since Roe v. Wade we have legally assigned personhood (provisionally or actually) on the grounds of running the assembly program for selfhood (i.e. if the computer is not off or not yet assembled, but actually actively booting up: hence similar rights also extend to full persons who are unconscious, e.g. coma victims), and we begin assigning more rights as the program starts hitting milestones of actualized personhood. No other animal has that assembly program, and thus no other animal can be running it. Except maybe the very few species I mentioned, and even that remains uncertain; e.g. that we could boot-up Koko into a person may have more to do with how we programmed her than with how she or other apes would develop naturally in the wild, but the mere fact that we could do that entails she had the equipment to either get there, or get close enough to at least be a liminal case: the data were sufficient IMO to recognize Koko as a person. We should therefore not assume other apes aren’t or can’t get there too.
But most animals have no self-consciousness. And no capacity for it. Some can learn their names and engage in empathy (i.e. read feelings), but this does not correspond to actual meta-cognition, i.e. they can’t model and thus think about what someone else is thinking (with some exceptions, e.g. some monkeys can do this but have not yet advanced to meta-cognating themselves) and, again, they can’t model themselves and, as a result, they can’t think about what they themselves are thinking. And hence they don’t build narrative memories of themselves. They thus have no self, nor any sense of self. There is no “person” assembled in their mind.
That said, though, animal rights are not assigned on the basis of animals being persons (they can’t enter into social contracts, for example, so human rights would be a meaningless concept to them, and will always be so: i.e. animals aren’t even actively developing into beings capable of that). We decide what rights to extend animals based on humanitarian needs (e.g. we don’t want persons in our society who enjoy or are callously indifferent to causing pain, and animals definitely feel/experience pain, so animals become a proxy for detecting dangerous persons) and social needs (e.g. socially we just don’t want to live in a system that permits the gratuitous killing of standardized pets; the same reasoning follows for human babies, but they at least are actively becoming persons, in a way pets never are; likewise any animal we allow the eating of).
So according to this, foetuses are not persons, and human neonates are not persons either. What about humans with severe mental disabilities? Or the elderly with dementia so severe that they can’t even remember who they are or form any model of their past and future? Does that disqualify them from personhood? Your last comment about killing also seems like an arbitrary ad populum. So killing pets and babies is only bad because people are squeamish about it? So that means there is nothing immoral if I decide to euthanize my newborn because my partner and I decided to break up and no longer want a kid to tie us together?
Neonates are provisional persons under the law, on the grounds that they are actively compiling a person in their minds (in fact this starts in third trimester before birth, hence Roe v. Wade allowed state interest in protecting fetal rights in the third trimester, and only disallowed that for earlier trimesters, on the grounds the compiling was not in those stages occurring). But no, they do not fully become persons until sometime in their second year.
Severe dementia has no effect on self-model awareness or self-model building or narrative memory (it only hinders access to some of that memory, or in some cases the adding of new memories to it). So they have not lost any aspects of personhood. They have only lost access to some pieces of themselves as persons.
Wanting to kill a baby for such trivial reasons proxies you as a sociopath (indeed it would be disturbing even to kill a pet for so trivial a reason). We don’t want sociopaths who kill so arbitrarily running around free. Nor could you rationally feel good about yourself being one. Which is what makes it immoral to choose to be one (you may instead be insane, but then we lock you up for that, too). Precisely what worries you about people doing that, is exactly why you ought not do that. But that reason has nothing to do with babies having fully developed cognitive self-models. It only has to do with babies actively building those self-models (and the cruelty of interrupting that active process). It is that that we value, and why we abhor anyone who would not value it.
But if babies weren’t building self-models, they’d just be like most other animals, and never progress to any other state. They should then be treated with the same sympathy as most animals warrant. But not as persons or anything becoming a person. Babies who never build self-models, would be indistinguishable from pets.
If your argument is based on law and legal precedents, the state of Uttarakhand in India granted animals legal personhood. And there are projects underway to grant apes and cetaceans personhood as well. If neonates do not become persons until the 2nd year, then what is the rationale for granting them a right to life? Being in the process of forming something doesn’t mean they have that something yet. One can argue they are in the process of becoming persons from the moment of conception, as many anti-abortion advocates keep saying. After all, they can’t begin to compile a personhood in their minds without first having a biological body capable of sustaining and nourishing that mind, and that process takes place throughout the entire pregnancy term.
Your argument for killing infants or pets for “trivial” reasons is fallacious. Who decides what is “trivial”? Hunters go and kill animals for the sake of a trophy or a short-term adrenaline rush. Is that trivial? People slaughter trillions of animals every year for no reason other than the taste pleasure of some bacon or a cheeseburger. Is that trivial? And you have no idea how good or not I would rationally feel about myself by getting rid of an unwanted baby. Just because you are squeamish about it doesn’t mean you get to decide it is irrational or that people who do it are sociopaths.
Severe dementia to the point of forgetting who you are and what your name is would seem like a case of losing access not just to some but to ALL of your narrative memories and pieces of your personhood.
If personhood is to be used to apply to just anything, then the word is meaningless.
You can’t change reality, by changing what it’s called.
So referring to examples of people completely destroying the word by defining it as anything they want is not helpful here. That’s just evidence of eliminating the significance and utility of the word. We may as well just say “thing.”
What we want to know when we want to know if there is a person, is whether there is meta-cognitive self-awareness, or an active process of its assembly. That’s an objective reality that matters, not a subjective wish. Because only entities with that property can engage in moral reasoning and thus be held responsible for their decisions, and only entities with that property can negotiate, enter, and maintain social contracts and thus be treated as entities that do, and only entities with that property can value their own lives and thus have futures that matter to them (as opposed to having no concept of the future or of oneself or even of what life or death is).
If instead you want “person” to just mean “any object we want to treat a certain way” then even rocks and emotions can be persons. Which renders the word useless. Stick to practical reality. Stop trying to define words out of existence.
As to the matter of a conceptus, a conceptus is no more a person than a stem cell in your finger is (they both have the exact same DNA and potential capabilities). There is a non-trivial difference between a disassembled computer that is being assembled, and an already-assembled computer that is booting up, experiencing the world, and compiling results from those experiences. Likewise there is a non-trivial difference between a fetus that has no functioning or operating brain yet (it’s still being built, like the computer still being built), and a fetus that does have a functioning and operating brain and is in fact actively using that brain to build and assemble a continuous mind. There is also a non-trivial difference between that, and a fully cognitive person. These are actual objectively factual differences. Renaming them can never change that fact. And thus no name you give them can ever change the consequences of each different fact.
Hence rights are attenuated to abilities. Rights of provisional persons exist for actual provisional persons (not potential provisional persons—only potential rights exist for potential things; actual rights only obtain for actual things). And the rights of full persons exist for completed persons. And the rights of partial persons exist for partial persons. Thus a baby has fewer rights than a toddler, a toddler fewer rights than an adolescent, and so on. These are not trivial distinctions. They are real distinctions, fundamental to organizing a functional society. One can not “wish” it any other way. What works, is what works. Regardless of what we “think” or “wish” would work. We can’t make a baby a functional adult by calling it an adult. Nor can we make a river a functional person by calling it a person.
Similarly, the fear I have that you would disregard an animal with an active mind that is actively becoming a person, and gratuitously kill it, is that this makes you a danger to me and society, because it signals you have no empathy or respect for developing personhood. That would be an objective fact about you. Not a subjective feeling of mine. And that’s why being such a person, would make you a bad person. Someone we need to take steps against.
As for animals, we gain vastly more utility from them than nutrition or pleasure. Literally hundreds of material products, many you depend on in your life, employ components from slaughtered or husbanded animals. Our only obligation to them is to treat them humanely. They otherwise have no concept of life or future. They do not “value” having a future; they don’t even know what a future is. And they do not value themselves. Because they have no selves. Death literally means nothing to them. And never will mean anything to them. If that were different—if pigs actually were actively developing into meta-cognitive selves who could comprehend and thus value being alive, and we just needed to wait for that to finish compiling—then we should help them do that, take care of them, and not interfere by killing them unnecessarily. But pigs don’t become that. Therefore none of the obligations we’d have to them if they did, exist. The phenomena don’t exist, so the obligations don’t exist.
And finally, the condition of losing ALL your narrative memories of yourself, is called being brain-dead. And the dead are certainly not persons. Not anymore.
Obviously personhood is not just being applied to anything. Don’t strawman. The argument is based on the cognitive capacities of nonhuman animals. That’s why no one is saying a sponge or a tardigrade is a person. If you want to insist that personhood is based on self-awareness and building a self-model, show me where that is defined. The only definitions of personhood I can find are either anthropocentric and by definition exclude any nonhuman entity at all, or are broad enough to include other nonhuman animals. Btw, corporations and some ships are legal persons under the law. What say you about that?
All words and definitions are necessarily subjective. Personhood is not some natural property that exists in the universe and we happen to discover. It’s a concept that we invented and are struggling to define precisely. You’re right that there are objective facts about conscious creatures. You’re wrong that there is no subjectivity in how to define personhood or where to draw the line.
Self awareness is necessary but not sufficient to be able to enter into legal contracts, abide by laws, or be held accountable for your actions. These require more cognitive capacities beyond just self awareness and building a model of yourself. 10 year old kids are pretty self aware, yet they are far from the point where they’re able to enter into contracts or be legally punished for their actions.
You also missed the point regarding the conceptus and foetus. Yes there are objective differences between a foetus with no brain and a foetus with a brain that is warming up and beginning to learn how to function. My point is that neither of them at that moment possess the actual property of self awareness. As you mentioned, even neonates do not have this property yet. So my point is that neither of them is more of a person than the other, by your own definition. Me saving money to buy a house vs someone else not saving money doesn’t change the fact that we both lack a house to live in. My future potential to acquire a house may be greater than the other guy, but the fact is at this point in time we both equally lack the property of home ownership.
Why should we respect provisional developing personhood and not just full persons? And why not go the other way and respect all sentient conscious beings? You’re a full person, not a provisional one, so claiming to fear for your own life is ridiculous. Again, this highlights how arbitrary your position is. Your argument, if I get it correctly, is: persons are entitled to life, provisional persons are entitled to life, potential persons and non-persons are expendable for whatever trivial reason we like. Would you like to show us what is so objective about that? These are just YOUR values, not some fundamental law of nature. I can say the same about you. You have no empathy or respect for sentience and consciousness.
As for the products we gain from animals, there are plenty of plant-based or synthetic alternatives for just about all of them. And even if there isn’t, it wouldn’t be too hard to develop them if we wish. So yes, all of our farming and slaughtering of animals is for trivial reasons.
If animals have no concept of a future and have no ability to value or plan for a future, how come many animals deliberately store food or fatten themselves up in preparation for winter hibernation or migration? How do chickens restrain themselves from clicking a button for a quick small reward and instead choose to wait longer for the button that gives them a bigger reward? If they have no concept of time at all, their existence should only be conceived in the here and now, and they should immediately grab whatever food is offered at that moment.
And constructing a model of themselves as a person is not one of them.
As opposed to what?
I think you need to sit back and get clear why it even matters to you whether something is called a “person.” That’s just a sound. Why does it matter whether that sound is uttered at the sight of a thing, rather than some other sound?
Once you figure that out, then maybe you can engage in this conversation productively.
Calling something a word, does not cause that thing to have any powers or rights, or change anything about how we should think of it or treat it.
Powers and rights and disposition derive from the physical facts of the thing. Regardless of what you call those facts.
I’m telling you what physical facts have what powers and what physical facts entail what rights and what physical facts entail what dispositions.
The words you use for those facts is irrelevant.
It’s a basic tautology: Language is useless unless it is useful. So what definition of person is useful? One that identifies as a person an entity that cognitively produces a concept of itself as a person? Or are entities that never have any self-concept, any self-knowledge, any self-awareness, and never will, also “persons”? And if so, who then cares? You’ve just defined “person” as a synonym of “animal.” And we already have that word: “animal.” Likewise any other definition you come up with.
That’s actually not true. That’s a liberal myth. Want to know how to tell? Ask if corporations or ships born in the United States become citizens and get to vote.
Corporations are owned by persons, and thus the law recognizes a corporation’s rights as if they were a person because they constitute the corporate rights of the humans who own the corporation. But the corporation is not actually a person. It’s called a legal fiction. And it is entirely derivative of there being actual persons involved (the owners of the corporation). Thus, corporations have a legal right to free speech, because corporations are the tools of shareholders and shareholders have a legal right to free speech. The same is true of ships: they are assigned the fictional status of a person as a proxy for the owner(s) of the ship, who is a human person (or a collective of human persons). But as such ships don’t have a “right to life,” for example, or to vote, or even enter into contracts—some actual human person has to sign the contract. Because they are a tool of human persons, not actually a person.
They actually can. Kids that age do those things in places all over the world, and always have. You are confusing the ability to understand and enter into a contract or take responsibility for an action, and our social leniency regarding their competence to do those things wisely.
That’s a choice we make as a society, in reaction to the physical facts of the person’s abilities. Hence as I said: we extend more and more rights, as a person’s abilities increase. A full person will be an adult person. But a partial person exists as soon as they have a concept of being a person: a meta-cognitive self-model. Which usually develops in the second year of life. Before that they are a developing person, and as such physically different from partial and full persons. Cognitively aware, learning, building and constructing a mental model, compiling a person in themselves (assembling their personhood over time). Hence we assign them provisional personhood, and thus assign them provisional rights. Because we, as a society, value that physical status, and recognize what it means (cognitively for the toddler and for their future as an adult).
And this is all true even if the word “person” didn’t exist. The word “person” does not have magical powers. Strike it from your vocabulary and just describe the physical facts you are concerned about and why. That’s all there is to reality.
And get the facts right.
Animals do not “deliberately store food or fatten themselves up in preparation for winter” because they know what’s coming. They have evolved the habitual practice of doing that, because the habitual practice of doing that keeps them alive. They don’t know what they are doing or why. They have no conception of the future; they don’t even have a narrative memory of the past (not, at least, in relation to themselves; they merely “have memories,” not memories of “what happened to me”).
And animals do not “restrain themselves” to seek delayed rewards because they know what they are doing. They simply learn the habit of it, because when they act a certain way, it feels better than when they act another way. Evolution built them that way. They do not “reason it out.” They have no reason. They do not need a “concept of time” to do this any more than you need a concept of centripetal forces to ride a bicycle. And they certainly have no concept of themselves. Chicken brains are vastly too simple to be capable of any information processing even a fraction that complex (they have less than 1% of the neurons of a human brain), and their brains anatomically lack any of the structures we know are necessary to process that kind of information.
Note that even worms “learn” the same kinds of things chickens do. Including the worm that they programmed into a robot, which had just 302 neurons. Does it have a concept of time? Certainly not. Of itself? Certainly not.
“And constructing a model of themselves as a person is not one of them.”
And you have yet to show why this ability is a necessary component of personhood, or even that it is a universally accepted definition of personhood. Personhood is a loaded and controversial term with no accepted definition or criteria. Just because you state it in a certain way doesn’t mean you’re right or that any other definition is wrong.
“As opposed to what?
I think you need to sit back and get clear why it even matters to you whether something is called a “person.” That’s just a sound. Why does it matter whether that sound is uttered at the sight of a thing, rather than some other sound?”
As opposed to a non-circular, non-fallacious definition that can apply to nonhuman entities.
It doesnt matter to me what a person is. I couldn’t care less. It’s when people like you use personhood as an excuse to justify needless suffering and death of animals that is the problem.
“Calling something a word, does not cause that thing to have any powers or rights, or change anything about how we should think of it or treat it.
Powers and rights and disposition derive from the physical facts of the thing. Regardless of what you call those facts.
I’m telling you what physical facts have what powers and what physical facts entail what rights and what physical facts entail what dispositions.”
Actually they don’t. Rights are arbitrary and are based on what values we decide to have, and not exclusively on what physical facts and properties a thing has. It is YOU who is deciding to attribute rights based on personhood. There is no law of nature or fact of science that says how rights should be attributed.
“It’s a basic tautology: Language is useless unless it is useful. So what definition of person is useful? One that identifies as a person an entity that cognitively produces a concept of itself as a person? Or do entities that never have any self-concept, any self-knowledge, any self-awareness, and never will, are they “persons”? And if so, who then cares? You’ve just defined “person” as a synonym of “animal.” And we already have that word: “animal.” Likewise any other definition you come up with.”
Then call them self-aware entities. Why invent personhood as a term? And how is that definition of personhood that you make useful? Useful to whom? Useful for what purpose?
“They actually can. Kids that age do those things in places all over the world, and always have. You are confusing the ability to understand and enter into a contract or take responsibility for an action, and our social leniency regarding their competence to do those things wisely.” So that makes them partial persons according to you? What about people with mental disabilities who never mature past that age? Do they remain partial persons for life?
“That’s a choice we make as a society, in reaction to the physical facts of the person’s abilities. Hence as I said: we extend more and more rights, as a person’s abilities increase. A full person will be an adult person. But a partial person exists as soon as they have a concept of being a person: a meta-cognitive self-model. Which usually develops in the second year of life. Before that they are a developing person, and as such physically different from partial and full persons. Cognitively aware, learning, building and constructing a mental model, compiling a person in themselves (assembling their personhood over time). Hence we assign them provisional personhood, and thus assign them provisional rights. Because we, as a society, value that physical status, and recognize what it means (cognitively for the toddler and for their future as an adult).”
Who the hell is “we”? You keep talking as if humans the world over have reached a unanimous verdict about these things. Did anyone nominate you as our planetary spokesman? These questions are complex and have widely different answers in different cultures. Here in Canada, for example, our Criminal Code treats the killing of any human being as murder, regardless of age. No mention of “provisional persons” anywhere. The life of a 10-day-old child is just as protected as that of a 30-year-old.
“Animals do not “deliberately store food or fatten themselves up in preparation for winter” because they know what’s coming. They have evolved the habitual practice of doing that, because the habitual practice of doing that keeps them alive. They don’t know what they are doing or why. They have no conception of the future; they don’t even have a narrative memory of the past (not, at least, in relation to themselves; they merely “have memories,” not memories of “what happened to me”).”
And how do you know any of that? Did you ask them? And how can you logically have a memory of the past without some understanding that it involved you? What would the memory be of? How could the animal conceive it in the first place? How can a fish or a hermit crab “remember” to avoid a source of electric shock based on past experiences when they have no narrative memories?
“And animals do not “restrain themselves” to seek delayed rewards because they know what they are doing. They simply learn the habit of it, because when they act a certain way, it feels better than when they act another way. Evolution built them that way. They do not “reason it out.” They have no reason. They do not need a “concept of time” to do this any more than you need a concept of centripetal forces to ride a bicycle. And they certainly have no concept of themselves. Chicken brains are vastly too simple to be capable of any information processing even a fraction that complex (they have less than 1% of the neurons of a human brain), and their brains anatomically lack any of the structures we know are necessary to process that kind of information.” Once again, how do you know any of that? If a chicken has no concept of time at all, how can it anticipate an event happening in the future? And your analogy is a false one. We may not need a concept of centripetal forces to ride a bike, but we need some basic understanding of the concept of rotating objects.
“Note that even worms “learn” the same kinds of things chickens do. Including the worm that they programmed into a robot, which had just 302 neurons. Does it have a concept of time? Certainly not. Of itself? Certainly not.”
Worms can learn how to count? Worms can feel empathy? Worms can recognize each other? Worms can form strong social bonds with each other?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5306232/
“A person is a stored set of data, not consciousness. Consciousness is only the model your brain builds of you (sometimes inaccurately we now know). You are what you are conscious of; you are not your consciousness. And the more you remove (in stored data or mechanical functions), the more you slide a person into a mere animal and then animals of lower and lower cognitive capacity (e.g. a “vegetative state”). A person only exists when a certain accumulation of data and active potential is achieved. That’s why most animals never become persons.”
What!? I will likely get this wrong but I will try to get it straight. So a person is just data; perhaps a really big number. Our consciousness is the process of that “software” being run on a computer (our brain). If the computer (brain) or software is deficient enough, that biological entity is not a person, or that person may not be a person anymore, is that right? And some animals can become people? So people who think they and others are more than just data and biological machinery are just making an illusion for themselves? Do you talk about the ethical implications of this anywhere? Cannot some inhumane person take this out of context and justify something evil by saying certain people are not really people because they seem deficient in some way?
I think you are confusing different concepts. Data is information about a thing. Numbers are one kind of information, but not the only kind. And a thing is not the same as the data that describe it. A person is like a car engine: the structure is what makes it what it is, not just the isolated bits of information. Bits of information can’t propel an automobile. The arrangement of the structure does that. You can have data about what that structure is; but the structure is not the data.
Also, we can sometimes use the software-hardware analogy in neurology, but it’s not literally applicable. Software is only a thing in Turing machines. Human brains are not Turing machines. They are analog, not digital computers. Software is a way to make a Universal Turing Machine behave like another machine. But you could do that by skipping the Turing Machine and software and just building the machine you want to emulate. It’s then all hardware, no software. Human brains are all hardware, not software. Conscious awareness is then among the things that that machine does; it’s the output, not the software; indeed it is the output of the hardware, not of any software.
The human brain is all hardware, but configurable hardware: it can change (update, expand, reorganize), which we don’t really have an analogy to in computing yet. So we use “software” as the closest analogy, despite it being a stretch (per above); software also comes close to what we mean by short-term memory, but even then the analogy is not exact.
Computers roll out of the factory with fixed hardware, not continuously reconfigurable hardware. That’s simply because the former is easier to manufacture and operate and maintain on present technology. We could in principle build configurable hardware systems, which would then be more correctly analogous to human brains; we just don’t have any good reason to at present. The fixed-hardware Turing Machines we have are (currently) far cheaper and easier to build and maintain.
As to what you are asking about a “person,” I’m not sure what you mean. If you mean: a computer has to be configured in a certain way for it to be a person (as opposed to, say, an iPhone or wristwatch or arcade game console), then that’s obviously true. Just as a different configuration makes a watch vs. a game console or an iPhone. A thing is how it is configured. Configure it differently, and it’s a different thing. This is true of literally all things whatever. So that it is also true of persons is a trivial observation.
Meanwhile, all currently known people are animals. So I assume you mean, rather, that non-human animals can become persons somehow; I’m not sure what you mean by that. Which animals are you talking about? It’s possible that some animals have brains configurable to manifest a person (Koko the gorilla, for example, is a likely candidate). But in every case, it comes down to whether they are configured that way. Just as whether you have a watch or a phone comes down to how the thing you have is configured.
Consequently, you have committed a common fallacy, which I dubbed in Sense and Goodness without God the modo hoc fallacy, or “just this” fallacy. I suggest you read my discussion of it there (index, “modo hoc”). But in short, a tree is “just” atoms. But it’s still a tree and not, say, a pile of ashes. The only difference between a tree and a pile of ashes is how the atoms are configured. But that’s a really huge difference.
There is therefore never any real sense in which a thing is “just” atoms; a thing is atoms plus arrangement. And it is the combination of both from which all properties derive; e.g. a tree can grow and provide years of shade to sit under; a pile of ashes cannot; even if both are made of exactly the same atoms.
Thus there is never any sense in which anyone is “just” a biological machine. A plant and a tiger are both biological machines; yet there is no sense in which they are the same or have the same abilities and attributes. What makes them different is the way those machines are configured. Thus they are machinery plus configuration.
Just like people are.
There is no effect of this realization on ethics.
Ethics is a property of persons, by virtue of the way persons are configured, causing them to have particular abilities and attributes not possessed by other configurations of the very same atoms. Saying “they are just atoms” doesn’t have any effect here. Because they are more than just atoms; they are a configuration of atoms. Just as a tree is more than just atoms, hence it is different from a pile of ashes. The difference is all in the configuration of the atoms, and what powers and abilities that configuration thus bestows.
And it is from that that all moral truth derives.
To Keith Douglas: I would use a set theory M with Cantor’s Absolute Infinite and then use NBG or MK as a meta-language over M to declare that M has an absolutely infinite number of axioms. This has two advantages:
1) set theory is the canonical foundation of mathematics.
2) any possible logic can be translated bijectively to a segment of M.
What more would you wish for a logic? A paper with this proposal is currently being reviewed. No Russians involved this time.
My base position is that all of us are virtual computers which are part of the universal running program in the quantum mainframe some call GOD. However I have never found a way to conceptualize things like feeling pain… there is no way I can conceive of essentially ones and zeros being passed around a network, no matter how complex, ever having a “place” to feel pain… Brain neurons are not even directly connected… the synaptic gaps are electrically disconnected and only pass chemicals across the gap. There is one theory that the electro-chemical field generated by all the neural firings creates a central locus where an emergent property could exist… However I find that unsatisfactory, as my training in electronic engineering has taught me that the data must have a differentiated system to decode it… it seems like an infinite regress exists and no way to explain where it actually exists as a conscious experience.
All very unlikely. And void of evidence.
Pretty sure that’s why it’s called the HARD problem… but as to us being virtual computers as part of the running program of reality itself… that much is a certainty in a material world.
Humans, in not understanding “anything” about life, have invented meaning, religion, isms, etc. These inventions have become part of humankind’s reality; even nationalities are inventions not recognized in the natural realm. I think the word “actual” suits this discussion much better. I want to hear you discuss the notion of “self.” How is it that this notion is sustained? Do we need this notion, or are we stuck in a behavior pattern that some call mankind’s wrong turn? I think, actually think, that the self is not necessary and is the source of all conflict. Now if this is true, and many people think it is, then this implies that there may be a different source of action. I guess this will be more gobbledygook. However, can we look at how life is unfolding and say that this is all on purpose? It isn’t. Humankind is basically incoherent. Men do things and don’t get the result they want and continue the same behavior; this is basically insanity. I doubt that more knowledge will resolve all these issues, and that tells me there may be something else; of course there are no guarantees.
“Humankind is basically incoherent”… Well, I don’t know about humankind, but you certainly are! And I don’t even mean this maliciously; you should really make an effort to order your thoughts and make a logical assertion before you start banging words away on your keyboard…
Perhaps I see that I am incoherent, and you claim that you are not? I did not take your remark maliciously. Can you admit to yourself that there are many things you are unaware of? Is it because you haven’t uncovered certain things, or is it because your mind will never see them or accept them? Do you see that if it weren’t for humanity’s incoherence you would not have a career? Frankly I guess I am disappointed, because I see how easily you dismantled “belief.” I thought you would make clear the reasons people act this way, and that would end this behavior. What do you call people that sacrifice their children for the nonsense they call god? I think incoherent is an excellent word for this. I see too that you may be lacking in compassion, and that is sad. Here is one of the reasons I know people are individually and collectively incoherent: one could travel the world and ask everyone they meet, do you want to breathe polluted air? The answer of course would be “no,” yet we all breathe polluted air. I will bang out thoughts somewhere else. I think you need to keep doing what you’re doing, because you really pull the rug out from under religious belief. One of the things I’ve noticed about so-called intelligent people is that it is an across-the-board trait: there is an abundance of arrogance in some of these people, that pesky self-image getting in the way.
Try poetry; I think it will become you better… No, really, I mean this: your style of writing is very poetic… you should try it! (and no logic needed 🙂)
I don’t consider Animals on the furniture/computer side… not sure why you would go there?
Maybe you’ve lost track of the thread, but no one here was equating animals with furniture or (presumably desktop?) computers. We are all computers. The only differences are as spelled out above: what we are programmed to do; and information processing of increasing complexity probably produces qualia, and the most complex is self-model building, which leads to self-awareness (and thus the ability to talk about the qualia of being an information processor).
Pete then suggested plenty of animals also build self-models of the same complexity. That’s the comment you are quoting. Which isn’t true. Only very few animals come even close (e.g. elephants, certain omnivorous birds, cetaceans, and great apes) and they are still not complex enough in their self-models to communicate details of their own meta-cognition (e.g. even Koko was never documented discussing qualia).
I lived with a cat breeder for a decade (more than 20 adult cats), and I can tell you that they think, they remember, they know their own reflection in mirrors after a while; they show every sign of consciousness that we do, with the main limitation being they don’t seem able to create a detailed alternative reality as we easily do and compare their current life with that alternative.
Thinking and remembering and learning even a robot can do.
Building a self-model, and doing meta-cognition with it, cats don’t do. Nor do they have the architecture for it. No part of their brain has been identified as doing that; and all parts of their brain are accounted for by comparative and experimental anatomy.
Note there is a difference between “consciousness” as in experiencing qualia and using them to learn and make decisions (even worms—and indeed, IMO, probably even robots—have that), and consciousness as in self-consciousness, which does not mean “learning that mirrors don’t contain enemy cats” (that requires no recognition of self at all; there is a reason the mirror test has been highly criticized as useless: in some cultures, even human children fail mirror tests as late as age six, yet pass sophisticated tests for meta-cognitive self-awareness—which, notably, cats do not).
Personhood is a property of self-consciousness beings (or beings actively building that self-consciousness). Not of just anything that’s conscious. Again, even worms are “conscious” in that looser sense. So we need to be careful to avoid equivocation fallacies.
Oh come on Richard. We both know there is a difference between some computer one dimensionally processing information and generating output about a specific task it was designed and programmed to do vs. a sentient animal making complex decisions based on all sorts of complex sensory information from the surrounding environment. I am by no means saying computers will never become sentient, conscious, even self-aware, but we are far from that point yet. You were saying something about equivocation fallacy?
How do you know that cats and other animals don’t form a self-model and do meta-cognition? Which part of the brain is it that is responsible for meta-cognition in humans? And it’s kinda ridiculous to suggest computers and worms are conscious. Responding to stimuli is not consciousness. Conscious beings have a subjective personal experience of reality. Thomas Nagel said there is something that it is like to be a bat. Worms as far as we can tell only ever respond according to pre-programmed reflexes and never show any awareness of or ability to adapt to their environment beyond the basic reflexes they have. Computers so far are only ever able to carry out the tasks that are assigned to them and programmed into them. An autopilot can fly a jumbo jet all around the world in a flawless manner, but it will never spontaneously understand, learn, or do anything other than that unless someone rewrites its program.
Btw, I agree the mirror test is unreliable, but what other tests are there? Maybe you can link to a website or a book on the topic so we can look further into it.
I don’t know what you mean by “one dimensionally processing information.”
Modern computers are multi-dimensional networks now (your own desktop computer comprises multiple CPUs running complex parallel programming). And the physical processor is irrelevant anyway. It’s the software (the actual information processing) that makes the difference. Hence neuralnets can be run on standard machines now. In other words, everything an animal brain does, a machine can now do. The only thing left to discover is figuring out the precise wiring. That’s it. Configuration. There is no other difference.
Robots can learn and model the structure of their own bodies now. That this inevitably entails they experience qualia has been pointed out since Dennett analyzed Shakey the Robot in Consciousness Explained decades ago. We’ve already replicated a complete worm brain with software: and it works. Which means all qualia the worm experiences, that robot experiences. It’s the same brain. Process is all that matters. Configuration. Not the material it’s made of.
We and all other animals are also just computing inputs and “generating output about a specific task it was designed and programmed to do.” We are just a configuration of wires. We are just a massively complex input-output algorithm, run in parallel processing through a neuralnet. The only fundamental difference is that we are programmed by evolution (our base program is the brain structure, how it is wired by the third trimester, which is built under instruction by the DNA code: another bit of software) and environment (we are programmed to self-program through learning by interacting with the environment; we now have robots that do this, too).
Sentience is just another program. It’s just another form of information processing. Shakey the Robot and the Lipson-Zykov Bot also “make complex decisions based on all sorts of complex sensory information from the surrounding environment.” Hence they learn what structure and capabilities their bodies have, and the structure and capabilities of their environment (they are programmed with neither information), and use that information to navigate environments and obstacles. Animals do the same thing. Like the Worm Bot. Which literally has exactly the same brain as an actual worm. Only it’s made of software code instead of neurons.
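The “Worm Bot” claim above (the mapped C. elegans connectome of 302 neurons driving a robot) rests on the point that a connectome is, computationally, just a weighted graph whose activations propagate tick by tick. A minimal sketch of that idea, with neuron names and weights invented purely for illustration (this is not the real worm wiring):

```python
# Toy sketch of running a "connectome" as a weighted graph.
# The wiring below is hypothetical; the real simulation uses the
# mapped synapses of the actual worm.

# (pre-neuron, post-neuron) -> synaptic weight (negative = inhibitory)
connectome = {
    ("touch_sensor", "inter_1"): 1.0,
    ("inter_1", "motor_left"): -0.8,   # inhibit left muscle
    ("inter_1", "motor_right"): 0.9,   # excite right muscle: turn away
}

def step(activations):
    """One tick: each neuron sums weighted input from the previous tick."""
    neurons = {n for pair in connectome for n in pair}
    nxt = {n: 0.0 for n in neurons}
    for (pre, post), weight in connectome.items():
        nxt[post] += weight * activations.get(pre, 0.0)
    # simple threshold nonlinearity: negative sums do not fire
    return {n: (v if v > 0 else 0.0) for n, v in nxt.items()}

# Stimulate the touch sensor and watch activity propagate.
state = {"touch_sensor": 1.0}
state = step(state)   # activity reaches the interneuron
state = step(state)   # then the motor neurons: right fires, left is suppressed
```

The “program” here is nothing but the wiring table; change the configuration and you get different behavior, which is the point being argued.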
We are all machines. The only differences that matter among these machines is how complex the information processor is, and how integrated the information processing is (this latter part is what makes the difference between, for example, the Worm Bot and Shakey and the Lipson-Zykov, and some non-learning, straight-code industrial robot; although even the latter might experience qualia, it just will be even simpler and vaguer than what animals do or advanced bots like the Worm or Shakey), and consequently what it is capable of being conscious of.
Are animals conscious of things? Yes. So is Shakey. And the Worm Bot. And the Lipson-Zykov Bot.
Are animals conscious of themselves? No. No more than those robots are.
Are animals conscious of consciousness? As in, does their information processing ability include meta-cognition? No.
Are animals conscious of having a future? No.
Will animals ever be conscious of having a future? No.
Are animals conscious of themselves, as individuals with thoughts and goals and history? No.
Will animals ever be conscious of themselves, as individuals with thoughts and goals and history? No.
Do animals have information processors complex enough and configured to comprehend the value of life and comprehend death and comprehend themselves as persons? No.
Do animals have information processors complex enough and configured to develop the ability to comprehend the value of life and comprehend death and comprehend themselves as persons? No.
Again, there are a few exceptions—a very small list of species, as I already noted; and humans.
And this makes the objective factual difference between full persons, partial persons, developing persons, and non-persons. If a brain cannot construct a comprehension of itself as a person (if it can’t meta-cognate a self-awareness), it cannot construct a person. No construct of a person, no person. Period.
How we value these different categories, relates to what they can accomplish: non-persons never have personal rights because they will never be persons; developing persons will have provisional personal rights because they are actively developing into persons; partial persons will have partial personal rights because they are partially persons. But things that are none of these things (not actively developing into persons; nor partial persons; etc.) are not persons. The word “person” has no utility in society under any other definition.
(1) Because if they did, they would exhibit the consequent skills and abilities. They do not. (As opposed to, for example, Koko the Gorilla.)
(2) Because if they did, their brains would anatomically possess the structures of sufficient complexity and configuration required to do that. They do not. (Only a very few animal species on earth do, and all are very rare.)
(3) And there is no plausible scientific explanation of how they could have such abilities and (a) have no corresponding brain structures and (b) never exhibit the corresponding abilities.
The primary locus is the human prefrontal cortex, but the distinguishing features are the structure of that prefrontal cortex, not the mere presence of a prefrontal cortex. Similar structures are not found in almost any other animals.
Human personhood, however, is a product of more than merely the meta-cognitive structures in the prefrontal cortex. The self-model (and narrative memory etc.) is constructed across many areas of the whole cerebral cortex, which in humans is vastly more complex than in almost any other animal. It is again the structures, not the labeled anatomical areas, that we are concerned with. A cat’s cerebral cortex is far simpler and lacks the structures distinctive of human abilities like self-model generation and narrative memory formation.
Why? Have you ever been a computer? How would you know what it was or wasn’t like?
Again, don’t equivocate between two different senses of “conscious.” Being conscious of qualia is not the same thing as being conscious of oneself. All animals, even worms, must be conscious of qualia. Which means the robot we built using the map of a worm’s brain must be conscious of qualia. Indeed the very same qualia that worm knew when in its previous body. Self-consciousness is a far more complex output of information processing. It requires a far more complex processor. Indeed, just the gigabytes of structure needed to run human consciousness alone, far exceeds the gigabytes of structure in an entire cat’s brain! (91,000 gigabytes in a typical cat brain; 2.5 million gigabytes in the human brain, of which at least 10% comprises the cerebral cortex, for 250,000 gigabytes: almost three times more than the entire cat’s brain.)
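For what it’s worth, the arithmetic in that parenthesis checks out under its own assumptions (the gigabyte figures themselves are rough estimates, not settled measurements):

```python
# Back-of-envelope check of the figures quoted above. The storage
# estimates are the comment's own assumptions, not settled science.
cat_brain_gb = 91_000        # quoted estimate: entire cat brain
human_brain_gb = 2_500_000   # quoted estimate: entire human brain
cortex_fraction = 0.10       # "at least 10% comprises the cerebral cortex"

human_cortex_gb = human_brain_gb * cortex_fraction   # 250,000 GB
ratio = human_cortex_gb / cat_brain_gb               # ~2.75x a cat brain

print(f"{human_cortex_gb:,.0f} GB of human cortex = {ratio:.2f}x a whole cat brain")
```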
It can be. When the information processor thus stimulated produces perception as its functional output, that entails qualia, and hence consciousness of qualia. Hence the kind of consciousness animals, e.g. worms, mice, cats, experience.
But if you mean that is not the same thing as self-consciousness, I agree. This is exactly what I’ve been explaining to you. Consciousness of a self is a vastly more complex output requiring a vastly more complex information processor.
So do sufficiently complex robots (like the ones I listed above). Any computer that generates and operates on perception of the environment is generating subjective experience of reality. Like Shakey the Robot does. And the Worm Bot and the Lipson-Zykov Bot. And all animals are doing, when they generate their subjective experience of reality, is running a program: input generating output, operating mechanically according to physical logic gates; they are just more complex robots. It’s all just wires (whether of metal or flesh). Configuration of circuits. Nothing more.
Thomas Nagel is a total kook and almost always wrong.
Nonsense. Worms learn, respond to the environment, and model their environment and make decisions from that modeling. There is no reason at all to assume this does not come with associated qualia. Any act of perception, must entail some form of qualia. Unless you think only at a certain level of complexity of information processing do qualia start to manifest. But then, that’s just all there is to it: at a certain level of complexity, qualia are experienced. That’s still not consciousness of self, consciousness of death, consciousness of thought, and so on.
The difference is always just more wires, more circuits, more complex neural connections. That’s the only difference between a worm and a cat: the cat has more neurons, and can thus do more complex information processing. But the cat is just an information processor, same as the worm. There is no fundamental difference. Just complexity.
First test, do they have a brain of enough complexity to generate a self-model and associated meta-cognition? That’s not sufficient, since some animals (e.g. the octopus) have the requisite neural complexity, but devote almost all of it to doing something else (in the case of the octopus, it runs its dermal camouflage system).
So you then have to study anatomically what the complex brain centers do. For example, we have studied meta-cognition in primates by studying the physical operation of their mirror neuron network. Having a mirror neuron network is a structural requirement for all meta-cognition, including self-meta-cognition (though monkeys do not display the latter ability). But having a mirror neuron network is again not sufficient to develop complex meta-cognition, much less self-cognition. Those require many other structures, so we can study whether animals physically have those structures in their brains. Almost all do not (or not enough of them to meet the requirement).
And finally, meta-cognition and self-cognition produce externally observable and testable behaviors. Almost no animals exhibit those behaviors (they don’t act on meta-cognitive knowledge; therefore they don’t have meta-cognitive knowledge; they don’t act on self-cognitive knowledge; therefore they don’t have self-cognitive knowledge).
Read up on how we discovered the remarkable cognitive abilities of the rare few animals that have them: that teaches you how we know other animals don’t have those abilities.
For example, Wikipedia has a good article on elephant cognition. The studies of bird intelligence (through brain anatomy and behavioral tests and observations) are more scattered and inconclusive (some not sufficiently rigorous, or not replicated), but you can get a start in the entry on bird intelligence (the birds with requisite anatomy and behavior may include the Magpie, African Grey, and Crow). Likewise apes, but the same problems in the science persist there (see this analysis of the behavioral tests of Koko’s cognition). And Cetaceans.
Yes, we are all machines running software. That is beside the point. Sentience is having the ability to feel and have a subjective experience of the world. Having conscious awareness is not simply responding to stimuli. We can conceive of ourselves having the subjective experience of a cat or a fish or an eagle. We cannot conceive of ourselves having the subjective experience of a chair or table. Is there any reason to believe robots have subjective experiences? Can you conceive of yourself as experiencing the subjective feelings of a robot? Animals, at least the more advanced of them, are sentient and possess consciousness. No non-biological entity so far has that capacity, though robots will likely get there in the near future.
“So do sufficiently complex robots (like the ones I listed above). Any computer that generates and operates on perception of the environment, is generating subjective experience of reality. Like Shakey the Robot does. And the Worm Bot and the Lipson-Zykov Bot. And all animals are doing when they generate their subjective experience of reality, is run a program: input-generating-output, operating mechanically according to physical logic gates; they are just more complex robots. It’s all just wires (whether of metal or flesh). Configuration of circuits. Nothing more.”
How do you know any of that? How do you know the robot has any experience at all, and is not just operating according to non-mental information processing? We still don’t know at what minimum level of complexity a brain can still generate a mind. We don’t know why consciousness even evolved if non-conscious processes like the robots you mentioned can adapt to their surroundings. Your bombastic confidence that we can be sure a cat and a robot experience the same thing is just ridiculous speculation.
“Nonsense. Worms learn, respond to the environment, and model their environment and make decisions from that modeling. There is no reason at all to assume this does not come with associated qualia. Any act of perception must entail some form of qualia. Unless you think qualia start to manifest only at a certain level of complexity of information processing. But then, that’s just all there is to it: at a certain level of complexity, qualia are experienced. That’s still not consciousness of self, consciousness of death, consciousness of thought, and so on.”
Obviously there has to be some minimum level of cognitive capacity that can generate qualia. Every living organism responds to its environment; even single-celled bacteria can detect stimuli and change their behavior accordingly. It’s still hard to conceive of a bacterium experiencing qualia or having a subjective experience of the world. Why should information processing necessarily lead to qualia? My pancreas detects and responds to my blood sugar by increasing insulin production. Yet I never have any qualia or any subjective experience of this process. It wasn’t until modern science that we even came to know that this complex process is taking place inside every one of us.
“The difference is always just more wires, more circuits, more complex neural connections. That’s the only difference between a worm and a cat: the cat has more neurons, and can thus do more complex information processing. But the cat is just an information processor, same as the worm. There is no fundamental difference. Just complexity.”

So? Same with humans. It’s all a sliding scale of complexity of information processors.
“First test, do they have a brain of enough complexity to generate a self-model and associated meta-cognition?”
What level of complexity is that? How far can you hack a human brain before it reaches a point of insufficient complexity to generate a self model?
“For example, Wikipedia has a good article on elephant cognition. The studies of bird intelligence (through brain anatomy and behavioral tests and observations) are more scattered and inconclusive (some not sufficiently rigorous or unreplicated) but you can get a start in the entry on bird intelligence (the birds with requisite anatomy and behavior may include the Magpie, African Grey, and Crow). Likewise apes, but the same problems in the science persist there (see this analysis of the behavioral tests of Koko’s cognition). And Cetaeceans.”
I’ll look them over. By the way, the link for Koko is no longer functioning.
I’ve been saying this to you repeatedly. We are all just reacting to stimuli but it is not “simply” that, because our stimuli are more complex (a neural network generates experiential perception as the stimulus, not a mere tit-for-tat reaction) and our ability to react to those stimuli is more complex (reason, intelligence, comprehension). But most animals don’t have the latter. They have perceptual experiences as stimuli, but all they do is react to them. They can never “think” about them. Much less think about themselves. They have no sense of self. No equipment to build any stimuli as complex as a self-model. They have no meta-cognition. They have no self-cognition. And we know this. It’s a scientific fact. It’s an observationally confirmed fact. It’s an anatomically confirmed fact.
So once again: Sentience in the sense you mean is not self-consciousness. It’s not meta-cognition. It is not the assembly of a person. It’s therefore not being a person.
Yes. I’ve told you why. Repeatedly. All sentient beings are is information processors. Absolutely nothing else. Just that. Therefore any information processing that does the same thing will have subjective experiences. Q.E.D. What information processing does that? We have observed it to be anything that produces an output of integrated perception. Do robots do that now? Yes. I gave you several examples. And now we’ve even made a robot with an identical information processor to an animal (a worm) that does that. There is, to the contrary, no reason to believe these robots don’t have subjective experiences.
But “having subjective experiences” is not “being a person” in any useful sense relevant to how society employs the term “person.” If all you want to mean by “person” is “having experiences” then robots are persons, worms are persons, every information processor producing integrated perception is a person, and that simply dilutes the meaning of the word. It then no longer has any significance to call something a person. It’s meaningless. It then is just a synonym for “sentient,” which entails nothing as to rights, powers, or dispositions. And as it then makes no distinctions as to powers, rights, and dispositions, it has no use. We may as well do away with the word.
Yes. Same as I could a worm or any other animal.
But it would not be me “myself” experiencing those feelings. Because they have no self. So there would be no “me” having those experiences. I could not think about them, comprehend them, appreciate them, or do anything with them cognitively, because there would be no “I” to do any of those things. I would simply be a noncognitive reactor responding to perceptual stimuli; I would be having experiences, but incapable of thinking about those experiences or relating them to any concept of myself. I would have no narrative memory built out of them.
You keep conflating different things as a “mind.” What is a “mind”? When does a “mind” exist in your conception?
You keep confusing “mind” as in any neural net generating perceptual experience, with a “mind” as a neural net that is generating and employing a meta-cognitive person-model.
Those are not the same things.
We know already what it takes to get the latter kind of mind: anatomical equipment almost no animals have; and we know this because we have observed the correlation between having that equipment and exhibiting all the abilities it entails, and lacking that equipment and lacking all the abilities it entails.
That’s just fact. So please stop denying well established science.
So how do we know a worm robot has the same experiences as a worm? Because of that same fact: we observe when you have the equipment, you have the ability; nothing else needs to exist. There is no special secret magical fluid or something that gets added. It’s just a neural net that generates integrated perception. Period. That’s it. There is nothing else. Which means when you have that, you have the effects of it: subjective experience. To argue otherwise is to argue for some special secret magical fluid or something that has to get added. But there is no evidence any such woo mumbo jumbo exists or has any role to play in generating subjective experience.
And we confirm this by observing the worm exhibits the abilities that result from having an integrated perception of something: it learns (it builds a model of its environment), and it reacts to phenomena that require more than mere tit-for-tat impulses (it is doing more than direct reflex action). It therefore must have some form of integrated perception. Because if it didn’t, it couldn’t do those things. Dennett explains this quite well in his analysis of Shakey the Robot, as I directed you to earlier.
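The distinction between a pure reflex and learning can be made concrete with a toy illustration. This is a hypothetical sketch with made-up names, not a model of actual worm neurobiology: a reflex maps the same stimulus to the same response forever, while a learner's response depends on its stored record of past stimuli (here, simple habituation).

```python
# Toy contrast between a tit-for-tat reflex and a learner (hypothetical;
# not a model of actual worm neurobiology).

def reflex(stimulus):
    # Pure reflex: same input, same output, every time.
    return "withdraw" if stimulus == "touch" else "ignore"

class Habituator:
    def __init__(self, threshold=3):
        self.counts = {}          # internal record of past stimuli
        self.threshold = threshold

    def respond(self, stimulus):
        # Response depends on stored history, not just the current input.
        self.counts[stimulus] = self.counts.get(stimulus, 0) + 1
        if stimulus == "touch" and self.counts[stimulus] > self.threshold:
            return "ignore"       # habituated: repeated touch treated as harmless
        return reflex(stimulus)

worm = Habituator()
print([worm.respond("touch") for _ in range(5)])
# ['withdraw', 'withdraw', 'withdraw', 'ignore', 'ignore']
```

Behavior that changes with history like this is the observable signature of an internal model; whether it also entails qualia is, again, the question under debate.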
Therefore, it has the anatomy that causes those effects. Therefore we know it must cause those effects. And those effects are the observed behavior and subjective experience. Extremely primitive and simple subjective experience. But subjective experience all the same. To suggest otherwise is to insist there has to be some additional special secret magical fluid or something that gets added. But you have no evidence any such thing exists nor any reason to believe any such thing exists. It’s woo nonsense.
Now, as I said, maybe, perhaps, subjective experience emerges only at a certain complexity of integrated information processing; and maybe that “certain complexity” is more complex than a worm brain. We have no evidence whatsoever that that’s the case. But let’s just suppose. What complexity does it arise at, and how do you know that? Maybe it only arises at human complexity, so that in fact even cats don’t ever have subjective experiences—they aren’t even sentient! You have no more reason to deny that than to affirm it, if you are going to arbitrarily insist “some degree of complexity” is needed when you have no idea how much.
But here’s why that’s not likely to hold up. It would be extremely improbable (a really bizarre coincidence) that a cat could behave in ways that require the equipment we know generates subjective experience, yet not generate subjective experience. And if that’s true for the cat, it’s true for the worm. And if it’s true for the worm, it’s true for the Worm Robot. And if it’s true for the Worm Robot, it’s true of the other robots I described, which are even more complex than the Worm Bot.
Sure. And we know something of what it is: any system that generates integrated perception, generates qualia. We therefore have zero reason to believe any system that generates integrated perception doesn’t generate qualia. If you want to arbitrarily invent some special point at which extra complexity is needed, then that point may well be, for all we know, way past cats. So your own reasoning eliminates even the sentience of cats. And the only escape from that consequence is to appeal to the correlation of cat behavior with qualia generation. And that argument entails worms and robots experience qualia (not the same qualia, but some primitive qualia). There is no way to escape this conundrum. If the argument works for cats, it works for worms. And if it works for worms, it works for robots. And you have exactly zero evidence otherwise.
But they don’t produce integrated perception, e.g. they don’t learn and model their environment (or their own body, etc.). They don’t feel pain (and thus don’t react to pain). They don’t feel pleasure (and thus don’t react to pleasure).
Worms react to pain. Why then should we think they don’t feel the pain they react to? There is no scientific or plausible reason they shouldn’t. It would be a bizarre coincidence that when worms and cats react to the same thing, pain, that one “experiences” what they are reacting to and the other doesn’t. If you can react to the perception of pain without experiencing pain, why would animals ever have evolved the ability to experience pain at all? And what extra magical fluid thingy has to be added to get a pain perception circuit to “also” generate an experience of what is being felt? As Cottrell argues in “Sniffing the Camembert” (see again the article you are here commenting on), there is no logical argument to be had here: if two entities are reacting to pain, they are both perceiving pain, and if they are perceiving pain, they are experiencing pain. Full stop.
It sounds like you want to create an arbitrary world, where “certain” animals don’t experience what they perceive and others do. But you have no basis on which to decide which animals do and which don’t. You have no way to draw the line. No knowledge. Which raises the question of why you even think this at all. If you have exactly zero knowledge of where to draw the line, how do you know where the line is at all? You don’t. Unless you use a means that actually can determine where the line is drawn. And there is only one objective set of observations that does that: the anatomical presence of integrated perception, and the behaviors that integrated perception causes that can’t be caused without it. If you observe both of those things, you have exactly zero reason to believe qualia aren’t being generated by those systems. And those systems exist in all animals and certain robots—literally everything that has (a) a neural net computer configured to include (b) a perception algorithm.
And again, this is moot. Because sentience only tells us which things can have experiences, and thus what things we should be humane to, i.e. help them avoid pain and suffering or at least not add more than they’d experience without our care. Rights attach to abilities. Very few rights are entailed by merely “having experiences.” The more capabilities, the more rights. But the rights of persons, as currently defined by society (e.g. under the law), attach only to entities that build self-models: i.e. that build persons. And almost no animals do that. From there, you can have varying degrees of personhood: potential, provisional, partial, and full. But those degrees require increasing stages of person-development. And only brains that have the extremely complex anatomy required to build persons (“selves”) can manifest any of those stages. All other brains cannot generate self-consciousness, cannot think about themselves or what they are experiencing, cannot build narrative memories of themselves, cannot plan or comprehend life or death or the future; they are all just systems of noncognitive perception, habit, and reaction. The future means nothing to them. Death means nothing to them. Because they do not have the anatomical equipment we know is needed to comprehend any of those things.
The only Meaning I find is that of an agent understanding that its next move will bring it a consequence, that it cares about what it is doing for its own self interest. Cats amongst other animals seem to fit this.
That’s not comprehension. And thus not self-consciousness. Cats operate on habit, they don’t know what they are doing or why. Nor do they know who or what they are. They react to stimuli. They don’t learn by comprehending what they learn. They learn and take action to effect results the same way we learn how to ride a bike: we develop the intuitive sense of what actions will produce which outcomes, without actually thinking or being aware of why that is (or even when we know, we don’t use that knowledge at all to ride: cognitive processing is too slow at the conscious level to ride a bike that way). All cat knowledge is noncognitive knowledge. And that’s why they lack meta-cognitive skills and never develop a sense of self or outcome beyond the immediate moment.
The only thing that matters in the present discussion is whether a cat is consciously aware that it is doing those things. But cats have no comprehension of what “self-interest” even is; they are not cognitively aware of the difference between cooperative and self-serving behavior. They simply noncognitively acquire semi-altruistic habits. They don’t know they are doing those things or why or what the significance of their doing them is. And they certainly do not reference any of this to a model of the self. They have no conception of “I” vs. “you.” They have no personal conception of themselves, no concept of themselves as a person, and that’s why they are not persons.
In split-brain patients it seems that we humans don’t comprehend either… we seem to make up a story that we tell ourselves about why we did X or wanted Y, but in reality our actual motivations may have been completely different… so I don’t think you can really know what a cat tells itself or does not tell itself about why it is doing what it is doing. But the important point is that they seem, like us, to be doing it for themselves and not simply out of reflex or habit… I have witnessed cats having goals and being persistent about obtaining them even when they clearly are being told by the world around them that they can’t get what they want at the moment… that does not seem either reflexive or habitual, as they are being pushed by something internal that reality should tell them to stop seeking.
That’s actually not true. You are talking about confabulation; that’s abnormal, not normal, brain behavior. Yes, when you break the brain, it screws up. That does not mean it screws up when it’s not broken.
The brain is always trying to guess at the story of you; it’s building a model of you, just as it builds a model of the world around you. But it’s not perfect; it will get things wrong (e.g. mis-estimate a distance or the presence of water; mis-guess a motive; etc.). But the only reason it does this at all is because it usually gets things right. If it didn’t, it wouldn’t have evolved these abilities.
So yes, when we stick electrodes into a brain or cut wires, we can confuse it into making mistakes in its construction of models. That’s uninformative of what it does when we aren’t screwing with it or damaging it.
But this has nothing to do with what we are talking about here: cat brains can’t even confabulate narratives about themselves!
And we know this because they lack the brain anatomy required for it, and exhibit no behavior that would result from it.
So yes, we very well know what cat brains can and can’t do. Because if they don’t have a brain structure to do it with, they physically can’t possibly be doing it. And if they exhibit none of the behavior that would result from their doing it, they obviously aren’t doing it.
Finally, persisting at a behavior does not indicate knowledge of the future. It is not planning. It’s simply repetition until enough cycles indicate futility. That’s a completely intuitive, noncognitive, habitual response.
I am not sure what sort of behavior you would need to convince you otherwise… I remember watching one of our cats, Cindy, attempt to occupy the top of a coffee table as she had done in the past, not aware that another cat, Alex, had broken the glass top and we had removed it… she jumped over the metal frame that normally held the glass to where she estimated the glass used to be, and I swear there was a look of surprise on her face when her paws failed to find that tabletop… she fell through… but instead of updating her experience of the world to indicate that the table was no longer there, she went around and tried again, sure that the table was there. After the second failure she accepted that it really was gone… it seems to me that this required a lot more brain power than you seem to give her credit for…
Meta-cognition: being able to think about what others are thinking and act on that information; Self-cognition: being able to think about oneself as a person and one’s own thoughts and desires and act on that information.
No animals pass these tests except, possibly, the very few species I listed. Cats definitely don’t pass any tests of these things, unlike those other species.
Surprise at things being different than they have been habituated to expect is not meta-cognition nor self-cognition. Learning after repeated tries that something doesn’t work is not meta-cognition nor self-cognition.
You are confusing intelligence, which even worms have, and even robots have now, with meta-cognition and self-cognition. These are not the same thing. “Person” does not mean “an entity that has some intelligence.” If it did, then it would just mean “learning algorithm,” as even robots exhibit learning intelligence now. And all animals do. Even worms and gnats. But if “person” just means “learning algorithm,” it doesn’t mean anything useful. We may as well just do away with the word and stick with the one we already have.
If you want to talk about science, it has been scientifically proven that many of the animals kept in zoos or factory farms experience a wide range of emotions like joy, happiness, stress, anxiety, depression, optimism, pessimism, etc. To pretend there is no difference in the cognitive capacity between them and a robot is ridiculous. Let me know when your laptop or iPhone experiences some semblance of emotion.
Sure. Not relevant to anything we’ve been discussing. I’ve repeatedly told you there are differences in cognitive experience between advanced robots and animals and people. But there are also commonalities (e.g. they all experience qualia). I’ve already noted we have humanitarian obligations to animals because they experience suffering (we haven’t yet programmed robots to do that, except the Worm Bot; but it does not experience the more complex emotions of mammals). But by far most animals (indeed even by far most mammals) still aren’t persons, have no self-concept, and thus cannot comprehend life and death and thus the value of either. You seem to be the one who wants to conflate every being as if they were the same. I’m the one explaining to you the differences among them. And those differences matter.
Hi, Dr. Carrier. I was wondering what you thought of this objection/response that is commonly put forward against Mind-Brain Physicalism:
“The soul’s true nature is immutable, but that it interfaces with the body only through the brain, and that brain damage can distort this interface and cause a person to act in ways not in keeping with the true nature of their soul”.
This is similar to the response Lenny Esposito gave in your debate with him. He analogizes it to hardware corrupting software, claiming that the software’s existence is not contingent on the hardware. You didn’t really give a response to this point (due to time, probably), but I’m interested in what your response would have been (besides it being ad hoc and unparsimonious, of course).
Also, could it be that we as MB Physicalists are committing a post hoc/correlation fallacy, by assuming that because someone suffers brain damage and then their mind is altered, that the mind alteration is caused by the brain damage?
Looking forward to hearing your reply.
There is no evidence for any immutable anything, soul or otherwise. And no one is immutable; everyone changes as a person over time, even without brain damage. So there is zero reason to believe in any such thing, and all evidence is against it.
In essence these are just circular arguments, re-asserting the conclusion (“there is a soul”) as a premise in the argument to get the conclusion (that “there is a soul”). Irrational.
Science operates by empirically identifying the best explanation of observations, which will be the explanation that involves the fewest ad hoc assumptions and correlates most strongly with observed facts. That’s mind-brain physicalism.
See the full illustration of this point, showing six converging lines of evidence, in the section on it in Sense and Goodness without God. “Soul theory” has not even one line of evidence, much less six that all converge on and thus corroborate the same conclusion.
The problem with the notion that the best and simplest explanation of mind is physicalism is that it seems to assume we can already explain consciousness, which to my understanding is still a distant dream… while physicalism may explain reasons for different levels of consciousness, it has zero explanatory power towards consciousness itself, and that’s a very important thing. There is apparently no place in the brain where a singular consciousness can be found that can explain what all of us apparently share.
No. That’s not how science works. We don’t have to see atoms to know they probably don’t contain elves and don’t require supernatural powers to work. We can build multiple corroborating lines of evidence to ascertain the entire structure of the atom (we now know it all the way down to quarks and leptons) without knowing “everything” about the atom or even being able to see one. That we don’t know “why” quarks exist or have the masses they do and operate as they do, does not “undermine” atomic physicalism.
Same for the brain. We have multiple corroborating lines of evidence confirming consciousness is only produced by physical machinery. And that’s it. End of story. There is no evidence of anything else. No elves in atoms. No elves in your head. Just a computer.
(You also need to get up to speed. You seem ignorant of the science. No one thinks there is a “singular place” in the brain that is “conscious.” Modern cognitive science establishes consciousness is a product of the whole complex machine, not a “place” in the machine.)
Personally, I don’t see a problem with physicalism explaining consciousness, that is to say, a third-person consciousness. But the ratio of conscious matter versus non-conscious matter is so enormously small on the atheistic, physicalistic worldview, that I do see a huge problem to explain my first-person consciousness. The probability for me to be conscious on atheism seems so incredibly small that it calls for an explanation that physicalism cannot provide.
I don’t fathom what you mean by “conscious matter.” Or how you have derived a ratio between it and non-conscious matter. Do you mean by “conscious matter” brains and “non-conscious matter” the rest of the universe? And if so, what is surprising about that? That consciousness requires the organization of a computer with high specified complexity means that obviously very little matter in the universe will be producing it. And that consciousness requires the organization of a computer with high specified complexity is evidence that consciousness arises from the organization of a computer with high specified complexity. We have no evidence anything else is required. And abundant evidence this is all that is required.
If you mean something else, explain.
I do indeed mean that brains are conscious matter and the rest of the universe unconscious matter. But in the atheistic worldview, there is nothing special that makes a distinction between conscious matter and unconscious matter. So the probability to be born as me must be the same as the probability to be born as some animal, or even some lump of dead matter that also weighs 1.5 kg. Claiming that I was extremely lucky to be born as an intelligent person is not an explanation. There must be something unobservable that provides this explanation, such as a soul, a God, or a transcendent reality. Your answer does not reach further than explaining third-person consciousness, but it is about my first-person experience.
WTF?
Why do you think there is no difference between a highly organized information processing neural network, and anything else in the universe?
On the combination of atheism and physicalism, there is no such difference. Our highly organized brains do not require special laws of nature. Nature is indifferent toward the computation of brains and will spend no more effort on computing it than on any other lump of mass that weighs 1.5 kg.
Are you thinking of a probability distribution that favors intelligent matter? That looks like an ontologically very expensive device. It is similar to a divine decision making program. There is no known law of physics that supports that.
So…you literally think the human brain does not have complex specified connectivity, does not process information any more substantively than an ant, and is no different from a sack of grass?
You are starting to sound insane.
You seem not to know there is an entire field of knowledge called neuroscience.
Try getting up to speed.
I am not against the conclusions of neuroscience at all. Rather, they explain third-person consciousness. But with only third-person consciousnesses, I wouldn’t be alive. You seem not to be aware of Chalmers’ hard problem of consciousness.
Again, you are confusing knowing everything about a thing with knowing enough about a thing. Neuroscience establishes, via six lines of converging evidence, that nothing more is involved in producing human consciousness than physical information processing.
If you don’t know what those six lines of evidence are or how they verify the hypothesis, I told you where you can go find my summary of them and bibliography of supporting science.
I am well aware of Chalmers. It is not relevant to my point. Not least because I follow Cottrell, not Chalmers. You need to catch up.
I am curious what you would say in response to the following thought experiment: imagine a reality R in which it is possible to make perfect duplicates of things and beings. Then imagine you step in a duplication machine of a wicked professor who kills one of the pair after some time t. From an objective description of reality R, nobody dies, especially not if you make t very small. But I wonder what you would say here. Do you have a chance to die? A 50 percent chance? Or consider this thought experiment in our own reality: you take your brain and another brain and you interconnect them gradually with biological neurons. Will you have access to all the qualia of the other brain after the first neuron has been established? And if not, after how many? Again, neuroscience can describe the objective reality. But apparently personhood is reality plus something extra, namely something that decides which neurons a person’s mind is correlated with. Many different persons can be constructed from a single set of neurons that change through time, including persons that die instantly. From your position (or from Cottrell’s), can you answer these thought experiments?
Wrong. You created a cognitive twin and then killed them. (Unless you add that they are resurrected unharmed after, which you didn’t.)
That’s the objective reality neuroscience (and indeed just physics, full stop) describes.
The mere fact that we don’t cognitively twin ourselves is trivial; we could have been like amoebas and reproduced this very way. It’s merely an accident of evolution that we reproduce not by cognitive twinning but by gamete mixing (which, also by accident of history, can sometimes create only genetic, not cognitive twins).
If on the other hand you are talking about changing as a person (people’s brains, hence minds, change constantly, as do trees, houses, nations), you evidently have a confused and completely unusable concept of identity, one that would allow me to legally steal everything you own, because it’s no longer “exactly the same stuff” you paid for or built.
I suppose it will be you who will be legally robbed of your belongings, since you have no objective method to construct a person from a set of neuron cells. Again, I will stress the cases of 1) two separate brains, 2) two brains that are biologically connected via a single neuron cell, 3) two brains in which every neuron cell is biologically connected to a neuron cell of the other brain, and 4) two brains that have fully merged into a single brain. I bet you cannot determine where between 1) and 4) the two persons have become a single person, even if you get all the physical facts. An objective, third-person description of the neurobiological reality clearly is not sufficient to determine this.
About dying: since I believe in a plenitudinous multiverse, there always exists a resurrected duplicate somewhere, so dying is literally impossible. Everything and everybody evolves into God.
That I don’t know every particular of how my car’s engine works has no bearing on my conclusion that it’s the same engine that I bought four years ago, with wear and tear and some parts replaced, and that it is only my engine that propels my car and not gremlins or magical souls.
Ditto what I know about my brain and how it stores and generates my mind.
You seem incapable of grasping simple concepts like this. Why?
As for “two brains that are biologically connected via a single neuron cell,” a single neural cell would be incapable of sustaining a conscious link between them. Such brains would be two separate brains; even more so than split-brain patients, who have a substantial number of surviving connections (in the billions). Yet one half of their brain remembers and sees and learns things the other does not, and thus has a different causal history. They are thus halfway between being one person and two separate people; and the only thing preventing the latter are those connections yet remaining, which are basal and thus more fundamental to personhood. Even so, that one half is separately conscious from the other solely due to a physical severing of physical neurons refutes any notion that anything nonphysical is going on in producing their consciousness. So even that example refutes you.
Meanwhile, all the other examples you list here are just a single brain, not separate brains. They would thus generate a single, distinct, separate consciousness.
Your remaining statement is delusional tinfoil hat completely divorced from any evidence whatever. Which now explains a lot.
With all the things you admit here, you must also admit that there exists an interconnectedness of two brains that involves at least three persons: one half, the other half, and the union of both. Such an interconnected brain will use three forms of ‘I’ and ‘me’, depending on who is speaking. But if you admit that, you also have to admit that there are arbitrarily many persons, all defined by which neuron cells are consciously experienced, and which are not. And then the problem spreads to normal brains, because it is very well possible to experience everything of a normal, healthy brain except, for example, the hearing. Or everything except the sight. Or nothing at all: the zombie case. Or the superbrain: an extra-dimensional brain that experiences everything of a normal brain, plus extra thinking about it. This implies there are very many persons that can be constructed from a healthy brain. So what a coincidence that my experience provides me with the full package of what is within my skull, and nothing more. At least, that is my subjective knowledge, even though it is not accessible by other people.
In my vision, physical matter is absolutely responsible for generating consciousness. The problem is what decides which consciousness. Why not the union of everything that is conscious? Why not nothing at all? And why not the super- or subbrain?
I don’t think you know what the word “arbitrary” means. Precisely three, a number fixed by physical facts, and their interrelationship entirely explained by the physical facts, is the opposite of arbitrary. That’s why it disproves your point. (You are aware these people really exist, right?)
Statements like this tell me you must be lying when you claim to know anything about neuroscience. Only someone who knew little about the actual neuroscience of consciousness could think “just anything can be conscious” (neuroscience has proved the contrary), that “nothing” is needed to produce consciousness (neuroscience has proved the contrary), or that “just any part of a brain” by itself can produce full human consciousness (neuroscience has proved the contrary). And nothing in neuroscience rules out the future production of super-brain super-consciousnesses, precisely because of the necessary and sufficient link between physical structure and consciousness neuroscience has amply proved to date; and nothing in neuroscience rules out the possibility of isolated sub-brain consciousnesses (human and sub-human), but in fact has proved them physically possible, and explained their existence solely with physical facts (e.g. split-brain patients exhibit dual consciousness).
What you seem to be doing now is reducing personhood to reportability of personhood: being able to say ‘I am a person’. If a person has their own mouth to speak and hands to write, you would accept this as a person, but otherwise you deny it. Consider, for the sake of a thought experiment, a layered brain, with 26 layers: the core brain part A can report personhood, but receives no information from part B and higher; part B receives everything that happens in part A, but receives no information from part C and higher; etc. So B and higher cannot report their personhood, even though they know more. They can only observe that A reports to be a person that B and higher are not. Would you assign multiple persons to the layered brain, just because of its structure? Or would you admit only the core A, or maybe only the outer layer Z? I think physicalist atheists have a problem with admitting that B is a person, because physicalism fails to explain why B is neither A nor Z. You would need something magical or transcendental for that. Something that assigns it to be alive with a probability greater than zero. On the other hand, B is clearly conscious, since it processes everything that happens in A, and A is conscious. And thus B is a person that cannot be explained by physicalism.
No. I am not doing any of that. I am listening to neuroscience. You are ignoring it.
A person is the sum of memories, personality traits, desires, reasoning and other skills, and like attributes, that distinguishes them as an individual, combined with the ability to generate conscious models of themselves. Every single one of those things has been proved to be the physical product of physical computation in a physical organ, without which the property ceases to operate or exist. This has been confirmed by multiple converging lines of evidence (at least six, as I document in Sense and Goodness without God).
Go read the evidence. Stop bothering me with all this made-up, counterfactual nonsense that doesn’t address any of the actual scientific evidence nor even seems aware of it.
You continue to succumb to the strawman fallacy. I do not claim that physical matter cannot generate consciousness. It does generate consciousness. I only say that, in an atheistic, physicalistic worldview, there is something extra needed to turn all the third-person consciousness into a first-person consciousness. Without this extra, I would not have any personal experience. Do you understand the difference between 1) explaining that consciousness will arise in some atheistic worldview according to the laws of physics and neuroscience, and 2) explaining my first-person experience?
You have zero evidence that “there is something extra needed”; and we have multiple corroborating lines of evidence that there isn’t.
That’s the difference between your pseudoscience, and actual science.
When you talk about the summary of the six lines of evidence, do you mean in your book (Sense and Goodness without God), or are they freely available on the web? I never found them.
Good lord man. I repeatedly told you they are in my book. With links. It’s available electronically for a mere six dollars and change.
I read your evidence for mind-body physicalism in Sense and Goodness Without God. Unfortunately, all six arguments do nothing more than explain the success of neuroscience. Therefore, they all appeal to Chalmers’s easy problems of consciousness. My problem is a hard problem: how can it be explained that I experience the mind of one particular brain, rather than none at all, especially in the presence of so much non-conscious matter? I conclude that the metaphysical reality consists for the most part of conscious, mind-generating matter. So I am also a mind-body physicalist. However, your conclusion that my observable, non-transcendent brain is needed to keep me alive must be false. Your six points do not exclude the possibility that we are simulated by a transcendent brain that provides us with a consistent experience. Consistent in the sense of consistent laws of physics, and a consistent correlation between mind and non-transcendent brain. My philosophical position is therefore akin to Nick Bostrom’s simulation argument, which is not tackled by any of your six arguments either.
You are confusing having an explanation, with what that explanation is likely to turn out to be. We don’t know what caused the Big Bang, but all the evidence converges on it probably being physics, not ghosts. Likewise, consciousness.
Even apart from the obvious general fact—the total absolute failure of supernaturalism to successfully predict or explain anything whatever and all the evidence accumulating for naturalism instead (see Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them)—for the eventual explanation of consciousness being physical we have six more converging lines of evidence, each one of which is improbable on your theory but fully expected on mind-brain physicalism, and that is why almost all neuroscientists today (the actual experts) are mind-brain physicalists, and thus why you are only pushing pseudoscience:
(1) General Brain Function Correlation: never once has anyone ever observed a human mind functioning in the absence of a functioning brain.
(2) Specific Brain Function Correlation: for every individual function of consciousness, nothing mental happens without something physical happening.
(3) Positive Evidence Mapping the Mind to the Brain: nearly every conceivable mental event has been identified with a physical location in the brain that has been mapped.
(4) Negative Evidence Mapping the Mind to the Brain: we can remove a property of consciousness by removing the physical circuit that generates it (e.g. we can remove your ability to identify faces by removing the face-recognition circuit, yet the rest of your consciousness remains; we can remove your ability to see a color by removing the color-recognition circuit, yet the rest of your consciousness remains; we can create two partially separate consciousnesses in one brain by physically cutting the wiring between the two halves of the brain, with the result that one side is not aware of what the other side is experiencing; and so on).
(5) Brain Chemistry and Mental Function: when the brain lacks certain chemicals, or has too much, the mind will fail in certain ways, or change in personality; that the mind can be so affected by brain chemistry makes more sense on mind-brain physicalism than any other theory.
(6) Comparative Anatomy and Explicability: The mental powers of animals increase in direct correlation to the increased complexity of their brains. And in every species, when an animal, including humans, develops its mental abilities further (or gains skills, memories, cognitive abilities, etc.), this is always matched by the development of neurons or synapses in their brains.
None of these things is likely if a “transcendent mind” was causing any of these effects we collectively call “consciousness.” But all of these things are likely if the physical operation of physical circuitry is causing them. The likelihood ratio thus strongly supports mind-brain physicalism being the eventual explanation of it all. So it would be a waste of resources to investigate or test any other theory.
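The likelihood-ratio reasoning above can be sketched numerically. This is a minimal illustration only: the per-line Bayes factors and the prior odds below are hypothetical placeholders I have chosen for the example, not measured values from the source.

```python
# Illustrative Bayes-factor sketch of the "converging lines of evidence" argument.
# All numbers here are made-up placeholders, not measured values.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each line of evidence's likelihood ratio
    (Bayes' theorem in odds form)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Suppose each of the six converging lines of evidence is merely 10x more
# expected on physicalism than on dualism (a conservative, assumed figure).
lrs = [10.0] * 6

# Even granting dualism generous even prior odds of 1:1:
odds = posterior_odds(1.0, lrs)
probability = odds / (1.0 + odds)

print(f"posterior odds for physicalism: {odds:,.0f} to 1")
print(f"posterior probability: {probability:.6f}")
```

The point of the sketch is that independent, converging lines of evidence multiply: even modest individual likelihood ratios compound into overwhelming posterior odds.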
Your arguments partially attack Bostrom’s simulation argument, because, indeed, in an imperfect simulation you might find glitches, be it inside or outside of the brain. But if my transcendent brain is of divine origin, I don’t see why there would be glitches. It would be a perfect rendering of a reality that makes a lot of sense, which implies that naturalism goes a long way in explaining everything. On the other hand, Bostrom at least attempts to explain why I find myself conscious in the middle of so much non-conscious and non-intelligent stuff, as implied by naturalism. My explanation, and in my opinion the only possible explanation, is that rendering my conscious mind takes exactly one hundred percent of the computational resources of everything that exists. Otherwise there are things that remain unexplained, hence non-logical and impossible. That is also what you get when you use set theory to describe a sufficiently large multiverse in which everything exists infinitely many times. Every little difference in self-reproductive capacity will blow up to infinity, thereby turning every little difference between two minds into a winner and a loser with respect to their multiplicity. So my mind must be the mind that won all of these battles. And luckily I can enjoy natural explanations.
Nice. Totally ignore what I just said, bring up an irrelevant point instead, and then get wrong what I elsewhere said about that anyway. That’s, like, three own goals in a row.
For those following this sad, face-palming exchange, these are what I actually said about the unrelated issue of the Bostrom thesis and panpsychism generally:
Eight Questions (Q2: Sim Theory)
The God Impossible
The Argument from Specified Complexity against Supernaturalism
My apologies for all the sad face-palming I caused, but panpsychism is indeed unrelated to what I have been arguing. As are souls, ghosts, or any other immaterial substances. I have been arguing that a material, transcendent brain or computer that fills the greater part of reality explains my first-person consciousness much better than your atheistic worldview. And not a little bit, but about 10 to the power 500 times better. Because that is, according to physicists, how much a lifeless universe/multiverse must be larger than the observable universe, in order to explain the presence of third-person consciousnesses. You may have your arguments that transcendence and divinity appear unlikely, but this is no more than circumstantial evidence. Nothing that can be so certain as to bridge this gap of 10 to the power 500. I would summarize it as follows: I think, therefore most of reality is thinking transcendence. And this in turn increases the likelihood that God and afterlives exist. Nota Bene: I couldn’t find the term panpsychism in any of the links you provided. Panpsychism, in my opinion, is the crazy idea that rocks and atoms have consciousness.
There is absolutely no logical validity to the reasoning “I think, therefore most of reality is thinking transcendence.”
This is the difference between pseudoscience and science. Science is logical and based on evidence. You, by contrast, are simply ignoring both.
I think you are confusing the things we are conscious about with consciousness itself. I will grant that much of what we know can explain the subject[s] of consciousness, but there is nothing to explain the observer yet… please show this if I am mistaken. Science cannot just proclaim something exists because we wish it does, without some way to explain it, and to my knowledge there is nothing to explain the observer at this point in time. As to a singular place, there are some theories that such does exist, for example the very weak magnetic field generated by all the sparking neurons… which can be disrupted by a very powerful magnetic field placed near the head… which also seems to disrupt our ability to experience anything… but a magnetic field does not really explain anything about how it can be an observer.
Since the article we are commenting on says exactly the opposite of that, clearly I am not.
Science works by proposing a hypothesis and deducing what will and will not be observed if that hypothesis is true, and what will and won’t be observed if it is false, then looks for both, and if what it finds confirms and does not falsify the hypothesis, we know the hypothesis is probably true (to a probability proportional to the accumulated weight of passing all these tests in the evidence).
This is what science has done, and how it has confirmed mind-brain physicalism.
No contrary theory has achieved any empirical success at all, much less as extensively as that.
Hi, Dr. Carrier. I have some questions about The Myth Of An Afterlife, edited by Michael Martin and Keith Augustine. As far as I know, you’ve recommended it in the past. Q1 is the most urgent while Q4 is the least. Sorry if some of these sound a bit lazy, it’s just that the book is so large that I won’t be able to answer them myself easily.
Q1. Does the book address the common objection to Mind-Brain Physicalism, “Things like brain damage may alter consciousness but that doesn’t show that the brain produces consciousness. That is fallaciously equating correlation with causation. It could be that the brain is needed for optimal functioning of the mind but not for the mind to actually exist.”
This is such a common and major response to the evidence for MB P that it would be really disappointing if it weren’t adequately refuted. My problem is that the book is so large that I won’t be able to find it myself. I was hoping you could point out where they address it in the book.
Q2. The critical/negative reviews of the book are a bit concerning. Is it true that the authors don’t really address/anticipate objections as much as they should?
Q3. How essential is reading this book? Is it a must-read or just something nice to refer to every now and then?
Q4. Is the book peer-reviewed? How can I tell when books are peer-reviewed?
Q1: It would depend on what you would consider “addressing.” The objection is a fallacy of infinite goal posts: as soon as we show a function entirely dependent on a brain circuit, they retreat and say, well, maybe some sub-function of that function requires a non-brain part; then we show that that sub-function is entirely dependent; and so on. Once you see this happen a dozen times, their “retreat” has lost all prior probability. If every time we ask “Does this sub-function require a brain part?,” and we are able to test and we find out, we find “Brain is required,” it is no longer credible to claim “Next time we’ll get a different answer.” That’s simply unlikely at that point.
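The erosion of the “maybe next time” retreat can be modeled as repeated Bayesian updating. The sketch below uses Laplace’s rule of succession as a deliberately simple, assumed model (not anything from the source): after n consecutive “brain required” findings, the estimated probability that the next tested function breaks the pattern is 1/(n+2).

```python
# Toy model of the "infinite goalposts" retreat: every time a mental
# (sub-)function is tested, it is either found to depend on a brain circuit
# or not. Under Laplace's rule of succession (an illustrative assumption),
# n straight confirmations leave probability 1/(n+2) that the next test
# finds an exception.

def prob_next_exception(n_confirmations: int) -> float:
    """Rule-of-succession estimate that the NEXT tested function is
    not brain-dependent, after n consecutive confirmations that it is."""
    return 1.0 / (n_confirmations + 2)

for n in (1, 12, 100):
    print(f"after {n:>3} confirmations, "
          f"P(next exception) = {prob_next_exception(n):.4f}")
```

On this toy model, after the “dozen times” mentioned above, the retreat is already down to about a 7% chance of paying off next time, and it keeps shrinking with every further confirmation.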
Moreover, mind-brain physicalism follows from converging lines of evidence, not a single line of evidence. We can remove a function, we can stimulate a function, we can restore a function, we can alter a function; never have we found any function remains, any memory of it either, apart from the brain. And the causal cascade is always physical; for instance, we can physically trace the brain circuit that invents the color yellow: it is a processor that takes the RGB inputs from the eye and “calculates” when a differential signal should produce “yellow,” but it is in a different place than the RGB circuits, and is physically wired from them, and physical wires come out, which we can cut, and the signal won’t get to another processing circuit of the brain, and we can observe that that signal doesn’t get there, altering consciousness accordingly.
It violates Ockham’s Razor to observe all this and conclude, “Well, but, maybe there is something else as well, something not only required, but capable of maintaining structure and operation without all this circuitry,” as that is like observing Newton predicts all the planets’ motions and concluding, “Well, maybe there are also angels pushing the planets.” That just isn’t a scientifically plausible conclusion at that point. It is also two different things to theorize “something else is required” (there has never been any evidence of this; no mysterious sources of energy are interacting with the human brain, for example; all brain events so far have been entirely accounted for by ordinary metabolism and electrochemical neurocircuitry) and “that something can still maintain structure and function without the brain.” The latter theory has been decisively refuted.
If you need me to explain why brain damage studies accomplish that I’ll elaborate, but it should be obvious. If awareness of people’s faces, for example, requires the brain circuit that does that, and removing that circuit removes that ability, and thus all memories associated with doing it (so it’s not like there is a “second” brain somewhere accomplishing this task and recording it while the “first” brain fails to do either), there isn’t any way to claim that this ability survives the removal (the destruction) of that part of the brain. And as every function of consciousness has been correspondingly traced to a corresponding brain circuit that once removed removes that ability, if you remove all of them, there is nothing left that would count as consciousness (much less a person, who is a collection of memories and proclivities and skills and so on, all of which we’ve proved cease to exist when the corresponding part of the brain is removed).
This and many other points are covered in The Myth Of An Afterlife, but due to the fallacy of infinite goalposts, if you really fall for that argument, you will need many more sources than that, in order to track down “every” function and sub-function of consciousness and its discovered neural requirements, to see that the alternative theory always fails, and thus has no basis for believing it would ever yet succeed.
Q2: I haven’t checked those, but in general, when people are refuted, they always claim something wasn’t addressed, and when the issue is something delusionally crank (like disembodied soul theory), I too often find those claims to be false. So you would have to check for yourself: (1) is a thing not addressed, or is it? (So, is the critic lying, in which case you know you can dismiss the rest of their critique, as you already know they are dishonest.) And (2) does that thing really have to be addressed? (As in, is it even logically necessary to anything proved in the book that this other particular claim be addressed? Often the answer is no; it’s a red herring or other fallacy. So if you find a critic doing that, you have another reason to dismiss them.)
If you do this work and find any claim that (1) actually, in fact, is not addressed in Myth and (2) logically needs to be addressed for the thesis of Myth to be maintained, definitely let me know. It could make a worthwhile blog post.
Q3: I can’t answer that for you. Most people have no need to read that book any more than they need to read a book extensively debunking flat earth theory; they already know it’s crank. The same reasons someone might nevertheless “need” or want to read such a book vis-a-vis flat earth theory can be reasons someone might nevertheless “need” or want to read Myth of an Afterlife vis-a-vis disembodied mind theory. Note that there are no dualist theories of mental or brain function in peer reviewed brain science. So even mere dualism, i.e. even without the added absurdity of positing an invisible “second brain” that can detach from the first and still retain all cognitive functions and memories and so on, is just a philosophical speculation that has yet to find any scientific basis.
Q4: I don’t think it is an academic monograph in that sense, no. All of the authors are experts in various pertinent fields (some quite prominent; many working scientists in mind or brain studies, many actual professors of philosophy) and I believe they peer reviewed each other’s work for this anthology to some degree (though I don’t know to what particular degree), so it is of peer-review quality. But that’s a bit short of a blind peer review. So it depends on why you need a thing to be peer-reviewed. The central purpose of peer review is to ensure a work meets industry standards and is thus worth an expert’s attention; I would say this volume meets that standard. But if you need it to have met some yet higher standard than that, it doesn’t so far as I know. But I’ve never checked.
Huge thanks, Dr. Carrier. Wasn’t even expecting an answer to all of my questions, nevermind detailed answers like those.
Regarding Q1, I think I found my answers in Chapters 4, 5, 6 and 10. They’re quite similar to your answers.
Another question I have, Q5, is: what do you think of the “Argument from Physics” against dualism? The idea that any action of a nonphysical mind on the brain would entail the violation of physical laws, such as the conservation of energy.
Another question is: the evidence of things like brain damage may rule out dualism, but would it rule out Idealism (matter is reducible to mental… stuff)?
The Argument from Physics only works empirically. There is no a priori reason a “second substance” (like, say “ectoplasm”) or a dimensionless point or matterless volume possessed of “mind properties” couldn’t also engage in energy exchanges with normal matter and force particles; so the issue is not that this is impossible, but that it simply isn’t observed—so the hypothesis of “soul matter” or “soul fields” or “soul points” or whatever is merely empirically false, not logically impossible. If any of that stuff existed, it would obey its own physics, and in harmony with all other known physics. So dualism is not impossible conceptually. It’s just false empirically.
Only dualisms that are radically nonphysical might be logically impossible. But it’s really hard to describe such a thing; such a mind must lack not just mass, but also volume or any physically distinguishable content (so as to in no way be describable as itself just another physical entity), which also means it must lack any ability to interact with normal energy and matter (e.g. it can have no EM charge, or ability to generate EM charge, so as to move anything), which would be self-contradictory (how could a “mind particle,” as we would then have to be describing, produce observable effects in the world and not be interacting with anything in the physical world? That’s a direct contradiction of description). There is a sense in which only dualisms that are just another version of physicalism (like “ectoplasm” models) can have any logical potential to explain anything. See my article The Argument from Specified Complexity against Supernaturalism for more on this problem.
Some of the same points are made in Evan Fales’ formal peer reviewed work on Divine Intervention, much of which applies to any “disembodied mind intervention” scenario, not just godlike ones.
I don’t understand your last question. You seem to have left out a word or two? Possibly the question you meant to ask is what I address in my Critique of Rea? Idealism in general is refuted by Ockham’s Razor: no Idealism is ever needed to explain anything, never distinctly explains anything, and entails positing extremely bizarre entities nowhere in evidence. One hardly need appeal to convoluted arguments about brain damage to dismiss it.
My paper on consciousness and how it leads to God is out: “Proving God without Dualism: Improving the Swinburne-Moreland Argument from Consciousness”, published in Metaphysica (not a shoddy journal this time), and freely accessible on Academia and ResearchGate. It distinguishes three problems of consciousness: the explanatory gap, the personal identity problem, and the exceptional-point-of-view problem. The explanatory gap is the only problem Carrier is providing with any counter-arguments, as far as I can see. I think we all agree here that consciousness is related to the brain like an operating system is related to the computer hardware, as proposed by Dennett. The personal identity problem is the problem with the thought experiments about brains that are not everyday-life, skull-delimited examples of brains. For example, if brains can duplicate and merge with each other, then the number of possible consciousnesses is different from the number of brains (if at all you can still speak of brains). That is a serious problem. Our paper is about the exceptional-point-of-view problem: how can it be explained that I find myself as an intelligent consciousness if the vast majority of the naturalistic/atheistic universe consists of gas clouds, stars and rocks? The natural laws of physics have nothing in store to make any difference between the elementary particles that support a consciousness, and those that do not. Without anything supernatural, it simply cannot be explained, from a probability perspective, that I experience an intelligent consciousness. To get the probabilities right, we conclude that most matter must support an intelligent consciousness. Therefore a very large mind exists, which is probably God.
Post the URL to that article here (or if it is behind a paywall, email a PDF offprint to rcarrier@infidels.org).
A link to the article: https://wardblonde.org/metaphysics/proving-god-without-dualism/
Wow. Just read that. Whackadoo nonsense. I explain its incompetence in math, logic, and science here.
Hi, Dr. Carrier. Just a quick question about the mind and personhood. If possible, could you give a clear definition of what you mean by “person”? From reading your work, I think I know what you mean but I’m not sure.
Also, I’ve seen you talk about “narrative memory/history” a lot in your posts and in the comments. I’m having a hard time grasping what that means exactly. If it’s not too much trouble, could you also define or explain what it means?
Thanks.
See Sense and Goodness without God, index, “personhood” for a full explication and cited science.
Narrative memory is what it sounds like: you remember yourself and your life in the form of a causal narrative; which memory you can move around in consciously, and thus pick a time in your life to recall, and relate it causally to times before and after, all the way to now (and thence into the future, hence you can consciously plan). Animals don’t form or use memories this way.
And that is just one component of what constitutes personhood generally: a self-model, consciously explorable. Your brain builds a cognitive model of you as an individual and all its properties, desires, plans, and the like; you are thus capable of thinking of yourself as an “I” and others as a “you” or a “they,” and you can navigate around your self-model the same way you, and many animals, can navigate around a model of their external environment. Which entails a corresponding scale of metacognition: you can model other minds besides your own and thus understand and appreciate what other people are thinking, experiencing, wanting, and so on, and also track their narrative histories.
In short, conscious awareness of existing and existence, and all that that entails.
Re: God/the soul doesn’t explain anything. Yeah. I had a joke about this a long time ago. “You don’t believe in God? Do you believe in the wind?” and then I say, yeah, I know, the Billy Graham line, and, sure, the motion of leaves on a tree is evidence that there’s wind, but it’s not evidence that the wind is the breath of Zephyrus who’s blowing it because he loves us and wants us to be refreshed on hot summer days, and has a plan for our lives that involves anal sex!
God, in fact, does the opposite of explaining: every discovery we make closes a gap in our knowledge, and every one either fails to point to the existence of a god or points away from one. What can a theist say other than, "that's just a coincidence!"?
If the brain/"mind" is physics all the way down, do you think we could say that about anything/everything else? Even the universe/multiverse as a whole? Then we can dispense with the ridiculous "first cause" hypothesis so many people (even some atheists) cling to!

It reminds me of the Greek skandalon, which, in Christianese, is rendered "stumbling block" (I say "Christianese" because that's a term we used all the time, and Lattimore either hadn't heard it or didn't think it was a good fit, so he translated it in the slightly awkward way of something that "sets a person off [their path]").

And then, couldn't we instead propose that it might as well be the case that matter has always existed, since it can neither be created nor destroyed? And therefore was always in motion, and didn't need a "first mover" to "kick it" into that state?

I don't wanna go on a rant here, but I can't stand that people think believing in a higher power (or simply not ruling out the possibility) makes them more humble ("you don't know everything! It could be the case!" which, as you always say, is a possibiliter fallacy!), and yet it almost never pans out like that. It's a shame!
“to say ‘experiences are physical’ would be to say that these particular so-called ‘physical’ things exist entirely to minds!” ah, don’t you just love when they say something right? I see it happening all the time!
Re: The joker you talked about in the other article who said consciousness wasn't a "mystery." He distinguished "mystery" from something we're "ignoran[t]" about. I'm reminded of how, in certain religions, they use "mystery" for something we needed God to tell us, something we never could've figured out on our own. That's how Paul used the word. Shame to see it's still causing confusion.

Reminds me of "the poker incident," where Wittgenstein and Popper were arguing about whether philosophy has "problems" (for which, according to the author of the book I was reading, the implication was that they could never be solved) or just "puzzles."

I'm also reminded of that outrageous Thought Slime video where they seem to take for granted the theologian's bs about the big bang being some kind of absolute beginning of the universe that required something "outside of time and space" to "start" it! "It's not so much that this is a question that materialism doesn't have an answer for […] it's that this is a question materialism cannot ever answer. It is outside the purview of material analysis. […] The gap between uncaused and caused is infinite," spake the theologian! rolleyes