I’ve been asked to comment on the bizarre claim, advanced by Peter Hacker in his arrogant, swaggering essay “The Bogus Mysteries of Consciousness,” that qualia don’t exist. So here goes.
Say What Now?
First, what are qualia? If you’re new to the idea, “qualia” means the qualitative properties of human experience. It’s a catch-all term for all the features unique to conscious experience, the “what it is like” to be seeing the color red or hearing a bass drumbeat or smelling cinnamon or feeling angry. Explaining why qualia exist and are the way they are is called the “hard problem” of consciousness because it’s really the last frontier of brain science, a question we haven’t yet resolved even hypothetically (in contrast to the other three unsolved frontiers of science—the origin of life, the origin of the universe, and the fundamental explanation of the Standard Model of particle physics—which all have fairly good hypotheses already on the table). Yes, the explanation for qualia most likely does have something to do with the inevitable physical effects of information processing. All evidence so far is converging on no other conclusion. But that still leaves us ignorant of a lot of the details.
This is mainly because we can’t access the information we need to answer this question. For example, to tell what actually is causally different between a neural synaptic circuit whose activation causes us to smell cinnamon and one whose activation causes us to smell oranges (or see red or hear violins or feel ennui), we need resolutions of brain anatomy far beyond any present technology. Even the mere arrangement of synapses won’t be enough, and we don’t even have that—and since the I/O signal for any neuron is determined by something inside the neuron, such as perhaps methyl groups attached to the nuclear DNA of the cell, we’d need to be able to map even that, for every single cell in the brain, which is far beyond any present physical capability. AI research could get there sooner, if it somehow achieves general AI and we can ask that AI about its personal phenomenology, but that’s just another technological capability we presently don’t have.
In any event, if you want to catch up on the history of this problem and its current state of play, see the entries in the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy. And to catch up on where I land on this subject, see The Mind Is a Process Not an Object (as well as the relevant sections of How My Philosophy Would Solve the Unsolved Problems and How I’d Answer the PhilPapers Survey).
What Is Hacker on About?
What Hacker argues is not even quite the same thing as what so-called “eliminativists” argue. They don’t really argue “qualia don’t exist,” but that they don’t exist in the sense everyone supposedly assumes. Neither Paul and Patricia Churchland nor Daniel Dennett have actually argued qualia don’t exist in any sense at all. Which is a problem I have with eliminativists generally; they only confuse people with semantic games. Dennett proposes we must abandon qualia by providing “alternative explanations for the phenomena” that qualia are invoked to explain. But the phenomena to be explained are the qualia. Dennett thus confuses causal theories of qualia with the qualia themselves. The Churchlands make the same mistake. Once you correct their mistake, we’re back at square one: we have some distinctive phenomena we have to explain; and we have not yet fully explained them. It does not matter what you call those phenomena. You can’t change what a thing is by changing what you call it.
Hacker doesn’t make this mistake, because in those other cases (the Churchlands and Dennett) the explanations are coherent enough that we can disentangle what their authors are trying to say in different words. For example, Dennett ultimately gets around to admitting there are phenomena to explain, and he attempts an explanation of them. Hacker does neither. As such, I suspect Hacker has simply naively misunderstood eliminativists, and gone off on an immature brag fest denouncing the stupidity of anyone who still thinks there are any phenomena to explain.
Dennett and the Churchlands don’t do that. They admit there is something to explain, and try to explain it; though what they provide is really a meta-explanation, which in each case reduces to the same thing: they propose qualia are an illusion; they are simply what it is to believe you are experiencing qualia. In other words, qualia are not an extra something that explain anything; they are, rather, the inevitable consequence of certain forms of information processing. I concur. I just don’t think it’s helpful to frame that as saying qualia don’t exist. That’s rather like realizing “when I see a mirage of water on the horizon, I know that that water doesn’t exist,” and then concluding “the mirage doesn’t exist.” That’s to confuse explanandum with explanans.
Why You Can’t Hide from This
No matter what word games you play, you still have to explain why cinnamon doesn’t smell like oranges, why activating one neural circuit causes you to experience a smell at all and not hear a bass drum or see the color red or feel disgust (and vice versa), or any other conceivable thing instead, and why any of this happens at all. We well know what it is like to process information without any of these phenomena: we call it our subconscious. So what makes the difference between just walking through life running purely on subconscious processes, and instead experiencing all these bizarre, and bizarrely specific, phenomena? What makes the difference between experiencing something as a smell, and experiencing it as a color? Or a sound? Or an emotion? Or anything else other than any of these things? Why, in other words, do smells or colors or sounds even exist at all?
And we don’t mean by this the biomechanics of our sensory systems. When we ask what makes the difference between cinnamon smelling like cinnamon and not oranges, we don’t mean what has to be different about the molecular receptors in the nose that distinguish between these two odors; those don’t have anything whatever to do with what things smell like. No matter what molecule stimulates a certain neural tract in the nose, that’s just a binary signal, “on or off,” that flows into the brain. At best, perhaps, it has a quantity scale. But there’s nothing qualitative about it. That wire could go anywhere. It could go to the circuit that makes you see red, rather than smell anything, much less some particular thing. And for some people, it does: synesthesia is a thing. (So why are only some people synesthetes?)
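To make that point concrete, here is a toy sketch in Python (purely illustrative; the receptor and circuit names are invented, and nothing here models actual neuroanatomy). The receptor emits only an unlabeled magnitude; which quale results depends entirely on where the wire goes:

```python
# Toy illustration (not a model of actual neuroanatomy): a receptor
# emits only an unlabeled number; *what it is experienced as* depends
# entirely on which "qualia circuit" the wire happens to feed.

QUALIA_CIRCUITS = {
    "smell_cinnamon": lambda s: f"smelling cinnamon (intensity {s})",
    "smell_orange":   lambda s: f"smelling orange (intensity {s})",
    "see_red":        lambda s: f"seeing red (brightness {s})",
}

# Normal wiring: the cinnamon receptor feeds the cinnamon circuit.
wiring = {"cinnamon_receptor": "smell_cinnamon"}

def stimulate(receptor, strength):
    """A receptor's output carries no quality, only a magnitude."""
    return QUALIA_CIRCUITS[wiring[receptor]](strength)

print(stimulate("cinnamon_receptor", 0.8))  # -> smelling cinnamon ...

# "That wire could go anywhere": reroute the identical signal to the
# red circuit and it is now experienced as a color (crude synesthesia).
wiring["cinnamon_receptor"] = "see_red"
print(stimulate("cinnamon_receptor", 0.8))  # -> seeing red ...
```

Nothing in the signal itself fixes what it gets experienced as; only the routing does. That is the synesthesia point in miniature.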
Qualia are in fact undeniables. They therefore cannot not exist. The probability is literally zero. And that’s saying something, because almost nothing has a truly zero probability. But qualia are in fact the one and only thing that does. Because it is literally 100% impossible that “I am experiencing a white field with black markings inside it right now” is false; that it “isn’t happening” and thus “doesn’t exist.” That I am seeing letters on a computer screen as I type can be in doubt—maybe I’m hallucinating or dreaming this; maybe I am mistaken about what the sensory signals my brain is interpreting as letters on a computer screen actually signify; and so on. But that I am experiencing seeing letters on a computer screen is impossible to doubt. And why that is has to be explained.
Yes, qualia are fictional (our brain invents them to demarcate and navigate information), and yes, their “existence” will have something to do with information processing. Because we know if you remove or numb the pertinent information-processing circuit that generates any given experience, you remove the experience. And you can even cause the experience to occur by simply sticking a wire into the pertinent circuit and shocking it. So we know this is simply something that circuit does, and does differently than a circuit that doesn’t generate any phenomenological experience (as most circuits in our brain don’t) or that generates a different one than this (as all the remaining circuits in our brain do). What makes a “cinnamon circuit” cause that experience and not some other (or none at all)? This is the “Mystery of Consciousness” that Hacker daftly claims is “Bogus.” But it’s Hacker’s claim that’s bogus.
Hacker’s Catastrophic Derail
One thing that often throws everyone off, including the “eliminativists,” is the persistent yet completely unnecessary assumption that qualia are things. That they are objects, entities—evoking wonder at what mass or charge they have or whether we can bottle them. That would be as mistaken as thinking we can capture “running down the street” or “voting in an election” in a bottle, and weigh it on a scale. Those are not things, they are events. And like them, qualia are events, not things (I fully explicate this point in my article The Mind Is a Process Not an Object).
Thus qualia don’t “explain” things; they are the thing to be explained. And they don’t exist separately from the physical process underlying them; they are the physical process underlying them. So the question is what is different between those physical processes and other physical processes that don’t generate such phenomena. That is exactly identical to the question of what causes those events of experience to occur, and to have the qualities they do (rather than others instead). And this is the “hard problem” of consciousness. It is not unsolvable (we know what we need to do to get at the answer; we just don’t have the technology to get at it yet), nor is its being “mysterious” evidence against physicalism (physicalism poses no difficulty for explaining what “events” are and why they occur).
But Hacker ignores all that and launches his bizarre essay with the incredible declaration that “there is nothing mysterious or arcane about” consciousness, despite all actual experts the world over, from brain scientists to philosophers, concurring that there is. Indeed Hacker even slags off eliminativists in his first paragraph, noting that “Daniel Dennett” himself has said “that consciousness ‘is the most mysterious feature of our minds’,” and so Dennett, too, is among all the rest of the world’s experts “who should know better.” Hacker himself is a philosopher of relevant pedigree; so really, it is he who should know better.
I just laid out what the “mystery” of consciousness is; and it is very real, and indeed remains very much a mystery. Maybe not as much a mystery as why America elected Donald Trump to be their president or why ketchup-flavored ice cream is a thing. But some manner of mystery all the same. So how does Hacker try to argue that it isn’t a mystery? That there isn’t anything about it to explain?
Mostly Hacker argues by vacuous mockery. It takes quite a lot of reading to ever even discern an actual argument in anything he says. Indeed, the first time we get to anything even close to an argument is his sarcastic remark that:
There is something which it is like for you to believe that 25 x 25 = 625, which is different from what it is like for you to believe that 25 x 25 = 624. There is something it is like for you to intend to retire at 11.30, which is different from what it is like for you to intend to get up at 7.00. These are distinct qualia.
This isn’t, of course, an argument at all. He does not draw any conclusions or inferences from this declaration. He seems to imply that it is ridiculous and that its being ridiculous somehow means qualia don’t exist. But I can’t fathom how a serious philosopher could think that wasn’t bollocks. “These qualia don’t exist, therefore none do” is a shit argument.
It’s just all the worse that “Arguments to the Ridiculous” are usually already shit arguments. They typically just reify the fallacy of Argument from Lack of Imagination. To simply presume there is no qualitative difference between experiencing the conceptual distinctions he lists here is, in other words, a circular argument. And circular arguments are shit arguments. The rest of us aren’t this stupid. Belief means confidence; and we all know confidence feels different than the lack of it. Whereas if there is anyone out there who can “experience” the difference between “624” and “625” as quantities, that logically entails that for them there is something experientially different between them. And that’s exactly what the word “qualia” means.
Most of us, however, do not qualitatively experience any difference between such abstract numbers as 624 and 625. We comprehend them in a computational sense, absent any unique qualia. We generally have to work out in what way they differ; we don’t experience it directly, the way we do the difference between “two” and “three,” which are quantities we can directly apprehend in experience. And to feel the difference between those quantities we don’t even have to be the synesthete to whom chicken tastes “like three points,” but we could be—and how would Hacker explain that? But larger numbers, like 624 and 625? Those simply don’t “feel” any different to us except in fragmentary ways. We can “feel” that one of those quantities has one more than the other (but so do lots of other quantities); that both are in the hundreds (but so are lots of quantities); and we experience distinct features of the Arabic shape of the component numerals (but those numerals, and hence the attendant qualia, attach to lots of other numbers); and so on. But that’s it. And that’s what we need to explain.
By contrast, we can be fairly confident my desktop computer experiences none of these things. So why do I? And why do they feel like that, and not like something else? Of course—to some people, they do. The most common form of synesthesia is to experience color qualia in conjunction with various numbers. That Hacker doesn’t know this would suggest he is too science illiterate to have any opinion on this topic worth consulting.
Indeed, in accord with his ignorance, perhaps Hacker might ignorantly blather on about how we could possibly know my desktop computer doesn’t experience these things as I do; at which point he should be instructed to read up on the science of comparative neuroanatomy. My desktop computer has none of the corresponding hardware we know my brain requires to experience those things. We know a computer’s entire contents, and nowhere in that inventory is any experiential circuitry analogous to ours. Yet my computer can agilely handle the conceptual content of these numbers through countless renderings and computations. Perhaps that does feel like something to it; but it won’t be at all like what it feels like to me: our phenomenological circuitry is too radically different. My computer’s phenomenology couldn’t even be identical to that of a flatworm; and yet it is surely far more distant from mine than a worm’s. And unless Hacker is going to profess a belief in magic, he cannot propose an effect can exist without a necessary and sufficient cause.
So now I am halfway through Hacker’s essay and have yet to encounter a single argument, apart from this garbage, which is the mere fragment of a possible argument—and that argument is trash.
Demystifier, Aisle Seven
In the second half of his essay Hacker makes the whole world face-palm when he backtracks from the stupid idea he’s been uselessly pushing for hundreds of words now by declaring “There is ignorance, but nothing mysterious.” Someone ship him a dictionary. Those mean the same thing. When Dennett calls the question of how the human brain generates the particular phenomenal experience that it does a mystery, he simply means we do not know how it does that. It’s a mystery. Have I really been duped into reading a several-thousand-word-long equivocation fallacy? Is Hacker that shitty a philosopher? I’d tell him it’s time for him to retire—but I see that he already has. Maybe he should stick to fishing. Or knitting. That’s a good hobby.
But it’s worse than that. When Hacker gets to trying to explain how there is no mystery to explain, he actually reverts to claiming we are not ignorant of how consciousness works. So which is it? Never mind. Here he declares “the question of what perceptual consciousness is for is trivial,” because obviously it has survival advantages. This is where he jumps the shark, revealing he doesn’t know what he is talking about. When scientists ask why qualitative experience evolved, they are not asking why the conceptual processing of perception or thought evolved—they already know why that’s useful. The “mystery” is not why our brains can do those things (for example, locate and react to movement in our “peripheral perception”). The mystery is why our brain can’t just do that as blindly as it does everything else—why does it have to experience doing it?
We don’t need colors. So why does our brain invent “red” when we could just simply respond to different wavelengths of light automatically? We don’t need to “experience” seeing anything to recognize something is there and is reflecting different wavelengths of light than something next to it, for example. So why does our brain bother “coloring” that in? Much less with specifically that color. Remember, red does not exist. Nothing outside our brain has any color. Redness is a fiction our brains made up to “represent” certain patterns of photon wavelengths. Why?
And remember, that’s both senses of why: Why did it do that at all? And why did it do it in that specific way? Why are red things red and not blue? Why are they red and not some shade or pattern of grey? Why not some other completely alien color? What is it about the circuit that colors in parts of our visual field with “red” that is different from the circuit that colors it “blue”? And why does that physical difference in those circuits produce exactly that difference in color experience? This is what scientists are talking about when they say they don’t know “why” our brains evolved to do this, nor “how” any neural circuit even can do this.
Hacker seems to not know this. He seems to think scientists are confused about why wavelength discrimination is useful; but my computer can do that, and it needs no conscious experience to reap every resulting benefit. So what use is the experiential aspect of wavelength discrimination? And what use is that specific kind of experiential discrimination (colors instead of shades of grey; those color assignments instead of some others; and so on)? Neither is explained yet, by evolutionary biology or neurophysics. We have ideas. But Hacker seems not to know that either. He acts like all scientists and philosophers have done is throw up their hands and propose nothing. In fact they’ve been busy proposing a lot of good leads for answering these questions. Hacker seems not to know any. He can cherry-pick a Dennett quote; but does not appear to have ever read him.
Take Hacker’s example of pain. He claims “consciousness of increasing pain is an incentive to decrease stress on an injury.” But that utterly fails as an explanation. All we need is the behavioral-response-to-stimuli effect. We do not have to feel pain at all. The useful behaviors Hacker refers to can be entirely programmed without it. So why are we programmed with it? What is pain for? The question is not, what are aversive stimuli for. If we just reflexively favored a wounded limb, no one would be mystified. But we don’t do that. Instead we have an elaborate phenomenology of pain, a completely unnecessary extra step—and one most annoying. Why?
We can tell this evolved early; comparative neuroanatomy shows that experiential pain as a mechanism is an attribute of neural systems going pretty far back (at least as far back as insects); by contrast, similar reactive systems in single-celled and simple multi-celled organisms, and plants, lack any of that computational architecture. They don’t need it. So why do animals? We can even today build injury-favoring robots without any of that phenomenological architecture. So why did evolution produce it? More importantly, how did evolution produce it? After all, we do not know how to program a robot or a computer to feel pain. Why?
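To illustrate just how cheap the purely behavioral version is, here is a hypothetical sketch (the limb names and damage scale are invented): aversive input maps straight to injury-favoring output, with no phenomenology anywhere in the loop:

```python
# Hypothetical reflex controller (all names invented): damage sensors
# drive injury-favoring behavior directly -- stimulus in, behavior out,
# with no "pain experience" anywhere in the loop.

def choose_gait(damage):
    """Given damage levels (0..1) per limb, shift load off damaged limbs."""
    weights = {limb: 1.0 / (1.0 + 10.0 * d) for limb, d in damage.items()}
    total = sum(weights.values())
    return {limb: round(w / total, 2) for limb, w in weights.items()}

# Left-front limb damaged: the robot immediately "favors" it,
# with nothing anywhere in the system that feels like anything.
print(choose_gait({"LF": 0.9, "RF": 0.0, "LB": 0.0, "RB": 0.0}))
# -> {'LF': 0.03, 'RF': 0.32, 'LB': 0.32, 'RB': 0.32}
```

A few lines of arithmetic get the useful behavior. Which is the point: the behavior is easy; the felt pain is the thing left unexplained.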
This is the mystery that completely eludes Hacker—because he apparently read nothing on this subject, and knows nothing about the actual debates and concerns of real experts in it. He just pontificated a drunk uncle’s essay from the armchair, harrumphing at something he doesn’t even understand, and has made no effort to. This is annoying.
How Does One Solve These Mysteries?
When Hacker makes hopelessly naive declarations like, “Affective consciousness enables us to reflect on our moods and emotions and to bring them under rational control in a manner unavailable to other animals,” he is the one throwing up his hands and giving up. He is basically just covertly admitting he has no explanation for why we need affective consciousness to do this. He is likewise declaring we have no need of knowing how evolution could have produced such a remarkable feature, even were it needed. Even computationally. Much less biologically. This is almost as antiscientific a behavior as you could ever expect from a purported philosopher.
We don’t know how a computational process can produce an “affective consciousness” to use in this way. That is the primary mystery of consciousness. Nor do we know why our brains, arranged as they are, generate the particular kind of affective consciousness we experience—why our emotions feel the way they do and not like something else. That is the secondary mystery of consciousness. Only then do we get to the tertiary mystery of consciousness: why evolution would have brought us down that road of DNA mutations toward developing an organ capable of any of that, rather than achieving the same goals in other, less mysterious ways (like simply making thought more rational, with no need for any phenomenology of emotion in the first place).
Unlike Hacker, I acknowledge these are serious questions that need serious answers. Not armchair pooh-poohing. Only a fool would think these questions can be ignored. And they haven’t been. Following Dennett, the Churchlands, and others, I know (unlike Hacker) that the most promising research program here is in the direction of integrated information processing. At a certain level of complexity, virtual world-building becomes inseparably phenomenological. In other words, you can’t have a complex integrated perceptual system that doesn’t eventuate a phenomenology—a “what it is like” to be navigating that virtual perceptual space. Which means philosophical zombies are logically impossible. A conclusion evidently unknown to Hacker, who appears never to have actually read anything on this. This is, on present evidence, the most likely solution to the primary mystery of consciousness. (This does not mean the hyper-specific idea called “Integrated Information Theory” is the ticket, however. All computational models of consciousness carry the same basic insight.)
From that it then follows the answer to the secondary and tertiary mysteries will be one of mechanism: eventually we will be able to map and diagram the specific neural circuits causally sufficient and necessary for generating every unique quale, and we will then be able to see what the physical difference is between a circuit that generates a scent and a circuit that generates a color, or a circuit that generates no quale at all; and then what the physical difference is between a circuit that generates the color red and a circuit that generates the color blue; and we will then be able to deduce all logically possible color circuits, and be able to begin discovering, and possibly even predicting, what colors any given circuit will generate and why. Likewise scents, feelings, and the like.
I do not think we will be able to predict all phenomenology independent of experiencing it ourselves (I suspect we would have to integrate a color circuit into our perceptual system to “experience” the color it produces: you have to be the process to know what it is like to be), but we will be able to categorize them: at a mere glance we will be able to predict whether a circuit so-installed would make us see a color or smell an odor or feel a feeling, for example. And with all that information, we will be able to look at the evolutionary history of every component, all the way back to its most primitive known ancestor, and thereby answer the question of why evolution favored that route for that circuit, while favoring the development of non-conscious circuitry for other functions and systems in the brain.
I can say that, at this point, I suspect what we will find is that phenomenology-driven pathways are more computationally simple to develop for the complex purposes they serve. We may find, for example, that it is possible to design a consciousness circuit that does not feel pain but reacts in every way identically to the ways pain is meant to benefit its experiencer, but that the requisite programming is too irreducibly complex to arrive at blindly by stepwise mutation and natural selection. The pathway of coopting a phenomenological feedback loop was probably easier and thus more likely to be hit upon by any evolutionary process.
Conclusion
After pointless drunk-uncle rambling, completely missing the point, understanding nothing, and developing no cogent theory of why or how a completely natural, physical world can be compatible with experiential consciousness, Hacker eventually resorts to the idiotic declaration that “the world does not contain conscious states and events” but rather it “contains sentient creatures like us who are conscious (or unconscious) and are conscious of various things.” Those are the same goddamned things. There is no such thing as a “sentient creature” without conscious states and events. There is no such thing as “being conscious” and there being no events or states of consciousness. Hacker is literally writing contradictory nonsense.
Hacker closes his essay with a dozen more nonsensical contradictory statements like that one, which require no further parsing. The fact remains that there are indeed real mysteries of consciousness. There are phenomena we have not yet explained, and cannot yet explain. We don’t know how a physical system can produce them (we have some ideas, but a long way to go to confirm them). We don’t know why different physical systems produce certain phenomena and not others (we have some ideas, but a long way to go to confirm them). And we don’t know how or why evolution ever got to or needed any of this to do any useful thing (we have some ideas, but a long way to go to confirm them).
Hacker does not explain any of this away. He instead simply ignores what all those mysteries actually are. He pretends there is nothing to explain, and therefore we don’t need to develop even the beginning of any answers to them, much less continue a major scientific research program to complete them. But we do. And we have. But dolts like Hacker want you to simply abandon all scientific and philosophical curiosity and responsibility and pretend there are no mysteries to solve about why we are the way we are, and how the world has made that possible. Please. New Year’s resolution. Don’t be like him.
When trying to understand the concept of qualia, what I get is that qualia are atoms of experience, molecules of sensation. They are the pixels on the screen of Dennett’s Cartesian theater. They are the components of the sensorium, Lego blocks of consciousness.
Unfortunately when I try to do a chemical purification of my experience to get the atoms, the irreducible elements, I discover there must be something wrong with me. First, my qualia are not constant. The quality of sweetness is different now from what it was in my youth, as best as I can remember. The quality of cinnamon varies quickly. I had cinnamon toast last night, and the last piece was less cinnamony than the first. My wife accused me of having an impaired experience of blue and green, so that I couldn’t distinguish them as normal people could. (I have heard tell of Daltonism, of course.)
Second, my qualia are transmuted in dreams. When I was a kid I had the sensation of dogs biting my toes, but it was my grandfather wiggling them to wake me up (to free the couch). I don’t even have the same qualia when merely exhausted.
Third, I cannot really reduce my experience to its qualia. If you pretend for a moment that qualia are the direct experience by the soul of the Forms, then the problem of identifying the qualia is the same as identifying the Forms, or Ideas. Is there a Form of the Chair? Or are there Forms of White, Tan, Cylinder, Arch, Right Angle, Smooth, Hard, etc., which I can list (incompletely) as I experience my dining room table chair? Occam’s Razor tells me not to multiply Forms beyond necessity (because unneeded Forms are not explanatory?) but I personally lack the competence to do that.
Fourth, I don’t remember qualia as a child, even though I surely existed.
Fifth, the qualia of itching seems on the other hand to be entirely unrelated to actual experience. Yet it still seems to be a quality.
If the hard problem of consciousness is to understand how people have the same feelings, the same qualia, in my case I’m not convinced I do, no more than a red-green colorblind person does. Either I am an inexplicably real philosophical zombie or a defective mind or the hard problem is perhaps not as real a problem as claimed.
That might be a misleading analogy. They are events, not objects. But yes, one can distinguish between irreducible and reducible qualia (a color or smell is irreducible; the experience of a flower is not).
Similarly, these analogies can mislead. There is no analogy to pixelation. Qualia are holistic, not atomistic. When you see a color field it cannot be reduced to pixels of color. It is a singular experience. And the actual Lego blocks are the circuits that, when activated, generate that experience; but the experience and the event of that circuit’s activation and integration into a complex virtual model are not separable entities. The qualia thus actually reduce to a physical system, in the same way that “running down the street” does.
The phrase “components of the sensorium” may be closer to correct, although these components are again actual physical circuits that could be swapped out with others, e.g. if we could rewire the relevant parts of your brain—yank out, say, the “red” circuit and replace it with a “magenta” circuit, or with a smell circuit instead of a color circuit, and so on. Your reported experience would change, though you wouldn’t remember what it was like to be integrated with the removed sensation circuit, because memory would require activation of that very circuit. Thus you could have a scenario where you have a false memory of having always, say, “smelled cinnamon” instead of “seen red,” and the only clue to the swap would be your separate narrative memory that those memories used to be of a color (even, specifically, the color red)—but you would understand that only conceptually; you would have lost the ability to experience that color or remember what it was like.
So the components again are still physical circuits. Each of which corresponds to a specific “what it is like” to run that circuit. And then we get to the fact that the “what it is like” varies as the circuit itself varies, both internally and in its integration with other circuits. Hence…
That is likely due to hormonal changes with age. If you took the right hormone cocktail you might recover the sensation of your youth. But that depends on how much the circuit itself has actually changed (as we learn, accumulate experience, integrate more circuitry), which is not correctable (except by super advanced technologies that currently don’t exist).
Note that there are a lot of clues to the circuitboard in what you are saying. It actually can’t happen that you would “remember things differently” if the whole circuit changed, as then your memories would also change, as they need to be run on the same exact hardware. There is no separate set of circuits for “running memories,” only for storing them. And memories are not stored like a tape recorder. They are stored as a set of instructions for re-activating the sensory circuits that generated them in the first place. So if those changed, so will your memories. But the stored instructions might not have changed, and that is where you can recover sneak-circuit information.
For instance, narrative memory is different from sensory memory. So if you recorded a narrative description of a memory, and the sensory circuits that run that memory changed, you can detect incongruities between the narrative information and the sensory trace. But this ceases to work at a certain fundamental level, e.g. there is no way to record a narrative memory of what the color red “is like,” only that, for example, it is a color and thus should resemble other colors in dynamic properties. (Of course, narrative memories can also change; our memory system is not entirely reliable, but that’s a whole other problem.)
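Here is a toy sketch of that storage scheme (assumed purely for illustration; the circuit IDs and data layout are invented): memories as re-activation instructions, so that rewiring a circuit silently changes every replay, while the separately stored narrative record exposes the mismatch:

```python
# Toy model of the claim (circuit IDs and layout invented): a memory is
# not a recording but a set of instructions for re-activating sensory
# circuits, plus a separate narrative record.

circuits = {"C1": "the color red", "C2": "the smell of cinnamon"}

memory = {"replay": ["C1"], "narrative": "that experience was a color"}

def replay(mem):
    """Re-run a memory by re-activating the listed circuits."""
    return " + ".join(circuits[c] for c in mem["replay"])

print(replay(memory))        # -> the color red  (matches the narrative)

# Now "rewire" circuit C1. The stored replay instructions don't change,
# but every replay of the memory silently changes with the hardware.
circuits["C1"] = "the smell of cinnamon"
print(replay(memory))        # -> the smell of cinnamon
print(memory["narrative"])   # -> that experience was a color  (the clue)
```

The stored instructions survived; the hardware they run on didn’t. Only the narrative record, kept in a different format, betrays that anything changed.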
Another way you can detect changes is when there is an integrated experience, i.e. several qualia circuits operating, and only, say, one but not all of them have changed. And yet another way is in intensity. As we age, memories (and experience generally) tend to decline in intensity; and this is usually a product of changing hormone balances. You can narratively record the fact of an experience being more intense, and even recover the qualia of “what it is like to be intense” as the circuits producing that idea still exist, so that when a memory or a repeated experience feels less intense, you can detect that that is the case, yet not “fix” it (you can’t “make” it more intense by a thought, as that is a hardware problem, not a software problem).
All of these things give us clues to how the brain is and isn’t wired, and as we correlate such clues to physical structures and organization in the brain we get closer to homing in on how the brain is doing any of this. We have a long way to go, but so far we have enough evidence to prove where the final flag will be planted. It will be some form of brain physicalism, having to do with integrated information processing.
That’s simply an amplitude change: more of the same rather than less. Like the difference between shining a bright light in your eye rather than a dull one. Only, the intensity shift sometimes occurs at the hardware level: rather than the actual quantity of molecules striking the nose changing (though it could be that, since thermodynamics entails the molecules will disperse and become more diffuse over time), the brain registers the intensity and then dials it down once it starts to assume this is a new baseline.
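That dialing-down is ordinary sensory adaptation, which is simple to sketch (the 0.2 learning rate below is an arbitrary illustrative constant, not a measured value):

```python
# Toy sensory adaptation: perceived intensity is the raw signal minus
# a slowly learned baseline, so a constant stimulus fades over time
# (the "last piece of cinnamon toast" effect).

def perceive(signals, rate=0.2):
    baseline, out = 0.0, []
    for s in signals:
        out.append(max(0.0, s - baseline))   # what you notice
        baseline += rate * (s - baseline)    # creeping new baseline
    return out

# A constant cinnamon signal of 1.0 feels weaker with each bite:
print([round(p, 2) for p in perceive([1.0] * 6)])
# -> [1.0, 0.8, 0.64, 0.51, 0.41, 0.33]
```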
Indeed, and note that that is a problem at the detector end (the cone cells in the eye), not in the perceptual end (the brain’s qualia circuitry). You can still experience blue and green as well as ever; it’s just your eyes are bad at reporting to your brain when you are supposed to, because some of the cones are wired to the wrong qualia circuits.
This is how we know there must really be people with inverted qualia (they see red when you see green, and vice versa): if you have both known mutations for color blindness, all the cones are swapped to a different circuit, so the cones that detect what we call green light all wire into the circuit that generates an experience of redness, and vice versa.
Dreaming is a random walk, i.e. your brain randomly fires and runs circuits, and practices integrating them creatively into a virtual reality (which happens to be necessary to skills acquisition: circuits recently used will get the most activation, and the brain uses that to “practice” at whatever it thinks you were doing, by throwing random variations at it). Needless to say, all sorts of strange things can happen when that occurs, as it is no longer constrained by actual sensory signals from the outside.
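As a toy illustration of that random walk (the circuit names and recency weights are invented): sample circuits for co-activation with probability biased toward recent use, and you get exactly the sort of odd, unconstrained combinations dreams produce:

```python
# Toy "random walk" dreaming (circuit names and weights invented):
# co-activate circuits at random, biased toward recently used ones.

import random

recency = {"driving": 5, "smell_cinnamon": 1, "see_red": 2, "falling": 1}

def dream_step():
    """Pick two circuits to integrate, weighted by recent use."""
    names, weights = zip(*recency.items())
    return random.choices(names, weights=weights, k=2)

random.seed(1)  # reproducible illustration
for _ in range(3):
    print(" + ".join(dream_step()))
# Mostly recent material ("driving"), in odd unconstrained combinations.
```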
First, this is the problem of not being able to see the machinery that produces the output, like someone trying to fathom how a clock works, but they have no idea of gears, much less what arrangement of gears would be necessary to produce that effect. All you see is the effect: the clock hands moving a certain way. This is why qualia cannot be studied solely from inside experience; you have to crack open the brain and look at the engine that is generating them. And we don’t have the technology to do that yet.
Second, qualia are not limited to irreducibles. What generates qualia is the integrated virtual model: a complex of experiences, circuits interacting. And I don’t think it’s possible, for example, to just “see red.” You have to experience it in some integrated way (as a field of red, for example, which entails activating other circuits having to do with topography and geometry; and as part of a narrative experience, for example, which involves qualia having to do with the passage of time, the thoughts racing through your head, and so on). Meditation (and certain drugs) can dink with this process in various interesting ways, but I am not aware of anyone escaping the necessity of integrated circuit activation. Qualia can only be experienced in a collective integrated experience of them, not in isolation.
This means most experience, though it will have analyzable components (“this is a sensation of red”, “this is a sensation of passing time”, “this is a thought”, “this is a sensation of a geometric space”), will always have unique or repeatable integrations that have their own “what it is like.” Because it’s not just a bunch of isolated circuits. Those circuits activate together in countless arrangements of meta-circuits.
This may have more to do with how memories are reinforced and diminished by neural pathway building. Your brain has accumulated so much more wiring since then, that that old wiring has either been written over or buried beyond recovery within much more abundant and robust connections.
I’m not sure what this means. But sensory signaling is not the same thing as qualia circuit activation (per the color blindness example); and qualia are experienced as complexes of experience rather than singularly alone (per above).
That you can report changes in experience entails you have experiences, so that rules you out as a philosophical zombie. And if you suppose you are only falsely believing you are having experiences, that is exactly the Dennett theory of consciousness with which I concur (see my mirage analogy).
All the things you just reported only reinforce, not diminish, the hard problem: why do you need any of those experiences, much less the changes in them (is this an accidental or a deliberate product of evolution?); what generates those experiences and why does it only generate those and only then, and why do they change, and why does most of your brain keep functioning without generating any? These are all the hard problem. That problem thus doesn’t go away.
“Qualia are in fact undeniables” seems to conflict with “And if you suppose you are only falsely believing you are having experiences, that is exactly the Dennett theory of consciousness with which I concur (see my mirage analogy).” Please explain their coherence. (I’m painfully aware of my ignorance about this.)
An illusion still exists. And in fact, undeniably exists. For example, that a mirage of water is not real does not mean the mirage does not exist. It most definitely does; in fact, that you are seeing it, when you are seeing it, is literally undeniable. It’s the water that doesn’t exist, not the mirage. If you falsely believe you are seeing red, there is literally no logical difference between that and seeing red. In other words, that is what redness is: a present belief in the presence of a color. The only thing left to explain is why a certain circuit convinces you that you are seeing precisely that illusion and not some other (or none at all).
Dennett discusses this extensively in his book Consciousness Explained but IMO the very best article on it is Cottrell’s “Sniffing the Camembert,” if you can find a copy of that (maybe through your local public library).
Holistic and atomistic are opposites. My memory of other usages of qualia centers more on atomicity, including the implication of an irreducible, as in unchanging, element. Maybe my understanding of the others is wrong. But the notion that experience is composed of these atomistic qualia contradicts my (illusion?) of experience, as my qualia are not constant. If I don’t have genuine experiences, then I must be a p-zombie, if I understood that concept.
“…why do you need any of those experiences, much less the changes in them (is this an accidental or a deliberate product of evolution?); what generates those experiences and why does it only generate those and only then, and why do they change, and why does most of your brain keep functioning without generating any? These are all the hard problem.”
Answering as if I were not defective… Why do I need any of these experiences? The reports of blindsight, where individuals are not consciously aware of being able to see, suggest that a sensorium is the equivalent of a “You are here” notation on a mall map. It’s a heads-up display, providing information for making conscious choices (a process that begins unconsciously). People with blindsight cannot move about as efficiently and accurately and quickly, according to the reports I’ve read. Counting the number of animals in the immediate surroundings, a skill of many uses, may not require verbalizing numerals in order, but a conscious awareness of environment would make that more efficient, quick, and accurate too. Plants do not move about, so it seems unlikely they have a sensorium, though I suppose I’m just guessing about that.
The changes in qualia then are a consequence of the different conditions obtaining when the material aspects of environment and body (which includes the mind, no matter how conceptually distinct philosophy makes them) are different. The changes over time are a consequence of the changes in the body, from infancy to old age. The changes in qualia are a consequence of the material limits of the body (which includes brain/mind). The changes in qualia are a consequence of glitches in the body, from things like low blood sugar to prior expectations falsely priming perceptions, and a multitude of others. Changes are a product of time. There isn’t much mystery in why things change. The mystery is why people are convinced so often that only the eternal is the real, when it is far, far more likely the eternal is the imaginary, if not the ideological. (Pejoratively speaking.)
As to the question whether qualia are accidental or the product of evolution, this seems to be too heavily influenced by Dennett, who is something of a crank who favors evolutionary psychology (see Darwin’s Dangerous Idea, a critical reading of which does not help Dennett’s reputation). The belief that everything is adaptive is dubious. The utility of consciousness of place in the surroundings in guiding movement seems to provide quite enough adaptive value to be positively selected. The idea that natural selection is a sharp enough tool to ensure that everyone has the same sensation of red (quite aside from the observation that the colorblind and synesthetes don’t), that qualia are essentially mental modules à la Cosmides and Tooby, is not a proposition that compels interest.
As to what generates these experiences, whether atomistic or not, the interaction between the person and their surroundings, whether initiated by the person or by changes in the environment, generate experiences.
As to why only certain experiences are generated, meaning I gather why they are the same from person to person (abstracting again from my experience), my suggestion is that there is only so much body to generate them. Analogies from computers are dicey, since computers process digital input according to a program, which is not what bodies do. But a computer can support only so much computation. The similarities in experience from person to person in the feeling of taste start with there being only so many different kinds of taste buds. To get really loose with the analogies, there’s only so much bandwidth and only so many ways to code the information given the limits. (The usefulness of qualia as analytical constructs in explaining differences in taste, as in “I like that!” or “Ugh!”, is unclear to me.)
And last, why doesn’t our brain generate qualia for all its functions? We might design an automobile dashboard that reads out numerical measurements of oil pressure; the precise amount of fuel; the external temperature of the engine under the hood; the noise level from the muffler in decibels; the engine output in watts; any number of things besides vehicle speed and internal engine temperature and the rpms. It’s not necessary for most purposes; that is why we don’t. Evolution may be smarter than we are, but natural selection is not really design. Descent with modification “merely” mimics design. The survival of the nonfunctional proves that. Most of thinking is not conscious, not verbalized or imaged or whatever, because most of it was either impossible, given the constraints of evolutionary history, or not adaptive enough to be selected for.
Only at the same scale. A holistic understanding of a person does not contradict the fact that they are entirely reducible to atoms. The one is simply looking at the same thing on a different scale. Missing the forest for the trees in no way means there aren’t both trees and a forest.
Thus, a field of red is not reducible to pixels of the color red. It is a single indivisible spatial experience. But red is not the same as green. Or as a smell. Or a feeling. Etc. Thus my explanation of how qualia are atomizable only to a certain and limited extent. And even then probably indivisible (we know of no way to “just” experience the color red; it only ever gets experienced in an integrated model of multiple qualia, e.g. a geometric shape or space that is red, etc.)
Oh no, that is not what a p-zombie is. By definition, a p-zombie is a version of you that experiences nothing whatever.
See my discussion of p-zombies here.
That qualia change is simply a question of neuromechanics. It has nothing to do with whether qualia exist.
It’s rather a demonstration that qualia can only be experienced in an integrated model. No virtual model, no experience.
The “you are here” is only ever realized virtually. After all, you exist even when not conscious of you (e.g. you don’t cease to exist when asleep; all your memories, abilities, character traits, preferences, etc. remain present, intact, and correctly arranged). Thus a consciousness of you is not you; it’s just a virtual model of you, which you use to study and understand things about yourself, and to relate that model to other models (of other people’s minds; of the physical environment you are in; etc.). But “you are here” is a physical fact (wherever your brain is), not a mental fact. Your ability to experience yourself is a mental fact. And that requires computation.
The one thing we know for certain about blindsight, for example, is that it only ever occurs when the color circuit is physically severed from the other circuits integrating a virtual model with that color. I discuss this in Sense and Goodness without God (index). Thus, that color circuit might be experiencing color, but as it is not physically connected to the other circuits, you cannot access that experience (nor can we, as that isolated section of your brain can’t talk, nor engage in complex thought at all). All that remains is a sneak circuit to the verbal center of your brain where colors are labeled. Thus you can access what the color is named, but not what it looks like, because you have been otherwise physically severed from that microcircuit.
But this is what a p-zombie would be like: if all experience was like that. Nothing whatever but the ability to identify, analyze, and name things, but experiencing literally nothing when you do. No vision of any kind. No feelings of feelings or experienced thoughts of any kind. Emotions would simply have their behavioral and analytical effects, but would feel like nothing. Thoughts would just mean computations; you would never actually experience thinking any thought. And so on. That’s a p-zombie. And as I argue, it’s actually conceptually impossible; but it takes some thinking to grasp why.
But that experiences vanish only when physically cut off from the integration of circuitry producing a model is one of many reasons the integrated information theory of consciousness is most promising. This is why you can’t ever just “experience red.” You can only ever experience it in an integrated model of some kind, involving spatial and temporal qualia as well as the color. And that requires a complex integrated circuitry, not “just” the circuit generating red experience.
That’s sort of true, except you are a part of the display. It is not as if the display is separate from “you” and you watch it from afar. The circuits generating experience are among the same circuits that generate everything about you. There is no “you” apart from that. There is in the sense that there are parts of you that don’t have to be conscious all the time (e.g. all your memories are stored unconsciously, and you are only conscious of some of them at a time). But you can’t become an integrated “you” without processing those memories consciously, which requires the architecture for building a virtual model “of” you. That architecture is identical to the architecture you “run” your memories on to experience them again and think about them again. So it is not a separate you watching a HUD. You are the HUD (in relevant part).
It is, rather, that integrated model building works better than isolated sensory labeling. You need to build a model of the world to move around in it effectively. Once you start chopping physical circuitry out, disabling your ability to integrate information (like color) into a model, you impair that ability. But there is no immediate reason why the model shouldn’t work without qualia. A coherent, complete integrated model of the environment should work fine for navigating it without qualia. And we know this because we can build blindsighted machines that do this perfectly well.
The insight Dennett et al. reached is this: though conceptually we could have a perfectly functional model-builder and model-navigator without qualia, the separation of the two is actually impossible. Once you have a perfectly functional model-builder and model-navigator, it by definition will be experiencing the model. Which means that those “blindsighted” machines…might not be blindsighted after all (see Dennett’s discussion of Shakey the Robot for a real-world example). They will likely have the same phenomenal experience as a comparable animal (though not a human, as that requires building a model of one’s own mind, and we haven’t figured out how to program that yet). Because they can’t not. It is not possible to navigate a virtual model without experiencing the virtual model. Those are literally one and the same thing.
It is indeed imaginary. Like an airplane flight simulator’s viewscreen. But the fiction corresponds, most of the time, to reality, well enough that by navigating the fiction we successfully navigate the real world that that fiction was invented to model. That is, in fact, why we evolved the ability to build these models. When an airplane flight simulator’s viewscreen is based on external information, such that the fictions on that screen match real things (a real horizon, real buildings, real air conditions, real aircraft sharing the sky), a person can successfully fly a real plane using only the simulator. This is what the brain is attempting to do by building and using its fictional (“virtual”) models.
You must be confused. Dawkins is a proponent of evopsych (see The Selfish Gene). Dennett is more of a nature-nurture proponent than Dawkins is. And the status of evopsych is irrelevant to Dennett’s explanations of consciousness.
Dennett has not only never argued that, his entire point is that “everything is adaptive” is false. His theory is that qualitative experience is an accidental byproduct of other selection effects.
As I have been explaining here, from the article, through all my comments.
That’s exactly the Dennett thesis.
The only thing you are omitting is the crucial step of explaining why we need experience. If the car ran fine with no dashboard, who needs a dashboard? Dennett’s explanation is quite simple: once you start building and using a “dashboard” consisting of integrated virtual models, the inescapable consequence of doing that is experiencing it. There will therefore necessarily be something it is “like” to be doing that. Whereas the rest of the brain, though integrated in its own information processing tasks, is not integrated in a way so as to build or run a coherent model of anything.
The rest of the brain operates like an automaton—like a p-zombie. But the “add on” part of the brain (a good chunk of the cerebral cortex etc.) has evolved to do something more specific: to build and navigate virtual models (of the environment, as with all animals; and of the brain’s own thinking, unique to humans and possibly, to a lesser extent, a few other species). The very moment it starts doing that, there is inevitably something like what it is to be doing that. Virtual modeling is consciousness.
This is a decent theory and lots of lines of evidence confirm it. But it has not been proven; it remains a hypothesis in technical parlance, albeit a promising one. And it still does not fully answer the hard problem: it solves the “why” consciousness exists and the basic idea of “how” consciousness exists, but it cannot yet explain what physical circuit designs have this effect, or which effects are produced by which circuit arrangements, or why (as in, why do those specific effects attach to those specific arrangements of circuitry). Dennett’s theory gives us a research program—as in, we have a good idea how to eventually answer these questions if his theory is correct—but it does not by itself solve these problems. It just tells us where to look.
A fundamental issue with consciousness seems to be the problem of infinite regression.
We can create a camera that encodes a scene; we can make a computer analyze this scene and reconstruct it into a simulation, which in turn can be analyzed by a subprogram; but we cannot point to any specific point where the system SEES the scene…
So does it really matter if we track down every circuit? No, because the problem of the infinite regress remains.
I dream, my dream is corrected by my senses, and I call that reality… but where is this dream happening? Could it be an encoded holographic stream? Yes, but what is analyzing it and experiencing it? That is the issue. It really does not matter if we discover all the hardware, nor even the software; it matters how we pinpoint the experiencer.
I don’t fathom where you see “infinite” regress. Qualia are finitely reducible. For example, the color red cannot be “broken down” into component parts; regress therefore ends, not continues. At the causal level, of the circuitry, there is a finite circuit that generates the color red. We can even cut it out of the brain. It is not an infinite series of circuits. It is one, singular circuit. Regress ends.
The hard problem of consciousness is explaining what it is about that finite circuit that produces an experience of “red” rather than something else, or nothing at all. There is no indication that that explanation will require the infinite regression of any explanations.
As to “where your dreams are happening,” they are happening in the physical circuitry of your brain, located in your skull. And as to “who you are,” you are that total brain system, or at least the part that stores the core attributes and memories you identify as you. And that is a finite collection of circuits, not an infinite one.
Likewise, as to “what is the specific point where the system has an experience,” we actually have answered that question spatially and temporally: experience resolves at about 20 Hz (the brain resonates at about 40 Hz, so it apparently requires two cycles to generate an experience). In other words, below one twentieth of a second, experience no longer exists. Likewise, we know which parts of the brain must be interconnected and interacting to generate an experience, and we can already, even with present technology, confirm correlations between those circuits functioning and our experiencing being and observing.
There is no separate “person” watching what is going on. The brain is watching itself. The brain’s effort to integrate information into a coherent virtual model, of the world and of its own contents, is identical to experiencing yourself experiencing that. They are inseparable events. They are, literally, one and the same thing.
“I used to think that the mind was the most fascinating part of the human body. But then I thought, yea, but look what’s telling me that.”
Emo Phillips
Hi Richard, patron here. Thanks for this article! Let me try this out with you.
I understand consciousness to have two components, hardware and software, which have co-evolved.
The hardware is the brain and the wiring you describe.
The software is memetic: social meaning-making via language. Thus consciousness is at least partly a learned behavior.
What it means “to be like something” implicates metaphor and poetry.
The richness of red, for example, lies in its web of meaning. When we experience red, a world of personal and shared experiences is invoked. Love is red, and so are blood, fire, fruit, and sunrises. We cannot not make these associations. “Higher” consciousness means richer webs of meaning and is achieved through literature, relationship, education, and life experiences. Higher consciousness is social and is an achievement of higher culture.
Not exactly higher culture, but here is Taylor Swift:
Losing him was blue, like I’d never known
Missing him was dark gray, all alone
Forgetting him was like trying to know
Somebody you never met
But loving him was red
Loving him was red.
“Red” is not just a pattern of neurons firing. Animals have that kind of red. Red for humans is something we learn from others in rich and exciting patterns of association that can inflame a whole network of neurons distant from those which detect color.
I think that if we did not grow up in a richly associative world of ideas and metaphor, then we literally would not be conscious, and red would not be like anything.
Here is Helen Keller when she had her breakthrough in first learning the signed word for water:
“I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten — a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!”
Prior to that she lived, as she recalled in her autobiography, “at sea in a dense fog”.
I understand this to be the dawn of her consciousness. Her neurons didn’t change much at that moment. What happened was that the software of memes and language kicked in for the first time.
E. O. Wilson talks about gene-culture co-evolution. Culture provides something like a durable nest for diverse citizens across generations and thus permits the rare phenomenon of group selection. Such enduring groups of humans (and not just clusters of genes) are subject to natural selection based on the cultural memeplexes—the software of consciousness—they inhabit. Consciousness has evolved.
I once heard E. O. Wilson say that what makes us human is: campfires. I think he is talking about the sharing, transmission, and evolution of the cultural software of consciousness.
The analogy is somewhat problematic, as the hardware/software distinction is particular to the way we have designed digital computers. The brain doesn’t work exactly the same way. For instance, memetic learning is stored in the brain as hardware (physical rewiring of the brain), which explains why it is so difficult for people to change their personality or worldview or habits or biases and so on. We can’t just remove the software program from RAM and install a new one. The hardware has literally been changed.
The closest analog to software in the human brain is what we call working memory: it is the program we are “running” on the hardware, and that “sits on top” of it for a few minutes before fading away. Your stream of consciousness is a software routine. These do not require physical rewiring of the brain; but for that reason, they only last minutes; we have to keep loading and running new software every minute to stay conscious. And even then, insofar as any of this generates memories or learning, those are stored by hardware rewiring, not in anything equivalent to RAM.
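As a loose illustration of that hardware/software contrast, here is a toy sketch (all names and numbers invented; not a model of any real neural mechanism): learning persists as changed “weights,” while working memory is a transient buffer that fades unless consolidated.

```python
# A toy sketch of the analogy above (invented for illustration):
# "hardware" learning persists as changed weights; "software"
# (working memory) is a transient buffer that keeps fading.

class ToyBrain:
    def __init__(self):
        self.weights = {}            # "hardware": persistent rewiring
        self.working_memory = []     # "software": transient, fades

    def think(self, thought):
        self.working_memory.append(thought)
        if len(self.working_memory) > 3:   # limited span: old items fade
            self.working_memory.pop(0)

    def consolidate(self, association, strength):
        # Learning = changing the "hardware"; this persists indefinitely
        self.weights[association] = self.weights.get(association, 0.0) + strength

brain = ToyBrain()
brain.think("red sunrise")                # fleeting stream of consciousness
brain.consolidate("red -> sunrise", 0.5)  # rewiring: survives after the
                                          # working-memory trace is gone
```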
That said, you are correct to point out there is another layer of “software” on top of all that: culture. But this exists only insofar as it exists in the rewired hardware of the people carrying that culture around and spreading it (through communication and mimesis), or in external information repositories (e.g. books). And likewise, you are right to point out that, for example, “When we experience red, a world of personal and shared experiences is invoked.” Not only is our “red” circuit wired into a whole plethora of hardwired memories and other accumulated and learned information, but the causal connections extend outside the brain into broader sociohistorical causal systems of which we are a part. And our brains can even reflectively store models of that causal system in their hardware, and that model can have causal effects on the brain (software and hardware). Thus, what you believe the world to be is as impactful causally as what the world really is; in some respects more so.
I think it’s the other way around of course.
Ideas and metaphors are emergent causal effects of consciousness; rather than its required causes. Consciousness arises from the need to build virtual models of, first, the environment (animal consciousness) and, second, of what the brain itself is doing (human consciousness). The latter being a much more complicated model to run, and having evolved far later than the other equipment did. The ability to creatively assemble and work with causal models, particularly of one’s “self” and its relation to the world, necessarily produces also other abilities, such as the ability to use analogies and metaphors, to make and use tools (and tools means not just physical objects, but crafts, skills, processes, e.g. language, government), and to compose fiction (as predictive modeling requires the ability to create fictional events within a causal model and “run” the model to see what happens; that evolved not so we could write novels and plays and songs and poems, but so we could anticipate and thus better navigate our physical and social environments—but once you can do that, you necessarily can use that same ability to write novels and plays and songs and poems).
Actually, her neurons would have to have changed within minutes, otherwise she would never have any stored memory of that event so as to write about it.
Moreover, she would have to have already been conscious before that event in order to remember what it was like to exist before it and thus without that revelation so as to even know it was a revelation.
I mention elsewhere in the thread some reasons why childhood memories often don’t survive. One I left out is that at very young ages (e.g. infancy) our integrated model of our selves is very primitive and diffuse, and consequently memories cannot be anchored to “being a person” yet, or only at such primitive levels as to not survive the further laying down of neural pathways defining oneself as a person. Thus infant memories become “lost” in things we take for granted, e.g. the ability to recognize a tree, say. There won’t have been an original “narrative” memory of first seeing a tree, because narrative memory wasn’t being formed yet (that required a stronger sense of self to connect a narrative to), but even the bare experience of first seeing a tree will have been by now completely washed under the accumulation of tree sensations ever since, forming our singular “tree recognizing” circuit. Our infant memories are technically inside that, but not related to anything else, and so unrecoverably lost as distinct memories.
It is evident from Keller’s writing that this is not the state she was in before learning words. She had an integrated sense of self, a narrative memory, and a qualitative memory well before that event. And that is consciousness.
This blog helped me understand a lot about the mysteries of qualia. Thank you!
Hello there Dr. Carrier, great post.
I know this is off topic, but I would like to ask you about an argument that I often hear from Christian apologists; it has to do with knowledge.
“In order to know anything, you must know everything or have someone who does (God) reveal it to you; therefore only the Christian worldview allows knowledge.”
I have seen a few responses to this argument, but I would like to hear what you think about it, Dr. Carrier.
Thank you very much
I don’t really understand what argument this could possibly form.
Knowledge is always only of a probability, and only as given available information. Consequently, all we know is what the probability is of any given thing, given what we do know at the time. There is no logical need of knowing everything, to know that.
(The one exception is present uninterpreted experience, which we always know with absolute certainty, but that’s not what most people mean when they are talking about knowing something, and in any event, that also does not require knowing everything, only what you immediately know, i.e. the contents of your present experience qua experience, prior to any hypotheses as to what is causing those experiences.)
See my discussions of the Logic of Induction and Gettier Problems.
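To make that concrete, here is a minimal sketch of the underlying idea, knowledge as a probability updated on available information, using Bayes’ theorem (all the numbers are hypothetical, purely for illustration):

```python
# A minimal sketch of "knowledge is a probability given available
# information": Bayes' theorem updating a credence as evidence arrives.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of a hypothesis after observing evidence e."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

credence = 0.5                          # initial credence in some claim
credence = update(credence, 0.9, 0.2)   # strong evidence: rises to ~0.82
credence = update(credence, 0.6, 0.5)   # weak evidence: small nudge to ~0.84
print(round(credence, 2))               # what we "know" = this probability
```

Nothing in this requires knowing everything; each update uses only the information available at the time.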
“Experience” – so you’re absolutely certain we’re not brains in a vat?
“Knowledge is always only of a probability, and only as given available information.”
Is that probably true or absolutely certain-ly true?!
(This is a constant presuppositionalist refrain, of course)
Is this related to the Epimenides Paradox at all?
First, there is no such thing as “absolute” certainty of matters beyond direct sensation. But if you mean, am I highly certain we are not living in a virtual universe, then yes. Follow the links in the article you are commenting on, for example my discussions specifically of Simulation Theory (which is just a variant of Cartesian Demon). Its epistemic probability is vanishingly small. At least on present information (new information could change that).
Second, read the second paragraph of the comment you are responding to: I already said there are known exceptions to “knowledge is always only of a probability.”
But as to our ability to tell the difference, yes, that is by definition absolute, because it is precisely those propositions for which we cannot establish undeniability that are classified as deniable, which is a logically certain dichotomy, as it is directly apprehended, not inferred.
Deniable is just another way of saying “true to a probability and not a certainty.” We otherwise can directly apprehend what is undeniable, and thus is “true to a certainty,” and thus requires no rule or inference; everything else is then by logically inescapable definition deniable and therefore true only to a probability. See Epistemological End Game. This is a logically necessary fact, and is directly apprehendable as such, without inference; it therefore is itself undeniable.
And no, this has no connection to the Epimenides Paradox, as that only identifies unintelligible propositions, in particular, propositions that are impossibly self-referential (“The Cretan says all Cretans are liars” entails an inescapable contradiction and therefore does not communicate any information: it is a proposition devoid of actual meaning).
“All deniable propositions are deniable” is an intelligible proposition, and so is its synonym, “All deniable propositions, qua deniable, can only be true to a probability.” This is in fact a tautology, not a self-contradiction. It is thus not subject to the Paradox. Meanwhile, un-deniable propositions are directly apprehended as such, requiring no inference, and therefore are known to be undeniable to an absolute certainty; they are, by definition, the only propositions that can be so described, because to describe them is simply to state that very same proposition. Which is a tautology, the opposite of a self-contradiction.
Logically necessary truths, when directly apprehendable as such, are the exact opposite of logically contradictory assertions. The Epimenides Paradox only relates to logically contradictory assertions.
Concerning this argument:
“In order to know anything, you must know everything or have someone who does (God) reveal it to you; therefore only the Christian worldview allows knowledge.”
This argument presents an inherent problem for the person making it (when taken at face value).
Because if we can’t know ANYTHING (without knowing everything) then we can’t KNOW that there is a God to start with. And furthermore we can’t KNOW that it is in fact God that is revealing any information to us.
This is therefore another “special pleading” type of argument.
That is true.
But the argument is unintelligible to begin with. As far as you have presented it, it has no valid or sound form. It’s thus gibberish; which is not an argument. So you really don’t need to find flaws with it. When someone has stated no actual argument, you need not bother rebutting it, as no actual argument has been made to rebut.
Dr. Carrier wrote:
“Knowledge is always only of a probability, and only as given available information. Consequently, all we know is what the probability is of any given thing, given what we do know at the time.”
Concerning the last part of this statement, “given what we do KNOW at the time”, doesn’t that beg the question as to how this PRIOR knowledge came to be regarded as something that we actually KNOW? It seems to me that your probability assessment is dependent on some prior certain knowledge (a foundation), but how did that prior knowledge come to be regarded as being so certain to start with?
Or are you simply saying that ALL knowledge is based on probability, and the best that we can do is strive to ensure that our building blocks of knowledge are based on sound logic and probability assessments?
“…doesn’t that beg the question…?”
No. As I already explained, regress ends at the raw sensory data from which all hypotheses are inferred; as I’ve mentioned several times now, that exceptional case of basement data cannot be false (it has a zero probability of being so), and all other knowledge derives from it. See Epistemological End Game.
If you want a more inclusive term to see more clearly this point, use the word “information” rather than “knowledge.” See again The Gettier Problem and Epistemology without Insurmountable Regress or Fallacious Circularity.
What are your thoughts on the “self” being an illusion? I heard Sam Harris say that there’s no biological basis for the “self”. Maybe not in those exact words, but it seemed like he was saying that it doesn’t exist.
I was thinking that even if we believed that we’ve found where the “self” is located in the brain, how could we tell the difference between an actual self and a mere sense of self (an illusion of the self)? Maybe that region in the brain can give rise to either of the two. How would we know if the illusionary self doesn’t have the capacity to produce the exact same qualitative experiences as an actual self?
Curious to hear your thoughts.
Harris is being illogical there, in order to defend his religion (Buddhism) with the same dubious apologetics any religious believer does.
See Eddie Tabash’s take-down of Sam Harris on this point.
This is, IMO, just another example of how Harris is a terrible philosopher.
There is only one sense in which a self is an “illusion,” and it’s the same sense in which the “world” we experience in our heads is an illusion.
For example, colors don’t exist in the real world (only photons vibrating at different frequencies, which have no color). Nor do solid objects (everything that exists is almost entirely empty space; what we experience as solidity is actually just the force interaction of electromagnetic fields). But our brain “invents” those things as stand-ins for things that do exist, in order to construct a more-or-less reliable virtual model of the external world (so, “solidity” it invents to model EM forcefields, “colors” it invents to model photon frequencies, etc.). But this model is more-or-less reliable: we can successfully navigate the real world using it, so for all its errors or flaws or fictions, it clearly isn’t even substantially incorrect.
So that virtual model is not an illusion in the sense that the things it represents don’t exist. Colors don’t exist, but the photon frequencies they correspond to do (even when they don’t, e.g. magenta, they still do, e.g. magenta represents the overlay of two different photon frequencies); solidity doesn’t exist, but EM force fields that block light and the movement of masses do; etc.
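As a toy rendering of that point (the wavelength bands are approximate and the function is invented purely for illustration): the label is the brain’s stand-in for a physical quantity, and “magenta” is a label with no single wavelength at all.

```python
# A toy rendering of the point above: perceptual labels model photon
# frequencies; "magenta" models two overlaid frequencies with no single
# wavelength of its own. Bands are approximate, purely illustrative.

def color_label(wavelengths_nm):
    """Map photon wavelengths (in nm) to a perceptual label: a crude model."""
    def single(w):
        if 620 <= w <= 750: return "red"
        if 495 <= w < 570:  return "green"
        if 450 <= w < 495:  return "blue"
        return "other"
    labels = {single(w) for w in wavelengths_nm}
    if labels == {"red", "blue"}:
        return "magenta"   # a real experience modeling two overlaid frequencies
    return labels.pop() if len(labels) == 1 else "mixture"

print(color_label([680]))       # red: models a ~680 nm photon stream
print(color_label([680, 470]))  # magenta: models red + blue together
```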
The self is the same thing: it is a virtual model of pertinent contents of a human brain. As such it has illusory elements (e.g. the feeling that consciousness is a singular experience from a specific topographical point and separable from the body, i.e. Cartesianism, is as fake as colors and solidity) but they correspond to non-illusory facts. Your brain is in fact not neurally connected to anyone else’s, so you are, in actual fact, an isolated individual decision-maker with isolated unique memories, desires, skills, personality traits, etc., and you do have a unique narrative memory that is more-or-less a true history of that individual integrated collection of properties and data.
So there is no sense in which “you” don’t exist, in any sense of “you” that matters (an individual with unique memories, history, abilities, and proclivities). And this is such an extremely well-established fact of neuroscience it is shocking Harris, an actual neuroscientist, would deny it. This only illustrates how irrational a religious faith makes people, compelling even scientists into denying their own science.
As to your last point, you may be tricking yourself into the same equivocation fallacy Harris has: confusing you (as the actual person, with memories, abilities, desires, etc.) with your experience (your brain’s active virtual model) of you. Consciousness is not a person; it is the awareness of a person.
That’s why you still exist when you sleep, and why “you” always consist of far more things than you are ever at one time conscious of. For example, you do not go around simultaneously conscious of every memory you have ever had in your entire life; yet “you” do not consist solely of what memories just “happen” to be present in your consciousness at any given moment, but of the entirety of all memories stored in your brain—thus “you” are not “the consciousness of you”; the latter is just your means of keeping track of yourself, by constructing a reliable, coherent, consultable database of who you are and how everything you experience relates to you, e.g. historically and socially or with respect to your wishes, desires, and plans, and so on.
Read the Tabash critique for more.
How would we know though that the external world is real? How can we verify that the photons that correspond to “redness” aren’t an illusion too?
I guess since an illusion is a mental phenomenon, there would have to be a real, non-illusionary mind that’s producing the illusion. Otherwise it’s like saying that an illusionary mind can produce a REAL illusion, which makes no sense. There has to be something real at the bottom of it all.
You’re right about my equivocation fallacy. Probably because I had no clear understanding of what the “self” was. Now that I think about it, I consider the “self” to be analogous to a captain of a ship. Just like the captain determines the course of the ship, the “self” determines the thoughts, behaviors, and actions of the body.
Thank you for the link. I’ll check it out (If I’m real, LOL).
We know these things the same way we know anything: by inductive inference. The only way we could be fooled about these things is if there were a Cartesian Demon specifically arranging data to fool us, which is far more improbable. See my discussion of the Cartesian Demon problem.
One should thus not confuse “colors exist” with “colors exist outside our mental model.” The former is true; the latter is false. And we know this by very strong inductive arguments. Likewise everything we know (and don’t know) about what “photons” are and that they exist.
That “there has to be something” at the bottom is tautologically true, but logically that “thing” could simply be a solipsistic mind (your mind may be the only existing thing and everything else a fiction it invents). We rule that out, again, by inductive inference (e.g. true solipsism requires positing one of the most elaborately complex and inexplicable Cartesian Demons conceivable).
As to a “self” being the captain, that can still stumble into Harris’s equivocation fallacy: one can still confuse the acting commands of the captain (the experienced event of thinking and deciding), with the actual thing itself (the thoughts, memories, information, proclivities that cause the decisions made and thoughts thunk).
Notably, even if a Cartesian Demon were fooling you into thinking you existed and were a singular entity, that would in and of itself constitute your existing. So it would actually be logically impossible for you to be mistaken that you exist. You can be mistaken about the contents and properties of yourself, and about what is causing and sustaining your existence, and even about how much you are even in control of anything you do (it is possible to fool you into thinking you are making decisions when in fact you are not), but that would still entail there was a you to fool and have this mistaken belief.
The only logically possible way for a self not to exist is for no experience of a self to exist and for there to be no dormant thing that could be experienced. Other animals live such an experience. Humans do not (outside severely brain damaged states of being). You would have to meet both conditions otherwise (e.g. you still exist even when you are unconscious, so “not experiencing” a self is not a sufficient condition for a self not existing; so you need that and for there to be no organized self stored in memory anywhere, whether a Cartesian Demon’s memory or any other kind, in order for a “self” to actually not exist).
I read your article on the Cartesian Demon problem. One of the points you made was that positing a conscious mind that’s engineering the simulation is a more complicated theory than a mundane one, because you are adding a mind to it, which is a very complicated thing. But I don’t see how this counts against the Simulation theory. Maybe reality is more complicated than we think or that we would like it to be. I don’t find this reasoning convincing. Maybe this is due to my ignorance, but I just don’t see why we should necessarily prefer a simpler explanation. Isn’t String theory a lot more complicated than the standard cosmological theory that says there’s only one universe?
Also, I wouldn’t say we know the Simulation theory is false with the same level of certainty that we know that water is made of hydrogen and oxygen. We can run a chemical analysis and observe the chemical constituents of water, but we can’t do anything remotely similar for the Simulation hypothesis. It seems untestable and unfalsifiable.
Moreover, I just want to clarify a few things. First, I understand that by saying that color (for example, redness) doesn’t exist outside the mind, it doesn’t necessarily mean that color doesn’t exist at all. I’m definitely not saying that. I understand that color is a mental phenomenon (therefore it exists) that corresponds to forces outside the mind: photons.
Also, I’m not sure I said that the Simulator is fooling ME into thinking I exist. If I said that, I misspoke. What I’m saying is: how can we prove that “reality” isn’t part of the Simulator’s dream, and that every person (being, animal, etc.) isn’t a character played by the same actor, the Simulator? So the Simulator is fooling ITSELF into thinking that it’s not every character, when it is.
I know this sounds batshit crazy, but it’s one of the ideas I find most interesting (besides String theory, where there are many “Mes”), and I wanted your take on it. Thank you.
If you have to posit without evidence an extremely complex entity to explain a simpler one, you are going backwards in probability. The simpler explanation will always be more probable. And when the variance is huge (your explanans is vastly more complex than the explanandum), the probability becomes vanishingly small.
See The Argument from Specified Complexity against Supernaturalism.
String Theory is not a valid analogy. Not only does it have considerable evidence in its support (just not enough to rule out alternatives, which is why it remains a hypothesis being explored rather than a theory proved), as in, we can predict numerous specific facts with it (the entirety of the Standard Model and the differences between Relativity and Quantum Mechanics), but it is actually vastly simpler than the thing it explains, not the other way around (String Theory explains the vastly complex Standard Model—all particles and all their properties, even the weird ones—by positing nothing more than vibrating spacetime in an eleven-dimensional manifold—so by just adding a few dimensions, and reducing everything to fluctuations in them, String Theory explains a huge, marvelously complex pattern of facts). By contrast, Simulation Theory lacks every evidential support String Theory has (Sim theory uniquely predicts exactly nothing whatever), and requires inventing vastly more complex things than what it is supposed to explain.
And that’s not just because of the law of computation: any computational model of a world must by logical necessity be more complex than the world modeled, as you must posit a whole extra meta-universe for the computer to reside in, and then all the components of that computer on top of the components in the model itself. The corollary of which is: the simplest computational model of any system is the system itself—so in the absence of any evidence for the vastly more complex epicycles within epicycles, the odds vastly favor the existence of just the observed system itself.
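One loose way to formalize that intuition is a Solomonoff-style simplicity prior; this is my gloss, not an argument spelled out in the post, but it shows why unevidenced complexity is so costly:

```python
# My gloss (Solomonoff-style prior, not invoked in the post itself):
# each extra bit needed to specify an unevidenced entity roughly
# halves the hypothesis's prior odds.

def prior_odds_penalty(extra_bits):
    """Prior odds factor for a hypothesis needing extra_bits of
    unevidenced specification beyond the observed system."""
    return 2.0 ** -extra_bits

# If Sim theory needs even 100 extra bits to specify the meta-universe,
# the computer, and the simulators, it starts at vanishingly small odds:
print(prior_odds_penalty(100))   # about 8e-31
```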
How dependent is a person’s consciousness on the specific matter arrangement that makes up the brain producing it?
For example, if something like the Star Trek transporter were invented, and it really were possible to disassemble a person atom by atom and then reconstruct them in the exact same atomic configuration as they had before, would they retain their original consciousness, or would they merely be a very convincing (even sentient) copy?
That is a semantic question, more than an ontological one. It’s more about what you think the difference would even be and why it matters.
In reality this is already the case: with quantum mechanics we know the quarks composing the nuclei of the atoms of your brain are being swapped out with virtual quarks all the time, so that from one second to the next what you just described has actually happened to you—all the core matter of your body is “disassembled” and “reassembled” every split second (it just happens at below a Planck scale in time, so none of that happening is detectable to modern instruments). So. Do you consider yourself a “mere copy” of your one-second-ago self, or the same person? And why?
The ontology of this is called The Teletransporter Problem and it’s one of the leading areas of discussion in contemporary philosophy. My response to it is catalogued in My Responses to the PhilPapers Survey.
Identity is a causal history, not an ontological icicle. No one ever really stays exactly the same, as if frozen, so identity is not about being frozen in a single pattern; you remain the same person not by being the same pattern but by being in a causal chain of past states of that pattern, each causing the next, i.e. as long as your past pattern-state caused your present pattern-state, one-to-one, those pattern-states are the same “person”: in 4D geometry, a single identifiable “person tube” spread across time.
In the transporter case, the past pattern organization causes the buffers to code a distinct output, which causes the new assembly. Thus causal history is maintained. Same as resurrection. Someone whose brain after dying is frozen in cryo and then resurrected nanorobotically centuries later retains the same, numerically identical causal history, and thus remains the same person, minus whatever changes may occur, as they do every second of our lives; e.g. events change us, and thus so would death and resurrection, but whether they change us so much that we cease to be “the same person” will depend on what you choose to mean by “same person.” It is therefore a semantic, not an ontological, question.
If you are asking whether the matter or the pattern matters; only the pattern matters. What your brain is made of is irrelevant (as proved by quantum mechanics: your brain is never made of the exact same stuff from one moment to the next; nothing about you is).
To clarify, your view is this:
Given sufficiently advanced technology, if a conscious person were to be ‘disassembled’ at the atomic scale, atom by atom, and stayed disassembled, while also having the whole process recorded with some very advanced kind of computer, and then a copy of that person was made out of different matter than their original brain (using highly accurate, very advanced manufacturing processes far beyond what we have today), it would simply be as if they were temporarily out of consciousness, then basically just ‘woke up’ again. Even though, technically, their original brain would be dead, and the ‘new’ one was made out of a different mass of matter, the same consciousness would continue from beginning to end of the process, merely being ‘unaware’ at one point in it, like a temporary dreamless sleep. And this is consistent with the laws of physics as we currently understand them.
Correct?
Yes.
Indeed, again, that is already happening to us all, all the time, every split second.
In the nucleus of all your atoms are quarks. Those quarks sometimes switch places with virtual quarks, leaving the same number of net quarks of the needed type, thus we see a continuous neutron, a continuous proton, but really neither enjoys real continuity: they are being swapped out with new neutrons and protons gradually over time, it’s just that the time scale is so small we can’t “see” this, just as cinema films are really broken by black bars every 24th of a second, but that’s too fast for our eyes to see, so we “see” a continuous image on the screen. But it isn’t continuous. There are gaps as one frame is swapped out for another; this just happens too fast for us to see it happening. (Needless to say, your electrons are swapping out even more frequently than quarks, and quarks and electrons comprise the entirety of your atoms.)
So imagine instead of a few hundred Planck times separating the disassembly and reassembly of all your atoms, we built a machine that could “slow time” and thus “pause” any moment where your atoms have dissolved into a “quark sea,” and “hold the pause,” as it were, for a much longer time (say, a day); and suppose for the purpose of argument this machine can wait the few split seconds it takes for all of your atoms to undergo this sea change (since atoms do this individually, bit by bit over time, not all at once, the machine will have to hold each one as it dissolves, one after another, until all of them have done so; though that doesn’t matter to the point, per the Theseus Paradox).
That machine would be doing what you describe. Just imagine we can then “move” that machine across the country in that day; then we turn the machine off, all your quark-sea’d atoms collapse back into new atoms, and you appear—where you would have appeared naturally a split second later; it’s just been stretched to a day. You have teleportation by disassembly and reassembly, by doing nothing whatever but pausing how long the gap is between the natural continuous disassembly and reassembly of your atoms going on all the time.
Why would that be any different than it taking a split second and crossing a fraction of a meter of distance as you travel in a car, say? Why would the “amount of time” in between when your quarks go away and are replaced with entirely new quarks make any difference to the continuity of your consciousness? And that’s all this machine did: stretch the amount of time. The rest is just what naturally happens to you all the time, every second, of every day.
You are not made of any of the same stuff by the end of any day that you started with. It has by then all been swapped out. You are Theseus’s Ship. That’s not even theoretical. It’s a known fact of physics now. So it cannot be that “stuff” has anything to do with who you are and what maintains your continuity as a person. You are not an object. You are a process. And a process can be sustained by any underlying stuff. The stuff is just the means of generating and maintaining the thing; it is not the thing itself. And a process can be paused (sleep, faints, comas) and continued and remain the same process. Thus, consciousness (and personal identity) are independent of the underlying machinery recording and generating it. The machinery can be swapped out seamlessly, without you even knowing it (and in fact, per quantum mechanics, is swapped out seamlessly, without you even knowing it). The process remains the same. And therefore so do you.
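For the computationally minded, here is a toy rendering of that process-not-object point (invented purely for illustration; Python’s id() stands in for “the stuff,” and the dict contents stand in for the pattern):

```python
# A toy rendering of "you are a process, not an object": the storage is
# swapped out on every tick, yet the pattern persists in an unbroken
# causal chain from each state to the next.

def tick(state):
    """Copy the pattern into brand-new storage, caused by the old state."""
    return dict(state)            # new object, same pattern, causal successor

person = {"name": "Theseus", "memories": ["first tree"]}
for _ in range(5):
    old_storage = id(person)
    person = tick(person)             # all the "matter" replaced...
    assert id(person) != old_storage  # ...different storage every tick...
assert person["name"] == "Theseus"    # ...but the same continuous pattern
```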
Oops, sorry for repeating my question! I do apologize for the repeat but I thought that my last comment did not get through the first time, and so I repeated it. I hope you don’t take this as some kind of tomfoolery, it was an honest mistake, and I would like to retain the ability to post at your site in the future, with different questions on different topics. Feel free to delete this, and the repeat of my question too.
I didn’t notice a duplicate question, only a followup. I didn’t see anything untoward in that. Perhaps the duplicate didn’t get in. There was just now a server glitch that might have lost track of it. No worries.
No, I mean that the ‘follow-up’ was supposed to be my rewritten ‘duplicate’ of the question, and I made it because I thought the one before it did not get through (since, if I remember correctly, the way the posting was displayed on my computer slightly differed between the two posts). And also because it perhaps could have been worded better.
But, come to think of it, since we’re already discussing an issue that touches on perfect duplication, and whether or not it results in states of existence different from the previous, then to fully clarify the issue, in answer to a question along these lines:
“Are you saying that if a living person’s neurological configuration were known to a precise enough degree, but they died afterwards, and their brain was destroyed, then later on, a sufficiently advanced technology could replicate their mind to the point that they merely experienced “death” as a brief lapse in an otherwise continuous conscious state, similar to someone who was put under sedation, or had a dreamless sleep?”
You would agree that: yes, in such a situation, provided we had a sufficiently advanced technology, it WOULD replicate their mind to exactly the degree, and in the same manner, you just described.
Right?
P.S. I fully confess that, given my posting history, I’m probably not the ideal person to hire for a maintenance position of the afore discussed technologies. 😀
Yes. As I mentioned, there is disagreement on how to answer this (it’s a well-traveled question in philosophy, even included in the PhilPapers Survey), but IMO, that disagreement stems only from semantic confusions. There is no relevant ontological difference between what you describe, and what is happening to us all the time already. The only differences are trivial to the point of having nothing to do with anything that matters (such as the “amount of time” in a lapse between transformations, or irrelevant details about the machinery producing the effects, i.e. in computer terms, storing the data and/or running the program; we are the data and program, not the hardware that stores and runs them).
To illustrate, we have not yet faced a situation of cognitive cloning, so we have no vocabulary for that (which I think causes some of the semantic confusions here). We have biological cloning, but no one thinks that therefore identical twins are identical persons, and the reason they are not said to be the same person is that they have separate causal histories (they are not numerically identical). They are not in each other’s minds; they operate independently of each other as soon as they exist as cognitive entities. They were once the same identical cell, so they share a common biological causal history, but that cell contained no cognitive apparatus.
A cognitive clone would be like a biological clone only instead of splitting into separate cells from the same shared cell, they split into separate persons from the same shared mind/brain. We can imagine an alien race that reproduces through budding: they just divide into two at some point in their adult life, each one starting out with a copy of the same brain arrangement and thus being a singular person before the split, then being identical but separate people an instant after the split, and thereafter developing as separate people with independent causal histories (they are not in each other’s minds; they operate independently of each other).
These people would be the exact same person up to time t and different persons after time t (a circumstance explored in Star Trek: TNG when Riker is accidentally replicated and they develop as different people).