Today I am going to offer a naturalist theory of qualia—the particulars of “what it is like” of conscious experience, like the redness of red or the floweriness of a flower’s scent or the twanginess of a guitar, or even what love or fear, or deliciousness or pain, “feel” like. My model can’t be tested or proven yet, because we lack the active-brain-scan resolution to “see” the structure of any computational circuit in the brain, and we don’t have a digital consciousness yet that can help, either (which could mod or swap out phenomenological circuits in order to catalog and report their effects). So this is just a hypothesis, with some of the (necessarily weak) evidence for it.

My Underlying Philosophy of Qualia

You can catch up on my overall philosophy of qualia elsewhere, most especially in Mind Stuff (see also The Mind Is a Process Not an Object and Was Daniel Dennett Wrong in Creative Ways? and my discussions of Mary the Scientist and Philosophical Zombies). But the tl;dr is this:

Whenever computers process information, there is always (necessarily) “something it is like” to be the computer doing that, as that is an inevitable consequence of computational discrimination itself. This is just never experienced by anyone unless the computation includes a computed person experiencing it (and that is when we can ask What Does It Mean to Call Consciousness an Illusion?). When a computer (like a complex animal brain or even some machine-brains programmed in a similar way, like Shakey the Robot) does experience something, it is because it is computing the experience of that thing. So if you start with a simple visual process, whereby a computer needs to discriminate between areas of a space that are light or dark (perhaps so as to move toward one and not the other), this will always be experienced in some way—the geometry being computed will be felt, and there will be something different about “what it is like” to be looking at a light area or a dark area. Because it couldn’t be otherwise. So there is nothing “extra” to explain about qualia. And the exact way this is experienced will depend on the computational circuit—its physical arrangement and behavior—and thus what it is doing exactly. Other philosophers have reasoned the same way (including Daniel Dennett, Patricia and Paul Churchland, and Allin Cottrell).

We could learn a lot about this if we could do the following things (either by active brain-circuit mapping or AI circuit-sandboxing or both):

  1. Map the exact logic-structure of the computational circuit for every quale.
  2. Which, if computational theory is correct, would show that qualia domains share structural features in their circuits not shared by other domains, so we could then, just by seeing a circuit’s structure, predict whether it will generate a visual experience, or a tactile experience, or olfactory, gustatory, auditory, emotive, and so on. Those structural differences would then be causal (they literally cause an experience to be in one domain or another).
  3. And it would show what the difference, structurally, is between specific qualia (like each color, each smell, each sound, etc.), so we could then, just by seeing a circuit’s structure, predict what experience a subject is having—or would have, if that circuit were operated and wired into their self-module (unlike in blindsight, where that is not the case, as I discuss in Sense and Goodness without God, index, “blindsight”; there I also discuss the difference between irreducible and composite sensation, e.g. every color reduces to one quale, while the color pattern on your curtains is a composite of qualia, including geometric experience and not just color experience).

From there we should be able to finally work out not only what the physical, structural difference is between a “redness color circuit” and a “flowery smell circuit” (etc.), but why one computational process produces one experience rather than the other. That is when we will have a complete scientific explanation of qualia (and that this is many decades away from our current technological ability is why we are still facing The Bogus Idea of the Bogus Mysteries of Consciousness). The odds favor this happening, one way or another (my hypothesis is just what I think is currently the most promising), because Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them, and especially in cognitive science (see my previous discussions of the Argument from Qualia and related mind-brain physicalism evidence, as in Was Daniel Dennett Wrong in Creative Ways? and Holm Tetens, Dinesh D’Souza, and the Crazy Idea of the Mind Radio and my Bayesian Analysis of the Barkasi-Sant’Anna Defense of Naive Memory Realism).

When it comes to the competing worldview of theism (or any supernaturalism), as I wrote before, regarding the Argument to Design in The End of Christianity (p. 300), “If people can be philosophical zombies (minds without qualia), so can God; and so can God make philosophical zombies of us,” because “we can only get, for example, ‘god experiences qualia, too’ or ‘god wants us to experience qualia’, by assuming that’s the case ad hoc (since we don’t actually have any evidence of the fact).” So, epistemically, without background knowledge, naturalism is just as likely to have this result as theism; and with background knowledge, all trend-lines point to naturalism having it.

My Proposed Hypothesis

There could be many ways computation produces and thus explains all qualia. So my position above is unaffected by our not knowing which. But here is one possibility: all qualia are just “discriminating modifications” to the basic geometrical experience of touch. Everything, really, is just another conceptual dimensionality of touch. Seeing is touch. Hearing is touch. Taste is touch. Pain and pleasure are touch. Your inner monolog is touch. The feelings of love and ennui and being drunk are all complex mixtures of touch. In some ways this is obvious. Love includes psychosomatic feedback: the computation of it signals your heart to pound, and your somatic nerves then feel that, and incorporate that into the overall structure of everything else going into the overall experience of the emotion. And your inner monolog literally runs on the same circuits you use to hear a spoken voice. And so on. But apart from those hints, it’s not all that obvious.

So let’s walk through a basic narrative of what I have in mind.

Touch is fairly easy to explain as a computed result, since it is just discriminating between geometries, and discriminating between contact and non-contact. The basic sensation of touch is simply that discrimination occurring in your running world model (you are computing “the difference between” shapes or between coming into contact with a thing or not). And that would obviously be felt as such; because that feeling is precisely what is being computed. For example, if there were no qualia, you could not discriminate between these things, as they would all “feel” the same; so they only feel different when you are computing their differences (in the case of true blindsight, this is still going on, but is physically cut off from you, the “you-model” you are running a computation of, and so it does not enter that computation, and thus is not “felt” there; just as with split-brain subjects, and all subconscious processing, which occurs before or without computed integration into a self-model). Geometric sensation is likewise obviously how computing geometry would be experienced: as simply an experience of shapes. There isn’t any other productive way to compute such information than one that would feel exactly that way. It is just the rudimentary discrimination of contact, then of pressure (which is a computing of the degree of contact, including the geometric pattern of contact, e.g. feeling bristles vs. a poke), then of shapes and distances (how large an object is, whether it’s round or a box, etc.). So that’s easy enough to explain. But how do you get from that to…smelling camembert? Or feeling warm or cold? Or feeling ennui?
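To make that concrete, here is a minimal toy sketch (in Python; the function name, data format, and values are all invented for illustration, not a claim about actual neural coding) of what “touch as pure discrimination” means: the only things the circuit outputs are differences, namely contact vs. non-contact, degree of contact (pressure), and a crude geometry of the contact pattern (many small contacts like bristles vs. a single poke):

```python
# Toy "touch circuit": a bare discriminator over a 2-D patch of
# pressure sensors. All it computes is differences.

def touch_discriminate(patch):
    """patch: 2-D list of sensor pressure readings (0.0 = no contact)."""
    readings = [(r, c, v) for r, row in enumerate(patch)
                          for c, v in enumerate(row) if v > 0.0]
    if not readings:
        return {"contact": False}  # the most basic discrimination
    rows = [r for r, _, _ in readings]
    cols = [c for _, c, _ in readings]
    return {
        "contact": True,
        # degree of contact: total activation across the patch
        "pressure": sum(v for _, _, v in readings),
        # rudimentary geometry: bounding extent of the contact region
        "extent": (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1),
        # many separate small contacts ("bristles") vs. one ("a poke")
        "points": len(readings),
    }

poke     = [[0.0, 0.0], [0.0, 0.9]]
bristles = [[0.3, 0.0, 0.3], [0.0, 0.3, 0.0]]
print(touch_discriminate(poke)["points"])      # 1
print(touch_discriminate(bristles)["points"])  # 3
```

On the hypothesis above, nothing further needs to be added for these outputs to “feel” like anything: the felt difference between a poke and bristles just is the computed difference between those two discrimination results.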

The Evolution of Qualia

There are two paths of current empirical research that can help answer that question. The first is the evolutionary history of sensation. You can find a summary of this in Gerhard Schlosser’s article “A Short History of Nearly Every Sense—The Evolutionary History of Vertebrate Sensory Cell Types,” in Integrative and Comparative Biology 58.2 (August 2018: 301–16). He provides this taxonomical map:

You’ll notice (as he describes in the text) that touch was first (in Cnidaria, like sea anemones). And vision evolved from the same machinery and circuitry as touch. Then so did smell. And in most animals, sound, you may know, is processed by touch-sensors on moving hairs in the ear—which is how touch began: with sensors attached to tendrils and thence hairs. There is no inherent reason why sound had to be processed that way. So this indicates an evolutionary pathway: all senses go back to touch. Scientific study of the qualia of emotion will likely find the same results, since emotions are often complexes of simpler sensations, and use some of the same architecture as the senses, and evolved from some of the same brain circuitry.

For example, at the simplest level, we know pain-sensing cells evolved from touch-sensing cells—and some of those, to detect irritants, adapted in a way similar to olfactory cells, reacting to chemical “touch,” where the molecule is being touched, but instead of generating any olfactory discrimination, the normal touch pathway is activated. Which evinces our theory in action. Likewise with heat- and cold-sensing nerves, which evolved from the same cells and operate in the same way. Contact pain is just touch sensation dialed to 11: the easiest way for blind evolution to implement a utility function of dislike, motivating us to flee or address a stimulus. These sensor systems merely report contact, and then build patterns from there. For example, pain is a touch signal so intense it disrupts other computations (making it difficult to ignore or take attentional resources away from the pain and its cause). That this feels like pain is precisely because that is what is being computed: a strongly motivating, attention-claiming condition report. We intensely don’t like it (and can’t feel indifferent to it) because this intense dislike (and that disabling of access to indifference) is what the computation is.

To be clear, it is not necessarily the case that qualia computation evolved at the same time as the sensor systems; comparative anatomy suggests not (brains of suitable complexity for integrated complex modeling, and thus experiencing the model, appear later in evolutionary history). But as evolution is pathway-dependent (not intelligent), the probability is very high that brain circuitry co-evolved along the same paths. For example, whenever visual-qualia-computing began, it would have been through adaptations to the same circuitry as eye cells had always been connected to, which in turn began as touch-sensing circuitry, so it would be building on the same circuitry that touch-qualia circuits built on (whenever they developed). And this would explain their structural, computational, and felt similarities. Hence, it makes sense that pain would feel like intense touch, even when it’s a reaction to a chemical stimulus rather than a touch.

By contrast, a different story must be told for the evolution of pleasure qualia, since that is post-sensory, in the same way “tasting bad” (disgust) is post-sensory and not a form of “pain,” so pleasure qualia appear to be a computed result, of “liking” and “desiring.” But evolutionary anatomy shows not only that all pleasure qualia are adaptations of each other and thus all evolved out of the same core experience-generator (as our theory would predict), just like all other senses did, but also that disgust qualia evolved from the same circuitry as well (it is thus the converse computation, entailing the converse experience). And it appears that all of this evolved from a pathway beginning with brute motivation, as pleasure circuits evolved out of circuits for stimulating movement (and eventually memory).

In other words, “pleasure” and “disgust” evolved from rudimentary “push” sensations, and thus are again a kind of touch, with the corresponding sensor cells folded inside (which is, really, the evolutionary history of the whole brain, being just an internalized bundle of the same nerve cells, with subsequent adaptations). That is, computed decisions about movement (impetus, the feeling of a diffuse “push”) evolved into computed decisions about like and dislike. Anatomically, this stimulus starts with dopamine chemosensors inside the brain rather than in contact with the environment, which thus react to contact with the output of other computational circuits. The whole brain can be explained this way, and thus likely other emotive qualia will follow the same explanatory path.

But as all these qualia-systems (internal and external) needed to make more and more discriminations, the circuits doing this had to evolve to branch out into other ways of computing differences and keeping track of them. So, for example, pain receptors evolved from contact-sensors to chemical-sensors without discriminating them (they just report the same either way) because there was no special advantage to discriminating between them (or at least not one that evolution’s random walk hit upon), whereas when they further evolved into temperature sensors, this was precisely because of the benefit to discriminating specifically that kind of stimulus from mere contact (you can do more, and more useful things, with “hot or cold” information than with it just being another contact datum, like “something is itching me”).

Hence that computation couldn’t represent hot and cold with a feeling of mere contact or non-contact; that computational space was already in use, and it is precisely the need to compute the difference between contact and heat (or cold) that put evolutionary pressure on evolving nerves (and brain circuits) for that purpose. So it is logically necessary that a computation discriminating between heat and touch will result in those two things feeling different from each other. That’s precisely what the running computation is doing. And it’s easy to see why heat and cold would feel similar to touch yet remain distinct. They also use the same geometric maps (to where on the body you are hot or cold, and where or what in your modeled environment is hot or cold). So this is looking easy enough to explain; there isn’t much else this computational distinction would likely feel like than this. And similarly, again, when the intensity of it becomes pain.

This is then a clue to how we might explain vision. Visual computation builds on the same circuit groundwork as touch, given that it needs to build geometrical maps with the same thing—touch (contact)—only now, with photons (in a similar way that chemoreceptors react to touch in the form of molecular contact). But it has to compute many more discriminations than touch. It can’t compute that the photons are touching you, because that would create too many mismatches between output and reality. Evolutionary pressures would build circuits that start using this contact data to build information about things like distance, and distant shapes. It could do this blindly (without shades or colors). But as it happens, color sensation began the other way around: sensing light or dark (the earliest and simplest eyes did only that).

So just as with heat and cold, the computer needed to start discriminating between shape and “bright” or “dark.” It’s easy to imagine that what we experience this as is indeed the simplest way to compute that, and thus why it feels (is experienced) that way, which is just enough differently than touch (or heat or cold) to serve its new computational purpose (mapping an external environment). Eyes then evolved greater sophistication, to where they could build ever more detailed geometric maps of an external environment, and thus start modeling shapes, which they may even have done using touch’s mapping hardware already built for that. Geometry plus light and dark gets you all the qualia of black and white vision, being what it inevitably will always feel like when the computer that is you is computing these kinds of distinctions.

Then eyes started discriminating different colors. How could your computer do that without it feeling different from just “light” or “dark”? It couldn’t. So when it started discriminating kinds of light (in our case, red, green, blue; other animals experience yet other basic colors), there had to be some way this would feel different from just “light” and yet we can expect it would feel a lot like light, because it’s basically the same circuit, just with a tweak. The computer of your brain is basically cataloguing kinds of light with color coding—and remember, this is now completely made up. Colors don’t exist apart from our computations—they are literally a fiction our brains contrived to keep track of differences in the world (the presence of different frequencies of light, rather than “just light”). If you aren’t up to speed on that, see my discussion in my Bayesian Analysis of the Barkasi-Sant’Anna Defense of Naive Memory Realism.

Here it is less obvious why a computational discriminator would specifically land on “red,” “green,” and “blue” (from which all other colors are constructed in our visual experience; even colors with no corresponding photon frequency, like magenta, or that are more computed than sensed, like yellow). But it logically necessarily had to hit on something; so this is no objection to the explanatory model. There must be some correlation between whatever circuit is hit upon to discriminate these colors, and how they are discriminated (and thus what colors we actually experience). Our use of RGB may be accidental (a random walk of the circuit space, locked in by subsequent evolution), but we would still like to know why one circuit computes into an experience of “red” and another “green,” for example (or magenta or yellow for that matter); or indeed what other alien colors we could experience if we modified those circuits in any corresponding way.
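The point that colors are coded activation patterns rather than properties of light can be sketched as follows (a toy illustration only: the triples are invented activation levels, not measured cone responses). “Magenta” has no single corresponding photon frequency; it is just a coded pattern of strong R and B activation with no G, and its computed similarity to other colors is just distance in the activation space:

```python
# Toy sketch: a color as a point in a 3-D activation space over
# (red, green, blue) discriminators.

def color_distance(a, b):
    """How different two computed colors are: distance in RGB space."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

red     = (1.0, 0.0, 0.0)
blue    = (0.0, 0.0, 1.0)
magenta = (1.0, 0.0, 1.0)   # a computed mixture, not a light frequency
yellow  = (1.0, 1.0, 0.0)   # likewise largely computed, from R + G

# Magenta is coded "nearer" to red and blue than to yellow:
print(color_distance(magenta, red) < color_distance(magenta, yellow))  # True
```

Any other set of basis detectors (a different random walk of the circuit space) would have generated a different but equally serviceable coding scheme, which is the sense in which our particular palette may be accidental while the existence of some palette is not.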

Only the future science I mentioned can get anywhere near answering these questions. But my hypothesis answers the more general questions, of why we experience color, and the number of colors we do, and why it is so much like just a tweaked way of experiencing mere brightness or darkness—and why it always computes geometrically, since we never experience color except in relation to a shape (even synesthetes will experience a field of color, or a geometrically defined halo of color around a word or letter on a page, for example). We actually have worked out a lot of the computational physics of this already (see Vision Science: Photons to Phenomenology by Stephen Palmer). But the crucial point here is that color is experienced differently from sound and smell, for example, because it is part of a geometrical discriminating computation—its function is to discriminate localized surface and volume properties of our environment, not just by shapes and positions (see my discussion of the neurology of visual aesthetics in Sense and Goodness without God, section VI.2) but also by different kinds of light emanating or reflecting. What our computer is doing dictates what it will be like to be doing that computation. And all from an adaptation of touch sensors. Vision is just a “different way” of touching things, and extracting information from that touch.

Which gets us to the other, later adaptations of this early sensory computation, like hearing, taste, and smell. In cellular terms, hearing is literally just physically touching our hairs (with sound waves and vibrations in our body); and smell is a more evolved chemosensor (reacting to molecules in the air—literally touching olfactory atoms); and taste is just a variant version of smell (relying on liquid or solid molecular contact—literally touching taste molecules). As I noted before, early chemosensors may have simply been irritant detectors (straight touch sensors tooled to react to chemicals), but ever more discriminatory computations would obviously be useful—like detecting good things we want to seek out, or bad things at a distance we might want to hide from (and eventually, even distinguishing different kinds of those things), or detecting all kinds of things the knowing of which gives us data we can use (like the coming weather, or the history of a place, or who or what is near, or what we are handling or working with, and thus what we can do with it: for the natural evolution of perception and reasoning here, see The Argument from Reason and Why Plantinga’s Tiger Is Pseudoscience). Again, if the computer needs to discriminate between mere contact irritants and specific (maybe even useful or good) things, like saltiness, or rosiness, or smokiness, it will start exploring a qualia space for the purpose, just as when light-modeling systems needed to start discriminating kinds of light (hence colors).

As with color, we are nowhere near knowing why a specific circuit feels like a specific taste or smell. But the general fact that it will (and that there will be a diversity of computed tastes and smells) is already explained by my computational model. It is a logically necessary outcome of computing these discriminations in our overall computed model of our environment. Telling the difference between tastes and smells computationally literally is telling the difference experientially. They are inseparable outcomes. The one entails the other. Because they are ontologically identical—differing only in respect to what perspective they are observed from. This is why only I can see what my mind is computing; you would have to run the same computation to see it, too, because you are not this computer, you are that computer (see, again, my discussion of Mary the Scientist). You could (with sufficient information about my activated brain circuitry) know that I am experiencing something when I am, but to experience it yourself, you would have to be the computation I am—or whatever is functionally equivalent. For instance, we can share an experience of a color while remaining different people, because all that’s needed for that is that we be, computationally, people, and running a color circuit of the same structure. Because each discriminating circuit logically entails its own quale, as explained above.

Notice that taste is computed specifically so that we can “feel” what is in our mouth; and that is why taste feels that way—and doesn’t feel like colors or sounds, yet feels eerily similar to smells, being evolutionarily related and thus structurally similar circuits (although, nature being random and blind, all these wires do sometimes get crossed). And smells feel like smells, and not colored spaces like with vision, or felt pushes and geometries like with touch, and so on, because they are computed differently. The information olfactory perception computes is the presence of odors; and gustatory, tastes. Evolved from very early chemosensors, these computations feel closer to touch than other senses do. We feel contact of smells in our nose; not exactly like being touched in our nose, but qualitatively similar. Because this is how those systems evolved to compute information. They also map a geography or space, but because the information is diffuse, our brains did not evolve to compute this the way they do vision or hearing. Odors randomly propagate, so their presence in our nose only really gives us that information; unlike light, which can give us more precise information about where it is coming from. Likewise, we don’t need to be alerted to whether something is touching our skin—the point of odor perception is to discern a lot more than that; that’s what it evolved to compute differently than mere touch. And that is why smell feels different to us than sight or sound does.

But that leaves sound (though there are other basic qualia we could explore). Why does sound feel the way it does? Why is it not experienced the way we experience color? (Much less taste or touch or smell.) Again, the answer is: because it is not being computed that way. Sound sensors need to discriminate something other than touch—otherwise, sounds would just feel like tickling or contacts in hairs in our ear. There is not a lot we can do with that kind of information; it’s not informationally rich enough for the purpose. Which is why hearing evolved in the first place: to do more than just sense whether a vibration is contacting us.

So, again, the touch-sensing system adapted to discriminate something different from mere touch. It is in this case easy again to imagine why the adjacent domain of computed result would be like touch (sound feels closer to mechanical touch than color does), but necessarily different (in its basic rather than composite qualia, essentially hums and whines and beats, but all with adjacent touch experiences we could call in as analogous), because the system needs to distinguish things other than mere touch: our brains use sound data to construct complex geometrical understandings of the spaces we inhabit and what is happening in them, integrating sound with sight into a total world model. That it would feel like it does (and not like something else) naturally follows from what it is doing in terms of computation. It’s just another way to feel being touched.

And though composite sound is highly complex, basic sound qualia are not—they seem no more special than they need to be for us to tell them apart from ordinary physical touch (like a brushing of the hair on our head or cheek). And that’s why they are like that: our computer simply started making these discriminations, and we can expect it would feel no differently than is required for that.

So how the evolutionary history of these sensory systems tracks this narrative is one of the ways science can support this theory even now. And it appears to match.

Olfaction as Test Case

The second line of evidence for this theory is the way qualia are encoded from sensory inputs. My theory of qualia tracks closely with Paul Churchland’s, which he lays out in The Engine of Reason, the Seat of the Soul. And we can focus on olfaction as an important example, because the computational fields of the other main senses all track mathematical models that computation theory would expect: sound and vision can each be entirely understood with their own mathematical geometry, whereby a space of possible experiences is explored with just a few discriminating inputs. That looks like computation. Which is a promising lead: it suggests we’re on the right track to understanding their corresponding qualia.

But usually people think olfaction doesn’t fit this model. It just seems like it’s generating a random array of experiences, with no rhyme or reason. Why does cinnamon smell like that, and not like roses or dead fish or something else? That it would smell like something is easy for my theory to explain; this computation is discriminating different odors, so that has to feel in some way like every odor is different from every other, otherwise the computation wouldn’t be working—it would be failing to make these distinctions, rather than succeeding. And it could just be another “random walk plus lock in,” where any randomly hit upon circuits would do for the purpose (and our categorization of smells as good or bad happens after that—hence why dogs love poop and we don’t, yet it probably smells the same to both of us; since we are cousins, our brains evolved from the same lineages—aliens or AI might not share the same experiences because their circuitry evolved separately from that of mammals here on Earth). But there must still be some reason why each circuit hit upon generates its particular experience and not some other. We just aren’t in a technological place yet to start finding out what that is.

But there are clues. Churchland notes that olfactory sensation operates through vector computation, which means we can expect the physical circuitry to match and confirm this. “Humans possess at least six distinct types of olfactory receptors, and a particular odor is coded as a pattern of activation levels across all six types,” resulting in millions of computed distinctions (p. 26). This is actually similar to color vision, only with “six” core detectors rather than the mere three of RGB color vision, leaving an even larger array of possible distinctions. Every one logically must feel different when computed; that’s what the computation is doing, computing the difference. So when you are the computation, it will feel like something. And it just so happens that’s “smells.”
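Churchland’s vector-coding idea can be sketched like so (a toy only: the six activation values per odor are invented, not real receptor data). Each odor is a point in a six-dimensional activation space, discriminating odors is computing distances between those points, and even a coarse resolution per receptor type yields millions of codable distinctions:

```python
# Toy sketch of vector coding for smell: an odor is a pattern of
# activation levels across six receptor types.

cinnamon  = (0.9, 0.1, 0.4, 0.0, 0.2, 0.7)
rose      = (0.2, 0.8, 0.3, 0.1, 0.0, 0.5)
dead_fish = (0.0, 0.1, 0.9, 0.8, 0.7, 0.0)

def odor_difference(a, b):
    """Euclidean distance between two 6-D activation patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Even at only, say, 10 distinguishable levels per receptor type,
# six types yield a million codable distinctions:
levels_per_receptor, receptor_types = 10, 6
print(levels_per_receptor ** receptor_types)  # 1000000
```

Discriminating cinnamon from rose computationally just is registering a nonzero distance between their activation vectors; on the theory here, that computed difference is the felt difference.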

But now we seem really far removed from my starting point, where everything is just a new way of distinguishing different kinds of touch. Touch is rudimentary (the most basic of qualia), and practically feels literally like what it is, because that’s how it’s being computed. But then it evolved to discriminate more than contact, so intense touch became pain, just one step of computational logic away. Chemosensors simply adapted the same equipment to chemical contact sensing. But then to distinguish heat and cold from mere contact, we started computationally distinguishing those things; yet they do seem close, like a step removed from touch, as would be expected given this history of development and function. These sensors then started reacting not just to contact and chemicals, but to light. And then we needed to discriminate between light and dark; so, one more step removed from pure touch, we get a simple move into a geometrical space of computation that distinguishes not just how things feel in terms of shape, but also with the added dimension of whether they are light or dark.

Needing to discriminate even more varieties of being light or dark then led into exploring the computational space of color, which is now pretty far, but you can see how it is just another way of experiencing touch, just many computational dimensions removed. Likewise sound, which went from feeling touched hairs to discriminating more differences in those touches—essentially, computing the presence of different kinds of vibration, with a similar exploration of a computational space. This could not be the same as visual computation, because then they would interfere and could not be compared or used to triangulate maps of our environment and what was going on in them; so evolutionary pressure pushed into other ways to compute these distinctions, leading us into another domain of “feeling touched,” that is again a few steps removed, but we can kind of see the similarity, for how beats and tones are just another way of feeling touched—different because computationally they have to be, but similar, because evolutionarily, that’s what they were built from. But with smell…how do we get from touch to…cinnamon?

Churchland and I are not alone. Cognitive scientists are exploring computational theories of qualia, and what Churchland and I are describing is essentially what is called Quality-Space Theory (or QST). And it has empirical support already. The idea is that tweaking physical computational circuits to make ever-increasing discriminations between sensed data should be reflected in qualia space itself: our experience of qualia should be mappable to a mathematical, computational space of possible discriminations, generating ever-increasing dimensions of difference. This doesn’t answer the fine question (of why a specific circuit would generate a specific quale). But it does support our answer to the broad question (of why qualia in general exist and are experienced as they are, given mind-brain physicalism and all we know about the biology and neuroscience of perception).

For summaries and examples in olfaction science, see:

Koulakov’s study in particular finds that when subjects score odors by similarity, all odors fall onto a two-dimensional curved surface, ordered by degree of similarity. They also find synesthetic correlations with a similar map in sound discrimination. This, together with the arguments and evidence presented by the other teams cited here, indicates that “odor space” is indeed an organized computational space, just like color space and sound space. They were thus able to graph odors by perceived similarities, and observed a continuum with a particular geometry.
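The kind of map such teams construct can be sketched with classical multidimensional scaling (MDS), a standard technique that recovers a low-dimensional geometry from nothing but pairwise dissimilarity ratings. The odors and numbers below are invented for illustration, and this is a minimal dependency-free sketch, not the published methodology:

```python
# Classical (Torgerson) MDS, implemented in pure Python with power iteration
# so no libraries are needed. Given only pairwise dissimilarity ratings, it
# recovers coordinates whose distances approximate those ratings.

def classical_mds(d, dims=2, iters=500):
    """Embed items in `dims` dimensions from a symmetric dissimilarity matrix."""
    n = len(d)
    # Double-center the squared dissimilarities: B = -1/2 * J D^2 J
    d2 = [[d[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d2]
    tot = sum(row) / n
    b = [[-0.5 * (d2[i][j] - row[i] - row[j] + tot) for j in range(n)]
         for i in range(n)]
    coords = [[0.0] * dims for _ in range(n)]
    for k in range(dims):
        # Power iteration finds the current leading eigenvector of B
        v = [1.0 / (i + 1) for i in range(n)]
        lam = 0.0
        for _ in range(iters):
            w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = max(sum(x * x for x in w) ** 0.5, 1e-12)
            v = [x / lam for x in w]
        for i in range(n):
            coords[i][k] = v[i] * lam ** 0.5
        # Deflate so the next pass finds the next eigenvector
        b = [[b[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]
    return coords

# Hypothetical similarity ratings, rescaled to dissimilarities in [0, 1]
odors = ["cinnamon", "clove", "rose", "dead fish"]
d = [[0.0, 0.2, 0.6, 0.9],
     [0.2, 0.0, 0.5, 0.9],
     [0.6, 0.5, 0.0, 0.8],
     [0.9, 0.9, 0.8, 0.0]]

points = classical_mds(d)  # a 2-D coordinate for each odor
```

When the recovered coordinates are plotted, odors rated as similar (cinnamon and clove) land near each other, while dissimilar ones (dead fish) land far away: exactly the kind of organized continuum such studies report.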

This suggests that smell is just like color and sound, which are just distantly related ways to feel touched, each pushing into a different domain of possible ways to discriminate kinds of touch (and thus computational differences can explain experiential differences within color and sound qualia, and between color and sound as qualia—and in turn between them and the more fundamental qualia of touch). It’s just that the computers that evolved to make distinctions between being “touched” by different odors started exploring a different (and evidently more distant) space of possible distinctions. The smell of cinnamon thus bears no discernible similarity to being poked or brushed or punched (unlike colors and sounds, which almost do). But that may be because the possibility-space being explored here is so far from the space of simple touch. If “redness” and “a low hum” can each be like a different way to feel touched, “cinnamon” and “dead fish” could be as well. They are just farther out in the continuum of logically possible distinctions that can be computed—and thus experienced.

Conclusion

Imagine yourself as a digitized mind (hopefully after making sure you know How Not to Live in Zardoz), in a sensory deprivation tank (so you can’t naturally see or hear or smell anything), while I turn a dial that mods your qualia circuits on a continuum through all possible configurations. I could start you at “being touched on the back of your hand,” a feeling that is quite simple (there is no other way it should feel, given what your brain is computing about the difference between being poked there and not), and then slowly turn the dial. It might then roll that sensation into a flash of light in your otherwise empty visual field, perhaps at a place corresponding to your body map for the back of your hand. The circuit, instead of discriminating “touched or not,” is now discriminating “light or not.” But that’s just a different way of computing information from that contact—and since there is already a distinct circuit for discriminating touch, a circuit that distinguishes a different kind of touch will necessarily feel different, because it is computing that difference.

With my dial thus turned, you don’t feel any touch sensation anywhere; instead, touches cause you to experience flashes of light in degrees corresponding to different intensities of being touched. I continue turning the dial, and those circuits configure one step further away, into distinguishing different kinds of light. So now you get colors when touched. I keep going, and the computational circuit starts discriminating vibrations, in a way it can distinguish from light; and the simplest way to do that is to tweak touch sensations into sound sensations. So now when you are touched, you instead hear sounds. And I keep going, and the circuits configure into what we call olfactory distinctions, where the feeling of being touched is now very distinct, exploring a geometrical continuum of “smells.”

So now, when touched in different ways, you instead smell different things. Why smells? Because the computer is now computing distinctions that differ from bumps, sights, and sounds, and thus has slipped into exploring smells as a remaining space of discriminations. There may be other domains of distinction the dial would move you through, which humans can’t now imagine, because they literally lack the machinery needed to experience them—because our imagination runs entirely on the same hardware as our sensory perception. We don’t have some extra computing space for that, and even if we did, it, too, could not possibly include every computational distinction that could logically be made. So there must be alien qualia unknown to us, further-out spaces of “distinctions” to explore. And yet we can predict aliens will experience domains similar to ours. For example, their colors may be weird, but they will still be colors, because that is what visual systems must compute: distinctions in the shading of spaces. Likewise sound, touch, smell, pain, pleasure, and so on. And, of course, aliens might have evolved to discriminate other domains than we do, too. But some of these (like hearing and sight) are too obvious and immediate for any evolutionary process to ignore. So we’ll have at least some things in common, in varying degrees.

The point of all this is that, though we are decades away from having the data we need to explain everything about qualia—answering that fine question of what it is about one configuration of a computer circuit that causes it to distinguish (and thus you to feel) a physical contact as “red” or “green,” and another configuration as “cinnamon” or “dead fish”—we already have enough information to suspect this will be the case. The evidence supports, at least to a degree, that qualia are the inevitable outcome of being a computation, and in particular of computing distinctions, and thus qualia are expected on mind-brain physicalism. If you experienced no differences between things, you could not be computing any differences between them. But we know, mechanically, that machines can compute differences between things—and even integrate them into a running model they can then mentally explore (as Shakey the Robot did), which is the second obvious requirement for experiencing qualia (since to be experienced entails being experienced by something, and that requires a larger computation to do the experiencing).

So there logically necessarily must be something it is like to be doing that. And that’s qualia. When we then look at how different sensory domains are doing different things computationally, including computing distinctions from each other, we know the resulting qualia must differ from each other. Sound will not be “just like” sight; and neither sound nor sight will be “just like” touch or smell, because discriminating between these things is what our computer is doing. They will also match closely what the computation is trying to do. So vision will feel like spaces of color in an external geometry at a conceptual distance, because that is what visual computation is doing. By contrast, hearing will not feel like that, because our auditory computation is doing something different with sound, extracting different data, in a different way, and to a different (though complementary) purpose—and that’s why sounds are felt differently than colors, and in the specific ways they are differently felt.

It’s all physical computation, all the way down. And it’s all just different ways to tell the difference between different kinds of being touched, tooled to specific computational purposes in each domain. That’s the theory. And I do think it stands a better chance of turning out to be correct than any other yet proposed. Moreover, because this is all intelligible and coherent, it cannot be said that it is impossible for physicalism to explain qualia. Because here is one possible way it can.

§
