This continues the Carrier-Bali debate. See the introduction, comments policy, and Bali’s opening statement in Should Science Be Experimenting on Animals? A Debate with Paul Bali; after that came my first response, Bali’s first reply, and my second. Dr. Bali now responds.
Against the Scientific Use of Animals
— Part III —
by Paul Bali, Ph.D.
-:-
Animal Sci is hard to pry from Agribiz—where “better feed & harnesses” tend to mean building better meat machines.
Eco-toxicology screens out the worst of our contaminants—mitigating harms of our industrial expansionism.
Vet Med, Zoology, Molecular Biology—stewards of the biosphere need these, and more. Yet how much AE is truly for animals, so far?
Hell Hospital
Since we tied Mouse to the track, AE is more like Fat Man or Transplant—where Richard says let the Five die:
If it were a universal law that single patients attending hospitals can be killed to save five, no one would ever go to hospitals. The social consequence of this would be vastly worse than letting the five patients die. [1]
For Richard, deontic reasons are consequentialist, since “It is logically impossible to decide what laws to wish universal without any context as to what such a universalized behavior will do (to you, and the world).” [2]
As I see it, the harm done to innocents is a consequence of the act, yet its badness lies partly in the past—that it’s undeserved.
But let’s map Richard’s consequentialism onto the Med-Sci regime. Here, a dominant species confines, manipulates, and kills the weaker to extract useful data—largely for its own benefit. Here, the hellish hospital persists—the people keep coming—since their favored kind are immune from the harvesting.
The hospital’s badness universalizes to a cosmos where uber-sentients may vivisect sufficiently unter-sentients; where humans may, in our galaxial wanderings, fall within the purview of an alien Med-Sci.
That hospital—that cosmos—is one we’d stay away from.
A Rawlsian Veil
Trackworker enjoys employment, Fatman has a Trolley pass, and Donor has hospital access. As a condition of access, each might consent to a system policy of Five-favoring in Five/One dilemmas. The policy benefits their kind, tends to benefit any one of them, and they can always withdraw—stay away from Track & Hospital.
Yet lab animals are marked for sacrifice and barred from system benefits. From the Original Position, prior to assignment of species, you might consent to a Lab that uses all species for wide benefit; yet not to one that serves a dominant minority, with sacrifices only from the majority.
You might endorse a rule that the first to ecologic dominance—the first to a universal ethic!—should treat other species as nascents or implicits in a biosphere society. For the good of all, or from self-interest: odds are you don’t become a Dominant; and Dominants who self-aggrandize tend to self-destruct—see our own ecocidal reign.
The Dominant, at minimum, should not interfere.
Objection: all beings are potential beneficiaries
Donor could pre-affirm his preference for Five-favoring. The OP sentient, likewise, might agree that n animals shall be medically sacrificed for 5n Dominants. Thus labmouse is like the Trolley’s unlucky One: unlucky “this day” [i.e. in this incarnation], yet part of a system [the Anthropocene biosphere] that can benefit their basic kind [i.e. the sentient]. For each OP sentient, their odds of incarnating as a Med-Sci beneficiary are low; yet higher, still, than their odds of being a lab animal.
A fortiori, the OP sentient might agree that n sentients shall be medically sacrificed for 50n sentients—from all species, for all.
AE & the wider scene
Lately, your odds of being an animal somehow abused by humans are high. You’re more likely to arrive as a male chick heading down a conveyor belt into a macerator (seven billion per year) than as a human enjoying his sisters’ eggs.
I judge current AE guilty by association with our wider systems of animal abuse. I see the AE Trolley in its political context: in the reign of Homo dominus, where life is made his chattel or obliterated.
Bret Weinstein: “you have no obligation to treat a dishonorable system as if it were an honorable one.” [3] Even were Mouse prone to leap onto the mainway for trans-species welfare, she ought to first consider that this Trolley shows low concern for trans-species welfare.
I’m not sure about the idealized “n for 50n: from all, for all” AE. I’d weight it more coming from a people sworn to global good.
In his Naturalist magnum opus, Richard writes: “It is not by accident that humans have mastered the Earth and are the only species to go beyond it. We are highly adapted for social cohesion and mutual aid, and that makes us nearly unconquerable.” [4]
High adaptation for intra-kind cohesion, yes—and inter-kind abuse. To animals, we may seem a choreographed hunt, a monster composed of us billions.
Robert Sapolsky sums up a Dutch study in which subjects administered oxytocin, then asked to consider the Trolley, were less likely to push one of their own onto the track—yet more likely to push a foreigner. “Oxytocin doesn’t make us more pro-social; it makes us more pro-social to people who feel like an us. If it’s a them, it makes us crappier and more xenophobic to them.” [5]
Animal Umwelten
A rabbit dam returning to her kits at dusk cares about her future!
She comprehends her life by valuing it—correctly—though may not calculate utiles.
Apes, cetaceans, elephants, corvids—who else? We might, with Darwin, see life’s spectrum, and query hard thresholds.
Our self-representation “is unlikely to have appeared de novo in a few large-brained animals”, but rather emerged “in small incremental steps”. [6]
Richard writes that “each human shares our awareness of being”. [7] Yet every sentient is sentient of being. Mouse may not abstract into Philosophy her self/world gestalt—yet few of us are Aristotle, still!
Our debate’s title is simple, yet the reality isn’t. I agree with Frederic Christie that there’s no bright line between invasive & non-invasive research, no hard wall between lab manipulation & ecologic management. And if sentience admits of degrees, our correlative duties could, too. Yet there’s no bright line here at our apparent apex, either. If fruit flies may be trapped, deformed, and killed for us, so may we for super-sentients.
Or is there a universal threshold we’re above, an inviolable Citadel we’re certain of?
Bear in mind, Child . . . there are higher planes, infinitely higher planes, of life than this . . . [8]
As sentient supersedes non-, so may modes unthinkable supersede us!
Cross-species empathy
Mice shown labmates in pain “intensify their own response to pain”. [9] Even if we define empathy as an abstractive “mental state attribution”, rats suggest it when refraining from a food-lever wired to shock a labmate. [10] They learn to liberate a labmate trapped in a container, preferring this altruism to accessing food—food which they’ll share, post-rescue. [11]
Likewise would we critics of AE free rats from the larger container, from the lab itself. By shared distress we advocate for them—and for ourselves, a bit.
Yet first I judge myself, and more than “by association”. I’ve sealed to their death a lab’s worth of fruit flies, taking out the trash thru the years. And the trash itself, the residence I generate it from, the road it traverses—all these convict me for death both direct (roadkill) and indirect (asphalt toxicology).
-:-
Endnotes
[1] Richard Carrier (2015). “Open Letter to Academic Philosophy: All Your Moral Theories Are the Same” (Nov 11).
[2] Carrier (2015).
[3] Bret Weinstein (2021). DarkHorse Podcast 97 (Sep 18).
[4] Richard Carrier (2005). Sense and Goodness without God: A Defense of Metaphysical Naturalism (Authorhouse): 327.
[5] Robert Sapolsky (2017). “The Biology of Humans at our Best and Worst” (lecture, Stanford University, Oct 24).
[6] Frans de Waal (2008). “Putting the Altruism Back into Altruism: The Evolution of Empathy.” Annual Review of Psychology 59: 286.
[7] Carrier (2005): 331.
[8] So sang a telegraph wire to Thoreau. Sep 12 1851, from his Journal.
[9] de Waal (2008): 283.
[10] Frans de Waal and Stephanie Preston (2017). “Mammalian empathy: behavioural manifestations and neural basis.” Nature Reviews Neuroscience 18: 499.
[11] de Waal and Preston (2017): 502.
Maybe I’m just not a good reader, but I feel like I barely know what Bali is trying to argue for at any given time. It mostly just seems vague and a bit irrelevant, like he’s just completely talking past you.
This post got pretty deep in jargon and notation, but I would say much of what he is saying is broadly responsive (it’s all relevant), just perhaps not targeted. To be fair to Dr. Bali, the format calls for brevity, so he is undoubtedly trying to address issues with jargon because that jargon can compress ideas.
Paul’s approach also illustrates the difference in rational style between continental and analytical philosophy. I’m an analytical philosopher, and I think his performance here illustrates so well why the analytical approach is superior that it could even be used as a teaching example of the point.
Continental style is extremely prone to all the cognitive errors and fallacies (especially of the emotive kind) that analytical philosophy was specifically developed to work around and avoid. Modern science is the earliest realization of analytical style (which is how philosophy could catch and pick up the same lessons from it). To carry the analogy with an example from the sciences: unlike analytical philosophy, continental philosophy more resembles Freudian psychology than what soon, and correctly, replaced it.
I think this might explain what Mike is noticing here.
We humans are in this closed fight ring, not of our own choosing. Putting aside the science of mascara, laboratory animal experiments on vaccines do increase our chances of surviving, and that, in itself, is sufficient for all animals: behavior that enhances survival must be considered biologically moral!
It might be grisly to observe a lion taking down a deer, but it’s the lion’s right!
Asher Kelman M.D., Ph.D.
If we did everything that advanced our individual or even micro-collective survival in the most short-term and myopic analysis, we would behave horrifically, even to each other. “I needed to do it to modestly increase my chance of living, and that’s okay because evolution” is perverse.
I got the impression Asher was trying to make the same point (just from a different angle).
Fair enough. Still, there are those who express the sentiment I read (that a range of human behavior toward animals is justified as long as there is even the slightest advantage for us because of evolution or what not), and that sentiment is quite silly.
Very glad to see Dr. Bali engage with the underlying moral reasoning! In particular, he addresses a synthesis approach of deontic and utilitarian reasoning.
Dr. Bali’s first points about agribusiness, toxicology, etc. are all critiques of capitalism, not AE. One could say the same about producing food for anyone: it looks nice and simple until you realize the economic system sucks. Fix the economic system.
About the fat man variant of the trolley problem: Dr. Bali says that the problem with fat man-sacrificing actions is that the victims don’t deserve it, and that therefore the problem lies in the past. This is an interesting framing of the point, but I think it’s very flawed. First of all, in the fat man variant, as in the others, we have no reason to think that the fat man is any more innocent than the five people or the one on the tracks. Innocence is actually moot here. The innocence of the people who will be harmed if AE doesn’t work is the same.
What Dr. Bali is in fact arguing is that the distinction between inaction and action comes from the deontically-rooted consequences of directly acting against innocent actors.
My issue here is that Dr. Bali hasn’t really defined innocence. Word count matters, of course, but this idea is critical for the debate. What is “innocence”? Are we really so sure that a mouse has done nothing wrong or undeserving of punishment? Or an average person? “Innocence” is a human category. It effectively means “For this particular topic, we think something has not done something that warrants punitive or investigatory or other kinds of action against it”. And the reason why we care about the category of innocence is that societies that allow the exploitation, imprisonment, attack upon or confiscation of property of people (and other moral entities) who have done nothing to warrant that will be grossly vile, and because people have such tendencies toward anger and greed and other feelings and motives that we need to have some kind of brake upon our behavior.
But the whole problem is that “innocence” implies that there’s some kind of category that would “deserve” certain behavior. So the idea is already ugly in its implications. Do we think a criminal “deserves” imprisonment due to being somehow morally worse? We shouldn’t. We should think imprisonment is morally necessary, and kept to the absolute minimum.
The issue therefore is never in the past. It’s based off of our ability in the present to assess and make judgments about the existing properties of the systems we’re interacting with.
With those thoughts in mind, I cannot see that there’s something inherently wrong in fat man scenarios. Duties of care may come into play, virtue ethics impacts may come into play, etc., but “innocence” is so obviously a moral category that is uniquely constructed and contextual. If we allow that notion to go too far, we can never act. We couldn’t do anything in the trolley problem scenarios, not even justify our inaction, because our behavior will make innocent people die and suffer. We certainly couldn’t do any policy planning. Policy is all about tradeoffs.
Dr. Bali’s hospital description is nightmarish, and it’s a really strong case that in my mind proves the value of using a combination of moral approaches. But it’s mistaken, and a bit of a strawman. For one, the “hospital” is actually investigating to improve the stakeholder issues of the “unter” (that is, what people like Richard propose is animal experimentation that is not unlimited and includes a priority on improvement of the lives both of the experimental subjects directly and animal welfare in general).
But, more importantly… the reductio that Dr. Bali is doing hinges on exactly the point in debate: that animals are morally identical to humans. Dr. Carrier’s standard stops that. A hypothetical space empire couldn’t justify harming us the way we would animals, not only because such an empire almost certainly wouldn’t need to use us as subjects, but because we are persons. When one makes personhood the critical moral characteristic (and I am not wholly convinced it should be the dividing line, but it is very, very important), this short-circuits Dr. Bali’s counterfactual. It wouldn’t matter if we encountered Q or other space gods who had capacities we can’t imagine: We have the requisite combination of memory, sense of a continuity of self, and capacity for information processing and engagement with the environment that constitutes a person. They may be super-persons, but we’re not non-persons. Animals are non-persons.
Now, one could argue that the personhood distinction is arbitrary and that therefore a space empire could rightly pick some capability they have that we don’t that justify abusing us. But… if personhood is arbitrary, what about the ability to feel pain? Remember pain? Have trauma? Dr. Bali does not seem to complain about microbes or plants: Why? I actually think that we should consider the welfare of all living beings to various degrees and in their own context. It seems, however, that Richard and Dr. Bali agree that we abstract out a lot of their characteristics. Imagine being grown for food, Dr. Bali!
Dr. Bali then says that animals are barred from AE benefits. But… they’re not. We do veterinary experimentation with them. And they get numerous indirect benefits as well.
Dr. Bali argues that AE is guilty by association. That’s… not a great argument, obviously. Fix animal welfare problems elsewhere. Destroying one of the few parts of the system that can generate greater understanding and better treatment of animals by its very operation because of the sins of the rest is silly.
I think this is Dr. Bali’s strongest showing, but the debate still needs to get to the core of the calculus; however, Dr. Bali has at least created frameworks to do so.
Hi guys! Jumping in here, after reading Richard’s final post.
“societies that allow the exploitation, imprisonment, attack upon or confiscation of property of people (and other moral entities) who have done nothing to warrant that will be grossly vile”.
Frederic, aren’t such societies vile precisely because [in part] they inflict undeserved harms? Imprisoning X because X deserves it is a necessary condition of just punishment. We can remove much of the ugliness of Retributivism by also demanding there be good consequences from the punishment [deterrence, protection of society from the vicious criminal, et cet].
“It wouldn’t matter if we encountered Q or other space gods who had capacities we can’t imagine: We have the requisite combination of memory, sense of a continuity of self, and capacity for information processing and engagement with the environment that constitutes a person. They may be super-persons, but we’re not non-persons. Animals are non-persons.”
Richard’s AE would inflict pain on animals if it’s epistemically required. Likewise, super-beings might disregard our personhood if it’s epistemically required.
I like Richard’s metaphor of a “phase shift” [in his final post] but find odd this confidence that there’s no possible shift beyond our ken which could render us exploitable as lab animals. A phase shift is unassessable by those who’ve yet to undergo it.
I don’t think ‘personhood’ is a wholly arbitrary category, but I worry it’s currently warped by anthropocentrism.
Even so warped, I’m not confident about the non-personhood of animals. Are we really so sure that rats “are never self-aware or possessed of any cognitive comprehension” and that all rabbit behavior is “instinctual”? [Richard’s 3rd post.] The most recent of his cited research on animal consciousness [Birch et al, 2020] begins with an admission that the field “is young and beset by foundational controversy”. It argues that consciousness has many dimensions, each of which admits of gradations. It ends with several of the field’s Outstanding Questions, including
“How can we show that animals are consciously simulating future scenarios and consciously reliving episodic memories?”
“How can we go beyond the mirror-mark test to find evidence of higher grades of self-consciousness?”
It wasn’t long ago that consciousness of any kind was commonly denied to non-humans! Given Science’s historical direction of inclusion, and given what’s at stake for lab animals, we might adopt a policy of erring on the side of caution.
You ask, “Frederic, aren’t such societies vile precisely because [in part] they inflict undeserved harms?” In small part, Dr. Bali. But the problem is that the word “deserved” does a lot of work for you. I don’t care if a society routinely tortures only death row criminals or ordinary people: The deeper problem is the routinized torture, which we know is 99.999999% of the time pointless and only serves to stunt our empathy and build collective sadism. What worries me is that your focus on “undeserved” by necessity implies a “deserved” category, and that category has the very real risk of creating moral Others. Yes, in practice we do treat people differently and we do so in part based off of our assessment of their moral history, but that difference must by necessity be small. If there was a horrible, insidious criminal who had done awful things, and was totally unrepentant, but also happened to be near-totally paralyzed, keeping them in a prison instead of a hospital would be perverse.
So “just punishment” actually has much less to do with what the person did and much more to do with the calculus around the possibility for rehabilitation, returning value to the community and reparations to victims, and deterrence. The notion that some people “deserve” retribution is the barbaric idea of our ancestors. I am astonished I have to make this point to an animal welfare advocate!
Because, of course, why are animals “innocent”? One can argue that
a) For an agent to be “innocent”, it has to be part of a moral universe in the first place (we don’t call rocks “innocent”), and animals can be; and/or
b) Animals do messed up things to each other all the time
I don’t think you would think that it would be appropriate to subject a dolphin to a brutal experiment because it attempted rape on a human woman! The only way to rescue that in your framework is to say that “deserved” isn’t some ontological category but a contextual socially-constructed one: The criminal “deserves” punishment not because they are guilty but because we can prove it and because we have a clear understanding of the magnitude of the crime and the ability for us to usefully intervene. But that reveals the game: that construct in turn comes from a set of pragmatically-derived understandings of how we engage with the world and each other, and those understandings don’t support the notion that AE is harmful in the same way animal abuse is. Because it is about more than the suffering of the animal in isolation.
You say that we can remove some of the ugliness from retributivism by demanding positive impact. But…
a) Yeah, we can remove the ugliness from retributivism by making it not retributivism anymore! Again, would you be okay with experimenting on the criminal dolphin as punishment if afterwards the dolphin had medical care to bring it back to health?
b) Retributivism is still bad even when you’ve done this because it has virtue ethics impacts. A constant focus on “getting back” inherently trades off with moving on, forgiving and letting go.
c) You still haven’t talked at all about the rehabilitation of the agent. That’s… worrisome. That’s the biggest problem: Retributivism by necessity reduces one moral agent to an Other, a non-agent. That’s dehumanizing. And I find it bizarre that someone using a deontological framework is also willing to use people as a means to an end without their consent, which is inherent to that idea.
You say that “Richard’s AE would inflict pain on animals if it’s epistemically required. Likewise, super-beings might disregard our personhood if it’s epistemically required”. But this is straightforwardly false. It is a strawman. Notice how Richard excluded from analysis chimps, elephants, etc.? That’s in part because he was uncomfortable, and I agree, with the idea of subjecting animals close enough to us to that kind of harm without some kind of consent. “Epistemically required” is only part of the condition, for both him and me.
And even then, what would be “epistemically required” of Q? Something to save the universe? Then we are dealing with a straight-up “torture the bad guy to save the city” scenario! So your thought experiment actually deliberately hides the key dilemma here: “Epistemically required” isn’t just about our curiosity! We are talking about avoiding real harm and real pain!
The basis of human rights is that we need those rights to be honored to flourish as people. That is an inherent outcome of personhood. The Q Continuum could not deny that we are persons. Period. We can deny that a mouse is a person.
You say “I like Richard’s metaphor of a “phase shift” [in his final post] but find odd this confidence that there’s no possible shift beyond our ken which could render us exploitable as lab animals. A phase shift is unassessable by those who’ve yet to undergo it”. First of all, maybe, but… so what? Let’s say that we are dealing with a power beyond our comprehension that has achieved some level of consciousness that qualitatively beggars ours. It decides to experiment on us. Can we stop it? No. Understand it? No. All we could feasibly ask is that it only did so when necessary… which is what we are suggesting! What could we offer such a being? If such a being made an argument, a logically ironclad one, about why we are morally different from it, could we understand it? If we did understand it, would we assent? Your example seems to be intended to make us worry that we would want to be left alone if we were the animals, but… that is inherently only using half the data.
But, secondly, a phase shift from gas to liquid that then goes on to a phase shift from liquid to solid doesn’t deny that the shift from gas to liquid happened. It is our argument that there is a qualitative change when you hit some kind of personhood. It is exactly the issue at hand to say that this is a moral Rubicon. You can’t just invoke hypothetical future ones: You have to show them.
The moral claim being made here is that personhood is the one and only Rubicon. You have to actually address that, not suggest that it could be mistaken.
You say that personhood is relevant but it’s warped by anthropocentrism. To test that, we’d need intelligence at the human scale that isn’t human and differs in some way. We’d need Grays or Klingons or Gorilla Grodd or demons or something. But we are in fact very, very sure that even the more complex animals like elephants, chimps, etc. do not have our sense of personhood and that lower animals certainly don’t, no matter how anthropocentrically you put it. Why?
Well… gee, they didn’t build massive civilizations with complex pathways for communication, novel adaptations to environment, technology being transmitted, language, etc. Yes, you can argue that personhood doesn’t hinge on any one of those things, but as a gestalt that capacity does come from the fact that we have selves that we can first imagine then imagine in a greater context.
When you spend time with animals, you can tell that they are not burdened by our sense of shame, our sense of moral agency, our need to have forward planning. And experiment shows it. Just look at the difference between how a corvid reacts to a mirror and how a mouse does. That is an example of an element of personhood, and why I would be very, very skeptical of any harmful or intrusive research on corvids, no matter how important.
But, again, this is a case where you have a burden of proof. Yes, we could be wrong, but you have to show it. Show that actually mice have elements of personhood that are morally relevant. We could also be wrong about humans having personhood (beyond the Cartesian certainty of having something like it in an individual moment): We could be Boltzmann brains about to flicker away, for example. Trying to justify people having a lower moral standing that way would be perverse. It doesn’t get much better when we do it for animals to justify a higher moral standing.
And you are right that we should tread carefully given science’s history. But… if we take that too far, we shouldn’t do science at all, given that science has also had a history of being racist, and sexist, and reductive, and harmful to environments, and colonialist, and… The solution is to fix those elements. In other words, reform. But it’s clear that there are researchers who work with animals who have tremendous love for them who still recognize that the evidence indicates that they are not persons. And that evidence is strongly dispositive at this point: We know the broad outlines of what kind of brain you need to make a person, and a mouse can’t have it, for the same reason that your smartphone can’t do the same job as a quantum supercomputer or even your desktop or laptop.
I too worry about the risk – indeed the reality – of creating moral Others who are subject to routinized torture, much of which is pointless.
Yet your concerns about the applicability of ‘innocence’ across species are plausible. Perhaps I should rely less on the concept of innocence and focus on factors like: that lab animals are bystanders, non-aggressors, and (largely, at present) non-beneficiaries of the system.
Factors like: they are helpless, wholly in our power, reduced to pure patiency.
A morally preferable AE would be: in defending our village from a marauding dragon, our katana quickly vivisects it into total knowledge of life. Ah, to wish . . .
Regarding the inviolability of personhood: isn’t the shift from insentience to sentience the big one? Many qualities of personhood are extensions, modifications, meta-fiers of sentience. For example, the ability to care about one’s future brings one’s future sentience into present concern, and helps unite the sentient moments into a more coherent stream. Another example: consent just articulates the wish implicit in all sentience to undergo euphoria / avoid dysphoria, and evinces a meta-cognition of one’s subjecthood. Persons may be especially complex and robust subjects of a life, but all sentients are to some degree subjects of a life. So it’s suspicious that the inviolable line should be drawn so close around our recently attained degree of subjecthood, so that we may use almost all of our biosphere neighbors, but are cosmically exempt.
If you draw such a hard line between mice and corvids, we don’t have to imagine that far up the great chain of being to reach our alien Vivisector. Might not even need a phase shift. E.g. consent is more meaningful the more information the consenter has, and the more consistent and truthful they are in speech. To rigorously truthful angels with vast knowledge, our consent may be relatively insubstantial, highly trumpable. We are relatively evanescent subjects of a life, to them.
The phase-shifted being may possess something as different from personhood as sentience is from insentience. They may use us not to save the universe but just to improve whatever their version of medicine is. Nine billion of us for Ninety billion of them.
I wouldn’t ask them to use me only when necessary! I would guess that we’ve met the Demon, and fight back.
Another emotive fallacy. This mischaracterizes AE entirely. We are not just torturing animals in labs for fun, willy nilly. AE is not pointless. Nor is it done in disregard of humane protocols. We’ve been over this already. Yet you keep acting like we haven’t.
This is also a slippery slope fallacy.
Consider the fictional example of The Rats of N.I.M.H. (the original novel, not the awful Disney cartoon travesty). In that story, certain lab experiments made a collection of rats self-aware, an accomplishment that continued to be disregarded by the lab researchers, leading the rats to successfully plan and effect an escape (they go on to live in hiding, solving the resulting problems of doing so with engineering genius, and not “magic” as Disney vomitously reimagined them doing).
The salient point in this analogy is that the researchers disregarded the fact that their subjects were now morally sentient. They could communicate their status and yet the researchers changed nothing about their behavior. This is fiction. In real life, I am fairly certain that is not how scientific animal researchers would react to such a development. But it is clear that such a reaction in the novel is outrageously immoral, and in precisely the way their behavior before that development was not. In no way whatever are the self-aware rats “just like” their unwoke compatriots in respect to moral status or deserved concern, as indeed (had this been depicted) even they would agree. Other rats would simply be permanently trapped in amoral insentience of their own status as even alive, much less able to have any appreciable thoughts or self-recognized identities, even less enter into any kind of social agreements or participation in the intelligent rat community. That wouldn’t even be something they could do in future (the normal rats were genetically different from the intelligent rats; so nothing could be done to elevate them).
This is why there is simply no analogy to some aliens thinking of us as comparable to insentient rats; they’d either act immorally, or recognize us as comparable to the Rats of N.I.M.H., and thus treat us the same way any moral human would treat the Rats of N.I.M.H. (as eventually depicted in the novel). Because the threshold has been crossed: we are morally aware (or, as infants, actively becoming so). That is not the case for animals, who have no comparable sentience of themselves or their lives.
So there isn’t even a slope here, much less a slippery one. One might make a valid slippery slope claim for certain other animals (e.g. apes), but we already agree we have evaded that mistake by not treating them as identical to either other animals or humans, but as a category morally in between, deserving of its own particular behavioral disposition. Which behavior tracks the actual objective cognitive difference between all three categories of living beings. As all behavior should.
Slippery slopes require disregarding factual differences. No true moral belief can exist that disregards factual differences.
I’ve already explained that this is both false (a lot of AE research is specifically for animal welfare) and moot. We do not perform AE as a punishment on animals, any more than we experiment on kids and infants, or visit morally necessary harms on them (e.g. medicine, discipline, education, safety, health & welfare), as a “punishment” on kids. We likewise accept harms as necessary even for adults; workplace pains and hazards are not “punishments” on workers, so their status as “deserving” has no bearing whatever on this moral calculation. So whether animals are “bystanders” or “non-aggressors” has nothing to do with the purpose or propriety of AE. That is purely an irrational emotive argument that has no place in serious philosophy.
Animals are incapable of even comprehending consent, because they have no comprehension of the moral worth of anything. Nor will ever acquire any. So there is no sense in which they should be “asked” to consent to anything; there is no “they” to ask. Animals are not people. Trying to frame AE as some sort of punishment that therefore must be deserved to be warranted is not only to continue fallaciously and pseudoscientifically anthropomorphize animals, it lacks even internal logic. In no way do we cause or allow harms solely on “those who deserve it.” So “deserving” it is a non sequitur.
The only relevant question is can they appreciate what’s being done to them and what they are being used for and what this means for their lives (or will they ever). The answer is no. Ergo, they can no more be “undeserving” than they can write a novel or run for President. It’s simply a category error. Innocence requires the possibility of intent. Animals lack any comprehension of intent. Thus they cannot be innocent of anything any more than they can be guilty of anything.
And yet, again, that being the case is also irrelevant, as we aren’t experimenting on them to punish them for anything. So their “deserving it” or not is moot.
We are doing it not only for their own good sometimes, but yes, often to save and better the lives of people, beings capable of appreciating all this and thus who are far more valuable to protect from harm than beings that can’t even comprehend harm, beings that lack even a concept of a self that can be done harm, and who will never understand any of their lives, whether it be suffering and dying miserably or horribly in the wild, or mildly micromanaged in a lab.
That is why it is objectively a fact that harm to animals is substantially less significant than harm to humans. And that is why we conduct AE in the first place: to lower the harm quotient of human trials. AE is the canary in a coal mine, better to die to save people from harm, as the death of a canary is orders of magnitude less cognitively significant than even a mere injury to a person.
This is either an equivocation fallacy (sentience as in self-sentience is personhood, not a basis of it; whereas sentience sans self-sentience is by definition the absence of a person) or not a relevant point. Certainly, because we evolved by natural selection and were not intelligently engineered in a scientist’s garden, our self-awareness (and concomitant ability to engage in conscious reasoning about life and our lives) is built on top of millions of years of cognitive machinery. But that does not somehow magically mean a worm’s ability to feel pain is the same thing as a person’s ability to appreciate and comprehend pain.
Animal cognition has proceeded by building on top of what came before: pain and pleasure (worms, flies, lizards) preceded any emotional life (e.g. rats) or ability to build complex sensory models of one’s causal environment (e.g. lizards); but emotional consciousness and world modeling (which included, eventually, the causal system of social interactions, though at that point still without cognitive awareness of what that even is) were built on top of that simpler pain and pleasure network, in fact expanding the ability to employ it more effectively; and this, in turn, long preceded any self-cognition at all: rational self-modeling, and an ability to appreciate and comprehend emotional experience, are add ons to the underlying inherited architecture of models and emotions sans understanding.
I provided several links to the science of this in my closing, which specifically go into how human cognition has radically transformed what emotions feel like and implicate in our conscious understanding, and thus why animal consciousness will be nothing at all like this, and thus why it is simply a fallacy to act like it is—as fallacious as Pythagoras apocryphally abstaining from beans because he believed they contained reincarnated human souls. Moral behavior must align with factual reality, not false beliefs.
This is why consent requires comprehension—because if it is not informed consent, it is not consent. Thus animals, lacking comprehension, don’t even belong to the category of things we require consent from. There is no “they” to issue consent. They are less cognitively significant than even the infants and children we morally do all manner of things to without their consent. That they are incapable of consent does not warrant our doing nothing whatever to them; to the contrary, we assign the role of consent to their socially assigned guardians, and we then limit what they can consent to for their children only in respect to degrees of child cognition and its future effects on their adult lives.
Extending this same reasoning to animals, who lack even the cognitive capacity of human children, and have no cognitive futures at all, entails even more liberality in respect to what their guardians can consent to allow be done to them and for what reasons; just as when we extend the same reasoning to plants and geological structures, that liberality is extended yet even further, exactly in alignment with cognitive capacity.
To disregard all this is simply pseudoscientific and illogical—the last thing our moral beliefs should be.
You worry about harming helpless things. I agree, I think this is a relevant concern. But again, this is a trolley problem. People are helpless against cancer and illness. Either way, both some animals and some humans will be in the helpless grip of something, often human-caused or human-aggravated.
I agree that bystanders/non-beneficiaries is a better mode of analysis, but, again, they’re not non-beneficiaries. For one, lab animals get to eat and live, so they get a kind of pay for their keep. More importantly, the insights we get from treating humans also creates a broader biological understanding that will help animals in general. Should we care more about veterinary care and animal control and care in general? Yes, but that’s not the same problem. Frankly, I’d rather us deal with kill shelters than AE, if we have to rank animal welfare priorities! (Which, yes, isn’t, strictly speaking, the debate, but when we are discussing reform as an option, finite political capital does become practically relevant).
And, yes, I would love to have some kind of AE that broadened the horizons of animals (though again that may be anthropomorphism: is being sentient really that great? I mean, I happen to think so, but that could easily be sour grapes about the alternative!) But broadly speaking that isn’t an option. Some creatures will helplessly suffer no matter what we do. At least in one case, some creatures get the chance to have their suffering mean something, even if they can never experience that themselves.
I think that the shift from insentience to sentience is a big Rubicon as well, but the problem is that “It experiences pain” in some dim sense is vastly too broad. Should we stop all carnivores from eating all herbivores, acting as moral agents? Well, then the carnivores suffer. As a Buddhist I firmly agree that we should be concerned about sentient beings, but treating them as identical to us in our moral universe is anthropomorphism: It ignores what makes them different and special. And those meta-fiers around sentience matter because they vastly amplify the harm available to those beings. If you have animals that are predisposed to a ton of pain and animals that can help them, isn’t there something grotesque about not facilitating the case where the animals who can’t hurt as much can help the ones who can? You yourself have just explained why it is a corollary of the importance of sentience that personhood is so critical: It is a vast amplifier of the stakes that a creature has. Nothing arbitrary about it.
Consent is more meaningful with your conditions, but it never loses meaning. To be able to consent is to have agency, and deprivation of agency is obviously so harmful as to be crippling. An entity that ignored our consent would be evil. If such an entity could understand that our lack of consent were based on poor thinking, it could explain it to us. Just like real doctors do to their real patients, all the time. This is a straight up rule utilitarian problem: The rule of ignoring the consent of sapient beings is so much more harmful than the rule of not doing so, so you don’t do it.
And you are now positing a Q race that needs medicine. It is vastly more powerful than us but gets sick? And we are relevant analogs to it? Yes, I am nitpicking about hypotheticals, but I think the thought experiment tells us something. If we want to play the game honestly, then we could imagine beings for whom experimentation on us, done within guidelines of strict necessity, would be genuinely needed, so that the harms they suffer from not having that data are immense, and our denying them the help immensely selfish.
Except we can consent. Which is the key difference. Depriving an animal of the right of consent is depriving it of something it doesn’t have. This is the fundamental flaw. It’s like ripping the eighth leg off of a human: We don’t have one. So the Q people who experimented on us would be crossing a moral Rubicon we are not with animals.
Morally required. Not epistemically required. I don’t know why you keep doing this, but this is the third time you’ve disregarded this. I refer you back to my first entry, General Objections, §4.
But not if it’s not morally required. Thus, you are arguing by non sequitur here, completely ignoring the actual pertinent metrics. Beings that disregard moral requirements in favor of epistemic are immoral. And we cannot appeal to immoral agents as moral exemplars. To avoid this fallacy of false analogy, you have to construct a valid analogy: the aliens have to be moral exemplars, if you intend to use them as such in a reductio. But moral aliens would by definition recognize moral agents and treat them as such. Animals, not being moral agents, don’t warrant the same reaction. Humans, being such, do.
You seem to be fallaciously using this false analogy to build a circular argument that completely bypasses the morally relevant distinctions I have drawn based on objective facts about differences in cognition. The same ones aliens of any ability would see as readily as we do.
That’s illogical. It’s like saying because there might be a fifth state of matter, therefore liquids don’t exist, or don’t obey the physics of liquids. Obviously liquids exist, and obey the physics of liquids—no matter how many other as-yet-undiscovered states of matter there might be.
Likewise, moral value derives from self-awareness or the capacity for it. Full stop. Liquids are liquids. It does not matter how many other phase shifts in consciousness there may yet be, they will not change the fact that the emergence of moral worth was already accomplished at step one. That doesn’t “go away” when other steps are achieved.
I find it bizarre that you keep reaching for these wildly illogical rationalizations to avoid acknowledging a point already factually established. This isn’t the only example of that I’ve encountered in this debate from you.
I refer you back to my first entry, Foundations, §1, and the fact that despite my asking you to explain how you are even deriving your premise of moral worth multiple times now (or how it even differs from mine), you have still never answered, much less defended your answer with any kind of evidence-based reasoning.
Yes. We are. By multiple converging lines of scientific evidence, as I multiply cited there. We have comparative anatomical neuroscience and interactive behavioral studies. By the thousands.
This is a quote mine. That reference is not talking about what we are debating. It’s talking about minute details of animal consciousness research; not fundamentals, like whether they have brains capable of building self-models and developing self-reflective knowledge of their experiences and lives. We know for a fact that they don’t.
And even when it comes to minute particulars, there are a ton of studies now, and they all support what I am saying, not the pseudoscientific hope you are expressing. I cited numerous study surveys explaining this point, and even the one you now quote does so. You can’t just ignore all that, and quote something out of context and misuse it to make a point its author never made.
Indeed, you even go on to misquote one section in that same source about apes, and apply it (!) now to mice and rats. Good grief. How many times do we have to explain to you that these are not the same things? Mice don’t pass mirror tests. Nor are mirror tests evidence of self-awareness, as even that article explains. The cognitive capacity of apes is established by a large range of diverse experiments with converging results—all of which mice and rats fail. That’s scientific fact. As even there explained.
There is no hidden person inside a mouse, pretending not to exist—much less one that can exist there with none of the actual neural machinery we know is needed for it.
So you sound like Pythagoras at this point. “We don’t ‘really’ know human souls aren’t inside beans, so we should exercise caution!”
I suspect you didn’t take up this doomed tactic in the formal debate for a reason. Because I tried to get you to address it twice, and you only now attempt it here. And all we get is this updated Pythagorean non sequitur.
Beliefs must be based on actual facts. Not inventions contrary to all observed facts.
Frederic Christie said of “persons”: “They may be super-persons, but we’re not non-persons. Animals are non-persons.”
That last statement is tenuous at best. It’s very hard to look at elephant and whale behaviors, and our knowledge of their mental capacities, and exclude them from personhood. Surely there are likely people somewhere in the world who possess no more advanced personal or social ideation and self-identity development than these remarkable cerebrate animals.
It might even extend to certain invertebrates, certain octopuses, for example, that can identify particular keepers who are more likely to be hoodwinked so as to allow a planned escape stratagem to succeed. After all, if the creature can frame such a high level discriminatory understanding of different men, then how big a jump in likelihood would it be for the creature to possess an image of self as opposed to other similar creatures?
Asher Kelman M.D., Ph.D.
Note in this debate I set aside cetaceans and elephants (as also apes and corvids) as not what we are talking about with the term “animal” here. It is a non sequitur to reason from elephants to lab animals (rats, rabbits, etc.). They are not at all comparable in any pertinent way. (Even more so flies and worms.)
Note there is no evidence of comparable sentience in octopuses. Almost all their neurons are devoted to skin color mimicry, and their utility with tools and escape behaviors appears to be noncognitive. There is literally no room anywhere in their brains left for metacognitive abilities. So we can be certain they don’t have them. (And expectedly, they’ve never displayed any.)
Finally, Christie was talking about Paul’s hypothetical alien superbeings’ judgment of us, not our judgment of lab animals (much less cetaceans and the like). In other words, Christie’s point was, alien superbeings would not classify us as we do lab animals; they would classify us even more highly than you do elephants. Thus you are actually making Christie’s point for him: if we (humans) can say that of elephants, the more so will Paul’s hypothetical alien superbeings say it of us. And for the same reasons.
This is the point I develop in my closing.
Yes, I was also using Dr. Carrier’s definition of “animal” here. Even there, though, it’s fuzzy. How important is an ability to understand the future and relate to it to selfhood, for example? We know from frontal lobe lobotomies in people that it’s not strictly necessary, though people who have had them can feel diminished by that loss. I would say that even the more complex animals (the complex mammals, corvids, octopi maybe) don’t have the same cognitive horizon we do, with meaningful differences.
To say that there are people in the world who don’t possess that kind of reasoning, though, is… silly. Like, I suppose it’s possible that extremely brain-damaged or disabled people don’t. But it’s very easy for us to forget just how complex our ability to navigate social systems is, even compared to chimps. Just the fact that we have such an outsized language ability, let alone capacities to think in terms of honor and shame and duty (which, yes, chimps and other primates and dolphins sort of do, but not really), kind of moots that comparison. Our ability to abstract also actually has implications for our understandings of our selves: The same ability we have to do math and complex language has some of the same roots as the ability to talk about our feelings and conceptualize what we are.
The octopus example is a good one: They’re damn smart animals, but the ability to know situation X has some non-obvious element Y and even potentially be annoyed at that unfortunate state of affairs is nothing like the human ability to engage in complex layers of deceit and to be personally and even existentially threatened by that deceit.
But, yes, Richard is right: A fortiori, your point makes mine even stronger. Indeed, the very fact that Richard set aside chimps, elephants, etc. (and I would even be willing to set aside many cephalopods and corvids) is because we, just like hypothetical aliens who exceed our interpersonal benevolence but follow our actual maxims for “lesser” beings, can see the marginal cases and err on the side of caution. So such aliens, even if they were uncertain of our personhood, would still see enough evidence to empathize in exactly the way informed people do about elephants and dolphins, and err on the side of caution.
Which returns us to the fundamental question: Is a mouse a moral stakeholder the way a person is? And I think they’re just not, by the same stakeholder analysis we use for people (so there is no a priori discrimination, only a posteriori differentiation): Mice have much less to lose than people through mechanisms like being experimented upon. (And, of course, because you can use transgenic techniques and examine over the life course, you can get data much more quickly with animals, which in practice actually means you experiment on net fewer organisms).