This continues the Carrier-Bali debate. See introduction, comments policy, index, and Bali’s opening statement in Should Science Be Experimenting on Animals? A Debate with Paul Bali; after that is my first response, Bali’s first reply; my second; and Bali’s third. To which I now respond. At Dr. Bali’s request, this shall conclude the formal component of our debate. Informal discussion might continue in comments below.
In Defense of the Scientific Use of Animals
— Part III —
by Richard Carrier, Ph.D.
-:-
Paul has had three chances at bat. And I haven’t seen anything beyond anthropomorphization and emotivism, neither of which is a factual or logical basis for moral conclusions. Nor has he answered any questions in my last entry’s first paragraph, yet I’ve asked them twice now.[1]
As already explained: (1) appeals to mere feelings cannot be a sufficient basis for establishing anything as morally true; and (2) animals are not at all cognitively like humans.[2] Paul deploys other fallacies, e.g. he emotively complains about feed and harness tech serving “meat machines,” ignoring everything I said about pets and zoos and labor and service animals. But Paul seems most concerned to abolish only “certain kinds and purposes” of animal experimentation, because it equals a “dominant” race exploiting an “inferior” one. But that analogy is false. Because animals aren’t people. You may as well say we should never wear clothes or eat food, because that’s just a dominant race (humans) exploiting an inferior one (plants). “But animals are more sentient than plants” is true, but does not answer the pertinent question: are they relevantly more sentient than plants?
All of Paul’s attempts to make animals “just like” humans in Trolley Problems are thus factually false. Because animals aren’t comparable to humans in any pertinent sense. They can’t enter social contracts, they don’t have any comparable comprehension of life, existence, themselves, or even causal reality, and they have no cognitive futures. Thus it is not the case that humans deeming animals too insentient to morally appreciate their lives or circumstances is “just like” advanced aliens thinking the same of humans. Unless those aliens were enslaved to factually false beliefs, they’d agree humans are their moral equal because they possess the same quality of morally relevant cognition; whereas, they’d agree, animals don’t.
This is not a difference in degree, like comparing a colder gas to a hotter one. It is a categorical difference in kind, like comparing a gas to a liquid. Human cognition is a fundamental phase-change in the nature and capabilities of consciousness: our minds build narrative personal identities and self-reflective understanding of what it means to be alive or dead, and what is happening to us, and what the options are. There may be middle cases (apes, elephants, cetaceans, corvids; even human infants), but I already set those aside [3]; we do not subject them to the same experimental rigors as the animals Paul gives examples of. A rat simply is not similar in kind to an ape. Nor are the kinds of experiments we conduct with them comparable (Paul seems okay with “cruelty-free, non-invasive” animal experimentation, so that’s irrelevant here).
Consider Paul’s appeal to Rawls. Let’s insert factual reality there and see what happens: if the random wheel of fate were to assign Paul to a rat, in that universe, Paul does not exist. The wheel has destroyed him. All his cognition, all his personhood, nearly everything that makes Paul a morally significant entity in the world, even his ability to appreciate where the wheel of fate has put him and why that matters, has been dissolved, as none of it can be contained in the limited neural apparatus of a rat. In that scenario, “Paul” is dead. Therefore nothing we do to “his” body can matter to “him,” because there is no him. Paul does not exist to evaluate his lot in life. Thus, a Rawlsian perspective doesn’t apply. Yes, a rat can appreciate more than a cadaver, but still nothing at all sufficient to constitute any relevant bit of a person, like Paul. It has no concept of a future. No understanding of life or death. No sense of self. No narrative memory. No causal understanding of reality (as opposed to mastering causal patterns through operant conditioning, which is noncognitive). And it isn’t building any of those things, either. Whether it lives one year or ten is cognitively irrelevant to it. All that it can comprehend is how it feels in any given moment, and sensory memories and emotive inclinations called up by noncognitive triggers, over which it has no appreciation or control. It is no more sentient than what would be left if we stripped away over 99% of Paul’s cerebral neurons.[4] Basically, nothing of Paul would remain; nor any capacity to rebuild it.
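For readers who want to check the arithmetic behind that “over 99%” figure, here is a quick back-of-the-envelope sketch using approximate cortical neuron counts of the kind cited in note 4 (the specific numbers here are illustrative assumptions drawn from commonly published estimates, not precise measurements):

```python
# Rough sanity check of the "over 99%" figure, using approximate
# published estimates of cerebral cortex neuron counts. These
# round numbers are assumptions for illustration only.
HUMAN_CORTICAL_NEURONS = 16_000_000_000  # ~16 billion (human cerebral cortex)
RAT_CORTICAL_NEURONS = 31_000_000        # ~31 million (rat cerebral cortex)

# What fraction of a human's cortical neurons does a rat retain?
fraction_remaining = RAT_CORTICAL_NEURONS / HUMAN_CORTICAL_NEURONS
percent_stripped = (1 - fraction_remaining) * 100

print(f"Fraction remaining: {fraction_remaining:.4%}")  # roughly 0.19%
print(f"Percent stripped:   {percent_stripped:.1f}%")   # roughly 99.8%
```

On these rough figures, a rat retains only about one fifth of one percent of a human’s cortical neurons, which is consistent with the “over 99%” claim in the text.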
Yet Paul acts like he’d still be Paul, trapped inside a rat body, and thus aware of everything happening but unable to do anything about it, akin to the end of Being John Malkovich. That’s simply not factually what it’s like to be a rat. It’s impossible to fully imagine, of course, because humans can only “imagine” within a framework of a cognitive self-model (you are always you, self-aware, when putting yourself in someone else’s mind, with all your cognitive comprehension [5]), and rats never experience anything that way. They are never self-aware or possessed of any cognitive comprehension. This is why their suffering is less of a harm than a human’s. The cost is vastly less; the significance vastly less; the effect vastly less. It’s not non-existent (as with plants). So compassion warrants consideration, as I’ve described.[6] It would make you a malevolent or callous person (and thus undermine your wellbeing) to cause them harm for no worthwhile purpose.[7] But as we morally accept causing harm to humans (children even) when necessity compels (particularly trivial harms, like needle sticks and safety restraints), we far more readily allow it with animals, because the harms we thus inflict are so much less substantial in their comprehension and effect.[8] This is why we experiment with animals in the first place. It is far, far less harmful than if we jumped straight to human experimentation instead.
Contrary to what Paul has claimed, animals are not “sentient of being.”[9] They just have experiences, not an understanding of what that means; nor are they developing such.
- Rabbits do not “care about their future.” They have no capacity to even comprehend what a future is. They do not “comprehend” anything whatever. Their behaviors are instinctual. They do not know why they do them. They can’t comprehend what a “why” would even mean.[10]
- They feel. Rabbits (and rats; but not flies or worms) are motivated by social emotions and intuitions (evolved and learned). But those all evolved as precursors to intentional planning and conscious action—they are what developed in animals that had not yet any capacity for such action; it was on top of that “emotional computing” that those additional human capacities evolved. This is why emotion and reason often conflict: the one was an add-on to the other, and recently; hence before there was rational comprehension, there was only emotion sans understanding.[11]
- And animals have desires. But they do not comprehend “valuing” things, as that requires a far more complex cognitive capacity.[12]
Therefore suffering is far less significant to animals than people; while death has no significance to animals at all. They also are incapable of entering contracts, have no moral conscience, and have no comprehension of even the point of consent. We thus have no duty of care to them beyond not making their lives worse to no greater purpose.[13]
-:-
This concludes the formal debate. See comments below for continuing discussion.
-:-
Endnotes
[1] From Dr. Carrier’s Second Reply:
Paul hasn’t responded to my First Reply. No account of why anything he describes is actually wrong. Why should we care about any of it? We cannot know where to draw the line, if we don’t even know why we are drawing lines at all. Paul also hasn’t accounted for the cognitive and thus moral distinctions between kinds of animals experimented on (fruit flies, vs. rats; dogs; monkeys; apes). Is he okay with certain animals? Why? Why not? And what kind of animal experimentation is okay and why? Paul sanctions “non-invasive” experimentation, but what all counts as invasive, and why does that matter? Paul also needs to explain where the line falls between practices we could continue once improved (e.g. limiting cage time to what is actually necessary for a study) and those we should abandon.
[2] In Dr. Carrier’s First Reply, General Objections, §2 and §3.
[3] Op. cit., “There are some nonhuman animals that fall in between these two categories (e.g. apes, cetaceans, elephants, corvids), on which my conclusions may differ than for the rest. But for economy I will hereafter mean by ‘animal’ only the rest.”
[4] See “Human-Rat Comparisons” at Neurondevelopment.org and “List of animals by number of neurons,” Wikipedia.
[5] See my discussion of our inability to discard our advanced cognitive apparatus when imagining things in Richard Carrier, “The Argument from Specified Complexity against Supernaturalism” (17 April 2018). On the foundations of human cognition see Richard Carrier, “The Mind Is a Process Not an Object” (29 June 2018) and Sense and Goodness without God, III.6, “The Nature of Mind,” pp. 135-60, and index, “personhood.”
[6] From Dr. Carrier’s Second Reply:
The overall quality of their experiential life matters (because they are not automata), and therefore animals subject to experimentation deserve compassion, thence a reasonable attendance to their emotional and physical welfare…attending the rule of necessity (if a practice causes suffering yet isn’t necessary, then it should not be a component of the experimental procedure).
[7] References on how I derive true conclusions in morality are in Endnote 1 of Dr. Carrier’s First Reply.
[8] On this comparison see Dr. Carrier’s Second Reply.
[9] Michael Tomasello and Hannes Rakoczy, “What Makes Human Cognition Unique? From Individual to Shared to Collective Intentionality,” Mind & Language 18.2 (April 2003): 121-147; David Premack, “Human and Animal Cognition: Continuity and Discontinuity,” PNAS 104.35 (28 August 2007); Marc Hauser, “Origin of the Mind,” Scientific American 301.3 (September 2009): 44-53 (for summaries thereof see “What Makes Humans Unique?” and “Hauser Defines the ‘Humanique’”); and Jonathan Birch et al., “Dimensions of Animal Consciousness,” Trends in Cognitive Science 24.10 (1 October 2020).
[10] In addition to the references above: Mathias Osvath and Gema Martin-Ordas, “The Future of Future-Oriented Cognition in Non-Humans: Theory and the Empirical Case of the Great Apes,” Philosophical Transactions of the Royal Society of London B (Biological Sciences) (5 November 2014); Baumeister, Maranges, and Sjåstad, “Consciousness of the Future as a Matrix of Maybe: Pragmatic Prospection and the Simulation of Alternative Possibilities,” Psychology of Consciousness: Theory, Research, and Practice 5.3 (2018); Thom, Clayton, and Simons, “Imagining the Future—A Bird’s Eye View,” The Psychologist 26.6 (June 2013): 418-21; Thomas Suddendorf, “Anticipation of Future Events,” Encyclopedia of Animal Cognition and Behavior (Springer, 2017); Jonathan Redshaw and Adam Bulley, “Future-Thinking in Animals: Capacities and Limits,” The Psychology of Thinking about the Future (Guilford 2018); and Dean Buonomano, Your Brain Is a Time Machine: The Neuroscience and Physics of Time (W. W. Norton, 2017), esp. index, “animals.”
[11] Joseph E. LeDoux and Richard Brown, “A Higher-Order Theory of Emotional Consciousness,” PNAS 114.10 (7 March 2017); Jiayi Luo and Rongjun Yu, “Follow the Heart or the Head? The Interactive Influence Model of Emotion and Cognition,” Frontiers in Psychology 6 (May 2015); Andy Norman, “Why We Reason: Intention-Alignment and the Genesis of Human Rationality,” Biology & Philosophy 31 (June 2016): 685–704; and Laith Al-Shawaf et al., “Human Emotions: An Evolutionary Psychological Perspective,” Emotion Review 8.2 (April 2016).
[12] See Carrier, Sense and Goodness without God, IV.2.1.1, pp. 315-16 (and index, “values”), with Hechter, Nadel and Michod, eds., The Origin of Values (de Gruyter, 1993), esp. Ch. 11, George Mandler, “Approaches to a Psychology of Value,” and Ch. 12, Richard Michod, “Biology and the Origin of Values.”
[13] From Dr. Carrier’s Second Reply: “Humans are not obligated to make animals “feel better” than they’d experience in the wild (any more than we ought to erase every ounce of human suffering); it is only our obligation to at least not make it worse without a necessary purpose,” with the rest of same paragraph; and Dr. Carrier’s First Reply, General Objections, §2 and §4.
This whole debate has felt like Paul talking to himself while ignoring Richard, and Richard trying to get through to Paul while being ignored. Very surreal.
I really like your Rawlsian point. I don’t know if every Rawlsian perspective would liken being an animal to basically being dead or brain-dead, but the point is still very relevant that, while I can say if I were a rat I would want to not be hurt, as a rat I would have no sense of what that means. I would have liked Dr. Bali to at least try to argue that it is precisely because animals can have such awful events come to them with what would feel to them like no warning that in some respects we owe them greater courtesy than we would a human (just as we need to be especially careful with children for exactly the same reason we can override children’s agency), but that didn’t get hashed out.
And I understand that a reasonably detailed overview of policy positions would have perhaps been difficult to accommodate in the format, but still, so much of what Dr. Bali suggested could easily be responded to by “reform, not replacement”. At the point that even he happily accepts non-intrusive observational science on animals, I would have liked to see some focus on detailed proposals. That could have allowed some of the differences in moral intuition to come to the forefront and find where Dr. Bali and Richard disagree in specific on the moral characteristics of animals. As it was, all Dr. Carrier had to do was keep saying that animals are not like humans and some reform is almost certainly advisable, which is not a terribly interesting position to examine.
“while I can say if I were a rat I would want to not be hurt, as a rat I would have no sense of what that means. I would have liked Dr. Bali to at least try to argue that it is precisely because animals can have such awful events come to them with what would feel to them like no warning that in some respects we owe them greater courtesy than we would a human . . ”
To be hurt is to know what it means, in the most morally relevant respect – its awful qualia!
And the rat’s angel advocate could argue for their best interests, at the OP’s negotiating table. In the OP, as I envision it, interests are made explicit in rational negotiation, but the incarnated interested party need not itself be capable of rational negotiation.
But yes, a mouse’s cognitive limits might render harms to it worse than those harms to a human. The lab mouse doesn’t know what good they suffer for, or when their suffering will end; they can’t rationalize their suffering or take respite in religious or moral narratives that sublimate their suffering, endow it with lab-transcendent meaning.
No, Dr. Bali, by definition it isn’t. And it’s easy to show this to be the case. I’m the kind of guy who wakes up and notices I have bruises and scabs. Always have. My body probably noticed pain in the moment. My brain may not have registered it to me. And even if it had, I didn’t form a memory of it. We all have had injuries we’ve forgotten about. So even people can “hurt” in ways that they don’t care about, or that won’t stick with us, or won’t be a part of our identity. But some wounds do stick with us, some injuries we do remember.
For a mouse, there can be only qualia, and some very limited memory (enough to learn to adapt to situations). No sense of meaning, no impact on identity, no change in the way that one relates to language.
I appreciate you not trying to exaggerate similarities between animals and humans, but this necessitates the other problem: focusing on the mere fact of pain as if it were dispositive. Actually, pain isn’t that bad. Extreme, excruciating pain is. But any of us who have done sports have felt some degree of pain and fought through it, sometimes even welcomed it. I like cooking, and so I often find myself testing the doneness of a hot item or moving an item when it is hot. As long as it doesn’t burn me, I barely care and sometimes barely notice. You can see this with dogs too: they can whine at something that hurts momentarily and then go right back to excitedly playing as if nothing happened. That’s to say nothing of the rough kind of play that many animals get up to, which will include pain.
I also agree with you that the rat can have its interest made clear at the negotiating table. But…
1) It has to have those interests. “My client still hurts when you prod it” is true. “My client has traumatic memories of those pains that caused its marriage to fail” cannot be.
2) As I pointed out, that necessitates a stakeholder analysis. In which you have to count lesser investment or impact when determining how to distribute limited resources (and, yes, welfare is an inherently limited resource, even as much as it is moral to try to act as if it is not in many cases).
Which still means that, when that negotiation is done, the rat cannot ask for as much, because it is losing less from experimentation. Both humans and (the average test) animals risk suffering momentary pain or even pain and discomfort of some duration, but only humans risk losing decades from permanent damage from experiments, having deep traumatic reactions to negative outcomes from experiments, losing parts of their self or identity, etc. It’s exactly as if we were on a desert island coming together and trying to figure out who should build the houses: the people who were the strongest would be higher on that list.
And when you say that the mouse may be worse off, all your examples are pure anthropomorphism. Which is why your stakeholder analysis was misleading. You are like the physician who wants a patient to live who in fact wants to die. The mouse doesn’t care what good it does. It can’t. They may be eusocial animals, but they don’t have the ability to rank their behavior by positive impact, measure it, etc. That’s a human anxiety. And while the mouse may care to some degree that it wants the pain to end, humans will be far more likely to remember that pain and that helplessness, precisely because they have a much deeper internal world that is being impacted. They can’t rationalize their suffering because they don’t need to: Having to do that is an advantage compared to just not ever caring beyond the moment because you literally can’t. They don’t get lab-transcendent meaning because they don’t get meaning, no matter what they do. So none of these are actual considerations on the table. They weren’t ever theirs to lose.
False. Knowing what an experience means is a vastly more complex cognitive achievement than the experience itself.
There is no “they.” Mice have no personal identities. They cannot comprehend even what life is for, much less who they are, or what anything has to do with anything. They comprehend nothing. They only feel. And that’s that. They have no reflective sense at all. Thus in no way can causing harm to a being that lacks, and will never have, any comprehension of what it experiences or its significance be worse than causing harm to a being that has such. To the contrary, it is objectively far less significant.
Just as stimulating a pain nerve that’s been cut off from any brain (surgically or by anesthesia) causes far less harm than stimulating one wired into a conscious and fully aware brain, so also causing pain to a brain that can’t even relate that harm to any self or comprehension or narrative recollection at all causes far less harm than causing pain to a brain that can do all that.
This is an objectively factual difference. It cannot be disregarded.
I sometimes think of early man and all the suffering he went through for us to be evolved as we are today. Before fire was even discovered, for instance. In a way those people were the experimental ‘animals’ that got us to where we are today. We are still experiments in a way for disease, poverty, mental illness, aggression, etc. that later generations will look back on and say ‘How did they do it?’ So man is definitely taking on the responsibility of what’s right and wrong (Is there such a thing?). Abortion enters this picture.
I think it’s wrong to experiment on animals but I want to live so I’m ambivalent on the subject. I guess we have to continue and ‘do better when we know better.’ I’m against inflicting pain on anything (unless necessary) but am guilty of that (bugs, rats, snakes, ad infinitum) as anyone. And in my mind have a justification for it. Part of me thinks that I’m no better than any living creature and my life has no greater priority than any other life. As Albert Schweitzer said “Reverence for life.”
If someone is stabbing the sole of my foot with spider venom at night, then punctating the wound with Von Frey filaments, that’s terrible and should stop. That I forget the agony each morn (and that my assailant is Nobelized for it) may be a mitigation or a horror – I’m not sure which.
Extended cognition can mitigate or aggravate one’s pain. Forgetful rodents lose those aggravations and those mitigations.
But rodents indeed do remember pain. If you stab them in a white box, they exhibit anxiety when returned to that white box. Sadly, the lab is itself a big white box, hard to adapt to. It impacts their identity, if they’re persistently anxious.
The fact that you are not only not sure but not even in a position to be sure is sort of the point, Dr. Bali. Harms you never perceive cannot be harms. That’s a scary realization about the difference between phenomenology and ontology that goes all the way back to Kant: You could be killed by a colorless, odorless gas and never perceive your killer. Here, you never even encounter the gas, or you do so so briefly that it doesn’t matter. This realization has actually strengthened my belief that AE (animal experimentation) has merit. It is just as irrelevant to the mouse that it will experience transitory pain (note here I am not talking about long-term mangling or even pain of a significant duration) as it is to you that someone stole a winning lottery ticket from you that you didn’t know you had won. (Yes, the theft is immoral in the latter case, but not because of the harm you never experienced, because you could have experienced the lottery win without the theft… but the mouse can’t experience it either way).
We as humans find that horrifying because knowledge is power in potentia and we don’t like being powerless, but it’s just true. Animals can’t find it horrifying.
And, yes, we do both agree that long-term or serious pain can cause anxious behaviors in lab animals, and that is a harm that we should have empathy for, seek to mitigate, and count in the stakeholder analysis. But people experience that too, and for longer, and with a much bigger impact on their sense of self and functioning. So, again, unless we are putting into place some absolute or near-absolute deontic rule toward inaction no matter the tradeoffs, that fact is moot. Animals have less to lose than people do from experimentation, period. If one’s goal is to lower the net amount of suffering in the world, AE matches that goal. I agree it’s more complicated than that, but exactly how is what the debate is about.
Memory is not cognitive understanding.
Nor is animal memory at all comparable to human memory. You are describing operant conditioning, which does not entail even conscious awareness. A rat will not even know why it experiences anxiety when it does. And it will develop all manner of random anxiety reactions in the wild, so there is no meaningful difference to it in developing them in a lab—except in the direction of positive utility: we are channeling an often random process toward a net-beneficial one.
Animals do not have “identities.” They are not people. They do not comprehend any of this. They do not appreciate any distinction between the random anxieties they would develop in the wild and the more useful ones they’d develop in a lab. And all their experiences are fleeting and possess no narrative context.
Animals in the wild get far worse treatment, to far less purpose. To cultivate the same things to greater purpose, and with humane and medical care as well, is an improvement in their lives. One they still will never appreciate or care about, or even be capable of realizing they should or could care about. Because animals cannot cognitively appreciate anything. That’s why harm to animals is of less significance than harm to humans.
This is an objective fact. No amount of emotionalism can change objective fact.
ah yes, the useful lab anxieties. sad they can’t appreciate what good the forced drownings bring to Man.
the game is up, you’ve outed me – a weeping, raging emotionalist. i land on your planet, and find myself in friendship with these beings whose good you trump in your Calculus. i side with them a bit, a foolish old Moses falling in with the slaves, ungrateful to the Pharaohs who have hosted him, fed him, armoured him.
adieu, signed “The Rat Man”
To translate that into the jargon of professional philosophy, “I have no relevant response to what you just said.”
I would hope I wouldn’t have to point out, in human history, sympathies with sides, however sincere and even noble, haven’t always worked out that well. A stakeholder analysis recognizes that everyone has something to lose.
I am disappointed in this debate. Not the debaters, but moreso myself. I’ve read through the statements and responses, and even the comments. But I am struggling mightily to understand what was being argued or resolved. Perhaps this is related to the continental/analytical philosophy divide Dr. Carrier mentioned in previous comments. I don’t know enough about continental philosophy to really say.
Dr. Bali has made strong appeals to how I feel about the subject. I don’t like the idea of hurting innocents. I don’t like the idea of inflicting pain. I wouldn’t want to be turned into a mouse in a lab. Now, I also realize that appeals to my emotions are not rational reasons to hold a belief, but I won’t pretend they didn’t affect me.
Dr. Carrier has pointed out the fallacy of that appeal, but also has pointed out why that error is occurring. I am thinking from a human (personhood) perspective. I am unconsciously anthropomorphizing animals when I’m reading Dr. Bali’s replies. And it’s not just incorrect to do so (evidence suggests there IS a discernible difference between our brains and theirs, and in our overall narrative capacities), it’s actually ignoring the debate to jump past that.
Was any exchange actually had on the deeper points here that I missed? Frederic seems to have teased out much more than I was getting, and good on him. I wasn’t able to get there. I really wanted to see two philosophers get into where the difference in humans and animals lies, the moral impact of that difference, and the principles we should use to rationally address testing reform/abolition.
I think the real problem is that I’m not truly a rational actor. Even though there probably is a phase-shift difference between humans and animals, I would almost surely still save my pet in some trolley problem over a large number of humans. And similarly, I wouldn’t look to stop productive AE even if we had to test on animals I consider much closer to humans. That’s what I was hoping to help resolve. Even if I’m not rational, I wanted to glean the tools to impose rationality on top of my irrational mind. I don’t think Dr. Bali moved me in that direction (though he appealed more strongly to my immediate intuitions), and I don’t think Dr. Carrier really got to delve into the points I was hoping for (and it was actually much more difficult to turn off my anthropomorphizing lens to see his points).
I had hoped we’d spend more time there too. But Paul stuck with the emotive anthropomorphism approach. Which I find is so pervasively standard among advocates of his position that I should not have been surprised we never got past that; I find we never do. He kept repeating that as the foundation of his argument (twice!) even after its fallacious nature was called out at the start—rather than engaging with the question of phenomenology and how it relates to our moral concerns (which requires as well getting into what “moral concerns” even are and why they should dictate anything of our behavior).
In other words: why does suffering matter, when is the causing of suffering acceptable and why, and why should we care about any particular kind of it? I kept repeating references to my foundations on these things, but we never really engaged on them in the formal debate itself. I tried to get there. Paul just kept evading it. I realize now that this is how these debates always go. His position is entirely a product of his emotive anthropomorphism. Take that away, and you basically end up where I am. I am led to conclude that that’s why there are no defenders of his position who can actually argue in the correct way (linking factual phenomenology to a defensible concept of moral obligation).
Note that that is because of your personal (third party) value for your pet. Which is not the same thing as, for example, a rat you have no relationship with. Or even a hundred of them. So whether that specific decision was morally acceptable would depend on different factual premises than animal experimentation does. I still think it would be wrong; but explaining that would require addressing different premises than apply to our debate here.
Illustrating the kinds of distinctions that link to your point and change the substance of what we are discussing, note that already in 2005 I wrote the following (in Sense and Goodness without God, p. 330):
And you are right, the debate changes when we are discussing, for example, experimentation with chimpanzees; but that debate was already won decades ago. The standards Western science applies to chimpanzee experimentation are far more in line with what they should be now. They are no longer simply treated as if they were the same as rats or rabbits and the like (example, example, example, example).
I too am beginning to think that there is a category of animal welfare defenders who, no matter their intellect and reasoning, can’t get past the tendency to anthropomorphize. That may be why Dr. Bali kept utilizing the continuum fallacy: the fact that animals are on a continuum with people in terms of various cognitive capacities doesn’t mean that there’s no difference, or no place where you can draw a line for the sake of analysis.
I pointed out to him, to no direct response (he didn’t get to that specific post), what Murray Bookchin said to the deep ecologists: You can’t anthropomorphize animals, not just because it’s insulting to people but because it’s insulting to animals. We as humans are social animals, and our role in an ecosystem can be to regulate and control it intelligently: we are the animals that can do that. And other animals have different roles. Not worse or better, just different. There are social ecologists and Bookchin-type anarchists who are muscular defenders of animal rights and will even be skeptical of AE, but once that argument clicks, the PETA extremism starts going away. Even if all you care about is the welfare of animals, anthropomorphism is a bad idea. But humans are animals, and misanthropic reasoning is reasoning against animal welfare. One thing we never got to get into is that Bali borrows from deeply religious reasoning that tries to lift humanity out of its natural context and imbue it with some kind of metaphysical difference. Deontological and virtue ethics both have some of that reasoning within their history, and without careful correction one can repeat those ideas. Once you start realizing that you are just counting the welfare of different animals, and start accounting for the scale of impact, a lot of objections go away.
It is unfortunate, because animal welfare is a really important topic, and we need a research program to figure out how we can relate to animals in ways that are better for us and for them.
Fain would I have watched Shelly Kagan or Peter Singer debate this topic. That would have gotten better purchase and yielded a richer crop.
Alif: Singer has come out as agreeing that animal experimentation can be justified in narrow cases.
Hi Dr. Carrier,
maybe you are missing some crucial evidence in animal science, especially about livestock. It seems pigs are self-aware:
https://en.wikipedia.org/wiki/Mirror_test
Pigs can use visual information seen in a mirror to find food, and show evidence of self-recognition when presented with their reflections. In a 2009 experiment, seven of the eight pigs tested were able to find a bowl of food hidden behind a wall and revealed using a mirror. The eighth pig looked behind the mirror for the food.[72] BBC Earth also showed the food bowl test, and the “matching shapes to holes” test, in the Extraordinary Animals series.[73][74]
https://www.sciencedirect.com/science/article/abs/pii/S0003347209003571
https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1000&context=mammal
https://mypigfilledlife.org/adopt/f/did-you-know-pigs-are-self-aware
But what’s your point? Is it merely that the phrase “e.g. apes, cetaceans, elephants, corvids” didn’t happen to include the word pigs?
I think he mistakenly thought affective consciousness is the same thing as personal consciousness. That is a common error among a certain kind of animal rights activist, who doesn't really try to understand any of the science but just cherry-picks anything they foolishly think "sounds" like it supports them and then tries to use it for rhetorical whataboutism. It's a failure of critical thinking.
And especially this:
https://www.cbc.ca/news/canada/hamilton/pigs-know-their-fate-when-they-enter-a-slaughterhouse-expert-says-1.3829977
Pigs are “sentient beings” with emotions and empathy similar to dogs, and they know what they’re in for when they enter a slaughterhouse, said an expert during the trial of an animal rights activist today.
Neuroscientist Lori Marino was one of two defence witnesses in the fifth day of the Anita Krajnc trial in the Ontario Court of Justice in Burlington, Ont.
Krajnc, founder of the group Toronto Pig Save, is charged with mischief for giving water to pigs en route to Fearman’s Pork Inc. on June 22, 2015. She has pleaded not guilty, and if convicted, faces a maximum $5,000 fine and jail time.
Marino, a founder of the Kimmela Center for Animal Advocacy, testified that pigs sense the emotions of other pigs through “emotional contagion.”
“Pigs are at least as cognitively aware as a monkey,” said Marino, commenting on a video of a slaughterhouse in Australia. The high-pitched squeals, she said, are “distress calls.”
Pigs have individual personalities, Marino said. They’re also one of the few species that can recognize themselves in a mirror.
“They have self-awareness, self-agency and have a sense of themselves within the social community,” she said. “Each one is a unique individual.”
Are you sure you are up to date with your science?
And pigs do have theory of mind:
https://en.m.wikipedia.org/wiki/Theory_of_mind_in_animals
None of that is true. You are too easily duped, I'm afraid. The link you just gave on pigs does not show theory of mind. It only shows that one in ten pigs will follow another pig who signals an awareness of food. This is standard reflexive-reactive information processing. It requires no awareness at all, much less any awareness of a mind. Even bees can do that. Try being less gullible. Please.
Likewise your preceding comment.
There is literally no science backing any of those claims, except the ones too trivial to carry the point. You should not be so easily duped. Don't trust people with an agenda who cite no actual evidence for their assertions and who conflate distinct claims (emotions are not theory of mind; and theory of mind is not self-awareness).