This continues the Carrier-Bali debate. See the introduction, comments policy, and Bali’s opening statement in Should Science Be Experimenting on Animals? A Debate with Paul Bali; as well as my first response, In Defense of the Scientific Use of Animals, and Bali’s first reply.
In Defense of the Scientific Use of Animals
— Part II —
by Richard Carrier, Ph.D.
-:-
Paul hasn’t responded to my First Reply. He gives no account of why anything he describes is actually wrong. Why should we care about any of it? We cannot know where to draw the line if we don’t even know why we are drawing lines at all. Nor has Paul accounted for the cognitive, and thus moral, distinctions between the kinds of animals experimented on (fruit flies vs. rats, dogs, monkeys, apes). Is he okay with experimenting on certain animals? Why or why not? And what kind of animal experimentation is okay, and why? Paul sanctions “non-invasive” experimentation, but what all counts as invasive, and why does that matter? Paul also needs to explain where the line falls between practices we could continue once improved (e.g. limiting cage time to what is actually necessary for a study) and those we should abandon. Nor has he acknowledged why we do experiments at all.
The above graphic breaks down some major categories of animal experimentation. The top-level goals are animal welfare and human welfare. For the former, Paul’s argument that we shouldn’t trade animal suffering for human welfare is simply negated: we must experiment on animals to perfect our ability to care for them. This includes veterinary science (e.g. medicine, surgery, animal psychology and care) and dietary science (e.g. developing better feeds), for pets, livestock, and animals in zoos and the wild; and so on.
Paul also hasn’t presented any actual argument for why we should not trade animal suffering for human welfare. There are three subcategories here: experimentation for frivolous ends, experimentation for serious ends, and edge cases (where we can debate whether the ends are frivolous or serious). Edge cases must be judged case by case, and thus cannot be resolved by any general conclusion. Frivolous ends (e.g. perfecting cosmetics) can be wrong when they actually entail significant suffering, as that trade lacks merit. But not all of it does. Experimenting with animal harness design, for example, is neither invasive nor misery-inducing, and is in fact designed to attend to determining the animal’s health and comfort rather than ignoring it; and thus neither would be experimenting with a frivolous equivalent, like pet costumes. But the rule of necessity would still rule out doing this needlessly (e.g. there is no value in testing human mascara on a monkey when it’s safe enough for paid human volunteers to test it).
Which leaves us with experimentation that (a) is actually necessary, yet causes (b) some measurable suffering, in trade for accomplishing (c) serious ends. This includes medicinal and surgical safety, and useful anatomical and physiological knowledge; and ranges in purpose from (d) averting more harm (where outcomes would statistically be worse if we skipped animal testing and went straight to human trials) to (e) discovering goods (where an overall programme produces a statistical net benefit in human welfare; thus “hit rate” doesn’t matter in the way Paul claims); and it operates on a scale of gradually increasing risk (from computers etc., to animals, to preliminary human trials, to full-scale human trials, to worldwide application), to attenuate greater overall harms. Just as a catastrophic or negative result shutting down a small human trial saves us from the harms of jumping directly to a large-scale human trial, a result shutting down an animal trial saves us from the harms of jumping directly to testing in humans. And none of this can be accomplished with what Paul calls “Alt” methods. We use animals because physiologies are far too complex and unpredictable to model computationally. “Alt” can reduce animal harm (by narrowing what we have to test on animals). But that’s another argument for reform (if we could do this more than we are; though that hasn’t been demonstrated), not abolition. Computer and in vitro and other models cannot replace animal experimentation, for exactly the same reason they cannot replace human trials.[1]
The real issue is that the death of an animal,[2] or even of many, is not comparable to the death of a person but is in fact a much lesser harm. Likewise, the suffering of an animal is not comparable even to the suffering of a human, much less a human’s death. And not all suffering is the same. Mild or transitory discomfort (like getting poked with needles, feeling stressed for a short period, or having to be caged temporarily) is simply not serious even in humans, much less in animals. Hence we experiment on animals as a means to reduce harm, because we conclude on a valid objective basis that animal lives and cognition do not count the same as human ones.[3] Animals are not persons. They lack narrative memory, self-identity, abstract goals, even a comprehension of the significance of life or death. The (humane) death of an animal is simply, objectively, insignificant. Because “one more year of life” doesn’t mean anything to an animal. The overall quality of their experiential life does matter (because they are not automata), and therefore animals subject to experimentation deserve compassion, and thence a reasonable attendance to their emotional and physical welfare (possibly more than is currently practiced, though that becomes an argument for reform, not abolition), attending the rule of necessity (if a practice causes suffering yet isn’t necessary, then it should not be a component of the experimental procedure).
Humans are not obligated to make animals “feel better” than they’d experience in the wild (any more than we are obligated to erase every ounce of human suffering); it is only our obligation to at least not make things worse without a necessary purpose.[4] This is obvious in our ethic with regard to babies and children, who are also “innocents” who “cannot” consent and who at the lowest ages can’t even comprehend what is being done to them, yet on whom we also perform experimentation, and even full medical procedures (pharmaceutical to surgical; and psychological). We allow and inflict on them all manner of necessary or unavoidable suffering (from stress to needle pricks to drugging them to cutting them with knives). Yet babies already have a cognition exceeding that of “animals,” and have a substantially greater cognitive future besides. Thus even babies far exceed animals in moral value and concern. So if we accept experimentation on babies (and toddlers, children, and teens), as in fact we do, we must all the more grant the same for animals. “That they are innocent” and “that they cannot consent” only set the bar that must be met in respect to the necessity and benefits of precisely what we do.
Still, animals are not the moral equivalent of human babies. For human infant and child experimentation,[5] our moral standards reflect things like concern for their future development (they will be adults someday, so experimentation should not cause lasting harm), which animals do not have (they remain animals; they experience no comparable cognitive development to be concerned for); and, in older children, a recognition of their cognitive modeling of events (their ability to appreciate harms done to them exceeds that of animals, and renders those harms morally greater). Being drugged or stabbed with a needle means something to children of most ages, whereas these events are incomprehensible to animals; the harm animals endure is therefore of lower cognitive content and more transient duration.
For all these reasons, none of Paul’s reasons warrant ending animal experimentation.
-:-
See Dr. Bali’s reply.
-:-
Endnotes
[1] See: “Are There Alternatives to the Use of Animals in Research?” in Science, Medicine, and Animals (1991) by the National Academies of Sciences, Engineering, and Medicine; whose conclusions have not since changed: Juan Carlos Marvizon, “Computer Models Are Not Replacing Animal Research, and Probably Never Will,” Speaking of Research (7 January 2020). See also Stanford’s statement on Why Animal Research is still a thing.
[2] I’m using a restricted definition of the word “animal” for this debate, per Foundations §(2) in Richard Carrier, “In Defense of the Scientific Use of Animals — Part I.”
[3] I outline why animals cognitively (and therefore morally) differ from humans in Foundations §(2) in Richard Carrier, “In Defense of the Scientific Use of Animals — Part I.”
[4] I outline how obligations originate in Foundations §(1) in Richard Carrier, “In Defense of the Scientific Use of Animals — Part I.”
[5] Marilyn Field and Richard Behrman, eds., Ethical Conduct of Clinical Research Involving Children (National Academies Press, 2004).
One of the strengths of this limited-essay format is that people can pivot if a particular argumentative approach is going to be better. That said, I did notice that Dr. Bali’s second post really didn’t seem to have much to do with the first. I think he may have been appealing to the audience of your commentariat, which was appreciated but did make me wonder about the throughline of his argument.
I would say that the animal-harness example shows that, for animal experimentation to be unequivocally wrong and not an edge case, it needs to satisfy three conditions: 1) The research is largely frivolous, such that gains to animal or human welfare are trivial; 2) The research is intrusive, harmful or cruel; and 3) The research has little benefit for the animal involved or other animals. The animal-harness case may be relatively frivolous, but the research itself isn’t terribly harmful. It seems to be the scientific equivalent of people playing dress-up with dogs: at worst a minor annoyance for the animal, quickly forgotten.
In my response I alluded to the fact that Dr. Bali ignores that, by the very nature of science, trials that do not provide clear and useful answers in and of themselves are not pointless. You’re expressing that here by noting that the outcomes of research can’t be analyzed piece-by-piece but must be analyzed systemically, as more than the sum of their parts. This does lead to an interesting question, where I think Dr. Bali has a point: while negative results are still results, perhaps there should be a norm when using animal experimentation to try to make sure that the experimentation is at least high-quality enough to answer a question decisively, and if it can’t be, then the research should, if at all possible, be done less intrusively, if at all. I do see the point that it is one thing to point to the case where there was an extremely promising potential medicinal breakthrough that required animal testing to complete, and quite another to point to the case where an animal ended up suffering for a line of research with a lower probability of success (particularly if the animal was suffering for medicine that was somewhat redundant).
I am curious if Dr. Bali can show that there are alternative methods that could at least reduce the need for AE by a substantial margin. He listed methods, but I’d like to see strong evidence that these new approaches can in fact work, like a good meta-analysis or a strong statement from a consortium of researchers. Otherwise, this could very easily be Monday morning quarterbacking.
I do think that the bar for treating animals akin to humans should perhaps be higher than you seem to think, but I also think it is quite clear that there is a massive difference between inducing or allowing the suffering of an animal that will not have its entire sense of identity and safety changed as a result, versus an animal for whom the initial suffering will cascade into the rest of its life. I wonder if Dr. Bali’s argument hinges too much on ignoring the trolley problem’s insight that suffering is suffering, and that focusing on whether one is causing it ignores that either way it is being caused.
“2) The research is intrusive, harmful or cruel” : What I hope to get into more is exploring that possibility space better. I think there are assumptions here about what is “intrusive” or “harmful” or “cruel” to an animal that are questionable; and likewise with respect to degree (rounding everything up, no matter how trivial or minor or fleeting, to “intolerably severe”).
“animal-harness may be relatively frivolous” : To be clear, a harness has utilitarian function (like a seat belt, it prevents harms by restraining misadventure; and it creates benefits, e.g. for a service animal guiding the blind, a sled dog pulling a rescue team, a horse pulling a carriage or a plow or carrying a rider, etc.); unlike “pet costumes,” which are purely aesthetic. Hence that was a distinction I wanted to bring forward. One can thus debate whether “purely aesthetic” benefits for humans are acceptable (e.g. many people think costumes shame or embarrass animals), without having to oppose genuine utility cases (the harness). And of course there are edge cases (e.g. animal clothing can assist with their body heat regulation).
“perhaps there should be a norm when using animal experimentation to try to make sure that the experimentation is at least high-quality enough to answer a question decisively, and if it can’t be then the research should if at all possible be done less intrusively.” : I mostly agree. And I think this falls under the rule of necessity, where maybe (?) reform is needed here and there in consistently applying it. Although I don’t agree with the extreme variant here (“decisively”), as that disregards degrees of harm and their relation to degrees of net benefit.
But overall, if there is a non-animal way to get the same result (and all else is equal, e.g. it costs more or less the same and lacks concomitant moral problems of its own, etc.), obviously that should be done instead. As animal experimentation then violates the rule of necessity. Likewise, if there are steps that can be taken to increase quality of animal studies, they should be (as I mentioned, using Alt to narrow what is actually tested on animals, thereby increasing the net rate of good outcomes and decreasing the net harms; the very same thing we use animal studies for with respect to people). As again, not doing that would violate the rule of necessity (unless again something isn’t equal, e.g. it’s too expensive and/or the harm caused by simply going straight to animal testing is relevantly small).
Finally, (1) on the effectiveness of Alt, see my endnote: the Marvizon survey addresses that, and is very recent; and (2) I do think Dr. Bali is coming at this from a POV similar to Jainism, whereby all suffering is equal regardless of cognitive content or properties of the subject (whereas I am ranking harms by, and weighing them according to, cognitive differences in them). What’s lacking is his defense of that reasoning. He has also said some things I find perplexing on this metric (such as that even neutering and spaying is immoral, as if animals have the same “right to reproduction” as humans can claim, which is an even stranger position to plant a flag on, requiring some general premise that likewise has yet to be defended).
No matter what, I do want Dr. Bali to start exploring in an evidence-based way what animal welfare actually means. I think you and I agree that it’s not meaningful to talk about animal welfare from an anthropomorphizing perspective: animals have to be understood in their own context.
We have a doggo who uses a harness in the car, and we are much, much less worried when driving with her since she’d have some safety in an accident, so I definitely agree harnesses are not frivolous per se, just more so than a life-saving treatment. I was just running with your use of it as an example, because it’s a case where there’s clear benefit and clear cost, but the cost is almost always going to be extremely benign and the benefits won’t be. And as far as costumes go, that’s a good example of what you’re trying to drill down to: do costumes actually shame animals? I have seen plenty of dogs who pretty firmly seem to love that they are getting attention and making their pack happy. Shame as humans understand it requires social processing that is pretty much just the province of the hominids and maybe other extremely intelligent animals. My read is that the concern there is pure anthropomorphizing: we think they look silly, so we assume they feel silly.
I am not convinced of the extreme variant of “decisively” either; I was trying to steelman Dr. Bali, giving an example of a strong but utterly defensible animal rights position. However, while I think “decisively” may be a little too strong, I do think that animal research could require the same general kind of oversight we do through IRBs, and that could include discussing if the approach has a reasonable likelihood of success at providing important answers to a topic of significant importance. If the research can’t meet that benchmark, the benefits to society become much less and the costs to the animals become far greater in proportion. And I think that standard would force researchers to really look for innovative solutions and really have their ducks in a row before creating an AE protocol, which would start reversing the institutional inertia that may default to animal experimentation too easily.
Reading Marvizon and the Stanford statement, I think they make very strong points but don’t really suggest that alt approaches couldn’t be developed more quickly if we deemphasized AE. Stanford points out that animals have the immense advantage that you can study them throughout their entire lifespan, and thus see unexpected results and repeatedly experiment with them, which is a really massive advantage that I don’t see alt replicating. Marvizon only addresses computer models, but I can imagine many alt routes beyond that, and his basis for prediction is not as strong as his historical analysis. Marvizon points to transgenic mice, the critical nature of evolutionary pathways (which introduces inevitable computational hurdles, and the same for genes), and other factors as increasing the importance of animals to research. The computational argument is quite strong, but at the same time it doesn’t prove that there’s no alternative in all cases, or that the current use of animals isn’t being helped by the fact that there is very little actual pressure to adopt the three Rs, of which he only really addresses Replacement. And a big thing that we’re actually getting a lot better at is evolutionary algorithms, so if evolutionary elements are so important, that may not be entirely insoluble either. Yes, an animal will always be a better platform to run a model of a human on than a computer, but I wonder if the growth in computer models is being hampered by the presence of easy AE.
And I would like Dr. Bali to defend something more like a Jainist ethic, but my scoring of the debate thus far has him having abandoned the moral framework aspect of the debate (the metaethics) and stuck with more fact-based, contingent ethical arguments. I certainly hope he spends some word count on dealing with the underlying moral reasoning!
The problem with Alt is the same as with alt meat: the tech is nowhere near capable of replacing the real thing. Maybe someday. But we remain hell and gone from that day now (and people have been trying damned hard to accelerate that curve, in both industries, to no avail: the situation for experimental Alt barely changed between the 1991 and 2020 studies I cited).
Alt can do one or two things well, which can be deployed to increase the quality of animal studies, but it simply can’t replicate an entire physiological system (least of all any cognitive system, e.g. whether and to what extent a treatment causes pain or hallucination or what have you). Just as alt meat can “sort of” replicate real meat, but at much lower quality and much greater cost (monetarily and, ironically, environmentally, owing to the energy and chemical resources required: see Zayner; Chriki and Hocquette; and Fassler). The cases are similar: animal bodies are vastly more complex and efficient than any artificial substitute we can come up with (it may be decades, even a century, before that turns the corner).
The purpose of animal studies is the same as human studies: to find out if we overlooked something. You can’t build a computer model to do that—as that would require psychically knowing in advance what you will find. If we could do that, we wouldn’t even need human trials for anything. Which is why we have animal studies in the first place: they are a “safer” precursor to human studies. Alt can help us narrow which things we bother putting through animal studies. But they’ll never eliminate animal studies—until they actually eliminate human studies as well.
As I said before, I think the irony of the conversation is that the alt track is going to just make animal experimentation more useful, more humane and less frequent.
I do think there is a case to be made that at least some alt-pathways could be being researched more quickly if we made it a higher priority. Obviously, if we had some kind of space-program-esque investment to make in technology, we would pick tech to solve climate change and sustainability concerns, which are a matter of human and animal welfare. And artificial organs or cell lines or whatnot are likely some distance away. But better computer models? Working on better observational approaches? Telemetric devices? Some innovative human studies? I think these could see greater use.
But your citations make a very strong case that confirms what I suspected would be the case: there are strong a priori reasons to suspect that AE will only be replaced by far greater tech than we currently have. Even if we imagine an incredibly good evolutionary algorithm, it can only put out answers based on what we feed it to start. Deep variable analysis will help, because programs are getting to the point where they can identify possible connections humans never will, but even then you have to check. Again, the irony is that this will likely lead to people going to IRBs saying, “My evo algorithm gave me multiple plausible and competing results based on factors that are apparently chaotic, and I can’t figure out which is right without actual subjects, and it’d be unethical to subject humans to a fishing expedition.” In other words, I think the most plausible scenario for alt will be to make sure that our animal experiments have a much higher signal-to-noise ratio which will reduce AE only by virtue of preventing redundant or failed experiments. And the fact that you can use one group of mice as test subjects across their lifespan, in a way you can’t for humans, means that even if your utilitarian calculus counts animals as identical to humans, the ease of working with animals means you are experimenting on a net-smaller group of organisms for better data.
I’ll be curious if Dr. Bali has any good citations that suggest that a broad set of innovative approaches are sufficient.
“The Alt track is going to just make animal experimentation more useful, more humane and less frequent.” — I wouldn’t bank anything on the “less frequent,” as that has too many other inputs (e.g. research itself is increasing); at most it will improve quality (i.e. reduce the number of unfruitful or disastrous studies), which reduces harms.
“I do think there is a case to be made that at least some alt-pathways could be being researched more quickly if we made it a higher priority.” — I’m skeptical. It’s been thirty years with little progress despite enormous investment. Everyone thinks a moonshot will solve any problem. History does not bear that out. Getting to the moon was actually a much simpler project than people think (we knew everything we needed to do it before we even started; the rest was just getting it done), compared to, say, reliably replicating an entire physiological and cognitive system.
All other Alt is inherently limited in what it can assist (at some point, you still always have to test things on an actual physiology), or else actually increases rather than reduces harm (e.g. kamikaze human experimentation is not an improvement over animal experimentation but far worse; Paul and I disagree on this because he thinks all animal harm is equal to human, and I do not—not even close).
I don’t object to capitalists who want to invest more in this. But IMO, we have far more pressing priorities in the relative scale of harms to blow our wad on (e.g. if we are going to moonshot something, I’d rather it be climate management; which, needless to say, would even benefit animals vastly more than physiology sims).
“And artificial organs or cell lines or what not are likely some distance away. But better computer models?” — We are actually far more advanced on the former than the latter. Artificial biology is showing substantive progress (not entire physiologies, but building single organs for transplant, for example, should arrive within a decade or two). By comparison, physiology sims are primitive at best. We are nowhere near what we need on that. We don’t even know what we are supposed to be replicating. That’s why we still have to do live studies.
I don’t see us being able to sim that reliably enough to eliminate live trials (and as long as we need human trials, we will need animal trials as a quality amplifier on the former) for at least fifty if not a hundred years, given the pace and rate of development seen so far. Indeed, we are more likely to achieve general AI much sooner than physiology sims, because true AI is at least achievable on general principles, or on pared-down minima (the bare minimum machinery needed to realize it; e.g. when emulating brains, we won’t need to emulate literally “everything” in a brain, much less all the way down to selective DNA methylation in every single neuron), whereas replicating an entire animal or human physiological and cognitive system is vastly more challenging and complex (indeed, likely the most difficult thing the human race will ever do).
“Working on better observational approaches? Telemetric devices? Some innovative human studies? I think these could see greater use.” — These have too limited a use to ever replace live trials, though. The purpose of live trials is to catch things we didn’t think of. Therefore, by definition, tests that require already having thought of something, won’t help much. I’m all for using them when they do help. But to be honest, I haven’t seen any evidence presented here that we are substantially “neglecting” any of these things (much less, so much that we could radically alter the way we are doing science); but insofar as we are, that falls under the “reform” column of advocacy, not the “abolition” column.
“I think the most plausible scenario for alt will be to make sure that our animal experiments have a much higher signal-to-noise ratio which will reduce AE only by virtue of preventing redundant or failed experiments.” — I agree. Insofar as we can do that, we should do that. I just wouldn’t be too optimistic as to how much we actually can. Tons of people have been trying for decades, yet progress remains slow. I doubt more investment will help much. It’s not like corporations and institutions don’t already have ample incentive to master this tech (anything able to sim an entire physiology reliably would have countless other applications of immense value).
Just look to the alt meat industry for an analogy. Tons expended. Decades of work. Lots of hype. No sign of being anywhere near success. If we can’t even make a usable fake hamburger, we won’t be making a useful physiosim anytime soon. I doubt we will see one in our lifetime.
Since we will just double the number of things we study after we halve the number of animals needed per study, Alt will not likely reduce the number of animals experimented on. I think it’s worthwhile anyway (improved efficiency both reduces net harm and better leverages our material resources). But one should have a realistic idea of the outcome.
Broadly agreed. I would only say that perhaps there is a distinction between the kind of alt work we do now, which is motivated by expediency, corporate needs, cost-cutting, etc. rather than animal welfare, and what we might find if we put some amount of money into it (not moon-landing money, but maybe some billions): there may be alternatives after all. But it does make sense to me that, while animal welfare per se may not be a high priority for planners, that has never meant we are at all happy with having to do messy, controversial, weird animal studies in the first place; and the same tech that could circumvent animal studies could be used to circumvent human studies, which engage stakeholders with real power and are themselves messy, complicated and weird. The meat industry in particular has definitely put in work on animal substitutes, and it’s a not-insignificant business, and they’re still nowhere near done even when it comes to making a vegan burger that has the taste and texture and culinary properties of a real one.
More importantly, this is a pathway for reform, not revolutionary change. A combination of better procedures, better review and better tech would help, but wouldn’t change the underlying dilemma. And we have to be realistic, always, about the capacity for enforcement anyway. After all, it is a huge priority that science be high-quality, and yet we have plenty of institutional problems going the other way! In fact, there is an argument to be made that fixing publication bias, improving peer review, promoting more replication tests and making success in replication honored in the academy, etc. would improve animal welfare more than even alt investment, as we would spend less time having to ferret out crap AE results!
I’ve really enjoyed the discussion so far. I’ll wait for further posts from Dr. Bali, as I feel I don’t truly understand their exact viewpoint yet, but I think I’ve got a better grasp of where you stand, Dr. Carrier.
The one part of this post that stood out to me was: “The (humane) death of an animal is simply, objectively, insignificant.” We are obviously excluding animals like elephants, but is it really insignificant just because the animal itself can’t understand what is happening? The suffering and death of these animals is significant to Paul; otherwise you wouldn’t be having this debate. If I misunderstand what you mean, please let me know.
This discussion is so interesting, not just as a debate between two people but also as the tension in all of us: the gut emotion of not wanting innocent animals to suffer and die vs. the rational justification for a greater good. Of course even the death of a human can be seen as insignificant in the grand scheme of things. Whether it be medical experiments or just matters of convenience and economy like driving cars, we all choose to sacrifice lives for the society we live in. We also make decisions for animals without their consent all the time; this seems necessary to me. Hopefully Dr. Bali can present the case in a way I can understand better in the next post, but I just can’t see how we can completely get rid of animal testing (is this even his view?). Ultimately I agree with you, Dr. Carrier: there are many situations where it’s justified, and these sorts of ethical decisions need to be made carefully on a case-by-case basis.
That’s a good point to bring up. Of course Paul and I are talking about whether animals have intrinsic moral value, not personal subjective value (see Objective Moral Facts for the distinction). Certainly animals can have assigned third-party value (e.g. someone can value their pet or prized race horse or the survival of a specific species for its ecological or aesthetic properties, and the latter can sometimes even have objective value, e.g. when the loss of a species would harm humans, likewise social maltreatment of others’ pets, and so on), but that wouldn’t generate the conclusions Paul wants here (no animals are threatened with extinction by experimentation; and Paul has no legal or moral rights over the animals that are being experimented on).
Since Paul does not own or have any right over the animals being experimented on (they aren’t his pets; no species extinction is being threatened; etc.), his “valuing” them in only his own personal sense has no objective weight (i.e. it cannot be a reason for experimenters to change their behavior, and therefore it does not dictate any moral conclusion over them). At most, such a position could be an argument for Paul expending his own money (and seeking like-minded persons to do likewise) “buying” so as to rescue animals from experimentation for his own personal satisfaction. But that’s an aesthetic goal (at most supererogatory), not a moral one.
(It would also mostly be a futile waste of resources; except when we already do it, e.g. even experimenters fund chimpanzee sanctuaries, but as I noted in my opening, apes are a cognitively different class of animal, and aren’t among what I am including under my use of the word “animal” in this debate.)
Dr. Carrier wrote the following concerning animals:
“They lack narrative memory, self-identity, abstract goals, even a comprehension of the significance of life or death. The (humane) death of an animal is simply, objectively, insignificant. Because “one more year of life” doesn’t mean anything to an animal.”
Response: But strictly from that standpoint couldn’t one then make the same argument about human infants?
Just sayin.
I already rebutted that claim in the text you are commenting on. Read the wording carefully. There is high word efficiency in these entries because we have chosen to limit each to 1200 words, so every single word counts:
…
Also, remember my definition of animal for this debate (from entry one):
Have disabled or handicapped adults in vegetative states got “cognitive futures,” such that greater “harm” in experimentation on them is therefore impermissible?
Of course such people have cognitive futures worth accounting. And when they don’t, we morally accept (indeed, typically even recommend) discontinuing life support—and welcome the recycling of their organs or even entire body for actual and experimental use.
Keeping a brain-dead body alive is pointless; it only used to be a person. Whereas people of whom we do not know this, we cannot presume it of (e.g. someone in a coma might awake; all the data constituting them as a person could still be in there, and it’s just a matter of time or technical development before they return).
By contrast, animals never contain a person. As far as that metric goes, they are empty vessels. They can have personalities, memories, and cognitive lives, but not such as would constitute a person. They don’t even share many of the related cognitive capabilities of a human infant. Unlike coma patients, animals never “wake up” and suddenly become a person (much less discover they always were one), and there is no chance of that ever spontaneously happening. So they are not morally comparable.
Dr. Carrier
On a somewhat related note, I’m curious what you think about how we as a society handle situations differently when a person or animal is in non-stop suffering, and the situation will not get better and may get worse.
With animals (even our most beloved pets) we do the “humane” thing and put them down. Obviously the pet can’t voice their own wishes in how they want this situation handled, but we make the decision for them based on what we think is the most humane thing to do.
But with people we handle it entirely differently. No matter how great their suffering or how grim their prognosis, and despite their own wishes, we completely disallow it (assisted suicide).
Do you see a contradiction in how we handle that, especially given that humans are more sentient beings and are actually capable of expressing their own desires with respect to that decision?
I don’t observe “us” (progressives like me) doing anything differently here. Progressive morality entails support for voluntary euthanasia. The only reason we don’t extend the “voluntary” part to animals is that they are incapable of it, so we have to make the call for them. Just as we do for people (e.g. the brain-dead) who likewise can’t make that call for themselves (and thus their legal guardians decide when and whether to terminate life support). But as soon as that capability exists, it takes priority.
Maybe you meant to ask why there are people who still oppose progressive morality. Since these are usually the same people who think being gay or smoking pot or being nonmonogamous is immoral, for example. Their morality is simply bogus. The rest of us have left them behind, moving on to rational, evidence-based moralities. If you want a historical-causal account of how bogus moralities historically developed, and what still politically empowers them, that would be off topic here. The present task is to determine what is morally true; not the historiography of human error.
Dr. Carrier states the following concerning cognitive futures of humans in certain states or points in their lives:
“Yet babies already have a cognition exceeding “animals,” and have a substantially greater cognitive future besides.”
-and-
“Of course such people (adult disabled or handicapped adults in vegetative states) have cognitive futures worth accounting.”
Based on these comments, Dr. Carrier, you seem to put stake in the value of one’s cognitive future.
Having said that couldn’t an anti-abortionist take your position as a strong argument for why we should value life at conception, given that such forms of life undeniably have a “cognitive future worth accounting”?
That I “put stake in the value of one’s cognitive future” is correct. The entire thing that is of value, is the valuing engine, the thing that creates value and its comprehension. That is the only thing that can have value in itself; everything else only has value by virtue of a valuing engine valuing it.
A fetus does not have one of those. Until the third trimester, when indeed my position is that elective abortion should then be illegal (the very same position taken by the U.S. Supreme Court in Roe v. Wade; non-elective abortion at that point, however, is self-defense and therefore appropriate, which is also the position of the Supreme Court, with whose analysis I fully concur).
A potential house is not a house. But a house under construction, is. A body is not a cognitive instrument. But a brain, is. And before the third trimester of pregnancy, a fetus has the cognitive construct of an animal, not a human (and in the first trimester, hardly even that). It therefore has the same status.
Therefore, before the third trimester, the cognitive future of a fetus is purely hypothetical, not actual. That future therefore only has value to the mother carrying it (or not), and thus it is up to her whether to pursue it. Because the fetus itself cannot at that time have values, or value anything. Whereas by the third trimester, it possibly could (it has the machinery to construct cognitive desires and begin building itself into a person, rather than just a body without one). Then it’s a house under construction—not just the construction company standing by to build it (which is all a body by itself is).
Specifically concerning this statement:
“Therefore, before the third trimester, the cognitive future of a fetus is purely hypothetical, not actual. ”
Response: Given the very high success rate of most pregnancies (including fetal development), I would have to take issue with the statement that the “future of a fetus is purely hypothetical”. I know what you are saying: that we can’t know for certain that any given fetus will reach full term. But the probability of that happening is so high that it can’t be discounted and assessed as if it were some kind of crap shoot with unknown or unpredictable odds of success.
So based on what we know about the rate of successful pregnancies, we can safely say that MOST abortions (of an otherwise healthy fetus) are in fact impacting the cognitive future of a fetus.
Actually, most pregnancies fail.
But that’s not relevant to the point.
A fetus is a hypothetical person in the same way a blueprint or a construction company is a hypothetical house: something does physically exist (a blueprint; tractors and managers); but it isn’t the thing in itself. Neither a construction company nor a blueprint is a house. Likewise a fetus is not a person (not even partially). Until (at least possibly) the third trimester, when enough machinery exists to start building a person, so (for all we know) that’s underway then. By a certain point, an incomplete person experiencing consciousness is still a person in the same way that by a certain point, an incomplete house you can shelter in is.
You have no obligations to hypothetical people (until they are no longer hypothetical). Those people don’t exist yet, so they have no interests (unless they are effectively certain to exist, but that then depends on some actual, not hypothetical, person’s decision to build them).
For the purposes of moral consideration there is no meaningful or relevant difference between a fetus and a sperm or egg. You have no obligation to the hypothetical children you “could” have—no matter whether there’s just a cell or an unfinished body sans cerebral cortex—until such time as you actually commit to having them (and thus, you have obligations to the future people others have committed to having, e.g. to leave the world in a better shape for them). Otherwise, until some actual person decides to proceed, a fetus and a cell are both just blueprints and construction companies; neither contains a person or even the ability to be generating one.
This is an ontological fact. So there is a real difference between a “hypothetical” cognitive future, and an actual cognitive future. Only actual cognitive machines can “have” the latter (as in, actually possess it: there is at least a partial, actual, existing person then, consciously present, who possesses a thing, a future). Hypothetical things cannot “have” futures (other than purely hypothetical ones). Because things that don’t exist can’t possess things.
Thus, when all you have is a blueprint or construction company, the decision whether to actually start and continue building has yet to be made, and there is no “person” existent who can make that decision—except the one whose womb is being used for the project. Once you have at least a person-in-progress (an actual cognitive machine with partial personal characteristics and active consciousness), then it exists, and thus it has interests (actual ones; not hypothetical ones). So it is then no longer the case that only one person is around to have an interest in what then happens.
Dr. Carrier,
I want to know what you think about The Cambridge Declaration on Consciousness:
https://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf
Do you think that animal rights activists will succeed, through this declaration, in giving animals the same rights as we humans have?
What do you think about Oregon’s IP 13 and IP 3:
https://congressionalsportsmen.org/the-media-room/news/proponents-of-oregons-initiative-petition-13-abandon-efforts
Will this ever go through and become a law in Oregon?
What do you think about EAT-Lancet?
https://eatforum.org/eat-lancet-commission/eat-lancet-commission-summary-report/
Did you actually read the Cambridge Declaration? It actually doesn’t say anything I have not already myself said. Perhaps you have been misled by its esoteric vocabulary, but no one is debating whether there are many animals that “experience affective states.” That just means they feel emotions. None of our debate here is about whether many animals feel emotions. Personal consciousness is not just “feelings.” Those two kinds of consciousness must not be conflated.
And if you have read the debate you are here commenting on, you should already know I am most definitely arguing against laws like Oregon’s IP 13. But if you mean to ask instead whether I think it nevertheless could become law, even if it’s unjustified, my answer is also no. It has so little popular support as to be certain never to be law. They couldn’t even get the 112,000 signatures needed to put it to a vote. In a state of over 4 million people, that’s as nailed as a coffin can get.
Meanwhile the entire EAT-Lancet report is TL;DR. Too many subjects and claims to vet; and there is no easy way to even read the full report without giving them personal information (I distrust anyone who won’t just publish this kind of report openly). So you will have to be more specific.
But if you have read it, then you might want to read my article, as I linked to in this debate you are commenting on, Meat Not Bad (and possibly also Is Society Going to Collapse in 20 Years?), in case that report simply repeats the false science I debunk there (I don’t know that it does, but this exercise, if you carry it out, should clue you in on how you might vet the EAT-Lancet report on your own, by applying the same critical approach; and it might also help you narrow down to a more specific question to ask me).