This continues the Carrier-Bali debate. See the introduction, comments policy, and Bali’s opening statement in Should Science Be Experimenting on Animals? A Debate with Paul Bali, as well as my first response to it, In Defense of the Scientific Use of Animals.
Against the Scientific Use of Animals
— Part II —
by Paul Bali, Ph.D.
-:-
See the Trolley: speeding toward five humans. On the off-track, a restrained animal. Thus more innocents die by our refraint from track-switching—by our abjuring AE [Animal Experimentation].
Yet consider some disanalogies with classic Trolley:
1. The cost of AE is certain, yet the benefit often dubious.
2. There’s a third track: non-AE science.
3. We tied the animal to the track!
Some disanalogies are better handled by Trolley variants—3, for example, invites discussion of Fatman and Transplant.
Another disanalogy, as Richard advocates: the mainway Five & offtrack One are different kinds—human & animal. Some differences may support track-switching—e.g. the Five’s exalted value; while some may support refraint: e.g. the One’s exclusion from system benefits.
In this post, I’ll expand on 1 and 2. I’ll address the question of species difference as I proceed, but with special focus on 3.
- 1. The cost of AE is certain, the benefit dubious
Every lab animal is harmed by AE—if only killed when no longer useful; yet few experiments yield life-saving knowledge.
Especially so in exploratory / basic research, “which, by definition, is not necessarily intended to lead to applications for humans.” [1] Doubtless we learn much by using our animal relatives. If we want to explore optogenetics, mouse brains will do. Yet there are no endangered humans—the mainway is clear—in much AE. Even when clinically translated, much AE doesn’t save lives. This trolley runs over animals to relieve morning sickness & acid reflux; to restore dopamine levels & visual acuity; to replace kneecaps.
Since AE is a dominant paradigm, often legally mandated, most successful therapies have AE in their history. No doubt it often contributes. Yet see the troubling hit-rates: the 195 treatments for Type 1 diabetes, the 30-40 HIV vaccines, and the 300 Alzheimer’s interventions, all effective in primates or mice, yet none of which made it through to human use. [2]
Partly to blame is the superficiality of many animal models, rough contrivances that “fail to reproduce the complexity of human ailments.” [3]
Partly to blame are factors like publication bias and statistical massaging that drive the wider replication crisis. Whatever the cause, a primary justification for AE’s grievous harms—medical translation—is de facto hampered by the low rate.
- 2. The Alt track
I agree with Richard that a problematic system may be improved rather than scrapped. Yet the system worth preserving here is Science, which may be improved by moving away from AE.
AE is necessary for its ends (health & knowledge) only when the non-AE methods (henceforth ‘Alt’) are inadequate.
Alt includes in vitro methods, like the Monocyte-activation Test, which generates 50,000 tests from 500 ml of donated human blood, replacing the inferior Rabbit Pyrogen Test. [4]
Alt includes comp sims, and a 3D-printed heart model newly developed at Harvard’s Wyss Institute. [5]
Alt includes “non-invasive observational or behavioral studies of free-living or sanctuary chimpanzees, and experimental treatment of chimpanzees genuinely suffering from severe, naturally occurring disease or injury, when conventional treatment is ineffective.” [6]
Alt means telemetric data from free animals by devices shrunk to a point I’d call non-invasive. [7]
Alt means discerning the lab of our world, the implicit mega-studies ongoing for every ailment that, by our growing powers of data collection, collation, & analysis, may be foregrounded. [8] Big Data need not mean expanding to 30,000 the 3000 animals now expended in pre-market safety-tests for each U.S. pharmaceutical.
Alt includes research on consenting humans, even invasive. The insulin experiments on dogs could have used us: whether patients risking an experimental therapy, or altruists for medical progress. Dogs were necessary only by demurral of the better “model animal”.
[I]t simply does not follow that our special endowments (if such they be) justify the infliction of suffering on other sentients. Indeed, an argument could be properly run in the opposite direction—namely, that because humans are unique (especially in a moral sense), they should agree to sacrifice themselves to achieve useful knowledge, rather than inflict suffering on others who are morally blameless. [9]
Just as a warrior culture valorizes martyrs from their great campaigns, a scientific culture might valorize those who self-sacrifice for healing knowledge. By graduated dosing, former Dow chemist Alexander Shulgin self-tested hundreds of novel psychoactives at his Bay Area home lab. And kudos to all the community volunteers in the quest for a live cholera vaccine! [10]
Richard considers breaking the Village Hangman’s dilemma by disabling the dilemma-generating system: by killing the aggressors, thus sparing both One & Five. [11] Presumably the villagers are armed & many, since they’ve overpowered their would-be victims. I admire Richard for mulling this third track, and hope I’d find courage in the fray to join him—yet wonder if the Alt route is as plausible! Humans are amazing—we dream the impossible, then engineer it into being. The dream becomes the legislated EU goal of “full replacement”, which focuses to a Dutch five-year plan for toxicology, and so on. [12]
- 3. Med-Sci good spreads far into the future
Once we’ve unlocked a therapy, it’s forever—Dark Ages aside. Thus, even should AE unlock the therapy faster, the relative benefit of AE to Alt dwindles with time.
The good, for example, of a cure for congenital disease X consists of all future cases prevented. Say X afflicts a million births a year, and that, via AE, the cure could be had by the year 2050; yet not till 2100 by Alt. By the year 2200, the Alt world’s benefit is 67% of the AE world’s benefit (100 million people spared X, compared to 150 million); yet by the year 3000, it’s at 95%; and so on. (Assuming a stable population. Relative to a galaxial Earthling diaspora, the AE advantage could be vanishingly small.)
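The arithmetic here can be stated compactly. Under the example’s stated assumptions (a constant million cases per year, a cure arriving in 2050 via AE but 2100 via Alt), the ratio of the Alt world’s benefit to the AE world’s benefit by year $T$ is:

```latex
% Benefit ratio of the Alt track to the AE track by year T,
% with r = cases prevented per year from each cure date onward.
% The rate r cancels, so only the two cure dates matter.
R(T) \;=\; \frac{r\,(T - t_{\mathrm{Alt}})}{r\,(T - t_{\mathrm{AE}})}
     \;=\; \frac{T - 2100}{T - 2050}

% Worked values from the text:
R(2200) = \tfrac{100}{150} \approx 67\%, \qquad
R(3000) = \tfrac{900}{950} \approx 95\%, \qquad
\lim_{T \to \infty} R(T) = 1.
```

Since the ratio tends to 1, the AE head start is a fixed fifty-year gap set against an ever-growing horizon of benefit: the longer the future over which the cure operates, the smaller AE’s relative advantage.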
True, this increasing divergence in the cost/benefit ratio is a major argument for AE: relative to a vast future of benefit, AE’s harm may be small. Yet AE inflicts harm on the healthy & unconsenting. Amputating A’s dead limb may aesthetically disturb, but stealing A’s healthy limb so H can use it—this is the closer analogy to AE, and the horror is moral. Moreover, AE harms true bystanders, beings largely excluded from the system’s benefits. More on this coercion, next post.
Speed kills! We’ve come into the Bomb perilously premature, and now grab the molecular ring of life. AE may lack a vast future, if it speeds ahead of our moral maturation. Taking the ahimsic route—the track of trans-species benevolence—could better sync our technological & ethical progress, and forestall apocalypse.
-:-
Endnotes
[1] Elisa Galgut (2015). “Raising the Bar in the Justification of Animal Research.” Journal of Animal Ethics 5.1: 11.
[2] Jim Keen (2019). “Wasted Money in United States Biomedical and Agricultural Animal Research.” Animal Experimentation: Working Towards a Paradigm Change. Kathrin Herrmann & Kimberley Jayne, Eds. (Brill): 255.
[3] Keen (2019): 249.
[4] Thomas Hartung (2015). “The Human Whole Blood Pyrogen Test: Lessons Learned in Twenty Years.” ALTEX 32.2: 95; Thomas Hartung, Audrey Borel and Gabriele Schmitz (2016). “Detecting the Broad Spectrum of Pyrogens with the Human Whole-Blood Monocyte Activation Test.” BioProcess International (March 11, 2016); Hannah Balfour (2021). “Rabbit pyrogen test to be replaced by European Pharmacopoeia.” European Pharmaceutical Review (July 9, 2021).
[5] Wyss Institute (undated). “3D Bioprinting of Living Tissues.”
[6] Andrew Knight (2012). “Assessing the necessity of chimpanzee experimentation.” ALTEX 29.1: 94.
[7] See for untapped potential of telemetrics: Garet P. Lahvis (2017). “Unbridle biomedical research from the laboratory cage.” eLife 6:e27438: 5.
[8] Daniel Kraft discusses Big Data via wearable biometric devices in his 2018 interview with Rob Reid (cooption by Big Brother a concern, no doubt): Rob Reid (host). The After On Podcast 28 (May 29, 2018).
[9] The Oxford Centre for Animal Ethics (2015). Normalising the Unthinkable: The Ethics of Using Animals in Research. Andrew Linzey & Clair Linzey, Eds. (p. 37).
[10] James B. Kaper, Hank Lockman, Mary M. Baldini, and Myron M. Levine (1984). “A Recombinant Live Oral Cholera Vaccine.” Nature Biotechnology 2: 345.
[11] Richard Carrier (2021). “Everything Is a Trolley Problem” (27 September).
[12] Directive 2010/63/EU. (European Parliament, 2010, Recital 10). For the Dutch plan, see The Netherlands National Committee for the Protection of Animals Used for Scientific Purposes.
Dr. Bali, I think this is a much stronger showing than your previous post, but I still find myself woefully unconvinced, even though I am sympathetic to your argument.
You say that the cost to the animal is certain but the benefit is not. But by that reductio ad absurdum, we should never do medical research at all. Medical research on humans is not certain either; indeed it is far less certain, because it usually occurs outside an experimental context and the goal is to fish through a huge number of variables rather than the more focused kinds of things you can do in an animal study. Yes, at least those animals called “humans” get to consent and have some idea what is happening to them, but barring you establishing the very core point that it is deontologically wrong to use a sentient being without their consent for the benefit of another sentient being, this point in isolation is moot. We do things with certain costs and dubious benefits all the time. If you are in a trolley problem where you know you will kill 10 people if you pull the switch, but have a 50% chance of killing each of 100 people if you don’t, you pull the switch.
Moreover, thus far you haven’t specified enough about animal experimentation to actually show that the costs are always certain and relevantly high. Of course you are trying to keep word count down, and I imagine you have thoughts here, but what immediately occurred to me was animal research (which certainly counts as “experimentation”) in the vein of Koko or Nim Chimpsky. We put animals through tons of experiments that not only don’t have to be harmful but can be intended to be helpful and have helpful outcomes. (Yes, I am aware that chimp research is fraught with ethical considerations that don’t often make it into the analysis, but still, it is ludicrous to imagine that all the research zoos do on animals is sadistic and harmful.) Now, perhaps you are trying to focus only on the worst, because really the debate is about the constellation of interventions that require vivisection, experiments that can easily kill or harm the animals, etc., and you are basically defining other kinds of experimentation on animals out of the equation. But my concern there isn’t even the cherry-picking that that can arguably entail: I don’t see a clear bright line on the spectrum from non-experimental observation of animals with no intrusion, to non-experimental observation that involves intrusion (e.g. tagging an animal but not controlling its conditions), to experiments that just require controlling the animal’s habitat, and so on up the ladder.
By your own admission, some life-saving data is found. What I find telling is that you set the threshold that high. Why are only directly life-saving, verified outcomes of research important? Let’s say there’s a drug that researchers suspect could have serious side effects, like massive pain or a huge risk of cancer. True, there may be all sorts of alternatives, and true, the research on the animals may end up being inconclusive, but some portion of the time the researchers who choose not to do animal research would then need to experiment on people. You’re not reducing net harm. And since the people involved may consent to the experiment but will necessarily be under a double blind, and may be in situations where the medical trial is their only route to treatment, even from a Kantian perspective there is still an element where people with moral agency have the choices in their lives limited by a random element for the greater good. Sure, I agree that it is probably unreasonable to basically torture animals to make sure lipstick is safe, but to set the bar for the benefits you count at the threshold of life-saving insight seems to tip the scales in your favor ludicrously. Massive pain and serious side effects are worth taking seriously.
Worse, I read you as having counted failed experiments as failures. Ummm… that’s not how science works, Dr. Bali! Many of those failed animal experiments saved lives. By not wasting time on unfruitful lines of research in Alzheimer’s and other areas, and by avoiding drugs that could have had serious side effects, lives were helped. Again, you invoked the trolley problem here! Flipping the “Person X doesn’t get poisoned” switch is essentially identical to flipping the “Get person X an immediate and perfect antidote” switch!
Mentioning publication bias, etc. only tells me that you are making moot objections. Yes, we need to fix research in a lot of ways. That isn’t unique to animal experimentation.
Your second point is more interesting, but ultimately even less strong in this debate. Again, you invoked a trolley problem and then made the argument here, which tells me that you may be arguing from preexisting concerns and retrofitting arguments to fit, rather than being consistent. You are arguing to Richard that sometimes you don’t need to pull the lever, because maybe there is a conveniently placed rock in front of the fat man. But that doesn’t answer the question of what you do when the rock isn’t there! That’s the whole debate! Obviously if animal experimentation isn’t necessary, it’s not necessary! The problem is that it clearly, for the time being, is necessary in some contexts.
Might we be able to find win-win scenarios if we do as you suggest and “move away” from AE (in other words, raise the bar for using it)? Sure, I think that is reasonable. But I have a concern that you may be executing a motte-and-bailey fallacy here. “Animal experimentation is deontologically wrong” is a very strong claim, and one that requires vigorous defense. “We should be judicious about any experimentation that causes unnecessary pain or harm” isn’t. The first statement is one that suggests radical change. The second only demands vigorous reform. But trying to have both those debates simultaneously isn’t going to be productive. I suspect Richard may end up making the same point here.
And you have utterly failed to meet the burden of showing that animal experimentation is never useful. You’ve shown there may be alternatives, and I frankly think it is true that a lot of animal experimentation occurs because of socio-institutional inertia and cost: it’s easy, lazy, and cheap. But even there, there are concerns. Untested experimental protocols, or less-tested protocols, are just that: less tested. There will be a learning curve in switching to alternative methods. Moreover, if I were doing computational models or working with synthetic organs to test, I would still want an animal to test on as the next step before I could test on a person. Ironically, I think the biggest outcome of trying to find alternatives first will be to make animal experimentation more humane by increasing our hit rate.
Your final argument is also uncompelling. First of all, “Hey, Bobby, wait for your grandpa to not die of cancer because a rat may have a tummy ache” is not as good a look as you might think it is. People need help now. Second, not all medicines remain useful in perpetuity. You know, like antimicrobials. Third, even established medicines routinely need to be revisited when we want to improve their form factor or, you know, revisit costs. In the past we were blasé about using powerful and addictive painkillers. We’ve learned the hard way not to do that. So now we need to revisit the painkilling experiments, despite the existing body of research on NSAIDs and steroidal painkillers and opioids/opiates, etc. Fourth, we also need to revisit old ideas to, say, find medicines that may work for people who have an allergy, or replace medicines that require an unsustainable model to synthesize, or replace medicines that depend on some resource consolidated in particular countries, so that resource extraction can be done locally. Finally, science itself has to grow and learn, and that process is better off the faster it goes, as past insights are synthesized into new ones.
Most importantly, this argument hinges on you ever finding an alternative. In other words, you are excluding, prima facie, the scenario where a particular cure is never discovered without some insight from animal experimentation. The progress in science is weird enough I would not be so confident. We may have taken far, far longer to discover penicillin if someone wasn’t extremely diligent in examining what would be viewed as failures of protocol.
In other words, you seem incredibly optimistic that we will be able to retain and use every new approach we research in perpetuity, and… that’s just not true. (And there are also moral bystanders from the non-AE track too: If we don’t figure out solutions to certain tough-to-kill bugs, animals can get hurt too, to say nothing of veterinary medicine on animals, and numerous other categories one could imagine).
If we want to be blasé about the pain of humans who are sick now and need help later, we should apply the same standard to animals. In the long run, every animal we experiment on will be dead anyway. So who cares about their welfare now, especially given most of them have such short lives? I think you would find that monstrously callous, as I certainly do. But that is how you are performing this analysis. Yes, I understand you are actually arguing that benefits accrue more in the short term, which I still think is false; but in any case, while that is important, I don’t think it is anywhere near as strong as you think it is. We only live in the short term. And we can only make decisions within a horizon we can plan in.
Thank you. Those are all points worth reading and considering. I was already building a couple of them into my reply before I read this. But I want to make sure Paul knows he is only required to respond to my word-limited response. Though of course I think he’d benefit from reading your comment entire and keeping its points in mind for his own philosophical evolution. And of course he is welcome to respond to it here as well if he wants to.
Paul said in his previous post he would be looking through comments after he was done. I don’t expect a reply, just wanted to make sure that there’s some addressing of broader issues you may not be able to get to due to word count. I am indeed just a rando here.
Also, as always, I like to add that it’s really great to see people participating in this format. I really hope more people switch to this approach of debate rather than live debate (if a less-antagonistic discussion is not possible), because it allows for the brevity of a timed debate but also allows for citations to be made and discourages Gish galloping. Thanks to Dr. Bali for discussing an important issue!
Oh you are no rando. You have demonstrated consistently thoughtful and productive commentary here. Always value added.