Cartoon depicting a ridiculously complicated and illogical version of the Trolley Problem as a mock model of the 2016 U.S. Presidential election, from the associated New Republic article by Clio Chang.

Ah, the infamous Trolley Problem. So ubiquitous, we find it meaningfully featured even in the television show The Good Place. A lot of people don’t like the Trolley Problem. Its very existence vexes them. They’d rather complain that it supposedly doesn’t teach us anything, so we can just sweep it under the rug as just another bad idea, along with digital watches, portable currency, and coming down out of the trees in the first place. At Aeon we’re told Trolley Problems are too simplistic to usefully analyze reality (just like Game Theory…oh wait). At Quartz we’re told Trolley Problems are of no use because it can’t be decided how to program deadly AI from “first principles” (a notion just as obviously false: if you have no first principles to work from, you can have no principles at all). At Slate we’re told Trolley Problems are useless because people don’t actually know how they’d behave in real life, so what use is a thought experiment? Never mind that the entire fields of Economics, Game Theory, and Political Science rely on foundational thought experiments continuously applied to real-world situations, and that thought experiments are the most common and necessary instrument in Psychology, Contingency Policy, and Crisis Management. Yes, you can Hose Thought Experiments; and average Janes and Joes (and especially Karens and Chads), having the least skill at it, will fail at it the most often; but the fact that they can’t do heart surgery or build a rocket, either, is not an argument against heart surgery or rockets.

As it happens, literally almost everything is a Trolley Problem. So these attempts to escape it won’t do you any good. Like the guy who insists he has no metaphysics—a declaration which itself embraces a conclusion in metaphysics; and worse even than that, a conclusion that doesn’t even track how he actually behaves, which instead will be in accordance with a rather elaborate metaphysics, one he has simply committed, like the apocryphal ostrich, to never examining or questioning, rather than actually abandoning (much less fixing). I’ll explain what I mean by “everything is a Trolley Problem” shortly. But first I’ll bring you up to speed on what I’m talking about. In case you didn’t know, the standard Trolley Problem, developed by Philippa Foot in the 1960s (one of the greatest women in philosophy in the 20th century), is most simply described as: a runaway trolley is flying down the track, about to run over and kill five workers (who can’t get out of its way, for some reason that doesn’t matter to the point), and you happen to be standing next to a switch that, if you throw it, will divert the trolley onto a different track, where it’ll only kill one worker. What do you do? And more importantly (because this is the point of the experiment), why? Like the basic Game Theory scenario that launched a whole science, “Trolleyology” has since iterated the basic problem into all manner of variants, from the Fat Man on a Platform or the Hospital Transplant Dilemma to the Village Hangman (“If you stumble on a village where they are about to hang five people for witchcraft and offer to let four go only if you yourself hang the fifth, do you?”).

Trolley Problems have two particular attributes: one is that they force whoever answers them to compare the outcomes of positive action and inaction; the other is that they force them to face the fact that either choice bears costs. As such, Foot’s Trolley only puts into stark relief a fundamental truth of all moral reasoning: every choice has a cost (There Ain’t No Such Thing as a Free Lunch) and doing nothing is a choice. Both of those principles are so counter-intuitive that quite a lot of people don’t want them to be true, and will twist themselves into all sorts of knots trying to deny them. I make this point because most people focus on the fact that “Trolley Problems” seem always to be about death (“Do five people die or only one?”), but that would be to miss the entire point of the Trolley Problem framework. I once showed a class of Christian high school students the scene in A Beautiful Mind where John Nash explains his revelation of (what would become) Game Theory to his bar buddies, using the “dude” example of how to score with women at the bar: if they all go for the most attractive one, they all block each other and lose, but if they all cooperate to divvy up approaching her friends, they all get dates (yes, not that enlightened an example, but neither is a trolley rolling over people). The students couldn’t get past the notion that the scene meant Game Theory was about getting laid. But getting a date was entirely incidental, just a silly (and intentionally comic) barroom example; they missed the point.

So, too, are people missing the point who act like Foot’s Trolley is a philosophical question about killing people; or who think that, even when it is about killing people, it’s about finding a way in which all the deaths could somehow be avoided, and so people “respond” to Trolley Problems by inventing a bunch of “what ifs” that allow them to “win the game” as it were, which is again missing the point—because the Trolley Problem is designed to model specifically those scenarios where not everyone can be saved. Just as Game Theory was designed to model, and thus analyze, situations in which everyone can’t get everything they want. Which describes much of human reality, being an elaborate construction of compromises—even with yourself. Even when you divvy up the chores so everyone gets to tackle their favorite one, still everyone would prefer to have had no chores at all. There is always a cost: in that scenario, we all have to do a chore. The response to Game Theory cannot be, “Well, we’ll just find a way where no one ever has to do any chores.” Because that’s impossible. Chores have to get done. So someone has to do them. Just as the response to Trolley Problems cannot be, “Well, we’ll just find a way to save everyone,” because that’s like saying, “Well, we’ll just find a choice that costs no one anything.” Because that’s impossible. Everything costs something. So which costs are you going to choose? “I choose nothing” is not an option; because “doing nothing” costs—often, in fact, a lot more than doing something. Thus, inaction is an action. It does not matter how much it “feels” to us like we are making no choice in the matter, that we aren’t doing anything and thus aren’t “responsible” for what happens. We are always responsible for our inaction.

Consider three examples of failed Trolley Problems:

  • “Doing nothing” to fix the levees whose failure devastated Louisiana in the face of Hurricane Katrina ended up costing Louisiana and the Federal government (and thus every taxpayer in the nation) vastly more than fixing the levees in the first place would have. Hence doing “something” instead would have been far cheaper. Inaction ended up outrageously more expensive—and outrageously deadlier, for those who want a lot of “killing” in their thought experiments. This was a Trolley Problem. In money or bodies. “Flipping the switch” would have killed fewer people—and cost us vastly less in resources. We chose to stand there and do nothing, and then claim it wasn’t our fault.
  • “Doing nothing” to fund the cold-weathering of equipment caused the 2021 Texas power grid disaster, which killed hundreds of people and cost tens of billions of dollars, plus immeasurable headache and ruination. Republicans disingenuously complained about “wind power” not being up to snuff, to push their gas lobby, until the story soon became how in fact most of Texas’s failed power came from natural gas plants not having been adequately fitted for cold weather; but the same truth underlies both: New England and Canada and Alaska and Colorado, for example, have tons of wind and gas plants that don’t get knocked out by cold snaps—because they kitted them out to handle it. Texas was warned repeatedly that a Trolley was coming to kill “twenty billion dollars”; they chose to do nothing and let it. They could instead have done something—in fact, what nearly every other state’s energy sector did—and saved billions and billions of dollars. There would still be a cost. Like, say, the few billion it costs to weather-prep gas plants and wind farms; but that would amount to maybe a tenth of what doing nothing ended up costing them. Likewise, far fewer deaths. While hundreds died from the disaster they did nothing to avert, we can expect one or two would have died in, for example, workplace accidents in kitting out the equipment (wind farms in particular have a steady death rate associated with their maintenance; but so does the fossil fuel industry, or in fact any relevant industry). So even counting deaths and not money, this was a straightforward Trolley Problem. That Texas lost. (A rough cost comparison is sketched just after this list.)
  • “Doing nothing” in the face of a global coronavirus pandemic similarly led to many more hospitalizations and deaths, and far more harm to the economy and national security, than the “mask mandates” and “vaccinations” that millions of lunatics ran about like crazed zombies denouncing and avoiding. Even counting the minuscule threats created by those mitigations (the odd person who might have died from a vaccine reaction or breathing problem), the differential in deaths was vast (hundreds, even thousands to one). Anti-vaxxers suck at Trolley Problems. Even by their own internal logic, never mind in factual reality.
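
To make the arithmetic concrete, here is a minimal sketch of the Texas comparison in Python, using round placeholder figures (the real totals are disputed; every number below is an illustrative assumption of mine, not official data): a few billion spent weatherizing up front versus tens of billions, and hundreds of lives, lost by doing nothing.

```python
# Illustrative placeholders only; the point is the ratio, not the exact figures.
cost_of_weatherizing  = 2e9    # assumed: a few billion dollars, spent in advance
cost_of_doing_nothing = 2e10   # assumed: tens of billions in damages and losses

deaths_weatherizing   = 2      # assumed: a couple of workplace accidents during the retrofits
deaths_doing_nothing  = 250    # order of magnitude of the disaster's death toll

print(cost_of_doing_nothing / cost_of_weatherizing)  # ~10x more expensive to "do nothing"
print(deaths_doing_nothing / deaths_weatherizing)    # roughly 100x deadlier to "do nothing"
```

Either way you count it, in money or bodies, “doing nothing” was the costlier pull of the switch.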

Every war is a Trolley Problem (think of the “costs” of surrendering to Hitler vs. fighting him; “WWII was a gigantic Trolley Problem all of its own with no ‘solutions’ except for very difficult, painful, and entirely ‘suboptimal’ ones,” as Samir Chopra points out in “HMS Ulysses and the Trolley Problem”). The legal system is full of Trolley Problems. Recidivism risk assessments in parole decisions are Trolley Problems. The medical system is full of Trolley Problems. Even prescription drugs are a Trolley Problem; by definition, as they require a prescription precisely because they carry risks: one worker is still “stuck on that second track”; do we “save the five” by prescribing? Here the analogy is to the risk of a single patient: a 90% chance they’ll die or get worse without the drug; a 10% chance the drug will kill them or make them worse; or whatever the percentages, it is the same problem, differing only in scale. It’s all the same problem. One model to rule them all.
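
A minimal sketch of that prescription calculus, using the illustrative percentages above (placeholders only; any real drug has its own figures):

```python
p_harm_without_drug = 0.90  # illustrative: chance the patient dies or worsens untreated
p_harm_with_drug    = 0.10  # illustrative: chance the drug itself kills or worsens them

# Prescribing "pulls the switch": it trades a large risk for a smaller one,
# not for no risk at all. The drug's own risk is the worker on the second track.
print(p_harm_without_drug / p_harm_with_drug)  # ~9: the untreated risk is about nine times greater
```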

Every first-past-the-post election is a Trolley Problem; because your failure to vote will always help ensure a worse outcome than if you’d voted for the least worst candidate instead. Just as Maryam Azzam explains in “The Trolley Problem of Politics” at MY Voice or as Sam Kennedy explains in “The 2020 Election: Our Lifetime’s ‘Trolley’ Problem” at CARRE4, although Kennedy still misses the revelation that this was not “our lifetime’s” Trolley Problem, for in fact it was just a starker variant of the same reality defining every “winner takes all” election: we are all deciding whether to do nothing and accept the worse outcome or “pull the switch” for a less-worse one. In every election, the whole of our lives. Indeed, democracy itself is the outcome of a Trolley Problem, as Winston Churchill wryly observed (here in paraphrase), “Democracy is the worst form of government; except for all the others.” Yep. He’s describing a Trolley Problem. Even every executive and legislative policy decision is a Trolley Problem, balancing costs to freedom with costs in disruptions to civil order or safety or the economy, or taking money from one bucket and moving it to another (even tax cuts just move it to private buckets, so it’s still the same bucket game), which is a Trolley Problem of just money and resources all by itself; but again, even deaths can be counted here, if such you need to “get your attention.” How many people does doing nothing about the American health care crisis kill—versus how many fewer would be killed if we’d just fix it already as every other first world nation has done? How many people does spending too little on our citizens’ education kill? From increased crime and poorer life choices and economic opportunities, surely it will be more than if we’d just fund and run our education system well already. How many people does cutting welfare kill? How many dollars cut correlates with how many lives lost? There is an equation for that. And so on down the line. Every policy decision is a decision between two shitty outcomes: someone is getting their budget cut; and quite possibly, someone is going to die in result. How do you decide who? One man down or five? Pull the switch or “do nothing”?

But again it need not be focused on “death calculus.” You can ignore deaths, and just count money instead of bodies; or time, or personnel, or grain, or oil, or electricity, or cars, or bridges, or land—whatever the resources, every decision, including no decision, has costs, whether in one of those or some other respect, or even many at once. You are always deciding, legislatures are always deciding, administrators and bureaucrats and corporate managers are always deciding, between higher or lower costs. But it’s costs all around. No decision is free. And doing nothing is a decision—often the most expensive one. Be it in lost lives, lost years, lost time, lost money, lost food, lost fuel, lost clean air and water; or all of the above. Even in your own personal life, who to date, what job to take, what school to go to, what hobby to allocate time and money to: it’s all Trolley Problems, all the way down. Do nothing, and date no one, get no job, go to no school, enjoy no hobby. “Nothing” has costs. Nothing is a decision. Often, again, the worst one. Hence entire economies can be Trolley Problems, as Radhika Rishi explains well for recent pandemic economic policy in “Trolley Problem and the War for the Control of the Economic Narrative.”

When U.S. hospitals overwhelmed by unvaccinated covid patients switched to crisis protocols, their entire operation became an explicit series of Trolley Problems. But that just made a stark relief out of what hospitals are already doing every day: rationing care based on available money, and the relative costs of treating different ailments. That’s all just less visible because we are so wealthy a nation that only a few people really get the short end of that stick, too few for most to notice who is and who isn’t pulling the trolley switch, and thus who suffers in result (mainly, the poor). Even at the level of resourcing R&D: how do we split resources between curing cancer and preventing future viral pandemics? How much do we divert to ventilators versus cancer drugs? These are Trolley Problems. Fewer deaths from one will result from more resources diverted to it, while more deaths from the other will result from those resources being diverted away. We run some sort of calculus on it to decide, ultimately, but what that really just ends up being is another Trolley Problem, however creatively solved. Because resources are finite (time, money, goods, equipment, real estate, personnel), and every decision as to allocating them costs something; especially no decision at all. I’ll soon be debating here the ethics of animal research, which is often itself a Trolley Problem, only it’s not one worker on the other track, but, say, a dozen rats; do we hit the switch to crush the rats to save the five humans on the main track?
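
Here is a minimal sketch of that kind of calculus in Python, with entirely made-up effectiveness curves (the diminishing-returns shape and the coefficients 40 and 30 are hypothetical, chosen only for illustration): whatever split you pick, lives forgone on one side are the price of lives saved on the other.

```python
import math

budget = 1000  # millions of dollars, assumed fixed

def lives_saved(cancer_spend: float) -> float:
    """Total lives saved under assumed diminishing-returns curves (k * sqrt(spend));
    the coefficients are hypothetical placeholders, not real research-productivity data."""
    pandemic_spend = budget - cancer_spend
    return 40 * math.sqrt(cancer_spend) + 30 * math.sqrt(pandemic_spend)

# Every allocation saves some lives and forgoes others; there is no zero-cost split.
for cancer_spend in (0, 250, 500, 750, 1000):
    print(cancer_spend, round(lives_saved(cancer_spend)))
```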

You have surely heard as well how self-driving cars have “suddenly” exposed how fundamental Trolley Problems are to the entire economy (e.g. The Alan Turing Institute’s “AI’s ‘Trolley Problem’ Problem” and Amar Kumar Moolayil, “The Modern Trolley Problem: Ethical and Economically-Sound Liability Schemes for Autonomous Vehicles”; a problem even starker, by the way, in drone warfare, whether AI-assisted or not). But our entire transportation system is already a Trolley Problem: letting us drive cars on roads kills tens of thousands every year; but we accept that, because shutting down our transportation system would be net worse by all our society’s metrics. So we pulled the trolley switch; and presto, roads and cars and driver’s licenses, and the trolley rolls over tens of thousands of people instead of many times more. Hence it’s not just cars whose AI has to decide which bad outcome to select when only bad outcomes are available—keep barreling forward, or turn; kill the driver, or the pedestrian; kill five pedestrians, or one—because if you think about it, all of our AI (Artificial Intelligence) and even HI (Human Intelligence) has to do this. This happens everywhere in society where a computer or a person is making decisions whose outcome can kill people: electrocute one line worker, or freeze hundreds of Texans to death; deny unemployment benefits to too many people (by accepting a high false positive rate for fraud), or too few (by accepting a high false negative rate instead); send cops after a driver based on face-recognition software prone to misidentifying black people and get, well, shall we say “a few bad outcomes,” or don’t, and let lots of criminals get away; what rate of poisons, toxins, or metal shavings do we allow in cereal boxes or diapers; what rate of food poisoning or infection in meat or apples or lettuce; and so on. Everything has costs. We have to make a decision. And “no decision” is just another decision.
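
The unemployment-fraud example is worth making concrete. Here is a minimal sketch in Python with hypothetical rates and costs (all the numbers are assumptions of mine, not agency data): a stricter filter wrongly denies more legitimate claimants, a looser one pays out more fraud, and either setting is a cost you are choosing.

```python
claims      = 100_000
fraud_rate  = 0.02    # assumed: 2% of claims are fraudulent
cost_denied = 5_000   # assumed harm (in dollars) of wrongly denying a legitimate claim
cost_fraud  = 8_000   # assumed cost of paying out a fraudulent claim

def expected_cost(false_positive_rate: float, false_negative_rate: float) -> float:
    """Total expected cost of a screening policy with the given error rates."""
    legit, fraud = claims * (1 - fraud_rate), claims * fraud_rate
    wrongly_denied = legit * false_positive_rate   # legitimate claimants harmed
    fraud_paid     = fraud * false_negative_rate   # fraudulent claims that slip through
    return wrongly_denied * cost_denied + fraud_paid * cost_fraud

print(expected_cost(0.10, 0.01))  # strict filter: most of the cost lands on legitimate claimants
print(expected_cost(0.01, 0.30))  # lenient filter: more fraud gets paid out instead
```

Which error rate you tune for is a trolley switch; refusing to tune it at all just means accepting whatever costs the default setting imposes.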

The sunk-cost fallacy is a Trolley Problem. Yet it ubiquitously plagues individuals, corporations, and governments: once we have invested so much money in a course of action, even when we realize it was a bad decision and will continue costing even more to no result, we are reluctant to scrap it, even though scrapping it is obviously the most rational decision. Giving it up feels like a steep cost, when really, we already lost all that money, and we should re-think our situation in terms of what we have now, not “what we had then.” Cut your losses. Retool with what you still have into a more efficient direction. It’s a reality even at the poker table: “pot committed” is the tactical tight spot a player falls into when they have bet so much into a pot that they feel they can’t afford to fold, and so keep betting all the rest they have and go bust, even when they know they probably have a losing hand, simply because folding would be “too expensive.” The smart gambler knows when to let it go; the loss may be high, but you will still be in the game, or still have something to take home. Successful gamblers are by necessity good at Trolley Problems. They know when to pull the trolley lever and take the smaller loss, even when that loss is still itself steep.
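
A minimal sketch of the poker case in Python, with a hypothetical hand (the chip counts and win probability are made up for illustration): measured from the decision point, the chips you have already put in the pot are gone whether you call or fold, so they should not enter the calculation at all.

```python
def ev_call(p_win: float, pot: float, call: float) -> float:
    """Expected value of calling, measured from the decision point.
    pot  = everything already in the middle (including your own past bets,
           which are sunk either way) plus the opponent's bet you must match.
    call = the additional chips you must put in now."""
    return p_win * pot - (1 - p_win) * call

# Hypothetical spot: 2,000 chips already in the pot (800 of them yours),
# the opponent bets 1,000 more, and you estimate only a 20% chance of winning.
pot, call, p_win = 3000, 1000, 0.20

print(ev_call(p_win, pot, call))  # about -200: calling loses chips on average
# Folding is worth 0 from here; the 800 you already bet is lost either way.
# Feeling "pot committed" is counting that sunk 800 as a reason to call anyway.
```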

Trolley Problems can structurally define all competing-cost problems (which is almost every decision there is, differing only in scale). Foot used deaths simply to make the question more salient. And there is plenty of real-world death calculus that Trolley Problems can in fact model correctly. But you can substitute anything: different costs in money; different costs in lost time; different costs in allocations of personnel; different costs in allocations of real estate; different costs in emotional harms, anxieties, stress; different costs in injuries and hospitalizations; different costs to one’s life expectancy; and so on. But above all, remember, almost all real world decisions are risk-cost weighted. In other words, many real scenarios are not “do nothing and five people will die, or act and cause only one to die,” but “do nothing and there is an 80% chance of hundreds dying over a ten year period, or act and there is a 15% chance half a dozen deaths will result instead,” wherein there are nonzero probabilities of no-cost outcomes on that metric (a 20% chance inaction will cost nothing; an 85% chance action will cost nothing). There are even inverted cost outcomes (a 20% × 15% = 3% chance that going for the better option will have, instead, the worst result). That makes decision-making a lot harder, because now it’s about probabilities, not certainties, and people love to dodge and fudge probabilities. But still one can show why one of these risks is unacceptably greater than the other, and a correct decision should follow a relative weighting of the risk. And “death” won’t be the only thing under risk. All those other costs may be as well.
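
Here is a minimal sketch of that risk-weighted comparison in Python, using the illustrative probabilities above (“hundreds” is taken as 200 purely for the arithmetic; every number is a placeholder):

```python
p_bad_if_nothing, deaths_if_nothing = 0.80, 200  # do nothing: 80% chance ~200 die over ten years
p_bad_if_act,     deaths_if_act     = 0.15, 6    # act: 15% chance ~6 die instead

expected_deaths_nothing = p_bad_if_nothing * deaths_if_nothing  # ~160
expected_deaths_act     = p_bad_if_act * deaths_if_act          # ~0.9

# The "inverted" outcome: you act, the bad result lands anyway, and inaction
# would (this time) have cost nothing.
p_inverted = (1 - p_bad_if_nothing) * p_bad_if_act              # 0.20 * 0.15 = ~0.03, i.e. ~3%

print(expected_deaths_nothing, expected_deaths_act, p_inverted)
```

Even with that small chance of regret, the expected body count of inaction is still vastly higher, which is the whole point of weighting the risks rather than dodging them.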

On that point see my discussion under “Morality Is Risk Management”, and “The Rational Actor Paradigm in Risk Theories” by the Renn group. As I wrote in Your Own Moral Reasoning:

The probability of [a given] outcome is greater on that behavior than on any alternative behavior, such that even if [that] outcome is not guaranteed, it is still only rational to engage the behavior that will have the greatest likelihood of the desired outcome. By analogy with vaccines that have an adverse reaction rate: when the probability of an adverse reaction is thousands of times less than the probability of contracting the disease being vaccinated against, it is not rational to complain that, when you suffer an adverse reaction from that vaccine, being vaccinated was the incorrect decision. To the contrary, it remained the best decision at the time, because the probability of a worse outcome was greater at the time for a decision not to be vaccinated. Analogously, that some evil people prosper is not a valid argument for following their approach, since for every such person attempting that, thousands will be ground under in misery, and only scant few will roll the lucky dice. It is not rational to gamble on an outcome thousands to one against, when failure entails misery, and by an easy difference in behavioral disposition you can ensure a sufficiently satisfying outcome with odds thousands to one in favor—as then misery is thousands to one against rather than thousands to one in favor. This is also why pointing to good people ending in misery is not a valid argument against being good.

This is describing a Trolley Problem, only with abstract risk as the metric rather than “dead railway workers.” Many people do indeed feel an aversion to “acting” (e.g. getting vaccinated) for fear of being responsible for even an extremely unlikely bad outcome, and this emotional aversion can cause them to make a much worse decision: to do nothing, thereby choosing a vastly higher risk of a bad outcome. It does not matter whether they end up getting hit with that risk or not; it was still an irrational decision to do nothing, and accept a thousand times greater chance of death or hospitalization than would have resulted from taking the obvious positive action (think of drunk driving, or randomly shooting a gun into the night in a populated suburb). But our brains are badly designed; so people “feel” like if they do nothing, then they can’t be responsible for what happens. After all, they didn’t “do” anything, right? Sure. Until Katrina hits your levees. Inaction is action. No decision is a decision.

It is of course important to realize that there is no such thing as “a” solution to Trolley Problems. It is not always “flip the switch; save five lives, lose one.” Often that’s the best decision. But sometimes the other decision can have worse consequences, and in those cases, the correct move is the opposite (let the trolley roll). Hence the reason the Fat Man and the Transplant iterations lead to different conclusions than other Trolley scenarios is that those implicate wider consequences upon a social system: when you account for all the costs of normalizing a certain choice, what “seems” the lower cost (“just one guy” in each case), actually isn’t. I discuss this in my section on Trolley Problems in “On Hosing Thought Experiments.” But in short, for instance, far more people will die if you create a system where no one goes to hospitals for fear of being cannibalized there, and far more harm will result if no one uses trains anymore for fear of being pushed in front of one, such that “the one life for many” equation no longer holds up. And sometimes there are more options than two, more than one place to swing the trolley switch, and you shouldn’t overlook any if they are viable (e.g., if a village asks you to hang one innocent person so they’ll let go four, maybe, instead, kill the villagers: thereby modeling every justified decision to go to war, ever). Thus, just as with Game Theory, the best decision can depend on the individual circumstances. All that the Trolley Problem framework does for you is clarify what the costs of indecision really are (rather than pretending there are none), so you can actually evaluate whether doing nothing is indeed the best move or not, in each real situation. Hence the lesson to learn from the Trolley Problem is just what I started with: recognize that doing nothing is as substantive an action as anything else, a choice you are responsible for making; and that every available choice, in every possible situation, bears costs, so you should be sure you know what those costs are, and that you find them acceptable before choosing them.
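
To see how accounting for the wider, systemic cost flips the Transplant calculus, here is a minimal sketch in Python (every number is a hypothetical of mine, purely for illustration):

```python
population               = 1_000_000
hospital_use_rate        = 0.10   # assumed fraction who seek hospital care in a year
deaths_averted_per_visit = 0.02   # assumed chance an average visit averts a death

# The "one life for five" ledger, taken in isolation:
net_lives_saved_by_harvesting = 5 - 1  # 4

# The systemic cost: suppose normalizing the practice deters even 5% of would-be patients.
deterred_patients = population * hospital_use_rate * 0.05
extra_deaths_from_deterrence = deterred_patients * deaths_averted_per_visit

print(net_lives_saved_by_harvesting)  # 4
print(extra_deaths_from_deterrence)   # ~100: the "cheaper" option is now far more expensive
```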

§
