Ah the infamous Trolley Problem. So ubiquitous, we find it meaningfully featured even in the television show The Good Place. A lot of people don’t like the Trolley Problem. Its very existence vexes them. They’d rather complain about how it supposedly doesn’t teach us anything, and how we can just sweep it under the rug as just another bad idea, along with digital watches, portable currency, and coming down out of the trees in the first place. At Aeon we’re told Trolley Problems are too simplistic to usefully analyze reality (just like Game Theory…oh wait). At Quartz we’re told Trolley Problems are of no use because it can’t be decided how to program deadly AI from “first principles” (a notion just as obviously false: if you have no first principles to work from, you can have no principles at all). At Slate we’re told Trolley Problems are useless because people don’t actually know how they’d behave in real life, so what use is a thought experiment? Never mind that the entire fields of Economics, Game Theory, and Political Science rely on foundational thought experiments continuously applied to real-world situations, and that thought experiments are the most common and necessary instrument in Psychology, Contingency Policy, and Crisis Management. Yes, you can Hose Thought Experiments; and average Janes and Joes (and especially Karens and Chads), having the least skill at it, will fail at it the most often; but that they can’t do heart surgery or build a rocket, either, is not an argument against heart surgery or rockets.
As it happens, literally almost everything is a Trolley Problem. So these attempts to escape it won’t do you any good. Like the guy who insists he has no metaphysics—a claim that, in the very act of declaring it, embraces a conclusion in metaphysics; and worse even than that, a conclusion that doesn’t even track how he actually behaves, which instead will be in accordance with a rather elaborate metaphysics, one he has simply committed, like the apocryphal ostrich, to never examining or questioning, rather than actually abandoning (much less fixing). I’ll explain what I mean by “everything is a Trolley Problem” shortly. But first I’ll catch you up to speed on what I’m talking about. In case you didn’t know, the standard Trolley Problem, developed by Philippa Foot in the 1960s (one of the greatest women in philosophy in the 20th century), is most simply described as: a runaway trolley is flying down the track, about to run over and kill five workers (who can’t get out of its way, for some reason that doesn’t matter to the point), and you happen to be standing next to a switch that, if you throw it, will divert the trolley onto a different track, where it’ll only kill one worker. What do you do? And more importantly (because this is the point of the experiment), why? Like the basic Game Theory scenario that launched a whole science, “Trolleyology” has since iterated the basic problem into all manner of variants, from the Fat Man on a Platform or the Hospital Transplant Dilemma to the Village Hangman (“If you stumble on a village where they are about to hang five people for witchcraft and offer to let four go only if you yourself hang the fifth, do you?”).
Trolley Problems have two particular attributes: one is that they force the experimenter to compare the outcomes of positive action and inaction; the other is that they force the experimenter to face the fact that either choice bears costs. As such, Foot’s Trolley only puts into stark relief a fundamental truth of all moral reasoning: every choice has a cost (There Ain’t No Such Thing as a Free Lunch) and doing nothing is a choice. Both of those principles are so counter-intuitive that quite a lot of people don’t want them to be true, and will twist themselves into all sorts of knots trying to deny them. I make this point because most people focus on the fact that “Trolley Problems” seem always about death (“Do five people die or only one?”), but that would be to miss the entire point of the Trolley Problem framework. I once showed a class of Christian high school students the scene in A Beautiful Mind where John Nash explains his revelation of (what would become) Game Theory to his bar buddies, using the “dude” example of how to score with women at the bar: if they all go for the most attractive one, they all block each other and lose, but if they all cooperate to divvy up approaching her friends, they all get dates (yes, not that enlightened an example, but neither is a trolley rolling over people). The students couldn’t get past the notion that the scene meant Game Theory was about getting laid. But getting a date was entirely incidental, just a silly (and intentionally comic) barroom example; they missed the point.
So, too, are people missing the point who act like Foot’s Trolley is a philosophical question about killing people; or who think that even when it is about killing people, it’s about how to find a way in which all the deaths could be avoided somehow, and so people “respond” to Trolley Problems by inventing a bunch of “what ifs” that allow them to “win the game” as it were, which is again missing the point—because the Trolley Problem is designed to model specifically those scenarios where they can’t all be saved. Just as Game Theory was designed to model, and thus analyze, situations in which everyone can’t get everything they want. Which describes much of human reality, being an elaborate construction of compromises—even with yourself. Even when you divvy up the chores so everyone gets to tackle their favorite one, still everyone would prefer to have had no chores at all. There is always a cost: in that scenario, we all have to do a chore. The response to Game Theory cannot be, “Well, we’ll just find a way where no one ever has to do any chores.” Because that’s impossible. Chores have to get done. So someone has to do them. Just as the response to Trolley Problems cannot be, “Well, we’ll just find a way to save everyone,” because that’s like saying, “Well, we’ll just find a choice that costs no one anything.” Because that’s impossible. Everything costs something. So which costs are you going to choose? “I choose nothing” is not an option; because “doing nothing” costs—often, in fact, a lot more than doing something. Thus, inaction is an action. It does not matter how much it “feels” to us like we are making no choice in the matter, that we aren’t doing anything and thus aren’t “responsible” for what happens. We are always responsible for our inaction.
Consider three examples of failed Trolley Problems:
- “Doing nothing” to fix the levees whose failure devastated Louisiana in the face of Hurricane Katrina ended up costing Louisiana and the Federal government (and thus every taxpayer in the nation) vastly more than fixing the levees in the first place would have. Hence doing “something” instead would have been far cheaper. Inaction ended up outrageously more expensive—and outrageously deadlier, for those who want a lot of “killing” in their thought experiments. This was a Trolley Problem. In money or bodies. “Flipping the switch” would have killed fewer people—and cost us vastly less in resources. We chose to stand there and do nothing, and then claim it wasn’t our fault.
- “Doing nothing” to fund the cold-weathering of equipment caused the 2021 Texas Power Grid Disaster, which killed hundreds of people and cost tens of billions of dollars, and immeasurable headache and ruination. While Republicans disingenuously complained about “wind power” not being up to snuff, to push their gas lobby (such that the story soon became how in fact most of Texas’s failed power came from natural gas plants not having been adequately fitted for cold weather), the same truth actually still underlies both: New England and Canada and Alaska and Colorado, for example, have tons of wind and gas plants that don’t get knocked out by cold snaps—because they kitted them out to handle it. Texas was warned repeatedly that a Trolley was coming to kill “twenty billion dollars”; they chose to do nothing and let it. They could instead have done something—in fact, what nearly every other state’s energy sector did—and saved billions and billions of dollars. There would still be a cost. Like, say, the few billion cost to weather-prep gas plants and wind farms; but it would amount to maybe a tenth of what doing nothing ended up costing them. Likewise, far fewer deaths. While hundreds died from the disaster they did nothing to avert, we can expect one or two would have died in, for example, workplace accidents in kitting out the equipment (wind farms in particular have a steady death rate associated with their maintenance; but so does the fossil fuel industry, or in fact any relevant industry). So even counting deaths and not money, this was a straightforward Trolley Problem. That Texas lost.
- “Doing nothing” in the face of a global coronavirus pandemic similarly led to many more hospitalizations and deaths, and far more harm to the economy and national security, than the “mask mandates” and “vaccinations” that millions of lunatics ran about like crazed zombies denouncing and avoiding. Even counting the minuscule threats created by those mitigations (the odd person who might have died from a vaccine reaction or breathing problem), the differential in deaths was vast (hundreds, even thousands to one). Anti-vaxxers suck at Trolley Problems. Even by their own internal logic, never mind in factual reality.
Every war is a Trolley Problem (think of the “costs” of surrendering to Hitler vs. fighting him; “WWII was a gigantic Trolley Problem all of its own with no ‘solutions’ except for very difficult, painful, and entirely ‘suboptimal’ ones,” as Samir Chopra points out in “HMS Ulysses and the Trolley Problem”). The legal system is full of Trolley Problems. Recidivism risk assessments in parole decisions are Trolley Problems. The medical system is full of Trolley Problems. Even prescription drugs are a Trolley Problem; by definition, as they require a prescription precisely because they carry risks: one worker is still “stuck on that second track”; do we “save the five” by prescribing? Here the analogy is to the risk of a single patient: 90% chance they’ll die or get worse without the drug; 10% chance the drug will kill them or make them worse; or whatever the percentages, same problem, differing only in scale. It’s all the same problem. One model to rule them all.
Every first-past-the-post election is a Trolley Problem; because always your failure to vote will help ensure a worse outcome than if you’d voted for the least worst candidate instead. Just as Maryam Azzam explains in “The Trolley Problem of Politics” at MY Voice or as Sam Kennedy explains in “The 2020 Election: Our Lifetime’s ‘Trolley’ Problem” at CARRE4, although Kennedy still misses the revelation that this was not “our lifetime’s” Trolley Problem, for in fact it was just a starker variant of the same reality defining every “winner takes all” election: we are all deciding whether to do nothing and accept the worse outcome or “pull the switch” for a less-worse one. In every election, the whole of our lives. Indeed, democracy itself is the outcome of a Trolley Problem, as Winston Churchill wryly observed (here in paraphrase), “Democracy is the worst form of government; except for all the others.” Yep. He’s describing a Trolley Problem. Even every executive and legislative policy decision is a Trolley Problem, balancing costs to freedom with costs in disruptions to civil order or safety or the economy, or taking money from one bucket and moving it to another (even tax cuts just move it to private buckets, so it’s still the same bucket game), which is a Trolley Problem of just money and resources all by itself; but again, even deaths can be counted here, if that’s what you need to “get your attention.” How many people does doing nothing about the American health care crisis kill—versus how many fewer would be killed if we’d just fix it already as every other first world nation has done? How many people does spending too little on our citizens’ education kill? From increased crime and poorer life choices and economic opportunities, surely it will be more than if we’d just fund and run our education system well already. How many people does cutting welfare kill? How many dollars cut correlate with how many lives lost? There is an equation for that. And so on down the line. Every policy decision is a decision between two shitty outcomes: someone is getting their budget cut; and quite possibly, someone is going to die in result. How do you decide who? One man down or five? Pull the switch or “do nothing”?
But again it need not be focused on “death calculus.” You can ignore deaths, and just count money instead of bodies; or time, or personnel, or grain, or oil, or electricity, or cars, or bridges, or land—whatever the resources, every decision, including no decision, has costs, whether in one of those or some other respect, or even many at once. You are always deciding, legislatures are always deciding, administrators and bureaucrats and corporate managers are always deciding, between higher or lower costs. But it’s costs all around. No decision is free. And doing nothing is a decision—often the most expensive one. Be it in lost lives, lost years, lost time, lost money, lost food, lost fuel, lost clean air and water; or all of the above. Even in your own personal life, who to date, what job to take, what school to go to, what hobby to allocate time and money to: it’s all Trolley Problems, all the way down. Do nothing, and date no one, get no job, go to no school, enjoy no hobby. “Nothing” has costs. Nothing is a decision. Often, again, the worst one. Hence entire economies can be Trolley Problems. For instance, as is well explained for recent pandemic economic policy by Radhika Rishi in “Trolley Problem and the War for the Control of the Economic Narrative”.
When U.S. hospitals overwhelmed by unvaccinated covid patients switched to crisis protocols, their entire operation became an explicit series of Trolley Problems. But that just made a stark relief out of what hospitals are already doing every day: rationing care based on available money, and the relative costs of treating different ailments. That’s all just less visible because we are so wealthy a nation that only a few people really get the short end of that stick, too few for most of us to notice who is and who isn’t pulling the trolley switch, and thus who suffers in result (mainly, the poor). Even at the level of resourcing R&D, e.g. how do we split resources between curing cancer and preventing future viral pandemics? How much do we divert to ventilators versus cancer drugs? These are Trolley Problems. Fewer deaths from one will result from more resources diverted to it, while more deaths from the other will result from those resources being diverted away. We run some sort of calculus on it to decide, ultimately, but what that really just ends up being is another Trolley Problem, however creatively solved. Because resources are finite (time, money, goods, equipment, real estate, personnel), and every decision as to allocating them costs something; especially no decision at all. I’ll soon be debating here the ethics of animal research, which is often itself a Trolley Problem, only it’s not one worker on the other track, but, say, a dozen rats; do we hit the switch to crush the rats to save the five humans on the main line?
You might surely have heard as well how self-driving cars have “suddenly” exposed how fundamental Trolley Problems are to the entire economy (e.g. The Alan Turing Institute’s “AI’s ‘Trolley Problem’ Problem” and Amar Kumar Moolayil, “The Modern Trolley Problem: Ethical and Economically-Sound Liability Schemes for Autonomous Vehicles”; a problem even starker, by the way, in drone warfare, whether AI-assisted or not). But our entire transportation system is already a Trolley Problem: letting us drive cars on roads kills tens of thousands every year; but we accept that, because shutting down our transportation system would be net worse by all our society’s metrics. So we pulled the trolley switch; and presto, roads and cars and driver’s licenses, and the trolley rolls over tens of thousands of people instead of many times more. Hence it’s not just cars whose AI has to decide which bad outcome to select when only bad outcomes are available—keep barreling forward, or turn; kill the driver, or the pedestrian; kill five pedestrians, or one—because if you think about it, all of our AI (Artificial Intelligence) and even HI (Human Intelligence) has to do this. Everywhere in society where a computer or a person is making decisions the outcome of which can kill people: electrocute one line worker, or freeze to death hundreds of Texans; reject unemployment benefits to too many people (by accepting a high false positive rate for fraud), or too few (by accepting a high false negative rate instead); send cops after a driver based on face-recognition software prone to misidentifying black people and get, well, shall we say “a few bad outcomes,” or don’t, and let lots of criminals get away; what rate of poisons, toxins, or metal shavings do we allow in cereal boxes or diapers; what rate of food poisoning or infection in meat or apples or lettuce; and so on. Everything has costs. We have to make a decision. And “no decision” is just another decision.
The sunk-cost fallacy is a Trolley Problem. Yet it ubiquitously plagues individuals, corporations, and governments: once we have invested so much money in a course of action, even when we realize it was a bad decision and will continue costing even more to no result, we are reluctant to scrap it, even though scrapping it is obviously the most rational decision. Giving it up feels like a steep cost, when really, we already lost all that money, and we should re-think our situation in terms of what we have now, not “what we had then.” Cut your losses. Retool with what you still have into a more efficient direction. It’s a reality even at the poker table: “pot committed” is the tactical tight spot a player finds themselves in when they have bet so much into a pot that they feel they can’t afford to stop betting, and so ride the rest of their stack to bust even when they know they probably have a losing hand, simply because folding would be “too expensive.” The smart gambler knows when to let it go; the loss may be high, but you will still be in the game, or still have something to take home. Successful gamblers are by necessity good at Trolley Problems. They know when to pull the trolley lever and take the smaller loss, even when that loss is still itself steep.
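To make the “pot committed” arithmetic concrete, here is a minimal sketch in Python (the hand and every number in it are invented for illustration): the rational call/fold decision weighs only the current pot and the future cost of calling, never the chips already sunk.

```python
# A minimal pot-odds sketch (all numbers hypothetical): the decision
# depends only on the current pot and the cost to call, not on how
# many of the chips already in that pot used to be yours.
def should_call(pot: float, to_call: float, win_prob: float) -> bool:
    """Return True if calling has positive expected value.

    EV(call) = win_prob * pot - (1 - win_prob) * to_call
    EV(fold) = 0, because sunk chips are lost either way.
    """
    return win_prob * pot - (1 - win_prob) * to_call > 0

# Hypothetical hand: the pot holds 1400 chips (900 of them formerly
# yours), it costs 500 more to call, and you estimate a 20% chance
# of winning. EV(call) = 0.2*1400 - 0.8*500 = -120, so fold: the 900
# you already bet counts only as part of the pot, like anyone's chips.
print(should_call(pot=1400, to_call=500, win_prob=0.20))  # False
```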
Trolley Problems can structurally define all competing-cost problems (which are almost all decisions whatsoever, differing only in scale). Foot used deaths simply to make the question more salient. And there is plenty of real-world death calculus that Trolley Problems can in fact model correctly. But you can substitute anything: different costs in money; different costs in lost time; different costs in allocations of personnel; different costs in allocations of real estate; different costs in emotional harms, anxieties, stress; different costs in injuries and hospitalizations; different costs to one’s life expectancy; and so on. But above all, remember, almost all real world decisions are risk-cost weighted. In other words, many real scenarios are not “do nothing and five people will die, or act and cause only one to die,” but “do nothing and there is an 80% chance of hundreds dying over a ten year period, or act and there is a 15% chance half a dozen deaths will result instead,” wherein there are nonzero probabilities of no-cost outcomes on that metric (a 20% chance inaction will cost nothing; an 85% chance action will cost nothing). There are even inverted cost outcomes (a 20% × 15% = 3% chance that going for the better option will have, instead, the worst result). That makes decision-making a lot harder, because now it’s about probabilities, not certainties, and people love to dodge and fudge probabilities. But still one can show why one of these risks is unacceptably greater than the other, and a correct decision should follow a relative weighting of the risk. And “death” won’t be the only thing under risk. All those other costs may be as well.
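As a concrete illustration of that risk-weighting, here is a minimal sketch using the hypothetical numbers from the paragraph above (taking “hundreds” to mean 200, purely for the arithmetic):

```python
# Risk-weighted Trolley comparison (the hypothetical numbers above).
p_bad_inaction, deaths_inaction = 0.80, 200  # "hundreds" assumed = 200
p_bad_action, deaths_action = 0.15, 6        # "half a dozen"

ev_inaction = p_bad_inaction * deaths_inaction  # 160.0 expected deaths
ev_action = p_bad_action * deaths_action        # 0.9 expected deaths
print(round(ev_inaction, 1), round(ev_action, 1))  # 160.0 0.9

# The "inverted" outcome: acting goes badly AND inaction would have
# cost nothing on this metric.
p_inverted = p_bad_action * (1 - p_bad_inaction)
print(f"{p_inverted:.0%}")  # 3%
```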
On that point see my discussion under “Morality Is Risk Management”, and “The Rational Actor Paradigm in Risk Theories” by the Renn group. As I wrote in Your Own Moral Reasoning:
The probability of [a given] outcome is greater on that behavior than on any alternative behavior, such that even if [that] outcome is not guaranteed, it is still only rational to engage the behavior that will have the greatest likelihood of the desired outcome. By analogy with vaccines that have an adverse reaction rate: when the probability of an adverse reaction is thousands of times less than the probability of contracting the disease being vaccinated against, it is not rational to complain that, when you suffer an adverse reaction from that vaccine, being vaccinated was the incorrect decision. To the contrary, it remained the best decision at the time, because the probability of a worse outcome was greater at the time for a decision not to be vaccinated. Analogously, that some evil people prosper is not a valid argument for following their approach, since for every such person attempting that, thousands will be ground under in misery, and only scant few will roll the lucky dice. It is not rational to gamble on an outcome thousands to one against, when failure entails misery, and by an easy difference in behavioral disposition you can ensure a sufficiently satisfying outcome with odds thousands to one in favor—as then misery is thousands to one against rather than thousands to one in favor. This is also why pointing to good people ending in misery is not a valid argument against being good.
This is describing a Trolley Problem, only with abstract risk as the metric rather than “dead railway workers.” Many people do indeed feel an aversion to “acting” (e.g. getting vaccinated) for fear of being responsible for even an extremely unlikely bad outcome, and this emotional aversion can cause them to make a much worse decision: to do nothing, thereby choosing a vastly higher risk of a bad outcome. It does not matter whether they ever get hit with that risk; it was still an irrational decision to do nothing, and receive a thousand times greater chance of death or hospitalization, than would have resulted from taking the obvious positive action (think, drunk driving, or randomly shooting a gun into the night in a populated suburb). But our brains have been badly designed; so people “feel” like if they do nothing, then they can’t be responsible for what happens. After all, they didn’t “do” anything, right? Sure. Until Katrina hits your levees. Inaction is action. No decision is a decision.
It is of course important to realize that there is no such thing as “a” solution to Trolley Problems. It is not always “flip the switch; save five lives, lose one.” Often that’s the best decision. But sometimes the other decision can have worse consequences, and in those cases, the correct move is the opposite (let the trolley roll). Hence the reason the Fat Man and the Transplant iterations lead to different conclusions than other Trolley scenarios is that those implicate wider consequences upon a social system: when you account for all the costs of normalizing a certain choice, what “seems” the lower cost (“just one guy” in each case), actually isn’t. I discuss this in my section on Trolley Problems in “On Hosing Thought Experiments.” But in short, for instance, far more people will die if you create a system where no one goes to hospitals for fear of being cannibalized there, and far more harm will result if no one uses trains anymore for fear of being pushed in front of one, such that “the one life for many” equation no longer holds up. And sometimes there are more options than two, more than one place to swing the trolley switch, and you shouldn’t overlook any if they are viable (e.g., if a village asks you to hang one innocent person so they’ll let go four, maybe, instead, kill the villagers: thereby modeling every justified decision to go to war, ever). Thus, just as with Game Theory, the best decision can depend on the individual circumstances. All that the Trolley Problem framework does for you is clarify what the costs of indecision really are (rather than pretending there are none), so you can actually evaluate whether doing nothing is indeed the best move or not, in each real situation. Hence the lesson to learn from the Trolley Problem is just what I started with: recognize that doing nothing is as substantive an action as anything else, it is a choice you are responsible for making; and that every available choice, in every possible situation, bears costs, so you should be sure you know what those costs are, and that you find them acceptable before choosing them.
Yeah, it’s true that game theory and trolley problems are, on their own, rarefied situations that don’t ever match reality.
But… many situations in life are analogous enough that they do exactly what we want a model to do: eliminate extraneous noise and irrelevant variables and drill down on what we’re trying to examine. As long as we don’t pretend that nuance isn’t relevant when we then make decisions in the real world, we can learn something.
And what I find astonishing about the resistance to the trolley problem is that it’s self-evident that decisions where our active agency can harm a minority to help a majority are common.
I think part of what may be going on is the very inactivity bias that the trolley problem helps to expose in the first place. People like to think that being neutral, being uninvolved, not doing something, has a special utility. But inaction is a choice. As Howard Zinn put it, you can’t be neutral on a moving train. Inaction has to be justified exactly the same way action does. I have noticed in my political discussions that those people who have a status quo bias in various ways tend to dislike the suggestion that some action may be necessary as long as that action is not literally totally benign. If any stakeholders are harmed, the action is viewed as immoral, even if the status quo harms many more.
You are quite right (and there is some science and experimental philosophy establishing it): this gets to psychology, and a cognitive bias we have evolved in our brains, that makes us “feel” differently about “doing nothing” vs. actively doing something (a defective distinction that does suit social animals expected to “go along” with social norms, and hence “do nothing” most of the time, as a rather brutal survival strategy, as most amoral survival strategies are).
And oh, thank you! Another good example: status quo bias exemplifies a Trolley Problem. Just as sunk cost fallacies do. I suspect quite a lot of decisional fallacies and cognitive biases reflect Trolley Problems in one way or another (like those two do).
Dr. Carrier, I should point out that there is one important difference between the three examples of failed Trolley Problems and the standard Trolley Problem.
In the latter there were certain outcomes (binary known outcomes at that) so risk management was not a factor or consideration in the game or one’s decision making process.
That is not the case for the three examples that you provided. One might say that the outcomes were “inevitable”, but that is only with the benefit of hindsight. Like someone that never wears a seatbelt is definitely at a greater risk of losing their life in an automobile accident. And if they are choosing between the convenience of not having to buckle/unbuckle versus the greater risk of being more seriously injured or killed in a POSSIBLE major auto accident, there are not 2 binary outcomes that they are certain to be dealing with. If that person eventually dies from an auto accident where it is apparent that wearing a seat belt would’ve probably saved their life, then one might be tempted (with the benefit of hindsight) to equate their decision to a Trolley Problem. But certainly there are other non-seat belt wearers out there that won’t be met with such a fate, so the outcome of this decision/scenario lacks the certainty of a decision that one is facing in the traditional trolley problem scenario.
I discuss how we translate Trolley Problems into risk assessment models in the article. You might want to revisit those paragraphs. They address your concern directly and in detail. In fact, I give examples almost identical to yours.
And that’s without even mentioning that all probabilities approach 100% as t increases. So it’s really just a frequency game, not so much a probability game. As any insurance actuary can explain.
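A minimal sketch of that compounding (the per-exposure probability is invented for illustration): the chance of at least one bad event in n independent exposures is 1 - (1 - p)^n, which tends toward 1 as n grows.

```python
# Why small per-exposure risks approach certainty with repetition.
p = 0.0001  # assumed chance of a serious event per single exposure

for n in (100, 1_000, 10_000, 100_000):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>7} exposures: {at_least_once:.1%}")
# prints roughly 1.0%, 9.5%, 63.2%, 100.0%: over a lifetime of
# exposures, the expected *frequency* of events is the decision-
# relevant number, not any single trial's probability.
```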
Understood. But for it to be a Trolley Problem to start with, doesn’t the subject (decision maker) first have to see a choice is at hand, where making the wrong decision at least carries the risk of a possible bad outcome?
Let me give you this one non-hypothetical example.
I grew up as a kid in the 70’s. Back then cars were equipped with safety belts but almost nobody used them. There were no laws to enforce their use and no educational campaigns yet to educate the public.
So back then my parents (like most back then) literally had no clue that by not having us wear them they were putting our lives at greater risk in the event of a crash. Now fortunately that didn’t happen, but my point is that their decision to have us buckle up or not was not an actual trolley problem for them, because they didn’t see the potential harm in their actions or inactions – any more than someone that decides to pick up and handle a frog they don’t know is poisonous. On the other hand, for someone that was aware of such dangers (or risks), those types of decisions do present a trolley problem.
So the point that I’m trying to make in all of this is that before we can assume that anything is a trolley problem for anyone, we must first come to understand if/how they view the potential dangers and risks involved. A modern day example might be global warming, where despite the available scientific data there are some that for whatever reason don’t trust/believe it, so in their minds there is not a vexing trolley problem that must be dealt with.
Obviously. The point of the model is to help people recognize when the template applies, so they don’t make the mistake of thinking inaction has no costs or isn’t itself a decision they are making.
The broader issue (which applies to all moral theories across the board and is not particular to any model, whether Game Theory or Trolley) of what we do with “unknowable cases” (decisions we are unaware we are making or have no information pertaining to the differential outcomes and can’t get any in time) I address in my chapter on moral theory in The End of Christianity (n. 28, p. 424, and nn. 34 and 35, pp. 424-25). That also discusses “rationalized stances” (whether the delusional, rather than the genuinely ignorant, are still culpable).
As to your example, though, it is not an applicable case. Seat belt laws began in 1968. So your parents cannot plausibly have been ignorant of the issue in the 1970s.
Global Warming, likewise, is not an applicable case. People are willfully refusing to accept vast amounts of data and are rationalizing a do-nothing stance rather than taking one in ignorance (they cannot, like some lost jungle tribe, claim “not to know” the relevant information). It does not matter what harebrained excuses the person sitting by the lever watching the Trolley gives for not pulling it. They are still choosing, consciously, not to pull it. And the Trolley model reveals that that is, indeed, a decision, and just how costly that decision is (to anyone who actually cares about those costs).
I occasionally teach law and ethics to Community Services students. The main issue is responding to abuse in care. In many texts the Trolley Problem comes up. In Peter Weir’s movie Master and Commander there is a scene where a sailor falls overboard and is clinging to a broken mast still tethered to the ship by ropes. The mast is also threatening to drag the ship under. So Capt Aubrey cuts the ropes, condemning the sailor to drown but giving the ship’s crew a chance. I don’t show the students that scene. I show them the court-martial scene from Peter Ustinov’s Billy Budd. By the time I have finished with the students they remain unsure if Capt Vere is right or wrong to hang Billy. They most certainly don’t want to be in Capt Vere’s place. I feel this is a more pertinent examination of life issues than the contrived trolley car problem. Life is greys, rarely black and white.
Except those are all just Trolley Cars; so it isn’t contrived.
The resistance people have to the simplified model involving “trolleys” teaches us more about them, and their need to avoid admitting all decisions fit the model, than about what form the model presents itself in.
And that’s even before we get to the fact that the hangman model is even in the Trolley literature already. And the article I linked to on the HMS Ulysses is pretty much an iteration of your naval example and how it models all decisions to go to war and act in wars.
The beauty is that this simple model models so many various circumstances, and people only “notice” when extreme examples are presented (someone has to die or even, gasp, be killed). Which is the exact opposite of the lesson Foot wanted us to draw (she was using the extreme example to force people to notice, specifically so they’d stop ignoring how less extreme examples don’t differ in construct—they are all the same decision matrix).
So one thing I’ve found is that there is a really powerful question you can ask that makes the Aubrey case more like the Budd case, or that can make the trolley problem more profound.
At the end of a normal trolley problem case, you can ask everyone what they would want the conductor to do if they were one of the five people who were being barreled down upon by a train. Even if they were uncomfortable with the thought of pulling the lever, they are also probably going to be uncomfortable with the thought of them dying because someone else was unwilling to.
In the Aubrey case, what would they want Aubrey to do if they were the sailor?
Note, there is a defect of those kinds of tests: what people say they want is often poorly considered. There is a difference between what an average sailor of a ship would say, and what a reflective and sound-reasoning one would say (someone who is taking into account the total system of events that will transpire and what they want as a person to be responsible for and realize, perhaps even with their own death). Correct decisions will align with the latter, not the former.
Obviously so! But my point is that for them to become fully considered, you need to at least think about what you would want if the situations were reversed. I would hope, for example, that good sailors would recognize that they had a duty when they got onto the ship (in a universe in which that duty was one they could fairly and freely avoid doing or in which their compulsion was actually necessary) to the ship as a whole. If they’re dragging down the ship, they are obligated to take their chances with the water.
So, for example, when people talk about the entire cultural discussion we are having about “cancellation” and people being held accountable for their speech even informally, I point out to people that if I were being a dick I would want someone to tell me, and tell me in such a way that I would actually listen. Yeah, I would ideally like it to be as civil as possible, but I would rather have someone hurt my feelings in the short term but help me hold myself accountable in the long run. So by a golden rule analysis, I would want someone who has a moral objection to what I’m doing (assuming it is sufficiently well-reasoned and not a total kneejerk response) to express that, and, yeah, “cancel” me if it was warranted!
What I have found is that one can get a lot of mileage out of pointing out to people that, once they considered whether they would pull the lever or not, they didn’t end up thinking about how it would feel to have the lever not pulled to save them. That can actually be a bit triggering, and so it should be done gently (this debate about how to make these points came up with the video game Spec Ops: The Line, where, if you are thinking about it, you can do things in the context of the game that avoid atrocities which otherwise happen and seem to just be the game following its railroaded script, something that causes people extreme distress when they realize it later), but it can be hugely valuable. It is tough to bear in mind the perspectives of other stakeholders in moral situations… which is exactly why we need to make that a habit!
I hadn’t thought of that; but yes, that would change the situation. I use Billy Budd because a) at the heart of Melville’s story is sexual assault, b) the characters are faced with uncertainty (Vere suspects the Petty Officer assaulted Billy), and c) most such matters involve a group decision (a work team will want to downplay the matter while one member feels otherwise). But the idea that one may absolve responsibility because it was a team decision is also quite real. I have found that groups seem to do wicked things that individuals won’t. Indeed, there is much research on in-group and out-group dynamics, where a majority group will harm a minority.
More food for thought
Perhaps to some degree our legal system sets the tone for (or perhaps helps reinforce) the idea that as long as you do not actively participate in the death of someone you should not be responsible (held accountable) for the outcome. Except in a few select states, you are under no obligation to save anyone’s life even if doing so would cause no harm or risk to yourself.
Good Samaritan Laws & Protections
https://www.criminaldefenselawyer.com/resources/good-samaritan-laws-protections.htm
That is covered in some of the Trolleyology literature I cite: the role legal systems have played in “training” people’s intuitions in such scenarios, leading to adverse consequences (thus exemplifying the problem of unintended effects), such as bystanders doing nothing to intervene in an incident or hazarding nothing to prevent a bad outcome but rather letting it just happen.
But do note, negligence is a tort. It proceeds from Duty of Care, which is a social (and legal) principle I show influences people’s intuitions in different Trolley Problem variants in my article on Hosing Thought Experiments, linked in this article above. So, we have trained people to have contradictory intuitions: when society “accepts” a Duty of Care exists, negligence is bad; elsewise, negligence is preferred. Which explains a lot of why we as a society resist most efforts to actually solve problems like racism or sexism; for want of Duty of Care, these either “don’t exist” or are “someone else’s problem,” an example of trying to solve Trolley Problems by inventing scenarios to choose that don’t actually exist (they just provide cover for rationalizing the “do nothing” option; but are really just choosing “do nothing”).
In some articles in the past I have analyzed the distinction between conservative and liberal politics as hinging very much on this very thing: conservatives abhor having to “do something” about anything and want things to just “run themselves” (free markets; bootstrap individualism; thoughts and prayers), except when they are terrified, then every extreme action is acceptable to “make the problem go away.” Liberals, by contrast, are much more interested in pulling the lever to make the world better; they accept the costs, because they recognize that the cost of the alternative is greater. Then sometimes they obsess over pulling the wrong lever; and then conservatives use that to denounce the entire idea of ever pulling levers at all. And round and round it goes.
In addition to Richard’s point, notice that this is another case where putting back in the complexity of the real world forces us to not act like the simple trolley problem… which doesn’t obviate the simple trolley problem.
Why do we not have the laws that would let someone be prosecuted for failing to save a life?
Well…
And one can go on and on. The fact that the legal system has to do this doesn’t settle the issue. Just like the existence of “beyond a reasonable doubt” in a criminal trial isn’t an epistemic limitation but a pragmatic policy one.
But that doesn’t mean that most people will look at someone who could have helped, and didn’t, as having at the minimum seriously hecked up. Look at the literature around the bystander problem and Kitty Genovese. Most people see the Kitty Genovese story as deeply shocking and awful, and the film Boondock Saints used the story as an idea to justify the vigilantism of the protagonists. And, of course, one can look to the “I was just following orders” defense to see how far the idea that benignly doing nothing about evil goes as defense…
All good points. But be aware the Kitty Genovese story is mostly mythical (in fact, much of it seems to be propaganda invented by the police to cover up their mistakes in pursuing the case; the rest, subsequent exaggeration typical of urban legend building).
But the general point, suitably qualified, remains valid. It is not that doing nothing is regularly what happens, but rather that it happens in too many observers (at least two people out of perhaps a dozen called the police on Kitty’s attacker, though none of them knew she was stabbed, and half of them misidentified what they heard or saw). And there can be valid reasons for this (e.g. Milgram-style experiments often mistake what test subjects assume is going on and thus mis-identify their reasons for compliance; and often many are worried, often justifiably, that they’ll become victims themselves if they intervene; many others literally don’t know what to do; and so on). The usual explanation offered for Genovese-style stories is that everyone assumed someone else was calling the police. That wasn’t actually the case. But it can be in other instances. So that really comes down to individuals’ erroneous judgment about how the system will function (and forgetting that they are the system); a mistake we attempt to correct with social engineering (e.g. “If you see something, say something” signs in public places).
I actually didn’t know that about the Genovese story! We covered it (with some skepticism but not to the point that I knew it was outright propaganda) in sociology classes as part of a discussion on bystander effects. I always figured that obviously there was much more going on and like any good story details were being shaved off. One thing the story supposedly shows that is true to some extent is that large cities build up, by necessity, a privacy norm to not intervene as often and to mind one’s business, one of the reasons that crime is always more present in cities (criminals not only have more targets but also a greater degree of anonymity).
But, yes, the broader point is still valid. Anyone who has ever been involved in any kind of crisis situation or emergency response at any distance has seen it. It’s just too easy to worry about duplication of effort, about being embarrassed at being the fifth person to make a redundant call, and so on. And, in fact, there are some valid concerns there: overclogging emergency lines with redundant reports is a problem (it’s just a problem that we need to build emergency lines to be able to handle with excess capacity, because overcoming the bystander effect is so important); if too many people charge around someone who needs CPR it can cause harm; etc.
And the emotional reaction to the Genovese story, propaganda as it apparently is, is my point: It is very common that we find inaction to be morally reprehensible. The same applies to the Nazis. What trolleyology helps demonstrate is how much this is an outgroup bias, “beam in your brother’s eye” problem: We can see so clearly that the people in fascist societies had a duty to try to do something early on to stop the madness and that all it takes for evil to triumph is for good people to not do good things… but we then tend to act, through the fundamental attribution error, like that doesn’t apply to ourselves and our own social systems.
Frederick wrote:
“Why do we not have the laws that would let someone be prosecuted for failing to save a life?”
You make some really good points. That thought had crossed my mind but you laid it out really well.
And such law making decisions are trolley problems in and of themselves. And I suspect that you are probably correct that, from a utilitarian standpoint, for the reasons you stated the law makers probably got it right.
But like the original trolley problem, even if you make the right decision you’re still left with one dead body (resulting consequences), that doesn’t just go away because you made the right/better decision.
And so it goes in this instance. What a society accepts as right and wrong is at least to some degree established and justified by authority. Religious rules, as you know, are based on that very concept…“God said it, so that settles it”.
So I think that is some of the underlying psychology behind the way people think.
OU812INVU69 says: “But like the original trolley problem, even if you make the right decision you’re still left with one dead body (resulting consequences), that doesn’t just go away because you made the right/better decision”.
Yep! This is something one gets used to when doing policy analysis, and that a lot of people struggle with once it dawns on them that, yeah, the stakes are human bodies. There’s bodies either way. You have to count them and do your best. It’s deeply morally challenging and can cause burnout, but there’s no other honest way to do it.
Dr. Carrier — Thank you for this explication.
I was wondering if “Everything is a Trolley Problem” is synonymous with “Everything is an optimization problem” (which I happened to come across in a lecture by Stephen Boyd of Stanford on Convex Optimization, a sub-discipline of math and engineering)?
Also — would it be apt to call this a tautology? As far as I can tell, if one thinks a bit deeply about this, and given your elucidation, it makes sense.
Would like to know your thoughts on the above.
I couldn’t say it’s a tautology in a strictly logical sense, because it is logically possible for there to exist, in some scenario, a cost-free decision. Not only because of cost-neutral cases (where no decision costs less or more, i.e. every decision is equally bad, or equally good, e.g. flipping the switch kills the same number of people or even the same people) but, theoretically, even cases where there are, somehow, no costs at all (I cannot think of any, but to declare their nonexistence a logical necessity would be the Fallacy of Lack of Imagination).
Moreover, it is not useful even to identify something as a tautology if you use that as an excuse to ignore the information content of it. For example, if you dismiss any tautology of the form “A = B” because “it’s a tautology,” you might then go about ignoring the fact that A is B. That they are the same or mutually entailed is information. It does not matter if that information arises from tautology or empiricism.
It would be interesting if someone could develop a formal logical proof (and thus establish the Trolley premises as tautologically true and therefore logical necessities rather than merely empirical observations), but it wouldn’t change very much. And it isn’t likely to happen (neutral-cost scenarios suggest no such logical proof is going to be possible from the start).
Conversely, identifying Trolley Problems as a subset of Optimization Problems is probably not very helpful. Rather like asking whether Trolley Problems are a category of Problem. Well, yeah. But what do we gain from pointing that out? Some tautologies lack substantive or useful information content. And this might be one of them. Unless someone can find some way to port tools used in other Optimization Problems into solving Trolley Problems that haven’t already been developed there. Again, I can’t think of any. But that’s not a logical proof that there can’t be.
I highly recommend you check out Economix Comix’s blogs, this article in particular: https://economixcomix.com/2014/11/19/what-is-our-children-learning-or-greg-mankiw-and-the-terrible-horrible-no-good-very-bad-textbook/ . He points out that the obsession that classically-minded economists have with always thinking in terms of tradeoffs leads to a fallacy that is quite common, where people think that, because tradeoffs exist, the current situation we are in must be one where no win-win scenarios are left unexplored because of lack of imagination or other barriers (a specific manifestation of a just world fallacy). His example is as follows:
In practice, we actually have to behave as if win-win scenarios are on the table in a lot of situations, because Pareto optimality doesn’t actually accrue in the real world all that often. There are very often cases where everyone could be better off with a new distribution of the same stuff.
However, on the flip side, there is an argument to be made that when you take into account opportunity costs there logically can be no exception to the idea of tradeoffs. Even if you suddenly had a miraculous pile of gold appear in your lap with no moral externalities whatsoever, determining that one should sell it to support a charity ignores that maybe it could be better off put into art to support the same charity.
Alas, I don’t think these examples are all that good (lol; there are some whiffs regarding net thermodynamics and the power of art here), but I agree with the principle. Opportunity costs are costs. And that changes the game a lot. (Many of the best decisions I know people to have made, myself included, came after recognizing the opportunity costs involved.) Likewise, we should always be looking for, and scoring highly, win-win solves. But even those require a Trolley perspective to recognize.
For example, taxing the rich more than we are in the U.S. would actually improve their lives overall, indeed even the productivity and profitability of their businesses (the ones they run or invest their portfolio in; or for really lazy rich people, that their banks invest their portfolio in), so in fact they should be in favor of it. But most don’t recognize this, or refuse to; instead, only seeing it as a “they lose (money)” scenario. To them it “looks like” a zero sum game (“redistribution”), because they don’t (or won’t) consider the net benefits all around (some of which I discuss in my articles on UBI and income thresholds).
Obviously Trolley models only have use to those who have an accurate grasp of reality—other than their use in helping people get to one.
To be clear, I actually don’t think that a pile of gold would be better spent on art for the same charity: I think that one has to ask, and rate, the ways one could use the gold for the charity (would it be better as a lump sum, or put into an account that pays interest?).
Otherwise agreed totally. If a person is either not informed enough and/or doesn’t care enough to realize that there are tradeoffs for something they propose, pointing that out has a reasonable failure rate. That’s why policy analysis can reasonably only be done by people who care enough to do it.
You write “…a runaway trolley is flying down the track, about to run over and kill five workers (who can’t get out of its way, for some reason that doesn’t matter to the point), and you happen to be standing next to a switch that, if you throw it, will divert the trolley onto a different track, where it’ll only kill one worker.”
The reason they can’t get out of the way does matter though. The fact that the course of events can only go in one of two ways, literally being a fixed track with one switch, matters too. Knowing the number of casualties matters most of all. I suppose you could call these objections from epistemic ignorance. But Socrates said virtue, as the power to do good, rested upon the knowledge of the good? Something like that?
If the fundamental point of trolley problems is that apparent inaction is still a choice, an action despite appearances, this is incontestably true. But it still doesn’t help us, because the possible outcomes of inaction are generally vastly more difficult to estimate than the outcomes of a positive action. This is why I’m not convinced it’s all trolley problems…trolley problems abstract from all the really difficult issues. Perhaps this is shockingly counter-intuitive, but this seems to be the case.
Tautologically, it does not. The tool is for modeling scenarios where there are no other solves. Thus, scenarios that have other solves, aren’t relevant to the model.
And yet Trolley Problems can be expanded to include all models (just like Game Theory). All you do is add in the other solves and their costs (add more “switches to flip”). The net result is almost never a solve that has no costs. Thus, it really doesn’t matter how many other solves there are. All that does is change which costs we have to consider when deciding which action to take. That doesn’t magically make inaction into no longer being an action. Nor does it magically make inaction cost-free.
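A minimal sketch of that generalization (the options and their costs are invented for illustration): enumerate every available solve, including inaction, with its expected cost, then pick the least costly. Nothing in the expansion ever makes inaction cost-free; it is just another row in the table.

```python
# Expanded Trolley model: more "switches," same decision matrix.
# All options and expected costs below are hypothetical.
options = {
    "do nothing":         5.0,  # expected deaths if the trolley rolls on
    "switch to track B":  1.0,
    "derail the trolley": 0.2,  # say: small chance of harming bystanders
}

best = min(options, key=options.get)
print(best, options[best])  # "derail the trolley" 0.2
# Every row has a nonzero cost; adding rows only changes which costs
# we compare, never whether "do nothing" is itself a costed choice.
```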
Epistemic ignorance, meanwhile, is addressed by risk theory. I have several paragraphs on that in this article, which link to my moral essays that address the question of unknown outcomes and how they factor into evaluating the moral preferability of one choice over another in any given moment.
What would be your perspective on the Trolley problem when there is high uncertainty as to what is down one or both of the tracks?
Using the Texas example: if the public had little to no information concerning the probability of an abnormal cold front that could damage the power grid, what would be a logical way of considering whether weatherizing the grid would make economic sense?
I would argue that there the trolley problem includes counting costs while taking into account margins of error. Just like situations where the trolley problem includes opportunity costs (e.g. if you throw the switch you do damage to the line, which costs delays in future train rides; or if you throw the switch to protect the five people, one of the five people who was saved is a genius violinist who now has the opportunity to do something good).
So, let’s say that you saw that one line on the trolley had a clearly visible person on it and the other had ten shapes on it that could be people with bags on their heads or could be ten scarecrows. It’s meaningful to ask, say, how often a random thing on the train tracks that looks non-human turns out to be a person. You might think that, usually, if one of the shapes is a scarecrow, they all are likely to be (so the probabilities aren’t independent). You might think that if none of them are struggling they’re probably not people. At that point, you probably don’t switch the track.
To me, this is one of the intuitions that the trolley problem gets at and forces us to acknowledge: That is, one of the reasons we don’t like to make strictly utilitarian decisions is because in the real world there are always margins of error, and those margins tend to mean that some degree of care is a prudent idea. But the point is to get us to realize that we are still making a calculus at that point, and sometimes inaction as a choice actually has greater moral uncertainty around it. Like, say, shooting someone who is violently threatening another person who seems frightened and beaten. It could be that the person who is attacking was in the right and the person who was being attacked is a dangerous moral monster, but if we could only decide which to kill and had no other knowledge, not shooting would not be the moral decision, since the vast majority of cases where you shoot a violent attacker threatening someone who is not fighting back do not end with an immediately disastrous scenario.
Indeed. All of that can be modeled by comparing different risk levels (it’s called actuarial science). Of course there will be ambiguous cases (where differing risk uncertainties and outcome scales give products such that we cannot discern which is greater), although what to do in those cases is also something we should have figured out ahead of time, and hence learning which is itself a value of the model.
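For instance, a minimal actuarial-style sketch of the “scarecrow” variant above (all probabilities are invented for illustration): weight each track’s body count by your credence that the ambiguous shapes are people, and compare expected costs.

```python
# Actuarial comparison under observational uncertainty (numbers invented).
p_human = 0.05    # assumed credence the ten ambiguous shapes are people
n_ambiguous = 10  # shapes on the trolley's current track
n_certain = 1     # clearly visible person on the other track

ev_stay = p_human * n_ambiguous  # expected deaths from inaction: 0.5
ev_switch = 1.0 * n_certain      # expected deaths from switching: 1.0

print(ev_stay, ev_switch)  # 0.5 1.0
# Here inaction is the risk-minimizing choice; but raise p_human above
# 0.10 and the comparison flips. An ambiguous case is one where the
# two products are too close to discern, which is exactly the kind of
# case we should decide how to handle ahead of time.
```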
I’m sorry if you already explained this but I looked and couldn’t see an example.
How would the trolley problem be framed if you introduced fake or delayed switches, so that pulling the switch didn’t result in the trolley switching tracks, or the switching occurred too late to stop the worst outcome?
As a real-world example: the debate over new energy policy, in which the arguments entail that an energy source won’t be scalable in time (the track switches too late), or that investment in one energy source detracts time/money from others that can be scalable (a fake switch).
It depends on what you mean. Your scenario (but not your example) suggests the agent does not know these facts; in which event, the moral choice can only follow from what is known (since it is impossible to act on inaccessible information, and doing the impossible cannot be a moral imperative). Thus, those details simply don’t matter, until they are discovered. This then moves to the alternative scenario, which your example reflects: that we do know those things. In that case, you account for those things in the analysis. Impossible actions cannot be taken, for example, since they have no utility function.
The third (middle) scenario is where we have some suspicion but not definite knowledge. So, for example, suppose we have heard but aren’t certain that maxing investment in wind-solar will help (rather than make things worse—as actually, it will). What then? We will then suspect the “switch may be fake” as you put it, but can’t be sure—in this hypothetical, that is. We actually, in the real world, can be, and are sure: the math is inescapable, and proves conclusively we cannot get out of the situation we are in with wind-solar, which only have limited utility in achieving any goal. But let’s assume, for the sake of argument, that we don’t know this yet. What then?
This reduces to a risk-balance model, which I cover in the article above: probabilities must then be assigned to the outcomes, representing our degree of knowledge at that time, and the net worst outcome, and thus which decision to avoid, follows from the actuarial solution. In other words, when all else is equal, choose the least risk. If “maxing investment in wind-solar” has a low probability of working (i.e. our certainty is low), then we should probably act like it’s a fake switch and look for another (as in that particular analogy, there is a giant switchboard of switches we know about and can reach; not just that one). Which doesn’t entail zeroing investment in wind-solar (that’s a black-and-white fallacy: the converse of “maximum” is not “zero”).
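To make the arithmetic explicit, here is a minimal sketch of that risk-balance calculation (Python; the probabilities and utilities are invented placeholders, not real policy estimates):

# A toy "switchboard" of mutually exclusive options (all numbers invented).
# Each option has a probability of working, a payoff if it works, and a cost
# if it fails. Note that "do nothing" still carries a cost.
options = {
    "max wind-solar":  {"p": 0.2, "win": 100, "loss": -80},
    "diversified mix": {"p": 0.6, "win": 90,  "loss": -40},
    "do nothing":      {"p": 0.0, "win": 0,   "loss": -60},
}

def expected_utility(o):
    return o["p"] * o["win"] + (1 - o["p"]) * o["loss"]

for name, o in options.items():
    print(name, expected_utility(o))
# With these numbers "max wind-solar" scores -44: a low-probability switch
# behaves much like a fake one, so we reach for another switch instead. But
# nothing here says its budget share must be zero; only the ranking changed.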
What information the “public” had is not relevant. All that matters is the information those in charge had (or their failure to obtain that information) and what action was or wasn’t taken based on it.
But that point aside, I agree with your underlying point that these types of decision-making issues, where you’re weighing the probable outcome of something, are way different from the original trolley problem, where there are only two certain outcomes and the outcomes are clearly defined.
This Texas freeze thing could’ve played out differently.
It might’ve not happened at all or it might’ve been much less severe. I’m not saying that we can’t blame those that were in charge. They were ultimately responsible for keeping the grid up and failed to do their job.
But only with the benefit of hindsight can we now say that the eventual outcome was a certainty at the time they made their decisions (or indecision).
And even if one tries to argue that the data was there and the right decision was there to be had, that is still irrelevant from a trolley problem standpoint.
Because the trolley problem needs the subject (decision maker) to clearly/accurately see the choices and buy into the certainty of the outcomes for it to work correctly. And just saying that they should’ve known is not good enough for this type of thought experiment. Indeed, I can’t imagine that, with the benefit of hindsight, any of those in charge wouldn’t have done things differently. Whereas the traditional trolley problem requires no benefit of hindsight because the outcomes are purposely certain.
Except in Texas they knew the outcome was coming. The persons responsible just hoped to golden parachute before it did. It wasn’t some unpredictable event. It had been predicted for years. Just like the levee situation in Louisiana.
I have several paragraphs in my article on risk theory and how it works in Trolley models.
As to the Texas example, that isn’t true. Statistical models predicted the outcome (it turned out it had a fixed and looming frequency, worse even than for catastrophic earthquakes in California) and companies were warned of this for years. It wasn’t some big surprise. Ditto Katrina. Notice how California handles such event predictions differently: we invest a lot in surviving the next Richter 8, through both infrastructure development and building codes, and weather 7s easily; likewise our investment in tsunami warning systems. We don’t ignore statistically rare events; we plan for them (albeit, within our means), precisely because their outcome costs if unprepared are so much higher.
Trolley problems reduce the facts to certainties. The assumption that somebody unspecified can somehow always assign probabilities (or frequencies, if you insist on a demarcation) whenever needed, with enough speed and reliability and accuracy to be useful, is not a part of the trolley problem. The fat man version is only a problem if you can magically know the probability the trolley can be diverted.
Also, trolleyology is not even about making collective decisions on policy, but about guiding the assignment of blame. That someone could perhaps be blamed for inactions, on the irrefutable grounds that inactions still have consequences, ignores the genuine moral issues of intent. People focus on positive actions because the potential outcomes can be traced to them. Hangman trolleyology diverts attention from whoever was hanging five innocent people in the first place.
The commitment to cost/benefit analysis is no doubt gratifying to economics professors. The real-world application of cost/benefit analysis is illustrated by Katrina or the Texas freeze or the pandemic, where the real issues are not the principle that inactions have consequences. The question is who benefits and who bears the costs. Or indeed, what are costs and benefits? The answers there are not illuminated by trolleyology. (In my opinion, not by economics either.) The hospital transplant dilemma illustrates how misleading trolleyology is. A hospital where you might be dismembered for organs if you came in with both a broken arm and the right/wrong blood type is a menace to humanity. Trolleyology says this is a true dilemma illustrating a fundamental moral principle!
I am perplexed by your misunderstanding here. It seems strangely emotional, driving you to not rationally engage with any point.
Trolley problems do not require any certainties. I have several paragraphs in the article above on how they also operate with risk measures, which are uncertainties. So I’ll repeat: you simply aren’t responding any more to the article or its points here. They do not require magical knowledge. They only model what to do with whatever knowledge you do have.
Trolley problems are also not about “assigning blame.” I don’t know where you get that. They are about analyzing how and why we use certain cost-benefit measures to make decisions. There is nothing in my article about using these things to “assign blame.” Blame gets assigned by your values matrix after you have analyzed the trolley structure of a scenario. The problems by themselves presume no value system at all. That’s in fact the entire point of them. Perhaps what you mean is, what “worries” people about trolley problems is “being judged” for failing at them. That’s true, but is a problem with the people who worry about that (and how that worry causes them irrationally to react to these scenarios), not with the problem models themselves.
And again, Trolley problems do not tell you what to value or what weight to assign to anything. They tell you what exists that you have to evaluate and assign weight to, rather than ignore. How you then assign values and weights depends on what you bring to the table to solve these problems. The problems themselves prove you can’t avoid doing that. They do not prove how you should do that.
You seem to have missed every single one of these crucial points.
Ask yourself why.
I gave three examples of classic trolley problems misleading analysis. The fat man showed uncertainty was not a part of trolleyology. Your article may say some words about that, but the fat man problem is only a problem if uncertainty is excluded. Otherwise the fat man would have been forgotten long ago as too uncertain to be useful for anything but an unconvincingly implausible plea for skepticism.
The hospital transplant problem shows trolleyology is most certainly not about cost/benefit analysis. The cost to humanity of turning hospitals into abattoirs is deliberately excluded. The only purpose I can see is a “gotcha,” in that case against utilitarians.
And the hangman problem is only a problem because it assigns those weights in its premises. In that case, the individual is at fault for not saving four, while the culpability of the system has no weight at all.
One of the crucial points allegedly made by trolleyology, that inaction is a moral action, is by the way a commonplace of arguments for capital punishment, both legal execution and the popular scenario of a brave cop putting down a vicious criminal so he can kill no more. If people already accept this, there’s no need for trolleyology. Nor is there any indication trolleyology helps to prevent selective application of the principle. And the insistence on rigging the premises directs them away from real analysis in my opinion. There is a lot of reactionary fantasy in commercial TV and movies and genre fiction that uses trolleyology!
The apparent inference that I don’t accept the supposed great lessons of trolleyology is incorrect. Indeed, I accept that in ways I suspect you would be outraged by. For instance, history I believe demonstrates very well that revolutions, by tortuous paths blazed at horrendous cost, are the birth of freedom. (Try Barrington Moore, Social Origins of Dictatorship and Democracy, for a sample.) Therefore, I believe that the inaction of not making revolutions perpetuates injustice and human suffering and oppression, and keeps people unfree. Perhaps you personally can find trolleyology’s little puzzles useful in arguing for or against such a proposition? That needs some demonstration, or so I see it.
Somehow you aren’t paying attention. My article has several paragraphs on when probabilities rather than certainties attach to the cost options. You simply are ignoring them. Why?
Likewise, when you say “the cost to humanity of turning hospitals into abattoirs is deliberately excluded,” you seem again not to be paying attention. Including that very cost is precisely what I discuss in this article, even linking to an entire detailed discussion of the fact. Yet you seem not to know this. Why?
As to assigning culpability to “systems,” you seem to forget, a system is just a collection of people. Making decisions. That’s the point. A point you keep missing. Why?
I have no idea how you think “there’s no need for trolleyology” when its entire function is to analyze, so as to justify or critique, precisely the assumptions people make in the decisions they think they are or are not making. That there are people who don’t want to hear it, does not make the model incorrect.
“Revolutions, by tortuous paths blazed at horrendous cost, are the birth of freedom” is disproved by the Iranian Revolution (hello, Ayatollah), and all Red revolutions in history (e.g. Venezuela, China, Cuba, Russia). This principle is also called into question by counterfactuals (compare Canada’s path to freedom with France’s; which path was better?).
Obviously not all revolutions are good, and even the ones that end up good, are often not optimal. But I fail to see the relevance to Trolley problems here. There is no general rule (revolutions vary in utility and value across the entire spectrum, from absolutely disastrous and ending up in an entirely worse place, to the opposite, and everything in between), so one could only apply Trolley models to specific cases. And even there it is not obvious all cases will reflect educable ignorance of the costs of inaction or that inaction is a choice being made, which are the only lessons Trolley problems intend to teach. So I am not getting whatever point this digression was supposed to be making.
Steven, notice what considering the trolley problem made you do. You dispensed with the notion that inaction is inherently superior to action. You recognized that you need to actually understand the stakeholders in a moral scenario, and consider costs and benefits, and maybe even consider costs and benefits to them. (For example: In the original scenario, if the five people on the tracks were all suicidal and on the tracks to commit suicide and the one was not, it would certainly become immoral to flip that switch). And you even started thinking about systemic costs in a rule utilitarian perspective: In other words, you recognized that in reality you have to think about how both institutions and individuals need to have maxims that work in most cases because in actual practice we need habits that work since we can’t actually analyze every decision with the time needed to make an exhaustive utilitarian analysis (and in any case can’t act morally unless we’ve built good habits).
The problem is that you seem to be assuming these insights are widespread. And… they’re not. I constantly have to invoke Zinn’s insight that you can’t be neutral on a moving train, precisely because status quo bias is so pervasive that people think that they are actually not morally implicated in the actions of the institutions that they fundamentally are a part of, pay taxes to, etc.
It’s straightforwardly false to say that trolleyology is about assigning blame. Imagine that you stipulated in a trolley scenario that the person pulling the switch had a perfect patsy and would never experience any problems. Does that change the analysis? Not much, really, though it may for some people (and the fact that it would actually tells us something about moral intuitions!). You still have the same decision to make. In actual practice, stakeholder analysis is constantly determining how to act in situations where people won’t all get what they want or even what they really deserve. That’s trolleyology. Yes, there’s more going on than the simple versions of the trolley problem, but there’s also more going on counting apples on a tree than addition (what counts as an apple? do we care that the apples are not the same size?) and yet we use math to count apples just fine. You’re complaining that the trolley problem abstracts out a lot of things, but the whole point is that it does that so that an interesting question doesn’t get bogged down. The fact that people are so hostile to and bothered by the question shows that it is interesting to explore.
Also, why does inaction not show intent? If I sit there and do nothing as someone is beaten to death, that says something about me, doesn’t it? In trying to refute trolleyology, you’re showing why it’s useful. It only gets worse when you realize that in the real world, the thing you are trying to get focus onto, inaction is almost never actually inaction. We aren’t throwing switches. We’re part of systems, ecological and social and technological, and we are having an impact. The very notion of what constitutes inaction is itself a deeply political question. In my view, you’re showing status quo bias by assuming that positive intent can only ever manifest in apparent action rather than inaction.
Worse, by being so hostile to trolleyology, you actually are dispensing with tools that are relevant to the actual stakeholder analysis you’re talking about. Take the hospital where one person is mangled to save many, for example. Yes, in practice the harm to the entire medical system for such an action would be so grotesque, both reputationally and in actual practice, that it would be immoral to do so. Heck, one could even say that such an action was categorically immoral. But notice how we’re assuming, for example, that the hospital gets caught. And in the real world, since actions that can pit stakeholders against each other can often be done in secret, that condition doesn’t apply.
Frederic, this analysis of yours is brilliant and filled with productive observations. I concur with every point, and am grateful to have contributors adding valuable things in that I haven’t. Thank you.
As to the “don’t get caught” aspect, I recommend Drescher’s Good and Real, particularly the last chapters where he brings in QM Many Worlds Theory and Newcomb’s Paradox to make a point about how moral decisions ultimately relate more to what sort of person we are choosing to be than to what people will do in response. Of course the latter matters too. But so does the former (which connects to this month’s debate on animal rights). And when one thinks of decisions in a Many Worlds and Newcomb’s context, it becomes clearer why it should matter how we decide to police ourselves in what we do, “getting caught” or not. (To be clear, I do not endorse MWT, I think it’s metaphysically lazy and a non-explanation requiring too many epicycles to sustain; but my point here is that it remains valuable as a thought model in the way Drescher employs it; it could be replaced with a hypothetical “possibility space” to make the same point, IMO.)
As to the not getting caught problem: I will need to look into Good and Real, it sounds like my jam! My immediate thoughts are to once again check my rule utilitarian, deontological and virtue ethics hats, as well as any other moral frameworks I have. And so, if I ask about the hospital that can get organs from innocent people and not get caught, I think…
1) As you rightly point out: Yep, a hospital that does that is going to become a bad institution and also turn the good people in it into worse people (an intolerably high negative externality and a violation of the duty of employers to make sure that working for them can be done morally). But it is critical to note that that’s not a fact about morality in isolation but about psychology. I can imagine hypothetical thinking beings, some kinds of computers for example, for whom prior bad behavior doesn’t necessarily build up negative habits or cause horrible impacts to their virtues as a person. (For the record, I suspect such beings may be logically impossible: It may be that the ability to think and feel as we do, or even anywhere close, logically necessitates some kind of learning algorithm that reacts to prior decisions in terms of norm-building, and thus no hypothetical intelligence remotely like humans could do awful things in a purely utilitarian way).
2) The hospital itself may have purely good intentions, but as an outsider, I could never know that. I could never tell that such a hospital was not mangling people and getting their organs for some kind of gain, whether some pecuniary gain or influence peddling or even some kind of sadism that could be morally self-justified. So not only could such a hospital never honestly convince anyone who was fully informed about its actions (but wasn’t psychic) that it was beneficent, even if it were, but…
3) In actual practice in the real world, if you’re constantly acting in a way that makes others think you’re an amoral asshole, you’re probably an amoral asshole. That is, organizations that keep doing something morally questionable with a blithe utilitarian response being practically indistinguishable from evil organizations should give anyone in such an organization pause. Because we as human beings are never so in control of our own faculties and never so perfectly innocent that we should never think we may have vile motives.
4) And, of course, since such a hospital could get caught, even if the chance was infinitesimal, it shouldn’t do so.
And there are more objections one can easily make.
And so when I count all that up, that hospital was morally vile even from their own stated values. But my point to those who question trolleyology is that that needed to be thought about. As much as I’ve thought about problems like this, I know for a fact that someone else considering the same issue will eventually come up with something on my list that I didn’t and may never have.
What I find so shocking in this discussion is how people will directly say something like “All trolleyology does is expose the intuitions we are already using”. That’s not even true (you can get someone to change their mind on utilitarian ethics if you force them to face the trolley problem, for example), but even if it were, that would just mean that it is raising consciousness about our own intuitions, which is a good thing. I wonder if there may be some hostility to emotional intelligence on display on the part of at least some skeptics of trolleyology. Even if all trolleyology does is to act like an oil gauge that usually tells us our oil is just fine, that’s reassuring and useful!
Sir, trolleyology did not influence my thinking in the sense you mean. The hospital transplant “dilemma” doesn’t exist, it’s a straw man attack aimed at utilitarians who never advocated any such thing.
The main principle you and our host are advocating really is extremely widespread. It’s just known by the name “opportunity cost.” I suppose you can follow it up with a popular introduction, Henry Hazlitt’s Economics in One Lesson, published in 1946. And that follows up on Frederic Bastiat’s That Which Is Seen and That Which Is Not Seen, from 1850.
You could also study libertarian political philosophy, people like Jason Brennan are very prolific on the internet. Right wing economics and right wing history and right wing philosophy. They could explain in great detail how “stakeholders” are stakeholders because of property rights. I am not confident in any stakeholder analysis, which strikes me as stemming from a genuine status quo bias.
You cite Howard Zinn as if he would agree with trolleyology. But so far as I know, Zinn was never so bold as to write utter nonsense about “Red” revolution like our host does above. Trolleyology may lead to condemning the Bolshevik revolution as a crime…but this claim is also the claim that the defeat of the German revolution was a good thing. Thus, trolleyology has led our host to approve of the Freikorps and their political allies, including one Hitler. I don’t think this is intentional, just a product of confusion, because trolleyology is useless or misleading, permitting prior reactionary commitments to predetermine the conclusion.
Trolleyology serves as a word game to avoid real analysis. I mean, Canada never had a king and aristocracy oppressing them, so there wouldn’t have to be violent revolution to move forward. France hadn’t exterminated whole cultures to seize their land in their march to freedom. Trolleyology may be logical, even tautological, but these cases are not comparable even in principle, thus cannot serve as grounds for a counterfactual argument, no matter what trolleyology says.
Trolley Problems are not straw manning utilitarians. They are testing the consistency of utilitarians, by forcing a reckoning with why they would decide as they do. Compelling them to come up with a credible reason is the point.
And this is not “just” opportunity cost analysis. Trolley Problems reflect a specific subcategory of opportunity cost problems, which elucidate how people tend to treat non-decisions as non-actions and thus something one is not responsible for, and how every decision (including a non-decision) has a cost (so we had better determine what those costs are), which people tend not to admit to or realize.
This also has nothing to do with libertarianism or conservatism or property rights (which are a cultural invention, not an ontological property of objects).
And I have no idea what you are going on about vis-à-vis Zinn. Not a single thing you say about Trolley Problems in that paragraph is correct. And once again, you seem to have completely missed the point that the varied uniqueness of revolution scenarios forbids your generalization regarding revolutions. That’s simply a historical fact. Not a consequence of any Trolley analysis.
“Trolley Problems are not straw manning utilitarians. They are testing the consistency of utilitarians, by forcing a reckoning with why they would decide as they do. Compelling them to come up with a credible reason is the point.”
The hospital transplant “problem” has been answered in this very thread, but trolleyology, er, “trolleyology” has refused to accept the answer and moved on. It is still trotted out to gotcha the unwary. When credible reason is unacceptable, it proves bad faith argumentation.
The other paragraphs are the notorious not-even-wrong. Pure logical analysis, which is what “trolleyology” is supposed to provide, is simply not useful. It provides no protection against uncritical acceptance of historical mythology. The grand claims made in the title of the OP are unfounded.
Your behavior here is very emotional and bizarre.
Examples:
I have no idea what you mean by “trolleyology has refused to accept the answer and moved on.” Trolleyology is not a person. And as I am one of the “trolleyology” philosophers who has explained it in exactly this way, clearly I have not “refused to accept the answer and moved on.” I provided the answer, and the trolley framework was crucial for my being able to do that, and to know that I needed to do it. I took you to task for ignoring that, and now you pretend you didn’t ignore it, and change the argument, moving the goal posts, to something even less coherent. As if you don’t want to confront the fact that you completely blanked on the fact that the article you are commenting on, which is about trolleyology, actually included the solution to the transplant problem you (then, bizarrely) claimed no one was looking at or taking into account. You seem keen to avoid admitting that happened, and confronting why you let it happen. This perplexes me.
Similarly, trolley modeling is not supposed to provide “protection against uncritical acceptance of historical mythology,” any more than logic or Game Theory or statistics or economics or any other subject are. The avoidance of fallacious applications of models is the province of a different field called epistemology; if you want to avoid abuse of models, apply the correct epistemology. Just because people can insert bogus historical facts into logical arguments, historical arguments, statistical arguments, economic arguments, political arguments, and Game Theory models, does not mean “logic, history, statistics, economic and political science, and Game Theory” are therefore “simply not useful.” Ditto trolley modeling. This is a bizarre form of denialism that makes no logical sense, and can only have some really strange emotional motivation that I do not discern.
Steven:
You say “The hospital transplant “dilemma” doesn’t exist, it’s a straw man attack aimed at utilitarians who never advocated any such thing”. But… ummm… dude, it does. It did exist. Because we’re talking about it, right now. It is an interesting question to answer, even if you think non-rule utilitarianism answers it (and I frankly don’t know if it does, at least not without Mill’s revisions). You and I agree that it’s not that interesting of a dilemma because there are obvious objections that one can make even from a utilitarian perspective… just like there’s an answer to Zeno’s supposed paradox (“Idiot, Achilles isn’t just halving the distance to the tortoise at any time interval, he’s only doing that if you’re cheating and halving the time you’re counting each time you count, obviously Achilles gets past the tortoise”), and yet thinking about it gets you calculus.
Whether you did or didn’t actually use trolleyology to arrive at the conclusion you did, your arguments still had to consider exactly that which trolleyology exposes. Richard and I have both walked you through that now. A tool that arrives at useful conclusions is a useful one. The fact that you got there differently and may even find the reasoning to be intuitively quick is irrelevant, not least because not everyone is like that.
You say that we are discussing opportunity costs. First of all… yeah, no shit, dude. I’ve mentioned opportunity costs multiple times in this thread. Do you think opportunity costs as a notion are just about assigning blame, or are just useless, or are just strawmen? I doubt you do. So if trolleyology makes you think about opportunity costs… so the hell what? Are you willing to admit now that your initial objections were flawed?
But, secondly, actually, no, no we’re not just discussing opportunity costs. We’re also discussing externalities, both moral and practical! We’re discussing systemic risk! There’s a lot going on here that can’t be fit into the opportunity cost framework.
One could argue that Richard has a tendency to want to use what I would call “hungry” or “greedy” frameworks (applying trolleyology to a very wide range of problems, ditto Bayesianism). There’s possible objections to that approach, but what I find so interesting is that you in disagreeing with him boiled down a whole range of things that aren’t opportunity costs (unless you define “opportunity cost” beyond the point of coherence) to just opportunity costs. I find that very telling.
You then say Zinn wouldn’t agree with trolleyology. Zinn was a radical activist. He routinely defended the reasoning of people who engaged in actions that could in isolation be viewed as violent or immoral or disruptive by pointing to the broader trolley problem: The entire system being so dangerous and corrupt meant that actions to force it onto a different track were necessary, even if that meant running over some people. Because people were being run over already.
I’m a leftist, like Zinn was. I’ve personally emailed Chomsky, Michael Albert, met Tim Wise, been on the Z Forums. Part of the reason I’m passionate about this discussion is exactly that what I find in leftist activism is what Michael Albert has found (and what he pointed out in response to David Horowitz): People complain about the disruptions that are caused when you push someone’s boot off of someone’s face, as if boots on faces are not disruptive. One is free to disagree with the moral calculus of the left and of more centrist progressives and liberals all one wants. What one is not free to do honestly, and yet I find is omnipresent, is to say that there is no such calculus. And what I find is that people are actually perfectly willing to accept a host of solutions to problems that they implicitly frame as passive because they are sanctioned by existing institutions or fit a particular mode they are comfortable with: They accept modern policing, mass incarceration, drone strikes, preventative war, etc. But they then will wag fingers at someone blocking traffic in order to protest police brutality. That’s not an honest objection. Either the BLM activist is wrong that their tactics will have a positive effect on the world or they’re not, but trying to act as if BLM activists blocking traffic are engaging in some deontic violation that is qualitatively distinct, from, say, what the cops are doing is just lying.
Putting all that aside, my point to invoke Zinn is specifically the moving train analogy. So let’s actually think about it. Imagine a perfectly innocent person on a moving train. They have done nothing to make the train careen downhill. Yet turning the train onto a safe track will require everyone to act. Can they complain that they were innocent and so shouldn’t need to help, even at cost to themselves? No. If the train careens off the track, everyone gets hurt, including them.
Zinn’s point, putting aside whether you more broadly think he would have accepted the analysis you think trolleyology implies (which frankly is still a strawman but whatever), was that the status quo has a direction. It is going someplace. It is not possible to be neutral, to not go a direction, because there already is one. You either accept that direction or you don’t, but you cannot claim to have no opinion and still be an informed actor.
And, like I pointed out above, “neutral” rarely is. People will act like, say, racism isn’t their problem when they inherited money created by explicitly racial policies, don’t learn about and often resist learning about the cultures of other Americans they don’t live near, dismiss non-white English as aberrant instead of noting that it is in fact a dialect, pay taxes to racist systems, and vote for politicians who keep maintaining racist systems. There is too much relevant historical context for all of us to act as if we are innocent. We can’t be. Our actions and inactions have consequences. Even our choice to stay breathing or not has consequences for others. There is only one solution to that problem: At minimum, to make sure that one’s passive behaviors produce more good moral outcomes than bad ones. That is the minimum cost of integrity. And it requires more than just not being directly, physically, literally involved in overt, grotesque evil. That was Zinn’s point, straightforwardly, and I find it shocking that you seem to not be able to face it.
Also, Steven, the fact that libertarians think that only people who own stuff have human rights is kind of why their philosophy fucking sucks. (Didn’t notice that point the first time).
Alice’s Car Company sells Bob a car. Does Charlie need to have a house to complain that Alice and Bob are getting benefit from a transaction that makes the climate, the air he breathes and the society he lives in worse? (And the libertarian “solve” to this that one owns their own body is just a tacit admission that the property rights framework is gross).
Externalities exist. Decisions people make have consequences. People are impacted by our decisions. If we don’t act like they are morally relevant stakeholders in our decisions (yes, of course taking into account the practical factors like them not being as invested, their voices not being as easy to sample, etc.), we will be moral children if not moral monsters.
In fact, the property rights framework is misleading precisely in that it lets us think that we owe more duty to people who are invested in us. Uhhh, no, no we don’t. It’s one thing if I scam you in a contract. It is worse if I also scam someone else who I wasn’t even in a contract with. Yes, that person had no reason to trust me, so I abused no trust, and the scam is awful either way; but the hypothetical “you” here at least had a chance to see through the deception and had an investment in it. It’s one thing for a government to police its own citizens poorly; even libertarians will generally agree that for it to police the citizens of other countries poorly is much, much worse.
Yes, there are definitely contexts where people should need to “buy in”, so to speak, to be involved. But those situations need to be ones that actually don’t generate any externalities. In my opinion, the best framework is from the anarchist/libertarian left literature: Michael Albert argues we have a right to be involved in any decision that affects us to the degree it affects us. Which means in some cases we get to make the decision unilaterally, in some cases a small group should, and in some cases big groups need to, and in each case the way the decision is handled (e.g. unanimous decision making, 50%+1, plurality, 60% or 66%, etc.) should be keyed thusly. I suspect you probably disagree with Albert on parecon and anarchism broadly, but I still suspect you will find his point about the right to influence decisions you are impacted by to be a strong one that naturally generates most human rights and also creates a basis for a community of free association.
And, of course, a property rights framework assumes a lot, the very reason that the Coase theorem never actually obtains in reality: that property rights are coherently defined, properly enforced (rather than costing money and time to enforce, at minimum), and fair in the first place. Who has money and power was never dictated by justice, so using it as a framework now to determine who has rights just means pushing forward the injustices of the past. One could have used the libertarian perspective back when women were either legally prevented from being fully part of the economy or were practically restricted from doing so to argue against noting the impact of a proposal on women, families and dependents. But that would be morally vile.
Missed Frederic Christie’s responses.
Trolley problems are designed to ignore systems in favor of individual actions/inactions; to consider only the local situation, not externalities; limit responses to either/or, rejecting the real complexities.
This is why trolleyology fosters the kind of analysis that can off-handedly assume the Bolshevik Revolution was a disaster while treating the failure of the German revolution (ultimately resulting in Nazi Germany, for God’s sake) as acceptable, at the very same moment a confessed trolleyologist carries on about how trolleyology forces consideration of the very things it rules out in its premises! This is reactionary nonsense, indulged precisely by dint of pretending that the inaction of not having a revolution in Germany wasn’t a moral choice! That’s precisely what trolleyology allegedly teaches, but right here we see that it doesn’t.
As to your left wing aspirations? If you think Venezuela has had a socialist revolution that has enslaved it, implicitly endorsing, say, the death squads of Colombia, then I am not a left wing sympathizer. I think Venezuelan people are victimized more by not taking command of the economy than by the imaginary expropriation of the rich. And I think endorsing the economic warfare against the people under such cover is deeply immoral, because unlike the trolleyologist I think an inaction, like blindly accepting the US government’s policies, is a moral choice.
Lastly, when someone is still trying to sell the Hospital Transplant dilemma as needed for rational thought, even though it is not, not, not a genuine dilemma, they’re trying to tell you it’s raining on your shoe. That provokes a little emotion. What it doesn’t provoke is any left-wing sympathies. Nor do I think this reaction is terribly emotional, or even irrational. I reject the implicit invitation to regard myself as smarter than the normies who need enlightenment via trolleyology.
No, they are not. They easily account for all these things. I have even shown examples of that. This has been explained to you multiple times now. So I cannot explain your broken-record head-in-sand behavior here.
Everything else in your last comment consists of irrational non sequiturs even more inexplicable than that. What’s going on with you?
Steven’s most recent post seems to just be talking past Richard and myself. Steven, my perception is that you seem to be discussing what you have seen as the general use of trolley problems and the state of the field of trolleyology. If that’s true, that’s fair, but
a) you’re talking to people who are sympathetic to utilitarianism; and
b) it would help if you got specific about who you think is arguing in this way.
Richard’s perspective is that it all boils down to utilitarianism. I am not willing to go that far, but I do think that any philosophy that tries to pretend you can’t talk about consequences is both obtuse and actually ultimately grossly immoral. What people object to in the straw utilitarian isn’t that the person is counting consequences and acting, it’s the way the straw utilitarian does it. In my view, it is incredibly helpful to be able to think about the predictable consequences of one’s actions (how Chomsky frames morality and which I think is an excellent framework), one’s duties and promises, and what actions will do for one’s own virtues and what kind of person someone will make. Foot and Richard are right that, at a minimum, you can turn any supposed categorical imperative into a hypothetical one if you can a) have a perfect grasp of the facts and b) get another person to agree that they have a particular goal. Once a doctor agrees that their goal is to minimize suffering, for example, “You should sterilize your instruments” becomes an objectively rooted categorical imperative for them. The only way they can get out of that is if they express different goals… and meta-ethics then triggers, in that we then start asking about why they have the goals they do and how they sort those goals.
So I looked at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6642460/ . I think the article does make some points that are perhaps a little unfair to utilitarians, but it also is quite thoughtful and points out that Kantian perspectives have their problems too.
The other articles I read using trolley perspectives also didn’t automatically bias any particular direction. It seems clear to me that everyone is using it like Richard is: as a way of thinking about a problem.
To be clear, I find it’s all utilitarianism all the way down because our own self-feelings (deontology) and virtues (Aristotle) are also utility functions. The flaw in classical utilitarians is not that utilitarianism is wrong per se, but that they ignore a whole ton of outcome measures (whole categories of utility) when they run their utility calculations. Kantians are just utilitarians who count different things as having utility, while ignoring or down-weighting others, exactly as their “utilitarian” opponents do; these two competing camps just differ on “which things” they put in which basket (named “relevant” and “irrelevant”). And that’s why both are wrong: the correct calculus is to account for every utility. Not cherry-pick certain kinds. Likewise virtue ethics, and every other metaethical system: it’s all utilitarianism, differing solely in what’s being counted. And the correct move is to count them all, not some isolated selection.
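One crude way to picture that claim, a toy sketch with invented categories and weights (my illustration, not any published formalism):

# Toy model: every metaethical framework scores the same outcome, differing
# only in which utility categories it counts (all categories/weights invented).
outcome = {"welfare": 5.0, "duties_kept": -2.0, "virtue_built": 1.0}

frameworks = {
    "classical utilitarian": {"welfare": 1, "duties_kept": 0, "virtue_built": 0},
    "Kantian":               {"welfare": 0, "duties_kept": 1, "virtue_built": 0},
    "virtue ethicist":       {"welfare": 0, "duties_kept": 0, "virtue_built": 1},
    "count everything":      {"welfare": 1, "duties_kept": 1, "virtue_built": 1},
}

for name, weights in frameworks.items():
    score = sum(weights[c] * outcome[c] for c in outcome)
    print(name, score)
# Each "rival" system is the same weighted sum with some weights zeroed out;
# the last row is the "count them all" calculus described above.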
Richard: I’m glad to hear you clarifying that! My read of you was that you were making an even stronger argument, one I think is actually still defensible: That not only are virtue ethics and deontic calculations still examining categories of utility, but also that you can even then count them easily against straight-forward utilitarian calculation. And I have thought that that’s probably ultimately true, but I would want to make sure I had explored every possible edge case.
What I have always thought and was happy to see someone else making this same kind of argument was that the virtue ethicists and deontological thinkers were just providing a different framework to approach problems, but that neither framework is actually very useful in and of themselves. Virtue ethics on its own can be selfishly myopic: It’s the equivalent of someone saying that they want to be happy at their job, so they will go to work doing only the things that would make them happy (and I guess procedurally better) at their job. A truly complete virtue ethicist becomes a utilitarian, because once you are acting morally proactively and caring about your community, you inevitably start trying to truly satisfy a utility equation. I’ve always felt a myopia as I read Plato and Aristotle talk about morality for that reason: They were just too primitive in their ethics to talk in the latter capacity.
Similarly, Kantian ethics seem beautifully austere at first glance, but they’re not actually very useful as an initial framework. In the vast majority of times in my life that I’ve had some kind of moral dilemma, I either had no especial duties that could be boiled down to a near-absolute or had competing duties that were not obviously conceptually of different magnitudes. In my mind, the Kantians really fall afoul of the competing duties in particular: What do I do if I have two choices, both of which would be intolerably bad if they were universalized? I have not yet read a Kantian perspective that answers that well, and the article I linked for Steven is an example of one that I think just glosses right over it. What I have noticed is that in every textbook where an implicitly Kantian perspective is applied, whether that is stated up front or not, when they get to the inevitable moral dilemmas section it… just boils down to utilitarianism (something I suspect you have found as well). Like, once you are in a situation where, say, you are a therapist who has information that borders on admitting a crime or triggering some reporting requirement for abuse, so you are now trapped between two professional duties, every analysis I’ve seen professional textbooks suggest effectively boils down to examining the specifics, erring on a specific side of caution, and basically counting the impact. That’s… utilitarianism. And it is really remarkable to see the supposed resolutions to those dilemmas move forward just by listing a bunch of abstract principles and reasons why the rules are in place: By not even being willing to admit a utilitarian calculus, they have to do one clumsily with blunt tools that weren’t made for the job.
I have found that Kantian perspectives end up mostly being about negative rather than positive duties, but even when that problem is avoided, you get your kind of bumper sticker responses: “Treat others as you would like to be treated”, “act when possible for the good of all people”, etc. Not only do most of those end up cribbing from virtue ethics anyways, but those are not very useful. Okay, I’m already doing that. Now what?
Only utilitarian perspectives, I have found, make people who are already acting as minimally moral agents be attentive to the actual tradeoffs of what they are doing. From a day-to-day perspective, the vast, vast majority of decisions that have any moral heft will be utilitarian.
What I have pointed out to the Kantians I have talked to, and I have mostly gotten concessions that this actually works, is that if you use rule utilitarianism and then just assign duty violations a near-infinite negative utility, you solve almost all of the classic utilitarian objections. (I think we’ve discussed this before as well). My favorite example to show why act utilitarianism can be flawed is, “What if you could torture one person to improve a million people’s TV viewing experience 1%?” Rule utilitarians would immediately object that the consequence of a rule where one is so callously willing to do hideously cruel things for minor benefits is itself so destructive that it vastly outweighs the TV benefit. The same applies to the classic “torture a terrorist for the nuke” scenario: I think it is inescapably true that
a) because torture is deeply unreliable (and so can generate no benefit or, worse, actually bad data) and so destructive to allow as procedure, in addition to being so harmful to the organizations and people that end up practicing it, it is still worth it to have as your organizational rule that you don’t and
b) if an individual person then exhausts all possibilities and now has to make that awful choice, you still end up, yes, taking them to court and checking, because in the instance that they were even slightly wrong, society has to account for that
Because the mundane reality that people like Sam Harris miss for ideological and personality-driven reasons is that there are no ticking nuclear time bombs. There are ticking bombs (both literal and figurative), yes, but the very nature of engaging with terror and war is that you never know enough to actually be sure that you have actually caught an opponent, that that opponent isn’t a double agent, that you couldn’t try another interrogation strategy and have it work better, that the data you will get will actually end up finding the bomb, that your opponents who don’t want you to find the bomb won’t simply move it, etc. One of the reasons why Kantian and virtue ethics perspectives are useful is that the kind of perfect knowledge people like Harris smuggle into their thought experiments (which ends up hosing those experiments) never accrues in the real world. And when you can’t be sure about your actions’ total range of consequences, you have to be attentive to your promises, your duties of action and care, and the kind of person you are going to be if you constantly make callously utilitarian active choices.
So what I have always done is to suggest that people try looking at a problem from each of the perspectives and see if that answers everything. I think the mistake is to view them as competing and totally mutually exclusive ways of thinking, rather than as different ways you can model a problem.
Indeed. This is all more or less what I explain in Open Letter to Academic Philosophy: All Your Moral Theories Are the Same.
And you smartly and rightly combine that well with the risk model of moral reasoning I have also explained elsewhere (links in the article above): as the probability of actually realizing a claimed utility drops, while the competing utility remains effectively certain, there obviously comes a point when the latter always prevails as the wiser choice. In fact, that is logically necessarily the case: as P(A) approaches zero, there is always some point at which the value of A drops below ~A. One need merely ascertain when. And when one doesn’t know, they are probably already there.
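In symbols (my own formulation of that point, writing U for utility and treating U(~A) as effectively certain):

P(A) \cdot U(A) > U(\sim A) \quad \Longleftrightarrow \quad P(A) > \frac{U(\sim A)}{U(A)}

So there is a fixed threshold probability, and as P(A) approaches zero it must eventually fall below that threshold, at which point ~A prevails.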
Minor point: I’d avoid appealing to infinite utilities, only because the concept may be incoherent (see Pascal’s Wager and the Ad Baculum Fallacy). Value has a diminishing return, so more likely approaches a finite limit (i.e. you can have infinite increases or decreases in value, but never get to more than a fixed maximum value, such that at some point, even infinite increases or decreases in value have infinitesimal effect on any utility calculus). This doesn’t affect your point too much; you can make the same points using a utility estimation confined to a limit. So I only mention it to add a way to reframe your point to avoid objections to infinite utilities.
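For instance, a minimal sketch of one such bounded utility curve (the exponential form is an arbitrary choice of mine; any function that asymptotes to a fixed maximum makes the same point):

import math

MAX_UTILITY = 1000.0  # the assumed finite limit on value
SCALE = 100.0         # assumed rate of diminishing returns

def bounded_utility(raw_value):
    # Strictly increasing but asymptotic to MAX_UTILITY: arbitrarily large raw
    # values (a stand-in for "near-infinite" duty-violation penalties, taken
    # as magnitudes) add only infinitesimal utility past a point.
    return MAX_UTILITY * (1 - math.exp(-raw_value / SCALE))

for x in [10, 100, 1000, 1000000]:
    print(x, round(bounded_utility(x), 2))
# Output climbs toward 1000 and effectively stops: a duty violation can be
# priced near the cap without invoking an actual infinity.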
Frederic Christie response from Oct. 4, 11:26
“Steven’s most recent post seems to just be talking past Richard and myself. Steven, my perception is that you seem to be discussing what you have seen as the general use of trolley problems and the state of the field of trolleyology. If that’s true, that’s fair, but
a) you’re talking to people who are sympathetic to utilitarianism; and
b) it would help if you got specific about who you think is arguing in this way.”
Of course I’m only talking about how trolley problems are used in popular discourse. That’s why talking about probabilities and risk analysis and so on is so irrelevant. The hospital transplant dilemma is not “solved” in the original post, for instance. The arguments for war and capital punishment one encounters in daily life are very much presented in the same fashion as the trolley problem(s): Life is such that there is no possibility of escaping the hard choices; that, in brief, somebody has to die. (That’s why death scenarios are so popular in trolley presentations, I think.)
I skip most links but I read some of yours. I wasn’t impressed by Foot’s creation of a “negative” duty that outweighed positive duties for obscure reasons. I thought it obvious that doing something is taking responsibility in a way not doing something doesn’t. There is an infinity of ways of not doing, which makes taking not doing into account pretty useless…apart from scenarios specially designed to impart a God-like view.
As to your a), I am more of a utilitarian myself, except that I believe the real issues in utilitarianism are about computing all utilities and externalities and opportunity costs and so on. That’s why trolley problems that in popular use wish away all the real issues are so misleading, except, again, to explain how somebody just has to die, because that’s life. Morals seem to me to be rules for living. Catastrophes where death is inevitable tend to reduce to luck. Popular discourse tends to attribute outcomes to fitness.
As to your b), again, the kinds of popular arguments that justify capital punishment or war usually invoke the trolleyology principles.
Steven: You say, “Trolley problems are designed to ignore systems in favor of individual actions/inactions; to consider only the local situation, not externalities; limit responses to either/or, rejecting the real complexities”.
Right, because what they are trying to get you to do is to see that you have a choice to make. People routinely respond to all that complexity by being myopic about it. They either descend into option paralysis or they stop being concerned about sorting the correct decision because all the decisions suck and there is so much uncertainty. A well-framed trolley problem model for your problem lets you see what “inaction” means in a context and what your utility calculations are.
You then say, “This is why trolleyology fosters the kind of analysis that can off-handedly assume the Bolshevik Revolution was a disaster while treating the failure of the German revolution (ultimately resulting in Nazi Germany, for God’s sake) as acceptable, at the very same moment a confessed trolleyologist carries on about how trolleyology forces consideration of the very things it rules out in its premises!”
But… dude. I have repeatedly pointed out how leftist revolutionary praxis fits precisely into a trolley problem. On one track, you have Tsarist Russia. On the other track, you have the Bolsheviks. Acting like the Bolsheviks were mustache-twirling villains because they pulled that switch ignores that Tsarist Russia was intolerably bad too. Status quo bias leads people to ignore what Kant pointed out in response to the French Revolution: that even failed revolutions have to be treated differently than failed status quos, because trying to fix a problem has a nobility about it, and an opportunity for learning and action, that failed status quos lack.
I am going to guess that you, Richard and I have set our standards for when a revolution is good and what kind of outcomes we want from one at different thresholds. But I suspect all of us can agree that the track we are currently on is littered with tied-up people, and to some extent by design. That doesn’t justify switching onto a worse track, or, more critically, onto a track that then prevents us from switching to a better track later (which is why Bolshevik approaches are actually bad: using flawed and immoral tactics within a flawed methodology causes you to make a worse dungeon than what preceded it, and costs you momentum you could have used to switch to a better system later at much greater benefit).
Yes, you can’t easily model this as a trolley problem.
You say, “This is reactionary nonsense, indulged precisely by dint of pretending that the inaction of not having a revolution in Germany wasn’t a moral choice! That’s precisely what trolleyology allegedly teaches, but right here we see that it doesn’t”.
Track A has ongoing poverty, ethnic issues, etc. Real problems. Track B has the Holocaust, and those problems aren’t fixed.
Super easy trolley problem.
And, yes, it is super easy even if we reintroduce into the calculus the fact that the people deciding lacked hindsight. Not only because the anarchists and socialists called it ahead of time, so no one had any excuse to think fascism wouldn’t end disastrously, but in any case even what the Nazis promised day one was so obviously evil and irrational, so ludicrously incapable of satisfying even their stated desires and goals, let alone a moral set of desires and goals, that you would never throw the switch.
But track A still leaves us with serious problems. What do you do when the economy is as fucked as Germany’s was?
You say, “As to your left wing aspirations? If you think Venezuela has had a socialist revolution that has enslaved it, implicitly endorsing, say, the death squads of Colombia, then I am not a left wing sympathizer. I think Venezuelan people are victimized more by not taking command of the economy than by the imaginary expropriation of the rich”.
I agree. But Maduro sucks ass, and in ways that hurt the rich. And what I find problematic in what you’re doing here is that you act like just because the rich are a lower-level stakeholder in most scenarios that they therefore aren’t stakeholders. They’re still people. I am leftist precisely because I am utterly convinced that even the rich would be infinitely better off with a lot less stuff clogging up their lives, working in equitable workplaces, and living in happy societies full of people who have solidarity with each other.
You then say, “And I think endorsing the economic warfare against the people under such cover is deeply immoral, because unlike the trolleyologist I think an inaction, like blindly accepting the US government’s policies, is a moral choice”.
But… again… your disagreement with the trolleyologist isn’t about the trolley!
You are not saying that the person coming to you and saying, “Well, Maduro is bad, and so economic warfare intended to deter him at the cost of his people done by reactionary institutions is better” is wrong because they missed some element of the calculus.
You straightforwardly do not want to throw their switch!
In other words, every objection you’ve made here has been factual. When it comes to Venezuela, you clearly don’t want to empower the rich again, and that is the actual intent and design of the policies that you object to… which means you are claiming that they are lying or mistaken about what is on each track! You opened with the idea that trolley problems ignore externalities, focus on local context, and reduce things to either-ors. But in none of the examples you’ve given did that actually matter. I agree with your objection, and am still saying that your objection is like saying that a hammer is a bad parachute; your attempt to argue that point has only furthered the objections Richard and I are giving you.
You have said that trolleyologists strawman utilitarians. Please stop strawmanning the problem. Richard pointed out that you can easily do a trolley problem with three tracks, if your goal is to model a situation with multiple solutions that are mutually exclusive and would be triggered at the same time (which actually isn’t how things work in the real world; trolley problems are more realistic here, because in the moment-to-moment calculus you are very often facing only two viable choices). Externalities are trivial to model: when you switch to track B, you damage the track, which will require delays for the next five trains, delays with real consequences (including potentially more accidents).
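To make that concrete, here is a minimal sketch of how such an externality could be folded into the comparison; every number and name in it is my own illustrative assumption, not anything from the literature:

```python
# Toy expected-cost comparison for a two-track trolley problem with an
# externality (damaged track delaying later trains) folded in.
# All numbers are illustrative assumptions.

COST_PER_DEATH = 1.0       # normalize: one expected death = 1 unit of cost
DELAYED_TRAINS = 5         # assumed: the damaged track delays the next five trains
P_DELAY_ACCIDENT = 0.02    # assumed: chance each delayed train causes an accident

def cost_do_nothing() -> float:
    # Leave the switch alone: the trolley kills the five workers on track A.
    return 5 * COST_PER_DEATH

def cost_switch() -> float:
    # Throw the switch: one worker dies on track B, and the damaged track
    # adds a small expected accident cost for each delayed train.
    externality = DELAYED_TRAINS * P_DELAY_ACCIDENT * COST_PER_DEATH
    return 1 * COST_PER_DEATH + externality

print(f"do nothing: {cost_do_nothing():.2f}")  # 5.00
print(f"switch:     {cost_switch():.2f}")      # 1.10
```

The externality doesn’t break the framework; it just makes the “switch” column a slightly bigger number, and you can keep adding terms until the comparison flips or doesn’t.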
And, yeah, trolley problems suck at modeling things outside a local context. You know what other tools suck at that? All of them. You can’t object to the drunk looking for his keys where the light is when everything else is pitch blackness and he has no chance of finding them anywhere else. No one has a framework that actually models global context, with numerous stakeholders all resolving their utility calculations and then responding to the inevitable externalities that they perceive for the next cycle. As with Hari Seldon in Prelude to Foundation, it is a perfectly reasonable research project to start with tools that let you actually model the local context and then move forward. Make a microcosm before you make a macrocosm. Because starting with macrocosms rarely does anything but give you the same answers your intuition would have given you in the first place, exactly because you don’t have anything else to use.
I think what you have found is that academia, despite its liberal pretensions, often defaults to status quo bias in the way it uses its tools. Stop the presses! But that isn’t an objection to trolley problems, any more than the same kind of faults coming out in, say, evolutionary psychology is a rebuttal to the broad notion of looking at evolutionary impacts on psychology and sociology (or, worse, an excuse to give up on social science entirely). It’s an objection to a specific research paradigm.
You finally say, “Lastly, when someone is still trying to sell the need to conduct the Hospital Transplant dilemma as needed for rational thought, even though it is not, not, not a genuine dilemma, they’re trying to tell you it’s raining on your shoe. That provokes a little emotion. What it doesn’t provoke is any left-wing sympathies. Nor do I think this reaction is terribly emotional, or even irrational. I reject the implicit invitation to regard myself as smarter than the normies who need enlightenment via trolleyology”.
But, again, not only are you wrong, it literally doesn’t matter. We discussed it here, from a left-wing perspective. What someone else may or may not be doing is moot.
I am telling you that I use trolley problems constantly when working with “normies”. Not because I think I am smarter, but because I did what philosophy does: give people an interesting framework to respond to problems with. Again, the empirical data on this refutes you. Look at how people respond to trolley problems. They are almost never blasé about it. People who haven’t gone through it, and faced that part of themselves, and answered what they would do, find it morally challenging. And what people do to squirm out of answering the question is try to bring back in a bunch of real-world calculus…
Which is what they do when you discuss leftist praxis with them. They will muddy the waters with supposed complications and say that those with a revolutionary ethos are oversimplifying, or that there are risks to action. While there are usually objections worth paying attention to, what you will find when you push back is that they are trying to justify pretending that it is somehow by default better not to throw the switch. Because throwing the switch is scary. Forcing them to abandon ready-made answers that don’t actually respond to the core dilemma (and are usually strawmen anyway) can make them recognize that, for example, the scale of the climate change crisis justifies embarking on some pretty big experiments to fundamentally alter our economy (whether only through reformist technological and institutional changes, or through more revolutionary ones, which is where I suspect Richard and I would disagree). There is a problem to be solved, and we need proposals. Maybe someone like potholer54 is right that “conservative” (read: regulation of markets, so… not that) approaches are quite possibly sufficient. Great, but then we need to do them, and the Republicans need to stop shrieking that incentives for green power and a Green New Deal are socialism.
The fact that it then literally lets me invoke Zinn, using the same metaphor structure, is a gigantic bonus. I think Zinn pretty solidly refutes you: He used a very simple, accessible analogy to point out that social systems have a momentum to them and that “neutrality” in that context is incoherent. It’s like trying to pretend that because you’re in an inertial reference frame you’re not moving, and then jumping out of the train as a result. You can model his argument as a trolley problem: Yes, switching tracks has consequences (at minimum, a person has to spend time in activism to try to improve things), but the track in and of itself already has disastrous consequences, and inaction isn’t morally benign.
The men who murdered Liebknecht and Luxemburg were instinctively using trolleyology, where they rejected the inaction of not killing revolutionaries. Dirty Harry uses trolleyology when he blows away one bad guy rather than let the trolley mow down others. Consider another favorite trolley problem, the ticking time bomb. Everyone using that is using trolleyology to advocate torture.
The problems that matter are the factual problems. Trolleyology’s fixation on the premise that somebody’s gotta die, and it’s moral to kill them, is useless in solving the true problems. That’s just a desired conclusion assumed as a premise.
It doesn’t matter if adding switches and probabilities and risk analysis can in principle be done, because nobody does that. They especially don’t do that in popular discourse. It would be stupid because trolleyology is not about how to analyze consequences, it’s about assuming the “necessity” to do immoral acts.
The original trolley problem is no more likely than the existence of God. But if by some evil miracle someone faced a trolley problem, the conclusion is that anyone who flinched and failed to flip the switch is a mass murderer. But if they do flip it, the victim who is killed is deemed a justifiable homicide, effectively someone whose continued life is blameworthy enough to deserve termination! The certainty that trolleyology isn’t about the blame game seems to me unjustified.
The “socialists” who supposedly got it right about the Bolshevik revolution supported the wars against China, Korea, Vietnam, etc., etc., etc. This is deeply immoral in my opinion. As for the anarchists? The anarchists got their day in Spain in 1936, and they got their way. The Stalinists weren’t even major players in Spanish politics in 1936.
Steven: You say, “Of course I’m only talking about how trolley problems are used in popular discourse. That’s why talking about probabilities and risk analysis and so on is so irrelevant. The hospital transplant dilemma is not ‘solved’ in the original post, for instance. The arguments for war and capital punishment one encounters in daily life are very much presented in the same fashion as the trolley problem(s): Life is such that there is no possibility of escaping the hard choices; that, in brief, somebody has to die. (That’s why death scenarios are so popular in trolley presentations, I think.)”
But… you keep not citing the popular discourse. This is a case where you’re going to need to be specific if you want to make a case. Who uses it this way?
But the bigger problem is that this isn’t the popular literature. Richard and I keep showing you that you can use a trolley problem framework and can poke at it, ask interesting questions, put in uncertainty, etc. Even if no one in the literature is currently doing that, that’s blaming the tool for it not being used correctly.
You say, “I skip most links but I read some of yours. I wasn’t impressed by Foot’s creation of a ‘negative’ duty that outweighed positive duties for obscure reasons. I thought it obvious that doing something is taking responsibility in a way not doing something doesn’t. There is an infinity of ways of not doing, which makes taking not doing into account pretty useless… apart from scenarios specially designed to impart a God-like view”.
There’s an infinity of ways of doing too. In fact, they pretty much line up one to one. So why is the fact that you can not do things in lots of ways relevant? I’d like you to actually lay out Foot’s case properly. Foot is on record in a very different position than you seem to be suggesting: https://academic.oup.com/jmp/article-pdf/3/3/245/2573953/3-3-245.pdf . She argues for both negative and positive duties, negative and positive rights. She has defended the doctrine of double effect. I agree that I find her focus on that negative duty unconvincing, but there is an argument to be made that placing a higher burden on initiating a causal sequence yourself than on merely picking a track is a distinction worth maintaining.
In fact, doing nothing at the right times can be extremely decisive. If you’re a President and there’s some social issue that the courts are settling, punting to the courts is a legitimate tactic, and not just for PR purposes, but because it is a good thing to reinforce that you’re not a tyrant and your opinion doesn’t matter everywhere and on everything. The best jazz involves not playing a lot of notes as well as playing some. Refusing to move when a police officer asks you to is very powerful. The entire civil rights movement used specific modes of inaction to devastating effect. Inaction, especially broadly advertised and tactically invoked inaction, is incredibly useful. It’s just not by default morally superior. So, again, I think something about your framework of thinking is just predisposing you to not think very carefully on this topic. The civil rights movement didn’t need Godlike certainty to see that violent action would not play well but civil disobedience would, and to engage with the media and discredit the racists that way. But their tactics had limits to what they could achieve, as people like Malcolm X and the Panthers pointed out.
You say, “As to your a) I am more of a utilitarian myself, except that I believe the real issues in utilitarianism are about computing all utilities and externalities and opportunity costs and so on. That’s why trolley problems that in popular use wish away all the real issues are so misleading, except to again, explain how somebody just has to die, because that’s life. Morals seems to me to be rules for living. Catastrophes where death is inevitable tend to reduce to luck. Popular discourse tends to attribute outcomes to fitness”.
But… there are tons of places where death is inevitable and it has nothing to do with luck. When we bomb a country. When we starve people. When we let the homeless freeze. I am astonished to see you try to frame most lethal scenarios as ones where human choices aren’t involved. Humans are so powerful, so in control of our environment, that lots of deaths are caused by our behavior.
And popular discourse reducing outcomes to fitness is an example of popular discourse ignoring the trolley problem. It doesn’t matter how fit the one person is: Trains kill people. Making it that extreme can remind us that fitness isn’t the only issue. Thinking it is stems from status quo bias.
Again, even if you were right, you would just be identifying people using the tool incorrectly. But… you have shown exactly no one reacting the way you’re talking about. No one likes the fact that the trolley problem makes you wish away real complexities. They buck and fight to try to get some other solution back into the game. But the point is that there isn’t one. You have to make a hard choice. And what I find bizarre is that you are so resistant to the idea that hard choices are almost the default.
You say, “As to your b), again, the kinds of popular arguments that justify capital punishment or war usually invoke the trolleyology principles”.
Do they? I haven’t seen any. The link I showed you indicates people using the trolley problem for deontological approaches which go the other way! But, okay, let’s say it does. You’re on record here as being in favor of revolutions! You agree! So, again, your problem isn’t with the trolley problem!
Dr. Carrier, I’m really curious as to whether you see the following scenario as a trolley problem.
Let’s say this Saturday morning you are trying to sleep in and you hear a persistent knock at the door that awakens you. You open the door and are greeted by a Jehovah’s Witness. He gives you his spiel about going to Heaven and how you need to be saved or risk spending eternity in Hell. Now, given the amount of research you’ve done on the subject, you are not concerned about the prospect of spending eternity in Hell. He would obviously see this decision as a trolley problem for you. But would you see this as a trolley problem for yourself?
So he hands you one of his pamphlets and you hand him a complimentary copy of JFOS. And then he is on his way to your neighbor’s house.
He encounters a young lady who confides in him that she was raised in the Church but at some point in life had “strayed from the Lord”. She listens to his spiel and is admittedly concerned at the prospect of spending eternity in Hell. I think it would be accurate to say that at this point she sees this as a trolley problem for herself.
Because once again it all starts with the mindset (knowledge and beliefs) of the subject (decision maker) for it to be an actual trolley problem for them to seriously entertain.
Of course.
For me the Trolley Problem is about the cost of ignoring the knock vs. answering it. If I know it’s usually nutters coming around, then the cost of waking is greater: I’ve wasted my time, and suffered inconvenience, on someone else’s bullshit. But in reality, I almost never get such visits, and knocks usually indicate something legitimately important. Thus the cost of not waking is probabilistically high, and the cost of waking is probably going to be less. It would therefore be unwise of me to ignore the knock. And risk theory entails taking the lowest risk when all is considered.
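A minimal sketch of that calculation, with made-up probabilities and costs standing in for the actual priors:

```python
# Expected-cost framing of "answer the knock or ignore it."
# All probabilities and costs are illustrative assumptions.

P_IMPORTANT = 0.8    # assumed: most knocks at my door are legitimately important
COST_MISSED = 10.0   # assumed cost of sleeping through something important
COST_WAKING = 1.0    # assumed cost of getting up (lost sleep, wasted time)

cost_of_ignoring = P_IMPORTANT * COST_MISSED   # 8.0: risk missing the important thing
cost_of_answering = COST_WAKING                # 1.0: you always pay the waking cost

# Risk theory: take the option with the lower expected cost.
best = "answer" if cost_of_answering < cost_of_ignoring else "ignore"
print(cost_of_ignoring, cost_of_answering, best)  # 8.0 1.0 answer
```

Note the comparison flips in a neighborhood where P_IMPORTANT is low, i.e. where it usually is the nutters knocking; that’s exactly why the prior matters.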
For the nutter at the door, what he “believes” is irrelevant. False models are not applicable to reality. We are only interested in models that actually correspond to reality. So, if the guy at the door is a fireman warning me of a very real impending case of getting burned to death lest I evacuate immediately, the Trolley problem (do nothing and burn; or do something and survive, albeit at a lower but still high cost of my home and nearly all my stuff) does indeed correspond to reality, and doing nothing is indeed the costliest choice.
As to the bullshit that is Pascal’s Wager: it’s like any con grifters attempt to run. They always set their cons up to look like Trolley Problems, for the same reasons they try to mimic real science with pseudoscience, real legal documents with fake ones, and so on. None of which discredits real science, legal documents, or Trolley Problems; to the contrary, that deceivers and the deluded want to mimic them demonstrates the value of the real thing. See Pascal’s Wager and the Ad Baculum Fallacy; which, you’ll notice, I analyze the same way I suggest for Trolley Problems, with genuine risk theory.
You seem to be confusing “some scenarios are fake” with “therefore no scenarios are real.” That’s a non sequitur. That I can invent fake uses of deductive logic does not allow concluding I have no need of, and should never rely on or trust, deductive logic. Likewise Trolley models, Game Theory, science, statistics, or anything else. No one who is analyzing the Fat Man scenario, for example, is claiming this scenario happens in reality. They are claiming it models scenarios that do, by stripping away all distraction and cutting right to the chase of what the differential costs are, and thus asking why we make the choices we do in those real-world analogs.
In other words, the point of Trolley Problems is to analyze the coherence of one’s position. If one’s position is coherent, then it should produce the same outcomes in the contrived scenarios as in the real ones. And if it doesn’t, someone has some explaining to do. Someone might try to claim the model doesn’t match any real world cases, but that’s been disproved. It matches nearly all real world cases, differing solely in scale. So that is not an available “out” anymore. One has to confront the consistency of one’s own thinking instead. And that is why Trolley Problems are crucial to understanding any moral system.
Notice something here. It doesn’t matter if Richard thinks that it’s a trolley problem. What matters to the woman is that she feels like it is. Because that tells you something about how she is going to act, and what you may need to do if you want to interact with her respectfully let alone possibly convince her to rethink her ideas.
Richard argues that false models are irrelevant, but my point here is that the problem with the person knocking isn’t that they are deploying trolley problem reasoning, making some kind of calculus weighing courtesy norms against their belief that their action might reduce harm. The problem is that they are running a useful model on a false set of assumptions. Garbage in, garbage out. But, as Richard constantly points out, that applies to everything: put false premises into a valid logical argument and you get garbage; count wrong and your Excel spreadsheet will give you statistics that don’t apply.
Moreover, notice how framing it this way makes us actually empathize with the door-to-door evangelist. That doesn’t mean agree. Because what I immediately start thinking about is:
A) Does the evangelist actually have a broad enough, deep enough perspective on what they believe to truly be able to talk to me respectfully and informedly? Or is this process just a duty they feel that they have that they are thoughtlessly carrying out for the sake of conformity or virtue signaling? If so, then maybe that can be something they can learn how to deal with even if they remain Christian.
B) Do they actually have the value they think they do? That is, how would they react to a Muslim evangelist or an atheist evangelist doing the same thing? Would they ask someone else facing the same choice as them to value courtesy more highly than they are? If so, why aren’t they?
And even from the perspective of the person being knocked at the door… there are other considerations beyond what Richard suggested. For example: do I want to have the kind of society that encourages people to talk with each other? One thing that has always struck me when it comes to door-to-door evangelists is that the omnipresent annoyance around them (sincerely felt and very often justified as it is) may reflect a kind of atomism, where we aren’t excited to hear our neighbors and interact with them. Might it be important enough for society to have a norm of friendly communities that I at least answer the door? Moreover, if I do answer the door, talk to them pleasantly, and create a space for us to have a conversation, might I at the least signal that the non-believers they interact with aren’t just passive, uninformed entities, but may be just as capable of being polite as they are? That could improve the way that people down the line, not just Buddhists and pantheists and people with my specific concerns but also atheists and agnostics, are treated in general by the community. Is that worth ten minutes of my time and the risk that they might mug me or punch me? Answering the question is worthwhile either way.
To be fair, it’s not an avoidance of talking to neighbors, but of repetitious harassment. It’s not like this is the first missionary knocking on our door to sell us snake oil. We are tired of wasting time on snake oil sales pitches. And I can’t fault anyone for wanting to cut the fat there.
Oh, sure, and that applies especially when it’s literally the same people knocking next week after the conversation ended in a stalemate last week. But I would point out that it’s somewhat (though not totally) unfair to hold each Christian (or other person doing the door-to-door evangelism) accountable for what each other one does. If X group of missionaries is rude, insistent, and disruptive, it is actually a good thing to either ignore them or tell them off, to contribute to a collective norm where such tactics don’t work. But it hardly will have that impact on Y group of missionaries who are, at least at that moment, being civil (in the long term I have found, as have folks like Theramin Trees, that the interesting discussion will tend to eventually founder on some sticking point, at which point civility tends to go out the window).
Of course, all that pales in comparison to our duty to take care of ourselves. And if we know that we are not going to be in a good headspace to communicate well because we are annoyed (or, say, don’t have enough time to actually engage with them, so all we’d be doing is taking their literature and shooing them, which will do nothing for anyone), engaging is not a good idea. So I definitely don’t mean to imply that we have a duty to always let in evangelists… just that the situation is a trolley problem precisely because there are tradeoffs.
I just do think that it is telling that there is a broad tendency, that you can see in a widespread fashion (even most Christians are annoyed by most evangelists) and that you can see outside of religion, to be so resentful of others putting themselves and the things they are passionate about out there. I think it’s worth interrogating that feeling.
Frederick wrote:
“At the end of a normal trolley problem case, you can ask everyone what they would want the conductor to do if they were one of the five people who were being barreled down upon by a train. Even if they were uncomfortable with the thought of pulling the lever, they are also probably going to be uncomfortable with the thought of them dying because someone else was unwilling to.”
That is a fair question, but as a matter of principle you need to also flip it around and ask everyone what they would want the conductor to do if they were the person who was out of harm’s way until someone took it upon themselves to make the decision to sacrifice their life for others.
I call it the invert test. The answer to the question might be the same, but it still needs to be asked.
For example, when you hear about a police shooting where the officer purportedly felt his life was threatened and killed someone in self-defense (as claimed).
In those situations people always ask, “What if it was your son/father/husband that was killed by the police?”
But the flip side to that question is, “What if the cop was your son/father/husband who didn’t come home because he failed to protect himself in a potentially life-threatening situation?”
It is just as fair a question.
I agree 100%! The thing is, though, that people almost never frame the question that way. Part of it is that the trolley problem is usually phrased asking you the question as if you are the one throwing the switch, but notice how by default we should be putting ourselves into the shoes of the people our actions affect. You will find that, commonly, this didn’t happen.
The reason I bring it up is precisely because the trolley problem, in my mind, is useful precisely because it makes the choice so extreme that it brings up in us irrational feelings of guilt. Notice how upthread Steven said that trolleyology is about assigning blame. This is straightforwardly false, and even if true in practice would not be an inherent outcome of the approach or model at all. But I have seen that reaction: The idea that people feel accused even by the question. This goes to something that I know has been studied in ethics but is well expressed in Ian Danskin’s “Angry Jack” videos (the work he did before the Alt-Right Playbook). Danskin argues that for many people ethics is purely about personal culpability and guilt. (Indeed, I think, as does Richard, that conservatives in particular are more likely to frame morality this way). It’s about whether or not you can say to St. Peter or Osiris and Anubis and Ammit and Ma’at that “I did good enough”, as if you can survive being judged. But this framing is not very useful. We shouldn’t be going into ethical considerations only thinking about whether or not we will be personally made to feel guilty. We should be going into it thinking what other people would want and need. The trolley problem helps as a tool to make this clear.
And you are right in that answering it the other way is important and useful, precisely because it can make someone start thinking. Would they want to be saved if they were the one person? The five? Would they want to live knowing that five people had to die for that to happen? Creating an opportunity for them to explore how each party in the situation can feel is critical, if you have enough time.
Let’s say that you ultimately get one person to say they want a conductor whose duty of care is such that they do not interfere and allow the train to do what it does, and another says that they want a conductor who would act decisively. Now you have the opportunity for those views to be teased out, with the people involved having had some informed thought about it.
I would only add that, in the fundamental analysis, it’s less about “what will people think of me,” and really about, “What sort of person do I want to be?” In other words, how you would feel about yourself.
I suspect a lot of this reframing of the question as “assigning blame” is more about an emotional fear of confronting oneself. Thus, people make it about “others” (whom they can then accuse of judging them) rather than, as should be the case, themselves (and confronting their own self-judgment).
I don’t agree that for everyone that is the fundamental analysis, or at least the only one. Yes, I do think that everyone is caring about their self-image, whether they are the kind of person they would like to be or not. But I think that a) people are often processing that from the implicit gaze of some unverified Other rather than themselves (getting out of that “looking glass self” mentality, as sociologists put it, takes hard work not everyone has done yet) and b) people are also very much imagining an external judge. That’s why you have the Egyptian notion of the afterlife, Christian ideas of a judge, the notion of karma… the idea that something out there counts a tally of what you did and didn’t do is quite common.
For people who haven’t managed to escape conventional morality, that means that admitting some kind of fault, doing self-examination, is to both have to admit to themselves that they have a serious flaw, and to admit that others who knew them and knew this flaw may think less of them as a result.
This leads to what Ian identifies in the Angry Jack videos: That tendency that people feel to respond to someone who just says something like “Oh, no thank you, I don’t drink” as if they are being sanctimonious, even when they’re not. To suggest that something, whether it be sexism in video games (the topic of the Angry Jack video), overconsumption in an ecological context, one’s own choices of what to eat or drink, etc. deserves some moral analysis implies that it’s a topic someone has to ask a question about, and risk being wrong about. Danskin makes the comparison to going to the dentist: If you can just pretend that the dentist didn’t tell you that you need to floss, you don’t have to.
Of course, you’re right that the process of recentering your morality to stop being so self-centered and to start thinking about what others need (which I think is a big part of the golden rule and its deployment) is then actually, ironically, the way you become the kind of person who can look at yourself in the mirror at night. I call this the third level of empathy: To bring back into thought that we, ourselves, deserve our own empathy, and are moral objects to take into account.
I think the value of the approach you have tried to develop from Socrates and Aristotle is that it then leads to jettisoning the garbage. We are trying to ask “What kind of behavior makes me the kind of person I want to be? What kind of behavior do I like in others, and how can I bring that behavior out in myself?” One thing mature moral agents have to ask is, “What should I care about what others think about me and my actions?” That response cannot be “nothing”, because that response is indistinguishable from sociopathy. It needs to be “I am not going to let arbitrary norms, misdirected shame or collective apathy or cruelty warp my morality, but I will care to at least try to act in such a way that others can feel as if I respect them”, and “I will not allow people who are misinformed or not thinking carefully to adjust my behavior or opinions, but I will take seriously the input of those who are at least generally properly informed and have thought about the issues as I recognize that I am fallible”. Answering those questions properly requires not being so anxious about what others think.
I concur. I don’t think we disagree all that much on these points.
Trolley problem: A trolley is heading towards five people: God strikes a man dead, whose falling body jars the switch, diverting the trolley into a different path that leaves the five unharmed. Is this a miracle demonstrating God’s providential goodness?
Fat man problem: God strikes dead a fat man whose body falls onto the track, which stops/diverts/derails the trolley, saving the five. Is this a miracle demonstrating God’s providential goodness?
Hospital transplant problem: There are five patients needing five vital organs to live. God strikes dead just one person, damaging only the brain, whose body then provides the five organs. Is this a miracle demonstrating God’s providential goodness?
Village hangman problem: A believer is told to choose one person to hang, whereas if they don’t, somebody will choose five people to hang. The believer prays for guidance. What does God say?
To repeat, this God is presumed to be both good and wise. If moral problems really are trolley problems, they must illuminate theology too, if only in hypotheticals.
And two extensions? All murderers are to be executed so they cannot kill again. And wars should be waged against all Red Revolutions, as they are by definition evils.
God doesn’t exist. The frequency of scenarios involving his behavior is zero. And this is therefore not a relevant application of the model.
You are being emotionally irrational again. Explain to us why.
The frequency of scenarios involving God is not zero. Just the other day I saw a woman on TV explaining that God saved her family from a disaster.
God’s hypothetical ability to see the consequences of all actions and inactions closely matches the premises of trolley problems, which elaborate a God’s-eye view of an alleged moral dilemma. The ability to know all consequences does not exist any more than God. At this point, as near as I can tell, all trolley problems are equivalent to asking, WWJD? This I think is a different expression of what is fundamentally useless about trolley problems.
My answer to that would be, a miracle, which is why I tend to avoid answering the question in general company. Refusing to answer foolish questions, or worse, loaded questions, is never irrational.
You keep missing the point.
No method gets true results when you plug in false premises. GIGO (Garbage In Garbage Out) can be layered onto all methods, from standard deductive logic to statistical science. So GIGO is the problem; not the methods.
Thus when I say “zero God scenarios exist” I am saying all premises claiming they do are false; consequently, even straightforward logic will not work for these people you are referring to. Because they have chosen GIGO. That does not denounce logic. Nor does it denounce any other method, from Game Theory to Trolley modeling.
So this is a non sequitur. You give examples of GIGO and claim to have refuted Trolley modeling. But Trolley modeling is not GIGO. You are not engaging in valid Trolley modeling if you go at it with GIGO. So you cannot point to GIGO and claim that negates all Trolley modeling. That’s simply illogical. And why you are so committed to being illogical here continues to perplex me.
Trolleyology offers zero enlightenment as to what is “GI.” The assumption that a God’s-eye view of consequences is useful when it is impossible to know what God sees, is useless at best. It’s misleading, a loaded question justifying the need for homicide, at worst. All premises involving God are false, you write. All premises involving God-like knowledge are false too, and equally GIGO.
Dude, you really are off the rails now. I already proved this strange conclusion of yours false multiple times now.
Trolleyology does not require and never even imagines God’s Eye knowledge of anything. If you do not understand this by now, you do not understand anything at all about Trolley modeling.
Steven: At this point, you’re not even talking to Richard. He said, “The frequency of scenarios involving his behavior is zero”. Responding that the frequency of scenarios in which the poorly-founded belief in Its existence is involved is non-zero isn’t remotely responsive to his point. So the village hangman problem, on its own, isn’t possible to answer unless one accepts that by “God” one means “my hypothetical imagination of a person who knows the right thing to do”. You can get the same results in the Village Hangman experiment by imagining calling up Superman for his answer.
Yes, we know we can’t know the consequences of our actions. But Richard has repeatedly pointed out that uncertainty is easy to model. I’ve given you examples: suppose the five figures on the tracks look like they might be scarecrows half the time and people half the time. The calculus doesn’t fundamentally change. But what I find super telling is that, actually, we often can have a very good confidence level for predicting the consequences of our actions. I know I am very likely to kill someone if I fire a bullet at their skull at point blank range. Trolley problems abstract out uncertainty because people use uncertainty in situations to not have to answer the question. The fact that that’s the case actually tells you something: that moral intuitions may automatically assume some level of uncertainty. Because when you bring uncertainty back in, the act-utilitarian strawman evaporates, and utilitarian analysis more closely resembles what feels intuitively correct.
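To show what I mean by the calculus not fundamentally changing, here is the scarecrow version worked out; the 50% figure is from my example above, and the rest is a toy sketch, not anything Richard committed to:

```python
# Expected-death comparison when you are only 50% sure the five figures
# on track A are people rather than scarecrows. Illustrative assumption:
# the single figure on track B is known to be a person.

P_PEOPLE_ON_A = 0.5

expected_deaths_stay = 5 * P_PEOPLE_ON_A   # 2.5 expected deaths
expected_deaths_switch = 1.0               # 1 certain death

print(expected_deaths_stay, expected_deaths_switch)  # 2.5 1.0
# 2.5 > 1, so the calculus still favors throwing the switch. The answer
# only flips once your credence that those are people drops below 1/5.
```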
In other words, at least trying to frame the problem as a trolley problem forces you to talk about the uncertainty and why you think there is some. And because people can be wrong, sometimes that uncertainty is either much greater or much less than we initially thought. That’s the whole point of thinking topics out: We check our proverbial (and sometimes literal) math and sometimes we’re wrong!
Your hostility is starting to make more sense, though. But it’s still irrational. Replace Jesus with Superman if you want (as I see Tyler did below). It is eminently fine to imagine a moral paragon, then ask how that person would act, then ask if one’s own behavior is different, and if so, why. We’ve been doing just fine exploring ethics that way for hundreds of years. But what’s even worse is that the trolley problem doesn’t rely on any Godlike certainty! You know how fucking train tracks work, man! You know that if you pull a switch, you probably will change the track, and the five people probably won’t get run over, and the one person will, and that one person will probably die! To pretend that there’s any meaningful uncertainty in that scenario is silly in the extreme. Your refusing to answer the question just seems like you have some vested interest in not doing so. It’s not a loaded question. It’s a hard question. There is a difference. Philosophy is supposed to ask us hard questions, questions whose answers we are uncomfortable with and think about and ruminate on, so that we can really drill down to our reasoning and improve it.
Your two extensions also don’t follow. Even if we were to know that 100% of murderers would kill again (which is not inherent to any of the problems you suggested, actually, so you’re just going off on a tangent), that still wouldn’t justify preemptive murder, because doing so would have such serious rule-utilitarian consequences that it would be ludicrous, to say nothing of the fact that it is wrong for us to kill people whom we have helpless in our power. We would instead need to imprison people well enough, and make all murder convictions into life sentences. But, of course, in the real world we can’t know that. So the trolley problem there can be expressed as something like: “You come across a group of six men, exactly one of whom you have good reason to believe is guilty, such that each of them is 83.3% likely to be innocent. How many do you let go?” Or any variation thereof. And I’ve pointed out to you countless times how Red Revolutions can be justified by a trolley problem (yes, you have to kill some people to get the revolution done, but the argument they are making is that you kill and hurt more people maintaining the status quo, so you throw the switch; them being wrong on that calculus is the argument, not whether it’s a trolley problem).
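The six-men variant runs on the same arithmetic. A toy sketch, with my own framing of the two extreme policies:

```python
# Six captives, exactly one of whom is guilty, so each captive is
# 5/6 ≈ 83.3% likely to be innocent. Compare the expected harms of
# the two extreme policies; the framing is an illustrative assumption.

n_captives = 6
p_innocent = 5 / 6   # ≈ 0.833 per captive

innocents_killed_if_execute_all = n_captives * p_innocent
guilty_freed_if_release_all = n_captives * (1 - p_innocent)

print(f"{p_innocent:.3f}  "
      f"{innocents_killed_if_execute_all:.1f}  "
      f"{guilty_freed_if_release_all:.1f}")
# 0.833  5.0  1.0
# Executing everyone kills five expected innocents to stop one expected
# guilty man; releasing everyone frees one expected guilty man. The
# question is which expected cost you are willing to bear.
```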
What’s galling here is that you’re now engaging in open fucking strawmen. Nowhere in the trolleyology literature is the argument made that you should always kill a murderer. If you’ve seen such a bonkers case, cite it.
I think you miss the point.
To use another example, use Superman instead of God. Since Superman has more capabilities than a human (strength, speed, laser vision, frost breath), he has more options than a human to stop the trolley, which changes which options become the best all things considered. He could use strength alone to physically stop the trolley instead of pulling the lever, or heat vision to free all those trapped, or speed, etc. In his case the options might still have some negative outcome (tracks or trolley get damaged), but with better options those negative outcomes are less impactful (property damage as opposed to harm to life).
God is generally conceived as being more powerful than Superman (even in comic books), so his options would be significantly greater (like just willing the trolley to stop before it hits anybody).
Same goes for the other examples. God can just magically cure the sick, as opposed to needing a transplant.
In other words, once we know the true cost of those decisions, it becomes a matter of choosing the best option even if it still has some negative outcome. If God’s best option really is to kill to save, then Epicurus’s dilemma comes into play.