A familiar philosophical conundrum goes roughly as follows:
You are standing by a trolley track which goes down a hill, next to a fork in the track controlled by a switch. You observe, uphill from you, a trolley that has come loose and is rolling down the track. Currently the switch will send the trolley down the right branch of the fork. Four people are sitting on the right branch, unaware of the approaching trolley, too far for you to get a warning to them.
One person is sitting on the left branch. Should you pull the switch to divert the trolley to the left branch?
The obvious consequentialist answer is that, assuming you know nothing about the people involved and that you value human life, you should, since pulling the switch means one random person killed instead of four. Yet to many people that seems the wrong answer, possibly because they feel responsible for the result of changing things but not for the result of failing to do so.
In another version of the problem, you are standing on a balcony overlooking the trolley track, which this time has no fork but has four people whom the trolley, if not stopped, will kill. Standing next to you is a very overweight stranger. A quick mental calculation leads you to the conclusion that if you push him off the balcony onto the track below, his mass will be sufficient to stop the trolley. Again you can save four lives at the cost of one. I suspect fewer people would approve of doing so than in the previous case.
One possible explanation of the refusal to take the action that minimizes the number killed starts with the problem of decentralized coordination in a complicated world. No individual can hope to know all of the consequences of every choice he makes. So a reasonable strategy is to separate out some subset of consequences that you do understand and can choose among and base decisions on that. A possible subset is "consequences of my actions." You adopt a policy of rejecting actions that cause bad consequences. You have pushed out of your calculation what will happen if you do not act, since in most cases you don't, perhaps cannot, know it—the trolley problem is in that respect artificial, atypical, and so (arguably) leads your decision mechanism to reach the wrong answer. A different way of putting it is that your decision mechanism, like conventional legal rules, has a drastically simplified concept of causation in which action is responsible as a cause, inaction is not.
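For a reader who wants that heuristic spelled out mechanically, here is a minimal sketch in Python; the function names and casualty numbers are my own illustrative assumptions, not anything drawn from the philosophical literature.

    # A toy contrast between full consequentialism and the "action-only"
    # heuristic described above. Every name and number here is an
    # illustrative assumption, not a standard formalization.

    def consequentialist_choice(deaths_if_act, deaths_if_abstain):
        """Weigh the full outcome of both options: act iff acting kills fewer."""
        return "act" if deaths_if_act < deaths_if_abstain else "abstain"

    def action_only_choice(deaths_my_action_causes):
        """Simplified causation: count only deaths my own action causes;
        deaths that follow from inaction drop out of the calculation."""
        return "abstain" if deaths_my_action_causes > 0 else "act"

    # First trolley case: pulling the switch kills one; doing nothing kills four.
    print(consequentialist_choice(deaths_if_act=1, deaths_if_abstain=4))  # act
    print(action_only_choice(deaths_my_action_causes=1))                  # abstain

The two rules differ only in whether the consequences of inaction enter the calculation, which is why they disagree on the first trolley case.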
I do not know if this answer is in the philosophical literature, but it seems like one natural response from the standpoint of an economist.
Let me now add a third version. This is just like the second, except that you do not think you can stop the trolley by throwing the stranger onto the track—he does not have enough mass. Your calculation implies, however, that the two of you together would be sufficient. You grab him and jump.
The question is now not whether you should do it—most of us are reluctant to claim that we are obliged to sacrifice our lives for strangers. The question is, if you do do it, how third parties will regard your action. I suspect that many more people will approve of it this time than in the previous case, even though you are now sacrificing more, including someone else's life, for the same benefit. If so, why?
I think the answer may be that, when judging other people's actions, we do not entirely trust them. We suspect that, in the previous case, the overweight person next to you may be someone you dislike or whose existence is inconvenient to you. When you take an action that injures someone for purportedly benevolent reasons, we suspect the motive may be self-interested and the claim dishonest. By being willing to sacrifice your own life as well as his, you provide a convincing rebuttal to such suspicions.
All of which in part comes from thinking about my response to the novel Red Alert, on which the movie Dr. Strangelove was based. In both versions, a high-ranking Air Force officer sets off a nuclear attack on the Soviet Union. In the movie, he is crazy. In the book, he is a sympathetic character. He has good reason to regard the idea of Soviet conquest with horror, having observed atrocities committed by Soviet troops in Germany at the end of WWII. He has concluded, for all we know correctly, that a unilateral nuclear attack by the U.S. will succeed—will destroy enough of the Soviet military so that the counterattack will not do an enormous amount of damage to the U.S. He has also concluded that the balance of power is changing, that in the near future the U.S. will no longer be able to succeed in such an attack, and that in the further future the USSR will triumph.
Under those circumstances, his choice is not obviously wrong. It can, indeed, be seen as the consequentialist choice in the trolley problem—with the number of lives at stake considerably expanded.
But what makes it sufficiently believable to make him a sympathetic character is that part of his plot requires him to commit suicide in order to make sure he cannot be forced to give up the information that would let his superiors recall the bombers he has sent off. The fact that he is willing to pay with his own life to do something he considers important enough to justify killing a large number of people makes his reaching that judgment much more believable than it would otherwise be, and makes his act feel, in consequence, more excusable, perhaps even right.
As in my final trolley example.
One further point occurs to me. My guess is that, on average, people who think of themselves as politically left are more likely than others to accept the consequentialist conclusion to the trolley problem—and less likely than others to approve of the decision made by the air force officer in Red Alert. Readers' comments confirming or rejecting that guess are invited.