The Trolley Problem Has No Solution (And That's the Point)
The trolley problem was introduced by the philosopher Philippa Foot in 1967. Nearly sixty years later, philosophers have not solved it. Utilitarians say pull the lever: five lives outweigh one. Deontologists say it depends on whether you're using the one person as a means or merely foreseeing their death as a side effect. Virtue ethicists say it depends on the character of the person deciding. Care ethicists say it depends on the relationships involved. No framework wins. No consensus has emerged, despite decades of sophisticated argument.[1]
This is not a failure of philosophy. It may be the most important thing the trolley problem teaches us.
The Impossibility Theorem of Ethics
In 1951, the economist Kenneth Arrow proved that no method of aggregating individual ranked preferences over three or more options into a group ranking can simultaneously satisfy a small set of reasonable fairness criteria.[2] Arrow's impossibility theorem showed that the problem wasn't finding the right voting system; it was that the criteria themselves conflict. You can have some of them, but not all of them at once.
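A minimal sketch makes the flavor of the conflict concrete. The example below is the classic Condorcet cycle rather than Arrow's theorem itself, with three hypothetical voters and three options: every individual ranking is perfectly consistent, yet pairwise majority voting produces a group preference that loops.

```python
# The classic Condorcet cycle: an illustration of the kind of conflict
# Arrow formalized, not the theorem itself. Voters and options are
# hypothetical.
voters = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters rank option x above option y."""
    wins = sum(ballot.index(x) < ballot.index(y) for ballot in voters)
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# A beats B, B beats C, and C beats A: the group ranking cycles even
# though every individual ranking is transitive.
```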
Algorithmic fairness faces a strikingly similar constraint. In 2017, researchers demonstrated that three widely used fairness criteria in machine learning (demographic parity, equalized odds, and predictive parity) cannot all be satisfied simultaneously when base rates differ between groups.[3] This is not a technical limitation waiting for a better algorithm. It is a mathematical proof that competing fairness definitions are incompatible under common real-world conditions.
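A small numeric sketch, with made-up counts, shows the tension rather than the formal proof. The two groups below differ only in base rate; the classifier has the same precision in both (predictive parity holds), and as a direct consequence its false positive rates and positive prediction rates diverge.

```python
# Hypothetical confusion-matrix counts for two groups with different base
# rates, chosen only to make the arithmetic clear; an illustration of the
# trade-off, not the impossibility proof itself.
def rates(tp, fp, fn, tn):
    """Fairness-relevant rates for one group's confusion matrix."""
    n = tp + fp + fn + tn
    return {
        "pred_pos_rate": (tp + fp) / n,   # compared by demographic parity
        "tpr": tp / (tp + fn),            # compared by equalized odds ...
        "fpr": fp / (fp + tn),            # ... along with this
        "ppv": tp / (tp + fp),            # compared by predictive parity
    }

# Group A: base rate 50% (500 of 1000 truly positive).
group_a = rates(tp=400, fp=100, fn=100, tn=400)
# Group B: base rate 20% (200 of 1000 truly positive), same PPV as group A.
group_b = rates(tp=160, fp=40, fn=40, tn=760)

for name, g in [("A", group_a), ("B", group_b)]:
    print(name, {k: round(v, 3) for k, v in g.items()})
# Both groups have PPV = 0.8, so predictive parity holds. But the false
# positive rates (0.2 vs 0.05) and prediction rates (0.5 vs 0.2) differ,
# so equalized odds and demographic parity fail. With unequal base rates
# and an imperfect classifier, something has to give.
```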
The trolley problem operates in the same territory. Utilitarianism, deontology, virtue ethics, and care ethics are not competing answers to the same question. They are different questions, each capturing something morally real, each blind to something the others see. The philosopher Isaiah Berlin argued that some values are genuinely incommensurable: liberty and equality, justice and mercy, individual rights and collective welfare exist on different scales and cannot be reduced to a single metric.[4]
If Berlin is right, then the search for the "correct" moral framework is misguided in the same way that the search for the "correct" fairness metric is misguided. There is no single answer because the values themselves conflict.
What This Means for Algorithms
The practical consequence is significant. When we encode one moral framework into an algorithm and call it "fair" or "ethical," we are not resolving a moral disagreement. We are implementing one side's answer and hiding the choice behind technical language.
A medical triage algorithm that maximizes total life-years saved is utilitarian. One that gives every patient an equal chance regardless of prognosis is egalitarian. One that prioritizes the worst-off patients is prioritarian. Each reflects a defensible moral position. Each excludes the others. And the choice between them is not a technical decision that engineers can make by optimizing a loss function. It is a moral and political decision about whose values get encoded.
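A schematic sketch, with invented patients and fields, shows how directly these frameworks become code: the same inputs, three different orderings, and nothing in the implementation that can adjudicate between them.

```python
# A schematic sketch, not a real triage protocol. Patient fields and
# numbers are hypothetical, chosen only to show that the same inputs
# produce different queues under different moral frameworks.
import random
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    expected_life_years: float   # prognosis if treated (hypothetical)
    severity: float              # higher = currently worse off (hypothetical)

patients = [
    Patient("P1", expected_life_years=40, severity=0.3),
    Patient("P2", expected_life_years=5,  severity=0.9),
    Patient("P3", expected_life_years=20, severity=0.6),
]

def utilitarian(ps):             # maximize total life-years saved
    return sorted(ps, key=lambda p: p.expected_life_years, reverse=True)

def egalitarian(ps, seed=0):     # every patient gets an equal chance
    ps = list(ps)
    random.Random(seed).shuffle(ps)
    return ps

def prioritarian(ps):            # treat the worst-off first
    return sorted(ps, key=lambda p: p.severity, reverse=True)

for policy in (utilitarian, egalitarian, prioritarian):
    print(policy.__name__, [p.name for p in policy(patients)])
# Three defensible queues, three different patients at the front. Choosing
# among them is the moral decision; the code only executes it.
```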
The same applies to content moderation. A platform that maximizes free expression and one that minimizes harm to vulnerable users are both pursuing legitimate goals. The tension between them is not a bug to be fixed; it is a genuine conflict between values that reasonable people weigh differently. Calling one approach "the policy" doesn't dissolve the disagreement. It just determines who wins.
The utilitarian temptation is particularly strong in algorithmic systems because utilitarianism is arguably the most computable moral framework. Maximizing a measurable outcome is what optimization algorithms do. But computability is not a moral argument. The fact that we can optimize for total life-years, engagement, or accuracy doesn't mean those are the right things to optimize for. The framework's tractability is a feature of mathematics, not a sign of moral correctness.
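A two-line sketch, with made-up outcome numbers, underlines the point about tractability: the optimizer is equally happy to maximize the total or the minimum, and nothing in the code says which objective is morally right.

```python
# Hypothetical life-years per person under two plans; the numbers are
# invented purely to show that either objective is trivially computable.
outcomes = {"plan_1": [30, 2, 2], "plan_2": [12, 11, 10]}

best_total   = max(outcomes, key=lambda k: sum(outcomes[k]))  # utilitarian: plan_1
best_minimum = max(outcomes, key=lambda k: min(outcomes[k]))  # maximin: plan_2
print(best_total, best_minimum)
# Both objectives take one line; the ease of writing either says nothing
# about which is the right thing to maximize.
```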
Moral Residue
The philosopher Bernard Williams explored what is sometimes called "moral residue": the idea that the appropriate response to a tragic choice is not satisfaction that you optimized correctly, but a recognition that something of value was lost, even if the choice was the best available one.[5]
In the trolley problem, even if pulling the lever is the right choice, one person still dies. That death is not erased by the fact that five were saved. Something was lost. The morally serious response is to feel the weight of that loss, not to treat it as a solved equation.
Algorithms don't feel moral residue. They optimize, execute, and move on. When a triage algorithm deprioritizes a patient, there is no grief in the system. When a content moderation algorithm removes speech, there is no regret. When a hiring algorithm screens out a candidate, there is no awareness that a person's future was altered.
This absence matters. If moral residue is part of what it means to take a moral choice seriously, then systems that make moral choices without experiencing moral residue are, in a meaningful sense, not taking those choices seriously. The question is whether the humans who build and deploy those systems are willing to carry the moral weight that the algorithms cannot.
Living with Moral Pluralism
If the trolley problem has no solution, and if competing moral frameworks capture genuinely different aspects of what matters, then the goal of algorithmic ethics cannot be to find the right answer. It must be something more modest and more honest: to build systems that acknowledge moral uncertainty, make their value trade-offs transparent, and remain open to revision.
This might mean presenting multiple options rather than one "optimal" answer. It might mean making the value trade-offs explicit rather than burying them in an objective function. It might mean allowing different stakeholders to weight values differently, or building in mechanisms for revisiting moral choices as circumstances and understanding change.
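One way to make that concrete is sketched below, with hypothetical option scores and stakeholder weightings: the trade-offs live in named, inspectable weights rather than inside a single objective, and the system reports how the ranking shifts with them instead of declaring one answer optimal.

```python
# Hypothetical options scored along three values, and hypothetical
# stakeholder weightings; the point is that both are explicit and
# contestable rather than buried in one loss function.
options = {
    "option_1": {"total_benefit": 0.9, "equal_treatment": 0.2, "worst_off": 0.1},
    "option_2": {"total_benefit": 0.5, "equal_treatment": 0.8, "worst_off": 0.7},
}

profiles = {
    "utilitarian_lean":  {"total_benefit": 0.8, "equal_treatment": 0.1, "worst_off": 0.1},
    "prioritarian_lean": {"total_benefit": 0.2, "equal_treatment": 0.2, "worst_off": 0.6},
}

def ranking(weights):
    score = lambda vals: sum(weights[k] * vals[k] for k in weights)
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

for profile, weights in profiles.items():
    print(profile, ranking(weights))
# Different weightings rank the options differently. Surfacing every
# ranking, with its weights, invites deliberation instead of hiding the
# choice behind a single "optimal" answer.
```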
An algorithm that claims to have solved the trolley problem may be more dangerous than one that admits it cannot. The first hides a contested moral choice behind a veneer of technical authority. The second invites the scrutiny and deliberation that contested moral choices deserve.
The trolley problem has no solution. That is not a reason to stop thinking about it. It is the reason to keep thinking about it, carefully, honestly, and with the humility that genuine moral complexity demands.
References
[1] Judith Jarvis Thomson, "The Trolley Problem," The Yale Law Journal, Vol. 94, No. 6, 1985, pp. 1395–1415. https://doi.org/10.2307/796133
[2] Kenneth Arrow, Social Choice and Individual Values, John Wiley & Sons, 1951. Second edition, Yale University Press, 1963.
[3] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, "Inherent Trade-Offs in the Fair Determination of Risk Scores," Proceedings of Innovations in Theoretical Computer Science (ITCS), 2017. https://arxiv.org/abs/1609.05807
[4] Isaiah Berlin, "Two Concepts of Liberty," in Four Essays on Liberty, Oxford University Press, 1969.
[5] Bernard Williams, "Ethical Consistency," Proceedings of the Aristotelian Society, Supplementary Volumes, Vol. 39, 1965, pp. 103–124. See also Williams, Moral Luck: Philosophical Papers 1973–1980, Cambridge University Press, 1981.