Philippa Foot introduced the trolley problem in 1967 to probe a specific question about the doctrine of double effect.[1] Nearly sixty years later, the thought experiment has become something larger: a lens for examining how we reason about moral trade-offs in systems that affect millions of people. Philosophical tools spanning centuries, from Aquinas's doctrine of double effect to Rawls's veil of ignorance to Williams's notion of moral residue, were not designed for algorithmic systems. But they turn out to be remarkably well-suited to the moral challenges those systems create.

This post draws together six concepts that, considered as a whole, suggest what it means to build technology responsibly in a world where moral questions resist clean answers.

What the Six Concepts Reveal Together

Each concept addresses a different dimension of the same underlying problem: how do we make moral choices when the stakes are high, the information is incomplete, and reasonable people disagree?

The doctrine of double effect asks whether the harm a system causes is a side effect of pursuing a legitimate goal or the mechanism through which the goal is achieved. The veil of ignorance asks whether the system's designers would accept its decisions if they didn't know which position they'd occupy. Moral luck reveals that we judge systems by their outcomes even when outcomes depend on factors beyond anyone's control. The ethics of inaction shows that choosing not to deploy a beneficial system is itself a moral decision with consequences. The question of moral patiency asks whether the systems we build might eventually deserve moral consideration of their own. And moral pluralism demonstrates that competing moral frameworks capture genuinely different aspects of what matters, with no meta-framework to adjudicate between them.

[Image: six colored threads converging into a single braid, each thread distinct but contributing to a stronger whole.]

No single concept provides a complete answer. But together they point toward a disposition rather than a formula: the capacity to see the moral dimensions of technical decisions before those decisions become crises. That capacity has a name. Aristotle called it phronesis, practical wisdom.[2] In the context of technology, it might be called moral imagination.

Why Frameworks Are Necessary but Insufficient

A common instinct in engineering culture is to solve ethics the way we solve technical problems: find the right framework, encode it, deploy it, move on. The six concepts in this series suggest why that approach falls short.

The doctrine of double effect requires judgment about proportionality, and no formula can determine when collateral harm is "proportionate." The veil of ignorance requires genuine empathy, the ability to inhabit a perspective that is not your own, not merely run a thought experiment. Moral luck means outcomes will surprise you regardless of how good your framework is. The ethics of inaction means the framework must account for what you don't build, not just what you do. Moral patiency means the set of beings who deserve moral consideration may change over time. And moral pluralism means any single framework will miss something important.

[Image: a checklist on paper beside an open window onto a complex landscape beyond.]

Frameworks give us tools for reasoning. They do not give us the capacity to see when reasoning is needed. A checklist can tell you which questions to ask. It cannot tell you that a question needs asking in the first place. That recognition, the moment when a technical decision reveals itself as a moral one, is where moral imagination begins.

Virtue Ethics for Builders

Aristotle argued that ethics is not primarily about rules or consequences. It is about character.[2] A person of good character doesn't need a rulebook for every situation. They have developed virtues, habitual dispositions to act well, through practice and reflection. The virtues are not innate. They are cultivated, the way a musician cultivates skill: through repetition, feedback, and attention.

Applied to the practice of building technology, this suggests several dispositions worth cultivating:

Intellectual humility: the recognition that you might be wrong, that your framework might be incomplete, that the people affected by your system might see things you don't. The veil of ignorance is a tool for cultivating this virtue. So is the experience of having your assumptions challenged by someone in a position you hadn't considered.

Moral attentiveness: the habit of noticing when a technical decision has ethical implications, before someone else points it out. This is not a skill that is easily automated. It requires the kind of pattern recognition that comes from studying cases where systems caused harm, and asking what the builders could have seen earlier.

[Image: a craftsperson's hands shaping clay with careful attention.]

Proportional judgment: the ability to distinguish between minor concerns and serious moral risks, and to respond appropriately to each. The doctrine of double effect is, at its core, a framework for proportional judgment. Not every side effect is a moral crisis. But some are, and the capacity to tell the difference matters.

Courage: the willingness to raise ethical concerns even when it is inconvenient, expensive, or professionally risky. Moral imagination without the courage to act on it is merely spectatorship.

Practical Cultivation

These dispositions are not abstract ideals. They can be practiced.

Before launching a system, conduct a pre-mortem with moral dimensions: ask not just "what could go wrong technically?" but "who could be harmed, and how?" Imagine the worst-case moral outcome and work backward to identify what would have to be true for it to occur.

Regularly and deliberately inhabit the positions of different stakeholders. Not just users, but non-users, people excluded by the system, people downstream of its effects, people who cannot opt out. The veil of ignorance is a structured version of this exercise, but the habit can be practiced informally in any design review.

Create processes for red-teaming value assumptions. Just as security teams probe for vulnerabilities, teams building consequential systems can probe for hidden value judgments: what does this system optimize for? Whose values are missing? What would someone with a different moral framework say about this design?

Study real cases where algorithmic systems caused harm, not to assign blame, but to develop pattern recognition. The more cases you have internalized, the more likely you are to recognize similar patterns in your own work. Moral imagination, like any form of expertise, is built on a foundation of studied examples.[3]

And normalize moral uncertainty within teams. Create cultures where saying "I'm not sure this is right" is valued rather than punished. Moral certainty can be a sign of insufficient reflection. The practitioners who arguably take ethics most seriously tend to be the ones who remain uncomfortable with their choices, not because they are paralyzed, but because they understand that comfort and correctness are different things.

The Ongoing Conversation

The trolley problem was never meant to be solved. Foot designed it to probe specific questions about the doctrine of double effect, but it has since become a tool for revealing the structure of our moral disagreements: where our intuitions conflict, where our frameworks diverge, and where we need to make choices we cannot fully justify.[1] That is exactly what it does for technology.

Algorithmic systems make moral choices at a scale and speed, and with an opacity, that Foot could not have imagined in 1967. The question is not whether we will face these dilemmas. We already do, constantly. The question is whether we will face them with moral imagination or moral blindness.

Moral imagination is not a solution. It is a practice: ongoing, imperfect, and essential. It does not tell you the right answer. It gives you the capacity to see that there is a question, to hold multiple frameworks in mind simultaneously, and to make choices with your eyes open to what is gained and what is lost.

The trolley is still moving. The tracks still diverge. And the most important thing we can develop is not a better algorithm for pulling the lever. It is the wisdom to see the lever, understand the stakes, and choose with the awareness that every choice, including the choice to do nothing, carries moral weight.

References

[1] Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect," Oxford Review, No. 5, 1967, pp. 5–15. Reprinted in Foot, Virtues and Vices, Oxford University Press, 2002.

[2] Aristotle, Nicomachean Ethics, particularly Books II and VI on virtue and practical wisdom (phronesis). Translated by Terence Irwin, Hackett Publishing, 1999.

[3] Mark Johnson, Moral Imagination: Implications of Cognitive Science for Ethics, University of Chicago Press, 1993.