This is Part 1 of a 7-part series exploring how the classic trolley problem manifests in modern technology.

Imagine you're standing by a railroad switch. A runaway trolley is hurtling down the tracks toward five people who will certainly die if it continues. You can pull a lever to divert the trolley onto a side track—but there's one person on that track who will die instead. Do you pull the lever?

This is the trolley problem, a thought experiment that has vexed philosophers since 1967. For decades, it remained safely theoretical—a puzzle for ethics classrooms and philosophy journals. But today, we're no longer imagining. We're building the trolley, programming its decisions, and deploying it on our roads, in our hospitals, and throughout our digital infrastructure.

The trolley problem has escaped the classroom and entered the code.

The Original Dilemma

Philosopher Philippa Foot introduced the trolley problem in her 1967 paper "The Problem of Abortion and the Doctrine of Double Effect." Her goal wasn't to solve it—it was to reveal the complexity of our moral intuitions.

The basic scenario seems straightforward: five lives versus one. Simple math suggests pulling the lever. But Foot knew it wasn't that simple. She was exploring the difference between killing and letting die, between action and omission, between intended and foreseen consequences.

Most people, when presented with the classic trolley problem, say they would pull the lever. The utilitarian calculation is compelling: five lives saved outweighs one life lost. This is the principle of utility—the greatest good for the greatest number.

But variants of the scenario, some from Foot and some from later philosophers, complicate this intuition.

The Loop Track: When Means Become Ends

Consider this variation: The trolley is heading toward five people, but the track loops back. If you divert the trolley onto the side track, it will hit one person, and that person's body will stop the trolley from looping back to kill the five.

The outcome is identical to the original scenario—five saved, one killed. But something feels different. In this case, you're not just diverting the trolley; you're using the one person as a means to save the five. The person becomes an instrument of rescue rather than an unfortunate side effect.

This distinction matters philosophically. Immanuel Kant argued that we must never treat people merely as means to an end—they are ends in themselves. The loop track variant violates this principle in a way the simple diversion doesn't.

Yet the consequences are identical. Does intention matter if outcomes are the same? This is where utilitarian and deontological ethics diverge.

The Transplant Surgeon: When Utility Becomes Absurd

Philosopher Judith Jarvis Thomson pushed the trolley problem further with a medical variant:

A surgeon has five patients, each dying from failure of a different organ—heart, lungs, kidneys, liver, pancreas. A healthy patient comes in for a routine checkup. The surgeon realizes she could kill this one healthy person, harvest their organs, and save the five dying patients.

The utilitarian math is identical to the trolley problem: five lives saved, one life lost. But almost no one thinks the surgeon should kill the healthy patient. Why not?

This variant reveals the limits of pure utilitarian thinking. It shows that we don't actually believe "the greatest good for the greatest number" justifies every action that improves the total. Context matters. Intention matters. The distinction between killing and letting die matters.

The transplant surgeon scenario also introduces the concept of rights. The healthy patient has a right not to be killed for their organs, even if it would save five others. This right constrains what we can do in pursuit of utility.

Why These Distinctions Matter for Technology

For decades, these were thought experiments—interesting puzzles with no practical application. Then we started building systems that must make these exact decisions.

Self-driving cars must decide who to save in unavoidable crashes. Medical AI must allocate scarce resources like ventilators and organs. Content moderation algorithms must choose which harms to prevent. Hiring algorithms must decide who gets opportunities. Predictive policing systems must balance public safety against civil liberties.

These aren't hypothetical trolleys. They're real systems making real decisions that affect real lives, thousands of times per second.

And here's what makes it urgent: unlike the thought experiment, these systems must decide with imperfect information, in milliseconds, at massive scale, with consequences that compound over time.

The Three Core Questions

The trolley problem and its variants reveal three fundamental questions that technology must now answer:

1. Action vs. Omission: Is There a Moral Difference?

In the original trolley problem, you must act (pull the lever) to save five but kill one. If you do nothing, five die but you didn't actively kill them. Is there a moral difference between killing and letting die?

Technology forces this question constantly. Should a self-driving car swerve to avoid pedestrians if it means crashing and killing its passenger? Should a content moderation algorithm remove legal but harmful content? Should a medical AI recommend aggressive treatment with severe side effects?

Doing nothing is still a choice. But is it a different kind of choice?

2. Means vs. Side Effects: Does Intention Matter?

The loop track variant asks whether it matters if harm is intended or merely foreseen. This is the doctrine of double effect: an action with both good and bad effects may be permissible if the bad effect is not intended, even if it's foreseen.

Technology struggles with this distinction because algorithms don't have intentions—they have objectives. When an AI hiring tool discriminates, is that an intended outcome or a side effect of optimizing for other metrics? Does the distinction even make sense for machines?
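To see how a side effect can emerge from an objective with no intention behind it, here is a small, purely hypothetical simulation. Every number and feature name in it is invented: a selection rule picks the top applicants by a single proxy score, never sees group membership, and still produces sharply different selection rates across two groups, simply because the proxy is unevenly distributed.

```python
import random

random.seed(0)

# Hypothetical illustration: the objective is "pick the highest proxy scores."
# Group membership is never an input, yet the outcome differs by group,
# a side effect of optimization rather than an intended one. All numbers are invented.

def make_applicant(group):
    ability = random.gauss(0, 1)              # both groups share the same ability distribution
    access = 1.0 if group == "A" else -1.0    # unequal historical access shifts the measurable proxy
    proxy_score = ability + 0.8 * access + random.gauss(0, 0.5)
    return {"group": group, "proxy_score": proxy_score}

applicants = ([make_applicant("A") for _ in range(500)]
              + [make_applicant("B") for _ in range(500)])

# The "optimizer": take the top 100 by proxy score, nothing else.
selected = sorted(applicants, key=lambda a: a["proxy_score"], reverse=True)[:100]

for g in ("A", "B"):
    rate = sum(1 for a in selected if a["group"] == g) / 500
    print(f"Selection rate, group {g}: {rate:.1%}")
```

Nothing in that sketch has an intention; it only has an objective.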

3. Rights vs. Utility: Are There Limits to Optimization?

The transplant surgeon variant shows that we don't actually accept pure utilitarian logic. We believe in constraints—rights, duties, prohibitions—that limit what we can do even in pursuit of good outcomes.

But technology is built on optimization. Training a machine learning model literally means searching for the parameters that best optimize some objective function. How do we encode rights and constraints into systems designed to optimize?
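One way to make that question concrete: a constraint can be encoded as a penalty inside the objective, which a large enough benefit can always outweigh, or as a hard side constraint that no benefit can override. The sketch below is a deliberately abstract illustration with invented options, scores, and penalty values, not a real allocation system.

```python
# Minimal sketch: two ways an optimizer can "respect" a right.
# All options, scores, and the penalty value are invented for illustration.

options = [
    # (name, benefit, violates_right)
    ("option_a", 3.0, False),
    ("option_b", 9.0, True),    # highest benefit, but crosses a line
    ("option_c", 4.0, False),
]

PENALTY = 4.0  # soft penalty for violating the right

def soft_constrained(opts):
    # The violation is just another cost; a big enough benefit still wins.
    return max(opts, key=lambda o: o[1] - (PENALTY if o[2] else 0.0))

def hard_constrained(opts):
    # The right is a side constraint: impermissible options are never considered at all.
    permissible = [o for o in opts if not o[2]]
    return max(permissible, key=lambda o: o[1])

print("Soft constraint picks:", soft_constrained(options)[0])   # option_b: 9 - 4 = 5 still wins
print("Hard constraint picks:", hard_constrained(options)[0])   # option_c
```

The transplant surgeon intuition is the hard version: the healthy patient's right is not a very large cost to be weighed, it is a boundary the optimizer may not cross.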

Why Tech Trolley Problems Are Harder

The original trolley problem is difficult enough. But technology introduces complications that make these dilemmas even more challenging:

Scale: One algorithm affects millions of people, not five on a track. Small biases compound into massive disparities.

Opacity: Unlike the thought experiment, real algorithmic decisions are hidden. Users don't know they're on the tracks.

Complexity: Real scenarios involve multiple stakeholders, probabilistic outcomes, and cascading effects. It's not "save five or one" but "maybe save five, probably save three, possibly harm ten others downstream." The short sketch after these complications puts rough numbers on that kind of calculation.

Irreversibility: You can't undo algorithmic harms as easily as you can stop a thought experiment.

Responsibility Diffusion: When an algorithm decides, who's accountable? The programmer? The company? The user? The algorithm itself?
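
To put rough numbers on the "Complexity" point above, here is a toy expected-value calculation. The probabilities and counts are invented; the only point is that real options are distributions over outcomes rather than the certain "five or one" of the thought experiment.

```python
# Toy expected-value arithmetic for probabilistic outcomes (all numbers invented).

def expected_net_lives(outcomes):
    # outcomes: list of (probability, lives_saved, lives_harmed)
    return sum(p * (saved - harmed) for p, saved, harmed in outcomes)

option_intervene = [
    (0.4, 5, 0),    # maybe save five
    (0.5, 3, 0),    # probably save three
    (0.1, 0, 10),   # possibly harm ten others downstream
]

option_do_nothing = [
    (1.0, 0, 5),    # the five are lost for certain
]

print("Intervene:  ", expected_net_lives(option_intervene))    # 0.4*5 + 0.5*3 - 0.1*10 = 2.5
print("Do nothing: ", expected_net_lives(option_do_nothing))   # -5.0
```

Even this tiny sketch smuggles in value judgments: whether a downstream harm counts the same as a death on the track, and whether expected value is the right way to aggregate at all.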

What This Series Will Explore

Over the next six days, we'll examine how the trolley problem manifests in real technology systems:

  • Monday: Self-driving cars that must choose who lives and dies in unavoidable crashes
  • Tuesday: Medical AI that allocates scarce resources like ventilators and organs
  • Wednesday: Content moderation algorithms that choose which harms to prevent
  • Thursday: AI hiring systems that decide who gets opportunities
  • Friday: Predictive policing that balances safety against civil liberties
  • Saturday: A synthesis of what we've learned and frameworks for moving forward

Each case reveals different aspects of the trolley problem. Each forces us to confront questions we'd rather avoid. And each shows that we're no longer debating philosophy in the abstract—we're encoding ethics into systems that shape millions of lives.

The Urgency of the Question

Philippa Foot introduced the trolley problem to explore moral intuitions, not to solve practical problems. But we've made her thought experiment real. We've built the trolley, laid the tracks, and programmed the switch.

The question is no longer "what would you do?" It's "what should the algorithm do?" And unlike the thought experiment, we can't just discuss it—we must decide, deploy, and live with the consequences.

The trolley problem was designed to reveal the complexity of ethics. Technology has revealed something else: that we're making these decisions constantly, often without realizing it, at a scale that would have been unimaginable to Foot in 1967.

Tomorrow, we'll see how this plays out on our roads, where self-driving cars must make trolley problem decisions in milliseconds. The stakes are no longer theoretical. They're life and death, encoded in silicon and deployed on our streets.


Series Navigation

  • Part 1: The Original Trolley Problem (You are here)
  • Part 2: Self-Driving Cars (Monday, Feb 9)
  • Part 3: Medical AI (Tuesday, Feb 10)
  • Part 4: Content Moderation (Wednesday, Feb 11)
  • Part 5: AI Hiring (Thursday, Feb 12)
  • Part 6: Predictive Policing (Friday, Feb 13)
  • Part 7: Synthesis and Frameworks (Saturday, Feb 14)