This is Part 2 of a 7-part series exploring how the classic trolley problem manifests in modern technology.

A self-driving car is traveling at 40 mph when its sensors detect an unavoidable collision. Five pedestrians have stepped into the crosswalk ahead. The car can continue straight, killing all five, or swerve into a concrete barrier, killing its single passenger. It has 300 milliseconds to decide.

What should the algorithm do?

This isn't a thought experiment. It's a real engineering problem that autonomous vehicle developers face today. The trolley problem has left the philosophy classroom and entered the code that controls vehicles on our roads.

The Moral Machine Experiment

In 2016, MIT researchers launched the Moral Machine, an online platform that presented people with autonomous vehicle dilemmas. The results were staggering: roughly 40 million decisions from people in 233 countries and territories, revealing how humans think self-driving cars should behave in impossible situations.

The scenarios varied: young versus old, many versus few, pedestrians versus passengers, humans versus animals, lawful versus unlawful behavior. Each scenario forced a choice about who should live and who should die.

The findings revealed deep patterns—and troubling disagreements. Most people preferred that cars spare the many over the few, the young over the old, and humans over animals. But preferences diverged sharply across cultures, particularly on questions of social status, gender, and fitness.

More disturbing: people wanted cars to minimize casualties in general, but preferred cars that would prioritize their own safety as passengers. We want utilitarian ethics for everyone else's car, but self-preservation for our own.

The Impossibility of Neutral Algorithms

Self-driving car manufacturers face an impossible task: program ethics into machines when humans can't agree on what's ethical.

Should the car prioritize its passenger? That's what most people want when they're inside the car. But it means the car might kill five pedestrians to save one passenger—a choice most people reject when they're the pedestrians.

Should the car minimize total casualties? That sounds fair until you realize it means sometimes sacrificing the passenger. Would you buy a car programmed to kill you under certain circumstances? Would you let your family ride in one?

Should the car follow traffic laws? Jaywalking pedestrians have broken the law; a passenger riding in a car that's obeying the rules hasn't. But should legal compliance determine who lives and dies? Should a car prioritize law-abiding pedestrians over jaywalkers?

These aren't just philosophical puzzles. They're engineering specifications that must be written into code, tested, and deployed on public roads.
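To make that concrete, here is a deliberately toy sketch of what competing specifications could look like once they have to become code: three of the answers above written as interchangeable cost functions. Every class, field name, and weight below is invented for illustration; no manufacturer has published decision rules like these.

```python
# Hypothetical sketch only: three competing "ethics specifications" expressed
# as interchangeable cost functions over a candidate maneuver.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Maneuver:
    passenger_risk: float      # estimated probability the passenger is harmed
    pedestrian_risk: float     # estimated probability a pedestrian is harmed
    pedestrians_lawful: bool   # were the pedestrians crossing legally?

def passenger_first(m: Maneuver) -> float:
    # Protect the passenger above all; pedestrian harm barely registers.
    return 10.0 * m.passenger_risk + 0.1 * m.pedestrian_risk

def minimize_casualties(m: Maneuver) -> float:
    # Weigh everyone's harm equally.
    return m.passenger_risk + m.pedestrian_risk

def law_weighted(m: Maneuver) -> float:
    # Discount harm to pedestrians who were breaking traffic law.
    weight = 1.0 if m.pedestrians_lawful else 0.5
    return m.passenger_risk + weight * m.pedestrian_risk

def choose(options: List[Maneuver], policy: Callable[[Maneuver], float]) -> Maneuver:
    # The "ethics" is nothing more than which cost function gets passed in here.
    return min(options, key=policy)

# Example: the policies can disagree about the very same situation.
swerve = Maneuver(passenger_risk=0.7, pedestrian_risk=0.0, pedestrians_lawful=True)
straight = Maneuver(passenger_risk=0.05, pedestrian_risk=0.9, pedestrians_lawful=True)

print(choose([swerve, straight], minimize_casualties) is swerve)   # True: fewer total casualties
print(choose([swerve, straight], passenger_first) is straight)     # True: protects the passenger
```

The numbers themselves are beside the point. What matters is that someone has to choose the cost function, and each choice answers the questions above differently.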

Cultural Divides in Moral Judgments

The Moral Machine revealed that moral preferences aren't universal—they're culturally contingent.

The study's Western cluster of countries showed stronger preferences for sparing the young over the old. Its Eastern cluster showed far less age-based discrimination. Its Southern cluster showed stronger preferences for sparing women, the fit, and higher-status individuals.

These differences matter because autonomous vehicles are global products. A car programmed with Western ethical preferences will make different decisions than one programmed with Eastern preferences. Should ethics be localized? Should a car's moral framework change when it crosses borders?

The alternative—imposing one culture's ethics globally—raises its own problems. Who decides which culture's values get encoded into the machines that will make life-and-death decisions worldwide?

Real-World Consequences

These aren't just hypothetical scenarios. Autonomous vehicles are already making decisions with fatal consequences.

In 2018, an Uber self-driving car struck and killed Elaine Herzberg in Tempe, Arizona. The car's sensors detected her but classified her incorrectly, then failed to brake in time. The safety driver was watching a video on her phone. Who was responsible? The algorithm? The safety driver? Uber? The pedestrian?

Tesla's Autopilot has been involved in multiple fatal crashes. In some cases, the system failed to detect obstacles. In others, drivers over-relied on the technology, assuming it was more capable than it was. Each crash raises questions about how much autonomy we should grant to systems that aren't fully autonomous.

These incidents reveal a gap between the trolley problem's clarity and reality's messiness. The trolley problem assumes perfect information: you know there are five people on one track and one on the other. Real autonomous vehicles operate with sensor noise, classification errors, and uncertainty. They must decide not just who to save, but whether the situation even requires a decision.

The Mercedes-Benz Controversy

In 2016, Mercedes-Benz's manager of driver assistance systems made a statement that sparked outrage: in unavoidable collision scenarios, Mercedes vehicles would prioritize protecting their passengers over pedestrians.

The logic was straightforward: passengers are customers who chose to trust the vehicle. The company has a duty to protect them. Pedestrians, while innocent, aren't in a contractual relationship with Mercedes.

The backlash was immediate. Critics argued this was exactly the wrong approach—it meant expensive cars would be programmed to kill poor pedestrians to save wealthy passengers. It turned ethics into a luxury feature.

Mercedes quickly walked back the statement, clarifying that their vehicles would attempt to minimize harm to all parties. But the controversy revealed the core tension: someone must program the ethics, and whoever does so will face criticism no matter what they choose.

Waymo's Approach: Avoiding the Choice

Waymo, the autonomous-driving company that grew out of Google's self-driving car project, takes a different approach: design vehicles that never face trolley problem scenarios.

Their strategy focuses on defensive driving, maintaining safe distances, and operating conservatively enough that unavoidable collisions don't occur. If the car can't guarantee safety, it doesn't proceed.

This sounds ideal, but it has costs. Ultra-conservative driving means slower adoption, reduced efficiency, and frustrated human drivers sharing the road. A car that always yields, always slows, always chooses the safest option might be ethical, but it's also impractical.

Moreover, "avoiding the choice" is itself a choice. Driving conservatively means accepting some level of inefficiency and inconvenience to minimize risk. That's a value judgment—prioritizing safety over efficiency—that not everyone shares.

The Problem of Imperfect Information

The trolley problem assumes certainty: you know exactly what will happen if you pull the lever or don't. Self-driving cars don't have that luxury.

Sensors can misclassify objects. A plastic bag might be identified as a pedestrian, triggering an unnecessary swerve. A pedestrian might be missed entirely. The car must decide based on probabilistic assessments: "80% confident there's a pedestrian, 60% confident swerving will avoid collision, 40% confident the swerve path is clear."

How should a car decide when it's uncertain? Should it err on the side of caution, potentially causing unnecessary accidents? Or should it require high confidence before acting, potentially missing real threats?
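One way to picture what deciding under uncertainty involves is an expected-harm comparison. The sketch below reuses the confidence figures from the example above; the harm weights and the structure of the calculation are assumptions made for illustration, not a description of any real vehicle's planning software.

```python
# Illustrative only: comparing "continue" vs. "swerve" by probability-weighted harm.
# The confidence values mirror the example above; the harm weights are invented.

P_PEDESTRIAN = 0.8      # confidence a pedestrian is really in the path
P_SWERVE_AVOIDS = 0.6   # confidence a swerve avoids that pedestrian
P_PATH_CLEAR = 0.4      # confidence the swerve path itself is clear

def expected_harm_continue() -> float:
    # Continuing straight harms the pedestrian only if one is actually there.
    return P_PEDESTRIAN * 1.0

def expected_harm_swerve() -> float:
    # Swerving can still hit the pedestrian (present but not avoided),
    # and can harm the passenger if the swerve path is not actually clear.
    hit_pedestrian = P_PEDESTRIAN * (1 - P_SWERVE_AVOIDS) * 1.0
    harm_passenger = (1 - P_PATH_CLEAR) * 1.0
    return hit_pedestrian + harm_passenger

print(f"continue: {expected_harm_continue():.2f}")  # 0.80
print(f"swerve:   {expected_harm_swerve():.2f}")    # 0.32 + 0.60 = 0.92
```

Nudge any one of those probabilities by a plausible sensor error and the answer flips, which is exactly the certainty the classic thought experiment takes for granted.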

These aren't trolley problems—they're trolley problems with fog, where you can't see clearly what's on either track.

Who Programs the Ethics?

Perhaps the most fundamental question: who should decide how autonomous vehicles behave in these scenarios?

Manufacturers? They have engineering expertise but also liability concerns and profit motives. They might prioritize decisions that minimize lawsuits over decisions that maximize ethical outcomes.

Regulators? They can impose standards, but regulations lag technology, and different jurisdictions will impose different requirements. A car legal in California might be illegal in Germany.

Users? Some suggest letting passengers choose their car's ethical framework—utilitarian, passenger-protective, or law-abiding. But this creates a market for unethical cars. Would you want to share the road with vehicles programmed to prioritize their passengers at all costs?

Algorithms themselves? Machine learning systems could be trained on human moral judgments. But whose judgments? The Moral Machine showed we disagree profoundly. Training on biased data produces biased algorithms.

There's no good answer because there's no neutral position. Every choice about who programs the ethics is itself an ethical choice about who gets to decide.

The Illusion of Autonomy

The term "autonomous vehicle" suggests independence—cars that drive themselves without human input. But the trolley problem reveals that these vehicles aren't autonomous in any meaningful sense. They're executing human decisions, made in advance, by programmers who encoded specific values into the system.

When a self-driving car chooses to save five pedestrians by sacrificing its passenger, that's not the car's choice. It's the choice of whoever programmed the car's decision framework. The car is just executing instructions.

This matters for accountability. When an autonomous vehicle kills someone, we can't blame the car. We must trace responsibility back to the humans who made the decisions the car executed. But those humans made their choices years earlier, in conference rooms, without knowing the specific circumstances where their code would run.

What This Reveals About Tech Ethics

The self-driving car dilemma reveals something broader about technology ethics: we're encoding moral decisions into systems that will execute them at scale, in contexts we can't predict, with consequences we can't foresee.

Every autonomous vehicle on the road is a trolley problem waiting to happen. Every one embodies someone's answer to impossible ethical questions. And unlike the thought experiment, these answers have real consequences.

Tomorrow, we'll see how similar dilemmas play out in healthcare, where algorithms must decide who gets scarce medical resources. The stakes remain life and death, but the context shifts from roads to hospitals, from split-second decisions to systematic allocation.

The trolley problem on wheels shows us that we're not just building self-driving cars. We're building moral agents that will make life-and-death decisions according to values we must choose, encode, and deploy—whether we're ready or not.


Series Navigation

  • Part 1: The Original Trolley Problem (Sunday, Feb 8)
  • Part 2: Self-Driving Cars (You are here)
  • Part 3: Medical AI (Tuesday, Feb 10)
  • Part 4: Content Moderation (Wednesday, Feb 11)
  • Part 5: AI Hiring (Thursday, Feb 12)
  • Part 6: Predictive Policing (Friday, Feb 13)
  • Part 7: Synthesis and Frameworks (Saturday, Feb 14)