When the Maharal's golem flooded the house, the Maharal didn't blame the golem. He couldn't. The golem had no capacity for blame, no understanding of what it had done wrong, no experience of having done anything at all. It followed instructions. The instructions were incomplete. The fault belonged to the one who gave them.

This is the moral clarity at the heart of the golem tradition: the creator bears responsibility for the creation's actions. The golem has no moral agency. It can't feel guilt, weigh consequences, or choose between right and wrong. It acts. The moral weight of those actions falls entirely on the person who shaped the clay and spoke the words.

That clarity is exactly what modern technology has lost.

The Diffusion of Responsibility

When a single person creates a single golem, accountability is straightforward. The Maharal shaped the clay. The Maharal inscribed the word. The Maharal directed the golem's actions. When something went wrong, there was one person to look at.

Modern AI systems are built by teams of hundreds or thousands. The training data was collected by one group, cleaned by another, and labeled by a third (often contract workers in a different country). The model architecture was designed by researchers. The training was run by engineers. The fine-tuning was directed by a product team. The deployment was approved by management. The business model was set by executives. The oversight was delegated to a trust and safety team.

[Figure: many thin threads stretching outward from a central point, each fading and thinning as it extends: the diffusion of responsibility across many actors.]

When the system causes harm, who shaped the clay? The answer is everyone, and therefore, in practice, no one. Responsibility diffuses across the organization until it becomes so thin that nobody feels its weight. The engineer says "I just built what was specified." The product manager says "I just defined the requirements." The executive says "I just set the strategy." Each person made a small decision. The small decisions assembled into a golem that none of them individually intended.

This is what the philosopher Hannah Arendt called the "banality of evil" in a different context: harm that results not from malice but from ordinary people performing ordinary tasks without considering the moral weight of the whole.[1] Arendt was writing about bureaucratic systems, but the pattern maps precisely onto large-scale technology development. The harm isn't caused by villains. It's caused by systems where moral responsibility is distributed so widely that it effectively disappears.

Moral Distance

The Maharal stood next to his golem. He watched it work. He saw the consequences of its actions with his own eyes. This proximity forced a kind of moral reckoning that's hard to avoid when you're standing in the flooded house.

Modern technology creates moral distance between creators and consequences. The engineer who designs a content ranking algorithm works in an office. The teenager whose mental health deteriorates from algorithmically amplified social comparison lives somewhere else entirely. The data scientist who builds a predictive model never meets the person denied a loan based on its output. The executive who approves an automated hiring system never sees the qualified candidate it filtered out.

[Figure: a long stone corridor stretching into the distance, a small figure barely visible at the far end: the moral distance between creators and the consequences of their creations.]

Moral distance doesn't eliminate responsibility. It makes responsibility easier to ignore. The Maharal couldn't pretend he didn't see the flood. An engineer can genuinely not know that their system is causing harm, because the harm happens far away, to people they'll never meet, through causal chains so long that the connection between their work and its consequences is invisible.

This isn't unique to technology. Moral distance is a feature of any large-scale system: supply chains, bureaucracies, financial markets. But technology amplifies it. A single algorithm can affect billions of people simultaneously. The ratio of creators to affected individuals is unprecedented: a team of fifty engineers can build a system that shapes the information environment of two billion users, forty million people per engineer. The moral weight per engineer is staggering, but it doesn't feel staggering, because the distance is so vast.

The "Algorithm Did It" Defense

There's a rhetorical move that has become common when AI systems cause harm: attributing agency to the system itself. "The algorithm decided." "The AI made a mistake." "The model hallucinated."

This language is a form of moral evasion. The algorithm didn't decide anything. It computed an output from inputs according to patterns it learned from training data that humans selected, using an architecture that humans designed, optimizing an objective that humans defined, deployed in a context that humans chose. Every link in that chain involves human decisions. Saying "the algorithm did it" is the modern equivalent of "the golem did it." It shifts blame from the creator to the creation, which is exactly what the golem tradition says you cannot do.
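To make that chain concrete, here is a minimal, hypothetical sketch in Python. The data, the model family (a single income threshold), the objective, and the deployment decision are all invented for illustration; the point is only that each link is a human choice.

```python
# A deliberately tiny, hypothetical "model": a loan rule learned from
# four hand-picked examples. Every name and number here is invented;
# the point is that each step is a human decision, not the model's.

# Human decision 1: which historical examples count as training data.
TRAINING_DATA = [  # (annual income, was the loan repaid?)
    (20_000, False), (35_000, False), (55_000, True), (90_000, True),
]

# Human decision 2: the model family. Here, a single income threshold.
# Human decision 3: the objective. Here, raw accuracy on the training set.
def fit_threshold(data):
    candidates = sorted(income for income, _ in data)
    def accuracy(threshold):
        return sum((income >= threshold) == repaid for income, repaid in data)
    return max(candidates, key=accuracy)

# Human decision 4: the deployment context, where an output becomes a verdict.
def approve_loan(income, threshold):
    return income >= threshold

threshold = fit_threshold(TRAINING_DATA)
print(approve_loan(50_000, threshold))  # False: the composition of
# decisions 1 through 4. "The algorithm" chose nothing along the way.
```

Scale the sketch up by a few billion parameters and the chain gets longer, but no link in it stops being a human choice.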

The Maharal would have found this absurd. The golem is clay. It has no will, no intent, no agency. Blaming the golem for flooding the house is like blaming a hammer for hitting the wrong nail. The tool did what the hand directed. The hand bears the responsibility.

This matters practically, not just philosophically. When organizations attribute agency to algorithms, they create an accountability vacuum. If the algorithm is responsible, then no human needs to change their behavior, review their decisions, or face consequences. The system becomes self-justifying: it does what it does because that's what it does. The moral question ("should it be doing this?") gets replaced by a technical question ("is it working as designed?"), and the two are not the same.

The Open Source Question

The golem tradition assumes the creator maintains a relationship with the creation. The Maharal didn't build the golem and walk away. He directed it, monitored it, and ultimately deactivated it. The creator's responsibility was ongoing.

Open source AI models complicate this picture. When a model is released publicly, it can be downloaded, fine-tuned, and deployed by anyone for any purpose. The original creators lose control over how their creation is used. A model built for research can be adapted for surveillance. A model built for creative writing can be adapted for generating misinformation. A model built for coding assistance can be adapted for finding software vulnerabilities.

Does the original creator bear moral responsibility for downstream uses they didn't intend and can't control? The golem tradition suggests yes, at least partially. The Maharal's responsibility didn't depend on the golem doing what he intended. It depended on the fact that he created it. The act of creation carries moral weight regardless of what happens afterward.[2]

But the analogy has limits. The Maharal created one golem and could, in principle, control it. An open source model can be copied infinitely. The creator's ability to act on that responsibility (to correct, constrain, or deactivate the creation) diminishes with each copy. The moral weight remains, but the practical capacity to bear it erodes.

This tension doesn't have a clean resolution. It suggests that the decision to release a powerful model publicly is itself a moral act that deserves serious deliberation, not just a technical or business decision. The Maharal didn't build the golem and leave it in the town square for anyone to use. He maintained custody. The question for AI developers is whether releasing a model into the world without ongoing custody is an act of generosity or an abdication of responsibility. It may be both.

The Maharal's Standard

The golem tradition offers a standard for moral accountability that's worth taking seriously: the creator is responsible for the creation.

This doesn't mean the creator is the only responsible party. Users, deployers, regulators, and institutions all play roles. But the creator's responsibility is primary and non-delegable. You can't build a powerful system and disclaim responsibility for what it does by pointing to the complexity of the system, the number of people involved in building it, or the unpredictability of its behavior.

The Maharal's standard is demanding. It says that before you animate the clay, you should consider what the golem might do. Before you deploy the system, you should think about who it might harm. Before you release the model, you should reckon with how it might be used. And when something goes wrong, you should look at yourself before you look at the golem.

Modern technology development often inverts this. Systems are built first and their consequences are considered later, if at all. Moral accountability is treated as a problem to be managed after deployment rather than a constraint to be respected during design. The Maharal didn't build the golem and then wonder what it would do. He built it for a specific purpose, with a specific kill switch, and he accepted that its actions were his responsibility.

That's the standard. Whether we can meet it at the scale of modern AI is an open question. But the golem tradition insists that the question be asked, and that "the algorithm did it" is never an acceptable answer.

References

[1] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963. https://www.goodreads.com/book/show/52090.Eichmann_in_Jerusalem

[2] Moshe Idel, Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid, SUNY Press, 1990. https://www.goodreads.com/book/show/102557.Golem

[3] Luciano Floridi and J.W. Sanders, "On the Morality of Artificial Agents," Minds and Machines, Vol. 14, No. 3, 2004, pp. 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d