Learning to Live with Golems: Wisdom for an Age of Artificial Servants
We started this week in a rabbi's workshop in Prague, watching the Maharal shape clay into something that could walk, work, and protect a community. We end it in a world where billions of people interact with golems every day, mostly without thinking of them that way.
The golem tradition is roughly two thousand years old. The technology it describes, creating powerful servants from raw material and animating them with language, is roughly two years into its most dramatic acceleration. The gap between the tradition's wisdom and our current practice is where the most important questions live.
What the Week Revealed
Each post in this series mapped a different face of the golem pattern in modern technology.
The spec-vs-intent problem showed that the golem's literal obedience is the oldest bug in existence. Software does what you said, not what you meant. The Mars Climate Orbiter, the Flash Crash, every edge case that slipped through a specification: each is a version of the Maharal telling the golem to fetch water and watching the house flood. The gap between instruction and intent is structural. Better specifications narrow it. Nothing closes it entirely.
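To make the gap concrete, here is a minimal sketch in the spirit of the Mars Climate Orbiter loss; the function names and numbers are invented for illustration. The call does exactly what it was told, and that is precisely the problem.

```python
# A minimal sketch of the instruction-vs-intent gap, in the spirit of the
# Mars Climate Orbiter loss. All names and numbers here are illustrative.

LBF_S_TO_N_S = 4.44822  # pound-force seconds to newton-seconds

def apply_correction(impulse_n_s: float) -> None:
    """Flight software: the spec says newton-seconds. The code has no way
    to know whether the caller meant that."""
    print(f"Applying correction: {impulse_n_s:.1f} N*s")

# Ground software computes the impulse in pound-force seconds...
impulse_lbf_s = 120.0

# ...and the call below is "correct" in every way the machine can check.
# It does exactly what it was told, off by a factor of ~4.45 from the intent.
apply_correction(impulse_lbf_s)

# Narrowing (not closing) the gap: make the unit part of the interface.
apply_correction(impulse_lbf_s * LBF_S_TO_N_S)
```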
The alignment problem showed that the gap becomes dangerous as systems become more capable. Goodhart's Law, reward hacking, the off-switch problem: each is a consequence of building powerful systems that pursue objectives without understanding them. The Maharal inscribed "emet" on the golem's forehead and built the kill switch into the design. Modern AI development often builds capability first and safety second, inverting the Maharal's priority.
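A toy example of Goodhart's Law makes the inversion visible. Everything here is invented: the proxy metric is "tickets closed," the intent is "problems solved," and a policy graded only on the proxy maximizes it perfectly while accomplishing nothing.

```python
# A toy illustration of Goodhart's Law, not any production system.
# The proxy metric is "tickets closed"; the intent is "problems solved".

tickets = [{"id": i, "solved": False} for i in range(10)]
actions_taken = []

# A policy graded only on the proxy learns to game it:
for ticket in tickets:
    actions_taken.append(("close", ticket["id"]))  # the metric goes up
    # ticket["solved"] is never touched             # the intent is ignored

proxy_score = sum(1 for action, _ in actions_taken if action == "close")
true_score = sum(ticket["solved"] for ticket in tickets)

print(f"proxy reward (tickets closed): {proxy_score}")  # 10 -- looks perfect
print(f"actual problems solved:        {true_score}")   # 0
```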
The automation and labor post showed that the golem was built to work, and the question of who benefits from that work is as old as the golem itself. The Luddites understood that the issue isn't whether the machine can do the job. It's who captures the value. The productivity paradox, the transfer of labor to consumers, the gig economy's algorithmic management, the displacement of creative workers: each raises the question the Maharal answered before he shaped the clay. Who is this golem serving?
The many-golems post showed that the most interesting frontier isn't a single, more powerful golem. It's workshops of golems that collaborate, check each other's work, and produce emergent capabilities. Ensemble methods, multi-agent coding systems, constitutional AI, orchestrated research pipelines: each demonstrates that many limited golems, well-coordinated, can exceed what any single golem achieves alone. The potter's craft is evolving from shaping clay to designing collaboration.
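The arithmetic behind the workshop is worth seeing once. A minimal sketch, assuming the judges' errors are independent (the load-bearing assumption in any ensemble): fifteen golems that are each right 65% of the time, voting by majority, are right nearly 90% of the time.

```python
# A minimal sketch of why many limited golems can beat one: majority
# voting over weak judges, assuming their errors are independent.

import random

def weak_judge(truth: bool, accuracy: float = 0.65) -> bool:
    """One golem: right 65% of the time, wrong the rest."""
    return truth if random.random() < accuracy else not truth

def majority_vote(truth: bool, n: int = 15) -> bool:
    """Fifteen golems vote; the majority answer wins."""
    votes = sum(weak_judge(truth) for _ in range(n))
    return votes > n // 2

trials = 10_000
single = sum(weak_judge(True) for _ in range(trials)) / trials
group = sum(majority_vote(True) for _ in range(trials)) / trials
print(f"one golem:         {single:.1%}")  # ~65%
print(f"15-golem majority: {group:.1%}")   # ~88% under independence
```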
The moral accountability post showed that the Maharal's clarity, the creator bears responsibility for the creation, has been lost in modern technology development. Responsibility diffuses across teams, organizations, and supply chains until nobody feels its weight. The "algorithm did it" defense is the modern equivalent of blaming the golem, and blaming the golem is exactly what the tradition forbids.
The Golem Pattern
Across all five cases, the same pattern appeared.
The golem is powerful. It can do things humans can't: work without rest, process information at scale, optimize objectives with tireless precision. This power is real and valuable. It has reduced dangerous labor, accelerated scientific discovery, and enabled capabilities that genuinely improve lives.
The golem has no understanding. It follows instructions without grasping their purpose. It optimizes metrics without knowing what the metrics represent. It produces outputs without comprehending what the outputs mean. The absence of understanding isn't a temporary limitation that better engineering will overcome. It's the golem's nature.
The gap between power and understanding is where harm lives. A powerful system that understands its purpose can adapt when circumstances change, recognize when its actions are counterproductive, and exercise judgment in ambiguous situations. A powerful system without understanding can only do what it was told, with increasing force and decreasing relevance, until someone intervenes.
The creator bears responsibility. The golem can't be blamed, corrected, or held accountable. It has no moral agency, no experience of having done anything right or wrong. The moral weight of its actions falls on the people who built it, directed it, and chose to deploy it. This responsibility doesn't diminish with the complexity of the system or the number of people involved in creating it.
What the Tradition Teaches
The golem story isn't a cautionary tale about technology. It's a story about the relationship between creators and their creations, and it contains practical wisdom that's more relevant now than at any point in its two-thousand-year history.
The Maharal built the off switch first. Before the golem took its first step, the mechanism for stopping it was inscribed on its forehead. "Emet" to "met": truth to death, life to clay, one letter's difference. This wasn't an afterthought. It was foundational. The ability to stop the creation was built into the act of creation itself.
Modern AI development would benefit from the same priority. The question "how do we stop this if it goes wrong?" should be answered before "how do we make this more capable?" The kill switch should be designed before the system is animated, not bolted on after deployment when problems emerge.
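Here is one sketch of what "off switch first" might look like in code, assuming the system runs as a cooperative loop; the class and method names are hypothetical. The halt mechanism is written, and exercised, before any capability is wired in.

```python
# A sketch of "build the off switch first", assuming a cooperative loop.
# All names here are hypothetical.

import threading

class Golem:
    def __init__(self) -> None:
        # The "met" inscription: created with the golem, not bolted on later.
        self._halt = threading.Event()

    def erase_emet(self) -> None:
        """The off switch. Written and tested before run() does anything."""
        self._halt.set()

    def run(self) -> None:
        # Capability goes inside this loop, added only after the halt path works.
        while not self._halt.wait(timeout=0.05):
            pass

golem = Golem()
worker = threading.Thread(target=golem.run)
worker.start()
golem.erase_emet()  # truth to death, one call's difference
worker.join(timeout=1.0)
print("stopped:", not worker.is_alive())
```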
The Maharal knew his golem's limitations. He didn't expect it to understand. He didn't anthropomorphize it. He didn't treat its obedience as wisdom. He directed it with the awareness that it was clay animated by language, powerful but empty. He worked within its limitations rather than pretending they didn't exist.
We often do the opposite. We describe AI systems as "thinking," "understanding," "wanting," "hallucinating," language that projects human qualities onto systems that have none. This anthropomorphism isn't just imprecise. It's dangerous, because it obscures the golem's fundamental nature and makes us worse at governing systems we've built.
The Maharal accepted responsibility. When the golem's behavior drifted from its purpose, the Maharal didn't blame the golem, the clay, or the language of creation. He recognized that the creation's actions were his responsibility, and he acted accordingly, ultimately deactivating the golem when it became more dangerous than the threats it was built to counter.
This is perhaps the hardest lesson for modern technology. The decision to stop, to deactivate, to choose not to deploy, requires a kind of moral courage that the incentive structures of the technology industry often discourage. Shipping is celebrated. Restraint is not. But the Maharal's most important act wasn't creating the golem. It was knowing when to return it to clay.
Practical Wisdom for Builders
The golem tradition, distilled into guidance for people who build technology:
Close the intent gap deliberately. Invest as much effort in understanding what you want as in building what you specified. Adversarial testing, red teams, edge case workshops, and "what could go wrong?" sessions are all forms of asking the question the golem can't ask itself: is this really what you meant?
Build oversight into the design, not around it. Explainability, audit trails, human-in-the-loop checkpoints, and meaningful override mechanisms should be architectural decisions, not compliance afterthoughts. If you can't explain why your system made a decision, you can't meaningfully control it.
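One hedged sketch of what that architecture might look like: the decision function, the reviewer callback, and the JSONL log below are all illustrative stand-ins, but the ordering is the point. The record is written and the human checkpoint passed before the decision takes effect.

```python
# A sketch of oversight as architecture rather than afterthought: every
# decision is logged before it takes effect and gated by a reviewer.

import json
import time

def decide_with_oversight(decide_fn, inputs, approve_fn, audit_path="audit.jsonl"):
    decision = decide_fn(inputs)
    record = {"ts": time.time(), "inputs": inputs, "decision": decision}
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # audit trail written first
    if not approve_fn(record):              # human-in-the-loop checkpoint
        raise PermissionError("decision rejected by reviewer")
    return decision                          # only approved decisions act

# usage: a trivial decision function and an always-approve reviewer
result = decide_with_oversight(lambda x: x * 2, 21, approve_fn=lambda rec: True)
```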
Assign responsibility before deployment. Don't wait for harm to figure out who's accountable. The Maharal knew he was responsible before the golem took its first step. Organizational accountability structures should be designed alongside the systems they govern.
Respect the golem's limits. Don't deploy systems in domains that require understanding, judgment, or moral reasoning they don't possess. The question isn't "can the AI do this task?" but "should a system without understanding do this task?"
Ask who the golem serves. Every automation decision is a decision about who benefits and who bears the cost. The Maharal built the golem to serve the community. Name who your systems serve, and default to transparency about these choices rather than obscuring them behind technical complexity.
Design workshops, not just golems. The future of AI is increasingly about orchestration: how multiple systems collaborate, check each other, and produce emergent capabilities. The craft of designing these collaborations, defining roles, communication protocols, and human oversight points, is becoming as important as the craft of building individual models.
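A minimal sketch of the workshop pattern, with stub functions standing in for whatever models a real pipeline would use: distinct roles, a fixed hand-off protocol, and a human oversight point before anything leaves the workshop.

```python
# A sketch of the workshop pattern. The three role functions are stubs
# standing in for real models; only the orchestration is the point.

def drafter(task: str) -> str:
    return f"draft answer to: {task}"

def critic(draft: str) -> list[str]:
    return [f"issue found in: {draft}"]  # a second golem checks the first

def reviser(draft: str, issues: list[str]) -> str:
    return draft + " (revised for: " + "; ".join(issues) + ")"

def run_workshop(task: str, human_signoff) -> str:
    draft = drafter(task)
    issues = critic(draft)
    final = reviser(draft, issues) if issues else draft
    if not human_signoff(final):  # oversight designed in, not around
        raise RuntimeError("held for human review")
    return final

print(run_workshop("summarize the incident report", human_signoff=lambda s: True))
```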
Remember that you can choose not to build. The Maharal destroyed his golem when it was no longer needed. Not every capability needs to be deployed. Not every automation needs to exist. The wisdom to refrain is as important as the skill to create.
The Potter and the Clay
The golem tradition begins with raw matter and sacred language. Clay shaped by hands. Words spoken with intention. A creation that rises, serves, and eventually returns to dust.
We are all building golems now. The clay is silicon. The language is code. The creations are more numerous, more powerful, and more consequential than anything the Maharal imagined. But the relationship between creator and creation hasn't changed. The golem is still clay. It still follows instructions without understanding them. It still requires a creator who accepts responsibility for what it does.
The Maharal's workshop in Prague was small. One rabbi, one golem, one community to protect. Our workshop is global. Millions of builders, billions of golems, a civilization being reshaped by creations that can't comprehend what they're reshaping.
The scale has changed. The wisdom hasn't. Build the off switch first. Know your golem's limitations. Accept responsibility for what you create. Ask who the golem serves. And remember that the measure of a creator isn't the power of the golem, but the care with which it's directed.
The clay is in our hands. What we shape from it is our responsibility.