Law and Code: The Twin Symbolic Engines of Civilization
Why the next stage of civilization depends on merging consensus systems with computational systems.
We tend to think of language as nothing more than communication — words exchanged between individuals, sentences strung together to express thought. But beneath the surface of daily speech, language has always served a deeper, civilizational role: it is the raw material from which we build order.
Within this hidden machinery of civilization, two vast symbolic universes stand out. The first is law, which encodes social consensus, resolves disputes, and defines the boundaries of collective life. The second is code, which directs machines, structures computation, and governs the digital infrastructures we increasingly depend on.
At first glance, law and code could not be more different. One is slow, ambiguous, and entangled with history and morality. The other is fast, precise, and ruthlessly mechanical. Yet at their core, they are engaged in the same project, and they face the same question: how can symbols create order? How can marks on paper or lines of text, once agreed upon, compel humans or machines to act in predictable, reliable, and structured ways?
This question is not only philosophical; it is existential. Our ability to compress chaos into symbols, and then enforce those symbols as systems of order, is what separates civilization from disorder, coordination from anarchy, execution from mere intention. And now, with the rise of large language models — machines that manipulate symbols at industrial scale — the boundary between these two universes is becoming more porous than ever before.
The Two Largest Symbolic Universes: Consensus Systems × Computational Systems
When we talk about civilization, we usually describe it in terms of institutions, governments, markets, or technologies. But beneath all of these lies something more fundamental: symbols that generate order.
The first great symbolic universe is law. Law is not simply a set of written statutes or case precedents. It is the highest form of social consensus, the symbolic machinery that allows millions of strangers to coexist under shared expectations. Law transforms the uncertainty of human behavior into something predictable: contracts that bind, rights that protect, and obligations that constrain. Its power comes not from brute force, but from the shared recognition that these words mean something, and that meaning will be upheld.
The second great symbolic universe is code. Unlike law, which governs people, code governs machines. It is the highest form of artificial computation — a language of pure precision. Code instructs systems to execute tasks exactly as specified, without ambiguity, without delay, without debate. Where law is slow and interpretive, code is instant and deterministic. It is through code that our infrastructures, networks, and algorithms are orchestrated at planetary scale.
On the surface, law and code appear worlds apart. Law thrives in ambiguity and interpretation; code demands strict syntax and exactness. Law requires judges and juries; code requires compilers and processors. And yet, when viewed at a deeper level, they are twins: both are engines of order powered by symbols.
Law takes words and turns them into obligations. Code takes commands and turns them into execution. Both reduce the chaos of the world into structures we can act upon. Both are attempts to answer the same civilizational question: how do we transform symbols into systems, and systems into predictable realities?
This recognition — that law and code are parallel symbolic universes — is the foundation for what follows. For the first time in history, advances in computation and language models are bringing these two universes into proximity. And as they draw closer, the gravitational pull between consensus systems and computational systems will reshape not just governance or technology, but the very architecture of civilization itself.
Entropy and Structure: From Language to Executable Forms
When you ask a large language model a question, what it gives you is not a finished product but raw symbolic material — vast, generative, and high in entropy. The output is abundant in language, but it is not yet structure.
Institutions and software live on the other end of the spectrum. They are low-entropy systems, built to constrain possibility, enforce predictability, and deliver decisions that can be executed and trusted. A tax rule, a court ruling, a banking protocol — all of them compress ambiguity into a form that can be applied consistently.
The real question for our time is whether these two symbolic universes — law and code — can be brought into dialogue. LLMs, as industrial-scale symbolic generators, supply the abundance; institutions and computational systems supply the rigor. The pathway that bridges them is not automatic but must be carefully engineered, in three steps (a code sketch of the full cycle follows the list):
Compression. High-entropy outputs from LLMs must be narrowed down. Draft legal texts, regulatory interpretations, or policy proposals produced by the model need to be compressed into precise categories, terms, and logical structures.
Layering. Not every part of law can or should be made executable. Some provisions belong to the discretionary domain, requiring human judgment; others belong to the computable domain, where strict logic can apply. LLMs can help propose the initial layering, but the governance of those layers must be systematically designed.
Verification. Once compressed and layered, these structures must be tested. Just as code goes through unit tests and audits, so too must “rule as code” systems undergo simulation, case testing, and public review. Verification ensures that symbolic abundance is not simply translated into brittle automation.
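To make this cycle concrete, here is a minimal Python sketch. Everything in it is hypothetical: the `Provision` structure, the keyword check that stands in for LLM- and expert-assisted formalization, and the toy tax rule illustrate the shape of the pattern, not a real rule engine.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Provision:
    text: str                 # raw, high-entropy draft (e.g. produced by an LLM)
    computable: bool          # the layering decision: machine domain or human domain?
    rule: Optional[Callable[[dict], bool]] = None   # the compressed, executable form

def compress_and_layer(draft: str) -> Provision:
    """Narrow a free-text draft into a candidate structure and assign it a layer.
    A trivial keyword check stands in here for LLM- and expert-assisted work."""
    if "income exceeds $100,000" in draft:
        return Provision(draft, computable=True,
                         rule=lambda case: case["income"] > 100_000)
    return Provision(draft, computable=False)   # discretionary: human judgment

def verify(p: Provision, cases: list[tuple[dict, bool]]) -> bool:
    """Test a computable provision the way software is tested: against known cases."""
    if not p.computable or p.rule is None:
        return True   # discretionary provisions get human review, not unit tests
    return all(p.rule(facts) == expected for facts, expected in cases)

tax = compress_and_layer("if income exceeds $100,000, apply tax rate X")
assert verify(tax, [({"income": 150_000}, True), ({"income": 80_000}, False)])

accommodation = compress_and_layer("provide reasonable accommodation in the workplace")
assert not accommodation.computable   # routed to the discretionary domain
```

The point of the sketch is the shape of the cycle, not the heuristics: compression produces the rule, layering sets the domain, and verification treats the result like any other piece of software.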
If this cycle can be institutionalized, LLMs may become a kind of front-end engine for the symbolic universe, generating vast possibilities at scale. Legal engineers, policymakers, and technologists can then apply systematic design to compress, layer, and verify — gradually transforming high-entropy symbolic material into executable, auditable governance.
The advantage is obvious: rather than treating law and code as two isolated universes, we begin to see them as different layers of the same symbolic continuum. With LLMs feeding the symbolic abundance, and structured design ensuring accountability, the long-imagined vision of Rule as Code moves closer to practical reality.
Law as Institutional Language, Code as Machine Language
If civilization is built on symbols, then law and code are its two most refined dialects. Each has evolved to serve a different domain, but both are languages designed not merely to describe reality, but to shape behavior.
Law is the language of institutions. It is:
Normative — grounded in values, morality, and social consensus.
Value-laden — carrying the weight of justice, fairness, and cultural expectation.
Defeasible — meaning its application can be overridden, reinterpreted, or contested in specific contexts.
Because of these traits, law is inherently ambiguous. Words like reasonable, proportionate, or significant are not bugs but features. They give institutions the flexibility to adapt rules to circumstances that cannot be foreseen in advance. Ambiguity, in law, is a way of encoding human judgment into the system.
Code, by contrast, is the language of machines. As the short sketch after this list illustrates, it is:
Deterministic — the same input always produces the same output.
Testable — programs can be checked against edge cases, validated, and debugged.
Executable — the text of code does not merely describe, it performs.
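A few lines of Python make these three traits tangible; the tax calculation is a hypothetical stand-in, not a real statute:

```python
def tax_due(income: float) -> float:
    """Deterministic: the same income always yields the same tax."""
    THRESHOLD, RATE = 100_000, 0.25
    return max(0.0, income - THRESHOLD) * RATE

# Testable: edge cases can be checked mechanically, like any other program.
assert tax_due(100_000) == 0.0
assert tax_due(150_000) == 12_500.0

# Executable: calling the function does not describe a tax; it computes one.
print(tax_due(150_000))   # 12500.0
```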
Where law leaves room for interpretation, code eliminates it. Machines cannot operate on “reasonable doubt” or “fair balance.” They demand exact instructions and follow them to the letter. Ambiguity in code is not a feature — it is a bug that causes systems to fail.
The tension between these two languages creates what might be called the bandwidth of translation. Some legal rules — for example, “if income exceeds $100,000, apply tax rate X” — translate cleanly into code. Others, like “provide reasonable accommodation in the workplace,” resist mechanization because they embed contested values and situational judgment.
The bandwidth, then, is not uniform. At one end, highly structured legal provisions can be encoded directly; at the other, open-textured rules require a blend of human discretion and computational support. The challenge of our era is to map this bandwidth carefully, deciding what belongs to the computable domain, what must remain discretionary, and how the two can be layered together.
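One way to picture the two ends of the bandwidth is to encode each kind of rule in the only form it actually supports. In the hypothetical sketch below, the structured tax provision becomes a function that answers, while the open-textured provision becomes a data structure that merely frames the question; the listed factors are illustrative, not doctrinal:

```python
from dataclasses import dataclass

def tax_rate(income: float) -> float:
    """Fully computable: a structured provision, translated directly."""
    return 0.30 if income > 100_000 else 0.20

@dataclass
class DiscretionaryRule:
    """Open-textured: the system can structure the question, not answer it."""
    provision: str
    factors: list[str]   # what a human decision-maker must weigh

accommodation = DiscretionaryRule(
    provision="provide reasonable accommodation in the workplace",
    factors=["cost to the employer", "nature of the job", "available alternatives"],
)

print(tax_rate(150_000))       # the machine answers: 0.3
print(accommodation.factors)   # the machine only frames; a human decides
```

The design choice worth noticing is that the discretionary rule never pretends to compute an answer; it surfaces the relevant factors and stops.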
This is where large language models can play a bridging role. As vast symbolic engines, they can help translate ambiguous institutional language into candidate formalizations — not final code, but drafts, mappings, and prototypes. Human experts can then refine these outputs, ensuring that the translation respects both the precision of machines and the values of institutions.
If successful, this process would not erase the difference between law and code. Instead, it would establish a shared symbolic infrastructure where institutional language and machine language coexist, each doing what it does best. Law provides flexibility and legitimacy; code provides precision and enforceability. And between them lies the possibility of a new symbolic order.
The Origins of the Boundary: Ambiguity vs. Precision
The line between law and code is not arbitrary. It arises from the very different ways these two symbolic systems deal with uncertainty.
In law, ambiguity is a feature, not a flaw. Legal language often relies on terms like reasonable, substantial, or proportionate. These words are not meant to yield a single rigid outcome. Instead, they create space for interpretation, allowing judges, regulators, or communities to weigh context, balance competing values, and adapt rules to circumstances no legislator could have fully anticipated. Ambiguity is how law encodes human discretion into the system, ensuring that the pursuit of justice does not collapse into mechanical uniformity.
In code, the logic is the opposite. Precision is its strength. A machine cannot operate on “reasonable doubt” or “fair opportunity.” Instructions must be explicit, syntax exact, outcomes predictable. The determinism of code is not only desirable but essential; without precision, systems break, errors cascade, and execution halts. Where ambiguity strengthens law by keeping it flexible, it weakens code by making it unusable.
This tension defines the boundary between the two symbolic universes. Some rules are structured enough to be made computable: “if income exceeds $100,000, apply tax rate X.” Others are inherently discretionary: “provide reasonable accommodation in the workplace.” Most exist in between, containing both computable and discretionary elements.
The challenge, then, is not to force one domain into the mold of the other, but to design systems that acknowledge layered domains, illustrated in the sketch after this list:
The computable domain, where rules can be expressed precisely and executed automatically.
The discretionary domain, where rules must remain open to human judgment, interpretation, and contestation.
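A minimal dispatcher makes the layering concrete. In this hypothetical Python sketch, a rule in the computable domain is a function and a rule in the discretionary domain is a plain description; the names and the escalation message are illustrative only:

```python
from typing import Callable, Union

ComputableRule = Callable[[dict], str]   # facts in, decision out

def route(rule: Union[ComputableRule, str], facts: dict) -> str:
    """Execute rules in the computable domain; escalate everything else."""
    if callable(rule):
        return rule(facts)                        # computable: runs automatically
    return f"ESCALATE to human review: {rule!r}"  # discretionary: hand-off

def tax_rule(facts: dict) -> str:
    return "apply rate X" if facts["income"] > 100_000 else "apply rate Y"

print(route(tax_rule, {"income": 150_000}))
print(route("provide reasonable accommodation in the workplace", {}))
```

The design point is that escalation is not a failure mode but an explicit, auditable hand-off between the two domains.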
Seen this way, the boundary is not a wall but a seam — a space where law and code meet, overlap, and complement each other. Large language models may widen this seam by helping us prototype translations: generating candidate formalizations of legal language, flagging which parts are computable and which must remain discretionary.
The future of Rule as Code will depend not on erasing ambiguity, but on managing the interface: letting precision and ambiguity coexist, each in its proper place. The real art lies in knowing which rules to automate, which to leave to human discretion, and how to stitch the two together into a coherent, layered system of governance.