What Are We Computing?
From ads to governance, what we decide to compute will write the blueprint of tomorrow’s world.
From Power to Purpose
Are we really spending billions in energy and silicon to compute ads — to nudge someone into buying another pair of jeans they don’t need?
Is that the highest use of humanity’s most powerful machines?
That is the irony of our age: we’ve built engines of cognition vast enough to simulate weather systems, translate languages, and model the genome — and yet we often use them to optimize impulse.
So before asking how much compute we can build, we should ask the only question that matters:
What should we compute?
Every civilization begins by mastering a new form of power, and then spends generations deciding what to do with it.
Steam gave us movement.
Electricity gave us light.
Compute gives us cognition — the ability to think at planetary scale.
But progress always arrives before purpose.
We learn to build before we learn to aim.
Steam built factories before it built fairness.
Electricity illuminated cities before it illuminated minds.
Now compute — the newest, most abstract form of energy — is being consumed faster than it is being understood.
Every watt flowing into a data center encodes a decision: what kind of intelligence do we want this civilization to create?
We can compute ads, predictions, and distractions, or we can compute cures, climates, and new forms of collective reason.
The same machines that train chatbots could just as easily design vaccines or simulate ecosystems.
The difference lies entirely in what we decide is worth computing.
This is the real shift from power to purpose.
Compute has become too vast, too costly, and too consequential to remain aimless.
Like electricity, it will define every economy and every mind that depends on it.
But unlike electricity, its product is not motion or light — it is meaning.
And meaning demands direction.
Without intention, compute degenerates into noise; without purpose, it becomes a mirror for confusion.
We are no longer limited by hardware — we are limited by imagination.
The frontier of intelligence is not about scale but selectivity: knowing what deserves to be computed, and why.
In the end, civilizations are not judged by how much power they harness,
but by what they choose to compute with it.
I. The Scale Illusion
Over the past five years, artificial intelligence has been narrated almost entirely through the language of scale: larger parameter counts, vaster datasets, denser clusters, faster accelerators. Record-setting numbers have become shorthand for progress, allowing companies to signal momentum and governments to claim technological prestige. Yet as these metrics have multiplied, the assumption that “bigger is better” has gradually substituted for a more basic inquiry into purpose; scale has become a proxy for advancement, even when the relationship between additional compute and additional understanding is tenuous.
Scale unquestionably expands capability, but capability does not automatically translate into insight. When computation is directed without a clearly articulated objective, systems excel at producing more of what they already produce—tokens, recommendations, impressions—rather than knowledge that reduces uncertainty or strengthens institutions. The result is a persistent confusion of volume with value: a civilization can pump unprecedented energy into model training and still find itself optimizing the marginal click while underinvesting in climate modeling, epidemiology, scientific simulation, or governance analysis. Each megawatt delivered to a data center represents an opportunity cost; power devoted to engagement engines is power not available to high-order public problems.
The economic logic behind this allocation is straightforward. Tasks that yield immediate, measurable returns attract capital and talent, while domains whose benefits are diffuse, long-term, or public-good in nature struggle to clear private investment hurdles. In such an environment, compute becomes the infrastructure of short-term optimization: it refines advertising markets and content delivery with extraordinary efficiency, yet leaves complex social systems—law, health, education, ecological management—relatively under-computed. As recommendation algorithms mediate attention and language models mediate information, the content of computation begins to shape collective cognition as decisively as traditional institutions once did.
For policymakers and leaders, this shifts the question from engineering to strategy: what should be computed? Compute has moved from a technical resource to a national asset, comparable to energy or transportation networks, and its allocation quietly encodes societal priorities. Choosing to route capacity toward engagement rather than resilience is not a neutral engineering decision; it is a value decision that distributes cognitive power across the economy. The practical distinction is stark: computing ads and clicks generates activity but little durable capability, while computing structure, governance, and shared reasoning builds institutional competence and long-term problem-solving capacity.
If the last phase of AI was dominated by quantitative growth, the next must be defined by semantic direction—not how large a system can become, but whether its outputs advance understanding, coherence, and trust. That transition will require rebalancing incentives (public procurement, mission-oriented R&D, and standards that privilege verifiability and social impact), developing demand-side institutions that can specify high-order computational tasks, and adopting evaluation frameworks that measure contribution to public knowledge rather than benchmark theatrics. Otherwise, additional scale will continue to accelerate what we already have—more computation, more content, more noise—without moving us closer to the purposes that justify the infrastructure in the first place.
The evolution of compute is not just a technological story—it is the story of how governance itself is being rewritten in code.
II. From Energy to Language to Structure
To understand where the trajectory of compute is leading, it helps to trace its evolution as a three-stage relay: from energy, to language, to structure. Each stage transforms not only what machines can do, but what societies must govern.
The first stage, energy, built the physical foundation of machine intelligence: data centers, chips, cooling systems, and the power grids that convert electricity into computation. It resembled previous industrial eras, where growth was measured by physical capacity—more horsepower, more bandwidth, more throughput. In this stage, the problem of governance was largely technical: ensuring reliability, safety, and equitable access to infrastructure. Governments regulated power generation, data center zoning, and grid capacity much as they once oversaw factories and transportation networks. Energy governance was about supply and distribution.
The second stage, language, began when computation became interactive. Large-scale models transformed natural language into a programmable interface, allowing humans to communicate with machines through meaning rather than syntax. This made intelligence widely accessible, but it also shifted governance into a new dimension: governing interpretation itself. Language models mediate information, recommendation, and decision-making, subtly shaping public discourse and perception. Whoever controls these models does not simply operate a technology—they influence how societies talk, think, and reason. Governance in this stage moves from regulating hardware to regulating cognition: transparency of data, accountability of outputs, fairness of representation, and the boundaries of speech itself.
The third and decisive stage is structure. Here, compute stops functioning as a passive tool and becomes the architecture of institutions. Governance no longer sits above technology—it runs through it. Law, finance, education, and administration increasingly rely on executable systems that translate policy into code: smart contracts that execute automatically, algorithms that allocate credit or benefits, regulatory systems that monitor compliance in real time. In this environment, code becomes law, not metaphorically but operationally. Governance itself begins to compute.
This convergence transforms both the purpose and the boundary of governance. It no longer concerns only oversight or enforcement, but design—deciding which principles are embedded into algorithms, who verifies them, and how they evolve. The act of writing policy becomes the act of writing code. Rules that once relied on interpretation by human institutions now run deterministically within systems that leave little room for discretion. This promises efficiency and consistency, but also introduces rigidity and opacity. The boundary between government and infrastructure—between policy and platform—starts to dissolve.
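The claim that "writing policy becomes writing code" can be made concrete with a minimal sketch. The rule below is a hypothetical benefit-eligibility policy invented for illustration; the thresholds and field names are assumptions, not any real program's rules.

```python
# A minimal sketch of "policy as code": a hypothetical benefit-eligibility
# rule expressed as a deterministic, auditable function. All thresholds and
# field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float   # dollars per year
    household_size: int

INCOME_LIMIT_BASE = 20_000        # hypothetical base income limit
INCOME_LIMIT_PER_MEMBER = 5_000   # hypothetical per-member allowance

def eligible(applicant: Applicant) -> bool:
    """Return True if the applicant qualifies under the encoded rule.

    Unlike a human caseworker, this function leaves no room for
    discretion: identical inputs always yield the identical decision.
    """
    limit = INCOME_LIMIT_BASE + INCOME_LIMIT_PER_MEMBER * applicant.household_size
    return applicant.annual_income <= limit

# The rule runs identically for every applicant: the efficiency and
# consistency the text describes, but also its rigidity.
print(eligible(Applicant(annual_income=30_000, household_size=2)))  # True
print(eligible(Applicant(annual_income=40_000, household_size=2)))  # False
```

The sketch also shows why such systems introduce opacity: the moment the threshold constants are buried in deployed code, revising the policy means revising and re-verifying software, not reinterpreting a statute.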
The implications are profound. Traditional governance developed through laws, procedures, and institutions built to manage complexity through deliberation. Programmable governance seeks to manage complexity through automation and verification. Each model reflects a different theory of trust: one grounded in judgment, the other in computation. The challenge for societies will be to balance these two forms—to retain space for human reasoning and moral agency while using computational systems to enhance coordination, compliance, and foresight.
When compute reaches this structural layer, progress is no longer measured by technical performance but by institutional coherence: whether digital architectures reinforce or erode the principles of accountability, fairness, and adaptability. The key question is not how fast machines can process rules, but whose rules they process, and how those rules can be revised when the world changes.
If the first era of compute was about energy—the capacity to act—and the second about language—the capacity to communicate—this third era is about governance: the capacity to decide and to sustain order in a computational world. It is the moment when technical systems and political systems fuse, and societies must determine what kind of governance they are willing to inscribe into the code that will outlive them.
Governance is becoming computation—and computation is becoming governance.