My view of systems: gardens, nature, systems, computers, AI—everything is interconnected.
我的系统观,花园,自然,系统,计算机,AI,万物皆互联 (中文在后面)
Following my previous piece, in which I clarified that my research focus is on “building AI-native private systems for families and small businesses,” I feel it is necessary to articulate something more fundamental—and also harder to describe directly: my view of systems.
The term may sound abstract, but it is not constructed out of thin air. Over the past few years, as AI has advanced rapidly, several friends with whom I have long-standing discussions repeatedly suggested that I systematically study fields such as systems engineering, complex systems, and both the “old” and “new” cybernetics. At the time, these directions—seemingly theoretical, even somewhat “academic”—did not feel closely connected to my own practice. Perhaps that was because my experience and capabilities had not yet truly reached the boundaries of those problems.
Now, however, I see it differently. Systems engineering is not a specific discipline, but a pervasive way of thinking. Most of the time, we are “building systems” without even realizing that we are doing so.
I want to explain my understanding of systems thinking in a way that anyone can grasp, drawing on my own ten years of experience in garden-making.
On one hand, gardening is concrete enough that almost anyone can perceive it: whether plants survive, whether the environment is suitable, how water and light are balanced—these are not abstract ideas, but outcomes directly reflected in reality. On the other hand, this has been a field of long-term, hands-on practice for me. Over the past decade, most of my time has been spent in trial and error—and “trial and error” itself is the most authentic mode of operation in systems engineering.
Every judgment here, every adjustment, is not derived purely from reasoning, but paid for with time, money, and real labor. Compared to purely textual reasoning or logical deduction, this kind of experience is a more “physical” expression of a system.
In a sense, such experience is scarce. Today, writing code has become easy, and writing essays has become easy. But to truly grow a plant from the soil, you must step out of abstraction, walk into the sunlight, and enter a real system that is uncontrollable and irreducible. You must accept its constraints, feedback, and uncertainty. It is precisely this kind of long-term, concrete, and non-skippable practice that has gradually shaped my current view of systems.
Therefore, for a programmer or researcher, it is essential to respect entrepreneurs who are able to create and sustain profitable businesses in the real world. Within complex economic systems and physical reality, building a company and maintaining profitability is something that cannot be taught in full—it can only emerge through harsh market selection, combined with talent and other intangible factors.
This systems view does not belong only to gardening. It is becoming the underlying cognitive framework through which I approach the construction of AI-native private systems. This piece is closer to an essay, a reflection and summary of experience, than to a formal theoretical derivation. Yet it remains important to me. In an era where AI can write code, many programmers who once occupied the lower layers of the production chain must transform. Writing code used to embody a kind of “translation intelligence.” Now that AI has effectively become the translator from intent to code, humans must reposition themselves at the level of system architecture.
Any “Reset-to-Zero” Approach That Wipes an Existing System Carries Endless Risk
Let me start with the first—and most fundamental—systems insight I distilled after repeated trial and error in my garden: I oppose “reset-to-zero development.”
By “reset-to-zero,” I mean taking a system that is already running in the real world—regardless of how well it performs—tearing it down entirely, reducing it to nothing, and then attempting to rebuild a supposedly better, more perfect system on a “clean” foundation.
In gardening, this has a very typical manifestation: large-scale land clearing, removing all existing vegetation, scraping away weeds (which in reality can never be completely eliminated), and then investing heavily to plant a carefully designed “ideal garden.”
I once did exactly this.
At the beginning, I envisioned this garden as an “ornamental rose garden,” aiming to collect rare rose varieties and create a space that would attract visitors during peak bloom. From a design standpoint, this was a very “correct” plan—arguably close to perfect on paper.
But real systems never operate according to paper designs.
Roses are indeed among the most commercially valuable flowers, but they come at a high cost: they require intensive labor for maintenance, continuous watering and fertilization; they are extremely sensitive to environmental conditions, highly susceptible to diseases such as black spot and powdery mildew, and frequently attacked by pests like aphids. To maintain their ornamental quality, one must also continuously enforce “clean boundaries” between plants—constant weeding, mulching, and upkeep.
None of this is a one-time effort. It is continuous and effectively endless.
More importantly—none of these problems are visible before the reset.
Once you remove the original system, you lose all the “cognitive anchors” of the existing ecology:
You no longer know what the soil’s microbial structure was, which weeds were suppressing which diseases, how water circulated, how sunlight moved through the space, or even how wind patterns shaped the system.
You simply reset everything and begin rebuilding in what you assume is a controllable environment.
The result is that chaos begins to emerge from places you could never have anticipated.
At first, it works. It looks beautiful. It even seems to validate the design. But over time, problems begin to appear—and they compound nonlinearly:
Maintenance costs spiral out of control, pests and diseases recur, labor input keeps increasing, and yet you cannot form stable expectations—you don’t know what will happen next, nor how much more resource input will be required to sustain the system. And before large-scale planning and construction, you can never be 100% certain whether the environment itself can support such a system long-term.
Eventually, the system may slowly degrade, or collapse suddenly when the cost structure becomes unsustainable.
This experience led me to a very clear systems judgment:
Any system that is already running in the real world should not be lightly “reset to zero.”
Because the mere fact that it runs means it has already formed a structural equilibrium that you do not yet fully understand.
A better path is:
Make observable, controllable, localized modifications on top of the existing system;
Allow the structure to evolve gradually while maintaining system continuity.
Better to move slowly than to break the system.
This insight extends far beyond gardening.
I have seen developers attempt to rebuild entire cities “from scratch”: leveling vast areas of land and constructing a complete urban system according to a master plan. But these often fail due to massive costs, inability to attract population, and lack of real feedback—becoming “ghost cities,” systems that are valid in design but non-functional in reality.
In contrast, more mature approaches focus on existing communities: identifying aging assets—cheap houses, declining neighborhoods—and renovating or partially rebuilding them. Because the original community is a “living system,” with people, demand, and flow, real buyers often enter even before construction is fully complete. I know several small developers in the U.S. who operate this way, and with their accumulated judgment they rarely have a failed project.
At its core, the difference between these two paths is not “design capability,” but whether one respects a system that is already in operation.
This also becomes a foundational principle in my approach to building AI-native private systems:
Do not attempt to build a perfect system from scratch. Instead, evolve structure continuously on top of a system that is already running.
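In software terms, one familiar embodiment of this principle is the "strangler fig" pattern: rather than rewriting a running system, a new component takes over one narrow, observable slice of work while the legacy path keeps operating. Below is a minimal sketch of that idea; every name in it (legacy_handler, ai_native_handler, rollout_fraction) is a hypothetical placeholder, not code from any real system.

```python
import random

def legacy_handler(request: dict) -> dict:
    """The system that is already running; it is never torn down in one step."""
    return {"source": "legacy", "result": f"processed {request['id']}"}

def ai_native_handler(request: dict) -> dict:
    """The new component, introduced for one narrow, low-risk slice."""
    return {"source": "ai-native", "result": f"processed {request['id']}"}

def route(request: dict, rollout_fraction: float) -> dict:
    """A localized, controllable modification: only a measured fraction of
    traffic takes the new path, and the change is reversible at any time."""
    if random.random() < rollout_fraction:
        return ai_native_handler(request)
    return legacy_handler(request)

# Start tiny, observe real feedback, then widen the slice, or roll back.
for i in range(5):
    print(route({"id": i}, rollout_fraction=0.1))
```

The design point is reversibility: the slice can be widened as real feedback accumulates, or dialed back to zero without ever breaking the system's continuity.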
You can see from the photos how stunning my rose garden looked at its peak when it was first built.
Innovation and system maintenance are not in conflict—but the success of innovation is always highly contingent, and greatness can never be planned in advance.
The world is, of course, continuously driven forward by innovation. Looking back across the long arc of human history, we have almost always been innovating. The wheat, corn, fruit trees, vegetables, and flowers on our tables today—none of them exist in their original “natural” form; nearly all bear the marks of long-term human domestication, selection, and modification. Early wheat did not look like modern wheat, and wild roses were nothing like today’s cultivated roses. In modern society, electricity, engines, computers, and most recently large language models that have swept across the globe—all are the result of humans gradually understanding the laws of nature and then actively rewriting existing conditions based on our values, technical capabilities, and practical needs. Whether reshaping plant forms, altering material properties, or reorganizing information and energy at the microscopic level, the essence of innovation is this: after understanding underlying laws, we attempt to push the world toward forms that did not previously exist.
But innovation is not a straight line. It is almost always paved by massive amounts of trial and error. The successes that ultimately materialize are often just a tiny fraction among countless failures, deviations, misjudgments, abandoned attempts, and accidental hits. In other words, innovation can be pursued, but it cannot be precisely planned; it can be continuously invested in, but it cannot be made to bloom on command. In many cases, 99.9% of the process consists of filtering, elimination, exhaustion, and sunk cost, while the remaining 0.1% still carries a strong element of chance. What humans can do is prepare the soil, accumulate samples, sharpen judgment, and endure long periods without results—but the moment when something truly “new” emerges, and the form it takes, is not fully controllable. This is precisely why system maintenance and innovation are not opposites. Without long-term maintenance—of systems, samples, environments, and observational capacity—there is no foundation upon which innovation can occur. And innovation itself is not a rejection of maintenance, but rather the occasional fruit that maintenance yields over long time scales.
My own process of breeding roses in the garden is a concrete example. Since first encountering botany in high school, I have had a simple and deeply personal wish: to create a plant of my own in this lifetime. When I finally had the chance to pursue this seriously, my goal was to cultivate a “perfect rose”: fully disease-resistant, tolerant to drought and heat, continuously blooming, with large flowers, requiring no delicate care, no frequent fertilization or chemical treatment—just basic sunlight and rain, yet capable of reliably producing enough blooms each month for cut flowers (roses are unique in that very few woody plants in the world can bloom repeatedly across multiple months in a year). In other words, I was not trying to create a fragile plant that requires constant human care, but a garden plant that combines ornamental value with autonomy.
To approach this goal, over the years I have relied on extensive purchasing, collecting, and the help of friends, repeatedly selecting through large-scale propagation and cross-pollination. In total, I have worked with over 300 varieties and propagated thousands, possibly tens of thousands, of plants. My method was simple and ruthless: high-elimination selection. Under the same conditions, out of fifty plants, perhaps only one or two would be retained for further observation; the rest would either be given away or discarded. In the early years, I tried to maintain relatively rigorous scientific records—tracking lineage, observing traits, conducting systematic comparisons. But over time, I grew increasingly exhausted and overwhelmed, especially because roses as a genus are highly susceptible to diseases like black spot and powdery mildew. Once I observed clear disease susceptibility in a lineage, I would usually abandon it immediately and stop investing further effort.
What became truly interesting was that near the end—when I was almost ready to conclude the entire experiment—I suddenly discovered that, in this field of constant elimination and clearing, one plant had somehow remained. In the area where it was found, all its sibling plants had already been eliminated. Its flowers were as large as peonies, its leaves had a distinct waxy texture, and it was almost never affected by disease. Ironically, because I was so exhausted in the later stages, I had not maintained complete records, and I could no longer fully trace its parent lineage with certainty. It did not emerge when I was most in control or most methodical, but rather appeared almost accidentally, at the moment I was about to give up.
Now, I have cleared out most of the other varieties and focused my efforts on this unique variant: on one hand, observing and stabilizing its traits through self-pollination; on the other, cloning its current genetic expression through cuttings, and distributing it to friends for cultivation so it can be tested and validated across diverse real-world environments. Some people have suggested that I consider patenting it, but for now that process feels both complicated and expensive, so I have not pursued it.
This entire journey has led me to a systems perspective I did not have before: innovation is not a betrayal of system maintenance—on the contrary, innovation can only grow out of long-term maintenance. At the same time, however, the success of innovation is never a guaranteed return on linear effort. You can plan experiments, selection processes, investment, and direction—but you cannot plan greatness itself. Greatness cannot be executed as a task; it is something that, after countless cycles of maintenance, trial and error, elimination, and persistence, eventually—and somewhat accidentally—lands in your hands.
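For readers with a programming background, the selection procedure described above is essentially an evolutionary algorithm run by hand. A minimal sketch, with entirely made-up numbers (the trait score, population size, and retention rate are illustrative assumptions, not records from my garden):

```python
# A toy model of high-elimination selection: propagate many variants,
# score them under the same conditions, keep only the best few.
# The "trait" is a random stand-in for qualities like disease resistance.

import random

def propagate(parent_trait: float, n: int = 50) -> list[float]:
    """Each offspring inherits the parent's trait plus random variation."""
    return [parent_trait + random.gauss(0, 1.0) for _ in range(n)]

def select(generation: list[float], keep: int = 2) -> list[float]:
    """Ruthless retention: discard all but the top one or two."""
    return sorted(generation, reverse=True)[:keep]

best = 0.0
for season in range(10):              # ten rounds of trial and error
    best = select(propagate(best))[0]
print(f"best trait score after selection: {best:.2f}")
```

The loop reliably raises the average, which is the maintenance part; but which individual turns out to be exceptional remains, as in the garden, a matter of chance.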
See Reality Clearly, and Go Where Innovation Actually Happens
The changes you now see in the research world—and the different strategic choices made by major countries—are not merely policy fluctuations. They reflect a deeper restructuring of how research itself is organized. Increasingly, frontier research is spilling out of traditional university systems, while large corporations, leading companies, and private labs are becoming the new pioneers in certain key domains. The reason is straightforward: they have capital, talent, and compute resources, and they have been shaped by long-term market selection. This gives them the conditions necessary to sustain high-density trial and error, high-cost investment, and rapid iteration.
Take SpaceX as an example. NASA has, in recent years, increasingly shifted parts of its lunar program toward commercial systems. Reports have pointed out that NASA is opening more missions to commercial bidding, while traditional contractors are under growing pressure due to high costs, low launch frequency, and aging technical pathways.
This shift is even more pronounced in computing and AI. Frontier models, massive-scale compute, engineering deployment, real user feedback, and sustained capital investment mean that much of today’s “real innovation” is no longer primarily happening in traditional academic labs. Instead, it occurs within companies that control data, chips, cloud infrastructure, and product feedback loops. Reports like the Stanford AI Index Report have repeatedly highlighted that the cost and compute thresholds for training cutting-edge models continue to rise rapidly, naturally concentrating frontier research within the most resource-rich institutions.
Meanwhile, the traditional ivory tower faces multiple pressures. First, the marginal efficiency of innovation within the paper-based system is declining; evaluation mechanisms increasingly depend on publication counts, funding metrics, and peer competition, making it harder for high-risk, high-failure-rate innovation to survive within academia. Second, academic scandals—paper retractions, data fabrication, authorship disputes—have eroded the moral authority of the academic community. Third, universities historically held a monopoly on interpreting knowledge; but in the AI era, that monopoly is weakening, as access, compression, retrieval, and interpretation of knowledge are no longer confined to universities. As a result, universities are not only losing part of their research monopoly, but also the scarcity that once justified high tuition models.
This helps explain why traditional academic strongholds such as the U.S. and the U.K. are experiencing increasing structural tension. In the U.S., organizations like the American Council on Education (ACE) have tracked and criticized policy changes affecting federal funding and institutional structures. In the U.K., financial pressures have translated into real layoffs, with significant reductions in university staffing and ongoing revenue declines.
At a deeper level, what we are witnessing is not simply “universities declining and companies rising,” but a more fundamental shift: innovation increasingly depends on environments with high energy density, high capital concentration, and high tolerance for trial and error. Traditional universities excel at knowledge organization, talent training, academic continuity, and relatively stable long-term research. But in emerging fields that require massive compute, enormous investment, rapid feedback, and tolerance for failure, private companies and industrial labs are gaining dominance.
As for large-scale, centrally planned “national innovation systems,” their chances of success are even more limited.
From my photos, it’s clear that most of these propagations and seedlings don’t make it—only about one in ten thousand ultimately becomes a true “creation.”
Complex Systems Cannot Be Fully Controlled: There Are Always “Weeds” (Bugs), and Physical Removal Is Not the Best Solution
Anyone who has truly practiced gardening—or any experienced farmer—will tell you this: weeds never disappear. Not temporarily, but continuously, repeatedly, and indefinitely. You pull them out today, and they return tomorrow. If you take a more aggressive approach and use herbicides, the problem simply changes form: how do you ensure the soil is not chemically damaged? How do you guarantee that the plants you actually want are not harmed as well? If you cannot tolerate the existence of weeds, then unfortunately, all your time will be consumed by removing them—and that task itself has no real endpoint.
What I gradually came to understand is this: in a complex system, every individual has its own “genetic logic” and agency. In a field, every plant, every insect, every rabbit operates as an independent entity, each following its own internal rules. A company is no different—no matter how many employees it has, each person has their own interpretation and judgment; it is not a machine that can be fully programmed. A country even more so: different ethnicities, cultures, religions, and regions form a highly diverse collection of individuals, each autonomous, thinking, and inherently unpredictable. Any attempt to control all information, eliminate all anomalies, or prevent any “weeds” or bugs from emerging through centralized control, administrative planning, or rigid “hard coding” fundamentally violates the nature of complex systems.
Here, I must mention a theoretical source I have repeatedly studied: Ilya Prigogine, whose core idea in Order Out of Chaos can be summarized as:
As long as a system is far from equilibrium, structures, disturbances, and “weeds” will inevitably continue to emerge.
Why is this the case? Because systems are never static—they are continuously dissipative. A dissipative system constantly takes in energy while inevitably producing entropy. A garden absorbs sunlight and water, but also generates competition, disorder, and weeds. A company absorbs capital and information, but also produces errors, conflicts, and inefficiencies. A nation, in its operation, continuously generates noise and uncertainty. A system that only absorbs energy without producing any disorder does not exist in reality—it exists only in myth, like Eden.
Therefore, the key is not “how to eliminate weeds,” but how you understand them. For programmers, this is particularly intuitive: if you cannot tolerate even a single bug, you will remain trapped in an endless loop of debugging—with no endpoint. From another perspective, weeds are not purely anomalies in computation; they are byproducts of system operation. The problems we encounter daily—organizational friction, errors in AI-generated code—belong to this category of “irreducible byproducts.” No complex system can be entirely noise-free or bug-free.
If Not by Removal, Can We Use Smarter Forms of Constraint?
In gardening, I gradually abandoned brute-force removal and chemical control, and instead adopted a more effective strategy: temporal balancing.
For example, weeds often have stronger vitality than the plants you deliberately cultivate. But this does not mean you are powerless. You can reshape the timing of the system. If you plant perennials early, such as tulips or chrysanthemums, they occupy the key ecological niches (light, water, space) before weeds emerge in spring. When weeds appear, they lack the conditions to grow. You have not eliminated them; you have simply removed their opportunity.
If we treat pests as a form of “bug,” we can go further by introducing targeted counterbalances. A friend of mine who grows vegetables plants marigolds next to every tomato. Tomato + marigold is a classic companion planting model—essentially embedding a “natural countermeasure unit” within the system. Marigolds suppress soil nematodes through root secretions, disrupt pest detection through scent, and attract beneficial insects. There is no act of “eliminating pests,” but their survival pathways are continuously weakened.
When we extend this perspective beyond gardening—to human society, organizational structures, or complex systems in the AI era—the same logic holds.
In the face of complexity, the goal is not to eliminate anomalies entirely, but to understand what causes system degradation:
Do not force a reset to zero. Starting over destroys existing stable structures; once ecological niches are emptied, problems return even faster.
Do not attempt total centralized suppression. The cognitive capacity of any individual, organization, or central system is limited; trying to control all variables leads to exponentially increasing control costs. Many highly centralized systems in history have collapsed for this reason (the Soviet Union, for example).
Do not rely on “herbicide-like” over-optimization—reducing complex systems to a single KPI. This often destroys both “good” and “bad” structures simultaneously, degrading the system as a whole.
In my view, a good system is not designed to be perfect—it is designed to be guided toward health. That is, based on your values, you gradually shape the system so that certain structures become easier to grow, while others lose space over time.
From Prigogine’s perspective, this is not about “controlling” a system, but about selectively introducing structures so that some evolutionary paths become more stable while others naturally decay. In complex systems far from equilibrium, order does not arise by eliminating fluctuations—it emerges over time, through the continuous stabilization of certain structures.
The Complexity of Reality Far Exceeds the Computational Threshold of Any Individual or System—No Plan Can Capture All Information; Only Feedback Can Gradually Adjust the System
The complexity of the real world far exceeds the computational threshold of any individual or system. A plan is not a tool of control—it is merely an initial guess. What ultimately determines the trajectory of a system is not your original design, but the successive cycles of information feedback that follow. At the starting point, no one can grasp all variables: sunlight is seasonal, soil carries historical residues, microorganisms evolve dynamically, materials decompose, and individuals have genetic differences. Not only are these variables vast in number, they are constantly changing. Any “top-down, one-shot perfect design” will quickly fail in reality. The only viable path is to let the system run first, and then iteratively adjust its structure through feedback.
Feedback is most directly visible in plants. Whether something works or not, the plant tells you immediately. Failure to fruit, lack of growth, disease, gradual death—these are all signals. And these signals have a critical property: they are real and cannot be falsified. They do not explain causes, they do not rationalize, and they do not conceal problems. They simply present outcomes.
If you are a programmer, the development process is essentially: error → fix → run → error again → fix again → converge. But there is a crucial prerequisite—errors must be trustworthy. If a system does not produce errors, or if errors are delayed, missing, misleading, or “packaged,” then you are not debugging—you are being consumed by the system. You lose the ability to judge causality. A system whose error signals are not trustworthy is not debuggable. If even the error messages are fake, you would want to smash the machine.
Natural systems are almost “zero-deception” feedback systems. A plant is extremely honest. You may think a location has good sunlight and drainage and should thrive—but it doesn’t. Only when you trace backward do you realize the relevant variable was not even within your original model. In spring, before trees leaf out, light is abundant; by early summer, when light is most needed, the canopy closes and blocks it. You may not know that alkaline material was buried underground (I have actually encountered this—perhaps a previous owner used it to deal with moles), causing the pH to spike; most plants tolerate slight acidity, but very few tolerate alkalinity—so they die immediately. You may not realize that wood chips were not fully composted, and during decomposition they aggressively absorb nitrogen, killing entire batches of young woody plants. I have encountered such situations countless times, and almost every time, the cause lay outside my initial understanding. Each plant is searching for a precise point where sunlight, water, soil, and genetics align. The problem is: humans cannot know all of this in advance. When you scale this complexity up to companies, organizations, or nations, the information load grows exponentially.
So what I really want to emphasize is not complexity itself, but the feedback mechanism. Plants do not give you answers—they give you extremely clean feedback signals. And precisely because these signals are trustworthy, I have been able, after repeated failures, to reconstruct the causes step by step—sometimes down to micronutrient levels (I once encountered a magnesium deficiency).
In complex systems, whether a system can be governed depends on whether feedback is real, timely, and tamper-proof.
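For the programmers in the audience, this claim can be made concrete with a toy control loop: the same correction procedure converges when the error signal is honest, and merely drifts when the signal is fabricated. The target value, gain, and noise range below are arbitrary assumptions for demonstration.

```python
# The debugging loop "error -> fix -> run -> error again" as a control loop.
# With an honest error signal, the state converges on the target; with a
# "packaged" signal unrelated to reality, the same loop just drifts in noise.

import random

TARGET = 10.0

def honest_error(state: float) -> float:
    return TARGET - state                  # real, unfalsified feedback

def fake_error(state: float) -> float:
    return random.uniform(-2.0, 2.0)       # fabricated signal, pure noise

def run(feedback, steps: int = 50) -> float:
    state = 0.0
    for _ in range(steps):
        state += 0.5 * feedback(state)     # apply a fix proportional to the error
    return state

print(f"honest feedback: {run(honest_error):6.2f}")  # converges near 10.0
print(f"fake feedback:   {run(fake_error):6.2f}")    # an untrustworthy result
```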
As a System Designer: Design for Real Feedback—or Walk Away
In systems where even error signals are fake, what you invest is not time but cognitive cost from constant misdirection. You cannot establish causality, cannot accumulate experience, cannot form stable structures—you are simply trapped in noise until you burn out.
Similarly, I no longer accept any form of hidden information or unwritten rules. Systems based on guesswork—where people try to read each other, circle around issues, and rely on inference—are essentially breaking the feedback channel. They are not adding complexity; they are actively creating unobservability. You may appear to participate in such a system, but you cannot access its true state—you can only guess. Such systems cannot be optimized.
Feedback is not something that emerged in the AI era—it has always been the core of complex system design. Cybernetics, industrial systems, organizational management, ecosystems, even markets—all are fundamentally built on feedback. Without feedback, there is no regulation; without regulation, there is no real system operation. A thermostat regulates temperature through feedback, a company adjusts operations through feedback, an ecosystem maintains dynamic balance through feedback. Complex systems are not “designed once and done”—they must continuously adjust themselves during operation.
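The thermostat is worth spelling out, because it is the smallest complete feedback system: measure, compare against a setpoint, correct, repeat. A minimal sketch, in which the setpoint, the tolerance band, and the toy room model are all assumptions:

```python
# Bang-bang control with hysteresis: the heater turns on below the band,
# off above it, and otherwise keeps its current state.

def thermostat_step(temp: float, heater_on: bool,
                    setpoint: float = 20.0, band: float = 0.5) -> bool:
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heater_on

temp, heater = 15.0, False
for _ in range(20):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3        # crude model of the room
print(f"temperature {temp:.1f}, heater {'on' if heater else 'off'}")
```

A company's quarterly review or an ecosystem's predator-prey balance is this same loop, only with messier signals and longer delays.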
AI Does Not Change the Principle of Feedback—It Expands Its Domain
Systems in the past also had feedback, but they mainly absorbed signals that could be mechanically captured: temperature, speed, inventory, error logs, clicks, output, death, stalled growth. These belonged to the physical layer, behavioral layer, or metric layer—capturable through sensors, tables, and logs.
But a vast amount of feedback has always existed outside formal systems—floating within human semantic space. Hesitation, deflection, misunderstanding, complaints, ambiguous responsibility, repeated explanations, uncertainty in language, ambiguity in documents, friction in collaboration, contradictions in requirements—these were never unimportant. Machines simply could not process them, so they remained in human minds, meetings, conversations, emails, and all those “you figure it out” spaces.
This is where AI introduces a fundamental shift: it can now process text, images, speech, and context—it can begin to capture feedback at the semantic level. Feedback is no longer limited to physical signals and hard metrics; it expands into areas previously accessible only through human intuition and interpretation. In other words, AI does not change the fact that feedback is central to complex systems—it changes what can now be considered feedback.
The Challenge Is No Longer Capability, but System Judgment
In the past, the primary limitation was capability. Machines could not understand semantics, so systems could not absorb these floating layers of feedback. Many things were not done simply because they could not be done.
Now the situation has changed. Technology has crossed a threshold: text can be read, images processed, speech transcribed, logs summarized, context assembled, weak signals extracted. Systems can now absorb a far broader range of feedback than before.
And this is precisely where the real difficulty begins:
What do you choose to absorb, and what do you reject?
What counts as real feedback, and what counts as noise or contamination?
How do you design boundaries, validation, and scheduling?
How do you prevent systems from mistaking “readable semantics” for “trustworthy reality”?
Because once semantic feedback enters the system, both capability and risk expand dramatically. What you ingest is not just information, but also bias, emotion, framing, narrative, power structures, organizational rhetoric, and historical noise. The fact that something can be processed does not mean it should be trusted. The fact that it can enter the system does not mean it should drive decisions.
What lies between these distinctions is precisely your systems view.
For this generation of system designers, the challenge is no longer just engineering complexity—it is feedback governance complexity.
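To make these governance questions concrete, here is one possible shape of a "feedback gate" standing between semantic input and decision-making. The sources, threshold, and categories are invented for illustration; the only point is the separation between what is readable and what is admissible.

```python
# A sketch of feedback governance for semantic signals: everything can be
# read, but only validated items are allowed to drive decisions.

from dataclasses import dataclass

@dataclass
class SemanticSignal:
    text: str
    source: str        # who produced the signal
    confidence: float  # the model's own confidence in its reading

TRUSTED_SOURCES = {"support_ticket", "incident_report"}  # hypothetical

def governance_gate(signal: SemanticSignal) -> str:
    """Decide whether a readable signal may become actionable feedback."""
    if signal.source not in TRUSTED_SOURCES:
        return "quarantine"       # readable does not mean trustworthy
    if signal.confidence < 0.8:
        return "human_review"     # low-confidence readings go to people
    return "admit"                # may enter the decision loop

for s in [SemanticSignal("refund failed twice", "support_ticket", 0.93),
          SemanticSignal("everything is fine!", "marketing_blog", 0.99),
          SemanticSignal("maybe a billing bug?", "support_ticket", 0.55)]:
    print(governance_gate(s), "<-", s.text)
```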
In Any System Designed to Serve Humans, Human-Centered Thinking Must Be Everywhere
Whenever humans interact over the long term with any container or information system, a form of energy complementarity or exchange emerges. Humans are not simply “using” a system—they continuously input attention, time, emotion, and judgment into it. The system, in turn, responds through feedback (flowers blooming, data changing, outcomes appearing, behaviors shifting). Over time, this forms a closed loop: humans are either energized or depleted, while the system is either sustained or degraded. This relationship goes far beyond simple “tool usage.”
Any entity that appears unrelated may, in fact, be part of the system. System boundaries are often artificially drawn for the sake of control and modeling. But in reality—especially in complex systems—the participants extend far beyond the few core nodes we define.
Human-centered thinking is itself a method for dealing with complexity. It is not sentimentality, but methodology—a way to find a reliable compression anchor in a world of infinite variables, unmodelable states, and delayed, noisy feedback.
Let me give a concrete example. When I first built my garden, I had a “hidden objective”: to provide sufficient nectar for bees throughout the year. Especially in late autumn, bees often face shortages because most flowers do not bloom during that season. So I deliberately designed the garden with a large number of cold-season flowering plants. As you can see in photos taken last November, the garden was still in full bloom. In horticultural terms, this is called a “cool-season garden,” which is quite rare in this region. At that time, the number of bees in my garden actually exceeded that of spring. I photographed many bumblebees sleeping inside flowers, as well as local solitary bee species at work.
If I continue, it may start to sound a bit mystical. But I personally acknowledge the complexity of the world. Phenomena that cannot yet be rigorously proven, but can be consistently perceived, I accept and incorporate into my daily practice. I believe that the relationship between humans and systems is not one of “usage,” but of “energy exchange.”
Take the relationship between people and their environment—or between people and the homes they inhabit over long periods. I once discussed a pattern with an architect friend: during my time working in real estate, I visited many older neighborhoods in the United States and noticed a consistent pattern—houses that are continuously occupied, regardless of age, tend to remain in decent condition; once left vacant for long periods, they deteriorate rapidly. This is not merely a matter of maintenance. My friend, despite having a rational engineering background, strongly agreed that there is a form of exchange between humans and buildings that is difficult to fully quantify.
Translated into engineering terms, this is not mysterious at all: when humans interact continuously with a system (a garden, software, organization, knowledge base, community, or house), what is fundamentally happening is the input of attention, the reception of feedback, and the circulation of energy. Once this input stops, the system loses structural vitality and begins to decay.
This is why I must also care about other “users” of the system—such as whether bees are thriving. That is part of the system’s integrity.
System Boundaries Are Artificial Constructs
“Any seemingly unrelated entity may be part of the system.”
At first glance, this may sound unscientific, but it is not. Nature itself does not have clear boundaries; boundaries are tools introduced for engineering control. In technical systems, we must define boundaries to manage complexity. But the real question is: within the scope of what we can influence, why not allow more entities to have space to exist?
This idea already manifests at the societal level. Should companies care for vulnerable employees? Should people with disabilities be systematically excluded? Should disadvantaged groups receive public support? From a purely efficiency-driven perspective, all of these could be removed. But in reality, we increasingly recognize that they are not “outside the system”—they are part of it. Excluding them is not optimization; it is a reduction of system resilience.
Human-Centered Thinking Is Not Morality—It Is a Strategy for Handling Complexity
In complex systems, variables are infinite, states cannot be fully modeled, and feedback is delayed and noisy. Such systems are everywhere in human society: organizations, nations, companies, markets—each of them is at least as complex as a large-scale engineering project.
And “humans” are currently the only nodes with the following capabilities:
Multimodal perception (vision, emotion, experience)
Fuzzy judgment
Long-term value assessment
Therefore, in such systems, being human-centered is not merely a moral choice—it is an engineering optimum:
Humans are the most effective nodes for compressing complexity.
Yet many real-world systems are doing the opposite: removing people for efficiency, cutting key contributors for metrics, concentrating wealth in the name of optimization. These decisions often appear “correct” in the short term, but they quietly sever a critical layer—the human feedback layer.
Any system that serves humans, if it reduces human participation in the name of efficiency, is fundamentally trading long-term system stability for short-term certainty.
What Happens When a System Becomes “Dehumanized”?
Phase 1: Efficiency Gains
Metrics improve
Costs decrease
Decisions appear cleaner
Phase 2: Feedback Loss
Weak signals disappear
Anomalies cannot be detected early
The system becomes “blind” to emerging problems
Phase 3: Structural Fragility
Reduced resilience to shocks
Local errors can no longer be absorbed
Small issues begin to amplify
Phase 4: Systemic Collapse
Sudden failures
Inability to recover
Even the root cause becomes unclear
The key question is whether a system still retains continuous human input as the source of its structural vitality. Just like a human garden—even though bees do not plant or cultivate crops, they remain a critical indicator of system health. Sustaining them is, in fact, a way of activating the system itself.
Once this input is cut off, even if the system appears more “efficient” in the short term, it has already entered an irreversible path of decline.
Final Note
A system that truly runs well is never built overnight. It emerges gradually through repeated trial and error, a measure of luck, and sustained effort from many people. Yet its decline follows the opposite pattern—it requires neither complex conditions nor dramatic shocks. Often, the mere absence of maintenance is enough for rapid deterioration. Take my garden as an example: after ten years of work, it still hasn’t fully reached the state I envision. But if I stop maintaining the soil, fertilization cycles, weed control, and the seasonal care of woody plants and perennials for just one year, the entire system will quickly fall into disorder—within a single cycle, it can become almost unrecognizable.
The rise of AI has led many programmers to believe this is some kind of “final battle,” as if everything will be rewritten. In reality, there is clearly a significant bubble in current AI valuations. Many startups will face the same fate seen across all industries—struggling to generate sustainable profits and eventually failing. This is a normal phase in the diffusion of any new technology. But one thing is irreversible: large language models have permanently altered the trajectory of the human technology tree.
If you zoom out, you’ll notice that post-war technological progress in the United States was not driven by sudden leaps, but by steady and continuous improvements in productivity. Real technological transformation rarely comes from “replacing everything.” Instead, it enters at the edges of existing systems, gradually infiltrating and reshaping them without breaking their original structure.
This pattern is already evident in how AI is being adopted today. Many traditional industries are not choosing full-scale “AI transformation.” Instead, they introduce large language models at entry points—such as customer service systems. But customer service is not simply “chatting.” It is fundamentally a node for information intake, classification, and the triggering of downstream processes. The value of AI here is not to replace humans, but to improve the structural quality of how information enters the system—and from that entry point, gradually extend into deeper layers.
The real question is not whether to go “All in AI”—that’s just a slogan. The real question is whether you understand the design of the entry point: How is information captured? How is it structured? How is it handed off to downstream processes? Is the connection between customer service and backend operators seamless, or fragmented? Without solving these questions, there is no such thing as system-level AI integration.
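As one concrete reading of this entry-point design, here is a sketch of a customer-service intake node: capture, structure, and hand off, with humans kept in the loop rather than replaced. The classifier below is a trivial stand-in for an LLM call, and every field and queue name is hypothetical.

```python
# Intake -> classification -> structuring -> downstream handoff.
# The value is in improving how information enters the system.

from dataclasses import dataclass, field

@dataclass
class Ticket:
    raw_text: str
    category: str = "unclassified"
    structured: dict = field(default_factory=dict)

def classify(ticket: Ticket) -> Ticket:
    """Stand-in for an LLM call that labels the incoming message."""
    ticket.category = "billing" if "invoice" in ticket.raw_text.lower() else "general"
    return ticket

def structure(ticket: Ticket) -> Ticket:
    """Turn free text into fields that downstream systems can consume."""
    ticket.structured = {"summary": ticket.raw_text[:60],
                         "needs_human": ticket.category == "general"}
    return ticket

def hand_off(ticket: Ticket) -> str:
    """Seamless, not fragmented: every ticket lands in a named queue."""
    return "billing_queue" if ticket.category == "billing" else "human_agent_queue"

t = structure(classify(Ticket("Our last invoice was charged twice.")))
print(hand_off(t), t.structured)
```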
By contrast, a strategy that starts with the goal of “replacing all human labor” is fundamentally mechanical and non-human-centered. Such systems tend to ignore the essential role humans play in the real world—as sources of feedback, judgment, and regulation—and therefore struggle to remain stable in complex environments.
A more viable path is always to serve people first: embed the system into human workflows, take on part of the cognitive and operational load, and evolve through real feedback—rather than attempting to rebuild everything from scratch. Only in this way does AI cease to be an external tool and instead become a capability that grows organically within the system itself.
继上一篇明确了我将研究重点聚焦在“为家庭与小企业构建 AI 原生私有系统”之后,我认为有必要把一个更底层、也更难被直接描述的东西讲清楚,那就是“我的系统观”。
这个词听起来抽象,但它并不是凭空构建的。过去几年,随着 AI 的迅速发展,有几位长期交流的朋友反复建议我去系统性地研究诸如系统工程、复杂系统、老三论与新三论这些领域。在当时,这些看似偏理论、甚至有些“学院派”的方向,并没有让我觉得与自身实践高度相关,或许也是因为,那时我的经验和能力,还没有真正触及这些问题的边界。
但现在反而认为,系统工程并不是某个特定学科,而是一种无处不在的认知方式。只是大多数时候,我们是在“做系统”,却没有意识到自己正在做。
我想用一种人人都听得懂的方式,结合我自己过去10年造园的经历,来讲清楚我所理解的系统观。
一方面,园艺足够具体,几乎每个人都能感知:植物会不会活,环境是否合适,水和光如何调配,这些都不是概念,而是直接反馈在现实中的结果。另一方面,这也是我亲身经历了超过十年的长期实践场域,在这十年里,大部分时间都在试错,而“试错”本身,正是系统工程最真实的运行方式。
这里面的每一个判断、每一次调整,都不是推导出来的,而是用时间、金钱和真实劳动换来的。相比纯粹的文本推理、逻辑演绎,这种经验是一种更“物理”的系统表达。
某种意义上,这样的经验是稀缺的。今天写代码变得容易,写文章也变得容易,但要真正把一株植物从土里养出来,你必须离开抽象的环境,走到阳光下,进入一个不可控、不可简化的真实系统之中,接受它的约束、反馈与不确定性。而正是这种长期、具体、不可跳过的实践,让我逐渐形成了现在这套系统观。
所以对于一个程序员或研究者来说,你要尊重那些在现实中能够创造并持续运营盈利企业的企业家。在复杂的经济系统和物理现实中,创造企业并持续盈利,是完全无法被教会的能力,只能依靠残酷的市场筛选,再加上天赋与其他难以言明的因素。
这套系统观,并不只属于园艺,它正在成为我构建 AI 原生私有系统时的底层认知框架。这篇文章更像是一篇杂文、一篇经验总结,而不是正经的理论推导,但对我来说仍然重要。在 AI 写代码的今天,许多以前处于生产链基层的程序员都必须转型:以前写代码承担的是一种“翻译”的职能,现在 AI 已经成为从意图到代码的翻译者,人就必须把自己重新定位到“系统架构”这个层面。
任何清除现有运作中系统的清零式运作,都蕴含无尽的风险
我们说说我在花园反复试错之后,沉淀出来的第一个、也是最核心的系统经验:我反对“清零式开发”。
所谓清零式开发,就是把一个已经在真实世界中运行的系统——不管它运行得好不好——整体推倒、归零,然后试图在一片“干净”的基础上,重新搭建一个自以为更优、更完美的体系。
这在园艺中有一个非常典型的表现:大面积开荒,把原有植被全部清除,杂草铲平(事实上也根本无法真正清除干净),再投入高成本,种植一套精心设计的“理想花园”。
我曾经就是这么做的。
最初,我为这片园子设想的是一个“观赏型玫瑰园”,目标是收集大量珍稀月季品种,打造一个在盛花期可以吸引人专门前来观赏的空间。从设计上看,这是一套非常“正确”的方案——甚至可以说,在纸面上是接近完美的。
但现实系统从来不会按照纸面运行。
月季确实是商业花卉中的佼佼者,但它的代价极高:
需要高密度的人力维护,持续浇水、施肥;对环境极其敏感,极易感染黑斑病、白粉病,虫害(如蚜虫)也非常频繁;为了保证观赏性,还必须长期维持植株之间的“干净边界”,不断除草、铺设木屑抑草层。
这些工作不是一次性的,而是持续性的、无穷无尽的。
更重要的是——这些问题,在“清零之前”,是不可见的。
当你把原有系统全部移除之后,你失去了对这个空间原有生态结构的所有“认知锚点”:
你不知道土壤原本的微生物结构是什么样的,不知道哪些杂草在压制哪些病害,不知道原本的水分循环、光照路径、甚至风的流动是如何在这个系统中起作用的。
你只是把一切归零,然后在一个“你以为可控”的环境里开始重建。
结果就是:混乱从你完全无法预期的地方开始涌现。
一开始,它确实很成功,很漂亮,甚至可以说“验证了设计的正确性”。但随着时间推进,各种问题开始出现,并且是非线性地叠加:
维护成本失控、病虫害反复、人工投入越来越重,而你却无法建立稳定的预期——不知道下一阶段还会发生什么,不知道还要投入多少资源才能维持这个系统。而且这个环境本身是否能长期支持这个系统,在你大规划大建设之前,是不能100%确定的。
最终,这种系统有可能“慢慢变差”,或者是因为成本结构失衡,维护费用过高,在某一个点上突然崩溃。
这段经历让我形成了一个非常明确的系统判断:
任何系统,只要它已经在真实世界中运行,就不应该被轻易“清零”。
因为它之所以能运行,本身就意味着它内部已经形成了一套你尚未完全理解的结构平衡。
因此,更优的路径应该是:
在现有系统之上,进行可观测、可控制的局部改造;
在保持系统连续性的前提下,让结构缓慢演化。
宁可慢,不要断。
这种认知,其实不仅适用于花园。
我见过一些开发商,试图“从零开始”重建一座城市:推平大片土地,按照规划一次性建成一个完整的城市结构。但最终往往因为成本巨大、人口导入失败、系统缺乏真实反馈,这些地方变成了空城,一个在设计上成立,但在现实中无法运转的系统。
相反,一些更成熟的开发方式,是在已有社区中寻找老化资产——便宜的房子、衰退的街区——然后进行翻新或局部重建。因为原有社区本身是“活的系统”,有人口、有需求、有流动,所以哪怕房子还没完全建好,就已经开始有真实买家进入。在美国我认识不少这种极小的开发商,他们在长期积累的眼光之下,几乎没有什么失败的项目。
本质上,这两种路径的区别,不在于“设计能力”,而在于是否尊重一个已经运行中的系统。
这也是我在后续构建 AI 原生私有系统时,一个非常底层的原则:
不要试图从零构建一个完美系统,而是要在一个正在运行的现实系统上,持续演化出结构。
你可以从图片上看到,我在刚打造这个玫瑰园的时候,盛花期有多么惊艳。
任何现实中的复杂工程,都需要靠慢慢演变,渐进式介入,耗费漫长的时间完成
现在几乎所有人都在谈“All in AI”,但“AI 存在估值泡沫”这个说法,如今已经被很多人认同,甚至包括我这种 AI 乐观主义者:一方面是资本市场对 AI 的高估值与高预期,另一方面是它在真实世界中落地时所面临的结构性困难。
问题不在于AI能力不够,而在于系统接入方式是错的。
AI并不是一种可以“一键替换”的技术,它更像是一种会改变系统内部结构的“慢变量”。如果用错误的方式引入——例如一刀切地替换掉原有的会计系统、管理系统、运营流程——那本质上就是在做我前面所说的“清零式开发”:你不仅丢掉了已有系统的稳定性,还在一个高度不确定的基础上,引入了更大的不确定性。
这也是为什么很多 AI 项目看起来技术先进,但最终落地困难,甚至半途而废;不少 AI 创业团队一开始是大明星,短短 24 个月就销声匿迹,因为它们被放在了错误的系统位置上。
真正可行的路径,恰恰相反。
AI原生系统,不是“从零开始构建一个AI系统”,而是在已有系统之中,逐步渗透、逐步接管、逐步重构。
它的起点,往往不是那些看起来“最重要”的核心系统,而是一些最不起眼、但最具结构价值的环节,例如:
繁杂的信息录入
原始单据的整理与结构化
OCR + 分类 + 初步语义理解
重复性的人工处理流程
这些地方有一个共同特征:
低决策风险 + 高重复性 + 高结构价值
从这些“边缘入口”开始,AI可以先承担最基础的工作:把现实世界中的非结构化信息,转化为结构化输入。这一步一旦稳定下来,系统的“数据入口”就发生了变化。
而入口一旦被改变,系统就开始被“重新定义”。
接下来,才是逐步往内渗透:
从记录 → 到理解 → 到辅助判断 → 到部分自动决策 → 再到调度系统。
整个过程,是一种渐进式侵入(progressive infiltration),而不是替代。
这和我在花园中的经验是完全一致的:
你不是把整个生态拔掉重种,而是先改变水流、光照、土壤中的某一个变量,让系统在保持连续性的情况下,逐渐朝新的结构演化。
所以,从系统观来看,“All in AI”本身就是一个误导性的口号。
真正应该发生的,不是“All in”,而是:
AI slowly grows into the system, until the system itself becomes AI-native.
不是替换系统,而是让系统在不被打断的情况下被重新生长。一个企业只有在维持其传统业务盈利的前提下,随着 AI 慢慢提高效率,才会渐进式地推进 AI 原生系统的改革。
你看,我的玫瑰园一开始非常惊艳,现在却变得不可维护,只能重新规划。
创新与系统维护并不冲突,但创新的成功始终带有极大的偶然性,伟大更不可能被提前计划。
这个世界当然是被创新持续推进的。回头看人类漫长历史,我们几乎一直在创新。今天餐桌上的小麦、玉米、果树、蔬菜、花卉,没有哪一样完全是“天然原样”;它们几乎都带着人类长期驯化、筛选与改造的痕迹。最早的小麦不是今天的样子,最初的野蔷薇也不是今天月季的样子。再看现代社会,电力、发动机、计算机,直到这几年席卷全球的大语言模型,也都是人类在逐步理解自然规律之后,以自己的价值判断、技术能力和现实需求,对既有状态所做的主动改写。无论是改写植物的形态,金属的性能,还是微观世界中信息与能量的组织方式,创新的本质,都是在理解规律之后,尝试把世界推向一种此前并不存在的形态。
但创新不是一条直线,它几乎总是由大量试错铺出来的。真正落下来的成功,常常只是无数失败、偏差、误判、半途而废和偶然撞中的极少数结果。也就是说,创新可以被追求,却不能被精确规划;可以长期投入,却不能保证按人的意志定点开花。很多时候,99.9%都是筛选、淘汰、疲惫和沉没,最后留下的0.1%,往往仍然带着浓厚的偶然性。人能做的是准备土壤、积累样本、提高判断力、承受漫长的无果期,但那个真正成立的“新东西”,什么时候出现、以什么形式出现,并不完全受控。正因为如此,系统维护和创新从来不是对立关系。一个人若没有长期维护系统、维护样本、维护环境、维护观察能力,就不会有创新发生的条件;而创新本身,也不是对维护的否定,而是维护在长时间尺度上偶然结出的果。
我自己在花园里的月季育种过程,就是一个很具体的例子。自从高中接触植物学开始,我就一直有一个很朴素很个人化的愿望:这辈子想亲手创造出一株属于我自己的植物。后来真正有机会实践时,我想做的是一株“完美月季”:完全抗病,耐旱耐热,持续开花,花朵硕大,不需要精细照料,不需要频繁施肥打药,只要阳光和雨水基本到位,就能每个月稳定给你一把足够插花的花枝(月季的独特性,全世界几乎所有的木本植物都做不到一年中多个月份反反复复开花)。换句话说,我想培育的不是一株需要人长期伺候的娇贵植物,而是一株兼具观赏性与自立性的园林花卉。
为了逼近这个目标,这些年我依靠自己广泛购买、采集,也得到不少朋友的帮助,在大量扦插繁殖与异花授粉育种中反复筛选。前前后后,我接触过三百多个品种,繁育过几千,甚至可能上万株植株。我的方法很简单很残酷,就是高淘汰率筛选:在同等条件下,五十株里面也许只留下两株,甚至一株继续观察,其他绝大多数不是送人,就是直接淘汰。最开始几年,我还尽量维持比较严谨的科学记录,追踪谱系、观察性状、做比较系统的筛选。但随着时间推移,我越来越疲惫,也越来越感到力不从心,因为月季本身就是一个极容易感染黑斑病和白粉病的属类。只要我在某个谱系中看到明显病害,我通常就会立刻放弃,不再继续投入。
真正有意思的是,到最后,我几乎已经准备结束这场试验的时候,才突然发现,在这一大片不断淘汰、不断清理的试验田里,居然有一株自己留了下来。你可以在下图中看到最初发现他的这一片田,其他的兄弟姐妹全部被淘汰了。它的花开得如牡丹一般硕大,叶片带有明显的蜡质感,而且几乎从不染病。讽刺的是,也正因为后期我实在太累,很多早年的记录没有完全维持下来,以至于连这株植物的父系母系谱系,我都已经无法百分之百追溯清楚。它不是在我最有掌控感、最严密的时候出现的,反而是在我几乎要放弃的时候,以一种近乎偶然的方式浮现出来。
现在,我已经清除了其他大部分品种,把精力集中在这一株特异变种上:一方面通过自花授粉去观察和稳定它的性状,另一方面通过扦插繁殖去复制它现有的基因表达,并不断送给朋友种植,让它在更多真实环境中扩散、验证。也有人建议我以后去考虑专利注册,只是这件事目前在我看来既麻烦又昂贵,所以暂时没有推进。
整个过程让我形成了一个以前没有的系统观:创新并不是对系统维护的背叛,恰恰相反,创新只能生长在长期维护之中。但与此同时,创新的成功也绝不是线性努力的必然回报。你可以计划试验,计划筛选,计划投入,计划方向,却无法计划伟大本身。伟大不能作为一项任务被执行,它更像是在无数维护、试错、淘汰与坚持之后,由偶然亲手落到你面前的东西。
看清现实,去真正创新会发生的地方呆着
所以你现在看到科研界发生的很多变化,以及一些大国正在做出的不同选择,并不只是政策摇摆,而是科研组织形态本身正在重构。越来越多前沿研究开始从传统高校体系中外溢出来,大厂、头部公司和私人实验室,正在成为某些关键领域的新先锋。原因并不神秘:它们有钱、有人、有算力,也经过了长期市场筛选,因而更有条件维持高密度试错、高成本投入和快速迭代所需要的科研环境。以 SpaceX 为例,NASA 近年的月球计划本身就在越来越多地转向商业系统,Reuters 2026 年的报道直接指出,NASA 已开始把后续任务向商业竞标开放,传统承包商体系则因高成本、低频率和技术路径老化承受越来越大压力。
尤其在计算机与 AI 领域,这种转移更加明显。前沿模型、超大规模算力、工程化部署、真实用户反馈和持续资本投入,决定了今天很多“真正的创新”已经不再主要发生在传统学院式实验室,而是发生在拥有数据、芯片、云基础设施与产品闭环的公司体系里。斯坦福 2025 AI Index 也反复强调,前沿 AI 模型的训练成本和算力门槛仍在快速攀升,这天然会把最前沿研究推向资源最集中的机构。
相反,传统象牙塔正在同时面临几重压力。第一,论文体系本身的边际创新效率在下降,评价机制越来越依赖发表数量、基金指标和同行博弈,导致真正高风险、高试错、高失败率的创新越来越难在学院体制内生存。第二,学术界近年丑闻频发,从论文撤稿、数据造假到署名与同行评审争议,都在削弱学术共同体原本的道德权威。第三,大学过去最核心的一个位置,是掌握知识的解释权;但在 AI 时代,这种权力正在被削弱,因为知识获取、压缩、检索、解释的入口,已经不再只属于大学。于是,大学不仅失去一部分科研垄断地位,也开始失去过去支撑高学费体系的认知稀缺性。
这也是为什么美国和英国这些传统学术高地,都出现了越来越明显的结构性紧张。在美国,2025 年以来,高教组织 ACE 公开跟踪并批评特朗普政府一系列针对高校和联邦教育系统的重组与经费压缩动作,明确提到教育部裁员、研究拨款冻结、联邦角色收缩,以及对大学体系的持续施压。在英国,大学财政危机已经转化为实质性裁员。2025 年初,《卫报》援引业内估计称,英国高校系统中可能出现多达一万个岗位的裁撤或流失;同年 5 月,英国高教监管机构的财务健康检查又显示,英格兰高校收入已连续第三年下滑,国际学生招收不及预期正在继续推动裁员、削减项目和出售资产。
所以更深一层地看,今天科研组织的变化,不只是“高校不行了、公司起来了”这么简单,而是创新越来越依赖高能量密度、高资本密度和高试错容忍度的环境。传统大学擅长的是知识整理、人才训练、学术传承与相对稳定的中长期研究;但在某些需要巨量算力、巨额投入、极快反馈和高失败承受力的新领域,私人公司与产业实验室越来越具备主导权。
一些举国体制的“计划型创新”,那更是成功希望比较渺茫了。
从下图可以看到,这里大量的杂交幼苗几乎全部没有成功,因为成功本身就是偶然的。成功的那一株倒是挺神奇,我都不知道它从哪里来,但它完全抗病,春季开花如牡丹一般硕大。
复杂系统无法掌控一切:永远都有杂草(bug),物理拔除不是最佳方案
只要真正做过园艺的人,或者一位经验老道的农民,都会告诉你:杂草永远存在。不是暂时存在,而是持续、反复、不可终结地存在。你今天拔掉,它明天还会长出来;你如果更激进一点,使用除草剂,那么问题就变成了另一种形式:你如何确保土壤不被化学破坏?你又如何保证那些你真正想要的植物不会一起被伤害?如果你无法容忍杂草的存在,那么很遗憾,你的全部时间都会被消耗在拔草这件事上,而且这件事本身几乎没有终点。
我这里想说一个我逐渐悟出的道理:在一个复杂系统中,每一个个体都拥有自己的“基因”和能动性。一片田野里,每一株草、每一只昆虫、每一只兔子,都是独立的个体,它们各自按照自身的逻辑在行动。一个公司也一样,不管有多少员工,每个人都有自己的理解和判断,并不是一个可以完全被编程的机器。一个国家更是如此,不同的人种、文化、宗教、地域,构成了高度多样的个体集合,每个人都是独立的、有想法的、不可完全预测的。任何一种试图通过中央控制、行政规划或者“硬代码”来掌控一切信息、阻止一切异常、不让任何“杂草”或bug出现的设想,本质上都是违背复杂系统规律的。这里不得不提到一位我反复学习的理论来源——伊利亚·普里高津 在《Order Out of Chaos》中的核心观点:
只要系统远离平衡(far from equilibrium),结构、扰动和“杂草”就必然不断涌现。
为什么会这样?因为系统不是静态的,而是“持续耗散”的。所谓耗散结构,本质上就是系统不断输入能量,同时不可避免地输出熵。一个花园在输入阳光、水分的同时,也会产生竞争、混乱和杂草;一个公司在输入资金和信息的同时,也会产生误差、冲突和低效;一个国家在运行过程中更是持续地产生各种噪声与不确定性。一个只输入能量而不产生任何混乱的系统,在现实世界中并不存在,那只存在于神话中的伊甸园。
因此,关键不在于“如何消灭杂草”,而在于你如何看待杂草。对于程序员来说,这一点尤其容易理解:如果你无法容忍任何一个bug,那么你将永远处于修bug的循环之中,而且这个循环没有终点。换一个角度来看,杂草并不是全都代表着计算的异常,而是系统运行的副产物。我们每天面对的各种问题——无论是组织中的摩擦,还是AI生成代码中的错误——本质上都属于这一类“不可消除的副产物”。你不可能指望任何一个复杂系统是完全无噪声、完全无bug的。
如果不物理拔除杂草,是否可以采用更聪明的“制衡”方法?
在园艺中,我逐渐放弃了粗暴的“拔掉”或者依赖除草剂的方式,而开始采用一种更有效的策略:时间上的制衡。比如说,大多数杂草的生命力往往强于你精心种植的植物。但这并不意味着你只能被动应对。你可以改变时间结构,比如提前种植多年生植物,例如郁金香、菊花的根系等。当春天到来时,这些植物已经率先占据了光、水和空间这些关键生态位,杂草即使出现,也缺乏生长的条件。你并没有消灭杂草,而是让它“没有机会”。
如果把害虫也看作一种“bug”,那么还可以进一步引入定向制衡。我有一位很喜欢种菜的朋友,她在每一颗番茄旁边都会种上一株万寿菊。番茄 + 万寿菊,其实是一个经典的伴生种植模型,本质上是在系统内部嵌入一个“天然对抗单元”。万寿菊能够通过根系分泌物抑制土壤中的线虫,通过气味干扰害虫定位,同时吸引有益昆虫。这里没有任何“消灭害虫”的动作,但害虫的生存路径被持续削弱。
当我们把视角从园艺扩展到更大的系统——人类社会、组织结构,乃至AI时代的复杂系统——这个逻辑同样成立。面对复杂性,最重要的不是试图彻底清除异常,而是理解哪些做法会导致系统退化。不要强行清零,因为推倒重来意味着你同时摧毁了已有的稳定结构,而生态位一旦空出来,问题只会更快回来。不要试图全局压制,因为任何个体、任何组织、任何中央系统的认知能力都是有限的,试图掌控所有变量只会让控制成本指数级上升,历史上许多高度集权的系统最终都崩溃在这一点上(比如苏联)。也不要依赖类似“除草剂”的过度优化手段,把复杂系统压缩成单一KPI进行优化,这种做法往往会同时破坏“好结构”和“坏结构”,最终导致系统整体质量下降。
按照我的理解,一个好的系统,并不是被设计成完美的,而是可以被“引导”成良性的。也就是说,你需要基于自己的价值取向,去慢慢推进一个系统,使某些结构更容易生长,而另一些结构逐渐失去空间。从普利高津的视角来看,这甚至不是在“控制系统”,而是在通过选择性地引入结构,使某些演化路径变得更加稳定,而另一些路径自然衰退。在远离平衡的复杂系统中,秩序不是通过消除涨落获得的,而是在时间中,通过不断稳定某些结构,逐渐形成的。
现实的复杂度远超个人或者系统的计算阈值,任何人为计划都无法掌握全部信息,只有靠系统的信息反馈来慢慢调整。
现实世界的复杂度远超任何个体或者系统的计算阈值:计划不是控制工具,它只是一个初始猜测;真正决定系统走向的,不是你一开始的设计,而是后面一轮一轮的信息反馈回路。人在起点不可能掌握全部变量——光照是季节性的,土壤有历史遗留,微生物在动态变化,材料在分解,个体有各自的基因差异——这些变量不仅数量巨大,而且一直在变化。所以任何“自上而下、一次性设计完美”的方案,在现实里都会迅速失效。唯一能走通的路径,就是让系统先跑起来,然后靠反馈一点点修正结构。
信息反馈这件事,在植物上是最直接的。它好不好,它会直接告诉你。不结果、不生长、生病、慢慢死掉,这些都是信号。而且这种信号有一个极其重要的特征:真实、不可伪造。它不会解释原因,不会帮你合理化,更不会替你掩盖问题。它只是把结果摆在那里。
如果你是程序员,开发过程本质就是:报错 → 修改 → 运行 → 再报错 → 再改 → 收敛。但这里面有一个前提——报错必须可信。如果一个系统不报错,或者报错是延迟的、缺失的、误导性的,甚至是“包装过”的,那你就不是在调试,而是在被系统消耗。你会完全失去判断能力。一个连错误信号都不可信的系统,是不可调试的。一个系统要是连报错都是假的,那你肯定想直接砸机。
自然系统,它几乎是一个“零欺骗”的反馈系统。一棵植物非常诚实。你看着一个地方,阳光好、排水好,你觉得它一定长得好,但它就是长不好。你往回推,才发现变量根本不在你当初考虑的那一层。春天树叶没长出来,光很好;等到初夏,它最需要光的时候,树冠一合,光被挡死了。你不知道地下被埋过碱(我真的遇到过,以前的某个房主可能因为鼹鼠问题埋了碱),pH直接爆增,这个星球耐弱酸的植物多,耐碱的很少,立即死;你不知道木屑没熟化,在分解过程中疯狂吸氮,把整批木本植物的幼苗拖死。这种情况我遇到无数次,而且几乎每一次,原因都在你原本认知之外。每一株植物都在找一个点:阳光、水分、土壤、基因刚好匹配的那个点。但问题是,人类无法提前掌握这些全部信息。你把这个复杂度放大到公司、组织、国家,那信息量是指数级上升的。
所以我这里真正想讲的不是复杂度本身,而是反馈机制。植物给你的,不是答案,而是一个极其干净的反馈信号。也正因为这个信号是可信的,我才能在一次次踩坑之后,通过分析,把问题一点点还原出来,甚至细到土壤微量元素这一层(我还真碰到过一次“缺镁元素”的情况)。
在复杂系统中,系统能不能被治理,取决于反馈是否真实、及时、不可篡改。
作为系统设计者,去设计真实的反馈;无法获取真实反馈的,直接不玩了
那些连报错都是假的系统,你在里面投入的不是时间,而是被不断误导的认知成本。你无法判断因果,无法建立经验,无法形成稳定结构,只能在噪音里反复试错,最后把人拖垮。
同样,我也不再接受任何形式的“隐藏信息”和“潜规则”。那种你猜我、我猜你、绕圈子、靠揣测推进的系统,本质上就是在破坏反馈通路。它不是在增加复杂度,而是在主动制造不可观测性。你看似在参与一个系统,实际上你拿不到系统的真实状态,只能靠猜,这种系统是无法被优化的。
其实反馈并不是AI时代才出现的东西,反馈一直都是复杂系统设计里的核心环节。控制论、工业系统、组织管理、生态系统,甚至市场,本质上全都建立在反馈上。没有反馈,就没有调节;没有调节,就没有真正的系统运行。一个恒温器(比如你家空调)靠反馈调温,一个企业靠反馈调整经营,一个生态系统靠反馈维持动态平衡。复杂系统之所以不是一次性设计完就结束,就是因为它必须在运行中不断根据反馈修正自己。
AI改变的不是反馈原则,而是反馈疆域。机器一旦开始读懂语义,系统能接住的现实就突然扩大了。
过去的系统当然也有反馈,但它们主要吸收的是那些容易被机械化接住的反馈:温度、速度、库存、报错、点击、产量、死亡、生长停滞。这些反馈大多属于物理层、行为层、指标层,是可以直接被传感器、表格、日志捕捉的。但还有大量反馈,其实一直存在,只是漂浮在人类语义系统里,进不了正式系统。比如犹豫、推诿、误解、抱怨、模糊责任、反复解释、话语中的不确定、文档中的暧昧、协作中的摩擦、需求里的自相矛盾。这些东西过去不是不重要,而是机器读不懂,所以只能留在人脑、会议、闲聊、邮件和各种“你自己体会”的空间里。
现在AI最根本的变化就在这里:它开始可以直接处理文本、图像、语音、上下文,开始能够从语义层接住反馈。于是反馈第一次不再只局限于物理信号和硬指标,而是扩展到了原来只能靠人类经验去感受、去解释、去闻出来的那部分现实。也就是说,AI没有改变“反馈是复杂系统核心”这件事,AI改变的是:什么东西现在也可以算作系统反馈了。
真正的难题不再是技术做不到,而是在技术已经开始做得到之后,你到底用什么系统观去约束它、组织它、落地它。
过去很多时候,设计者的主要问题是能力不足。机器看不懂语义,系统接不住这些漂浮在人类解释层里的反馈,所以很多事情不是你不想做,而是你根本做不了。但现在不一样了。现在的问题恰恰是,技术能力已经越过了一个门槛:文本能读,图像能读,语音能转,日志能归纳,上下文能拼接,弱信号能提取。也就是说,系统开始有能力吸收比过去大得多的反馈范围。而这时候真正困难的地方才出现:你到底吸收什么,不吸收什么;你到底把什么当作真实反馈,什么当作污染;你到底如何设计边界,如何设计验证,如何设计调度,如何防止系统把“可读懂的语义”错当成“可信的现实”。
所以这一代系统设计者面对的,不再只是工程复杂度,而是反馈治理复杂度。因为语义一旦能进系统,系统的能力会暴涨,但系统的风险也会暴涨。你接进来的不只是信息,还有误导、情绪、包装、叙事、权力痕迹、组织修辞、历史噪音。技术上能读懂,不等于系统上该吸收。能处理,不等于该相信。能进入,不等于该调度。这中间差的,正是系统观。
只要是为人服务的系统,人本思想,就必须无处不在。
1)人与任何长期交互的容器、信息体,都会形成一种能量互补或者交换的关系。人不是在“使用”系统,而是在不断向系统输入注意力、时间、情绪与判断;系统则以反馈回应这种输入(花开、数据变化、结果显现、行为改变)。长期来看,这构成一个闭环:人被激励,或被消耗,系统被维持,或被衰败。这种关系,远远超出了“工具使用”的范畴。
2)任何看似不相关的主体,也许都是系统的一部分。系统的边界,很多时候只是人为切出来的,是为了方便控制与建模。但在真实世界中,尤其是在复杂系统中,参与者远不止我们定义的那几个核心节点。
3)人本思想,本身就是处理复杂性的一部分。它不是情怀,而是方法论,是在无限变量、不可建模状态、延迟且带噪反馈中,找到一个可依赖的压缩锚点。
打个比方,我这个花园,最初建设有一个“暗目标”:尽量在全年提供足够的花蜜给蜜蜂采集。尤其是在深秋阶段,蜜蜂容易出现储备不足,因为大多数花卉并不在这个季节开花。所以我刻意设计了大量冷季开花的植物。你可以看到,我去年11月拍的照片里,整个花园依然处在盛花期。从园艺角度,这叫“冷季花园”,在这个区域并不多见。所以我的花园,这个时候蜜蜂数量甚至超过春季。我拍到大量熊蜂在花里“睡觉”,还有一些本地特有的独居蜂在活动。
好,如果继续说下去,可能听起来会有点“玄”。但我个人是承认世界复杂性的。对于那些暂时无法被严格证明、但可以被稳定感知的现象,我是认可并在日常中使用的。我认为,人与系统之间不是“使用关系”,而是“能量交换关系”。
比如人与环境,人与长期居住的房子。我和一位建筑师朋友聊过一个现象:我在房地产公司工作时,去过很多美国老城区,我发现一个很稳定的规律——只要房子一直有人居住,不管楼龄多大,看起来都“还行”;一旦长期空置,就会迅速衰败。这不仅仅是“有没有人维护”的问题。我朋友非常认同这一点,他虽然是一个理性工程背景的人,但也认为人与建筑之间存在一种难以完全量化的“交换关系”。
其实把它翻译成工程语言,也并不神秘:当人长期与一个系统交互(花园、软件、组织、知识库、社区、房子),本质上发生的是——注意力输入、系统反馈,以及能量循环。一旦输入中断,系统就会失去结构活性,开始退化。
所以,我当然应该关心其他“系统使用者”,比如蜜蜂活得好不好。这是系统完整性的一部分。
系统边界是人为切出来的
“任何看似不相关的主体,也许都是系统的一部分”
这句话乍一听有点反科学,但仔细想,并不是。自然界本身并没有清晰的边界,边界是为了工程控制而引入的工具。在技术系统中,我们必须划边界,否则无法管理复杂度。但问题在于:在我们“能力所及”的范围内,在那些可以顺手设计的部分,为什么不让更多主体有生存空间?
这在社会层面其实早已有体现。比如企业是否应该照顾弱势员工?残障人士是否应该被系统性排除?弱势群体是否应该得到公共资源支持?如果从极端效率角度,这些都可以被剔除。但现实是,我们逐渐意识到:他们并不是“系统之外的人”,而是系统的一部分。剔除他们,并不是优化,而是在削弱系统的韧性。
“人本思想”不是道德,而是复杂性处理策略
在复杂系统中,变量是无限的,状态不可完全建模,反馈具有延迟且充满噪声。这类系统在人类社会中无处不在:组织、国家、公司、市场,每一个单独拿出来,其复杂度都不低于一个大型工程项目。
而“人”,是目前唯一具备以下能力的节点:
多模态感知(视觉、情绪、经验)
模糊判断能力
长期价值判断能力
因此,在这种系统中,以人为本,并不仅仅是“善意选择”,而是一个工程上的最优解:
人,是压缩复杂度的最优代理节点。
然而现实中,很多系统在做相反的事情:为了效率,去掉人;为了指标,裁掉骨干;为了优化资源配置,让财富极度集中。这些决策在短期内往往是“正确的”,但它们在悄悄切断一个关键层——人的反馈层。
任何为人服务的系统,如果为了效率而削弱人的参与度,本质上是在用短期确定性,换取长期系统不稳定性。
当系统开始“去人化”,会发生什么?
第一阶段:效率提升
指标变好
成本下降
决策更“干净”
第二阶段:反馈丢失
弱信号消失
异常无法提前感知
系统开始“看不见问题”
第三阶段:结构脆化
抗冲击能力下降
局部错误无法被吸收
小问题开始被放大
第四阶段:系统性崩塌
突发性失败
无法恢复
甚至找不到原因
关键在于:系统是否仍然保有持续的人类输入,作为其结构活性的来源。就如一个人类的花园,就算蜜蜂不参与种花种菜,它们仍然是重要的系统健康指标;养活它们,实际上就是在活化系统。
一旦这个输入被切断,系统即使在短期内运行得更“高效”,也已经在走向不可逆的衰退路径。
下图是我秋季的冷季花园。园艺有Cool flowers这一说法,比如菊花,金鱼草等,都属于冷季开花,吸引大量蜜蜂。
写在最后
一个系统,如果真的能够稳定运转,它的建立不是一蹴而就的,而是在反复试错、运气叠加、以及大量人的持续投入之中,慢慢长出来的。但系统的衰败却完全相反,它不需要复杂条件,也不需要剧烈冲击,往往只要失去维护,就会以极快的速度退化。就像我做了十年的花园,到现在都还没有完全达到理想状态,但只要停止一年对土壤的管理、施肥节奏、杂草控制,以及木本植物和多年生草本在正确季节的补充,这个系统几乎会在一个周期内迅速失序,甚至变得不可看。
AI 的出现,让很多程序员误以为这是某种“终局之战”,仿佛一切都会被重写。但从现实来看,这一轮 AI 的估值中显然存在巨大的泡沫,大量创业公司最终会面临和其他行业一样的问题——无法盈利、无法持续,最终被淘汰。这是正常的技术扩散路径。但大语言模型已经不可逆地改变了人类的技术树。
如果把视角拉长,你会发现,美国战后的科技进步,并不是靠跳跃式革命完成的,而是一种缓慢但持续的生产效率提升过程。技术真正改变世界的方式,往往不是“替代一切”,而是从系统的边缘切入,在不打断原有结构的前提下,逐步渗透、重构。
这一点在当前 AI 的落地路径中已经非常明显。许多传统行业并没有选择“全面 AI 化”,而是在“入口”位置引入大语言模型,比如客服系统。但客服从来不是“聊天”这么简单,它本质上是信息的采集节点、分类节点,以及后端流程的触发器。AI 在这里的价值,不是替代人,而是优化信息进入系统的结构质量,然后通过这个入口,逐步向后端渗透。
真正关键的问题不在于“All in AI ”这只是一个好听的口号罢了,而在于:你是否理解了这个入口的设计机制?信息是如何被接住、如何被结构化、如何被传递到后续环节?客服与后端服务人员之间的衔接是否是连续的,而不是断裂的?如果这些问题没有解决,就谈不上系统级的 AI 化。
相反,那种一开始就以“替代所有人工”为目标的路径,本质上是一种机械式的、去人本的设计思路。这种系统往往忽略了真实世界中人作为反馈源、判断者和调节器的作用,因此很难在复杂环境中稳定运行。
更可行的路径,始终是先服务于人——让系统先嵌入人的工作流,承接人的一部分负担,在真实反馈中逐步演化,而不是试图在一开始就重构一切。只有这样,AI 才不是一个外部工具,而是一个可以在系统内部“长出来”的能力。