In 1998, Netscape's FTP client was 5,000 lines of code. It handled 60 different types of FTP servers. A rewrite team looked at it and saw bloat.

They threw it away and started fresh.

What they threw away wasn't code. It was three years of accumulated discovery: the specific quirks of FTP servers that only reveal themselves when real users hit real servers. Every line was a failure mode that had been found and handled. None of that context survived. When Netscape 6.0 shipped three years later, basic behaviours the old browser had done without thinking were gone. Joel Spolsky wrote the definitive post-mortem in 2000: "When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

That was before AI could write code.

Two kinds of knowledge in every codebase

Every software system builds two kinds of knowledge.

The first is systems knowledge: the code, the architecture, the features, the integrations. This used to be hard to replicate. Building something equivalent required a team of engineers and months of work. That difficulty is what made software feel defensible.

AI has collapsed this. A working prototype that used to take three months now takes an afternoon. A well-resourced competitor can match your feature set in weeks. This is real, it is significant, and it is irreversible.

The second is production knowledge: the specific understanding that only comes from running your system with real users under real constraints. Every incident, every customer constraint that reshapes a design, every edge case discovered the hard way adds to it. A competitor cannot buy this. They cannot generate it. They have to earn it by running their own system through the same gauntlet.

AI can help you analyse production knowledge once it exists. It has not changed how it is earned.

The moat that opens when AI enters
[Figure: barrier for a competitor trying to match your system, from launch through year 3 to year 6+, rated from easy to hard. The difficulty of replicating systems knowledge climbs as the system grows complex, then collapses to "easy to replicate" once AI enters. The difficulty of replicating production knowledge keeps rising: no shortcut exists yet. The widening gap between the two curves is your moat.]

Early on, a competitor can match your system without much effort. Both kinds of knowledge are shallow. As the system grows, both become harder to replicate. Systems knowledge grows with complexity. Production knowledge grows with experience a competitor hasn't had.

Then AI enters. Systems knowledge becomes free to replicate. Production knowledge keeps rising. A competitor who starts today can build what you built in weeks. They cannot shortcut the years of production knowledge behind it. That gap is the moat.

What the second line is made of

In June 2000, PayPal was losing $12 million a month to fraud. They didn't design their way out of it. The attack patterns didn't exist before the attackers invented them. Max Levchin built the Igor algorithm from live production data, named after a specific Russian fraudster who had taunted the team publicly. The algorithm learned from actual attacks as they happened. By 2001, PayPal's fraud rate was 0.37% of payment volume, better than every competitor in the industry. No company entering the payments market after PayPal could buy that knowledge. They would have had to lose the same money first.

Convoy raised $920 million and built genuinely impressive technology. In October 2023, it shut down, unable to find a buyer. What its algorithm didn't have: the knowledge that freight brokers build over decades of actual cycles. Backhaul economics that determine whether a load is actually profitable. Pricing structures that account for enterprise customers paying on 90-120 day terms. The calibration that comes from trading through a freight recession. C.H. Robinson had been through prior downturns. Convoy's model had only ever seen growth. When spot rates collapsed 40% in 2022, incumbents held. Convoy didn't.

We at LatentForce had a customer running an image analysis pipeline at serious scale: 100 million images, 500 concurrent users all connecting from a single physical location. The PoC took a day to build. What took months: discovering that serving images at full precision choked the network at that user density, that only one specific compression algorithm preserved enough fidelity for the analysis to produce valid results, that ten particular AWS data transfer optimizations were the difference between a viable unit cost and a money-losing one. None of that was in any documentation. All of it was learned by running the system against the real constraints of that deployment.

What production knowledge is made of
Four signal types. None exist before you ship.
Failure modes: bugs, edge cases, and attack patterns that only surface when real users hit the system at scale. (PayPal's fraud.)

Operational constraints: performance, cost, and scaling limits invisible in testing but unavoidable under real load. (The image pipeline.)

Domain cycles: pricing, market behaviour, and customer dynamics that only emerge over real time and real cycles. (Freight recessions.)

Decision context: the why behind every architecture choice: which production failure created which handler, which constraint shaped which design. (Not encoded in code.)

This is what makes production knowledge unassailable. Not clever engineering or good architecture. The specific signals that only come from running a real system against real constraints, and that a competitor cannot acquire without running the same gauntlet themselves. None of it exists before you build. All of it gets harder to replicate the longer you run.

Why AI can't generate this

AI coding tools are trained on what's publicly available: open source repositories, documentation, technical blogs, Stack Overflow. That corpus is enormous and it makes models genuinely impressive on general software problems.

Your production knowledge is not in that corpus. It is private by definition: your logs, your incident postmortems, your customer-specific constraints, the Slack thread from the night something failed in a way no one had anticipated. The compression algorithm that worked for your images. The edge case in your payments flow that only surfaces for a specific currency and tax jurisdiction combination.

This is not a temporary limitation that better models will close. Production knowledge is encoded in your code, but incompletely. The code shows what the system does. It does not show why each decision was made, which production failure created which handler, or what constraint shaped which architecture choice. A competitor who clones your repo has the what. They still have to earn the why by running their own system. That process cannot be skipped.
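To make the what-versus-why gap concrete, here is a hypothetical sketch (the function, constants, and comments are invented for illustration, not taken from any real codebase). A competitor cloning this file gets working retry logic, but not the incidents that set its numbers:

```python
# Hypothetical excerpt of the kind a competitor might clone. The *what*
# is fully visible; the *why* behind each constant lives in postmortems
# and Slack threads, not in the repository.

import time

MAX_RETRIES = 7        # why 7 and not 3? a real outage set this; the code doesn't say
BACKOFF_BASE_S = 0.4   # why 0.4s? tuned against a partner's rate limiter, undocumented


def fetch_with_retries(fetch, url):
    """Call fetch(url), retrying with exponential backoff on connection errors."""
    for attempt in range(MAX_RETRIES):
        try:
            return fetch(url)
        except ConnectionError:
            # Backing off here, rather than failing fast, was a deliberate
            # choice made after a production incident. The handler survives;
            # the reasoning that created it does not.
            time.sleep(BACKOFF_BASE_S * 2 ** attempt)
    raise ConnectionError(f"gave up on {url} after {MAX_RETRIES} attempts")
```

The code runs identically for anyone who copies it. What cannot be copied is knowing which of these decisions are load-bearing and which are safe to change.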

The compounding moat

The difficulty of replicating your production knowledge keeps growing as long as your system is running. Every incident, every customer constraint, every edge case adds to the gap a competitor would have to close.

A competitor who starts today, even with AI and your codebase in front of them, starts at zero on production knowledge. They have your architecture. They do not have your history. They will hit the same walls, fail in the same ways, and discover the same non-obvious solutions.

Building a system that does what yours does now takes weeks. Earning the production knowledge behind it still takes years. That gap grows wider every day you are running and they are not.

The gap that's actually being lost

AI makes you faster at shipping, which means you reach production sooner, which means you could accumulate production knowledge faster. In theory, AI should widen your moat.

But most teams are not capturing it. Production knowledge is produced but not preserved. It lives in the incident postmortem that nobody reads six months later, in the commit message that says "fix edge case," in the head of the engineer who was on-call that night and has since moved on.

The moat is being built. It is not being kept.


This is the problem we're working on at LatentForce. Building software is getting cheaper. The production knowledge behind it is the barrier that compounds, and right now, most of it is evaporating.