I built a working agent system in a weekend: tool use, memory, structured outputs, error handling. The kind of thing that would have taken me a solid month last year. I felt great about it for approximately forty-eight hours.
Then I realized nobody was going to use it.
The building was the easy part, but getting anyone to care was a problem that hadn't changed at all. If anything, it had gotten worse, because while I was building my agent system over the weekend, so were hundreds of other people. The supply of software exploded. The attention available to discover it didn't.
That weekend started a thread of thinking I haven't been able to let go of: what creates durable value when the cost of building approaches zero? The answer is uncomfortable, especially for someone who has spent his career optimizing for technical quality.
Distribution is the only durable moat. The rest of this post is my attempt to show why.
WhatsApp had about 55 employees, roughly 32 of them engineers, serving 450 million users when Facebook acquired them for $19 billion in 2014.
What was technically hard about WhatsApp in 2014? The message queue architecture. The encryption flow. The connection management for hundreds of millions of simultaneous users. Real engineering problems requiring deep systems expertise.
Here's what's changed: large chunks of that technical work could now be scaffolded by AI in days, not months. Not because the problems are trivial, but because the solutions are well-understood. They've moved from "requires deep expertise to solve" to "requires good judgment to implement correctly."
So what was the $19 billion actually for?
It wasn't the code. It was the 450 million users. And each of those users wasn't just a number, they were a node in a network that got more valuable with every addition. When WhatsApp added a user, every existing user's experience improved. You can't scaffold a network effect. You can't prompt your way into hundreds of millions of people's habits.
This points to something structural about why distribution survives even as technical moats erode. Technical moats are about artifacts. Distribution moats are about people. And people resist automation in ways that code doesn't.
Trust accumulates slowly and can't be shortcut. You trust a tool because you've used it for months and it hasn't broken. You trust a community because you've interacted with its members and found them helpful. There's no AI prompt for "make people trust me."
Attention is finite and winner-take-most. Once you've captured attention, displacement costs are much higher than initial acquisition, because switching has friction and attention has limits. When developers have built integrations, workflows, and mental models around a tool, those are human investments, not technical ones. Organizational inertia is a moat that no AI can erode.
Each of these mechanisms operates at the level of people, not code. As software supply increases, the competition for people's attention and trust intensifies.
I used to believe in technical moats. If you built something technically superior, that was your advantage. But why did technical superiority ever function as a moat? Three pillars.
Building was expensive. A complex system might take a team of skilled engineers months or years. That investment itself was a barrier.
Good engineering was scarce. Not many people could build complex systems well. If you had a team who understood distributed systems or real-time processing, that team itself was hard to replicate.
Complexity was a barrier. Even if you open-sourced your code, a competitor would need to comprehend the design decisions, the trade-offs, the accumulated knowledge embedded in the architecture.
AI is eroding all three. Building is becoming cheap (my weekend agent system would have required a small team and a month of sprints two years ago). Good-enough engineering is becoming accessible (you don't need deep message queue expertise to build a messaging system anymore, just the judgment to evaluate AI-generated implementations). And complexity can be managed by AI, which doesn't get overwhelmed or lose track of component interactions.
The moats are draining, not instantly or completely, but the trend is clear and accelerating.
This pattern isn't new. AI is accelerating it, but the history of open-source software already demonstrated the principle.
MongoDB had well-documented consistency issues. Experienced database engineers winced at the query model. Data integrity problems were real, not theoretical. By most technical measures, it was an inferior database.
It won massive adoption anyway. "Just put JSON in and get JSON out" was an easy mental model with near-zero onboarding friction. That ease of entry created a flywheel: more users meant more tutorials, more Stack Overflow answers, more libraries, more job listings. The technically flawed product built a distribution advantage that superior alternatives took years to overcome.
The product that's easiest to adopt and builds the strongest community wins the first round. That round doesn't always determine the outcome (PostgreSQL's comeback proves that), but it creates advantages that compound for years. Building well is necessary but nowhere near sufficient.
Here's where this gets personal.
I've been tracking my own time allocation on side projects over the past several months. The pattern is stark: I spend roughly 80% of my time on building and 20% on everything else: documentation, community engagement, putting the work in front of people, understanding what potential users actually need.
If distribution is the only moat, I've got the allocation exactly backwards. The value-creating split would be 80% on reaching people, 20% on building. And not just reaching people in the marketing sense. Talking to potential users. Understanding their problems. Building in public. Engaging with communities where the people who'd benefit are already gathering.
The 80/20 inversion feels deeply wrong to me. The engineering is the part I love. The distribution work feels like a different skill entirely, one I haven't developed and don't have strong instincts for.
I wrote about a similar discomfort in my essay on engineering judgment and saying no to AI suggestions. There, the uncomfortable realization was that my value wasn't in writing code anymore, it was in knowing what code should exist. Here, the discomfort is one level up: even knowing what code should exist might not matter much if nobody encounters it.
Part of me wants to believe that quality speaks for itself. But the evidence I keep encountering suggests that's a comforting story engineers tell themselves, not a reliable description of how the world works.
Maybe the resolution is to expand what "building" means to include building the pathways by which people discover and adopt what you've created. Building the community. Building the trust. Building the habit. But I'm aware that might be rationalization, a way to make the uncomfortable conclusion feel more comfortable by redefining terms.
I've been building the case that distribution is the only moat. But I want to pressure-test that.
Some technical work might still be a moat. Training large ML models requires data, compute, and expertise that AI coding assistants don't collapse. Physical systems, robotics, hardware-software integration: the cost of replication remains high. The "technical moats are dead" argument applies primarily to software within well-understood patterns. I'm curious about where the boundary sits between "well-understood enough for AI to scaffold" and "genuinely requires human insight." My instinct says the boundary is moving fast and in one direction, but instinct isn't evidence.
AI-mediated discovery could invert everything. This is the counter-argument I take most seriously, and I want to walk through it carefully.
Imagine AI assistants that evaluate tools, make purchasing decisions, and recommend solutions on behalf of developers. Not some distant future: this is starting to happen with coding assistants that suggest libraries. If an AI evaluates 50 competing tools and recommends the one with the best technical metrics (lowest latency, highest reliability, cleanest API design), then technical quality IS the moat again. Distribution to humans doesn't matter if the decision-maker is an algorithm optimizing for measurable quality.
In that world, my argument collapses. The engineer who builds the best system wins, because the discovery mechanism bypasses human attention limits and goes straight to technical evaluation. No community needed, no onboarding optimization. Just an AI saying "this one is objectively better for your use case."
I'm not fully convinced this kills the argument. AI assistants are trained on data that reflects existing distribution: popular tools appear in more training examples, more documentation, more discussions. The AI's recommendations would inherit the distribution advantages already in its training data. And the AI assistants themselves are distributed through... distribution channels.
But I want to be honest: if AI-mediated tool discovery becomes the dominant way developers find software, the balance shifts back toward technical quality. This is the scenario most likely to prove me wrong.
"Distribution" might be too broad a category. I've been lumping together network effects, trust, attention, community, and ecosystem lock-in under one word. These are different mechanisms with different durability. Not every product benefits from network effects. For solo-use developer tools, the distribution moat is weaker, and maybe technical quality does win in the long run. I might be hiding important distinctions by treating these as one thing.
Here is the reframe I find most promising, and the one I want to spend real time on.
What if distribution isn't just marketing? What if it's a systems engineering problem that happens to involve human behavior instead of server behavior?
Consider onboarding friction. Say you measure that most people who try your tool drop off during initial setup. That's a conversion funnel, sure. But it's also a systems engineering problem. What's the critical path from "heard about this" to "got value from it"? Where's the bottleneck? What's the minimum viable first experience? Time-to-first-value is a latency metric. Drop-off rates are error rates. I can profile the onboarding flow the way I'd profile a slow API endpoint: instrument each step, identify the slowest stages, and optimize ruthlessly. A/B test different setup flows the way I'd A/B test different caching strategies. This isn't marketing intuition. It's measurement and iteration.
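To make the profiling analogy concrete, here's a minimal sketch of treating an onboarding funnel like a latency pipeline. The step names and counts are entirely made up for illustration; the point is that finding the bottleneck step is the same operation as finding the slowest stage in a request trace.

```python
# Toy onboarding funnel: (step, number of people who reached it).
# All names and numbers are invented for illustration.
funnel = [
    ("heard_about_it", 1000),
    ("visited_docs", 620),
    ("installed", 310),
    ("ran_first_command", 140),
    ("got_value", 95),
]

def step_conversions(funnel):
    """Return (transition, conversion_rate) for each adjacent step pair."""
    return [
        (f"{prev} -> {nxt}", n / p)
        for (prev, p), (nxt, n) in zip(funnel, funnel[1:])
    ]

rates = step_conversions(funnel)
bottleneck = min(rates, key=lambda r: r[1])  # the "slowest stage"
for name, rate in rates:
    print(f"{name}: {rate:.0%}")
print("bottleneck:", bottleneck[0])
```

With these invented numbers, the worst transition is installed to first command, which is exactly the kind of finding that would redirect effort from "more marketing at the top" to "fix the first-run experience."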
Or take adoption cascades. How does information about a tool spread through developer communities? This is literally graph theory applied to people. There are seed nodes (influential early adopters who write blog posts and give conference talks). There's a transmission rate (how often a user recommends the tool to a colleague). There's graph structure (tight community clusters where information spreads fast versus sparse networks where it dissipates). You could model the propagation dynamics the way you'd model message passing in a distributed system. Which communities are the highest-impact seed points? What's the R-value of a recommendation within a given community? Where are the structural holes where information fails to cross?
Community health works the same way. Response times on forums, ratio of questions answered to questions asked, contributor retention curves, time between a bug report and a fix. These are the same metrics you'd use for monitoring a distributed system's health. You could build dashboards, set alerting thresholds, track trends. A community where 80% of questions get answered within 24 hours is a healthy system. One where 30% go unanswered is showing signs of failure, and the fix might be structural (better routing, more moderators, clearer contribution guidelines), not motivational.
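One of those metrics, the 24-hour answer rate with an alerting threshold, can be computed exactly like a service-level objective. The timestamps and the 80% threshold below are invented for illustration.

```python
from datetime import datetime, timedelta

def answered_within(questions, window=timedelta(hours=24)):
    """Fraction of questions answered inside the window.
    Each question is (asked_at, answered_at_or_None)."""
    hits = sum(
        1 for asked, answered in questions
        if answered is not None and answered - asked <= window
    )
    return hits / len(questions)

t0 = datetime(2025, 1, 1)
questions = [
    (t0, t0 + timedelta(hours=2)),   # answered quickly
    (t0, t0 + timedelta(hours=30)),  # answered, but outside the window
    (t0, None),                      # never answered
    (t0, t0 + timedelta(hours=12)),
    (t0, t0 + timedelta(hours=6)),
]

rate = answered_within(questions)
ALERT_THRESHOLD = 0.8  # alert if under 80% answered within 24h
print(f"24h answer rate: {rate:.0%}", "ALERT" if rate < ALERT_THRESHOLD else "ok")
```

Feed this from a forum's API on a schedule and you have a health dashboard with alerting, no marketing intuition required.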
The key insight, and the hypothesis I'm genuinely excited to test: engineers who approach distribution as a systems problem might have an advantage over pure marketers. Not because engineering is inherently superior, but because it brings specific habits that are useful here. Quantitative rigor. Instrumentation reflexes. The instinct to measure before optimizing. Systems-level thinking that looks for feedback loops and failure modes rather than one-off tactics.
I don't know if this actually works. It's possible that human behavior is too noisy, too contextual, too resistant to the kind of clean measurement that makes engineering optimization powerful. It's possible that the two domains are more different than I want them to be, and that I'm pattern-matching because it's comforting, not because it's true. Marketing practitioners might hear this and think I'm reinventing their field badly.
But I'd love to find out. I'm running the experiment myself, writing about what I'm building, engaging with communities, treating distribution as a design problem with measurable inputs and outputs. Whether the engineering mindset transfers is genuinely uncertain. But it feels more tractable than "learn marketing," and the curiosity is real.
If distribution is the only durable moat, we should see specific, measurable consequences. I want to make predictions concrete enough that I can be proven wrong.
Prediction 1: The AI coding assistant with the largest market share by 2027 will be the one with the most active developer community (measured by GitHub discussion activity, third-party tutorial volume, and Stack Overflow answer rates), not the one with the highest scores on coding benchmarks like HumanEval or SWE-bench.
Prediction 2: By 2028, the top 3 AI coding tools by market share will have been the first 3 to reach 10,000 active community contributors, defined as people who have written tutorials, answered questions, or contributed plugins. If the top 3 are instead the top 3 on coding benchmarks, technical moats are more durable than I think.
Prediction 3: If AI-mediated tool discovery becomes the primary channel (more than 50% of developer tool adoption decisions influenced by AI recommendations rather than human word-of-mouth) by 2028, this entire essay is wrong. I'd assign this maybe a 20% probability, but it's the scenario that most cleanly falsifies my argument.
I'm an engineer who loves building, confronting evidence that building isn't enough. The 80/20 split I've been running, 80% building and 20% everything else, is almost certainly backwards. And the question isn't whether this is true. The question is what I'm going to do about it.