Why VC Math Encourages Fragile AI Startups
I recently reread The Power Law, and it clicked in a very uncomfortable way with what I see every day working on AI products.
Disclaimer: This publication and its authors are not licensed investment professionals. Nothing posted on this blog should be construed as investment advice. Do your own research.
For context, I’m a software engineer by background, but I’ve also built startups, run a boutique agency for years, and spend most of my days designing and shipping software products, nowadays often with AI components. I’m not watching this market from the sidelines. I’m inside the systems, the invoices, the latency issues, the model limits, and the cost surprises. That proximity changes how you interpret the stories we tell about AI startups.
Especially the venture-backed ones.
Power law returns shape everything downstream
The core argument of The Power Law is not just that VC returns are uneven. It’s that they are violently uneven. One or two companies can return an entire fund, while most others go to zero or limp to an acquihire.
Once you internalize that, a lot of otherwise confusing behavior starts to make sense.
VCs are not optimizing for average outcomes. They are optimizing for outliers. That means they are structurally indifferent to fragility as long as there is a credible path to extreme upside.
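If the shape of that math feels abstract, a toy simulation makes it concrete. The sketch below uses made-up parameters, not real fund data, and assumes exit multiples follow a Pareto distribution, which is one common way to model power-law outcomes:

```python
import random

# Toy illustration, not real fund data: simulate many hypothetical
# 20-company funds whose exit multiples follow a heavy-tailed Pareto
# distribution, then measure how much of each fund's total return
# the single best company accounts for.

random.seed(42)

FUND_SIZE = 20        # assumed number of portfolio companies per fund
SIMULATIONS = 10_000  # number of hypothetical funds to simulate
ALPHA = 1.2           # assumed Pareto tail exponent; lower = heavier tail

top_share_sum = 0.0
for _ in range(SIMULATIONS):
    # paretovariate(alpha) returns values >= 1, so subtracting 1 makes
    # most investments return roughly nothing while a few blow up.
    multiples = [random.paretovariate(ALPHA) - 1 for _ in range(FUND_SIZE)]
    total = sum(multiples)
    if total > 0:
        top_share_sum += max(multiples) / total

print(f"Average share of fund returns from the single best company: "
      f"{top_share_sum / SIMULATIONS:.0%}")
```

The exact number depends on the tail exponent you pick, but the pattern is stable: one or two names carry the fund, and most of the other bets barely register in the total.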
This matters because founders adapt to incentives. Consciously or unconsciously, you build what gets funded. And in AI right now, what gets funded is speed, surface-level magic, and the appearance of leverage.
Why “just build on top of the best model” is rational
From a purely technical perspective, building an AI wrapper can feel irresponsible. You don’t control the model. You don’t control pricing. You don’t control rate limits. You don’t control whether a new API release makes your core feature obsolete overnight.
But for a founder trying to raise venture capital, it's often the most rational move available.
Wrappers let you move fast. They let you demo something impressive with a small team, and they let you show usage graphs before you've solved any hard infrastructure problems. But most importantly, they let you tell a story that fits power law thinking.
“If this category explodes, we could be the category leader.”
That sentence matters more than unit economics in early-stage VC. And AI wrappers are excellent vehicles for that narrative.
I don’t say this as an outsider criticizing naive founders. I’ve been on both sides of this. I know how tempting it is to defer hard problems when momentum is rewarded more than resilience.
The illusion of leverage in AI products
AI makes this even trickier because it creates the illusion of leverage everywhere.
A small team can now ship something that, a few years ago, would have required a large organization. That's real. But the leverage often belongs to the model provider, not the startup.
When working on AI products for my company and clients, I see the same pattern over and over. Early costs look trivial. Latency feels acceptable. The system works well enough. Then usage or requirements grow, edge cases pile up, prompts get more complex, and suddenly the cost curve shows up.
At that point, if you don’t control the underlying system, your margins are someone else’s variable.
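A back-of-the-envelope sketch makes the point. Every number below is invented for illustration, but the shape is the one I keep seeing: a flat subscription price set by the startup against a per-token cost it doesn't control.

```python
# Back-of-the-envelope unit economics, all numbers invented for
# illustration: a wrapper charges a flat subscription while its
# per-user inference cost is set by the model provider.

SUBSCRIPTION = 20.00  # hypothetical monthly price per user, USD

def monthly_margin(tokens_per_user: int, provider_rate: float) -> float:
    """Gross margin per user when the core capability is rented."""
    inference_cost = tokens_per_user / 1_000 * provider_rate
    return (SUBSCRIPTION - inference_cost) / SUBSCRIPTION

# Same product at light usage, heavy usage, and after a provider
# price change (rates in USD per 1k tokens, also hypothetical):
for tokens, rate in [(200_000, 0.01), (1_500_000, 0.01), (1_500_000, 0.02)]:
    print(f"{tokens:>9,} tokens at ${rate:.2f}/1k -> "
          f"margin {monthly_margin(tokens, rate):>4.0%}")
```

At light usage this looks like software margins. At heavy usage, or after a provider price change, the same product is underwater without the startup changing a single line of code.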
From a VC perspective, that’s fine. If the company breaks, it breaks. The fund only needs one winner. From a founder or operator perspective, it’s terrifying.
Fragility is not accidental, it’s selected for
One thing The Power Law makes very clear is that venture capital doesn’t accidentally produce fragile companies. It selects for them.
Speed beats correctness. Growth beats efficiency. Story beats control.
If you try to build something slower, more robust, and more boring, you often get punished in fundraising. Your charts look worse and your pitch sounds less exciting. Your upside feels capped, even if your downside is far better managed.
Again, I’ve felt this tension personally. Running an agency forces you to think in terms of cash flow, trust, and systems that don’t fall apart when something upstream changes. Those instincts actively work against the kind of risk-taking VC wants to see.
AI just amplifies that mismatch.
The dependency nobody prices correctly
The quiet problem with many AI startups is not that they will fail. Failure is normal. It’s that their dependency structure caps their long-term value even if they succeed.
If your core capability is rented, your differentiation has a ceiling. You can have great UX, strong distribution, and excellent branding, but if the underlying intelligence is commoditized and controlled by someone else, your negotiating power erodes over time, and you can be easily replaced by competitors or by evolving AI agents.
I see a lot of founders assume they’ll “figure it out later.” Replace the model. Negotiate pricing. Build proprietary data moats. Maybe all of that happens. But maybe it doesn’t.
From the VC side, that uncertainty is acceptable. From the founder side, it’s existential.
How this changed how I invest and build
As an investor or customer of AI products, I've become much more skeptical of AI companies that look impressive quickly. I don't ask whether the demo works. I ask whether the team behind the product can adapt swiftly. That question matters most when I meet AI companies without a technical cofounder, and even more so with vibe-coded AI products built by marketing and sales people who obviously don't know what's happening under the hood.
As a builder, I’m increasingly biased toward projects that look boring in the early days. Things with real constraints. Things that don’t explode on Twitter. Things where the hardest work happens before the story gets interesting.
Power law math encourages founders to chase optionality over durability. That’s not a moral failure. It’s a structural one.
But if you’re an operator, an employee, or an investor who actually cares about long-term outcomes, you need to see that clearly. Most AI startups are not designed to last. They’re designed to be asymmetric bets.
Once you see that, the current AI boom looks less like a gold rush and more like a very efficient machine for producing impressive demos, fragile businesses, and a small number of enormous winners.
And the winners probably won’t look like wrappers at all. They’ll look slow, capital-intensive, and unsexy for a very long time. Which is exactly why most people will miss them.