MVPs in the Age of the LLM

Building got cheaper. Learning did not.


A young founder messaged me on Reddit last month, asking for advice. He had vibe-coded a complete SaaS application in a weekend. User authentication, dashboard, data visualisation, Stripe integration, email notifications. The kind of scope that would have occupied a small team for a quarter not long ago. The LLM wrote the code. He provided the prompts and a rough sense of direction. By Sunday evening it was deployed, functional, and visually polished. He was proud of it. He wanted to know how to get his first users.

I asked him a question he had not considered: how did he know anyone wanted this? He had not talked to a single potential customer before building it. He had not tested the core assumption - that small e-commerce sellers needed another analytics dashboard - with anything lighter than a fully built product. He had spent a weekend building instead of spending a day asking. The application worked. It worked well. Nobody wanted it.

His mistake is becoming more common, not less, and I think the reason is that the framework designed to prevent it - the Minimum Viable Product - is being quietly abandoned by founders who believe that LLMs have made it obsolete. If you can build the whole thing in a weekend, why build the minimum? The logic seems sound. It is not. This essay is, in a sense, the longer version of the advice I gave him.

What the MVP Actually Was

The product was the apparatus. The learning was the result.

Eric Ries did not popularise the Minimum Viable Product (MVP) because building was expensive, though it was. He championed it because learning was slow - and building without learning was the most expensive mistake a startup could make. The entire lean startup framework was an argument about epistemology dressed up as product advice: you do not know what your customer wants, you only think you do, and the fastest way to close the gap between what you think and what is true is to build the smallest possible thing that generates real evidence.

The word "minimum" was doing critical work in that formulation. It was not a concession to resource constraints, though it respected them. It was a forcing function for intellectual honesty. Minimum meant: strip away everything that is not directly testing your riskiest assumption. If your riskiest assumption is that small businesses will pay for automated bookkeeping, you do not need a beautiful dashboard. You need a way to automate one bookkeeping task for one business and see if they value it enough to pay. Everything else is decoration, and decoration is where founders hide from the uncomfortable question of whether anyone actually wants what they are building.

The MVP was never about the minimum product. It was about the minimum experiment.

What LLMs Actually Changed

Let me be honest about what shifted, because dismissing LLMs' impact on build speed would be intellectually dishonest, and the argument does not require it.

The cost of writing working code has dropped by something like an order of magnitude for many categories of software. A functional prototype that required three engineers and six weeks now requires one person and a weekend. Building variant B alongside variant A - to test which approach customers prefer - is no longer a resource allocation decision. It is trivially cheap. Producing a polished interface, generating realistic test data, writing documentation, handling edge cases that would previously have been deferred to a later sprint: all dramatically faster.

These changes are real and they are significant. The build constraint that shaped the original MVP framework - the fact that every feature cost real time and real money - has been substantially relaxed. If you take the MVP literally, as "build the minimum because building is expensive," then yes, the premise has weakened. Building is no longer expensive enough to justify the austerity that the original framework demanded.

But that was never the real reason for the minimum. The build cost was the visible constraint. The invisible constraint was, and remains, the learning cycle.

What LLMs Did Not Change

The time it takes to understand whether a customer actually values what you built did not change. You can ship faster. You cannot learn faster - at least, not in the ways that matter.

Understanding customer behaviour takes observation time. You ship a feature. Some people use it. Some do not. The ones who use it use it in ways you did not anticipate. The ones who do not use it ignore it for reasons you must investigate. This cycle - ship, observe, interpret, decide - has a clock speed determined by human behaviour, not by engineering throughput. You cannot make customers evaluate your product faster by building it faster. You cannot compress the time between someone encountering a new feature and forming an opinion about whether it is valuable.

The behaviour change tax did not decrease. Users still need to learn new interfaces, develop new habits, integrate new steps into existing workflows. Making the product more polished or more complete does not reduce this tax. If anything, a larger, more feature-rich product increases the cognitive load on the user and makes the behaviour change tax higher.

The cost of user attention did not decrease. Your customers are not less busy because you can build faster. Their willingness to try something new, to invest time in evaluating whether your product is better than their current approach, is not a function of how quickly you built it.

And the discipline of knowing what question you are trying to answer did not get easier. If anything, it got harder. When you could only afford to build one thing, the constraint forced you to choose. What is the riskiest assumption? What is the one thing we need to learn first? These were difficult questions, but at least the constraint made them unavoidable. Remove the constraint and the questions remain just as important - but now they are avoidable. And avoidable questions, in startups, tend to go unasked.

The Maximum Viable Product Trap

Here is what I see happening in practice. A founder has an idea. They spend a weekend building it with an LLM. By Monday they have a working application with a dozen features, a polished interface, and the warm satisfaction of visible progress. They ship it. They wait for users to arrive.

Some users arrive. They poke around. Engagement is scattered - a little usage across many features, deep usage of nothing. The founder cannot tell which features are driving the sparse engagement and which are noise. They cannot isolate the signal because there is too much product. Every feature is a variable, and they have introduced twelve variables simultaneously. The experiment is unreadable.

This is the Maximum Viable Product trap, and it is the natural consequence of removing the build constraint without replacing it with something else. When features are cheap, the default is to build everything. "Why not? It only took an afternoon." But each feature, no matter how cheaply built, has costs that have nothing to do with engineering time. It has cognitive cost for the user, who must discover and evaluate it. It has analytical cost for the team, who must determine whether it is contributing to or detracting from the product's value. It has coherence cost, because every feature interacts with every other feature and the number of those interactions grows combinatorially with each feature added.

The MVP's constraint was doing useful work that most founders never noticed. By forcing teams to build less, it was forcing them to think more. To prioritise ruthlessly. To articulate their assumptions explicitly. To design experiments that could actually produce clean signal. Remove the constraint and you do not get a better product. You get a noisier experiment.

I think of it like a restaurant menu. When ingredients are expensive, a restaurant must choose carefully what to offer. That constraint produces a focused menu, which produces a coherent dining experience, which is - not coincidentally - what the best restaurants in the world deliberately maintain even when they can afford to offer more. If ingredients suddenly became free, the mediocre restaurant would expand to a hundred items and execute none of them well. The great restaurant would keep the menu short, because the constraint was never about the cost of ingredients. It was about the coherence of the experience and the kitchen's ability to execute at a consistently high standard.

The New Minimum

Richer vs bigger: A richer experiment has more fidelity - the prototype feels real enough that user reactions are representative. A bigger experiment has more variables - more features, more surface area - which makes results harder to interpret. Use LLMs to increase fidelity, not scope.

The MVP is not dead. But the M needs to be reinterpreted.

In the original framework, "minimum" constrained the product: build less, because building is expensive. In the LLM era, "minimum" should constrain the learning cycle: what is the least you need to ship to answer your most important question? The answer might be more than it used to be - a more polished prototype, a more complete experience - because the cost of polish and completeness has dropped. That is a genuine benefit of LLMs. You can now run richer experiments for less.

But "richer" does not mean "bigger." The distinction matters enormously, and it is exactly the distinction that founders lose when they treat LLM-powered build speed as a reason to ship everything at once.

The founders who are using LLMs well are not building bigger MVPs. They are running more experiments. Instead of one rough prototype per quarter, they are testing three polished prototypes per month. Each one is focused. Each one tests a specific assumption. Each one generates clean signal because it was designed to answer one question, not twelve. The speed advantage of LLMs is being applied to the cycle, not to the scope. More iterations, not more features.

The Constraint Moved

Building used to be the bottleneck, so the MVP framework constrained building. Building is no longer the bottleneck. Learning is the bottleneck, and it always was - building was just so slow that it obscured the deeper constraint. Now that the build constraint has been relaxed, the learning constraint is fully visible, and it turns out to be the harder problem. Not harder technically. Harder in terms of discipline.

The natural response to cheap building is to build more, and building more is precisely what makes learning harder. More features, more variables, more noise, less signal. The founders who will win in this era are the ones who internalise a counterintuitive principle: the cheaper it is to build, the more disciplined you must be about what you build. The constraint that used to come from the budget now has to come from the team. From their clarity about what they are trying to learn. From their willingness to ship something focused when they could ship something sprawling. From their taste.

The M in MVP moved. It used to constrain the product. Now it constrains the people.
