Minimum Value or Maximum Viability?

Asher Bond
6 min read · Aug 27, 2024


Just trying to get by or swinging for the fences?

As a matter of semantics, there seems to be a trend where the cool kids insist the V in MVP now stands for Valuable rather than Viable. I get the idea that it helps to remind folks that value is life for a product, but this is already implied: without minimum value, the product can’t be viable.

Some other cool kids started saying MVE instead of MVP, meaning Minimum Viable/Valuable Experience rather than Product. But experience and product are pretty much the same thing. When I was at PayPal, we had a phrase, “Branded Experiences,” which referred to the business units that were acquired: Venmo, Braintree, Xoom, Honey, Hyperwallet (you get the idea). These experiences totally revolve directly around products, bruh. Call them experiences if you want. I guess it means the product plus its consumption?

I’m gonna tell you why I changed the M to Maximum and left the other two letters alone. And I’m gonna use autonomous software development as the use case for the MVP since it’s the elephant in the room that everyone’s trying to build right now.

Superhuman AGI through autonomous code planning, generation, execution, and validation, is the elephant in the room everyone’s trying to build right now.

Some of you might have read a book called The Lean Startup by Eric Ries. I haven’t read it, but that doesn’t mean you shouldn’t. I’ve been listening to Eric’s podcast and it seems he’s repenting from short-term thinking and advocating for creating long-term value and doing the right things for the right reasons. This is great.

But I’ve been saying since The Lean Startup became popular that, as a software designer, I’m concerned with living my one life to the fullest and swinging for the fences of maximum viability as a long-term vision.

Minimum viability is just that initial commit.

I always fought tooth and nail with people who said “build stuff that doesn’t scale,” as if YC said it. I’m pretty sure that’s not what was said. YC said to do stuff that doesn’t scale. You’d be surprised how much stuff doesn’t scale automatically when developers are in a hurry, so I never say this. But sometimes you have to do something in an ad-hoc way; this is stuff that shouldn’t be prematurely optimized. You need to run through a wall and get the stuff done that needs doing in the near future. But you need to build stuff that scales, now more than ever. I’ve said since day one at Elastic Provisioner: build stuff that scales, even if you have to do stuff that doesn’t scale along the way. Do the right things for the right reasons. If you’re constantly doing stuff that doesn’t scale, it’s likely you’re building stuff that doesn’t scale, and building something that doesn’t scale is doing the wrong thing for the wrong reason. Haste makes waste, in other words.

What is Minimum Viability?

Minimum Viability is getting by with as little as possible, from a product-survival perspective. What are the shortest, fewest, and smallest steps I can take to get a product to survive? That depends on the market. Is your product a consumer product? Then you’d better have some understanding of consumer behavior and know where that viability threshold is. But consumer behavior is hard to predict, so it’s hard to speculate on where the threshold of viability is, especially before product-market fit is found. Also, you’ll be taking shortcuts at every opportunity. See the problem? And getting to minimum viability still requires significantly more polishing than proving a technical concept does. But I think it’s helpful to know what minimum viability is conceptually, even if it isn’t a milestone or goal.

Now, thanks to autonomous code generation, software development has been significantly democratized. I’ve been saying since the beginning of my engineering career that programming is not engineering. The first engineers built and operated catapults, not code. Startup engineers are also building catapults, or rockets, in a product-launch sense. Now it’s a lot more obvious that programmers aren’t always engineers. We have chatbots that arguably aren’t engineers, yet can write code snippets. Sure, they forget context, and yes, Google Gemini and Anthropic Claude provide frontier models with big context windows. But no, a big context window doesn’t instantly solve the context-management problem around software planning, even though long context does unlock some capabilities and makes conversational forgetfulness a bit more human-like.

We have startups and incumbents both trying to build autonomous software engineering agents capable of planning, code generation, execution, and execution validation. This is a step-wise approach to building the foundational blocks for superhuman AGI. I’d even venture to say that superhuman AGI appears to be a proven concept, even though the viability of the product is controversial. The idea is that you plan software using artificial intelligence. You can even ask ChatGPT to write you a plan for developing software.

Writing Code. It’s so easy.

Once you have a plan, you can feed it into a code-generation LLM. Even ChatGPT can efficiently generate code using the libraries from the time it was pre-trained. Your mileage may vary, and your velocity may vary, depending on the drift between the model’s pre-training and subsequent code library updates. It’s easy enough to retrieve library updates and augment code generation without training a whole new domain-specific model or frontier model, as long as you’re pulling the latest updates into your retrieval-augmented generation (RAG) pipeline. And RAG is but one approach.
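Here’s a toy version of that RAG idea: keep a store of fresh library notes, pull the ones most relevant to the request, and prepend them to the generation prompt. The doc store, its snippets, and the keyword-overlap ranking are all invented for illustration; a real pipeline would use embeddings and a live index.

```python
# Toy retrieval-augmented generation for code: fetch the freshest doc
# snippets matching the request and prepend them to the prompt, so the
# model sees library changes it never saw during pre-training.
# DOC_STORE contents are made up for illustration.

DOC_STORE = [
    {"lib": "requests", "text": "requests 2.32: stricter cert handling in get()"},
    {"lib": "pandas",   "text": "pandas 2.2: use DataFrame.map instead of applymap"},
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank snippets by naive keyword overlap with the query.
    terms = set(query.lower().split())
    scored = sorted(
        DOC_STORE,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return [d["text"] for d in scored[:k]]

def build_prompt(request: str) -> str:
    context = "\n".join(retrieve(request))
    return f"Latest library notes:\n{context}\n\nTask: {request}"

print(build_prompt("pandas DataFrame.map example"))
```

Swap the keyword overlap for vector similarity and the dict for a real index, and you have the skeleton of the drift-closing pipeline described above.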

Since writing code is so easy, it’s no longer much of a technical barrier to software development. Low-code user interfaces went to market fast because founders saw the opportunity: land those customers now, and impress them later with the rising tide of code generation and autonomous planning that lifts all low-code apps as alternatives to implementing software designs and shipping running software. Low code is minimally viable. Superhuman AGI is maximally viable.

Maximum Viability means that you’ve packed as much meat (or beyond meat) as possible into that sandwich because you’re ready to live life to the fullest. If we’re talking about streamlining the short path from idea to running software, we’re talking about superhuman AGI replacing low code. But low-code companies have an interesting opportunity, if they can pull it off, to benefit from the general convergence of superhuman AGI more than anyone, since they’re already in the business of democratizing software development and being paid to do it. Whether or not low-code companies can develop any semblance of AGI themselves, if they play their cards right they can use it to make money with their existing customers. But it really depends on how well they execute software design. You can brand yourself as a software design platform for 10 years, but that doesn’t mean you can design software better than superhuman AGI.

What makes good design? Incorporating User Feedback.

You might think this involves talking to people and getting to know the requirements. That’s a friendly approach, but not the only one. There are quantitative approaches that autonomous software designers can use to elicit feedback from user bases. The concept of reinforcement learning from human feedback (RLHF) can be applied to the software design feedback loop. In my view this is the right approach to guard-railing a safe and ethical AGI, since the user is the ultimate authority and informs us how to best elevate the frontiers of technology, software, and its design. In software, when the user loses control, everything goes off the rails.
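To make the quantitative version concrete, here’s a toy preference loop standing in for RLHF applied to design: show users pairs of design variants, tally which one they prefer, and promote the variant with the best win rate. The variant names and votes are invented; real RLHF trains a reward model from such preferences rather than just counting them.

```python
# Toy preference-based feedback loop for design variants.
# Users express pairwise preferences; the loop promotes the winner.

from collections import defaultdict

class DesignFeedbackLoop:
    def __init__(self, variants):
        self.wins = defaultdict(int)
        self.shown = defaultdict(int)
        self.variants = list(variants)

    def record_preference(self, winner: str, loser: str) -> None:
        # One user comparison: winner was preferred over loser.
        self.wins[winner] += 1
        self.shown[winner] += 1
        self.shown[loser] += 1

    def best_variant(self) -> str:
        # Promote the variant with the highest observed win rate.
        return max(self.variants,
                   key=lambda v: self.wins[v] / max(self.shown[v], 1))

loop = DesignFeedbackLoop(["onboarding-A", "onboarding-B"])
loop.record_preference("onboarding-B", "onboarding-A")
loop.record_preference("onboarding-B", "onboarding-A")
loop.record_preference("onboarding-A", "onboarding-B")
print(loop.best_variant())  # onboarding-B wins 2 of 3 comparisons
```

The user stays in control because the signal comes from what users actually prefer, not from what the designer assumed they would.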
