Ah, the MVP. Those three letters, shorthand for Minimum Viable Product, are uttered with reverence (and sometimes exasperation) in tech circles globally. It’s the battle cry of the lean startup, the mantra of ‘build, measure, learn,’ and the antidote to feature bloat. But when you apply this sacred tech tenet to education, things get… complicated.
In the world of online learning, an MVP isn't just about shipping the smallest thing that works. It's about shipping the smallest thing that teaches. And those two are not always the same thing.
In the classic tech playbook, an MVP is the version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least effort (Ries, 2011). It's the skateboard before the car. But in education, a single feature (a wheel for the skateboard) is often useless. You cannot test a learning journey with a single flashcard.

The ‘viable’ in an EdTech MVP isn’t just about technical functionality; it must also encompass instructional viability. An MVP needs to prove it can facilitate the desired learning outcome.
Shipping something technically sound but instructionally useless is not an MVP; it’s an expensive, and potentially harmful, demo.

Striking this delicate balance between lean methodology and learning science involves anchoring your MVP to a handful of foundational principles.

Instead of starting with features (‘build an interactive quiz engine’), start with the change you want to effect. ‘Year 2 pupils can successfully solve single-digit addition problems’ is a learning transformation. This becomes your MVP's north star.

Trying to build for ‘all students’ is a fool's errand. Narrow your focus ruthlessly. Perhaps your MVP is for ‘first-year university students struggling with citing sources in essays.’ This specific context allows you to tailor content and functionality precisely.

An MVP doesn't need to be beautiful, but it must be instructionally sound. Focus on the core learning loop: clear explanation, scaffolded practice, timely feedback, and meaningful assessment. A clunky interface that truly helps a learner master a concept is a better MVP than a slick app that leaves them confused (Clark & Mayer, 2016).
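For the developers in the room, here is what that loop might look like at its most minimal: a sketch rather than a blueprint, with sums and hints invented purely for illustration.

```python
# A deliberately plain sketch of a core learning loop: explanation,
# scaffolded practice, timely feedback, and a simple assessment.
# The content is illustrative, not a real curriculum.
QUESTIONS = [
    {"prompt": "3 + 4 = ?", "answer": "7", "hint": "Start at 3 and count on four more."},
    {"prompt": "5 + 2 = ?", "answer": "7", "hint": "Start at 5 and count on two more."},
]

def run_learning_loop() -> int:
    print("Explanation: to add two numbers, start at the first and count on.")
    correct = 0
    for q in QUESTIONS:
        attempt = input(q["prompt"] + " ").strip()
        if attempt == q["answer"]:
            print("Correct, well done.")              # timely feedback
            correct += 1
        else:
            print(f"Not quite. Hint: {q['hint']}")    # scaffolded support
    print(f"Assessment: {correct}/{len(QUESTIONS)} correct.")
    return correct

if __name__ == "__main__":
    run_learning_loop()
```

Ugly? Certainly. But every element of the loop is present, which is more than can be said for many polished apps.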

Before you spend months automating a feature, ask if you can deliver its value manually. This is the ‘Wizard of Oz’ technique: the user interacts with a seemingly automated system, but behind the curtain, a human is pulling the levers. Want to test an AI-powered essay feedback tool? Your MVP could be a simple text box where students submit essays and you, the human wizard, email them personalised feedback within 24 hours. This allows you to validate the need for the feedback and discover what users actually want from it before you invest a fortune in automating the wrong thing.
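If you wanted to wire that up, the ‘system’ can be almost embarrassingly thin. Here is a hypothetical sketch using Flask, where the endpoint and field names are assumptions, and the only automation is writing submissions to a file for the human wizard to work through:

```python
# A 'Wizard of Oz' MVP in miniature: the student sees a submission form,
# but nothing downstream is automated. A human reviewer works through
# submissions.jsonl and emails personalised feedback within 24 hours.
import json
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/submit-essay", methods=["POST"])
def submit_essay():
    payload = request.get_json(force=True)
    record = {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "student_email": payload["email"],
        "essay_text": payload["essay"],
    }
    # The entire 'AI': append to a file for the human wizard to read.
    with open("submissions.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return jsonify({"status": "received",
                    "message": "Personalised feedback within 24 hours."})

if __name__ == "__main__":
    app.run(debug=True)
```

If nobody submits an essay, you have learned something vital without writing a single line of machine learning.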

Success isn't about daily active users; it's about whether they learned. Design small-scale assessments, observe learners, and track progress against your core learning transformation. This validated learning about impact is your true MVP success metric.
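Concretely, that might mean comparing pre- and post-assessment scores rather than counting logins. A minimal sketch, using a normalised gain in the spirit of Hake's gain, with invented cohort data:

```python
# A sketch of measuring learning rather than engagement: normalised gain
# from pre- and post-assessment scores. Cohort data is invented.
def normalised_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of available headroom actually learned: (post - pre) / (max - pre)."""
    if pre >= max_score:
        return 0.0  # no headroom left on this assessment
    return (post - pre) / (max_score - pre)

cohort = [(40, 75), (55, 80), (30, 35)]  # (pre, post) score pairs
gains = [normalised_gain(pre, post) for pre, post in cohort]
print(f"Mean normalised gain: {sum(gains) / len(gains):.2f}")
```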
The definition of ‘viable’ changes dramatically depending on where you are trying to sell your product.

An MVP for a university must often be an integration, not an island. Viability here means plugging into the institution's existing infrastructure (typically the VLE or LMS and single sign-on), respecting established workflows, and demonstrating a clear time-saving or pedagogical benefit to a sceptical, time-poor academic.

In the schools market, the MVP has multiple gatekeepers. It must be simple, safe, and curriculum-aligned from day one. Viability is about classroom practicality. Can a teacher set it up in under five minutes? Is it demonstrably safe for children? An MVP that fails on these fronts is dead on arrival.

For a language-learning or coding app, viability is about delivering a tangible ‘aha!’ moment of learning in the very first session. The MVP must be engaging and demonstrate its value proposition immediately before the user's attention wanders.

For a platform product, the MVP is less about a single learning loop and more about a core administrative workflow. Can the system reliably host a course, enrol a user, and generate a basic report? The MVP's user is often an administrator, and ‘viable’ means stable and secure.
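Stripped to that core, the whole data model can be startlingly small. A sketch, with illustrative names:

```python
# A sketch of the platform MVP's administrative core: host a course,
# enrol a user, generate a basic report. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Course:
    course_id: str
    title: str
    enrolled: set[str] = field(default_factory=set)

    def enrol(self, user_id: str) -> None:
        self.enrolled.add(user_id)

def enrolment_report(courses: list[Course]) -> str:
    return "\n".join(
        f"{c.course_id}  {c.title}: {len(c.enrolled)} enrolled" for c in courses
    )

maths = Course("MATH-101", "Foundations of Algebra")
maths.enrol("user-42")
maths.enrol("user-43")
print(enrolment_report([maths]))
```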
In today's climate, the temptation to sprinkle some AI onto your product is immense. ‘AI-powered’ has become the magic incantation for securing funding and generating buzz. But for an MVP, this is usually a terrible idea.

AI is an incredibly powerful tool for scaling a solution, not for discovering a problem.
The golden rule: don't automate what you don't yet understand.
If you can't perform the task well manually, you have no hope of training an algorithm to do it for you.

When to avoid AI in your MVP:
- You haven't yet validated that the problem is real, or who actually has it.
- You can't yet perform the task well manually (the Wizard of Oz test).
- ‘AI-powered’ is there to impress investors rather than to serve learners.

When to consider a sliver of AI in your MVP:
- The manual version already works, and the bottleneck is purely volume.
- The task is narrow, well understood, and you have human-labelled examples from the manual phase.
- Anything the model is unsure about can still be routed to a human, as in the sketch below.
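Here is that ‘sliver’ as a hedged sketch: a tiny scikit-learn triage pipeline trained on the labels your human wizard produced during the manual phase. The sub-task (‘does this essay cite a source?’) and the training data are invented for illustration.

```python
# A hedged sketch of a 'sliver' of AI: after the manual phase has produced
# human-labelled examples, a small pipeline triages one narrow sub-task.
# Training data is invented; the pipeline shape is the point, not accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

essays = [
    "According to Smith (2020, p. 14), the effect is well documented.",
    "I read it somewhere online but I am not sure where.",
    "As Jones (2019) argues, timely feedback improves retention.",
    "Everyone knows this is true, so no source is needed.",
]
cites_a_source = [1, 0, 1, 0]  # labels the human wizard assigned manually

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(essays, cites_a_source)

# Anything the model is unsure about should still go to a human.
print(triage.predict_proba(["The author states this without citing anyone."]))
```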
The tech industry's mantra of ‘move fast and break things’ is profoundly irresponsible when applied to primary or secondary education. When your users are children, you are not simply testing a product; you are intervening in their development.
An EdTech MVP carries a heavy ethical burden. The principle of ‘Do No Harm’ must be paramount.
A buggy, frustrating product isn't just an inconvenience; it can damage a child's confidence. There is no such thing as a ‘Minimum Viable Safeguarding Policy.’ Data privacy, especially concerning minors, must be ironclad from the very first line of code. Therefore, the MVP approach in schools must be one of careful piloting and co-design with educators, replacing ‘fail fast’ with ‘learn safely.’
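To make ‘ironclad’ tangible, here is a minimal privacy-by-design sketch: pupil identifiers are pseudonymised with a keyed hash before any event is logged, so raw personal data never reaches the analytics store. The environment variable name and event shape are assumptions for illustration.

```python
# A minimal sketch of privacy by design: pseudonymise pupil identifiers
# with a keyed hash before anything reaches an analytics store. Key
# management is assumed to live outside the codebase (an env secret).
import hashlib
import hmac
import os

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymise(student_id: str) -> str:
    """Deterministic, non-reversible token standing in for a pupil identifier."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {"pupil": pseudonymise("jane.doe@school.example"),
         "activity": "quiz-3", "score": 8}
print(event)  # no raw name or email ever reaches the analytics store
```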

Before you start wireframing, can you confidently answer ‘yes’ to these questions?
✅ 1. The core problem: Can you name the specific learners you're serving and the learning transformation you want to effect?
✅ 2. The solution & viability: Is your core learning loop (explanation, practice, feedback, assessment) instructionally sound, even if the interface is clunky?
✅ 3. The market context: Do you know what ‘viable’ means in your market, whether that's integration for a university, classroom practicality for a school, or an instant ‘aha!’ moment for a consumer?
✅ 4. The measurement: Can you tell whether people actually learned, not just whether they showed up?
✅ 5. The ethics & safety: Is data privacy ironclad from day one, and have you replaced ‘fail fast’ with ‘learn safely’?
If you can tick these boxes, you’re not just building a Minimum Viable Product. You’re building a Minimum Viable Promise: a small but credible demonstration that you can genuinely help someone learn. And that is a foundation worth building on.