Ah, the MVP. Those three letters, Minimum Viable Product, are uttered with reverence (and sometimes exasperation) in tech circles globally. It’s the battle cry of the lean startup, the mantra of ‘build, measure, learn,’ and the antidote to feature bloat. But when you apply this sacred tech tenet to education, things get… complicated.

In the world of online learning, an MVP isn't just about shipping the smallest thing that works. It's about shipping the smallest thing that teaches. And those two are not always the same thing.

The MVP in theory vs. the EdTech reality

In the classic tech playbook, an MVP is the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort (Ries, 2011). It's the skateboard before the car. But in education, a single feature—a wheel for the skateboard—is often useless. You cannot test a learning journey with a single flashcard.

The ‘viable’ in an EdTech MVP isn’t just about technical functionality; it must also encompass instructional viability. The MVP needs to prove it can facilitate the desired learning outcome.

Shipping something technically sound but instructionally useless is not an MVP; it’s an expensive, and potentially harmful, demo.

Where minimum meets meaningful

Striking this delicate balance between lean methodology and learning science involves anchoring your MVP to a handful of foundational principles.

Identify the core learning transformation

Instead of starting with features (‘build an interactive quiz engine’), start with the change you want to effect. ‘Year 2 pupils can successfully solve single-digit addition problems’ is a learning transformation. This becomes your MVP's north star.

Define your target learner (micro-segment)

Trying to build for ‘all students’ is a fool's errand. Narrow your focus ruthlessly. Perhaps your MVP is for ‘first-year university students struggling with citing sources in essays.’ This specific context allows you to tailor content and functionality precisely.

Prioritise pedagogy over polish

An MVP doesn't need to be beautiful, but it must be instructionally sound. Focus on the core learning loop: clear explanation, scaffolded practice, timely feedback, and meaningful assessment. A clunky interface that truly helps a learner master a concept is a better MVP than a slick app that leaves them confused (Clark & Mayer, 2016).
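
To make that loop concrete, here is a minimal sketch of what ‘the smallest thing that teaches’ might look like for the single-digit addition example above: a plain command-line drill in which all four stages exist, however humbly. The Item structure, the wording, and the two practice items are illustrative assumptions, not a prescribed design.

```python
# A minimal, complete learning loop for one concept:
# explanation -> scaffolded practice -> feedback -> assessment.
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str   # practice question
    answer: str   # expected response
    hint: str     # scaffolding shown after a wrong attempt

EXPLANATION = "To add single-digit numbers, count on from the larger number."
ITEMS = [
    Item("3 + 4 = ?", "7", "Start at 4 and count on 3 more: 5, 6, 7."),
    Item("5 + 2 = ?", "7", "Start at 5 and count on 2 more: 6, 7."),
]

def run_loop() -> float:
    print(EXPLANATION)  # clear explanation
    correct = 0
    for item in ITEMS:
        if input(item.prompt + " ").strip() == item.answer:
            print("Correct!")  # timely feedback
            correct += 1
        else:
            # scaffolded practice: hint, then one retry
            print("Not quite. Hint:", item.hint)
            if input(item.prompt + " ").strip() == item.answer:
                print("Got it on the second try.")
                correct += 1
    return correct / len(ITEMS)  # meaningful assessment: mastery ratio

if __name__ == "__main__":
    print(f"Score: {run_loop():.0%}")
```

Clunky, yes, but every stage of the loop is present and measurable, which is more than many polished apps can claim.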

Be the Wizard of Oz: The human-powered MVP

Before you spend months automating a feature, ask if you can deliver its value manually. This is the ‘Wizard of Oz’ technique: the user interacts with a seemingly automated system, but behind the curtain, a human is pulling the levers. Want to test an AI-powered essay feedback tool? Your MVP could be a simple text box where students submit essays and you, the human wizard, email them personalised feedback within 24 hours. This allows you to validate the need for the feedback and discover what users actually want from it before you invest a fortune in automating the wrong thing.
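
A sketch of how thin the technology in such an MVP can be, assuming a single shared queue file (the file name, ticket format, and response-time promise are all illustrative):

```python
# Wizard of Oz essay feedback: the student sees an automated-looking
# service; behind the curtain, submissions queue up for a human reviewer.
import json, time, uuid
from pathlib import Path

QUEUE = Path("essay_queue.jsonl")  # the 'behind the curtain' inbox

def submit_essay(student_email: str, essay_text: str) -> str:
    """What the student-facing form calls."""
    ticket = uuid.uuid4().hex[:8]
    record = {"ticket": ticket, "email": student_email,
              "essay": essay_text, "submitted_at": time.time()}
    with QUEUE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return f"Submission {ticket} received. Feedback within 24 hours."

def pending_reviews() -> list[dict]:
    """What the human wizard checks each morning."""
    if not QUEUE.exists():
        return []
    return [json.loads(line) for line in QUEUE.read_text().splitlines()]

print(submit_essay("student@example.ac.uk", "In this essay I will argue..."))
for job in pending_reviews():
    print("To review:", job["ticket"], "from", job["email"])
```

If students keep submitting and your manual feedback proves valuable, you have earned the right to automate; if not, you have saved yourself the cost of building the wrong thing.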

Measure learning, not just usage

Success isn't about daily active users; it's about whether they learned. Design small-scale assessments, observe learners, and track progress against your core learning transformation. This validated learning about impact is your true MVP success metric.
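
One well-established way to do this is a normalised learning gain: the share of the possible improvement a learner actually achieved between a pre- and a post-assessment. A minimal sketch, assuming scores out of 100 and an invented cohort:

```python
# Measure learning, not usage: summarise pre/post assessment scores
# as a normalised gain (1.0 = the learner gained everything they could).

def normalised_gain(pre: float, post: float, max_score: float = 100) -> float:
    """(post - pre) / (max_score - pre)."""
    if pre >= max_score:
        return 0.0  # nothing left to gain; report these learners separately
    return (post - pre) / (max_score - pre)

# learner -> (pre-test score, post-test score); invented data
cohort = {"learner_a": (40, 70), "learner_b": (60, 90), "learner_c": (80, 85)}

gains = {name: normalised_gain(pre, post) for name, (pre, post) in cohort.items()}
for name, g in gains.items():
    print(f"{name}: gain {g:.2f}")
print(f"mean gain: {sum(gains.values()) / len(gains):.2f}")
```

A product can rack up impressive daily active users while its mean gain sits stubbornly at zero; a metric like this makes that failure visible.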

The MVP across EdTech markets

The definition of ‘viable’ changes dramatically depending on where you are trying to sell your product.

Higher Education

An MVP for a university must often be an integration, not an island. Viability here means plugging into the university’s existing systems (above all its LMS), respecting established workflows, and demonstrating a clear time-saving or pedagogical benefit to a sceptical, time-poor academic.

Primary & Secondary Education (K-12)

Here, the MVP has multiple gatekeepers. It must be simple, safe, and curriculum-aligned from day one. Viability is about classroom practicality. Can a teacher set it up in under five minutes? Is it demonstrably safe for children? An MVP that fails on these fronts is dead on arrival.

Direct-to-Consumer Apps

For a language-learning or coding app, viability is about delivering a tangible ‘aha!’ moment of learning in the very first session. The MVP must be engaging and demonstrate its value proposition immediately before the user's attention wanders.

Enterprise Platforms & LMS

For a platform product, the MVP is less about a single learning loop and more about a core administrative workflow. Can the system reliably host a course, enrol a user, and generate a basic report? The MVP's user is often an administrator, and ‘viable’ means stable and secure.
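
A minimal sketch of that workflow, with in-memory dictionaries standing in for a real database (the course, user IDs, and report shape are illustrative assumptions):

```python
# The core administrative loop: host a course, enrol a user, report.
from collections import defaultdict

courses: dict[str, str] = {}                        # course_id -> title
enrolments: dict[str, set[str]] = defaultdict(set)  # course_id -> user ids

def host_course(course_id: str, title: str) -> None:
    courses[course_id] = title

def enrol(course_id: str, user_id: str) -> None:
    if course_id not in courses:
        raise KeyError(f"unknown course: {course_id}")
    enrolments[course_id].add(user_id)

def basic_report() -> list[str]:
    return [f"{cid}: '{title}' - {len(enrolments[cid])} enrolled"
            for cid, title in courses.items()]

host_course("chem-101", "Foundations of Chemistry")
enrol("chem-101", "u-042")
enrol("chem-101", "u-043")
print("\n".join(basic_report()))
```

If this loop cannot run reliably and securely, nothing built on top of it matters to the administrator who is your actual buyer.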

The allure of AI: A distraction?

In today's climate, the temptation to sprinkle some AI onto your product is immense. ‘AI-powered’ has become the magic incantation for securing funding and generating buzz. But for an MVP, this is usually a terrible idea.

AI is an incredibly powerful tool for scaling a solution, not for discovering a problem.

The golden rule: don't automate what you don't yet understand.

If you can't perform the task well manually, you have no hope of training an algorithm to do it for you.

When to avoid AI in your MVP:

  • Problem discovery: You are still trying to understand the core learning problem and what an effective solution looks like. Use the human-powered ‘Wizard of Oz’ method first. Your manual efforts will generate invaluable insights that can later inform an AI model.
  • ‘Black Box’ learning: Your MVP relies on a complex algorithm that you can't explain. In education, educators and parents rightly demand to know how a system works. A pedagogically opaque MVP is an ethical and commercial non-starter.
  • Data scarcity: Most AI models require vast amounts of high-quality data. At the MVP stage, you likely have none.
  • High-stakes users (e.g., children): When your end-users are minors, you are operating in an ethical minefield. The unpredictable nature of a generative AI MVP, with its potential for biased, inappropriate, or developmentally harmful outputs, is fundamentally incompatible with the duty of care required in primary and secondary education. The risks of exposing a child to a flawed algorithm, coupled with stringent data privacy laws, make this a non-starter. This is not an area for ‘failing fast’.

When to consider a sliver of AI in your MVP:

  • Simple, proven tasks: If your product involves a well-understood, data-rich problem (e.g., categorising multiple-choice answers, basic speech-to-text), using an existing, reliable AI service might be faster than building a rules-based engine from scratch.
  • The AI is the core value proposition: If your entire product idea is ‘an AI tutor that gives Socratic feedback,’ then your MVP must, by definition, include a sliver of this. But even then, the goal is to test the concept on a tiny scale, perhaps with a pre-scripted dialogue that only feels like a dynamic AI, as in the sketch below.
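
Such a scripted sliver can be as humble as canned Socratic prompts selected by keyword rules; the topic, keywords, and prompts here are all illustrative assumptions:

```python
# A pre-scripted dialogue that only feels like a dynamic AI tutor:
# canned Socratic follow-ups chosen by simple keyword matching.
RULES = [
    ({"because", "since"},       "What evidence supports that reason?"),
    ({"always", "never"},        "Can you think of a case where that does not hold?"),
    ({"don't know", "not sure"}, "What do you already know that might be related?"),
]
FALLBACK = "Interesting. What makes you say that?"

def socratic_reply(student_text: str) -> str:
    text = student_text.lower()
    for keywords, prompt in RULES:
        if any(k in text for k in keywords):
            return prompt
    return FALLBACK

print(socratic_reply("Plants always grow towards light."))
# -> "Can you think of a case where that does not hold?"
```

If learners engage with these rigid prompts, a genuinely generative tutor may be worth the investment; if they do not, no model would have saved the idea.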

The ethical tightrope: experimenting on humans (especially small ones)

The tech industry's mantra of ‘move fast and break things’ is profoundly irresponsible when applied to primary or secondary education. When your users are children, you are not simply testing a product; you are intervening in their development.

An EdTech MVP carries a heavy ethical burden. The principle of ‘Do No Harm’ must be paramount.

A buggy, frustrating product isn't just an inconvenience; it can damage a child's confidence. There is no such thing as a ‘Minimum Viable Safeguarding Policy.’ Data privacy, especially concerning minors, must be ironclad from the very first line of code. Therefore, the MVP approach in schools must be one of careful piloting and co-design with educators, replacing ‘fail fast’ with ‘learn safely.’

Your EdTech MVP checklist

Before you start wireframing, can you confidently answer ‘yes’ to these questions?

✅ 1. The core problem

  • Have I defined a single, specific learning transformation (e.g., ‘From being unable to X, to being able to Y’)?
  • Have I defined a micro-segment of learners this is for (e.g., ‘Year 9 students revising for GCSE Chemistry’)?
  • Have I validated, through interviews, that this is a real and urgent problem for them?

✅ 2. The solution & viability

  • Is my proposed solution the absolute simplest way to facilitate that one learning transformation?
  • Does it include a complete, pedagogically sound learning loop (instruction, practice, feedback, assessment)?
  • Have I considered a human-powered ‘Wizard of Oz’ version to test the value before building tech?

✅ 3. The market context

  • Does my MVP respect the specific workflows and constraints of my target market (e.g., integrates with an LMS for universities, is simple for K-12 teachers)?
  • Is the value proposition clear and immediate for all key stakeholders (learner, teacher, buyer)?

✅ 4. The measurement

  • Have I defined how I will measure learning effectiveness, not just user engagement?
  • Do I have a plan to collect qualitative feedback directly from learners and educators?

✅ 5. The ethics & safety

  • Is the product demonstrably safe, private, and secure for my target users?
  • If testing with minors, do I have a clear plan for preventing harm and gaining informed consent from all necessary parties (school, parents)?
  • Have I prioritised the learner's well-being over the speed of my experiment?

If you can tick these boxes, you’re not just building a Minimum Viable Product. You’re building a Minimum Viable Promise: a small but credible demonstration that you can genuinely help someone learn. And that is a foundation worth building on.

Sources

  • Clark, R. C., & Mayer, R. E. (2016). e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning (4th ed.). Wiley.
  • Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.