Build, measure, learn
Develop your MVP: Test your solution under real-world conditions

This section is for you if …

  • you’re starting – or already working with your team on – an impact-driven startup.
  • you can clearly define your target group, the problem you’re solving, your solution, and your intended impact.
  • you’ve built and validated a working prototype.
  • you’ve seen early evidence of outcomes from your solution.
  • you’ve identified a market and your solution has the potential to scale.
  • you’ve developed an initial business model.

In this section, you’ll learn how to …

  • develop a minimum viable product (MVP).
  • expand your impact measurement to include outcome-level changes.

MVP vs. prototype: What’s the difference?

Prototypes and minimum viable products (MVPs) are both key tools in product development – but they serve very different purposes. Here’s how they compare:

  • Purpose and functionality: A prototype is mainly used to visualize the design and test early ideas. It has limited features and is often the first tangible version of a new concept. An MVP, on the other hand, includes the core features needed to function and is used to test usability, design, and overall functionality.
  • Field of application and development phase: Prototypes are typically tested internally or with a small group of target users. MVPs are tested among a broader market segment and represent the first version of the product that delivers real value to real users.
  • Maturity and readiness: Prototypes are tested internally – they’re not meant for public use. The ideas they test are still evolving and thus prone to error. MVPs, on the other hand, while minimal, are ready to go live. They’re functional and designed to be used by customers in real settings.
  • Objective: Prototypes are aimed at spotting flaws and testing how a product looks or behaves across devices. MVPs are built to generate early feedback from the market and learn what it takes to grow into a full-fledged solution.

How to build your MVP and define impact indicators

Before you begin, decide which features your MVP really needs – and which can wait:

1. Prioritize key features for your MVP

Start by making a list of all potential features. Then group and rank them using the MoSCoW method.

The MoSCoW method

The MoSCoW method is a framework for prioritizing what matters when building your MVP. It helps you evaluate features and requirements by sorting them into four clear categories, based on how essential they are to your product’s success:

  • Must have: These are the nonnegotiable features. Without them, your MVP doesn’t work or fails to meet your users’ core needs.
  • Should have: These features add value, but they aren’t immediately critical. Include them if time and resources allow.
  • Could have: Nice-to-haves. They aren’t essential and can be added later if time and resources allow.
  • Won’t have (this time): Features you’re consciously choosing to leave out of the first version.

Start by focusing only on the “must haves.” Develop the MVP with those features that will have the biggest impact on whether your solution succeeds. For more help with prioritizing, check out the impact-effort matrix in the chapter “Solution design for impact start-ups: A reality check for your solution ideas.”
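To make the method concrete, here is a minimal sketch of a MoSCoW-style feature triage in Python. The feature names and their categories are hypothetical placeholders, not a recommended feature set; the point is simply that tagging each backlog item with one of the four categories lets you filter out the “must haves” for your first build.

```python
# Minimal MoSCoW triage sketch (feature names are hypothetical placeholders).
from enum import Enum

class MoSCoW(Enum):
    MUST = "must have"
    SHOULD = "should have"
    COULD = "could have"
    WONT = "won't have (this time)"

# Each backlog item gets exactly one category.
backlog = [
    ("User registration", MoSCoW.MUST),
    ("Core matching workflow", MoSCoW.MUST),
    ("Email notifications", MoSCoW.SHOULD),
    ("Dark mode", MoSCoW.COULD),
    ("Native mobile app", MoSCoW.WONT),
]

# The MVP scope is simply everything tagged "must have".
mvp_scope = [name for name, category in backlog if category is MoSCoW.MUST]
print("Build first:", mvp_scope)
```

The same triage works just as well in a spreadsheet with a category column; the script is only one lightweight way to keep the list filterable.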

Plan your resources for building the MVP

From day one, be clear about the resources you’ll need. A solid resource plan helps you avoid both overallocation and underuse – key to delivering a functional, valuable product without burning out your team or overstretching your budget.

1. Identify the resources you need

Break your project into smaller, manageable work packages and estimate the effort required for each one. Then define what resources you’ll need to make it happen. These might include:

  • Personnel: Developers, designers, project managers, workshop facilitators
  • Technology: Tools for development, server infrastructure, software licenses
  • Finances: Budget for development, marketing, or outside services

2. Plan your team capacity

Assign tasks based on each team member’s availability and workload. Build in buffer time to handle unexpected issues. Clear task assignments and early identification of bottlenecks will help your team work more efficiently – and prevent burnout or long-term absences.

3. Estimate your costs

Calculate your total costs by combining personnel, tech expenses, and any outsourced services. A detailed cost breakdown helps you create a realistic budget and manage it effectively. It also comes in handy when pitching to investors – clear numbers increase your chances of securing funding.
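As a rough illustration, the sketch below adds up hypothetical monthly cost positions over the planned development period and applies a contingency buffer. All figures and category names are assumptions for the example only; replace them with your own estimates.

```python
# Rough MVP cost estimate (all figures are hypothetical placeholders).
monthly_costs = {
    "personnel": 12_000,           # developers, designers, project management
    "technology": 800,             # hosting, licenses, development tools
    "outsourced_services": 1_500,  # e.g., legal review, branding
}

months = 4           # assumed MVP development period
contingency = 0.15   # 15% buffer for unexpected issues

subtotal = sum(monthly_costs.values()) * months
total_budget = subtotal * (1 + contingency)

print(f"Subtotal: {subtotal:,.0f} EUR")
print(f"Total incl. {contingency:.0%} buffer: {total_budget:,.0f} EUR")
```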

Define indicators for your impact goals

Now it’s time to choose the right metrics to measure the actual impact of your MVP – up to level 6 on the Impact Ladder.

1. Focus on outputs and outcomes

By measuring your results up to level 6 of the Impact Ladder, you can start to see whether your solution is creating meaningful improvements in people’s lives.

Why measure up to level 6?

Prove real change

You’re not just tracking shifts in awareness or behavior (levels 4 and 5) – you’re capturing actual improvements in the living conditions of your target group (level 6).

Start measuring impact early

Even if your MVP does not yet have the full impact of a mature product, measuring up to level 6 can give you early signals of its potential for broader social impact (level 7).

Validate your concept

You’ll have a better sense of whether your MVP has the potential to drive lasting change over time.

Spot challenges early on

Metrics at level 6 can help you identify obstacles before they grow into bigger issues.

Make better decisions

Data at this level gives you a stronger foundation for deciding how to develop or adjust your MVP going forward.

Build stronger arguments for stakeholders

Clear evidence of real-life improvements (level 6) is compelling for investors, funders, and partners who want to see impact backed by results.

Example indicators for measuring impact up to level 6 of the Impact Ladder:

Step 1 – Activities take place as planned

  • Number of workshops/trainings/events delivered
  • Percentage of project activities implemented as planned
  • Number of staff/volunteers involved
  • Quantity of materials distributed (e.g., flyers, textbooks)
  • Timely delivery of project milestones
  • Number of products developed or manufactured
  • Number of trees planted in a reforestation project

Step 2 – Target groups are reached

  • Number of applications/registrations for a program
  • Reach across various communication channels (e.g., social media followers, newsletter subscribers)
  • Number of customers aware of the product
  • Area of renaturalized land (in hectares)

Step 3 – Target groups accept the offering

  • Participation rates in events/programs
  • Completion rates for courses/programs
  • Purchase or usage rates of your products or services
  • Participant satisfaction rates
  • Repeat participation, purchases, or product usage
  • Tree survival rate after six months in reforestation efforts

Step 4 – Target groups gain awareness or skills

  • Number of participants who have gained new knowledge on a specific topic
  • Shifts in attitudes within the target group
  • Number of participants demonstrating new skills after a workshop or training
  • Increased motivation to change behavior, triggered by internal or external incentives
  • Increase in the number of opportunities for participants to apply their new skills
  • Number of customers who can accurately describe the product’s features after purchase
  • Increase in biodiversity in renaturalized areas

Step 5 – Target group changes behavior

  • Number of people who adopt new behaviors as a result of participation, use, or purchase (e.g., more sustainable consumption habits)
  • Number of participants actively involved in new initiatives or projects
  • Number of new collaborations or partnerships formed as a result of the initiative
  • Return of endangered species to the ecosystem

Step 6 – Target group’s living conditions improve

  • Percentage of the target group with improved social or economic conditions (e.g., access to better jobs or education)
  • Number of individuals whose income or financial stability increased through participation or product use
  • Improvements in health or well-being of the target group
  • Higher customer satisfaction related to the core problem your solution addresses
  • Measurable improvement in air quality (e.g., lower levels of particulate matter or NO₂ in urban areas)
  • Increased carbon capture from reforested areas

2. Set clear goals

Define clear, measurable, realistic, and time-bound goals for your impact indicators by using the SMART framework:

  • Specific: Focused and clearly defined
  • Measurable: Quantifiable
  • Achievable: Realistic given your current resources
  • Relevant: Aligned with your broader objectives
  • Time-bound: Set within a clear timeframe (two to four months)

3. Establish a baseline

Your baseline is the starting point for measuring change. It helps you compare the situation before and after your intervention – such as the number of people who understand healthy eating or have basic stress-resilience skills.

Whenever possible, base your baseline on official statistics rather than self-reported figures. Relying on self-assessment increases the risk of overstating your impact (also known as impact washing).
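The before/after comparison itself can stay very simple, as in the sketch below, which reports the change of one indicator against its baseline in percentage points. Both figures are purely hypothetical.

```python
# Before/after comparison against a baseline (hypothetical figures).
baseline_share = 0.32   # e.g., share of the target group with basic stress-resilience skills before the MVP
follow_up_share = 0.47  # same indicator, measured after the intervention

change_in_points = (follow_up_share - baseline_share) * 100
print(f"Change vs. baseline: {change_in_points:+.1f} percentage points")
```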

4. Set up a measurement system and data collection plan

By defining your objectives, indicators, data sources, and responsibilities, you create a solid foundation for your MVP and ensure that all important aspects are taken into account. A simple Excel spreadsheet can serve as your data collection plan. It should include the following columns:

  • Goal
  • Indicator
  • Base value (baseline)
  • Target value
  • Data source (e.g., internal records, surveys, observations, or external sources)
  • Survey method (e.g., questionnaires, interviews, observations, or document analyses)
  • When/how often is it collected?
  • Who collects the data?
  • Who analyzes the data?
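If you prefer to start from a file rather than a blank spreadsheet, the sketch below writes a plan with exactly these columns to a CSV that Excel can open. The example row is a hypothetical illustration, not a prescribed indicator.

```python
# Write a data collection plan skeleton to CSV (one hypothetical example row).
import csv

columns = [
    "Goal", "Indicator", "Base value (baseline)", "Target value",
    "Data source", "Survey method", "When/how often collected",
    "Who collects the data", "Who analyzes the data",
]

example_row = [
    "Participants build stress-resilience skills",       # goal
    "Share of participants using a learned technique",   # indicator
    "32%", "60%",                                         # baseline and target
    "Participant survey", "Online questionnaire",
    "At the end of each program cycle",
    "Program lead", "Impact manager",
]

with open("data_collection_plan.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerow(example_row)
```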

5. Share your impact data in a clear and compelling way (e.g., through an impact dashboard)

An impact dashboard gives you a quick, easy-to-read overview of your most important KPIs – no need to dig through long reports. It helps you spot trends early, make faster data-driven decisions, and boost transparency with your stakeholders.

You can build your dashboard in something simple like an Excel spreadsheet or use specialized tools – it all depends on your needs and resources.
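As one lightweight option, a few lines of Python can turn your collected measurements into a dashboard-style progress summary per indicator. The indicator names and values below are hypothetical placeholders, and the same calculation works just as well as a spreadsheet formula comparing baseline, current, and target values.

```python
# Dashboard-style summary: progress from baseline toward target per indicator.
# All names and values are hypothetical placeholders.
indicators = [
    # (indicator, baseline, current, target)
    ("Participants using a learned technique (%)", 32, 47, 60),
    ("Program completion rate (%)", 55, 63, 75),
    ("Trees surviving after six months (%)", 70, 82, 90),
]

print(f"{'Indicator':<45} {'Progress to target':>18}")
for name, baseline, current, target in indicators:
    progress = (current - baseline) / (target - baseline)  # 0.0 = at baseline, 1.0 = target reached
    print(f"{name:<45} {progress:>17.0%}")
```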

Next chapter: Reality check

You’re now well-positioned to develop an MVP that not only works but creates real impact.

Once your MVP is ready, we recommend conducting a reality check. In the next chapter, you’ll learn how to collect reliable data on how well your solution is received, how effective it is, and whether it has the potential to scale.