Build, Measure, Learn
MVP on the test bench: testing the impact of an impact startup

You’ve come to the right place if …

  • you want to found an impact startup or are already in the middle of it with your team.
  • you can clearly identify your target group as well as the problem, solution and impact.
  • you have developed a validated prototype.
  • you have received initial evidence of the impact of your offer at outcome level.
  • you have identified a market for your offering and developed an initial business model.
  • you have developed a Minimum Viable Product (MVP).
  • you can ensure the impact measurement of your offer at outcome level.

This chapter helps you to …

  • collect feedback on your MVP directly from the target group.
  • recognize problems in the development of the offering.
  • collect data on output and outcome indicators when measuring impact.
  • check how scalable your solution is.

Test accessibility and acceptance of the MVP

Is your MVP really accessible and convincing for your target group? Test it with your pilot customers and test users to identify weaknesses and gain valuable insights for improvements. The following four steps will help you do this.

1. Define clear test objectives

First of all, define clear goals for your tests. Do you want to test a website, an app, a workshop or a consultation? You can use both qualitative tests, to collect feedback on the user experience and identify weaknesses, and quantitative tests, to collect measurable data such as dwell time or task success rates. This data can also serve as benchmarks for your project.

2. Recruit participants

You then recruit suitable participants. Ideally, you should draw on early adopters who represent your target group — see the section “From prototype to impact: building a sustainable business model for an impact startup”. For meaningful results, it is advisable to select five to ten test subjects per target group segment.

Target group vs. customers

In the Lean Impact Journey, we speak of the target group when dealing with the impact model and the product, and of customers when it comes to the business model.

How you use these two terms for your startup depends on what your solution consists of. In this playbook, the target group includes both the people who use the solution and those who benefit from it.

Depending on the solution, the target group can combine both roles. If it does not in your case, you should test your MVP with test subjects from both groups.

3. Create test scenarios and tasks

Then create realistic, relevant test scenarios that show how your MVP will be used, and formulate appropriate tasks clearly, precisely and measurably. Also consider whether the tests should take place in a lab, remotely or in the participants’ everyday lives, and make sure that all the necessary tools and materials are prepared.

4. Conduct the test

During the tests, ask participants to speak their thoughts out loud while using the MVP. Observe closely how they interact with it and document your findings so that you can make targeted improvements.

User tests

User testing originally comes from the software sector, but it can be transferred very well to non-technology-based innovations. Here are some adaptations and suggestions on how you can apply the method:

  • Speak of participants or the target group instead of users.
  • Consider elements or aspects of your solution instead of functions.
  • Use physical prototypes, role plays or simulations instead of digital interfaces.
  • Formulate realistic scenarios in which your solution can be used.

How quickly does your MVP create added value?

To check the efficiency of your solution, you can calculate the Time to Value (TTV). The TTV shows how long it takes for your target group to derive the first tangible added value from your solution. It helps you to measure the efficiency of your MVP and identify weaknesses.

1. Define the added value, start point and end point

Clarify what added value means for your target group. This can be, for example, the achievement of a goal, the use of a core function or a gain in knowledge.

Determine when the time measurement begins — for example, at registration, login, purchase or participation — and define when the added value becomes noticeable, for example when a problem is solved or your offer is used successfully.

2. Measure the time and calculate the average

Capture the time span between the start and end point with tracking tools. Google Analytics or heat maps are suitable for technical solutions, for example, while Excel sheets or participant tracking are helpful for non-technical offers.
Example: A participant registers at 9 am and carries out the first successful action at 11 am.

TTV = 2 hours.

Add up the TTVs of all participants and divide the sum by their number. Example: A needs 2 hours, B 4 hours and C 3 hours.

Aver­age TTV = (2 + 4 + 3) / 3 = 3 hours.
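The TTV arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not part of the playbook: the timestamps mirror the example participants A, B and C, and the helper name `ttv_hours` is made up.

```python
from datetime import datetime

# Invented timestamps matching the example: A needs 2 h, B 4 h, C 3 h.
events = {
    "A": ("09:00", "11:00"),  # registers at 9 am, first successful action at 11 am
    "B": ("09:00", "13:00"),
    "C": ("10:00", "13:00"),
}

def ttv_hours(start: str, end: str) -> float:
    """Time span between start point and end point, in hours."""
    fmt = "%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

ttvs = [ttv_hours(start, end) for start, end in events.values()]
average_ttv = sum(ttvs) / len(ttvs)
print(average_ttv)  # (2 + 4 + 3) / 3 = 3.0 hours
```

For a non-technical offer, the same calculation works just as well in an Excel sheet; the point is only to record start and end consistently for every participant.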

3. Segment the data

Examine differences between groups or scenarios in order to identify optimization potential. Example: How do the TTVs of new and existing customers differ?
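The segmentation step can be illustrated with a short sketch that groups TTV measurements by customer type. The segment labels and numbers below are invented for illustration; in practice they would come from your tracking tool.

```python
# Invented measurements: (segment, TTV in hours) per participant.
measurements = [
    ("new", 4.0),
    ("new", 5.0),
    ("existing", 2.0),
    ("existing", 3.0),
]

# Group the TTVs by segment.
segments: dict[str, list[float]] = {}
for segment, ttv in measurements:
    segments.setdefault(segment, []).append(ttv)

# Average TTV per segment reveals where optimization potential lies.
averages = {segment: sum(values) / len(values) for segment, values in segments.items()}
print(averages)  # {'new': 4.5, 'existing': 2.5}
```

Here new customers would take noticeably longer to reach the first added value than existing ones, which points to onboarding as a place to improve.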

Optimization with A/B tests

An effective method for improving your MVP is the use of A/B tests. You create two different versions of your solution and test which one is better received by the target group. To obtain meaningful results, split your test participants into two groups, present each group with one of the versions and collect specific data. For example, you can record the participants’ preferences, understanding or expectations. You can use these findings to make targeted adjustments to your MVP.

Here are some exam­ples of ques­tions:

  • Preference: Which variant do you like better and why?
  • Understanding: What message does variant A/B convey?
  • Usage: How likely is it that you will carry out [action]?
  • Emotion: What feelings does variant A/B trigger in you?
  • Improvements: What would you change to make variant A/B better?
  • Context: In which situation would you prefer variant A/B?
  • Comparison: What differences do you notice between the variants?
  • Open questions: Which variant better suits your needs and why?
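A simple way to evaluate the preference question is to tally the answers per variant. The sketch below uses invented responses; `Counter` from Python's standard library does the counting.

```python
from collections import Counter

# Invented answers to "Which variant do you like better?" from six participants.
responses = ["A", "B", "A", "A", "B", "A"]

tally = Counter(responses)
preferred = tally.most_common(1)[0][0]
print(dict(tally), preferred)  # {'A': 4, 'B': 2} A
```

With only five to ten test subjects per segment, such a tally is a directional signal rather than a statistically significant result, so combine it with the qualitative "why" answers.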

Carry out pilot measurements of your impact

How does your MVP show real impact? You can use pilot measurements to check the impact on levels 1 to 6 of the impact ladder. Use targeted questions and tracking tools to make successes visible and further develop the MVP.

1. Implement tracking tools

Choose suitable software or methods for data collection (e.g. surveys, databases, analysis tools). It is important that data collection is integrated into your processes.

Possible recording methods are:

  • Quantitative data collection: e.g. counts of specific services and products, measurements of activities and target groups reached, structured observations
  • Qualitative methods: e.g. semi-standardized or open interviews, observations, document analysis, before-and-after surveys

Ideally, you have already drawn up a data collection plan in the previous chapter “Developing an MVP: Testing solutions for impact startups under real conditions”. If not, go back to that chapter first.

2. Check the MVP against the impact ladder

To check whether your MVP achieves success on levels 1 to 6 of the impact ladder, you can, for example, ask the following questions in the test:

Stage 1 — Activities take place as planned
  • Were you able to use all the planned elements of the MVP?
  • Were there any technical problems or obstacles during use?

Stage 2 — Target groups are reached
  • How did you hear about our solution?
  • Do you feel part of the target group?

Stage 3 — Target groups accept offers
  • How satisfied are you with the solution overall?
  • Would you continue to use or re-use the solution?

Stage 4 — Target groups change awareness or skills
  • What new things have you learned by using our solution?
  • Has our solution changed your view of [relevant topic]?
  • Do you have more opportunities or motivation to apply what you have learned after using our solution?

Stage 5 — Target groups change their actions
  • Have you changed anything in your everyday life or work as a result of our solution?
  • What concrete steps have you taken after using our solution?

Stage 6 — Target groups’ life situation changes
  • How has your personal or professional situation improved as a result of using our solution?
  • What specific improvements in your life can you attribute to our solution?

Next chapter: Financing strategy

You have systematically tested your MVP, obtained feedback and reviewed its impact in order to optimize it. Before you learn from the results and decide on adjustments to the MVP and the impact logic, we recommend developing a financing strategy.

We will tackle this in the next chapter. At the end, you can subject it, together with your MVP, to a reality check.