Build, Measure, Learn
Putting your prototype to a practical test: Collecting feedback and measuring impact

You’ve come to the right place if …

  • you want to found an impact startup or are already in the middle of it with your team.
  • you know your target group exactly.
  • you can clearly name the problem, the solution and the impact.
  • you have developed a functional prototype.
  • you have defined a key metric (OMTM) for early impact measurement.

This chapter helps you to …

  • collect valuable feedback on your prototype from the target group.
  • collect initial data on output and outcome indicators.
  • identify potential for improvement.

Tests for landing pages

Campaign tests are a simple and inexpensive way to find out how well your prototype is received by the target group. You can gain valuable insights into which aspects of your prototype arouse interest and encourage engagement.

1. Define clear test objectives

Determine which aspects of your prototype you would like to test (e.g. participants' interests, conversion rate, targeting, interactions). Define measurable key performance indicators (KPIs) such as click-through rate, registrations or inquiries.
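If it helps to make these KPIs concrete, here is a minimal sketch in Python with placeholder numbers showing how click-through rate and conversion rate are calculated from raw campaign figures:

```python
# Minimal sketch with placeholder numbers; replace them with your own campaign data.

def click_through_rate(clicks: int, impressions: int) -> float:
    """Share of ad impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions: int, clicks: int) -> float:
    """Share of clicks that led to a registration or inquiry."""
    return conversions / clicks if clicks else 0.0

impressions, clicks, registrations = 4_000, 120, 18  # placeholder values
print(f"Click-through rate: {click_through_rate(clicks, impressions):.1%}")  # 3.0%
print(f"Conversion rate: {conversion_rate(registrations, clicks):.1%}")      # 15.0%
```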

2. Set up your campaigns

Google Ads

  • Create a search network campaign.
  • Choose relevant keywords that match your prototype.
  • Write meaningful ad texts that clearly communicate your offer.

LinkedIn Ads

  • Use sponsored content or text ads.
  • Define your target group precisely by industry, job title, company size, etc.
  • Create appealing ads with a clear promise.

3. Test different ad variants against each other

Experiment with different target groups, keywords or messages to find out what works best. Set a limited budget at first to minimize risks and gradually increase the budget based on your results.
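To compare variants in a structured way, you could track the cost per result for each one. The following is a minimal Python sketch with invented spend and registration figures:

```python
# Minimal sketch with invented figures: compare ad variants by cost per registration
# and shift budget towards the variant that performs best.
variants = {
    "Keyword set A": {"spend_eur": 50.0, "registrations": 4},
    "Keyword set B": {"spend_eur": 50.0, "registrations": 9},
}

for name, v in variants.items():
    # Guard against variants without any registrations yet.
    cost = v["spend_eur"] / v["registrations"] if v["registrations"] else float("inf")
    print(f"{name}: {cost:.2f} EUR per registration")
# Increase the budget of the variant with the lowest cost per registration first.
```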

Tests for digital and non-digital prototypes: Check the accessibility and acceptance of the prototype

With the help of tests, you can ensure that your solution is intuitive to use and meets the needs of the target group. You can test both digital and non-digital prototypes, e.g. software, an app, a physical product or a service.

1. Define clear goals

Determine exactly what you want to test, e.g. a website, an app, a role-playing game or a clickable mock-up (click dummy). You can carry out the tests on a qualitative or a quantitative level; the qualitative level is more common in this case.

Qualitative usability tests: Here you focus on gaining insights into how people use your prototype. These tests are ideal for discovering problems during use.

Quantitative usability tests: Here you measure the participants' experience with key figures such as task success or time on task. These tests help you to set benchmarks.
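As a rough illustration, here is a minimal Python sketch with made-up session records that computes two common quantitative usability metrics, task success rate and time on task:

```python
# Minimal sketch with made-up data: one record per participant and task.
from statistics import mean

sessions = [
    {"participant": "P1", "task": "sign_up", "completed": True,  "seconds": 95},
    {"participant": "P2", "task": "sign_up", "completed": True,  "seconds": 140},
    {"participant": "P3", "task": "sign_up", "completed": False, "seconds": 300},
]

# Task success rate: share of sessions in which the task was completed.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task: average duration of the successful sessions only.
avg_time = mean(s["seconds"] for s in sessions if s["completed"])

print(f"Task success rate: {success_rate:.0%}")                      # 67%
print(f"Average time on task (successful runs): {avg_time:.0f} s")   # 118 s
```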

2. Determine your test method

You can choose from various test methods. The most common are:

  • Individual interviews: Ask individual participants about their experience with your prototype.
  • Field tests: Test your prototype in a real environment. Observe how the participants interact with your prototype and conduct interviews.
  • Focus groups: Bring together a group of potential users of your solution to discuss your prototype.
  • Expert evaluations: Let experts from your field evaluate the prototype.

3. Recruit participants

Select people who correspond to your target group. Plan for five to eight participants in order to obtain meaningful results.

Target group vs. customers

In the Lean Impact Journey, we speak of the target group when we are working on the impact model and the product, and of customers when it comes to the business model.

How you use these two terms for your startup depends on what your solution consists of. In this playbook, the target group refers on the one hand to the people who use the solution and on the other to those who benefit from it.

Depending on the solution, the target group can combine both. If it does not in your case, you should recruit test participants from both groups.

4. Create a test scenario and tasks

Develop realistic, relevant usage scenarios for your prototype and formulate the tasks clearly, precisely and measurably. Decide whether the test should take place in a lab, remotely or directly in the context of use, and make sure that all required tools work.

5. Conduct the test

During the tests, ask the participants to think aloud while using the prototype. Observe closely how they interact with it and document your findings so that you can make targeted improvements.

User tests

The term “user testing” originally comes from the software sector, but you can adapt it to non-technology-based innovations. Here are some tips:

  • Speak of participants or target groups instead of users.
  • Consider aspects of your solution instead of functions.
  • Use physical prototypes, role-playing games or simulations.
  • Clearly formulate the scenarios in which your solution can be used.
  • Create a realistic environment to observe the participants’ reactions.

A/B tests: Compare different versions of the prototype

In A/B tests, you compare different versions of your prototype in order to create the greatest possible impact. For example, you can test different elements of the user flow, buttons or visual design, but also different versions of consultations, training and support services as well as design and functionalities. This will help you find out which version of your prototype achieves the best results. Present two different versions (version A and version B) at random to different parts of the target group.

1. Define clear goals

Select the aspects of your prototype that you want to test and define metrics and success criteria. It is important to define SMART metrics that will determine the success of your solution. Depending on the product or offer, these can be conversion rates, behavioral changes or qualitative indicators. We explain how the SMART method works in the chapter “How to develop your first prototype and find your key metrics”. For testing, randomly allocate the target group to the two variants (A and B).
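As an illustration, here is a minimal Python sketch with invented data of how a random 50/50 allocation and the subsequent comparison of conversion rates might look; the deterministic hash-based split is just one common option, not a fixed requirement:

```python
# Minimal sketch with invented data: allocate participants to variant A or B
# and compare conversion rates after the test.
import hashlib

def assign_variant(participant_id: str) -> str:
    """Deterministic 50/50 split so the same person always sees the same variant."""
    digest = hashlib.sha256(participant_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print("Participant P42 sees variant:", assign_variant("P42"))

# Invented results after the test period: conversions per variant.
results = {
    "A": {"participants": 80, "conversions": 12},
    "B": {"participants": 82, "conversions": 21},
}

for variant, r in results.items():
    rate = r["conversions"] / r["participants"]
    print(f"Variant {variant}: {rate:.1%} conversion rate")
```

With small test groups, treat such differences as directional evidence and combine them with the qualitative feedback described below.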

2. Collect feedback from the participants

Collect feedback from participants after the A/B tests, e.g. in a survey. This can include multiple-choice questions, open questions or a rating scale. Typical questions are:

  • What are your initial thoughts on the variant shown to you?
  • Is anything missing?
  • How satisfied were you with the version you used?
  • Which elements were particularly helpful for you?
  • What didn’t you like, or what did you find irritating?
  • What did you think of the design and user-friendliness?
  • What changes would you make to improve this version further?
  • How likely is it that you would click on the button in variant A/B?
  • What would you change about the variant shown to you?

In addition to written surveys, it can be useful to invite individual participants to a short interview to discuss the results of the A/B test in more detail. This helps to link the quantitative results with subjective impressions and thus gain a deeper understanding of the target group’s experience.

Measure your key metric (OMTM) at level 5 of the impact ladder

1. Ask your target group about your early impact

To be able to assess the social and ecological impact of your prototype at an early stage, it is important to measure the One Metric That Matters (OMTM). This way you can see whether you are on the right track to achieve a long-term impact.

To find out whether your prototype is successful at level 5 of the impact ladder, you can ask questions like the following during testing. A short sketch after the list shows one way to condense the answers into your OMTM:

  • How has the target group’s behavior changed as a result of using the prototype?
  • What specific skills or competencies have users acquired or improved as a result of the prototype?
  • To what extent has the prototype had a positive impact on the target group’s quality of life or work situation?
  • What measurable improvements in relation to the addressed problem could be observed through the use of the prototype?
  • How sustainable are the behavioral changes or improvements achieved?
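One possible way to condense such answers into a single figure is to add a rating-scale question to the test survey and report the share of clearly positive answers. The following minimal Python sketch assumes a 1–5 scale and a threshold of 4; both are illustrative assumptions, not a fixed rule:

```python
# Minimal sketch with invented answers: participants rate the statement
# "Using the prototype has changed my behavior regarding the problem" on a 1-5 scale.
# Assumed OMTM definition: share of participants who answer 4 or 5.
ratings = [5, 4, 2, 4, 3, 5, 4]  # invented survey answers

omtm = sum(r >= 4 for r in ratings) / len(ratings)
print(f"Share of participants reporting a clear behavior change: {omtm:.0%}")  # 71%
```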


Next chapter: Market analysis

You have now collected feedback from participants for your prototype, gathered initial data on outcome indicators and know where and how your prototype can be improved.

Before you validate the findings from your prototype and develop a business model, we recommend that you carry out a market analysis. You can develop this in the next chapter.