Build – Measure – Learn
Prototype in real-world testing: Collect feedback and measure impact

This section is for you if …

  • you’re planning to found an impact startup or are already in the middle of building one with your team.
  • you have a clear understanding of your target group.
  • you can clearly define the problem, your solution, and the impact you aim to create.
  • you’ve developed a functional prototype.
  • you’ve defined a key metric (OMTM) to measure early impact.

In this section, you’ll learn how to …

  • gather valuable feedback from your target group.
  • collect initial data on output and outcome indicators.
  • identify specific areas for improvement.

Landing page testing

Campaign tests are a simple and cost-effective way to find out how well your prototype resonates with your target audience. They offer valuable insights into what grabs attention and drives engagement.

1. Set clear test goals

Decide which aspects of your prototype you want to test (e.g., participant interest, conversion rate, messaging, engagement). Define measurable KPIs (e.g., click-through rate, sign-ups, inquiries).
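If it helps to make these KPIs concrete, here is a minimal sketch of how they can be computed from raw campaign numbers (all figures below are invented for illustration):

```python
# Invented campaign figures – for illustration only.
impressions = 12_000   # how often the ad was shown
clicks = 360           # clicks on the ad
sign_ups = 45          # sign-ups on the landing page

click_through_rate = clicks / impressions   # CTR: clicks per impression
conversion_rate = sign_ups / clicks         # landing-page conversion

print(f"CTR: {click_through_rate:.1%}")            # CTR: 3.0%
print(f"Conversion rate: {conversion_rate:.1%}")   # Conversion rate: 12.5%
```

Whichever KPIs you pick, define them before the campaign starts so you are not tempted to pick whichever number looks best afterwards.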

2. Set up your campaigns

Google Ads

  • Create a search network campaign.
  • Choose relevant keywords that match your prototype.
  • Write clear, compelling ad copy that communicates your offer.

LinkedIn Ads

  • Use sponsored content or text ads.
  • Narrow your target group by industry, job title, company size, etc.
  • Create engaging ads with a strong, clear promise.

3. A/B test different ad variations

Try out different audiences, keywords, or messages to see what works best. Start with a small budget to reduce risk, then increase your spend gradually based on what performs well.
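As a rough sketch of what "seeing what performs well" can look like in practice, you might compare the click-through rates of two ad variants and shift budget toward the winner (the numbers here are invented):

```python
# Invented results for two ad variants – illustration only.
variants = {
    "A": {"impressions": 5_000, "clicks": 110},
    "B": {"impressions": 5_000, "clicks": 165},
}

# Compute each variant's click-through rate.
for stats in variants.values():
    stats["ctr"] = stats["clicks"] / stats["impressions"]

# Shift more budget toward the variant with the higher CTR.
best = max(variants, key=lambda name: variants[name]["ctr"])
print(f"Best-performing variant: {best}")   # Best-performing variant: B
```

With small budgets the click counts are small too, so treat early differences as hints rather than proof, and keep the test running until the gap is clear.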

Testing digital and non-digital prototypes: check accessibility and acceptance

Use testing to make sure your solution is easy to use and meets the needs of the target group. You can test both digital and non-digital prototypes – like a software tool, app, physical product, or service.

1. Define clear goals

Decide exactly what you want to test – this could be a website, an app, a role-play exercise, or a click-through dummy. You can run both qualitative and quantitative tests. In practice, qualitative testing is more common at this stage.

Qualitative usability tests: These tests focus on how people use your prototype. They’re great for spotting usability issues and improving the experience.

Quantitative usability tests: These tests look at measurable outcomes – like task success rates or time spent on a task – and help you set benchmarks.
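To make the quantitative side concrete, here is a small sketch of how task success rate and time on task could be tallied from test sessions (the session data is hypothetical):

```python
from statistics import median

# Hypothetical results from five participants – illustration only:
# whether each completed the task, and how long they took (seconds).
sessions = [
    {"completed": True,  "seconds": 48},
    {"completed": True,  "seconds": 75},
    {"completed": False, "seconds": 120},
    {"completed": True,  "seconds": 60},
    {"completed": False, "seconds": 140},
]

# Share of participants who completed the task.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Median time on task, counting only successful attempts.
median_time = median(s["seconds"] for s in sessions if s["completed"])

print(f"Task success rate: {success_rate:.0%}")            # 60%
print(f"Median time on task (successes): {median_time}s")  # 60s
```

Numbers like these become useful as benchmarks: run the same tasks after each prototype iteration and check whether the success rate goes up and the time on task goes down.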

2. Choose your testing method

There are several effective methods to choose from:

  • One-on-one interviews: Talk to individual participants about their experience with your prototype.
  • Field tests: Observe people using your prototype in a real-world setting, then interview them.
  • Focus groups: Bring together potential users to discuss your prototype in a group setting.
  • Expert reviews: Ask professionals in your field to evaluate your prototype.

3. Recruit participants

Select people who match your target group. Aim for 5 to 8 participants to gather meaningful insights.

Target group vs. customers

In the Lean Impact Journey, we distinguish between the target group when talking about your impact model and product, and customers when it comes to your business model.

How you use these two terms in your startup depends on your solution. In this playbook, the target group refers both to people who use the solution and to those who benefit from it.

Sometimes those groups are the same. If they’re not, make sure to recruit test participants from both.

4. Create a test scenario and tasks

Design realistic usage scenarios for your prototype and define clear tasks. Make sure the scenarios are relevant and easy to understand – clear, specific, and measurable. Decide whether the test will take place in a lab, remotely, or in the real-life setting where your solution will be used. Ensure all tools and materials are working properly.

5. Run the test

During the tests, ask participants to think out loud as they use the prototype. Watch closely how they interact with it and document your insights so you can make targeted improvements.

User testing

The terms “user testing” and “usability testing” come from the software sector – but they can easily be adapted to non-digital innovations. Here are a few tips:

  • Refer to “participants” or “target groups” instead of “users.”
  • Focus on aspects of your solution instead of features.
  • Use physical prototypes, role-plays, or simulations.
  • Clearly describe the scenarios in which your solution can be used.
  • Create a realistic environment to observe participants’ reactions.

A/B testing: Compare different versions of your prototype

A/B testing lets you compare different versions of your prototype to find out which one creates the greatest possible impact. You can test things like navigation elements, buttons, and visuals – as well as different formats for coaching, training, support services, design choices, or core features. This helps you see which version of your prototype delivers the best results. Show two different versions (version A and version B) to different parts of your target group – randomly assigned.

1. Define clear goals

Select the aspects of your prototype you want to test. Set specific metrics and success criteria. It’s important to define SMART metrics that will help you measure how well your solution works. These could be conversion rates, behavior changes, or qualitative indicators – depending on your product or service. You’ll find more on the SMART method under “How to build your first prototype and find your key metric.” Make sure to split your test group randomly between version A and version B.
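The random split itself can be as simple as shuffling your participant list and cutting it in half. A minimal sketch (the participant names are placeholders):

```python
import random

# Placeholder participant list – illustration only.
participants = [f"person_{i}" for i in range(20)]

rng = random.Random(42)   # fixed seed so the split is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Cut the shuffled list in half: first half sees version A, second half B.
half = len(shuffled) // 2
group_a, group_b = shuffled[:half], shuffled[half:]

# Every participant lands in exactly one group.
assert not set(group_a) & set(group_b)
print(len(group_a), len(group_b))   # 10 10
```

Random assignment matters because it keeps the two groups comparable: any difference you measure afterwards is then more likely caused by the version shown, not by who happened to be in which group.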

2. Collect feedback from participants

After the A/B test, gather feedback from participants – for example, through a survey. It can include multiple-choice questions, open-ended questions, or rating scales. Here are some typical questions:

  • What were your first impressions of the version you saw?
  • Was there anything missing?
  • How satisfied were you with the version you used?
  • Which elements did you find especially helpful?
  • What didn’t you like or find confusing?
  • How did you feel about the design and ease of use?
  • What changes would you make to improve this version?
  • How likely are you to click the button in version A/B?
  • What would you change about the version you were shown?

In addition to written surveys, it can be helpful to invite some participants for short follow-up interviews. This allows you to dig deeper into the A/B test results and combine quantitative data with personal insights, giving you a better understanding of how your target group experiences your solution.
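One way to line up the quantitative results with the survey feedback is a small per-version summary: conversion rate next to the average rating-scale score. A sketch with invented numbers:

```python
# Invented A/B results and 1–5 survey ratings – illustration only.
results = {
    "A": {"shown": 100, "converted": 12, "ratings": [4, 3, 5, 4]},
    "B": {"shown": 100, "converted": 18, "ratings": [5, 4, 5, 5]},
}

# Summarize each version: conversion rate plus average survey rating.
summary = {}
for name, r in results.items():
    summary[name] = {
        "conversion": r["converted"] / r["shown"],
        "avg_rating": sum(r["ratings"]) / len(r["ratings"]),
    }

for name, s in summary.items():
    print(f"Version {name}: {s['conversion']:.0%} conversion, "
          f"avg rating {s['avg_rating']:.2f}")
```

If the two measures disagree – say one version converts better but rates worse – that tension is exactly what the follow-up interviews are for.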

Measure your key metric (OMTM) at level 5 of the Impact Ladder

1. Ask your target group about early impact

To understand the early social or environmental impact of your prototype, it’s important to measure your one metric that matters (OMTM). This helps you see whether you’re on the right track to achieving long-term impact.

To find out whether your prototype is already making a difference at level 5 of the Impact Ladder, you can ask questions like:

  • How has the behavior of your target group changed as a result of using the prototype?
  • What specific skills or abilities have users gained or improved through the prototype?
  • In what ways has the prototype positively affected the quality of life or work situation of your target group?
  • What measurable improvements related to the core problem have been observed thanks to the prototype?
  • How lasting are the behavior changes or improvements you’ve achieved?
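If your OMTM is, for example, the share of participants reporting a lasting behavior change, answers to questions like these can be turned into a first measurement. A minimal sketch with invented answers:

```python
# Invented survey answers – illustration only: did each participant
# report a lasting behavior change after using the prototype?
answers = [True, True, False, True, False, True, True, False]

# OMTM here: share of participants reporting a behavior change.
omtm = sum(answers) / len(answers)
print(f"OMTM – share reporting behavior change: {omtm:.1%}")   # 62.5%
```

Repeating this measurement after each prototype iteration shows whether your early impact is trending in the right direction.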


Next chapter: Market analysis

At this point, you’ve collected feedback from participants, gathered initial data on outcome indicators, and identified where and how your prototype can be improved.

Before you validate those findings and build your business model, we recommend doing a market analysis. You’ll work on that in the next chapter.