
Measuring the success of your software

Over the past 9 years, we’ve built plenty of software products. For the majority of that time, our mindset mirrored that of the industry: we built to our clients’ requirements. This created a blind spot after development was complete. Did the solution we built actually enable our clients to reach their goals?

On paper, it sounds fair enough. You pay someone to build a house, and as long as they build the house you designed, you wouldn’t expect them to check in and see if you like it…

But as an industry and a company, we felt we had to do better. Rather than build solutions blindly, we’re shifting the focus to building successful products. Out of this came a new concept: product success consults.


How is success defined?

It would be absurd to try to come up with a definition of success that can be applied across every project. Success for project A is completely different to success for project B. So, we must set the success criteria on a per-project basis. This is generally done at the start of a project but updated as we progress and as business demands change.

How can it be measured?

This is a tough question to answer. It generally depends on the type of goal you’re trying to measure. Some goals are quantitative and can be measured by analytics tools; we’ll discuss these tools in more depth shortly. Qualitative goals are harder to measure, but can usually be assessed through user testing and more open-ended survey methods.

With background knowledge of our clients’ applications, business goals and success criteria, as well as experience using analytics tools, we’ve found ourselves in a great position to capture and synthesise product data. Product owners can be incredibly busy managing the application and helping deliver internal business goals. To ensure we’re making data-driven decisions, we experimented with a few different formats. As mentioned before, what came out of this experimentation was product success consults.

Product success consulting

These consultations are delivered as a one-hour meeting scheduled every four weeks. Prior to the consultation, WM will review the analytics collected over the previous four weeks. Afterwards, WM will deliver a Product Success Report detailing everything observed and discussed, and the actions set. Effectively, it’s a short meeting to run through the most recent data and set actions based on it. To have effective consults, we must set up analytics tools. These tools are usually identified at the scope stage and are chosen based on what data needs to be captured.

Supporting tools we recommend

Smartlook

Smartlook is great at recording event-based measurements. An event might be a user pressing a button that requests a consultation, or the completion of a payment. These are important actions that you want visibility across. Smartlook also has a recording function available if you want a more detailed look into how users are behaving.
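To make this concrete, here’s a minimal sketch of tracking a custom event with Smartlook’s web SDK. It assumes the Smartlook snippet is already installed on the page (which exposes a global smartlook function); the element ID, event name and properties are illustrative.

```typescript
// Sketch: fire a custom Smartlook event when a CTA button is clicked.
// Assumes the Smartlook snippet is installed, exposing a global `smartlook`.
declare const smartlook: (command: string, ...args: unknown[]) => void;

document
  .getElementById("request-consultation") // hypothetical button ID
  ?.addEventListener("click", () => {
    // "track" records a named event with optional properties.
    smartlook("track", "consultation_requested", {
      source: "pricing_page", // illustrative property for later segmentation
    });
  });
```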

Google Analytics & Firebase

These two are pretty well known. We’re fans of them for more general usage analytics: for example, returning visitors versus new visitors, and average user session data.
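As a hedged example, here’s how an event might be logged with the Firebase Analytics modular SDK (v9+). The config values are placeholders, and the screen name parameter is illustrative.

```typescript
// Sketch: log a GA4 event via the Firebase Analytics modular SDK.
import { initializeApp } from "firebase/app";
import { getAnalytics, logEvent } from "firebase/analytics";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",        // placeholder config values
  projectId: "your-project-id",
  appId: "YOUR_APP_ID",
  measurementId: "G-XXXXXXXXXX",
});
const analytics = getAnalytics(app);

// Standard GA4 "screen_view" event; the screen name is illustrative.
logEvent(analytics, "screen_view", { firebase_screen: "dashboard" });
```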

Hotjar

Hotjar has a great heat map feature. A heat map is a static picture of a page with click data overlaid, showing which calls to action have successfully attracted your users’ interest. By manipulating the order or design of your calls to action, you can use heat maps to evaluate their success.
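Heat maps themselves are configured in the Hotjar dashboard rather than in code, but the snippet below sketches one way to segment them: firing a Hotjar event per CTA variant so heat map data can be filtered by variant. It assumes the Hotjar tracking snippet is installed (which exposes a global hj function); the variant names and the 50/50 split are illustrative.

```typescript
// Sketch: tag the session with a Hotjar event naming the CTA variant shown.
// Assumes the Hotjar snippet is installed, exposing a global `hj`.
declare const hj: (command: string, ...args: unknown[]) => void;

// Hypothetical A/B split: half of visitors see the reordered CTAs.
const variant = Math.random() < 0.5 ? "cta_original" : "cta_reordered";
hj("event", variant);
```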

CRM Integration

Integrating an application into an existing CRM like Pipedrive or HubSpot can align the product to business objectives. For example, if the product facilitates B2B sales, but those conversions happen outside of the application itself, CRM integration can help create visibility.
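As a rough illustration, here’s a server-side sketch that pushes a lead into HubSpot via its CRM v3 contacts endpoint. It assumes a private-app access token held in an environment variable; the helper name and any fields beyond email are illustrative.

```typescript
// Sketch: create a HubSpot contact when a lead converts outside the app.
// Assumes a private-app token in HUBSPOT_TOKEN (server-side, Node 18+).
async function pushLeadToHubSpot(email: string, firstname: string) {
  const response = await fetch("https://api.hubapi.com/crm/v3/objects/contacts", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ properties: { email, firstname } }),
  });
  if (!response.ok) {
    throw new Error(`HubSpot rejected the lead: ${response.status}`);
  }
  return response.json();
}
```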

The benefits of measurement

Perhaps the biggest benefit that comes from measuring product performance is the ability to make data-driven decisions. That way, we greatly reduce the risk of making inaccurate assumptions.

These decisions become critical when iterating on the first version of your product and adding new features. They can show which features are needed most, or the impact that a new feature has on engagement. We like to think that every step taken is forwards, but the data helps prove (or disprove) that.

ABOUT THE AUTHOR

Yianni Stergou

Marketing enthusiast and FIFA extraordinaire
