
The KPIs of Customer Success

KPIs, or key performance indicators, are simple in theory but sophisticated in practice. Performance is fluid and subjective, and it needs to be paired with appropriate indicators to be measured reliably. KPIs are not bound by industry, technology or demographic, and are therefore just as applicable to Customer Success.

The world of Customer Success is heavily focused on performance results, so the KPIs of a platform or application are an important mechanism for determining which products are successful. Thought leadership in this area tells us there are 10 overarching KPIs in the CS playbook. Below I have outlined them and shown where their importance lies in the potential improvement of software products, along with some industry standards. For a more in-depth guide on creating and measuring a successful software product, check out our guide.

1. NPS (Net Promoter Score)

This survey asks the user how likely they would be to recommend the platform to a friend or colleague. The recommendation is captured on a 0-10 scale, with respondents categorised as promoters (scored 9 or 10), passives (scored 7 or 8) or detractors (0 to 6) (HubSpot). Industry standards for these surveys vary widely, from department stores sitting around 5.5 down to internet providers sitting at around 2. In research conducted by Colloquy, the concentration of responses is skewed negatively: people who have had a negative experience with a product are 75% likely to feel compelled to voice it, compared with 42% of people who have had a positive experience and go on to share feedback.
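
To make that breakdown concrete, here is a minimal sketch in Python (the response list is made up) showing how the promoter/detractor split rolls up into a single score; NPS is conventionally the percentage of promoters minus the percentage of detractors.

```python
# Minimal sketch: classify NPS responses and compute the score.
# `responses` is a hypothetical list of 0-10 ratings from the survey.

def nps(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

print(nps([10, 9, 7, 8, 6, 3, 10, 9, 2, 8]))  # 10.0
```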

This survey can be sent to a randomised list, so the spread of responses should represent most user groups. The randomisation also helps offset its relatively low average response rate.

Typically speaking, a 50% response rate is the ultimate goal for any survey; for NPS, 30% or higher is the statistical goal. A response rate between 15% and 30% is average for SaaS products, with 10% a safe baseline (Perceptive). With this in mind, setting up follow-ups to increase responses is a great way to ensure the survey captures a true representation of the data.

2. C-SAT Score (Customer Satisfaction Score)

The Customer Satisfaction Score is commonly used by SaaS companies that want to gauge the quality of their services. Respondents are asked to rate their happiness on a 3-point emotional scale: Unhappy // Neutral // Happy.
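
There is more than one way to roll the responses up, but a common convention is to report the share of "Happy" answers as a percentage of all answers; the sketch below assumes that convention and a made-up response list.

```python
# Illustrative sketch only: C-SAT as the share of "Happy" responses.
from collections import Counter

def csat(responses: list[str]) -> float:
    counts = Counter(responses)  # e.g. {"Happy": 3, "Neutral": 1, "Unhappy": 1}
    return 100 * counts["Happy"] / len(responses)

print(csat(["Happy", "Happy", "Neutral", "Unhappy", "Happy"]))  # 60.0
```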

As this scale has lower variance in scoring than the NPS, the results are less spread out when comparing industry standards. They also cluster around the top end of the scale, meaning the responses received are generally more positive. This contributes to the C-SAT score being favoured over the NPS. In my opinion, though, the data from an NPS is much more useful for building strong advocates for your product. The C-SAT survey, on the other hand, is best sent after a certain action, such as a support desk response, whereas the NPS is a strong gauge of the performance of the product as a whole.

3. CES (Customer Effort Score)

The Customer Effort Score asks the customer how easy it was to find and receive help from a help desk or a member of the service team, on a 1-7 Likert scale (7 being the highest).

For example, a user on a platform is confused and unsure how to navigate it. They see that the platform has a built-in chat bot through which requests can be submitted (a great service channel when the request comes in outside office hours). The user opens the bot and enters their query. The bot assesses the input and surfaces articles from a knowledge base. After the user reads the articles, a CES survey pops up, and the user responds with a low score (anywhere from 1 to 4).

Two things can be concluded here:

1. The articles produced did not match the query, or

2. The articles produced did not contain enough information to resolve the user's request.

There is no set standard for the CES, as it is more a rating of a specific process within an organisation. It does go without saying that the higher the value, the better the service, so it would be ideal to aim for a CES rating of 5 or above.
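
As a rough illustration of how the scenario above might be handled in practice, the sketch below flags low CES responses after a knowledge-base deflection so a human can follow up; the threshold and field names are assumptions, not a standard.

```python
# Hypothetical sketch: route low-effort-score responses to a human agent.
LOW_CES_THRESHOLD = 4

def needs_follow_up(ces_score: int, articles_served: int) -> bool:
    """Follow up when the bot served articles but effort was still rated high."""
    return articles_served > 0 and ces_score <= LOW_CES_THRESHOLD

print(needs_follow_up(ces_score=3, articles_served=2))  # True
```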

Quick Summary!!

Each of the above three KPIs offers the ability to follow up with the respondent for more detail. It is crucial to understand, analyse and act on this feedback to improve the product for the future. It is not recommended to act on every piece of feedback, as that would be quite inefficient; rather, when analysing the feedback, map trends to formulate data-based assumptions about the pain points users might be having.
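
One way to map those trends, sketched below with an entirely hypothetical tagging scheme, is simply to tally the themes attached to low-scoring feedback and see which pain points recur most often.

```python
# Illustrative sketch: tally recurring themes in low-scoring feedback.
from collections import Counter

feedback = [
    {"score": 4, "tags": ["navigation", "search"]},
    {"score": 2, "tags": ["search"]},
    {"score": 9, "tags": ["reporting"]},
]

pain_points = Counter(tag for item in feedback if item["score"] <= 6 for tag in item["tags"])
print(pain_points.most_common(3))  # [('search', 2), ('navigation', 1)]
```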

It also goes without saying that if the responses are more positive, capturing that positive feedback is just as crucial in assessing a good product-market fit, and in celebrating with your team!

4. MRR (Monthly Recurring Revenue)

Monthly Recurring Revenue (MRR) is a growth metric that is very commonly used in the SaaS world. It is calculated by multiplying the number of accounts/licences by the billable amount. It is calculated month on month, but its trend and trajectory are incredibly important for forecasting. On its own the measure is quite broad and can be too generic for forecasting, so MRR is usually split into four categories: New MRR, Expansion MRR, Churned MRR and Net New MRR. New MRR is the portion of MRR contributed by new customers. Expansion MRR is the portion from existing customers who have increased their spend (e.g. upgraded their licence). Churned MRR is counted as a loss (this value is subtracted from the overall MRR). Finally, Net New MRR is New MRR plus Expansion MRR, minus Churned MRR.
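
The arithmetic for that last step is simple; the sketch below uses made-up figures purely to illustrate it.

```python
# Sketch of the Net New MRR arithmetic described above, with invented figures.

def net_new_mrr(new_mrr: float, expansion_mrr: float, churned_mrr: float) -> float:
    return new_mrr + expansion_mrr - churned_mrr

mrr_last_month = 100_000
change = net_new_mrr(new_mrr=8_000, expansion_mrr=3_000, churned_mrr=4_500)
print(mrr_last_month + change)  # 106500: this month's MRR
```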

As mentioned earlier, this metric is incredibly useful for forecasting. It is also a great tool for retrospectively analysing areas of growth or contraction. For example, your Net New MRR may look on trend, yet when it is broken down, New MRR is substantially larger than average and is balanced out by Churned MRR also being substantially larger than average. The assumption that can be made from this analysis is that the SaaS product is doing well at attracting new users, but not at keeping current ones. This is a big red flag, because at some point that pool of new users will shrink while churn continues to trend upwards.

5. CRC (Customer Retention Cost)

It goes without saying that the cost of acquiring a new customer is far greater than the cost of retaining a loyal one (between 5 and 25 times more). This metric measures the cost of retention campaigns relative to acquisition (MQL) campaigns, and the main objective is to keep CRC lower than the acquisition cost. It is much easier for a customer to stay with a company than to transfer their data elsewhere and build trust all over again, which is why the retention rates seen in banking, internet services and telecommunications generally sit between 75% and 85%. Even though these are B2C examples, a similar trend holds for B2B and professional services, which also sit high (around 85%) (HubSpot). An industry that struggles here is retail, mostly due to its strongly competitive nature, where products are easily substituted.
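
As a hedged illustration, one common way to express CRC is total retention spend divided by the number of retained customers, compared against the equivalent cost of acquiring a customer; all the figures below are invented.

```python
# Illustrative sketch: compare cost per retained customer with cost per acquired customer.

def cost_per_customer(total_spend: float, customers: int) -> float:
    return total_spend / customers

crc = cost_per_customer(total_spend=20_000, customers=400)  # 50.0 per retained customer
cac = cost_per_customer(total_spend=30_000, customers=100)  # 300.0 per new customer
print(f"Retention is {cac / crc:.0f}x cheaper than acquisition" if crc < cac else "Revisit retention spend")
```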

6. Support Requests Submitted — Product Adoption

Product-understanding support requests (or tickets) generally mean that the user does not grasp the way the product is designed, and the desired outcomes are not clear to them. It goes without saying that a user may need assistance the first couple of times they use the product, and there are mechanisms to assist with adoption (such as tooltips or product tours). But if similar requests keep being submitted and their volume is not decreasing in a timely fashion (this will vary product to product and user group to user group), then there are two areas to improve: the design of the platform, and the onboarding mechanisms. Ultimately, there should be an inverse relationship between the number of tickets submitted and the time the product has been in market, as sketched below.
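
A very rough way to sanity-check that relationship, assuming you already export monthly counts of tickets tagged as adoption questions, is the following sketch.

```python
# Illustrative check: are adoption ("how do I ...?") tickets trending down month on month?
monthly_adoption_tickets = [120, 95, 90, 70, 65, 50]  # hypothetical counts

def is_trending_down(counts: list[int]) -> bool:
    # Simple comparison of the first and second halves of the series.
    mid = len(counts) // 2
    return sum(counts[mid:]) < sum(counts[:mid])

print(is_trending_down(monthly_adoption_tickets))  # True
```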

7. Support Requests Submitted — Functionality Issues

Support requests submitted due to functionality issues are inevitable in software. On average, Zendesk counts about 500 tickets per month for high-usage products. Similar to product adoption, there should be an inverse relationship between the number of tickets submitted and the maturity of the platform's functionality. When tickets are submitted, it is important to assess their trends. For example, if many requests are submitted regarding a particular page, there might be a larger issue causing the disruption. Most service desks can track trends in tickets as well as quantities; using this avenue of communication increases the likelihood that the service team can be proactive, and the user experience is improved.
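
As a small illustration of that kind of trend analysis, the sketch below groups functionality tickets by the page they reference to spot a disproportionate cluster; the ticket fields are hypothetical.

```python
# Hypothetical sketch: group functionality tickets by page to find hotspots.
from collections import Counter

tickets = [
    {"id": 1, "page": "/billing"},
    {"id": 2, "page": "/billing"},
    {"id": 3, "page": "/reports"},
    {"id": 4, "page": "/billing"},
]

by_page = Counter(t["page"] for t in tickets)
print(by_page.most_common(1))  # [('/billing', 3)] -> likely worth a proactive fix
```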

8. Churn Rate

Churn rate is a Customer Success KPI calculated as the number of customers who have ended their commercial agreement with a product, divided by the number of existing customers at the start of the period (not counting new customers), expressed as a percentage. The application of this KPI may differ depending on the product's payment structure or product offering. Because the rate is expressed as a percentage, it is a great indicator of quality and value. Similar to support requests, it is standard practice to see this rate drop over time: as the product moves from MVP to mature, the user base grows, increasing the denominator of the calculation and reducing the resulting percentage.
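
Expressed as code, the calculation looks like the sketch below, with made-up numbers.

```python
# Sketch of the churn calculation described above.

def churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Customers lost in the period / customers at the start, as a percentage."""
    return 100 * customers_lost / customers_at_start

print(churn_rate(customers_lost=12, customers_at_start=400))  # 3.0 (%)
```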

Generally, aiming for a churn rate between 3% and 5% is considered healthy across industries and products. Do keep in mind that a churn rate of 0% is a goal, but it's a stretch; there will always be something out of our control that has users moving away from the product. It's much more strategic to focus on the 95% of loyal customers than to get caught up in the 5% who churned. That being said, if that churn rate continues to grow, then it's time to focus your attention on them and understand the root cause of their leaving.

9. Feature Adoption

To ensure market fit, the product team needs to make sure that the features they are designing will be used appropriately by the intended users. This can be measured in different ways. The simplest way to quantify uptake is to use an analytics platform such as Smartlook or Fullstory: they pull user journey data, and setting up tracking events is simple.
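
One simple way to quantify uptake from that event data, assuming you can export the set of users who triggered a feature's tracking event, is sketched below.

```python
# Illustrative sketch: feature adoption as the share of active users who used the feature.

def adoption_rate(users_who_used_feature: set[str], active_users: set[str]) -> float:
    return 100 * len(users_who_used_feature & active_users) / len(active_users)

active = {"u1", "u2", "u3", "u4", "u5"}
used_new_dashboard = {"u2", "u4"}  # hypothetical event export
print(adoption_rate(used_new_dashboard, active))  # 40.0
```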

This KPI is more relevant to start tracking once the product has been in market and the development team is ready to release its second version. Creating your own thresholds is important, because every product is different and each new element of a product will be adopted differently; this is why user testing and market research matter. Create targets with a specified timeframe and assess how close you came to reaching them after each product release.

10. Customer Health

The health of a customer relies on all of the above KPIs, plus much more. It is typically represented in traffic light form (red = bad, yellow = needs some improvement, green = good). Where a customer sits on this scale depends heavily on the ratings received in the other areas of performance, and the placement is decided by the Customer Success team.

As with each of the KPIs, there are certain industry standards that set the thresholds. Customer health is no different, except the thresholds are created at a company level rather than a metric level. Each product company will set thresholds for each of the KPIs that align with its business goals and long-term vision of success. From this, the placement of customers on the health scale becomes a documented process for CSMs to follow and update when necessary.
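
Purely as an illustration of how such a documented process might roll several KPIs into a traffic light, the sketch below uses invented weights and thresholds; every company would set its own.

```python
# Purely illustrative composite health score; the cut-offs are made-up examples, not a standard.

def customer_health(nps: float, csat_pct: float, churn_pct: float) -> str:
    score = 0
    score += 1 if nps >= 30 else 0
    score += 1 if csat_pct >= 80 else 0
    score += 1 if churn_pct <= 5 else 0
    return {3: "green", 2: "yellow"}.get(score, "red")

print(customer_health(nps=42, csat_pct=85, churn_pct=4))  # green
```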

The ultimate KPI, though, is when a customer advocates for your product: someone who will go out of their way to showcase the value the product has added to their business or lifestyle. An advocacy program takes time, trust and investment, but at the end of the day, a word-of-mouth recommendation is a high-conversion lead generation strategy.

ABOUT THE AUTHOR

Alice Spies

KPI motivator and resident head chef
