Red flags when outsourcing software development


We live in interesting times. Software has never been more critical for keeping up with the competition (let alone disrupting industries), and yet, Australian software development companies are dropping like flies. Buzinga collapsed into liquidation in 2017, and Appster followed a year later in 2018. Prior to these collapses, both Appster and Buzinga were heralded as rising stars in the software development industry. In their demise, they left many customers with empty hands and broken dreams.

What happens when a software developer goes broke?

Can another developer pick up where the first one left off? Changing developers is rarely an easy undertaking; there are a couple of steps you’ll need to take before you can get your software project back on track.

Step 1: Extricate your assets

The first thing you will need to do is retrieve your codebase and other assets, such as prototypes, roadmaps, and deposits, from a madhouse of soon-to-be-unemployed developers and hard-eyed liquidators. Your trials and tribulations are only beginning.

Step 2: Look for a new developer

You’ll need a new developer, and they’ll probably want to scope your project requirements for themselves before picking up the pieces. (Bonus red flag: beware of developers who agree before seeing your project’s requirements.) Even if they’re happy to build to your original developer’s requirements backlog, there’s going to be a learning curve. In other words, you’re going to lose time, and time is money.

Step 3: Go back to Step 1 (optional)

To make matters worse, if you picked the wrong developer the first time around, can you be certain you will pick the right one now? This article will make you a pro at recognising and avoiding developers waving any of these 5 red flags.

A guide for picking the right software developer (or getting out early)

Developing software is complex. This is why most businesses outsource their requirements to service companies such as Appster and Buzinga. Normally, there are no problems. Sometimes, however, software developers can become a nightmare.

Here are 5 red flags you can use when screening potential software development companies.

#1: Your developer fixes the time and the scope

Nothing is certain. Rather than pretending otherwise, software developers should acknowledge this. Failure to do so is a massive red flag!

The two main variables in software projects are the scope of the project (requirements) and the time required to develop these requirements.

Some software development companies fix both these variables. This is called a Waterfall methodology. At first glance, this method can appear to de-risk the development process. This is because your developer commits to both a list of requirements and a deadline. Anyone with any experience with computers will see the risk here.

The principle of the Cone of Uncertainty suggests that estimating the time it will take to complete a project becomes easier over time as you work through the requirements backlog.

Estimations are less accurate the further you are from completion.
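To put rough numbers on the cone, here is a minimal Python sketch using the commonly cited Cone of Uncertainty multipliers (the stage names and figures follow the Boehm/McConnell published ranges and are purely illustrative, not our estimating model):

```python
# Illustrative only: these multipliers are the commonly cited Cone of
# Uncertainty ranges (Boehm/McConnell); real projects will differ.
CONE = [
    ("Initial concept",             0.25, 4.0),
    ("Approved product definition", 0.50, 2.0),
    ("Requirements complete",       0.67, 1.5),
    ("UI design complete",          0.80, 1.25),
    ("Detailed design complete",    0.90, 1.1),
]

def estimate_range(point_estimate_weeks: float) -> None:
    """Show how wide a single point estimate really is at each project stage."""
    for stage, low, high in CONE:
        print(f"{stage:<28} {point_estimate_weeks * low:5.1f} to "
              f"{point_estimate_weeks * high:5.1f} weeks")

estimate_range(20)  # a '20 week' project could plausibly be 5 to 80 weeks at kick-off
```

The point is not the exact multipliers but the shape: a deadline fixed at the widest part of the cone is a guess dressed up as a commitment.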

This may sound like it’s your developer’s problem, but if neither time nor scope is flexible, the only remaining variable is quality!

The alternative to the Waterfall methodology is called Agile. Instead of fixing both variables (time and scope), only one is fixed.

If time is fixed, a developer can estimate how long individual requirements will take to complete and work with the customer to ensure the highest possible value is delivered during the agreed time frame.
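As a concrete illustration of the fixed-time case, here is a minimal sketch (the stories, values, and point estimates are invented, and the greedy selection is just one possible approach) of picking the highest-value work that fits an agreed timebox:

```python
# A hypothetical sketch of fixed-time planning: given the points a team can
# deliver in the timebox, pick the work that delivers the most value.
# Story names, values, and estimates are invented for illustration.
stories = [
    # (name, business value, estimated story points)
    ("User login",         8, 5),
    ("Push notifications", 3, 8),
    ("Payment flow",       9, 8),
    ("Admin reporting",    5, 3),
]

def plan_timebox(stories, capacity_points):
    """Greedily pick stories by value per point until the timebox is full."""
    planned, remaining = [], capacity_points
    for name, value, points in sorted(stories, key=lambda s: s[1] / s[2], reverse=True):
        if points <= remaining:
            planned.append(name)
            remaining -= points
    return planned

print(plan_timebox(stories, capacity_points=16))
# -> ['Admin reporting', 'User login', 'Payment flow']
```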

Alternatively, if scope is fixed, a developer can set about delivering features as they’re set out in the requirements, with less attention paid to time and more paid to functionality.

None of these three methods (Waterfall, fixed time, or fixed scope) is inherently better than the others. However, software developers offering a Waterfall approach should be treated cautiously. As appealing as the certainty of fixed time and scope is, the simple fact of the matter remains: estimations are usually wrong, and you run a high risk of impacting the quality of your product.

#2: Your developer only talks to you over the phone/video call

Part of the reason Appster failed is that they offshored all of the actual development and design of their customers’ projects. For some ‘onshore’ software development companies, little more than a meeting room is actually onshore.

Even if face-to-face meetings with your actual developers and designers are irregular, they’re a fantastic boon for communication. Any software development company that hides their greatest assets behind closed doors is doing both themselves and their customers a disservice. Red flag.

#3: Your developer does not release regular, standardised iteration reports

Software development is typically completed in iterations or sprints. At the end of each iteration, your developer should write an iteration report documenting all progress.

WorkingMouse’s iteration reports include a reiteration of the initial plan (User Stories), a log of User Stories that were added or removed after the iteration was started, and an outline of the completion status of all active User Stories.
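As a rough sketch of the idea only (the class and field names below are illustrative, not our actual report template), the same information could be captured in a structure like this:

```python
# A hypothetical sketch of the information an iteration report captures.
# Class and field names here are illustrative, not an actual template.
from dataclasses import dataclass, field

@dataclass
class IterationReport:
    iteration: int
    planned_stories: list[str]                                # the initial plan
    added_stories: list[str] = field(default_factory=list)    # scope added mid-iteration
    removed_stories: list[str] = field(default_factory=list)  # scope dropped mid-iteration
    status: dict[str, str] = field(default_factory=dict)      # story -> "done" / "in progress" / "blocked"

report = IterationReport(
    iteration=4,
    planned_stories=["User login", "Password reset"],
    added_stories=["Two-factor authentication"],
    status={
        "User login": "done",
        "Password reset": "in progress",
        "Two-factor authentication": "in progress",
    },
)
```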

A formalised iteration report helps customers and their developers maintain a clear focus on the state of the software. Poor communication after an iteration is a massive red flag.

#4: Your developer has no standardised Definition of Done

Our Definition of Done checklists ensure we’re delivering work to a high standard, as well as clearly communicating our progress. We use two: Definition of Done Beta and Definition of Done Prod (production, or live). Each checklist outlines specific requirements for marking a User Story as done.

Our Definition of Done Beta, for example, requires that we’ve written automated tests for the User Story, our code has been peer-reviewed internally, and we’ve written documentation.
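As a rough illustration (the checklist items below paraphrase the Beta criteria above, and the helper function is purely hypothetical), a Definition of Done turns ‘done’ into an explicit gate rather than a matter of opinion:

```python
# A hypothetical sketch: a Definition of Done as an explicit gate rather than
# personal discretion. Checklist items paraphrase the Beta criteria above.
DEFINITION_OF_DONE_BETA = [
    "automated tests written and passing",
    "code peer-reviewed internally",
    "documentation written",
]

def story_is_done(completed_items: set[str], checklist=DEFINITION_OF_DONE_BETA) -> bool:
    """A User Story only progresses when every checklist item is satisfied."""
    return all(item in completed_items for item in checklist)

print(story_is_done({"automated tests written and passing",
                     "code peer-reviewed internally"}))  # False: documentation missing
```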

Armed with these checklists, our customers can be sure User Stories are only progressed when specific requirements are achieved. This maximises both quality and communication.

A developer who decides when requirements are done based on nothing more than personal discretion is waving a red flag big enough to put any matador to shame.

#5: Your developer is wishy-washy about their procedures

Always beware of a service company that’s unwilling to be open about its processes and the reasoning behind them.

When we refer to our Way of Working, we are talking about specific processes, collaboration patterns, practices, tools, and principles. Our Way of Working is an overview of the way we work and why we believe it delivers the best results.

If you read our Way of Working, you will find detailed explanations of our Definitions of Done, our methods for estimating User Stories, and even who attends meetings. Full transparency.


Read our Way of Working and decide for yourself whether our processes align with your goals and risk appetite.


ABOUT THE AUTHOR

Mitchell Tweedie
