Dealing with risk when using agile to develop software


Risk is everywhere, and no matter how hard you try, it is impossible to escape. To deal with the inevitable risk that comes with the software development process, we must find ways to identify risk, mitigate it and accurately allocate time to it. This enables a team to keep to project timelines rather than have them blow out. This article will outline how we approach risk throughout the various stages of agile development and the steps we take to de-risk our projects. There is risk under every stone in the software development process, so we will only touch on a few of the places where risk is found and how best to deal with it. The Way of Working is how WorkingMouse manages risk. See how introducing a Way of Working will improve your software development projects.

The Cone of Uncertainty

The beginning of any software development project is by far the riskiest time for all involved if it is not approached from the right angle. The Cone of Uncertainty (shown below) depicts how the further we look into the future, the more we are uncertain about. There are many factors that may impact a project in six months' time that we simply don't know about currently. For example, Apple or Google may update their software requirements, or a particular plugin we were planning to use may have reached end of life.

This is why it's important to break down the project into smaller builds. Each build can be scoped at the relevant time, allowing greater certainty around the estimated length. This is the premise of agile software development, but it helps to understand the 'why.'

[Image: The Cone of Uncertainty]

The x-axis on the Cone of Uncertainty is the time in the future we are trying to predict. We can see that the further out from where we are currently, the more risk is associated with that prediction. Our aim is to estimate out to our "sweet spot", which we find is about 8 weeks into the future. This enables us to be confident in the estimations we make without taking on too much risk.

Using the cone of uncertainty in your projects

The cone of uncertainty isn't just a visual depiction of how risk is measured in software development; it is also a useful tool in your arsenal for explaining the impact of the varying software development approaches: agile (fixed time or fixed scope) versus waterfall.

[Image: The Cone of Uncertainty applied to agile versus waterfall]

The cone of uncertainty is a great way to reinforce why agile is such a beneficial approach to software development. Utilising shorter iteration cycles that work towards a milestone, with scoping sessions to map out the next development cycle, enables us to keep estimations in that sweet spot throughout the life of the project rather than estimating too far into the future.

We utilised this very approach with one of our clients. We were able to bring the Cone of Uncertainty into the conversation early on to show them the benefits of using an agile approach over fixed time and materials (waterfall). This perspective shift, away from having everything delivered at once and towards working iteratively, is often the most difficult to convey to clients. The Cone of Uncertainty makes the discussion more visual and the benefits of agile development easier to digest through the eyes of risk. Understanding the Cone of Uncertainty and how it applies to a project is also beneficial to a client. When we can mitigate risk throughout the brief and scoping process, it generally reduces the estimation, which brings the overall cost of a project down.

Risk rating and tech spikes

The tech spike allowance creates a bank of time which can be drawn upon when the project team needs it. The tech spike allocation gives the team a chance to research or test concepts before estimating something properly. Typically, a project team member indicates the need for one, though any task with a risk rating higher than eight must be de-risked before estimating, which is almost always achieved by completing a tech spike.

[Image: Tech spike placement within the iteration cycle]

A tech spike can be included in the development process in two ways:

Case 1 - If the issue being estimated has a high priority and cannot be moved into another iteration, a tech spike for the issue can slot in before the next iteration as a chance to research before estimating.

Case 2 - The issue can be moved into a later iteration, and a tech spike can be put into the next iteration in its place (this option doesn't interrupt the development flow).

Our risk rating and its score are what initiate a tech spike. As mentioned above, if the risk score associated with an issue is above an 8, a tech spike is required to de-risk the task before it goes into development. We rate how unfamiliar and how complex the issue is to the team and multiply the two, which gives us a risk score out of 25. If this score is above an 8, a tech spike is required. We then use a Fibonacci-like sequence to get the general size of the task, which we multiply by our risk score to get our time estimation for that task.
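The scoring process above can be sketched in a few lines of code. This is a minimal illustration only: the function names, the 1-5 rating scales for unfamiliarity and complexity (assumed from the score being out of 25), and the example values are our assumptions for demonstration, not WorkingMouse's actual tooling.

```python
# Illustrative sketch of the risk-rating process described above.
# The 1-5 scales and all names here are assumptions for demonstration.

FIBONACCI_SIZES = [1, 2, 3, 5, 8, 13]  # Fibonacci-like task sizes
TECH_SPIKE_THRESHOLD = 8               # scores above this need a tech spike


def risk_score(unfamiliarity: int, complexity: int) -> int:
    """Multiply two 1-5 ratings, giving a risk score out of 25."""
    assert 1 <= unfamiliarity <= 5 and 1 <= complexity <= 5
    return unfamiliarity * complexity


def needs_tech_spike(score: int) -> bool:
    """A score above the threshold must be de-risked before estimating."""
    return score > TECH_SPIKE_THRESHOLD


def time_estimate(size: int, score: int) -> int:
    """Combine a Fibonacci-like size with the risk score (arbitrary units)."""
    assert size in FIBONACCI_SIZES
    return size * score


# A moderately unfamiliar, fairly simple task:
score = risk_score(unfamiliarity=3, complexity=2)  # 6: below the threshold
print(needs_tech_spike(score))                     # False
print(time_estimate(5, score))                     # 30
```

A team might tune the threshold or the size sequence to suit its own projects; the key idea is simply that unfamiliarity and complexity compound, so their product is what gates the tech spike.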

[Image: Risk rating matrix]

Using risk rating and tech spikes

Risk rating and tech spikes are used extensively during the internal development of Codebots. Our team working on Springbot made great use of tech spikes and risk ratings when implementing relationships in the bot. They found them useful when estimating features whose details or technologies they were unsure of, and for deciding whether a tech spike was required or whether more research and allocated time would suffice. In the case of our relationship implementation, the idea wasn't completely foreign to the team, so when it came to assigning a risk they discovered that, while the risk score was just below an 8, extra time needed to be allocated to ensure any unknowns could be handled within the expected delivery timeframe. This ended up being extremely valuable, as issues cropped up throughout the implementation; if it weren't for the appropriate rating of risk and allocation of time, deadlines would have been missed.


ABOUT THE AUTHOR

Oliver Armstrong

