Way of Working

Principles of quality software


The Way of Working is a recognition that the software development industry isn’t perfect, and it’s unlikely to ever be. There are constant risks that threaten to derail a project at any moment. Rather than watching the news cycle of over-budget, over-time IT projects repeat, we believe that with the right tools and tactics we can help companies deliver successful software projects.

Through our experiences and learnings in real-world projects, we have catalogued what works and what doesn’t in the form of processes. These processes underpin everything we do and create a platform to share our ideas with our teams, customers and anyone else developing software. We believe this level of transparency and openness is integral to guiding our customers, competitors and any other organisation along the software development journey.

The Way of Working began as a defined process around the agile development framework, which has slowly grown into its own set of principles, tools and guidelines. Where the agile methodology sets out a vision, the Way of Working creates a more detailed framework for development teams to follow. Broadly speaking, the Way of Working processes and practices have been applied to:

  • Develop bespoke web and mobile applications
  • Migrate legacy software
  • Maintain and enhance existing products
  • Develop with teams of 3-9 people
  • Release projects frequently
  • Sustain and support projects and their customers

Every software project has four key phases: Brief, Scope, Development and Support. Projects will navigate these in different ways; the process is not designed to be linear. Its strength is its ability to adapt to different use cases while still providing a reliable and consistent guide along the way.

While the Way of Working has been designed for software development agencies working on external customer projects, the same principles and tools can be applied to internal development projects.

Figure 1: Our Way of Working process

Discover Software



The process relies on five teams to guide the project through different phases: Growth, Product, Delivery, Support and Customer. Each team will be involved during various phases depending on a project’s requirements.

While we recommend each role is carried out by a single person, the reality is that may not always be possible. Use your best judgement when assigning roles and responsibilities, taking into consideration a person’s expertise and capacity.


The Growth team is a part of all phases of the process. The team is made up of two roles, the Account Manager and the Customer Success Consultant. An Account Manager is an internal role that is the key intermediary between the internal teams and any relevant external stakeholders. Think of them as your customer advocate on the inside. The Customer Success Consultant is largely involved during the latter phases of Scope, Development and Support. Their key responsibility is ensuring the application is meeting the business objectives by consistently measuring an application’s performance against key metrics.


The Product team is highly involved in the Brief, Scoping and Development phases. Their mission is to identify the problems a customer is facing, find a creative solution to solve them and ensure future iterations continue to deliver value based on data-driven feedback loops. Typically, a Scoping team will feature a Product Designer and Software Developer to ensure projects are designed so that they deliver value and are feasible from a technical perspective.


The Delivery team are core to the Development phase and are responsible for delivering high-quality software. Teams should be structured with a Squad Lead and a group of 2-9 developers for any particular build. Squad Leads are responsible for facilitating the Development process and ensuring developers are able to build software without external distractions. Developers write the code and tests, and perform any quality assurance required to build and release an iteration of work. Depending on a project’s needs, a Squad Lead can monitor and manage multiple Delivery teams working on entirely different projects at the same time.

Figure 2: Squad structure


The Support team are a group of software developers who primarily handle the support and maintenance of a project after it has been released. Along with regular maintenance, a Support team can also be used to make gradual improvements to an application, allowing a Product Owner to quickly respond to feedback from users or product metrics to enhance their application.


The Customer team are generally an external team assisting all other teams to design and build a successful product that meets the goals and success criteria of a project. The team is typically comprised of a Product Owner and all other external stakeholders impacted by the project. The Product Owner is a single person who deeply understands the domain and relevant customers, and is the decision maker. It is the responsibility of the Product Owner to ensure relevant external stakeholders are either present for, or informed of, key decisions made during the application’s development.

Internal delivery teams will liaise day-to-day with the Product Owner during development. Within this document, “customer” and “Product Owner” are both used. Customer is used when discussing estimations that will affect budgets, or when referring to all stakeholders engaging in development. Product Owner is used when discussing the development and direction of the application, and the single point of contact for key decision making.


The Activity Kit

It’s important throughout all phases of a project to have specific activities and tools available to a software team to solve problems. For this, it’s recommended to have an Activity Kit as a collection of tools available to teams throughout the software development process. The kit should be a curated list built by various leaders within all sectors of the business and, in many cases, will be deeply influenced by and tested on numerous software projects. Each activity should clearly define its purpose, how to perform it and what deliverables or outcomes should be expected.

Some activities that are frequently useful in the software process are listed below. This is not an exhaustive list, but it is representative of some with high value:

  • Defining a problem statement
  • User story mapping
  • Performing discovery interviews
  • Writing a product backlog
  • Project prioritisation
  • Running a retrospective
  • Facilitating a user testing session

There is no formula for what activities to run and when to run them. It is entirely dependent on what the project needs at the time. We recommend having a deep understanding of the Activity Kit items available. Then, based on where your project is at and what is needed, choose the Activity Kit item that fits.

Throughout the Way of Working, you’ll see links and references to Activities. While it is not mandatory to complete them, you may find the hands-on nature of an activity can help add value to your project.

Activity Kit Illustration



Identifying opportunities


All projects start at the same point, whether it be a newly formed start-up, government organisation or legacy project. Products at their core intend to solve a problem for a group of users. The purpose of the Brief stage is to ensure that everyone shares an understanding of the problem and is confident both that a feasible product is possible and that a partnership can be formed. The Brief phase begins with a consultation that involves the customer, Account Manager and any relevant Product team members. The Account Manager is the main point of contact for the customer during this process.

Brief infographic

During the brief consultation, the core activity is to explore the underlying problem the customer is looking to solve. A key part of this is unpacking the business and user outcomes associated with the success of the project. The activity aims to unpack a range of problems the customer may be facing and focus in on the most valuable one. The outcome will be a defined problem statement that will be explored and defined within the brief document and form the basis for the Scope stage.


Defining a Problem Statement

Defining the problem statement creates the foundation for the project to succeed. Starting with a solution-focused mindset leads to unvalidated assumptions and limits the creativity of the Product team. In our experience, we’ve found projects that skip or rush this stage fail to have a clear and unified vision of their unique value proposition.

A problem statement isn’t something that remains static throughout a product’s lifecycle. Rather, the problem statement should focus on the upcoming milestone.

Brief problem statement infographic

During the Brief stage we recommend using the Lean UX Canvas Activity Kit item to align stakeholders on the problem statement. As part of the activity, you’ll be asked a series of questions relating to:

  • The business problem
  • Business outcomes
  • User types
  • User outcomes and benefits
  • Solutions
  • Hypotheses
  • What’s the most important thing to learn first?
  • What’s the least amount of work to learn the most important thing?

Well-defined vs ill-defined vs wicked problems

Throughout this process, the goal should be to discover a well-defined problem. This means ensuring that there is a clear definition of the problem and the goal state. Being locked out of your house is one example: both the problem and the goal are well defined.

Avoid the trap of setting an ill-defined problem, where the goal and the means to reach a solution are to a large extent unknown. For example, ‘how do I get more users?’ We know that it’s possible to get more users; we just don’t have a hypothesis for how yet.

It is also important to avoid aligning on a wicked problem. Wicked problems have such a level of abstraction or complexity that they can never be totally solved. ‘How can we eradicate poverty globally?’ is an example of a wicked problem; unfortunately, we may never have a solution for it.

The key artefact at the end of the Brief stage is a clear, achievable and well-defined problem statement.

Types of problem infographic



The Brief phase is facilitated by the Growth team, in particular the Account Manager. It involves conducting a brief consultation with the Customer, defining the problem statement alongside a Product Designer, and then delivering the finalised brief to the Customer.

Account Manager
  • Main point of contact for the customer.
  • Uncover the customer’s business goals and success metrics, and clearly articulate these to all participants in this phase.
  • Discuss partnership fit between the two parties.
  • Collate and deliver the findings to the customer through the brief document.
Product Designer
  • Explain the scoping process to all participants in this phase.
  • Ensure the correct issues are unpacked to arrive at the core problems.
  • Clearly define and communicate the underlying problem statement.
Product Owner and other stakeholders
  • Clearly articulate their core problem and customers.
  • Outline the company and/or product to the briefing team.
  • Introduce relevant stakeholders related to the process of building software.
  • Be available for a brief workshop and delivery.
Brief Participants Infographic



Unpacking the problem


The biggest mistake in software development (and one that is unfortunately repeated too often) is rushing or skipping the Scope phase. It is like trying to build a house without blueprints. Investing the time at the start of the project will significantly mitigate risks down the track.

The purpose of scope is two-fold: firstly, to dive deeper into the problem statement and design a solution; secondly, to prepare that solution for development. It is important for a scope to focus on a manageable body of work instead of a product’s entire roadmap. Scoping out one build at a time makes the project more agile and allows the delivery team to deliver value earlier and often.

Scoping is performed by a Product Designer and a Software Developer from the Product team. Having these two specialities enables the team to consider solutions from not only the user’s perspective, but also the technical feasibility of the implementation. It is also important to involve the customer’s key decision makers and stakeholders in the process, so that everyone is on the same page and satisfied with the direction the scope is taking. Customer Success Consultants are also involved here to educate and manage customer expectations and contribute insights from a commercial perspective.

In general, a scope should deliver the following artefacts for an upcoming build:

  • Scope document
  • Non-functional prototype
  • Requirements backlog
  • Database model
  • Estimations
  • Projected roadmap for the product

Realising these deliverables for a reasonably sized build typically takes between 2 and 5 weeks. Teams can scope for longer, however this indicates that they are scoping out too much, which introduces risk during development. Our research has found a 90% correlation between scope length and the subsequent development length. The more that is scoped out into the future, the more unknowns and uncertainties are introduced into the project.

Scope infographic


The Scope Process

The scoping process follows a design thinking approach and is underpinned by the divergent and convergent methodology. This user-centred approach is present throughout the following stages of scoping:

  • Understand: Gain a strong understanding of the customer and the problem they are trying to solve.

  • Observe: Perform user testing to gather insights and validate the problem statement.

  • Ideate: Get creative and brainstorm different ideas to solve the problem.

  • Prototype: Wireframe and prototype the strongest solution. Perform testing, collect feedback and iterate.

  • Showcase: Complete the requirements backlog and build a database model to align with the validated prototype.

This approach to problem solving ensures that the team has identified and understood the correct problem early on. From there, ideas are tested with users often, to finally propose a viable solution that meets the goals of the project.

No two software projects are the same. For this reason, the scoping process is flexible to accommodate different problems, customers and budgets. Several factors impact the scoping process, the most important being the distinction between Greenfield and Brownfield projects. Depending on the project type and its specific requirements, different activities should be mixed and matched to create a solution during the scoping period.

4 week scope

Scoping frequency

As discussed above, there is a preference toward 2-5 week scopes to encourage iterative development and continued improvement. However, the initial scope generally tends to be the largest of the project, as it outlines the product roadmap and sets up the first build for the delivery team.

Beyond the first scope, there are two paths of re-entering scope to prepare for the next build:

  1. A typical approach is pausing after the completion of the first build, defining your next problem statement and then scoping the next requirements, allowing development to resume once the scope has been delivered.

  2. Another approach is to scope and develop concurrently, which benefits the project by continuing the momentum. A Product team would start an upcoming scope at some point during the current build and would aim for delivery at some point before its completion. Timed correctly, this allows the delivery team to simply flow into the next build without pausing.




The Product Designer is the key facilitator of the scoping phase, and is assisted by a Software Developer, a Customer Success Consultant, and the Product Owner. Other parties including the Account Manager and other external stakeholders will need to be involved at certain periods for their expertise.

Product Designer
  • Facilitate and guide the team through the scoping process.
  • Apply design thinking to validate that the correct problem statement is explored.
  • Ideate on possible solutions to the problem.
  • Deliver high-quality designs and prototypes to articulate the solution.
Software Developer
  • Ensure the defined solution is feasible to develop within defined timeframes.
  • Explore and de-risk relevant technologies required for the solution.
  • Write and manage the product backlog and related acceptance criteria.
Product Owner (plus Stakeholders)
  • Explain their business and users to the scoping team.
  • Provide access to users for testing and validation purposes.
  • Attend, participate in and provide key insights into the company’s subject matter during meetings.
  • Confirm and explore specific requirements associated with the solution.
Customer Success Consultant
  • Manage customer expectations and contribute insights from a commercial perspective.
  • Increase their understanding of what the customer’s key performance indicators will be for this project.
  • Clearly define important metrics associated with the success of the product.
Account Manager
  • Translate the learnings from the brief to the client and scoping team.
  • Manage the client’s expectations from a commercial perspective.
  • Facilitate any contractual or financial discussions.
Scope participants infographic


Project Types

Greenfield projects

Greenfield projects refer to developing a product for a completely new environment, without constraints imposed by prior work. When scoping products that have no existing infrastructure or user base, it is important to take a problem-focused approach. Beginning the scope with a problem statement means that the team can diverge on a myriad of different ideas before finding the most viable solution.

The observe phase of scoping is critical for Greenfield projects to validate ideas through discovery interviews and user testing. Quantifying feedback and comparing it against market research will help to ensure a usable and lovable product is developed.

Greenfield project infographic

Brownfield projects

Brownfield projects refer to the development and deployment of a new software system alongside an existing or legacy system. For example, a system that was developed years ago is now legacy and the technology is at risk of becoming end of life. Modernising that system would constitute a Brownfield project.

When scoping projects where legacy software is involved, there is generally a pre-defined solution in mind. In this case, the customer will supply a set of functional requirements or have a clear idea of their project goals. It is important that the scoping team spend time to deeply understand the existing platform, so that they can design the next phase of its roadmap.

The benefit of a Brownfield project is that there is an opportunity to learn from the existing user base and make improvements. The risk, however, is that it can become difficult to constrain the scope to a single problem statement. As mentioned above, smaller scopes are key to de-risking the development workload. The simplest way to accomplish this is by breaking the scope into manageable builds.

A worthwhile note is that scoping projects in this manner hampers a Product Designer’s ability to perform validation of the underlying problem, and instead assumes the validity of the legacy (Brownfield) application.

Brownfield project infographic


Product Backlog

A product backlog is one of the most important artefacts when building software. It is the what, why, how and when of feature development.

Since requirements can be expressed in several ways, there are a variety of strategies for capturing and understanding them. Whether it happens verbally during a meeting or through a brainstorming session on a whiteboard, it is vital that all requirements are recorded in a backlog.

A commonly used method for recording these requirements is to understand a user’s intent through epics and user stories. Beyond epics and user stories, other ticket types can arise, such as tasks, defects and spikes, both within scoping and active development. It’s also imperative to capture the acceptance criteria for each user story to ensure that the team and Product Owner have a shared understanding of what makes each story complete. These issue types can typically be defined as the following:

  • Epic: A high-level theme or feature that is used to categorise related stories, tasks and defects.

  • Story: A fine-grained requirement, written from a user’s perspective, that is small enough that it can be logically estimated and built within a short period.

  • Task: A work item that isn’t written from a user’s perspective and is instead a technical item required to achieve the application’s stories.

  • Defect: A work item, typically related to an epic, that has arisen as a result of application testing, which can be a technical bug in the application or a usability issue.

  • Spike: Typically a time-boxed work item with a learning goal associated with de-risking some technology or concept.

Once the product backlog is complete and covers all functionality shown in the prototype and database model, the scoping team is ready to estimate the build.
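The ticket types and acceptance criteria described above can be modelled as a small data structure. The following Python sketch is illustrative only; the class and field names are assumptions, not a prescribed schema:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class IssueType(Enum):
    """The five ticket types used in the product backlog."""
    EPIC = "epic"      # high-level theme grouping related work
    STORY = "story"    # fine-grained, user-perspective requirement
    TASK = "task"      # technical item supporting the stories
    DEFECT = "defect"  # bug or usability issue found during testing
    SPIKE = "spike"    # time-boxed learning / de-risking item


@dataclass
class BacklogItem:
    title: str
    issue_type: IssueType
    acceptance_criteria: list[str] = field(default_factory=list)
    epic: str | None = None  # stories, tasks and defects link back to an epic


# A story is only well-formed once its acceptance criteria are captured,
# giving the team and Product Owner a shared definition of "complete".
story = BacklogItem(
    title="As a user, I can reset my password via email",
    issue_type=IssueType.STORY,
    acceptance_criteria=[
        "A reset link is emailed to the registered address",
        "The link expires after 24 hours",
    ],
    epic="Account management",
)
```

Tracking tools such as Jira model these same issue types natively; the sketch simply makes the relationships explicit.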

Product backlog infographic


Estimating your product

Estimations should always be considered as just that: estimations. Anyone with experience building software knows that the unexpected often occurs during development, and it should always be planned for in order to manage expectations. The best plan is to take a scientific approach that has allowances for the key factors that often impact a project’s timeline. These factors include allowances for risk, managing technical debt, allowing time for regular business operations and providing time for project-related work that isn’t development (for example, planning and review meetings).

Estimations infographic

Estimating Time

Estimating at its core can be managed in several ways, but it’s recommended to use a time-based system. A general rule of thumb when estimating is that the bigger and more complex something is, the less precise you can be. The time-based values follow a Fibonacci-like sequence. This is common practice within agile estimations to ensure developers don’t get too caught up in the specifics, and instead focus on relative scale versus other work. Splitting the estimations between development and testing effort can further simplify the process and allow developers to focus on the specific work item.
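To make this concrete, a raw estimate can be snapped upward to the nearest value on a Fibonacci-like scale, so that bigger items carry coarser precision. A minimal sketch, assuming an hours-based scale; the specific values and the helper name are illustrative, not prescribed:

```python
# Fibonacci-like estimation values, in hours (illustrative scale).
SCALE = [1, 2, 3, 5, 8, 13, 21, 40]


def to_scale(raw_hours: float) -> int:
    """Round a raw estimate up to the next value on the scale, reflecting
    that larger, more complex items cannot be estimated precisely."""
    for value in SCALE:
        if raw_hours <= value:
            return value
    raise ValueError("Item too large to estimate - split it into smaller stories")


# Development and testing effort are estimated separately, then combined.
dev_hours = to_scale(4.5)   # snaps up to 5
test_hours = to_scale(2.5)  # snaps up to 3
total_hours = dev_hours + test_hours
```

Rounding up rather than to the nearest value builds a small buffer into every estimate, which suits the risk-aware approach described above.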

Risk Factor

The more complex and unfamiliar a story is, the higher the risk that the time for development will be underestimated. To ensure that the risk is appropriately considered in estimations, these factors should be considered for each ticket to allow time for the unknowns. Developers estimating should score these risk parameters, to be included in all calculations. The risk factor is calculated based on the unfamiliarity and complexity of the story.
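One way this could be calculated is as a simple multiplier over the base estimate. The 1-5 scoring range and the linear mapping below are assumptions for illustration; the principle is only that unfamiliar, complex stories attract a larger contingency:

```python
def risk_factor(unfamiliarity: int, complexity: int) -> float:
    """Derive a risk multiplier from two 1-5 scores given by the estimating
    developer. Familiar, simple work gets no uplift; unfamiliar, complex
    work gets up to a 40% contingency (an illustrative ceiling)."""
    if not (1 <= unfamiliarity <= 5 and 1 <= complexity <= 5):
        raise ValueError("Scores must be between 1 and 5")
    # Average the two scores, then map the 1..5 range onto a 1.0..1.4 multiplier.
    score = (unfamiliarity + complexity) / 2
    return 1.0 + (score - 1) * 0.1


base_estimate_hours = 8
adjusted = base_estimate_hours * risk_factor(unfamiliarity=4, complexity=3)
# average score 3.5 -> multiplier 1.25 -> 10 hours
```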

Discovery Allowance

It is not uncommon for product features to change and even become more complex during natural discovery in the development phase. With that in mind, it is advised to provide an allowance for when the delivery team needs time to scope out new discovery or de-risk a concept during development. The discovery allowance is spent at the discretion of the delivery team and Product Owner. It will be used when the intent of a feature has changed, or when details defined were insufficient or are deemed too high-risk.

Discovery allowance infographic

Delivery Allowance

During the development process, teams are focused on delivering high-quality features at a defined pace. This allows the team to ensure that features are delivered to the defined specification. However, defects and potential improvements outside of these requirements are a reality of software development. Experience has shown that all projects require periods of time, known as Iteration N, to address these defects and improvements to prepare for production. The length of time tends to directly correlate to the size of the project. Tasks performed during Iteration N tend to include tackling defects, addressing technical debt and implementing core improvements. These all share a common goal: improving product and code quality.

Also factored into a delivery allowance should be time for releasing the software. The most time-consuming release is to a production environment. The production environment is the ‘final’ live version of the application.

The Launch Iteration (a period of time set aside for releasing) begins the moment an application is ready to move to a production environment. Generally, the first release will be the longest, and releases will gradually decrease in time the more often the application is released. Typically, during the iteration, the team should be focused on resolving any outstanding issues blocking release, documenting the application, preparing for the Support team and quality assurance processes.

Delivery allowance infographic

Allocation Factor

Within any commercial software operation, delivery teams can’t be allocated to their development work for all working hours. Activities like internal meetings, helping their peers and learning & development take up time and should be recognised when estimations are being considered. This time should be factored into an allocation factor that can be applied across all estimations.

The allocation factor also accounts for time that isn’t developing features but is still providing value to the customer. All scrum-related meetings and time spent releasing to the beta environment form part of the allocation factor. It is expected that any allocation calculations will be approximate, based on regular business activities such as iteration cadence and standard monthly learning and development allowances.
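Applied to an estimate, the allocation factor converts effort hours into calendar hours. The figures below (a 40-hour week with roughly 10 hours of non-delivery activity) are assumptions for the sketch:

```python
def allocation_factor(hours_per_week: float, non_delivery_hours: float) -> float:
    """Fraction of the working week actually available for delivery work.
    Non-delivery hours cover scrum ceremonies, internal meetings, beta
    releases and learning & development."""
    return (hours_per_week - non_delivery_hours) / hours_per_week


factor = allocation_factor(hours_per_week=40, non_delivery_hours=10)  # 0.75

# 120 effort-hours of backlog therefore occupies 160 calendar hours.
effort_hours = 120
calendar_hours = effort_hours / factor
```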


Development Options

Fixed Time versus Fixed Scope

There are two feasible development methods. While it’s natural to want everything done in a fixed amount of time, that doesn’t consider the realities of software development. Using a trade-off slider is a great method to communicate that it is possible to fix the time, but as a result the scope must remain flexible. This can help inform the decision around what development approach will be required for any given project. By selecting either fixed time or fixed scope, a customer can prioritise either their scoped features or the budget they have allocated to the project. It is paramount to understand, though, that you can’t have both.

Fixed Scope & Variable Time

  • Scoping length is longer, typically requiring a lot of detail.
  • The scope may become obsolete by the end of the project depending on business goals and length of development.
  • Virtually impossible to guarantee that a scope will not change.

Fixed Time & Variable Scope

  • Scoping timeframes will be shorter.
  • Faster return on investment as development can begin sooner.
  • Certainty in budget and spend.
  • Scope is finalised during development and can respond to changing business needs.
Trade offs infographic

Fixed Time and Fixed Scope

Gone are the days of the waterfall methodology, where developers were handed specification documents hundreds of pages long, written by business analysts who spent months, maybe years, studying the needs of an application. Often, it took less than a week into development before that document was amended because it was already outdated. The first beta release of a product would be met with another cascade of changes, and the project deadline would slip further and further behind. This method provided nothing but frustration for customers and developers alike.

For this reason, it’s recommended that the team focuses on smaller iterative scopes of work and decides on a trade-off between time and scope. This gives the team a chance to provide estimations with more certainty in the short term, while becoming more familiar with the product’s context and its requirements.

Increased Velocity

There is one other factor that can affect the time it takes to deliver a project, and that is team size. Increasing the size of the delivery team can potentially mean finishing the project in a shorter amount of time. However, there are two reasons why it is not a viable option for all projects:

  1. The first reason is budget. Generally, if a customer opts to fix the time, it is because they have a budget in mind for their project. Increasing the team size will not assist in budgeting, but instead helps to bring the delivery date forward. The adverse effect of bringing the delivery date forward is that decisions need to be made faster, which can be difficult early in a project.

  2. The second reason is a loss of velocity per developer. Does the phrase “too many cooks in the kitchen” ring a bell? Adding more developers does not mean doubled output, or necessarily even an immediate increase in velocity. A project needs to be breakable into logical sections for a collaborative development effort to work, which is possible for some software projects but not for others.
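One way to build intuition for this second point is a toy model in which each pair of developers adds a small communication overhead (the classic observation that n people form n(n−1)/2 communication channels). The constants below are invented purely for illustration; real velocity depends on the project and the team.

```python
# Illustrative only: a toy model of why doubling a team rarely doubles output.
# Assumes output scales with headcount minus a cost per communication channel;
# the per_dev and channel_cost constants are made up, not measured.

def team_velocity(n_devs: int, per_dev: float = 10.0, channel_cost: float = 0.4) -> float:
    """Estimated story points per iteration for a team of n_devs."""
    channels = n_devs * (n_devs - 1) / 2  # pair-wise communication channels
    return max(0.0, n_devs * per_dev - channels * channel_cost)

for n in (3, 6, 9):
    print(f"{n} developers -> {team_velocity(n):.1f} points/iteration")
```

Under these assumptions, going from 3 to 6 developers raises velocity from roughly 28.8 to 54.0 points per iteration, noticeably less than double.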

Squad structure

Discover Software


Vision = re­al­ity


With the pro­ject suc­cess­fully pre­pared, de­vel­op­ment can be­gin. The pur­pose of the de­vel­op­ment phase is to build soft­ware that meets the scoped re­quire­ments, in a man­ner that de­liv­ers value early and of­ten to the cus­tomer. The de­vel­op­ment of a prod­uct can com­mence at any point af­ter the scope has been fully de­liv­ered.

The cycle comprises many iterations, which continue until development has been completed to the product owner's satisfaction. Every iteration includes meetings, checkpoints and software releases to ensure the delivery of functional software and keep the customer satisfied.

The development workflow is underpinned by many product iterations that result in a releasable product version, typically called a build. An iteration is a short, time-boxed period in which a delivery team works to complete a defined set of requirements. Iterations usually have project-related goals associated with them, but each should deliver an increment that can be used and tested. Defining the work to be completed during an iteration, called the iteration backlog, is a shared responsibility between the product owner and the delivery team.

The length of de­vel­op­ment varies be­tween pro­jects and de­pends heav­ily upon the scale and com­plex­ity of the so­lu­tion. As men­tioned, there is a pref­er­ence to­ward shorter scopes, which tend to cre­ate smaller and more man­age­able de­vel­op­ment cy­cles. Reducing the size of a build not only al­lows for in­creased cer­tainty around es­ti­ma­tions, but also lets the Product team in­cor­po­rate user feed­back into sub­se­quent scopes.

Development infographic



The de­vel­op­ment phase is pri­mar­ily dri­ven by the Delivery team, with sup­port from the Growth team. The squad lead is the main point of con­tact be­tween the in­ter­nal and ex­ter­nal teams and man­ages the de­vel­op­ment team and their as­so­ci­ated it­er­a­tions. Customer Success Consultants, Product Designers and Account Managers all pro­vide sup­port dur­ing the process as re­quired.

Squad Lead

  • Facilitates the de­vel­op­ment process and all cer­e­monies per­formed.
  • Responsible for en­sur­ing the process is fol­lowed and trans­par­ent for both the de­vel­op­ers and the cus­tomer.
  • Unblocks de­vel­op­ers to en­sure a con­sis­tent mo­men­tum is achieved.
  • Creates and de­liv­ers key re­ports and in­sights to the cus­tomer dur­ing and at the com­ple­tion of it­er­a­tions.

Software Developers

  • Develops and delivers the product to meet the planned iteration backlog.
  • Ensures software is tested to a high standard.
  • Follows a rigorous QA process to ensure all new features are appropriately delivered.
  • Attends all iteration-related ceremonies.

Product Owner

  • Is actively involved in all iteration ceremonies to provide context and decisions.
  • Acts as the key decision maker for the project and should expect to be contacted often about factors affecting or blocking an iteration.
  • Validates the iteration backlog and related acceptance criteria.
  • Performs user acceptance testing against tickets delivered to the beta environment.

Customer Success Consultants

  • Manages cus­tomer en­quiries and con­cerns dur­ing the de­vel­op­ment process.
  • Ensures the ac­count man­ager is kept up to date with nec­es­sary in­for­ma­tion.
  • Advocates and man­ages the in­te­gra­tion of an­a­lyt­ics plat­forms into the ap­pli­ca­tion.

Account Manager

  • Provides sup­port for con­trac­tual dis­cus­sions.
  • Manages the ac­count and re­la­tion­ship be­tween the in­ter­nal and ex­ter­nal par­ties.
  • Acts as an in­ter­me­di­ary for key dis­cus­sions re­lated to the ac­count and pro­ject time­lines.

Product Designer

  • Ensures the de­sign sys­tem de­fined dur­ing scop­ing is be­ing fol­lowed.
  • Helps un­pack and de­sign new prob­lems or re­quire­ments dis­cov­ered dur­ing de­vel­op­ment.
Development participants infographic



To nav­i­gate it­er­a­tions, cer­e­monies are used to pro­vide rel­e­vant check-ins. Each cer­e­mony serves its own pur­pose. The fol­low­ing set of cer­e­monies are rec­om­mended to oc­cur every it­er­a­tion.

Daily Huddle

A daily meeting that involves the squad lead, the developers and, if required, the product owner. Teams can choose to run these meetings in many ways, but the core discussion points should always be “What did I do yesterday?”, “What am I doing today?” and “What is blocking me?”. Tackling these questions during the meeting should surface progress updates, stimulate discussion around impediments and drive resolutions.

Planning Session

The event that begins an iteration; its purpose is to clearly define the workload and an iteration goal, and to confirm how they will be achieved. The squad lead typically delivers a first take on the iteration backlog, which is finalised with the help of the developers and product owner. Before the planning session ends, everyone in the room should be able to agree upon the work, the estimated timeframe, how it will be developed and the acceptance criteria for each item.

Review Session

The goal of an it­er­a­tion re­view is to demon­strate the de­liv­ered it­er­a­tion of work. It is time for the de­liv­ery team to show­case their work, take rel­e­vant ques­tions and dis­cuss feed­back. Unlike other cer­e­monies, this is the ideal time for ex­ter­nal stake­hold­ers that aren’t the Product Owner to be­come in­volved and pro­vide in­put.

Elaboration Session

A meeting with the core purpose of investigating the future of the project and ensuring the backlog is properly elaborated for development. It is recommended to hold one of these meetings per iteration to allow an appropriate amount of time to look at issues potentially blocking the next iteration, or beyond. Tasks for each session include adding or splitting stories, removing irrelevant stories, re-assessing the defined priorities and ensuring high-priority stories and tasks are ready for development.

Retrospective Session

A ses­sion ded­i­cated to the re­flec­tion on an it­er­a­tion. Not to be con­fused with the re­view ses­sion, the idea be­hind a ret­ro­spec­tive is to have a chance for the de­liv­ery team to eval­u­ate it­self with­out the prod­uct owner and build an ac­tion plan for fu­ture changes. These ses­sions pro­mote con­tin­u­ous im­prove­ment and em­pha­sise small, in­cre­men­tal change for the bet­ter of the team and prod­uct.


Iteration Backlog

The Iteration Backlog is a subset of the product backlog. It includes only the tickets intended to be completed within the iteration; completing them results in a new version of the application being ready for testing. An iteration can be made up of any ticket defined within the product backlog or any newly discovered tickets. All tickets entering the iteration backlog must pass a quality gate before an iteration commences.

It is recommended to keep the ticket workflows simple; these will be the day-to-day processes that developers follow when completing their iteration work. Over-engineering them can easily result in trouble with organisational adoption, or missed steps. The following is a standard and widely adopted four-step process:

  • To do: Work that is yet to be started by any de­vel­oper.

  • In progress: Work that is cur­rently in progress by a de­vel­oper and should be as­signed as such.

  • Review: Work that has been ten­ta­tively com­pleted but is await­ing peer re­view from an­other de­vel­oper.

  • Done: Work that has been completed and passed the iteration review process.
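As a sketch, this four-step workflow can be expressed as an ordered list of statuses with a single guarded transition, which is all most teams need. The status names mirror the list above; nothing here is tied to any particular issue tracker.

```python
# Minimal sketch of the four-step ticket workflow as an ordered status list.
# Tickets may only move one column at a time, in order.

WORKFLOW = ["To do", "In progress", "Review", "Done"]

def advance(status: str) -> str:
    """Return the next workflow status for a ticket, enforcing the order."""
    position = WORKFLOW.index(status)  # raises ValueError on an unknown status
    if position == len(WORKFLOW) - 1:
        raise ValueError("Ticket is already Done")
    return WORKFLOW[position + 1]
```

For example, `advance("In progress")` yields `"Review"`; a successful peer review then moves the ticket to `"Done"`.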

To vet is­sues that en­ter this work­flow, it is rec­om­mended to en­force a Definition of Ready. This will en­sure that tick­ets are prop­erly pre­pared for de­vel­op­ment. The best time to check the de­f­i­n­i­tion of ready is dur­ing the plan­ning ses­sion. If gaps are found, they should be rec­ti­fied prior to the start of an it­er­a­tion to mit­i­gate po­ten­tial is­sues. Key points to ex­plore with a de­f­i­n­i­tion of ready should be ticket qual­ity, siz­ing, risk level and the ex­is­tence of sup­port­ing arte­facts (for ex­am­ple, has it been de­signed in the pro­to­type).

Conversely, progressing an issue through to “Done” should require a Definition of Done, which ensures all completed tickets meet a defined standard. When a ticket is finished, it is recommended to check that peer review has been performed, the feature has been appropriately tested, the acceptance criteria have been met, and it is ready to be released.
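These two quality gates can be made explicit as checklist functions. The ticket fields below (description, estimate, risk and so on) are hypothetical stand-ins; substitute whatever your tickets actually record.

```python
# Hedged sketch: a Definition of Ready and a Definition of Done as checklists.
# All field names are illustrative, not from any specific tracker.

def is_ready(ticket: dict) -> bool:
    """Definition of Ready: quality, sizing, risk level and supporting artefacts."""
    return all([
        bool(ticket.get("description")),          # ticket quality
        ticket.get("estimate") is not None,       # sized
        ticket.get("risk") in ("low", "medium"),  # high-risk items need a spike first
        ticket.get("designed", False),            # e.g. designed in the prototype
    ])

def is_done(ticket: dict) -> bool:
    """Definition of Done: peer review, testing, acceptance criteria, releasable."""
    return all([
        ticket.get("peer_reviewed", False),
        ticket.get("tested", False),
        ticket.get("acceptance_criteria_met", False),
        ticket.get("releasable", False),
    ])
```

Checking `is_ready` during the planning session and `is_done` before closing a ticket turns the two definitions from conventions into enforceable gates.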

Iteration backlog infographic

Peer Review

It is imperative for developers working on an iteration together to review each other's work. Peer review ensures code quality and consistency across the workflow. Highlighted below is a guide of topics recommended for any peer review before a ticket moves through a definition of done. Keep in mind that it is not feasible to cover every topic for each ticket:

  • Code qual­ity
  • Performance
  • Security
  • Documentation
  • Testing
  • Styling im­ple­men­ta­tion
  • Smoke-testing the ap­pli­ca­tion


De-Risking Software

As discussed within Estimations, an allowance should be available for developers to de-risk certain features during development. The first step when using this allowance is to identify features that are too high-risk for development and therefore should fail your Definition of Ready. Once these have been identified, a tech spike can be used to time-box and explore the complexity and/or unfamiliarity around a specific feature. The focus of a tech spike should not be delivering a working feature, but achieving a set of learning objectives.

There are gen­er­ally two ways to al­low time for tech spikes:

  • Within Iteration: Simply put, the de­liv­ery team should treat the time-boxed tech spike like any other ticket and in­clude it within their it­er­a­tion’s es­ti­ma­tions. This means that within the time-boxed it­er­a­tion, there should be time al­lo­cated for one or more de­vel­op­ers to de-risk this fea­ture.

  • External to Iteration: For higher pri­or­ity de-risk­ing ex­er­cises, it is pos­si­ble to pause the fol­low­ing it­er­a­tion in favour of in­ves­ti­gat­ing the fea­ture. This re­sults in the en­tire team fo­cus­ing on the fea­ture, which should re­sult in a faster res­o­lu­tion, but at the cost of start­ing the next it­er­a­tion.


User Acceptance Testing

User Acceptance Testing is the key quality assurance process assigned to Product Owners. It's the opportunity to confirm or reject that the iteration was completed to the Product Owner's satisfaction against the defined scope. User Acceptance Tests should be performed against all tickets completed during an iteration and compared against the agreed-upon acceptance criteria.

These tests can be performed in two ways: the first involves the Product Owner completing them in their own time and submitting decisions remotely. The alternative is for the Product Owner to join a member of the delivery team in a focused walkthrough session. This allows an internal developer to witness any issues as they occur, which can be very valuable for rectifying problems. The walkthrough is the preferred method, as it ensures the tests are completed thoroughly, and the team can be sure that any issues which occur are truly issues, not user error.

UAT Kickback

In the soft­ware world, it is not un­com­mon for a Product Owner to pro­vide feed­back which needs to be ad­dressed af­ter UATs. Depending on the ur­gency of the change, there are three rec­om­mended meth­ods to han­dle it:

  • Urgent: The issue must be resolved in the next release. In this situation the delivery team should pull the ticket into the current iteration and either delay the iteration's completion or remove a ticket of the same size.

  • Next Iteration: The resolution is a high priority but doesn't need to be resolved immediately. The ticket should be marked as a contender for the next iteration's backlog.

  • Backlog: The dis­cov­ered prob­lem is deemed a low pri­or­ity and as a re­sult should be placed in the back­log for later con­sid­er­a­tion.
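The three handling options above boil down to a simple routing rule, sketched here. The urgency labels and destination strings are illustrative only.

```python
# Sketch of routing UAT kickback tickets by urgency, per the three options above.
# Labels are illustrative, not part of any tool.

KICKBACK_ROUTES = {
    "urgent": "current iteration (delay completion or swap out an equal-sized ticket)",
    "next": "contender for the next iteration's backlog",
    "backlog": "product backlog for later consideration",
}

def route_kickback(urgency: str) -> str:
    """Map a kickback ticket's urgency to where it should be scheduled."""
    if urgency not in KICKBACK_ROUTES:
        raise ValueError(f"Unknown urgency: {urgency!r}")
    return KICKBACK_ROUTES[urgency]
```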

Iteration backlog infographic


Releasing Software

The goal of any project is to deploy each build into a live environment and ensure that all stakeholders are happy with the outcome. There are three environments to consider: Development, Beta and Production. Each environment has its role to play, and together they ensure the necessary quality control gates are in place. The specific purpose of each environment is as follows:

  • The de­vel­op­ment en­vi­ron­ment is used to lo­cally de­velop the ap­pli­ca­tion. Only the de­vel­op­ers them­selves will have ac­cess to the en­vi­ron­ment. It is where the ap­pli­ca­tion is built.

  • The beta environment is the first live, cloud-based environment in most workflows. It traditionally holds the sum of all completed issues that have passed the Definition of Done. It is openly available for the Product Owner and other relevant stakeholders to use. Avoid putting any personal or sensitive data on the beta environment.

  • The pro­duc­tion en­vi­ron­ment is the fi­nal stage of de­ploy­ment where only tick­ets that have passed UAT are de­ployed. Nothing should make it to this en­vi­ron­ment with­out first be­ing re­viewed in the pre­vi­ous en­vi­ron­ments and step­ping through qual­ity con­trol processes. This en­vi­ron­ment is where users in­ter­act with the ap­pli­ca­tion.
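The gates between these environments can be sketched as a promotion check. The flag names below are hypothetical stand-ins for the real Definition of Done and UAT records.

```python
# Sketch of the three-environment pipeline as promotion gates.
# Work starts in development; beta requires the Definition of Done;
# production additionally requires passed UAT.

def can_promote(ticket: dict, target: str) -> bool:
    """Check the quality gate for moving a ticket into the target environment."""
    if target == "development":
        return True  # nothing is promoted into development; work begins there
    if target == "beta":
        return ticket.get("definition_of_done", False)
    if target == "production":
        return ticket.get("definition_of_done", False) and ticket.get("uat_passed", False)
    raise ValueError(f"Unknown environment: {target!r}")
```

The ordering encodes the rule that nothing reaches production without first being reviewed in the previous environments.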

Quality Assurance

As part of the de­vel­op­ment life­cy­cle, an ap­pli­ca­tion moves from the de­vel­op­ment en­vi­ron­ment, into beta, then even­tu­ally into pro­duc­tion. In or­der to progress into the next en­vi­ron­ment, the ap­pli­ca­tion needs to meet cer­tain cri­te­ria be­fore it can be ap­proved for re­lease. Checklists for these gates are rec­om­mended to en­sure that each re­lease lives up to the stan­dard of that en­vi­ron­ment.

The beta release checklist is completed before moving tickets to the beta environment. While most of the criteria involve routine security and quality checks, one important artefact that should be required is a Testing Report. A Testing Report traditionally shows the current state of the tests run against the application, all of which are expected to be passing before the release can continue.

The pro­duc­tion re­lease check­list is pre­pared when the prod­uct on beta is ready to be re­leased to pro­duc­tion. Before the ap­pli­ca­tion can be re­leased into the pro­duc­tion en­vi­ron­ment, teams must have re­ceived ex­press ap­proval from the Product Owner.


Experimental Framework

The mantra be­hind this Way of Working is to dis­cover new bound­aries. In or­der to con­stantly im­prove a process, there needs to be a frame­work to ideate, ex­per­i­ment and doc­u­ment bet­ter ways of work­ing. The ex­per­i­men­tal frame­work pro­vides that.

Within a project and across multiple projects, there are too many moving parts for a single person to be across. Instead, the experimental framework encourages everyone to participate in process improvements. If the results of an experiment are positive and approved by the appropriate stakeholders, it becomes a part of this Way of Working. To ensure the best possible fit, it is recommended that you make your own adaptations to this Way of Working using the experimental framework.

Broadly speaking, the experimental framework comprises five steps.

  1. Identify area of im­prove­ment

    The first stage of an ex­per­i­ment is to iden­tify the need to con­duct one, and then clearly ex­plain what the prob­lem is. Useful ques­tions which can as­sist in this process are:

    • Where did the prob­lem oc­cur?
    • When did the prob­lem oc­cur?
    • What process did the prob­lem in­volve?
    • How is the prob­lem mea­sured?
    • How much is the prob­lem cost­ing?
  2. Analyse cur­rent meth­ods

    To find the best so­lu­tion, the cause of the prob­lem must first be iden­ti­fied. The root cause can be found through analysing the his­tor­i­cal data and speak­ing with peo­ple this prob­lem has af­fected.

    Start by doc­u­ment­ing the cur­rent meth­ods and the is­sues that arise. It’s im­por­tant to doc­u­ment con­stantly and thor­oughly as you progress through an ex­per­i­ment.

  3. Generate ideas or im­prove­ments

    The next step is to iden­tify po­ten­tial so­lu­tions. Brainstorming is an im­por­tant part of this step, as it helps con­ceive of a range of so­lu­tions. For each idea, es­tab­lish how it will help solve the prob­lem and any po­ten­tial pros and cons.

  4. Develop an Implementation Plan

    After a range of so­lu­tions have been iden­ti­fied, choose the most promis­ing one to ex­per­i­ment with. The next step is to cre­ate a strat­egy to im­ple­ment the so­lu­tion. The fol­low­ing points should be an­swered in the Implementation Plan:

    • Who is in charge of the ex­per­i­ment?
    • Who is in­volved in the ex­per­i­ment?
    • What is the du­ra­tion of the ex­per­i­ment?
    • How are we im­ple­ment­ing the ex­per­i­ment?
    • How will the experiment be tracked or measured (in money, time, customer satisfaction, or another critical metric)?
  5. Evaluate the ex­per­i­ment

    Once the Implementation Plan has ended, the findings of the experiment are discussed amongst the stakeholders. They then determine whether the experiment achieved its measurable goals and can be considered a success. If the method failed, the team should identify why, and decide whether a new experiment should be conducted.

    If the ex­per­i­ment suc­ceeded, then the strat­egy should be pro­posed to man­age­ment, who along­side the ex­per­i­ment stake­hold­ers, will de­ter­mine if the strat­egy is both ben­e­fi­cial and ap­plic­a­ble across the pro­ject or the com­pany as a whole. If the method is ap­proved, it be­comes a part of the Way of Working.
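One lightweight way to keep experiments honest is to record each of the five steps in a single structure, so an experiment can't start until its plan answers the questions above. The field names here are invented for illustration.

```python
# Sketch: the five-step experimental framework as a single record.
# Field names are illustrative; the point is that steps 1-4 must be
# filled in before the experiment runs, and step 5 is recorded after.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Experiment:
    problem: str                      # 1. identified area of improvement
    current_method: str               # 2. analysis of current methods
    ideas: List[str] = field(default_factory=list)  # 3. candidate improvements
    owner: str = ""                   # 4. implementation plan: who runs it
    duration_weeks: int = 0           #    ...for how long
    metric: str = ""                  #    ...and how it is measured
    succeeded: Optional[bool] = None  # 5. evaluation outcome

    def ready_to_run(self) -> bool:
        """An experiment may start only once the plan is complete."""
        return bool(self.ideas and self.owner and self.duration_weeks > 0 and self.metric)
```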

Discover Software


Continuing the jour­ney


Completing a development build traditionally marks the transition into a support phase for any project. Beyond moving a project into support, there are also opportunities to continually learn from your live application to improve it or define your next scope of work. For newly finished MVPs, it's ideal to leave time for initial users to provide feedback, which will shape the next phase of the project. For long-standing applications with large roadmaps, however, starting the next scoping effort should continue to improve your product while it is being maintained and enhanced in production.

The Support phase is focused on helping a product transition from active development to daily maintenance and improvements. As an indicator of importance, a common rule of thumb is to budget 15-20% of the total build cost for every year the application is in support. The three core service offerings in Support are Maintain (handling customer requests and keeping an application operational), Product Success (reporting on application analytics and driving key insights for improvement) and Enhance (incrementally improving an application using a rapid development flow).

An in­te­gral part of sup­port is en­sur­ing an ef­fec­tive han­dover oc­curs be­tween the ac­tive de­liv­ery team and those re­spon­si­ble for sup­port­ing an ap­pli­ca­tion long term. This en­sures that Support Developers will be able to op­er­ate in iso­la­tion from the de­liv­ery team, to ef­fi­ciently mon­i­tor and im­prove an ap­pli­ca­tion. Once the build has suc­cess­fully tran­si­tioned, the prod­uct will en­ter a con­tin­u­ous state of main­te­nance and im­prove­ment. The day-to-day op­er­a­tions of sup­port in­volve man­ag­ing re­quests com­ing from the prod­uct owner. Other tasks in­volved with sup­port­ing a prod­uct can in­clude:

  • Providing proac­tive and pre­ven­ta­tive mon­i­tor­ing through reg­u­lar us­age and test­ing of the prod­uct 
  • Advising the prod­uct owner on ap­pli­ca­tion and per­for­mance im­prove­ments
  • Completing timely up­dates and up­grades 
  • Maintaining back-ups and pro­vid­ing re­cov­ery for the soft­ware 
  • Assisting with helpdesk or on-call sup­port
Support infographic

Who needs Support?

Activating a sup­port provider once a build has been com­pleted is rec­om­mended for any and all ap­pli­ca­tions. If you in­tend to have users on your ap­pli­ca­tion reg­u­larly, a sup­port provider will be able to help en­sure your users have con­sis­tent ap­pli­ca­tion ac­cess.

The re­al­ity of soft­ware de­vel­op­ment is that prob­lems oc­cur. There is no such thing as ‘perfect soft­ware.’ Whether these prob­lems are in the ap­pli­ca­tion it­self or the in­fra­struc­ture, at some point you are go­ing to need a group of de­vel­op­ers who un­der­stand your prod­uct and who are able to quickly get every­thing run­ning nor­mally again.

A lesser known, but use­ful ben­e­fit of en­gag­ing sup­port is the abil­ity to proac­tively im­prove your ap­pli­ca­tion. Once in pro­duc­tion, your ap­pli­ca­tion should be re­ceiv­ing feed­back from your users both di­rectly and in­di­rectly. Direct feed­back can come in the form of sched­uled user feed­back ses­sions or through var­i­ous pub­lic com­mu­ni­ca­tion chan­nels. Indirect feed­back gen­er­ally comes from in­te­grated an­a­lyt­ics tools or will be un­cov­ered as part of reg­u­lar ap­pli­ca­tion main­te­nance. Having a group of de­vel­op­ers avail­able to ac­tion the feed­back in shorter bursts means an ap­pli­ca­tion can be con­tin­u­ally im­proved, with­out the need for a longer-term strate­gic build each time.

When Does Support Finish?

While an application is actively in use, it should continue to be monitored, supported and improved upon. A familiar concept is major software packages reaching “end of life”, which generally means the company that created the software has stopped building new features and will no longer provide key fixes. Depending on the scale of the software and how many people use it, this can occur months after the initial release, or even decades later. In our experience, we are still supporting and building new improvements for applications that are 5-10 years old.



The Support phase is broken into three sections: Maintain, Enhance and Product Success. Maintain is handled completely by the Support team, with support from the Customer Success Consultant as required. Enhance is managed by the Enhance Lead and worked on by support developers within the Support team. The management of Product Success is handled solely by the Customer Success Consultant, who will identify whether to bring in other experts as relevant problems or opportunities are uncovered.

Support Developers
  • Triage is­sues iden­ti­fied through the sup­port ser­vice desk.
  • Develop and test ap­pli­ca­tion fixes re­quired through Maintain.
  • Elaborate, de­velop and test im­prove­ments iden­ti­fied through the Enhance sec­tion.
  • Ensure qual­ity as­sur­ance prac­tices are up­held through­out the whole life­cy­cle of the ap­pli­ca­tion.
Enhance Lead
  • Facilitates the en­hance process and all cer­e­monies as­so­ci­ated.
  • Responsible for en­sur­ing the process is fol­lowed and trans­par­ent for both the de­vel­op­ers and the cus­tomer.
  • Unblocks de­vel­op­ers to en­sure a con­sis­tent mo­men­tum is achieved.
  • Creates and de­liv­ers key re­ports and in­sights to the cus­tomer dur­ing and at the com­ple­tion of it­er­a­tions.
Product Owner
  • Raises is­sues through the ser­vice desk.
  • Defines any desired improvements and works with the Enhance team to identify the specific requirements.
  • Performs user ac­cep­tance test­ing against tick­ets de­liv­ered to the beta en­vi­ron­ment.
  • Attends reg­u­lar prod­uct suc­cess con­sul­ta­tions to ex­plore find­ings from pro­ject an­a­lyt­ics.
Customer Success Consultant
  • Generates monthly analytics reports.
  • Facilitates monthly product success catch-ups to unpack current insights.
  • Identifies and discusses new opportunities or problems with the customer.
Product Designer
  • Partakes in Enhance elaboration that requires design work.
  • Ideates on pos­si­ble so­lu­tions to the prob­lem.
  • Delivers high-qual­ity de­signs and pro­to­types to ar­tic­u­late the so­lu­tion.
Support infographic



The Maintain section of support is intended to provide customers with ongoing support and maintenance for their applications. Engagement with Support needs to be consistent so that someone with a good understanding of the project is available to quickly resolve application and infrastructure issues. Infrastructure support (via a Managed Service Provider) includes ensuring your application is up and running, as well as maintaining the server, database and other services used by the application. Application support is generally focused on problems discovered by users that hinder them from engaging with the application as they expect. In software products, these defects and infrastructure issues can arise often and should therefore be prioritised based on their impact on the user.

Maintain infographic

Underpinning the sup­port phase are three lev­els that de­fine how a sub­mit­ted ticket will be han­dled, and by whom.

Level 1: Basic help desk sup­port

Level 1 support is the first support tier within any operation. It is usually provided by lower-level IT support personnel but can change depending on the context and requirements. Typical tasks here are collecting customer requirements, handling calls and basic troubleshooting to mitigate the impact of user-centric problems.

Level 2: In-depth tech­ni­cal sup­port

Level 2 support is handled by a trained support developer able to triage, estimate and resolve issues with an application. Prerequisites for handling these requests are knowledge of the system, an understanding of its expected behaviour and access to relevant information or people. It's important that any resolutions are handled using standard development, testing and quality practices.

Level 3: Expert prod­uct and ser­vice sup­port

An issue is typically deferred to level 3 once all layers of support development have been exhausted and the original, or a more senior, software developer needs to address the request. These developers will have the most appropriate level of knowledge and information available to them to resolve the issue. To save time and effort, it's important that all information gathered through the first two levels is transferred to level 3.

External ven­dor and sup­plier sup­port

Outside of the levels defined, application problems sometimes originate outside the organisation that developed the application. In these cases, level 2 and 3 support need a mechanism to identify vendor or supplier issues and defer them to the support mechanism of the relevant parties. If you source anything from external vendors or suppliers, it's important to have a channel available to gain support in these instances. A good example is integrating with a payment gateway, where the gateway's API requires its own fix before dependent applications can work again.
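The escalation path described above, including the external-vendor branch, can be sketched as a routing function. The ticket flags here are hypothetical.

```python
# Sketch of the three support tiers plus the external-vendor branch as a
# routing function. The flag and tier names are illustrative.

def next_tier(ticket: dict) -> str:
    """Decide which support tier should own a ticket next."""
    if ticket.get("vendor_issue", False):
        return "external vendor/supplier support channel"
    if not ticket.get("basic_troubleshooting_done", False):
        return "level 1: help desk"
    if not ticket.get("developer_triage_done", False):
        return "level 2: support developer"
    return "level 3: original or senior developer"
```

In line with the advice above, whatever information a tier gathers should travel with the ticket so the next tier doesn't repeat the work.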

Continuous sup­port

Even though a build has entered support, this does not mean that a customer needs to stop working with a Delivery Team. Once a version of the application has been released to production, the support team can handle the active maintenance and improvement of that build. Meanwhile, the other teams can begin focusing on the next problem statement, scope and subsequent iteration of the product. Once that subsequent development cycle is ready for production, a similar flow can begin to hand over the next version of the application for support.

Request Types

Product Owners, or in some cases their users, can submit tickets to support. Tickets tend to come in the form of defects, general questions or improvements. Submitted tickets require time to triage, replicate and diagnose before a resolution can be attempted. Tickets that are improvements will be deferred to a separate Enhance process for proper elaboration prior to development. To manage the priority of requests, product owners are asked to assign one of the severity levels defined below.

  • Blocker: Produces an emer­gency in which the soft­ware is in­op­er­a­ble, pro­duces in­cor­rect re­sults, or fails com­pletely. 

  • Critical: Produces a detrimental situation in which performance of the software degrades substantially under reasonable loads, such that there is a severe impact on use.

  • Major: Produces an in­con­ve­nient sit­u­a­tion in which the soft­ware is us­able but does not pro­vide a func­tion in the most con­ve­nient or ex­pe­di­tious man­ner.

  • Minor: Produces a no­tice­able sit­u­a­tion in which the soft­ware use is af­fected in some way but can be cor­rected by a doc­u­mented change or by a fu­ture re­lease.
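The four severity levels imply an ordering for the support queue, sketched here. The labels match the list above; the ranking scheme itself is illustrative.

```python
# Sketch: ordering the support queue by the severity levels defined above.
# Unknown or missing severities sink to the bottom rather than raising an error.

SEVERITY_RANK = {"blocker": 0, "critical": 1, "major": 2, "minor": 3}

def prioritise(tickets: list) -> list:
    """Sort tickets most-severe first."""
    return sorted(tickets, key=lambda t: SEVERITY_RANK.get(t.get("severity"), len(SEVERITY_RANK)))
```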

Request types infographic


Product Success

The Product Success section of support is about setting up a system that allows an organisation to systematically improve its product using a data-driven approach. During the scoping phase, the team should identify the key metrics that indicate success and ensure their measurement is built into the product backlog. With these analytics passively tracking in the background, relevant parties can regularly investigate and create findings reports. The findings should unearth data-driven trends that feed future development opportunities both within Development and Enhance.

Support product success infographic

Define Key Metrics

During the scop­ing phase of a prod­uct, key fea­tures should be de­fined that re­quire reg­u­lar mea­sure­ment. These fea­tures should be writ­ten into the prod­uct back­log along with the met­rics re­quired to iden­tify the suc­cess of each. This ac­tiv­ity en­sures the work re­quired is com­pleted dur­ing de­vel­op­ment and mea­sured dur­ing sup­port.

Building Measurements

While de­vel­op­ing fea­tures within reg­u­lar it­er­a­tions, the Delivery Team should be aware of key fea­tures that re­quire an­a­lyt­ics. These re­quire­ments should fea­ture ac­cep­tance cri­te­ria that iden­tify what mea­sure­ments are re­quired and how those will be out­put to the an­a­lyt­ics ser­vice of choice.
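As a sketch of what such an acceptance criterion might translate to in code, the snippet below emits a named analytics event to a pluggable sink. The event name, payload shape and `track_event` helper are illustrative assumptions, not a prescribed API; in practice the sink would forward the payload to the analytics service of choice.

```python
import json
import time

def track_event(name, properties, sink):
    """Send a named analytics event to the configured sink.

    `sink` is any callable that accepts a JSON payload string
    (hypothetical interface for this example).
    """
    payload = json.dumps({
        "event": name,
        "timestamp": int(time.time()),
        "properties": properties,
    })
    sink(payload)

# Collect events in memory for the example; a real sink would
# POST the payload to the analytics service.
events = []
track_event("report_exported", {"format": "pdf"}, events.append)
print(json.loads(events[0])["event"])  # → report_exported
```

Keeping the sink injectable makes the measurement testable during development, which is exactly when the Delivery Team is verifying the acceptance criteria.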

Report Findings

It may go without saying, but if you go to the effort of building in metrics, make sure you use them to inform future decisions. A key method is to schedule regular meetings focused on trends found across all the analytics services you're utilising. This meeting doesn't need to produce new solutions, but it should be acutely focused on identifying problems that users are currently experiencing with the application.

Solving Problems

Once a problem has been defined, it is at the discretion of the Product Owner whether action should be taken. If the solution is straightforward, the problem can progress through the Enhance process. Alternatively, some problems may be substantial enough to warrant a new scoping and development process.



Continuing the jour­ney


Enhance

The Enhance section of support focuses on empowering the customer by giving them the ability to quickly iterate on improvements without the need for long development cycles. Typically, these opportunities arise either from direct customer requests or from opportunities identified through Product Success.

The key differentiator between the Enhance process and regular support is that Enhance allows developers time to focus on the why of an improvement and to work alongside a designer where applicable. Too often, improvements made within a siloed, traditional technical support team don't consider the user or the existing design system. Enhance provides the ability to consider an improvement collaboratively with all relevant disciplines, allowing more meaningful and considered improvements to be designed and delivered.

Enhance infographic


Before any de­vel­op­ment be­gins, an elab­o­ra­tion should oc­cur for the re­quest. Elaboration can be con­sid­ered a smaller ver­sion of scop­ing that is in­stead fo­cused on a small set of fea­tures, or just one well-de­fined prob­lem state­ment. The core goal of elab­o­ra­tion is to build a back­log of work that meets a de­f­i­n­i­tion of ready. With that back­log, an es­ti­ma­tion should be pos­si­ble, which will com­mu­ni­cate the ef­fort re­quired to com­plete the im­prove­ment for the client.
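To illustrate, a definition of ready can be treated as a simple checklist that each issue must satisfy before development starts. The checklist fields below are hypothetical examples chosen for this sketch, not a prescribed standard.

```python
# Illustrative definition-of-ready checklist; the criteria names
# are assumptions, not part of the Way of Working itself.
DEFINITION_OF_READY = [
    "problem_statement",
    "acceptance_criteria",
    "estimate",
]

def is_ready(issue):
    """An issue is ready when every checklist field is present and non-empty."""
    return all(issue.get(field) for field in DEFINITION_OF_READY)

issue = {
    "problem_statement": "Users can't filter reports by date",
    "acceptance_criteria": ["Date range picker on the reports page"],
    "estimate": "3 points",
}
print(is_ready(issue))  # → True
```

An issue missing any of these fields would be sent back to elaboration rather than into development, which is the gate the process relies on.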


Development should begin with one or more issues that meet the definition of ready. It's typical to use a Kanban flow with this type of short-term work to encourage starting development as soon as a ticket is ready; elaboration of other items can then continue with another party in the meantime. Work moving through development in this process should pass through the same quality assurance processes, such as peer review, release checklists and customer-based UAT for confirmation.

Moving be­tween Maintain and Enhance

The Maintain stage is consistently active, while the Enhance phase should be engaged only as required for improvements. The key to making these two processes work well together is a clear process that enables Enhance to start and stop quickly as required. This can be done by ensuring that any improvements released to production are handled like any other handover from a Delivery Team: the release is performed using standard QA processes, the documentation is updated, and the customer has tested and approved the work before it goes to production. Focusing on this will help the developers supporting the product once users begin to use the new features.


Next Steps

Implement Your Learnings


Theory is great but that’s not why we cre­ated the Way of Working. This is a prac­ti­cal guide to help­ing you de­liver bet­ter soft­ware pro­jects. There are two paths you could take as log­i­cal next steps.

Path 1: Internal Adoption

The first path you could take is to use the Way of Working and im­ple­ment it in­ter­nally. This means train­ing your team on the process and cus­tomis­ing it to suit your com­pany and de­liv­ery pref­er­ences.

Start by embedding some of the ceremonies into your team's everyday routine. Start the day with a huddle, ensure that every iteration is planned and reviewed, and run regular user acceptance testing. Developing a pattern and cadence for these ceremonies will start embedding the Way of Working into your process.

If you are strug­gling with im­ple­ment­ing the Way of Working, please reach out. We help busi­nesses im­prove their ag­ile de­liv­ery processes through a con­sul­tancy model.

Path 2: External Understanding

The second path you could take is to learn the process intimately without necessarily adopting it yourself. We would recommend this pathway if you're a product owner who is looking to engage a software development company in the future, or if you're presently in the middle of a project.

While most software development companies should have a Way of Working, this is not always the case. To complicate things further, the level of detail in one company's process may not match that of others in the industry. It's important for product owners within the business funding the project to have background context on what a robust process looks like.

Most activities in the Way of Working involve the participation of a product owner, so understanding your time commitments throughout an application's lifecycle can be incredibly valuable. Whether you engage WorkingMouse or another software development company, we would recommend gaining an in-depth understanding of the Way of Working.
