Believe it or not, software testing is a divisive topic. Given that application quality is one of the most difficult characteristics to measure, the way tests are written and the tools used to run and measure them are often under the microscope.
From a product owner's perspective you might think: this is all well and good, but I just need an app that works. The issue with that approach is that there's no such thing as perfect software. There are too many variables and edge cases to cover every scenario. Taking a naïve approach means you only learn there's a bug when it's too late: a user has found it and has had a frustrating experience.
This is where testing, and in particular the traceability matrix, comes into play.
Why do we test?
We’ve already touched on some of the reasons why testing is important. Not only does it provide a proactive response to quality assurance, it also gives you a snapshot of the health of your application.
I won't spend any more time discussing why testing is important; that much is generally accepted across the industry. Instead, we'll look at different approaches to measuring it.
The different testing metrics: code coverage vs traceability
One of the most common testing metrics is looking at code coverage.
Code coverage looks at the percentage of your code that is exercised by tests. Let's say that within your application, 2,000 lines of code are executed during your test run and 2,000 lines aren't. You would have 50% code coverage.
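The arithmetic behind that percentage is straightforward. Here's a minimal sketch (the line counts are the illustrative figures from above, not real measurements):

```python
# Illustration only: coverage is simply executed lines / total lines.
covered_lines = 2000    # lines executed at least once during the test run
uncovered_lines = 2000  # lines never executed during the test run

total_lines = covered_lines + uncovered_lines
coverage_pct = 100 * covered_lines / total_lines
print(f"Code coverage: {coverage_pct:.0f}%")  # prints: Code coverage: 50%
```

In practice a tool such as coverage.py computes these counts for you; the point is that the metric is nothing more than this ratio.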
There are a few reasons why we believe code coverage is a poor metric on which to base the health of your software:
- It doesn't link to the application as a whole. If a line of code fails, what does that mean for the usability of the application?
- It’s naïve in assuming that if code is run during test execution, then it is tested. This is not always true.
- Chasing 100% code coverage is incredibly time consuming – on a big enough application it's almost impossible to test every branch of every piece of logic. Think of the different ways you could test a free text field with a 100-character limit (the variations of inputs are enormous!)
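To put the free-text-field example in numbers, here's a rough back-of-the-envelope sketch (assuming, for illustration, that inputs are limited to the 95 printable ASCII characters):

```python
# Rough illustration of why exhaustively testing inputs is infeasible.
# Assumption: the field accepts only the 95 printable ASCII characters.
printable_chars = 95
max_length = 100

# Count every possible string of length 0..100 over that alphabet.
total_inputs = sum(printable_chars ** n for n in range(max_length + 1))
print(f"Possible inputs: roughly 10^{len(str(total_inputs)) - 1}")
```

That's on the order of 10^197 distinct inputs for a single field, which is why coverage of representative cases, not exhaustive cases, is the only practical goal.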
While understanding code coverage is better than no metric, we believe there are more efficient ways to get value out of your test suite. This comes in the form of a traceability matrix.
What is a traceability matrix?
Where code coverage looks at the proportion of lines of code that have been tested, traceability looks at what functionality has been tested.
We believe that code is a by-product of features. At the end of the day, we only really care about features working. If we go back to why we test, it's about proactively monitoring the health and quality of your application for your end users.
The traceability matrix works best when you only include tests that actually mean something. If the test fails will it tell you something about the quality of the application? Will you respond to it or ignore it?
Data is only as valuable as the insights we draw from it.
We’ll dive deeper into the advantages shortly but the key purpose of a traceability matrix is that it tells you whether a feature has a test associated with it, and if so, how many.
There’s a mock version of the matrix in the image below which will help illustrate how it works.
Importantly, you don't need to be a developer to understand the traceability matrix. From our mock matrix we know that 64% of requirements have passing tests, 25 requirements have failing tests and 65 requirements have no tests associated with them (a red flag).
The passing coverage breaks down the requirement name and ID (imported from Jira) and the number of tests associated with each requirement, along with the status of those tests.
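The headline numbers in a matrix like this fall out of a simple mapping from requirements to test results. The sketch below uses hypothetical requirement names and statuses (not figures from the mock matrix) to show how the three buckets are derived:

```python
# Minimal sketch of a traceability summary. Requirement IDs and test
# statuses here are hypothetical placeholders.
requirements = {
    "REQ-1 Login page":     ["pass", "pass"],
    "REQ-2 CRUD Admin":     ["pass", "pass", "pass", "fail"],
    "REQ-3 Password reset": [],  # no tests mapped: red flag
}

passing = [r for r, tests in requirements.items()
           if tests and all(t == "pass" for t in tests)]
failing = [r for r, tests in requirements.items()
           if any(t == "fail" for t in tests)]
untested = [r for r, tests in requirements.items() if not tests]

print(f"{100 * len(passing) // len(requirements)}% of requirements passing")
print(f"{len(failing)} requirement(s) with failing tests")
print(f"{len(untested)} requirement(s) without any tests")
```

Each test appears under the requirement it verifies, so the same structure also gives you the per-requirement test counts (the density we discuss below).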
Just as no application is ever 'done', no test suite is ever finished. Regression tests (after-the-fact tests that check none of the existing functionality has broken) should be added and mapped against features in your traceability matrix.
There are some clear advantages from using the traceability matrix, as well as a few trade-offs (or cautions).
Visibility for the product owner
Creating a dashboard or report abstracted outside of developer tools gives the product owner a level of insight they wouldn't otherwise have.
While it’s important to trust your developers, it’s also dangerous to have no visibility. With a high-level overview, you’ve got insight into the metrics that matter without risking analytics fatigue.
You actually know what your code is testing
We know that a failing test is a bad thing, but why? What implications does it actually have for the application and for the end user?
The concept behind the traceability matrix is that we know, for example, that a failing test relates to the login page, and therefore users won't be able to log in until the bug is resolved. Because that feature is a critical part of the application, we might sound the alarms and get all hands on deck until the bug is fixed. Contrast this with lesser-used functionality that may not need an urgent resolution; in that case we may de-prioritise the bug in favour of something else.
Better understanding of test density
Rather than simply hearing that 80% of the application code is tested, you know the density of tests mapped against each requirement.
This comes in handy if you know there’s a particularly complex feature that needs a higher level of testing against it. Going back to our mock matrix, we expect the CRUD Admin pages to be more complex than the login page, therefore we’re comfortable that there are extra tests against that feature.
Beware of test quality
Nothing in the traceability matrix accounts for test quality. It’s a representation of tests matched against features. You still need developer discretion and strong competency in writing good quality tests.
At WorkingMouse, we address test quality concerns through our definition of done (criteria that must be ticked off before a ticket can be classified as done).
Tagging takes time
In order to create a traceability matrix, every test needs to be tagged with the requirement it relates to.
As you might expect, this can be time consuming when you're completing hundreds of tickets as part of a project. We highly recommend 'tagging as you go': it adds a minute (or so) to each ticket, but it saves tagging from becoming a big exercise at the end.
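What 'tagging' looks like depends on your tooling, but the idea is simple: attach a requirement ID to each test at the point you write it. Here's a minimal, framework-agnostic sketch using a plain decorator; the requirement IDs (e.g. "REQ-101") are hypothetical placeholders:

```python
# Sketch of "tag as you go": each test declares the requirement it
# verifies, and the mapping is collected for the traceability matrix.
from collections import defaultdict

requirement_index = defaultdict(list)

def requirement(req_id):
    """Tag a test function with the requirement ID it verifies."""
    def decorator(test_fn):
        requirement_index[req_id].append(test_fn.__name__)
        return test_fn
    return decorator

@requirement("REQ-101")
def test_login_accepts_valid_credentials():
    assert True  # real assertions would go here

@requirement("REQ-101")
def test_login_rejects_bad_password():
    assert True

for req_id, tests in requirement_index.items():
    print(req_id, "->", tests)
```

If you use pytest, custom markers (declared in your pytest configuration) achieve the same thing and let the test runner collect the mapping for you.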
- As a product owner, understanding the health of your application is important.
- The traceability matrix is a better and more insightful metric than code coverage.
- Metrics are only as valuable as the insights you draw from them. Remove useless tests from your traceability matrix.
- Proactively monitor your tests and continue to augment your test suite as the application grows.