Legacy System Migration Do’s and Don’ts
Legacy systems represent a massive, long-term business investment, which makes a migration project an inevitable consequence of aging software. Legacy system migration has many traps, but like all traps, if you know they are there they can be avoided or even turned into an advantage. In this article I will cover some of the do’s and don’ts from my experience working on these types of projects. At the end of the article I will give some practical advice and lay out a high-level strategy that has worked for me.
What is a legacy migration?
A legacy migration is the process of modernising old, outdated software with more modern technology. All technology requires modernising at some point; think about it this way: you’re not still using that Motorola Razr from 2007. Many legacy systems run business-critical functions, so executing the migration successfully is essential.
Conceptually, software applications can be split into two general parts: state and behaviour. The state of an application is its data and what that data means. The behaviour of an application is its functionality and how it acts. This distinction is ingrained in most software developers; we learn it early through techniques like object-oriented programming, where an object is a way to encapsulate state and specify behaviour.
The state of a software application is its data. The behaviour of a software application is its functionality.
By splitting the application into these two general parts we can begin to form a plan of how to migrate a legacy system to the cloud.
Data (State) Migration
There is great variation in how you end up needing a data migration. You may be running a software modernisation project, in which case you only have to move from a legacy database to the cloud. Or you may be part of a merger and acquisition, in which case you often end up with two or three separate systems that need to be combined. The challenges are different, but in every case you will need to do a lot of analysis and mapping.
Do not hand-code the mapping! You must find a tool, or set of tools, that will manage the mapping and perform the data migration. I use an approach to software called model-driven engineering (MDE), and one of the common things we do is model-to-model (M2M) transformations. If you treat the legacy database schema as one model and the cloud database as another, then you can use M2M transformations to get from one to the other. My favourite M2M tools come from Codebots.
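To make the M2M idea concrete, here is a minimal sketch in Python. The legacy schema, target schema, and mapping rules are all made-up examples (not Codebots’ actual model format): the point is that the mapping lives in one declarative table rather than being hand-coded throughout the migration scripts.

```python
# A hypothetical legacy schema, treated as a model: {table: {column: type}}.
LEGACY_SCHEMA = {
    "CUST_MASTER": {
        "CUST_NO": "NUMBER",
        "CUST_NM": "VARCHAR2",
        "CRT_DT": "DATE",
    }
}

# Declarative mapping rules:
# (legacy table, legacy column) -> (target table, target column, target type)
MAPPING = {
    ("CUST_MASTER", "CUST_NO"): ("customer", "id", "bigint"),
    ("CUST_MASTER", "CUST_NM"): ("customer", "name", "text"),
    ("CUST_MASTER", "CRT_DT"): ("customer", "created_at", "timestamptz"),
}

def transform(legacy_schema, mapping):
    """Apply the mapping rules to produce the target (cloud) schema model."""
    target = {}
    for table, columns in legacy_schema.items():
        for column in columns:
            tgt_table, tgt_column, tgt_type = mapping[(table, column)]
            target.setdefault(tgt_table, {})[tgt_column] = tgt_type
    return target

print(transform(LEGACY_SCHEMA, MAPPING))
# {'customer': {'id': 'bigint', 'name': 'text', 'created_at': 'timestamptz'}}
```

Because the rules are data rather than code, the same mapping table can drive both the schema transformation and the row-by-row data migration, and it doubles as documentation for the analysis work.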
One last point I would like to make on data migration concerns the quality of the data you will encounter. A word of warning: it is most likely going to be horrible. Over the years it will have been updated and manipulated by many developers; foreign keys will be missing, tables won’t be in third normal form, columns will be incorrectly named, or, my favourite, you’ll meet the super column where someone dumped an array of data. These problems add up, and they are the reason data migration is so difficult. This emphasises the need for a systematic mapping tool of some sort.
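As an illustration of cleaning up that “super column”, here is a small sketch. The column names and the `;` delimiter are illustrative assumptions; the idea is to explode the multi-valued column into properly normalised child rows.

```python
# Hypothetical legacy rows where an array of phone numbers was dumped
# into a single delimited "super column".
legacy_rows = [
    {"id": 1, "name": "Acme", "phones": "555-0100;555-0101"},
    {"id": 2, "name": "Globex", "phones": ""},
]

def explode_super_column(rows, column, delimiter=";"):
    """Split one multi-valued column into normalised child rows,
    each carrying a foreign key back to its parent row."""
    child_rows = []
    for row in rows:
        values = [v for v in row[column].split(delimiter) if v]
        for value in values:
            # "phones" -> singular "phone" for the child column name
            child_rows.append({"parent_id": row["id"], column[:-1]: value})
    return child_rows

print(explode_super_column(legacy_rows, "phones"))
# [{'parent_id': 1, 'phone': '555-0100'}, {'parent_id': 1, 'phone': '555-0101'}]
```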
Functionality (Behaviour) Migration
Data is the state of an application, while functionality is the behaviour. An example would be something like a workflow: a common business problem is to raise some sort of work request, delegate that request to a worker, and then track the request through to its completion. If you can think of any functionality in your favourite software, that is the behaviour.
Now for the bad news. Believe it or not, data migration is the easier of the two; functionality migration is far more time consuming. The reason is simple: once you have identified what the functionality is, hopefully documenting it in requirements or design documents, a software developer then has to write the code for it. It’s like starting a greenfield project.
Now for the good news. Recall that I mentioned earlier an approach to building software called model-driven engineering. Another common thing we do is model-to-text (M2T) transformations. These are effectively code generators, like robots, and they can do a lot of the heavy lifting when it comes to writing the functionality. Using these techniques can save a lot of time, lift the quality of the project, and reduce the risk of the project failing.
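To show what an M2T transformation looks like in miniature, here is a toy generator in Python. The entity model is a made-up example and real M2T tools use proper template engines, but the principle is the same: text (code) is generated from a model rather than written by hand.

```python
# A hypothetical entity model for a workflow item.
ENTITY = {
    "name": "WorkRequest",
    "fields": {"title": "str", "assignee": "str", "completed": "bool"},
}

def generate_class(entity):
    """Emit a Python dataclass definition (as source text) from an entity model."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {entity['name']}:",
    ]
    for field, type_name in entity["fields"].items():
        lines.append(f"    {field}: {type_name}")
    return "\n".join(lines)

print(generate_class(ENTITY))
```

Running this prints a complete, valid dataclass definition; scale the same idea up across dozens of entities and screens and the time savings become significant.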
The Do’s and Don’ts
Do more analysis
Do testing, testing and more testing
Do use a scientific approach
Do set both technology and organisation KPIs
Do use model-driven engineering
Don’t do it all at once
Don’t underestimate the size of the problem
Don’t leave the data migration as an afterthought
Don’t think it’s going to be easy
How to do a legacy migration
Step 1: Ensure you have a systematic path of data at the end
This is done through an automated step that reverse engineers the entity model from the legacy database. When you get to the migration step, it’s important to maintain a clear path for the data; this is known as a lift and shift. Some of the more impactful benefits of the legacy migration and the modern technology stack may only be realised once the data is migrated.
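A sketch of what that automated reverse-engineering step might look like, using an in-memory SQLite database as a stand-in for the legacy system (a real migration would introspect Oracle, DB2, SQL Server, etc. through their own catalogs):

```python
import sqlite3

# Stand-in for the legacy database: one hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, total REAL, issued TEXT)")

def reverse_engineer(conn):
    """Build a {table: {column: declared_type}} entity model from the live schema."""
    model = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        model[table] = {name: col_type for _, name, col_type, *_ in cols}
    return model

print(reverse_engineer(conn))
# {'invoice': {'id': 'INTEGER', 'total': 'REAL', 'issued': 'TEXT'}}
```

The resulting entity model becomes the input for the schema mapping and, later, for the code-generation step.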
Step 2: Create your requirements backlog
As with any software project, there must be a requirements backlog. This keeps a team aligned and focused on the upcoming deliverables. It’s critical for the product owner to live and breathe the requirements backlog.
Step 3: Determine what can be automated by technology
I’ll elaborate further on this below. Essentially we want to identify repeatable patterns that exist across the legacy system. In doing so, we can model these patterns and effectively leverage technology. Our technology of choice is Codebots which allows us to streamline development when it comes to step 5.
Step 4: Don’t forget to design for mobile
User experience is critical. While we can try to anticipate how users will engage with an application, given a big enough sample size there will always be outliers. Rather than ignoring the minority, it doesn’t take long to set some breakpoints and create a friendly mobile experience.
Step 5: Utilise technology platforms
This is perhaps the biggest area of opportunity and something that can differentiate software development companies. We utilise the Codebots platform which takes the data models in step 1 and the repeatable patterns in step 3 to help write a significant portion of the codebase. Of course, it will never get us 100% of the way there as every system has custom business logic but it can help save a lot of time, especially when it comes to legacy migrations.
Step 6: Migration
This is where you migrate the data as well as your users onto the new, modern system. I’d recommend ensuring there is a clear migration path. That starts with step 1. It’s important to replicate the same underlying data model before making any new changes.
There is no silver bullet or magic wand that can be waved to solve this problem. You have to roll up your sleeves and work hard to achieve the desired result. The six-step process I’ve outlined above and in the graphic below is one that works for us.
Legacy migration can seem like a scary process to a lot of businesses, but it’s often a necessary one. Using a tried and tested method such as our six-step approach can not only help you avoid some of the traps but even transform the legacy migration process into a benefit for your company.