Observations and Learnings From a Cloud Modernisation Scope

By David Burkett

14 January 2022

Cloud Migration

Legacy Migration


We recently embarked on a new Scope with an existing customer. Having worked closely with this customer over the last four years, we know the business intimately. We've helped them move away from restrictive industry-based COTS (Commercial Off-The-Shelf) software, significantly increase revenue through dynamic pricing, and optimise internal administration costs.

To read our quick guide, other blogs or our book on Legacy Modernisation, check out the Cloud Migration Hub.

The Scope itself turned into a journey with multiple twists and turns. We set out to solve one opportunity within the product and ended up with the Scope of a completely new product.

Sound confusing?

Well, here's the journey we went on, the technical learnings we made along the way and why we ended up where we did.

Background

To give technical context to the present discussion, the target application we commenced the Scope on is a second-generation Codebots application. Developed 4+ years ago, its technology stack was LAMP: Linux, Apache, MySQL and PHP.

LAMP is still a modern web technology stack, but it has been superseded internally by Codebots' third-generation bots. WorkingMouse currently commences all greenfield builds on C#bot. The main technology stack of C#bot is C# server-side, GraphQL as the API layer and React client-side. This third-generation bot gives developers a broader skillset and our customers a significantly more performant architecture.

To simplify this:

Initial build = LAMP

New Scope = C#
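
To make the contrast concrete, here's a minimal sketch of what a query on the new stack might look like, assuming the HotChocolate GraphQL library for ASP.NET Core; the Booking type and resolver are illustrative placeholders rather than the actual product model.

```
// Minimal C# + GraphQL server sketch (assumes the HotChocolate package).
// The Booking type and resolver are hypothetical, not the real product model.
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGraphQLServer().AddQueryType<Query>();

var app = builder.Build();
app.MapGraphQL();   // exposes the GraphQL endpoint, /graphql by default
app.Run();

// Hypothetical domain type exposed to the React client via GraphQL.
public record Booking(int Id, string CustomerName, DateTime Date);

public class Query
{
    // Surfaced as the GraphQL field `bookings`.
    public IEnumerable<Booking> GetBookings() =>
        new[] { new Booking(1, "Jane Example", DateTime.UtcNow) };
}
```

A React client would then fetch this with a query like { bookings { id customerName date } }.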

Mobile First

The Scope commenced from analytics and observed user behaviour. We had seen an increasing number of mobile visitors to the site over the few months preceding the Scope kick-off. As of September 2021, analytics showed 5,300 visitors on mobile versus just over 3,000 on desktop.

The user behaviour, however, was that most transactions and record management occurred on desktop. This led us to suspect we could lift mobile conversion rates towards desktop run rates.

We set out to design a mobile-first booking process, and we achieved this. We observed three user testing sessions on the design to validate the ease of purchasing.

Restrictive Technical Debt

Throughout the design process, we constantly stumbled across technical restrictions in the existing system's architecture that inhibited customer-first design thinking. An example of this was the classic login and user credentials system.

The system has over 50,000 user credentials (many of which are double-ups from people forgetting which email address they registered with). The user's primary key is tied to their email address, but we wanted to make SMS and phone number the primary authentication factor.

In researching this, we found several great resources on implementation and security.

Using two-factor authentication (TFA) and mobile number verification would have reduced the number of guest users, made login quicker and stopped future account double-ups whilst being mobile-first friendly.
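
As a rough illustration of the direction we wanted, here's a minimal sketch of phone-number-first login with an SMS one-time code on a C# server-side; the ISmsSender abstraction and the in-memory code store are hypothetical stand-ins, not the eventual implementation.

```
// Sketch of phone-number + one-time-code login. ISmsSender is a hypothetical
// abstraction over whatever SMS gateway would actually be used.
using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;
using System.Threading.Tasks;

public interface ISmsSender
{
    Task SendAsync(string phoneNumber, string message);
}

public class PhoneLoginService
{
    private readonly ISmsSender _sms;
    // In-memory store for illustration only; a real system needs persistence and expiry.
    private readonly ConcurrentDictionary<string, string> _pendingCodes = new();

    public PhoneLoginService(ISmsSender sms) => _sms = sms;

    public async Task RequestCodeAsync(string phoneNumber)
    {
        // Six-digit code from a cryptographically secure RNG.
        var code = RandomNumberGenerator.GetInt32(100_000, 1_000_000).ToString();
        _pendingCodes[phoneNumber] = code;
        await _sms.SendAsync(phoneNumber, $"Your login code is {code}");
    }

    // Returns true (and consumes the code) only if it matches what was sent.
    public bool VerifyCode(string phoneNumber, string submittedCode) =>
        _pendingCodes.TryRemove(phoneNumber, out var expected) && expected == submittedCode;
}
```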

Alongside other points, such as the booking engine, we decided to review a modernisation approach. We knew the customer was planning to white-label the software for several customers.

Modernisation Options

We looked at several modernisation options. The key ones we investigated were:

1. Single Sign-On (SSO)

The first method we considered was a single sign-on between the legacy database and the new application database. The user would log in once and, depending on where they navigated within the system, be served by either the old or the new system.

We found the upfront cost of doing this, coupled with the risk of data corruption, too great to take this path. We also wouldn't be able to easily white-label the system.
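
For context, the rough shape of the shared-token handshake this option would have needed is sketched below; the HMAC-signed token format is purely an assumption for illustration and not how the legacy system actually authenticates.

```
// Sketch of a token both the legacy LAMP app and the new C# app could trust,
// signed with a shared secret. Format and scheme are assumptions for illustration.
using System;
using System.Security.Cryptography;
using System.Text;

public static class SsoToken
{
    // Issues "userId.unixTimestamp.signature".
    public static string Issue(string userId, byte[] sharedSecret)
    {
        var payload = $"{userId}.{DateTimeOffset.UtcNow.ToUnixTimeSeconds()}";
        using var hmac = new HMACSHA256(sharedSecret);
        var signature = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        return $"{payload}.{signature}";
    }

    // The receiving system recomputes the signature over the payload and compares.
    public static bool Validate(string token, byte[] sharedSecret)
    {
        var lastDot = token.LastIndexOf('.');
        if (lastDot < 0) return false;
        var payload = token[..lastDot];
        var signature = token[(lastDot + 1)..];
        using var hmac = new HMACSHA256(sharedSecret);
        var expected = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        return CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(signature), Encoding.UTF8.GetBytes(expected));
    }
}
```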

2. A Gradual Modernisation

The first viable option was a gradual modernisation. This meant using the new server-side code with a proxy over the present database for builds 4 to 6. We'd be able to keep the old system live whilst building new functionality.

This gradual approach has to be broken down into more builds, and the outcome of this is a longer build time and, therefore, higher cost. Development was also restricted: due to the complexity, we couldn't make improvements until major parts of the product had been modernised.
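
To illustrate what "a proxy on the present database" could look like, here's a minimal sketch of the new C# server-side reading the legacy MySQL schema through an adapter. The table and column names are hypothetical, and the MySqlConnector package is assumed.

```
// Sketch of an adapter that lets new C# code read the legacy MySQL database directly,
// keeping the old LAMP system live. Table/column names here are hypothetical.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using MySqlConnector;

public record LegacyBooking(long Id, string CustomerEmail, DateTime CreatedAt);

public class LegacyBookingProxy
{
    private readonly string _connectionString;

    public LegacyBookingProxy(string connectionString) => _connectionString = connectionString;

    public async Task<List<LegacyBooking>> GetRecentAsync(int limit)
    {
        var results = new List<LegacyBooking>();

        await using var connection = new MySqlConnection(_connectionString);
        await connection.OpenAsync();

        await using var command = new MySqlCommand(
            "SELECT id, customer_email, created_at FROM bookings ORDER BY created_at DESC LIMIT @limit",
            connection);
        command.Parameters.AddWithValue("@limit", limit);

        await using var reader = await command.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            results.Add(new LegacyBooking(reader.GetInt64(0), reader.GetString(1), reader.GetDateTime(2)));

        return results;
    }
}
```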

3. The Pop 'N' Switch (we just made that up... Trademark pending)

This was a slightly more desirable option. However, it still came with significant time constraints.

The plan was to embed React improvements in the existing LAMP stack, then rewrite the server-side logic to facilitate the new build requirements. Once the React work was done, we would migrate and replace the server-side. This meant we could modernise the client-side without having to rebuild the entire server-side at once.

This would have been a great solution if the product were to persist as a single product. We would have had to deal with a significant amount of technical debt in doing so; however, we could quickly deliver the intended user experience.

The downside was that anything built in builds 3 to 5 in LAMP would need to be rebuilt in C# when we re-architected in build 6.

4. A Firecracker Migration

Finally, we settled on a firecracker migration path: using the existing model and data structure to rebuild a completely new full-stack application and then migrate the data across. Whilst this meant a longer wait, it also meant being wholly removed from the existing technical debt.

We have significant experience in the customer's business domain and thorough documentation of the old system, which further reduced the risk. This approach also removed the extra cost of higher-risk integrations or piecemeal technology replacement. The customer must maintain the existing product for slightly longer, but will get a better, licensable product post-build 4.
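
The data migration step itself is conceptually simple. As a sketch, assuming purely illustrative LegacyUser and NewUser shapes, the de-duplication and mapping might look something like this:

```
// Sketch of the firecracker data migration: read legacy user rows, collapse duplicate
// registrations by email and map them onto the new model. The record shapes are illustrative.
using System;
using System.Collections.Generic;
using System.Linq;

public record LegacyUser(long Id, string Email, DateTime CreatedAt);
public record NewUser(Guid Id, string Email, string? PhoneNumber, DateTime FirstRegisteredAt);

public static class UserMigration
{
    public static List<NewUser> Migrate(IEnumerable<LegacyUser> legacyUsers) =>
        legacyUsers
            .GroupBy(u => u.Email.Trim().ToLowerInvariant())   // collapse email double-ups
            .Select(g => new NewUser(
                Id: Guid.NewGuid(),
                Email: g.Key,
                PhoneNumber: null,                             // captured later via SMS verification
                FirstRegisteredAt: g.Min(u => u.CreatedAt)))
            .ToList();
}
```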

Licensing Opportunities

The licensing options were the linchpin of the modernisation pathway, and it's helpful to explore the different paths available when licensing a product. There are two fundamental options available. These are:

Multi-Instance

The application source code is set up as a new application instance for every customer. This means a new server and domain operating in its own silo. The custom business logic and branding for each customer are configured in the super admin back-end of the target application.
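
In practice, multi-instance means the same codebase is deployed once per customer and only configuration differs between deployments. A minimal sketch, assuming hypothetical per-instance branding settings:

```
// Sketch of per-instance configuration: each deployment ships the same code and differs
// only by its appsettings/environment values. The setting names are hypothetical.
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);

// e.g. appsettings.json per instance:
// { "Branding": { "Name": "Acme Bookings", "PrimaryColour": "#004488" } }
var brandName = builder.Configuration["Branding:Name"];
var primaryColour = builder.Configuration["Branding:PrimaryColour"];

var app = builder.Build();
app.MapGet("/branding", () => new { brandName, primaryColour });
app.Run();
```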

The Pros

  • Can be converted to multi-tenant later.
  • It doesn't require any software solution (no added development time).
  • Flexibility around deploy times (updates can be done whenever best works for the individual application).
  • Debugging and maintenance can be isolated to different applications.
  • Zero risk of data contamination or of leaking other applications' data.
  • The licensee can host/maintain their environment, or the customer can host all of them.
  • The applications could be deployed in different locations.
  • Individual applications can be scaled independently.
  • Individual systems won't affect each other's performance & downtimes.
  • User privacy and data are safer due to distribution.

The Cons

  • Need to deploy multiple times (this will be automated through DevOps but will still take longer).
  • Higher cost due to split systems (more cloud hosting resources).

Multi-Tenant

This is where multiple customers (tenants) share a single instance of the application in a shared environment. This means there is a single database, which saves on servers and data storage.

The Pros

  • Cheaper to maintain.
  • Cheaper to resource, as they are shared.
  • Easier to handle any common data (users, groups etc).

The Cons

  • Needs a software solution for tenant separation, which will take time to implement (a minimal sketch follows this list).
  • Data must be strictly segregated so customers can't access each other's data.
  • If one tenant is using a lot of resources, this will impact the other tenants.
  • Multiple applications reading/writing to the same database could lead to performance loss.
  • Hard to scale individual applications.
  • Other applications may cause bottlenecks.
  • Attractive to hackers due to centralised user information for multiple applications.
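
The "software solution" mentioned in the cons above usually comes down to tenant-scoped data access. A minimal sketch, assuming EF Core and an illustrative Booking entity, of automatically filtering every query to the current tenant:

```
// Sketch of multi-tenant data segregation: every entity carries a TenantId and a global
// query filter scopes all queries to the current tenant. Assumes EF Core; the entity and
// tenant-resolution details are illustrative.
using System;
using Microsoft.EntityFrameworkCore;

public class Booking
{
    public Guid Id { get; set; }
    public Guid TenantId { get; set; }
    public string CustomerName { get; set; } = "";
}

public class AppDbContext : DbContext
{
    private readonly Guid _currentTenantId;

    public AppDbContext(DbContextOptions<AppDbContext> options, Guid currentTenantId)
        : base(options) => _currentTenantId = currentTenantId;

    public DbSet<Booking> Bookings => Set<Booking>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every query against Bookings is automatically restricted to the current tenant.
        modelBuilder.Entity<Booking>().HasQueryFilter(b => b.TenantId == _currentTenantId);
    }
}
```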

Summary

As you can see, we went on quite the journey in a single build scope! Upon reflection, we could have avoided the migration-journey discussion with a franker conversation upfront. However, it felt important to go down that pathway at the time.

What stuck with me is the importance of getting the highest quality product in front of the users and their data. If it takes longer to get the quality right, the investment will be easier to justify. In reality, the essential part of any application is the data; if that's the case and we can migrate the data, should we not rebuild as soon as we realise technical debt is inhibiting the design?

As we learn and evolve, we should treat software as the vehicle for our present circumstances. As with all vehicles, they require maintenance, and as our needs change so too should our chosen vehicle.

The difficulty lies in this decision: upgrade to a people mover and accept the circumstance or continue to pile the kids into your hatchback!

ABOUT THE AUTHOR

David Burkett

Growth enthusiast and resident pom

