By Matthew Romero, Technical Product Evangelist at Skytap – shortlisted for Best Cloud Infrastructure at the Cloud Computing Awards, 2021

Panorama Data Insights estimates the global cloud computing market will grow at a rate of 19% from 2021 to 2030, and Gartner predicts that spending on public cloud services will grow 21.7% to reach $482 billion in 2022. At the beginning of 2022, many organizations have already moved some workloads to the cloud (often the easiest ones to move) but still run complex, business-critical production workloads on-premises. Many applications are difficult to move to the cloud, either because they were built for older server hardware not used by the hyperscale cloud providers or because they’ve been heavily customized over decades of running on-premises.

Here is a simple four-step process to ensure success when migrating these “cloud stubborn” business-critical workloads to the cloud:

  • Lift and shift: The first step is to move applications to the cloud without refactoring or rearchitecting them. This reduces the risk of accidentally breaking something and speeds up the process.
  • Create duplicates: Once the application is in the cloud, IT can clone it to streamline software development and testing.
  • Refactor the original in small steps: IT can refactor the original application to use cloud-native services one piece at a time, or incrementally add automation to its environments without a full rewrite.
  • Apply learnings to the next migration: Finally, IT should start this process over again with a new application while wrapping up the original one, applying the lessons they learned from the first migration to the second. This makes for a speedier, safer path to innovation while lowering the dependency on current on-prem resources.

These steps allow IT teams to take advantage of the cost and compliance benefits of the cloud while decreasing the risk of breaking critical business applications or causing significant delays.

Lift and Shift

First things first, the organization’s IT team should reproduce on-prem applications in the cloud without refactoring or rearchitecting any of the code. They will need to create the same disk / CPU / memory allocations, same file system formats, same hostnames, same IP addresses, same number of LPARs / VMs, same network subnets, and so on.
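As a rough illustration, a team might capture that on-prem inventory as a machine-readable spec before provisioning anything. The sketch below is Python; the field names and the abstract provisioning step are hypothetical placeholders rather than any particular provider’s API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HostSpec:
    """One LPAR / VM to be reproduced in the cloud exactly as it runs on-prem."""
    hostname: str          # keep the same hostname
    ip_address: str        # keep the same IP address
    cpus: int              # same CPU allocation
    memory_gb: int         # same memory allocation
    disks_gb: List[int]    # same disk sizes
    filesystem: str        # same file system format

@dataclass
class EnvironmentSpec:
    """A group of hosts plus the network they share, lifted as-is."""
    name: str
    subnet_cidr: str       # same network subnet
    hosts: List[HostSpec] = field(default_factory=list)

# Example: describe the on-prem production environment verbatim.
prod = EnvironmentSpec(
    name="erp-production",
    subnet_cidr="10.20.0.0/24",
    hosts=[
        HostSpec("erp-app-1", "10.20.0.10", cpus=8, memory_gb=64,
                 disks_gb=[200, 500], filesystem="jfs2"),
        HostSpec("erp-db-1", "10.20.0.20", cpus=16, memory_gb=128,
                 disks_gb=[1000], filesystem="jfs2"),
    ],
)

# A provision(prod) step would hand this spec to whatever tooling creates the
# matching cloud resources -- left abstract here on purpose.
```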

Don’t forget that applications running on IBM i or AIX cannot be lifted and shifted directly to AWS, Azure or Google Cloud Platform without specialized alterations. The benefits of adding cloud flexibility to traditional applications generally outweigh the investment in these modifications.

Create Duplicates

Once the application has been migrated to the cloud, capabilities such as API automation, ephemeral (spin-up / tear-down) lifecycles, rapid cloning and software-defined networking can be applied to it. What’s more, once a group of LPARs / VMs representing an “environment” exists in the cloud, that environment can be saved as a template, and the template can be used to clone other working environments. Building ready-to-use environments from a template allows IT to hand identical replicas to the various dev / engineering / test groups, all of which can work in tandem. Most cloud providers offer built-in access controls so users can only reach what has been assigned to them, which prevents one group from accidentally interfering with another group’s copy and maintains security.
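For a sense of what that workflow can look like from the automation side, here is a minimal sketch; the client object and its create_template, clone_environment and grant_access methods are hypothetical placeholders, not any specific provider’s SDK:

```python
"""Illustrative only: 'client' and its methods below are hypothetical
stand-ins for whichever provider API or SDK the organization actually uses."""

TEAMS = ["dev", "engineering", "qa"]

def build_team_environments(client, source_environment_id):
    # Save the working environment (a group of LPARs / VMs) as a template.
    template = client.create_template(source_environment_id,
                                      name="erp-golden-template")

    clones = []
    for team in TEAMS:
        # Each clone is a full, ready-to-use copy of the template.
        env = client.clone_environment(template.id, name=f"erp-{team}")

        # Built-in access controls: each group only sees its own copy,
        # so one team cannot accidentally interfere with another's work.
        client.grant_access(env.id, group=team, role="editor")
        clones.append(env)

    return clones
```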

Clones are exact copies of the template, right down to disk allocations, hostnames, IP addresses and subnets. Different environment duplicates can run in concert with one another without colliding; however, the effort involved in setting this up depends on which cloud provider the organization uses.

To build multiple environments that duplicate the network structure of the final target system, a form of isolation must be applied so the cloned environments don’t conflict with one another. In the context of this article, “replicate” means re-using the hostnames, IP addresses and subnets housed in each environment. Each environment should therefore sit in its own software-defined networking space, invisible to the other environments running in tandem, so that every environment effectively becomes its own virtual private data center. Because cloned hostnames and IP addresses are allowed, none of the hosts have to go through the arduous “re-IP” process.

This requires some technical skill to pull off. One method is to use an environmental virtual router (EVR). Cloned environments communicate back to on-prem resources through the EVR, which hides the VMs behind it (the ones reusing the same hostnames and IP addresses) and exposes a single unique IP address to the larger on-prem network. The EVR, working with a “jump host,” can be configured to forward SSH requests (via SSH proxying, OpenSSH 7.x and higher), granting SSH access to all the hosts in an environment. From on-prem, users SSH to any host in the environment (e.g., ssh user@environment-1-host-2); the request lands on that environment’s unique on-prem-facing IP address and is then forwarded to the right VM. The result is a streamlined way for multiple cloned environments to live alongside each other without disturbing basic network structures.
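As one way this might be wired up on the client side: OpenSSH’s ProxyJump option (available since 7.3) lets a config file map friendly names like environment-1-host-2 to an internal IP reached through that environment’s jump host. The addresses and environment layout in the Python sketch below are made-up illustrations, not values from any real deployment:

```python
"""Generate OpenSSH client config stanzas so that `ssh user@environment-1-host-2`
transparently hops through that environment's jump host. ProxyJump is a standard
OpenSSH client option (7.3+); the addresses below are invented examples."""

ENVIRONMENTS = {
    # environment name -> (unique jump-host IP seen from on-prem,
    #                      identical internal IPs reused inside every clone)
    "environment-1": ("203.0.113.10", {"host-1": "10.20.0.10", "host-2": "10.20.0.20"}),
    "environment-2": ("203.0.113.11", {"host-1": "10.20.0.10", "host-2": "10.20.0.20"}),
}

def render_ssh_config(environments):
    stanzas = []
    for env_name, (jump_ip, hosts) in environments.items():
        # One entry for the environment's jump host (the EVR's on-prem-facing IP).
        stanzas.append(f"Host {env_name}-jump\n    HostName {jump_ip}\n")
        for host_name, internal_ip in hosts.items():
            # e.g. `ssh user@environment-1-host-2` resolves to 10.20.0.20
            # *inside* environment 1, reached via that environment's jump host.
            stanzas.append(
                f"Host {env_name}-{host_name}\n"
                f"    HostName {internal_ip}\n"
                f"    ProxyJump {env_name}-jump\n"
            )
    return "\n".join(stanzas)

if __name__ == "__main__":
    print(render_ssh_config(ENVIRONMENTS))  # paste the output into ~/.ssh/config
```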

Refactor the Original in Small Steps

While different departments work with their cloned applications, the IT team can start to refactor the original to use native cloud services one piece at a time. There are a variety of established design patterns for this (such as “Sidecar” or “Strangler”). This method lets software developers take a progressive approach to transformation instead of starting over from scratch. Refactoring work is done incrementally, so the overall application keeps running and IT avoids kicking off net-new, application-wide development efforts. Rewriting every application moving through the migration process from scratch is risky and runs against the Agile principle of “limiting work in progress.”
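Here is a minimal sketch of the “Strangler” idea, assuming a thin routing facade in front of the lifted-and-shifted application; the backend URLs and the list of migrated paths are hypothetical:

```python
"""A minimal sketch of the 'Strangler' pattern: a thin routing facade sends a
growing list of paths to the refactored cloud-native service, while everything
else still goes to the untouched legacy application."""

LEGACY_BASE = "http://legacy-erp.internal"               # original lift-and-shifted app
CLOUD_NATIVE_BASE = "https://orders.example-cloud.com"   # newly refactored piece

# Paths migrated so far; the list grows one small step at a time.
MIGRATED_PREFIXES = ["/orders", "/invoices"]

def route(path: str) -> str:
    """Return the backend URL that should serve this request."""
    for prefix in MIGRATED_PREFIXES:
        if path.startswith(prefix):
            return CLOUD_NATIVE_BASE + path   # handled by the new service
    return LEGACY_BASE + path                 # everything else stays on the legacy app

# Example: /orders/42 goes to the new service, /reports/q3 stays on legacy.
assert route("/orders/42").startswith(CLOUD_NATIVE_BASE)
assert route("/reports/q3").startswith(LEGACY_BASE)
```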

Developers can slowly build automation into the application even if they don’t use cloud-native services. Applications can simply be re-hosted (take a look at Microsoft’s “The 5Rs of Rationalization” post for more information) without seriously changing their original on-prem structure if desired. While the real value of using the cloud comes from cloud-native services, there are good reasons for just lifting and shifting an application to the cloud and leaving it unchanged (a data center exit, easy consolidation with other workloads running in the same cloud, etc.).

Apply Learnings to the Next Migration

When most of the application lives in a cloud-native format while maintaining a dedicated connection back to on-prem, IT can start moving the next application through this process, applying the lessons learned from application #1 to application #2. Each migration should get faster and easier – the entire “assembly line” speeds up as the team gets more comfortable with their cloud of choice.

Agility and Security = Success

By lifting and shifting applications to the cloud unchanged, and then slowly refactoring them piece by piece once they’re there, IT can minimize risk and improve efficiency when migrating business-critical “cloud stubborn” workloads. This process has four steps: lift and shift to the public cloud of choice, create duplicates of the application for other teams to work on, refactor the original to use cloud-native services step by step, and apply the lessons learned from each migration to the next one.

This four-step method lets organizations take advantage of the cloud’s benefits, such as capacity on demand, and maps to Agile software development principles like “responding to change” and “working software.” The strategy speeds up dev/test, engineering and QA by allowing teams to work simultaneously, and it decreases risk by avoiding rearchitecting or re-platforming during the migration and by breaking the work into smaller, more manageable tasks. Not only is this the safest migration plan for crucial on-prem applications, it’s also the most likely to succeed.