How to Maintain Legacy Databases in 2019


Many organizations rely on applications older than several of their employees. We’ve all heard the horror stories of some ancient DOS-era application built on FoxPro still handling mission-critical tasks today. Many of us have personally struggled with maintaining a legacy database (or several), and often the legacy applications it supports as well.

The pains of keeping legacy software running in a modern data center are many. Older software may not be supported on a modern operating system. Running unsupported software on unsupported operating systems also has unpleasant security implications—as patches for recently discovered vulnerabilities aren’t available.

Legacy software often lacks resilience, being sensitive to latency and to network and/or storage disconnects. Database administrators may be unable to virtualize legacy systems due to license validation requirements and may struggle with the lack of automation and tools available to ease legacy database management.

Common Problems with Legacy Systems

Many older database systems (and the applications reliant upon them) won’t run on modern operating systems. This exposes your network to the many vulnerabilities left unpatched in unsupported operating systems, database engines, and applications. Because most databases operate in a client-server model, air gapping the legacy systems is often not an option. This leaves you with two options if you intend to keep running the extant systems:

  1. Putting a firewall in front of every legacy system
  2. Taking your chances

The inability to run on modern operating systems is also a roadblock to migrating these legacy applications to newer hardware, as driver support for newer hardware on older platforms becomes nonexistent due to End-of-Life and End-of-Support from the manufacturers.

Legacy applications are often susceptible to errors and potential data loss if their network connection drops out. If the connection to their storage experiences even the smallest outage in the middle of a write to the database, you could suffer data loss, or worse—data corruption. This makes them a great candidate for virtualization in a well-designed, fully redundant virtualized environment.

Virtualization, however, can be yet another obstacle for legacy databases and applications when they rely on a physical hardware dongle for license validation. Other products may tie license validation to a network interface MAC address or hard disk serial number. Then there’s the worst-case scenario—activation relies on some form of “phone home” mechanism and re-triggers on detection of a hardware change, but the company no longer exists to respond, leaving you trapped on the same system until you eventually replace it.

A severe lack of available legacy database management tools, extensibility through APIs, or the ability to automate common tasks makes supporting these platforms quite difficult.

In an ideal world, we would migrate the old legacy system onto a completely new platform and scrap the old setup. In the real world, we know legacy systems are still in use precisely because replacing them isn’t that easy, due to financial or time constraints.

This leaves the question: what can you do to mitigate the risks of maintaining legacy systems in their current environment?

Mitigation and Transformation Strategies

If moving to a newer operating system is possible, do it as soon as humanly possible. If it’s not, make the business case to management to air gap the systems if feasible. In the more likely scenario that neither is a valid option, harden each legacy system as much as possible, shove a decent firewall in front of it, and use an endpoint security suite.
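As a sketch of the firewall step, the rules below allow only known application servers to reach the database port and drop everything else. The port (1433, as for SQL Server) and the application subnet are placeholder assumptions—substitute your own values. The script generates the rules rather than applying them, so it can be reviewed (and run without root) before being executed on the target host:

```shell
#!/bin/sh
# Hedged sketch: host-firewall rules for a hardened legacy database server.
# DB_PORT and APP_SUBNET are placeholder assumptions; substitute your own.
DB_PORT=1433               # e.g. SQL Server; use your engine's listener port
APP_SUBNET="10.0.20.0/24"  # the only subnet that should reach the database

# Emit the rules for review instead of applying them directly.
RULES=$(cat <<EOF
iptables -A INPUT -p tcp -s $APP_SUBNET --dport $DB_PORT -j ACCEPT
iptables -A INPUT -p tcp --dport $DB_PORT -j DROP
EOF
)
printf '%s\n' "$RULES"
```

Default-deny on the database port, with a narrow allow rule in front of it, is the smallest useful policy; a real deployment would also restrict management ports the same way.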

If license validation doesn’t require a physical dongle or hardware-bound checks the software vendor can no longer fix, then virtualization is a good starting point—even if you’re stuck on the older operating system. Implemented in a proper virtualization cluster with good backups and snapshotting, it can alleviate concerns about the stability of the underlying hardware. Just ensure you harden, firewall, and install endpoint security on it.
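To illustrate the snapshotting piece, here is a cron-able sketch using libvirt’s virsh. The domain name legacydb-vm is hypothetical, and the script defaults to a dry run that prints the commands rather than executing them—set DRY_RUN=0 on the actual hypervisor host:

```shell
#!/bin/sh
# Hedged sketch: nightly disk-only snapshot of a virtualized legacy guest.
# "legacydb-vm" is a placeholder libvirt domain name.
DOMAIN="legacydb-vm"
SNAP="nightly-$(date +%Y%m%d)"
DRY_RUN="${DRY_RUN:-1}"   # set to 0 on the hypervisor host to actually run

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# Disk-only external snapshot; --atomic fails cleanly rather than leaving
# a half-created snapshot if any disk can't be snapshotted.
run virsh snapshot-create-as "$DOMAIN" "$SNAP" --disk-only --atomic
run virsh snapshot-list "$DOMAIN" --name
```

Snapshots complement—but don’t replace—real backups; disk-only snapshots in particular still need periodic consolidation and an export to separate storage.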

If you can’t virtualize but can move to newer hardware—again, even if you’re stuck with the older operating system—this is a good step to take if the old systems are out of support. Yes, this comes at a financial hit to the organization, but it’s much cheaper than being down for an indeterminate amount of time due to hardware failure.

If you’re stuck keeping the systems physical, try to ensure you have redundant links for your network and storage to mitigate risks associated with either connection dropping. Teamed network interfaces, MPIO storage links, and a good spanning tree switch infrastructure will help here.
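On a Linux host, the NIC-teaming piece might look like the following nmcli sketch. The interface names (eth0/eth1) and the active-backup mode are assumptions—adjust them to your hardware and switch configuration:

```shell
# Create an active-backup bond and enslave two physical NICs to it.
# eth0/eth1 are placeholder interface names; active-backup needs no
# switch-side configuration, unlike LACP (802.3ad) modes.
nmcli con add type bond ifname bond0 con-name bond0 \
      bond.options "mode=active-backup,miimon=100"
nmcli con add type ethernet ifname eth0 master bond0
nmcli con add type ethernet ifname eth1 master bond0
nmcli con up bond0
```

For the best protection, each teamed interface should land on a different physical switch, so a single switch failure doesn’t take both links down.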

Unfortunately, when it comes to automation and management tools, this is where you’ll still fall short. Here, you must do the best with what’s available using custom shell scripting, whatever APIs (if any) exist, and macro tools like AutoHotkey and AutoIt.
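For the scripting route, a minimal dump-and-rotate wrapper might look like the sketch below. The export command is a hypothetical vendor CLI (shown only in a comment), with a placeholder line standing in for it so the rotation logic is demonstrable; the default backup directory is a temp path—point it at real backup storage in production:

```shell
#!/bin/sh
# Hedged sketch: nightly dump-and-rotate for a legacy database.
# Default dir is a temp path for demonstration; use e.g. /var/backups live.
BACKUP_DIR="${BACKUP_DIR:-${TMPDIR:-/tmp}/legacydb-backups}"
KEEP=7   # number of dumps to retain

mkdir -p "$BACKUP_DIR"
out="$BACKUP_DIR/legacydb-$(date +%Y%m%d-%H%M%S).dmp"

# Replace this placeholder with your platform's real export command, e.g.:
#   legacy_dump --db PROD --out "$out"   # hypothetical vendor CLI
: > "$out"

# Keep only the newest $KEEP dumps, deleting older ones.
ls -1t "$BACKUP_DIR"/legacydb-*.dmp | tail -n +"$((KEEP + 1))" |
  while IFS= read -r old; do rm -f -- "$old"; done

echo "newest dump: $out"
```

Dropped into cron, even a crude wrapper like this buys you scheduled, rotated exports on platforms that ship no native scheduling at all.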

Keeping these systems going isn’t easy, nor is it typically very rewarding. Take pride in the accomplishments you achieve.
