Systems Thinking — Portability
As part of building a resilient system, you also need to think about failure cases in which the system, or parts of it, must migrate.
Having a backup is essential, but sometimes migration is the better alternative, often for economic reasons. If you need to replicate an entire system effectively, a backup may amount to duplicating the whole system and all of its data in real time.
Duplication, by definition, means doubling or even tripling costs, and not just for data storage: processing costs as well.
So the goal becomes a backup approach where the processes and data can be moved or “spun up” elsewhere as quickly and seamlessly as possible.
Ideally, this would also be automatic, but full automation may not be possible for black swan events, such as the hosting company you rely on shutting down.
Portability helps by decoupling your systems from bespoke methods and tooling. That doesn’t mean everything has to be “off the shelf” with no custom design at all; it’s more about the types of technologies and methods you use than whether the systems are pre-built.
For example, a custom Python script can be very portable and open, and is itself a kind of “standard” you could adhere to. Sticking with this example, how portable it is will depend on its dependencies. Does it use any external libraries? Would you need to make sure they, along with the right Python version, are available on whatever server you migrate to?
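To make those questions concrete, here is a minimal sketch of a script that states its own assumptions up front and fails early if the target environment doesn’t meet them. The version floor and the `requests` library are hypothetical stand-ins for whatever your script actually needs:

```python
import sys

# Hypothetical minimum version this script was written and tested against.
MIN_PYTHON = (3, 9)
if sys.version_info < MIN_PYTHON:
    sys.exit(
        f"This script expects Python {MIN_PYTHON[0]}.{MIN_PYTHON[1]}+ "
        f"but is running on {sys.version.split()[0]}"
    )

try:
    # 'requests' stands in for any external library the script depends on.
    import requests  # noqa: F401
except ImportError:
    sys.exit("Missing dependency 'requests'. Install it first, e.g. pip install requests")

print("Environment checks passed; the real work of the script would start here.")
```

Failing fast like this doesn’t make the script portable by itself, but it turns hidden environmental assumptions into explicit, checkable ones, which is most of the battle when migrating.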
If you can bundle all of the requirements together with the Python script, you make it much more portable. A technology I won’t cover here is containerisation, where everything needed is defined and packaged together in a container.
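Short of containers, one way to do that bundling is to declare the requirements inside the script itself using inline script metadata (PEP 723), which newer tools such as uv or pipx can read to recreate the environment on the destination server. This is a sketch under those assumptions; the file name, version pins, and URL are hypothetical:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests>=2.31",
# ]
# ///
"""Hypothetical portable script (fetch_report.py).

Its Python version and third-party dependencies are declared in the file
itself, so a tool that understands inline script metadata (for example
`uv run fetch_report.py`) can recreate the environment on whatever server
the script is migrated to.
"""
import requests


def main() -> None:
    # Placeholder work: fetch a page and report the HTTP status.
    response = requests.get("https://example.com", timeout=10)
    print(f"Fetched example.com with status {response.status_code}")


if __name__ == "__main__":
    main()
```

If you’d rather stay with standard pip tooling, shipping a pinned requirements.txt alongside the script (for example one generated with pip freeze) gets you much of the same benefit.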