I’ve built distributed applications for years. Put simply, you break an application and its data into parts that run on the platforms that will provide the best performance and reliability.
Back in the day, we deployed applications on smaller hardware platforms, such as x86-based servers, so distribution was a necessity. Distributed processing let us scale to greater processing loads and set up active/active failover systems in which one system could instantly back up the other in the event of an outage.
Much of this became unnecessary when public clouds arrived on the scene. Public clouds handled systems distribution for you through tenant management, and they provided scalability and resiliency because you could set up virtually any platform or platform configuration. Distribution within a single cloud like this is called an intracloud.
Now that multicloud is a thing, I keep getting the question: Are there any reasons to place parts of an application or the application’s data on different cloud brands? If yes, what are the best practices?
Probably the best way to answer this question is to look at an example. Let’s say we have a retail sales application with the user interface processor running on AWS, the database running on Google Cloud, and the core transaction-processing system running on Microsoft Azure. Broken down even further, portions of the application run on-premises, with additional subprocessing on AWS and Google Cloud.
This configuration can be made to work. But I would have real concerns about viability, long-term operations, complexity, and cost, not to mention security. In other words, we can make it work, but should we?
I’ve built a few of these architectures during the past few years. Here’s what I’ve found:
- Latency, as well as ingress and egress, can be costly problems. You can run private circuits from one cloud provider to another, but most enterprises won’t accept the cost of doing so. They soon discover that the bursty nature of the open Internet causes peer-to-peer communication problems that impede production. Moreover, cloud providers charge for data transfer, especially egress, to and from a public cloud. You’ll need to understand how to project those costs. Typically, it’s not cheap.
- Security becomes much more complex. It’s easy to turn on a cloud-native security system within a public cloud provider for an application that runs on that provider. For cross-cloud distributed applications, you’ll either have to leverage native security on each cloud to protect the application components and application data or find some common security solution that can span all public clouds. Each approach means more cost and risk.
- Your application becomes more vulnerable to outages. You might think that by not putting all the eggs into a single cloud’s basket, an application would become more resilient to outages. That can be the case for applications that run redundantly across cloud brands. However, if you split a single application across two or more clouds, the opposite is true: if any one of those clouds goes down, the application stops, because every component must be up for it to function.
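To get a feel for the data-transfer charges mentioned above, here is a back-of-the-envelope sketch in Python. The per-gigabyte rate and traffic volume are illustrative assumptions, not any provider’s actual pricing:

```python
def monthly_egress_cost(gb_per_month, rate_per_gb):
    """Rough monthly egress bill: volume times the per-GB rate.
    Real pricing is tiered and varies by provider and region."""
    return gb_per_month * rate_per_gb

# Assumptions: 50 TB/month crossing cloud boundaries at $0.09/GB
# (an illustrative rate; check your provider's current price sheet).
gb = 50 * 1024
cost = monthly_egress_cost(gb, 0.09)
print(f"${cost:,.2f} per month")  # $4,608.00 per month
```

Even at a modest rate, chatty cross-cloud traffic adds up quickly, which is why projecting these costs before committing to the architecture matters.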
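The outage point can be made concrete with basic probability. Here is a minimal sketch in Python, using assumed (illustrative, not SLA) availability figures, comparing a single application split serially across clouds against the same application deployed redundantly on each cloud:

```python
def serial_availability(availabilities):
    """Availability when every component must be up (one app split
    across clouds): the product of the individual figures."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def redundant_availability(availabilities):
    """Availability when any one copy suffices (the same app deployed
    redundantly on each cloud): 1 minus the chance all are down."""
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)
    return 1.0 - all_down

# Assume each cloud offers 99.9% availability (an illustrative figure).
clouds = [0.999, 0.999]

print(f"split across both clouds: {serial_availability(clouds):.6f}")    # 0.998001
print(f"redundant on both clouds: {redundant_availability(clouds):.6f}") # 0.999999
```

Splitting the app multiplies the failure modes together and lowers overall availability, while redundancy multiplies the *outage* probabilities and raises it, which is the distinction the bullet above draws between distribution and redundancy.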
Today, the only time I would recommend building an application across more than one cloud brand is when there is a specific need to do it; for example, if you can only find the right AI system on one cloud and the right database on another. Even then, I would still look for ways to combine them on a single cloud, but there are instances where cross-cloud distribution is the only choice. In the hundreds of cloud-based applications I’ve built over the years, I’ve seen a true need for cross-cloud distribution only twice. Look hard at your own proposed deployment.
Copyright © 2021 IDG Communications, Inc.