I’ve built distributed applications for years. Put simply, you break an application and its data into parts that run on the platforms that will provide the best performance and reliability.

Back in the day, we deployed applications on smaller hardware platforms, such as x86-based servers, so we needed distribution. Distributed processing let us scale to greater processing loads and set up active/active failover systems, where one system could instantly back up the other in the event of an outage.

Much of this was no longer necessary once public clouds arrived on the scene. Public clouds did the systems distribution for you, using tenant management to provide scalability and resiliency, since you could set up virtually any platform or platform configuration. This is called an intracloud.

Now that multicloud is a thing, I keep getting the question: Are there any good reasons to place parts of an application or the application’s data on different cloud brands? If so, what are the best practices?

Probably the best way to answer this question is to look at an example. Let’s say we have a retail sales application with the user interface processing running on AWS, the database running on Google Cloud, and the core transaction-processing system running on Microsoft Azure. Broken down even further, portions of the application run on-premises, and more subprocessing runs on AWS and Google Cloud.
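To make that cross-cloud chatter concrete, here is a minimal sketch of the topology in Python. Everything in it is hypothetical: the component names, endpoints, and the process_sale flow are illustrative only. The point is that a single sale would hop across four different network, security, and billing domains.

```python
# A minimal sketch of the hypothetical topology above. The component
# names, endpoints, and process_sale flow are all illustrative, not a
# real deployment; the point is the number of cross-cloud hops.

# Each component lives on a different cloud brand, so every call
# between them crosses the public internet or a paid interconnect.
TOPOLOGY = {
    "ui": {"cloud": "AWS", "endpoint": "https://ui.retail.example.com"},
    "database": {"cloud": "Google Cloud", "endpoint": "https://db.retail.example.com"},
    "transactions": {"cloud": "Microsoft Azure", "endpoint": "https://tx.retail.example.com"},
    "inventory": {"cloud": "on-premises", "endpoint": "https://inv.corp.example.com"},
}

def process_sale(order: dict) -> dict:
    """Trace one retail sale across the topology.

    In a real system each hop would be an HTTPS call; here we only
    record the path so the cross-cloud chatter is visible.
    """
    hops = []
    for component in ("ui", "transactions", "inventory", "database"):
        node = TOPOLOGY[component]
        # Every step in this chain is a separate network, security,
        # and billing domain: added latency, egress fees, and one
        # more set of credentials to manage.
        hops.append(f'{component} on {node["cloud"]}')
    return {"order_id": order["id"], "path": " -> ".join(hops)}

if __name__ == "__main__":
    print(process_sale({"id": 1001}))
```

Even in this toy version, one sale touches four providers, and each arrow in the printed path is a place where latency, egress cost, and security policy diverge.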

This configuration can be made to work. But I would have real concerns about viability, long-term operations, complexity, and cost, not to mention security. In other words, we can make it work, but should we?
