Architectural Best Practices in Cloud Services

When implementing a new solution for our customers, we have identified several practices that help us take a better architectural approach as cloud services continue to advance.

This blog is intended as an initial guide for defining a customer’s architecture solution. The main considerations to take into account are the following:

  • Design for failure and nothing fails. This principle aims to avoid single points of failure by running multiple instances and, when possible, spreading your applications across multiple Availability Zones for robust availability. You can also consider splitting a single server into multiple tiers, where each tier performs a single function (a minimal multi-AZ sketch appears after this list).
  • Build security in every layer. A key recommendation here is to enforce the principle of least privilege, so that no individual or service has more permissions than it actually needs (see the policy sketch after this list).
  • Leverage different storage options. This lets you optimize resources and reduce database overload. For example, move static web assets to an S3 bucket, and store session state in a separate NoSQL database so that instances can be added or removed without losing session information. This also makes your web tier stateless and easy to scale (a session-store sketch appears after this list).
  • Implement elasticity. This principle optimizes resource consumption and reduces costs. To achieve it, define auto-scaling policies that match your architecture and traffic patterns: scale out for periods that need more resources, such as a commercial launch date, and scale in for low-demand periods, such as a quiet weekend (a scheduled-scaling sketch appears after this list).
  • Think parallel. Consider scaling your architecture horizontally instead of vertically: add more computing resources rather than adding more power to the ones you already have, and right-size your infrastructure to your workload to balance cost and performance.
  • Loose coupling sets you free. The less tightly your services are coupled, the further they can scale. Consider using multiple queues between components instead of a single, ordered workflow (a queueing sketch appears after this list).
  • Don’t fear constraints. This principle encourages rethinking traditional constraints. For example, if you think you need more RAM, the traditional approach would be to add extra RAM; instead, consider distributing the load across several commodity instances. What about responding to failure? Imagine that hardware failed or a configuration got corrupted. Rather than wasting time and resources diagnosing the problem and replacing components, favor a rip-and-replace approach: decommission the entire component and spin up a fully functional replacement.

Another example of not fearing constraints is when you need more input/output operations per second (IOPS) from your database. The traditional approach is to rework the schema to squeeze out more IOPS. Again, consider scaling horizontally by spreading the load around: create a read replica for your database and modify your application to separate database reads from writes, as sketched below.
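As a minimal sketch of that last idea, the snippet below creates an RDS read replica and routes read-only queries to it, assuming hypothetical instance identifiers and endpoints (orders-db-primary, orders-db-replica-1); your own routing logic will likely live in an ORM or connection pool rather than a helper function like this.

```python
"""Hedged sketch: add an RDS read replica and split reads from writes."""
import boto3

rds = boto3.client("rds")

# One-time setup: create a read replica of the primary database
# (identifiers below are hypothetical placeholders).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db-primary",
)

# In the application, keep two endpoints and pick one per query.
WRITER_ENDPOINT = "orders-db-primary.example.rds.amazonaws.com"
READER_ENDPOINT = "orders-db-replica-1.example.rds.amazonaws.com"

def endpoint_for(query: str) -> str:
    """Send SELECTs to the replica, everything else to the primary."""
    is_read = query.lstrip().lower().startswith("select")
    return READER_ENDPOINT if is_read else WRITER_ENDPOINT
```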
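For the design-for-failure principle, one common pattern is an Auto Scaling group whose subnets sit in different Availability Zones, so losing one zone does not take the web tier down. The sketch below assumes a hypothetical launch template (web-tier-template) and two subnets in different zones; it is an illustration, not a complete deployment.

```python
"""Hedged sketch: spread a web tier across multiple Availability Zones."""
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",              # hypothetical group name
    LaunchTemplate={
        "LaunchTemplateName": "web-tier-template",    # hypothetical launch template
        "Version": "$Latest",
    },
    MinSize=2,                                        # at least one instance per zone
    MaxSize=6,
    DesiredCapacity=2,
    # Two subnets in two different Availability Zones (placeholder IDs).
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)
```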
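To make the least-privilege recommendation concrete, here is a sketch of an IAM policy that only lets the web tier read objects from a single bucket. The bucket and policy names (example-static-assets, web-tier-read-static-assets) are hypothetical; the point is that the statement grants exactly one action on exactly one resource.

```python
"""Hedged sketch: a least-privilege IAM policy for a web tier."""
import json
import boto3

iam = boto3.client("iam")

read_only_assets = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                            # only the action needed
            "Resource": "arn:aws:s3:::example-static-assets/*",    # only the bucket needed
        }
    ],
}

iam.create_policy(
    PolicyName="web-tier-read-static-assets",          # hypothetical policy name
    PolicyDocument=json.dumps(read_only_assets),
)
```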
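The stateless-web-tier idea from the storage bullet can be sketched with a small session store backed by DynamoDB: any instance can read or write any session, so instances can come and go freely. Table and attribute names (app-sessions, session_id, expires_at) are hypothetical.

```python
"""Hedged sketch: keep session state in DynamoDB, not on the web server."""
import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("app-sessions")   # hypothetical table, partition key: session_id

def save_session(session_id: str, data: dict) -> None:
    # Values must be DynamoDB-supported types (strings, integers, maps, ...).
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + 3600,   # TTL attribute for automatic cleanup
    })

def load_session(session_id: str) -> dict | None:
    response = sessions.get_item(Key={"session_id": session_id})
    return response.get("Item", {}).get("data")
```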
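The elasticity bullet’s example of scaling around known calendar events can be expressed as scheduled scaling actions on the same (hypothetical) Auto Scaling group; the dates and sizes below are placeholders for whatever your traffic pattern actually requires.

```python
"""Hedged sketch: scheduled scaling for a launch date and quiet weekends."""
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of a commercial launch (hypothetical date and sizes).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier-asg",
    ScheduledActionName="launch-day-scale-out",
    StartTime="2025-11-28T06:00:00Z",
    MinSize=6,
    MaxSize=20,
    DesiredCapacity=10,
)

# Scale in every Friday evening, when weekend demand is low.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier-asg",
    ScheduledActionName="weekend-scale-in",
    Recurrence="0 22 * * FRI",                  # cron expression, UTC
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
)
```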
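Finally, for loose coupling, a queue between the component that accepts work and the component that performs it lets each side scale independently. The sketch below uses an SQS queue with a hypothetical name (order-events) and a placeholder message; in practice the producer and consumer would run in separate services.

```python
"""Hedged sketch: decouple a producer and a worker fleet with an SQS queue."""
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]

# Producer: the web tier publishes the event and returns immediately.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "12345", "action": "charge-and-ship"}),
)

# Consumer: a separate worker drains the queue at its own pace.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    order = json.loads(message["Body"])          # fulfilment logic would go here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```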

Although these are just a few examples for reference, you will probably identify more scenarios when thinking about your own architecture solution, so keep enriching this list and improving your decision-making process.
