Editor’s note: Andy describes the peculiarities of a cloud-native infrastructure and shares ScienceSoft’s best practices on how to shape it. If you are considering cloud migration but don’t know how to shape the cloud environment to your needs, take a look at ScienceSoft’s offering in cloud consulting services.
According to IDG’s 2020 Cloud Computing Study, 92% of surveyed organizations are already in the cloud or on their way there. So, if you are planning cloud migration, you are following a well-justified trend toward lower IT infrastructure costs. Another growing trend is to develop apps right in the cloud, which I consider a feasible alternative to migration, as shown, for example, in this ScienceSoft project. The feasibility of that approach hinges on the cloud-native infrastructure, so let’s look at it in more detail.
A cloud-native infrastructure is a cloud environment that supports the entire life cycle of applications designed and developed to operate in the cloud. A classic cloud-native app is a mesh of isolated services, which keeps the app as a whole stable: it does not stop operating when a single service goes down. Such a granular architecture also allows improving the app on the go, without operational downtime or systemic failures.
Important aspects of the cloud-native infrastructure
For the deployment of cloud apps, I recommend using containers to package software code together with all the dependencies needed to run an app or a service. Containers consume fewer cloud resources than full virtual machines and can be easily configured, scaled, replicated and orchestrated with systems such as Kubernetes. Containerization also facilitates CI/CD implementation and infrastructure automation.
To make the cloud more attractive to users, major cloud providers offer PaaS services for developing, testing, deploying, managing and updating cloud applications: AWS Lambda, Azure Functions, Google App Engine, etc. PaaS frees you from cumbersome server management and lets you extend your cloud infrastructure with ready-made modules for AI, machine learning, IoT, blockchain, etc., with no extra development effort.
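To make the PaaS model concrete, here is a minimal AWS Lambda handler in Python: you write only the function body, while the platform provisions servers, scales on demand and bills per invocation. The event shape below follows the API Gateway proxy integration convention; the greeting logic itself is just an illustration.

```python
import json

def handler(event, context):
    # `event` carries the incoming request; `context` holds runtime metadata.
    # With the API Gateway proxy integration, query parameters arrive under
    # "queryStringParameters" (which may be None if the request has none).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind API Gateway, this function scales from zero to thousands of concurrent executions without any server configuration on your side.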
With the Infrastructure as Code (IaC) approach, your DevOps team can automate the setup of the cloud infrastructure and the management of its components. Configuration files let the team spin up uniform, instantly configured development environments and trace every change committed to the infrastructure.
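The core of IaC is declare-and-reconcile: the desired infrastructure is described as data, and tooling converges the actual state toward it. The toy Python sketch below illustrates only that principle; real IaC tools such as Terraform or AWS CloudFormation apply the same idea against actual cloud APIs, and all names here are hypothetical.

```python
# Desired state, as it would be kept in a version-controlled config file.
DESIRED = {
    "web-vm": {"size": "small", "region": "eu-west-1"},
    "db-vm": {"size": "large", "region": "eu-west-1"},
}

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))
    return actions
```

Because the reconciler only emits the difference between the two states, running it repeatedly is idempotent, which is exactly what makes IaC-managed environments reproducible and traceable.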
As the services of a cloud-native app are decoupled and have clear criteria of functional operability, they lend themselves to a high level of automation: they can be developed simultaneously and then assembled, tested and deployed through branching CI/CD pipelines.
Cloud infrastructures are driven by virtual computing nodes, such as EC2 instances in AWS or VMs in Azure and Google Cloud Platform. Each component of a cloud infrastructure consumes the CPU, RAM or storage capacities attributed to it, and this consumption should track demand in time, scaling up or down. That’s why I recommend automating resource orchestration to:
- Reduce cloud consumption by scaling down when a service is idle.
- Ensure sufficient performance of a service by scaling up.
Depending on your objectives, you can make the virtual instances scale dynamically against the metrics of interest (including predictive metrics) or on a schedule if you expect load surges.
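The metric-driven variant boils down to a simple control loop. The sketch below mirrors the replica-count formula the Kubernetes Horizontal Pod Autoscaler uses (scale so that per-replica utilization approaches a target); the parameter values are illustrative, not tied to any real platform’s defaults.

```python
import math

def desired_replicas(current_replicas, cpu_utilization, target=0.6,
                     min_replicas=1, max_replicas=10):
    """Scale so that per-replica CPU utilization approaches `target`.

    cpu_utilization is the current average utilization across replicas,
    expressed as a fraction (0.9 == 90%).
    """
    if current_replicas == 0:
        return min_replicas
    raw = current_replicas * (cpu_utilization / target)
    # Clamp to the configured bounds so an outlier metric cannot
    # scale the service to zero or to an unaffordable size.
    return max(min_replicas, min(max_replicas, math.ceil(raw)))
```

A load surge of 3 replicas at 90% CPU against a 60% target yields 5 replicas; an idle service of 4 replicas at 10% CPU is scaled down to the minimum of 1, directly serving the two goals listed above.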
In addition to resource autoscaling, cloud platforms provide load balancing to distribute traffic and computing loads across the virtual instances. And if a cloud vendor offers access to a global Content Delivery Network, cloud load balancers can route traffic to and from the nearest edge servers, making your cloud app highly responsive.
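At its core, the routing decision such a balancer makes can be sketched as: among the healthy backends, pick the one with the lowest observed latency, a stand-in here for “nearest edge server.” Real balancers (AWS ELB, Azure Front Door and the like) layer health probes, weights and session affinity on top; the data below is illustrative.

```python
def pick_backend(backends):
    """backends: list of (name, healthy, latency_ms) tuples.

    Returns the name of the healthy backend with the lowest latency.
    """
    healthy = [b for b in backends if b[1]]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return min(healthy, key=lambda b: b[2])[0]

# Hypothetical edge servers as seen from one client region.
edges = [
    ("eu-west", True, 24.0),
    ("us-east", True, 95.0),
    ("ap-south", False, 11.0),  # lowest latency, but failing health checks
]
```

Note that health state trumps latency: the nominally closest edge is skipped while its health checks fail, which ties load balancing back to the monitoring layer discussed next.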
Monitoring of a cloud-native app can be divided into two layers:
- Health checks to determine whether a microservice is functional at all. The health state is automatically reported to the host platform, which can scale the dedicated virtual instances up or down.
- Metrics analysis giving a more detailed picture of app performance. It is mostly used by developers to automate up/downscaling of the app or to plan changes to the app if service level indicators aren’t met.
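The two layers can live side by side in one small component, as in this sketch: a coarse liveness signal for the host platform, plus a metrics snapshot for dashboards and scaling decisions. The class, field names and the 50% error-rate threshold are all illustrative assumptions, not a real monitoring API.

```python
import time

class ServiceMonitor:
    def __init__(self):
        self.started = time.time()
        self.requests = 0
        self.errors = 0

    def record(self, ok: bool):
        """Called once per handled request."""
        self.requests += 1
        if not ok:
            self.errors += 1

    def health(self):
        """Layer 1: a coarse go/no-go signal for the host platform."""
        error_rate = self.errors / self.requests if self.requests else 0.0
        return {"status": "ok" if error_rate < 0.5 else "failing"}

    def metrics(self):
        """Layer 2: detailed numbers for dashboards and autoscaling."""
        return {
            "uptime_s": time.time() - self.started,
            "requests": self.requests,
            "errors": self.errors,
        }
```

In a real deployment, `health()` would back an HTTP liveness endpoint probed by the orchestrator, while `metrics()` would be scraped by a monitoring system such as Prometheus.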
A cloud-native app lets you build both perimeter and component-level security. However, integrating access verification mechanisms into each app component may burden performance. To avoid this, I suggest using intra-component authentication: a signed-in user gets a token, which is then compared with a reference token cached in each service to grant or deny access. This technique greatly contributes to app security with minimal effect on performance.
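One common way to implement such a local token check is a signed token: the user authenticates once and receives a token each service can verify on its own, using a shared signing key, without a network round trip to a central auth service. The sketch below uses HMAC-SHA256 via Python’s standard library; the key and the `user_id.signature` payload format are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical shared signing key; in practice it must be distributed
# to services via a secrets manager, never hardcoded.
SECRET = b"shared-signing-key"

def issue_token(user_id: str) -> str:
    """Issued once at sign-in by the auth service."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str) -> bool:
    """Local check each service runs on every request."""
    try:
        user_id, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest resists timing attacks on the comparison itself.
    return hmac.compare_digest(sig, expected)
```

Production systems typically use a standardized format such as JWT with an expiry claim, but the performance argument is the same: verification is a local computation, so per-request auth adds negligible latency.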
Tip 1: Get an experienced DevOps team skilled in:
- Automation: IaC-based infrastructure setup, CI/CD pipelines, infrastructure management automation.
- Containerization to make your infrastructure a resource-friendly system easily reproducible on any cloud platform.
- App monitoring to make sure your app adheres to the set business goals throughout its entire life cycle.
Tip 2: Avoid cloud-agnostic architectures, as they are resource-hungry and hard to extend functionally. Instead, I advise building your cloud-native infrastructure directly on a platform that natively supports containerization and provides PaaS functionality to reduce your development effort and infrastructure costs (e.g., Azure, AWS, GCP). This way, you’ll build and optimize the infrastructure faster and cheaper.
Going cloud-native brings tangible benefits: low infrastructure costs, a shorter development cycle, high app security and high app availability. However, building a cloud-native infrastructure requires a certain level of IT maturity and domain-specific skills. If you feel you lack the expertise to cope with the task, feel free to contact my colleagues and me.
Want to stay technologically advanced and still focused on your core business activities? We are ready to help you manage your complex IT environment.