February 17, 2023

Article
9 min

How to Run Modern Workloads in the Data Centre

Modernization shifts away from deploying and managing workloads on virtual machines toward a fully integrated software development pipeline using containers. This does not mean every workload will make this transition, and certainly not all at once.

What's Inside
  • Shift to containers and microservices

    There has been a shift in application development and deployment from virtual machines to containers. This transition to containers is being driven by the need to increase deployment frequency without compromising application quality or stability.

  • Container platform deployment models

    Container platforms may be deployed on-premises either as physical machines or virtual machines, as well as in the cloud on compute instances in a self-supported model.

  • Kubernetes as the orchestration layer

    With Kubernetes as the orchestration layer, workloads can be managed and scaled consistently regardless of where they are deployed. If an application needs to run on-prem for data security or compliance reasons, it can.

  • How to support modern workload deployment

    Regardless of where your workloads run, the same set of tools, processes and controls should provide you with the means to deploy workloads anywhere, without reliance on proprietary or manual processes and without regard to the infrastructure they run on.


Your business relies on continuously delivering and optimizing modern software – while providing the best customer experiences and competitive advantage – on any cloud and at the edge. That includes new cloud native applications along with existing apps modernized for cloud environments. It can be complex.

With modernization comes a shift away from deploying and managing workloads on virtual machines and towards a fully integrated software development pipeline leveraging containers. This does not mean every workload will make this transition, however, and certainly not all at once. Virtual machine platforms, such as VMware, still have their place and technology such as VMware Tanzu can support the coexistence of virtual machines and containers on the same platform with the same management tools, offering many clients an easy entry point into modern workload environments.

Optimizing where your workload runs (hybrid, cloud native or on-prem) and how it is delivered (virtual machine, container, microservice or serverless) is not where we stop, however. The compute environment your workload runs on can also make a difference in how well it performs and how easily it transitions between deployment paradigms.

Intel supports data centre innovation with selected solutions to help jumpstart IT modernization. Its support for cloud computing includes a vast selection of cloud tools that utilize hardware-enabled features in Intel® Xeon® Scalable and other processor-based platforms to improve performance and security. The next generation of Intel Xeon processors is designed to deliver the next level of performance, helping businesses build a foundation for data-centric innovation. To help clients reach baseline cloud optimization, Intel tools are designed around workload optimization, cloud optimization and migration resolution.

Shift to containers and microservices

For the past several years there has been a shift in application development and deployment from virtual machines to containers as the standard deployment pattern. This transition to containers and microservices is being driven by the need to increase deployment frequency without compromising application quality or stability. It also provides additional benefits from an infrastructure perspective: because containers share the host operating system kernel rather than each carrying a full guest operating system, the overhead required to run them is significantly less than that of virtual machines. This permits greater consolidation ratios while still maintaining workload isolation and east-west security.

This shift requires us to focus on quality and stability of the infrastructure servicing the application workloads, specifically the container platform, and the related ecosystem of interdependent tools utilized to improve scalability, manageability and security. This platform needs to provide a standardized means of running applications regardless of the underlying technology, whether that infrastructure resides on-premises or at a public cloud provider.

While there have been several different container orchestration technologies over the past several years, the market has standardized on Kubernetes for container orchestration going forward. All public cloud providers offer native Kubernetes control planes which customers may take advantage of, greatly simplifying the effort required to run and support a container platform. Alternatively, clients can deploy their own container platform control plane either on-prem or using standard compute instances from any cloud provider.

Microservices offer a further level of granularity beyond containers, essentially breaking an application down into its individual constituent parts and allowing any application component to scale independently of the others. Architected correctly, this can improve both the stability and the performance of the application. For management purposes, a microservice can be thought of as a container at a much smaller scale, ideally representing a workload's components at an atomic level. This can also enable the migration and modernization of applications that still require integration with legacy, monolithic components, such as a database, as part of an application refactoring.

We would be remiss not to mention serverless as a deployment pattern for modern applications. Serverless platforms abstract away the underlying server and container platform infrastructure, allowing individual functions to be run at an atomic level. The platform is responsible for scaling elastically to meet the load on each function and for providing low-latency start-up and response times. While public cloud providers have built serverless platforms for client workloads to consume, a serverless or ‘Functions as a Service’ platform can also be deployed on-prem leveraging the same deployment pattern as containers, and remain compatible with those of public cloud providers, ensuring portability in hybrid environments.
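To make the function-level unit of deployment concrete, here is a minimal sketch assuming an AWS Lambda-style handler signature; the event shape and names are illustrative, not tied to any specific platform:

```python
import json

def handler(event, context=None):
    """A hypothetical function-as-a-service entry point.

    The platform, not the application, decides when and where this runs
    and how many copies to start; the code only ever sees the event.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally the same way the platform would invoke it:
response = handler({"name": "data centre"})
print(response["statusCode"])  # → 200
```

Because the unit is just a function with a well-known signature, the same code can be packaged for a cloud provider's serverless runtime or for an on-prem FaaS platform without change.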

Container platform deployment models

As mentioned, container platforms may be deployed on-prem either on physical machines or virtual machines, as well as in the cloud on compute instances in a self-supported model. Alternatively, clients may wish to leverage a public cloud hyperscaler's managed control plane in a Kubernetes-as-a-service model, focusing on building and deploying their applications rather than managing the underlying container platform they reside on. Standardization on Kubernetes as the container platform provides a way to deploy an application on any of the public clouds or in an on-prem environment consistently, using the same application package and automation controls within a CI/CD process.

Kubernetes as the orchestration layer

With Kubernetes as the orchestration layer, workloads can be managed and scaled consistently regardless of where they are deployed. If an application needs to run on-prem for data security or compliance reasons, it can. Another application without these constraints can be deployed wherever it makes sense and serves the client best. The choice of where to deploy an application may come down to the availability of compute and memory resources or the data it requires. For example, when deploying workloads to support business intelligence initiatives using AI/ML, the volume and variety of potential data input sources can make cloud deployment difficult or expensive when the data is not colocated with the compute.

Other types of workloads that are not dependent on specific data persistence requirements or those where data is replicated between on-prem and public cloud providers could be deployed anywhere it makes the most sense. This gives clients flexibility to choose the lowest cost deployment model for their application at time of deployment, allowing this to change over time as costs change.

Container platforms do not exist within a vacuum. While the platform itself provides the means of deploying, scaling and managing your workloads, it exists within the framework of software development life cycle (SDLC) and continuous integration, continuous deployment (CI/CD) development patterns. Management and versioning of source code throughout the development cycle typically requires tools based on Git, and repositories to store them.

Building code into executable binaries necessitates an artifact repository to manage them. Automated testing and validation tools ensure that code is production ready and meets functional, non-functional, load and security requirements before promotion. CI/CD tools provide the build and promotion automation for moving code from development to test, QA and eventually production, and release management tools provide A/B or feature flag capabilities. All of this must happen with security at each level (source code, vulnerability management, container security scanning, service mesh, etc.). This ecosystem needs to support a hybrid deployment model for modern workloads that span both on-prem and cloud environments, and its tools may exist as SaaS solutions themselves.
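The build-and-promote flow described above can be sketched as a gated pipeline. The stage names, artifact format and checks below are invented for illustration and do not reflect any specific CI/CD product's API:

```python
def build(commit):
    # Package the commit into a versioned, immutable artifact.
    return {"artifact": f"app-{commit[:7]}.tar.gz", "commit": commit}

def run_unit_tests(artifact):
    return True  # placeholder: a real pipeline invokes a test runner here

def scan_for_vulnerabilities(artifact):
    return True  # placeholder: a real pipeline invokes an image scanner here

def gate(artifact):
    # Functional and security checks must pass before any promotion.
    return run_unit_tests(artifact) and scan_for_vulnerabilities(artifact)

def promote(commit, environments=("dev", "test", "qa", "prod")):
    """Build once, then promote the same artifact through each
    environment in order, stopping at the first failed gate."""
    artifact = build(commit)
    promoted = []
    for env in environments:
        if not gate(artifact):
            break  # a failed gate stops promotion before production
        promoted.append(env)
    return artifact, promoted
```

The design choice worth noting is that the artifact is built once and the identical package moves through every environment; only the gates and target differ, which is what makes the promotion repeatable rather than manual.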

How to support modern workload deployment

When deploying solutions to support modern workloads, it is important to do so in a modern way: leveraging automation and orchestration tools to promote repeatability and predictability; preferring immutable images to improve security; using infrastructure as code; shifting from imperative to declarative and from manual to generated configurations; and baking security into every step of the CI/CD process. The principles of GitOps direct you to manage your infrastructure in the same way as your source code, eliminating configuration drift and providing consistency of the deployed environment and application workloads.
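The declarative model that GitOps rests on can be sketched as a reconcile loop: desired state is declared in version control, and an agent continuously converges the running environment toward it. This is an illustrative sketch (the workload names and replica-count model are invented for the example), not any particular tool's API:

```python
def reconcile(desired, actual):
    """Compute the actions needed to converge the observed state on the
    desired state declared in Git. Both arguments map workload name to
    replica count, a deliberately simplified model of cluster state."""
    actions = []
    for name, replicas in desired.items():
        if name not in actual:
            actions.append(("create", name, replicas))
        elif actual[name] != replicas:
            actions.append(("scale", name, replicas))
    for name in actual:
        if name not in desired:
            # Anything running that is not declared in Git is drift.
            actions.append(("delete", name, 0))
    return actions

# Desired state as declared in the Git repository:
desired = {"web": 3, "api": 2}
# Observed state of the environment, including drift:
actual = {"web": 1, "worker": 1}
print(reconcile(desired, actual))
```

Because the loop runs continuously and treats Git as the single source of truth, manual changes to the environment are detected and reverted rather than accumulating as configuration drift.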

Regardless of where your workloads run, the same set of tools, processes and controls should provide you with the means to deploy workloads anywhere, without reliance on proprietary or manual processes and without regard to the infrastructure they run on.

Innovation, optimization and flexibility are key demands for businesses today. As the pressure to adapt to digital transformation grows, companies must work with forward-thinking solution providers like CDW who can successfully facilitate that transformation. CDW can help reduce the complexity with partners such as Intel and VMware.

Intel provides cloud computing solutions that optimize specific workloads and deliver strong performance for each. It aims to increase the return on your cloud infrastructure through tools and solutions that help businesses compete today for tomorrow's business. As Intel continues to build on its cloud tools for optimization and management, it leaves us anticipating what comes next for the data centre.

VMware Tanzu offers a modular, cloud native application platform that makes it easier to build, deliver and operate cloud native apps in a multicloud world – enabling innovation to happen on your terms. With VMware Tanzu, you can empower your developers to be their most productive and inventive; streamline and secure the path to production for continuous value delivery; and operate highly available and performant apps that make your customers happy and help your business thrive.

Overall, CDW and our partners are well positioned to help you take the next step in your modernization journey, embracing the benefits of the modern data centre.

About the Author

With 25 years of experience in IT, Michael has a wealth of knowledge based in data management, high availability and disaster recovery. As a Principal Solution Architect at CDW, Michael focusses on solutions in DevOps, AI and cloud.