September 19, 2022

Article
9 min

Why You Should Consider a Composable Data Centre

By bringing the advantages of the software-defined data centre to the components of converged, hyperconverged or disaggregated infrastructure designs, composability lets us allocate resources from resource pools we've created and published for consumption.

What's Inside
  • How to provide cloud-like capabilities on premises

    In moving away from single-vendor integrated solutions towards best of breed and ‘as-a-service’ provisioning, we introduced the challenge of supporting IT infrastructure more effectively.

  • How hyperconverged infrastructure provides flexibility

    A hyperconverged platform combines compute, storage and networking into standardized building blocks and units of deployment, on which all workloads are deployed.

  • What is composability?

    Composability combines the advantages of the software-defined data centre, applied to the underlying infrastructure components, whether they reside within converged, hyperconverged or disaggregated infrastructure designs.

  • Bringing it all together to build the modern data centre

    The modern data centre is a result of this evolution from converged and hyperconverged infrastructure stacks to the application of concepts of the software-defined data centre.


Looking back at information technology infrastructure for on-premises deployments, clients historically evaluated whether to choose single-vendor solutions comprising server, storage and networking, or to acquire best-of-breed components and build a solution to suit their specific requirements. A best-of-breed approach provided the best combination of capabilities, performance and cost, but introduced the potential for interoperability challenges and required customers to self-support the environment or work with a consulting partner to deliver the solution and provide after-sales support for the technology stack they chose to invest in. So while a best-of-breed approach offers the potential for differentiation and technical advantages, the benefit of having to make just one support call cannot be overstated.

In building out infrastructure to support IT requirements, one of the shifts was towards solutions that could be delivered as a shared service. An example of this was in backup and recovery, which has moved away from implementing a solution individually for each server and application in the environment, with local software and dedicated backup (tape) devices. The shift has been to a more manageable, centralized and cost-effective solution that would be able to support backup and recovery across all applications, operating systems and server platforms, integrated with the storage infrastructure. This shift resulted in a single, centralized solution providing backup as a service for the client and thus began the underpinnings of today’s ‘as-a-service’ shift in provisioning IT resources.

How to provide cloud-like capabilities on premises

In moving away from single-vendor integrated solutions towards best of breed and ‘as-a-service’ provisioning, we introduced the challenge of supporting IT infrastructure more effectively. With the advent of public cloud (or hyperscalers, as they're known today), the ability to allocate resources to a specific workload, such as virtual servers, delivers substantial flexibility and scalability because we no longer have to pre-provision or pre-purchase assets in anticipation of requirements. We can simply allocate them as and when required, either on demand using spot instances or on a reserved basis. The upfront investment is no longer required, and substantial flexibility, scalability and agility can be achieved.

When we attempt to adopt this capability for on-premises workloads, we need to acquire infrastructure that can deliver the same flexibility and scalability for workloads through orchestration, automation and provisioning of the underlying infrastructure, just as the hyperscalers do. Of course, we can't scale beyond the capability of whatever infrastructure assets we acquire.

Building cloud-like capabilities on premises initially began with the concept of a private cloud involving either single vendor, fully integrated solutions, or multivendor, best of breed, certified technology stacks, incorporating compute, storage and networking and a hypervisor to virtualize and share resources for multiple workloads. This returns us to the concept of a single support call for an entire technology stack, whether it be from a single vendor or from the organization selling/supporting a multivendor, best-of-breed approach.

With the introduction of pre-qualified, pre-certified technology stacks we also gained predictability in sizing since a standard configuration could be pre-tested for various workload types ahead of purchase and deployment. This gave customers predictability when scaling their infrastructure, supporting both technology integration planning and budget cycle approvals.

How hyperconverged infrastructure provides flexibility

Another approach to cloud-like flexibility on premises is hyperconverged infrastructure (HCI), which attempts to provide many of the same benefits as converged solutions using standardized building blocks. A hyperconverged platform combines compute, storage and networking into standardized building blocks and units of deployment, on which all workloads are deployed. These building blocks are configured as a cluster, allowing the foundational infrastructure to scale out over time in a predictable manner. Different foundational building blocks can be introduced to serve specific needs (like GPU-enabled hyperconverged nodes for VDI), with workload orchestration ensuring the right physical resources are allocated for each workload based on need and demand at deployment time.
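The orchestration step described above can be sketched in a few lines: given a workload's resource needs, pick the first free building block whose node type can satisfy them. This is a minimal illustration; the node types, capacities and workload shapes are assumptions for the example, not drawn from any vendor's catalogue.

```python
# Hypothetical HCI building-block catalogue: a general-purpose node type and
# a GPU-enabled node type (e.g. for VDI). Capacities are illustrative.
NODE_TYPES = {
    "general": {"cpu": 32, "ram_gb": 512, "gpu": 0},
    "gpu":     {"cpu": 32, "ram_gb": 512, "gpu": 2},
}

def place(workload, free_nodes):
    """Return the first free node whose type satisfies the workload's needs."""
    for node_id, node_type in free_nodes.items():
        spec = NODE_TYPES[node_type]
        if (spec["cpu"] >= workload["cpu"]
                and spec["ram_gb"] >= workload["ram_gb"]
                and spec["gpu"] >= workload.get("gpu", 0)):
            return node_id
    return None

free = {"n1": "general", "n2": "gpu"}
vdi = {"cpu": 16, "ram_gb": 256, "gpu": 1}  # VDI desktop pool needs a GPU
print(place(vdi, free))  # -> "n2" (only the GPU node qualifies)
```

A real orchestrator would also weigh current utilization and demand, but the core matching of workload requirements to physical building blocks follows this shape.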

What follows from the shift towards converged and hyperconverged infrastructure is the need to provide a software orchestration layer that provides management and automation for workload deployments. The software-defined data centre allows us to standardize the underlying technology stack, enabling the deployment of a workload anywhere. The underlying infrastructure can be moulded to meet the demands of a workload and stood up using standard building blocks for network, compute and storage, allowing the software to allocate those resources to build platforms to run physical, virtual and container platform nodes. This would satisfy the needs of both traditional legacy and modern applications. With software-defined data centres, the ability to both allocate and release assets is possible.

Taking this concept one step further, the disaggregation of the underlying hardware components into their constituent functions allows us to allocate each of them individually based on the specific requirements of a workload. Given an application and associated workload profile we can allocate the required CPU, memory, storage, GPUs and network connectivity to meet the demands of that application at the time it is deployed. Resources can be allocated from resource pools and these resource pools can be scaled as required to support the growth of the organization independently to the extent possible given the underlying physical infrastructure, thus bringing us to the concept of composability.
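The pool model above can be made concrete with a short sketch: each disaggregated resource type sits in its own pool, a workload is "composed" by drawing from several pools at once, and releasing the workload returns that capacity. Pool sizes and the profile are illustrative assumptions.

```python
# Illustrative sketch of composable, disaggregated resource pools.
class Pool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.free = capacity

    def allocate(self, amount):
        if amount > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= amount

    def release(self, amount):
        self.free = min(self.capacity, self.free + amount)

# Each resource type scales independently, up to its physical capacity.
pools = {"cpu": Pool(128), "ram_gb": Pool(2048), "gpu": Pool(8)}

def compose(profile):
    """Allocate each resource in a workload profile from its own pool."""
    for resource, amount in profile.items():
        pools[resource].allocate(amount)

def decompose(profile):
    """Return a workload's resources to their pools."""
    for resource, amount in profile.items():
        pools[resource].release(amount)

app = {"cpu": 16, "ram_gb": 256, "gpu": 1}
compose(app)
print(pools["cpu"].free)   # 112 -- capacity drawn from the pool
decompose(app)
print(pools["cpu"].free)   # 128 -- capacity returned on release
```

Because each pool is independent, the organization can grow CPU, memory, storage or GPU capacity separately, which is the practical payoff of disaggregation.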

What is composability?

Composability combines the advantages of the software-defined data centre, applied to the underlying infrastructure components, whether they reside within converged, hyperconverged or disaggregated infrastructure designs. Composability gives us the ability to allocate resources as needed from resource pools that we've created and published for consumption.

The purpose of all of this, of course, is to give us capabilities like those the hyperscalers provide to clients today: the flexibility, scalability and agility of deploying workloads in public cloud, achieved on premises using composability in a software-defined data centre. To take advantage of this, customers need to be able to deploy workloads both on premises and in the cloud using a common set of tools, such that the application owner need not be concerned with where the application runs; it can be provisioned using resources that meet the workload’s resource profile.

A common set of orchestration, configuration and automation toolsets abstracts away any underlying complexity. The application owner or developer can capture the needs of their application – performance, availability, disaster recovery, SLA and security – in a profile definition, and through orchestration and automation the infrastructure supports that application wherever those resources exist. Ideally, we should be able to deploy an application anywhere, be it on premises or on a hyperscaler, where those requirements can be met.
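The profile-driven placement described above can be sketched as a simple filter: define the application's requirements declaratively, then return every target (on premises or hyperscaler) whose capabilities satisfy them. The target names and capability flags here are hypothetical assumptions for illustration.

```python
# Hypothetical deployment targets and their capabilities.
TARGETS = {
    "on_prem":     {"max_latency_ms": 5,  "data_residency": "CA", "gpu": True},
    "hyperscaler": {"max_latency_ms": 40, "data_residency": "US", "gpu": True},
}

def choose_targets(profile):
    """Return every target that satisfies all requirements in the profile."""
    matches = []
    for name, caps in TARGETS.items():
        latency_ok = caps["max_latency_ms"] <= profile["max_latency_ms"]
        residency_ok = profile.get("data_residency") in (None, caps["data_residency"])
        gpu_ok = (not profile.get("gpu")) or caps["gpu"]
        if latency_ok and residency_ok and gpu_ok:
            matches.append(name)
    return matches

# A latency-sensitive workload whose data must stay in Canada:
profile = {"max_latency_ms": 10, "data_residency": "CA", "gpu": False}
print(choose_targets(profile))  # -> ['on_prem']
```

Real profiles would also carry availability, disaster recovery, SLA and security attributes, but the pattern is the same: the owner states requirements, and orchestration resolves them to a location.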

Bringing it all together to build the modern data centre

The modern data centre is the result of this evolution: from converged and hyperconverged infrastructure stacks to the application of software-defined data centre concepts on common infrastructure components that can be disaggregated and configured to support workload placement – physical, virtual, containers or microservices. This set of resources is available and can be scaled as the environment requires, allowing us to treat both on-premises and public cloud in the same way, using the same set of tools to support application portability, scalability and agility.

Containers, microservices and cloud-native technologies are the new building blocks of digital transformation. However, the complexity and fast-evolving nature of Kubernetes coupled with the limitations of legacy infrastructure put business agility and speed-to-market gains out of reach for many organizations.

Nutanix Enterprise Cloud enables you to run locally on a single, hyperconverged platform, consolidating workloads to reduce infrastructure footprint, power and cooling. You can avoid costly upgrades by utilizing standard branch office power and networking infrastructure and deliver enhanced local data protection – without the need for skilled onsite resources.

With Nutanix’s HCI as your foundation, you can fast-track your way to a production-grade, on-prem Kubernetes stack ready for hybrid-cloud operation to deliver predictable performance and scalability for a wide variety of applications and workloads with a standardized, repeatable and cost-effective solution.

Eliminating the manual provisioning of resources and the configuration of those assets also gives us tremendous stability within the organization. Infrastructure can be deployed in a fully automated fashion, and pre-defined policies for management and security can be applied and enforced without manual user intervention. Our goal is to eliminate manual work, and with it the opportunity for errors to be introduced into our deployment process; automation achieves this and gives us predictability, stability and improved security through consistency.
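Policy enforcement in an automated pipeline can be sketched as a gate that every deployment request passes through before any resources are provisioned, so no manual review can be skipped or applied inconsistently. The policy names and spec fields below are illustrative assumptions, not a real product's schema.

```python
# Minimal sketch of pre-defined policies enforced at deployment time.
# Each policy is a name plus a predicate over the deployment spec.
POLICIES = [
    ("encryption at rest", lambda spec: spec.get("encrypted", False)),
    ("no public ingress",  lambda spec: not spec.get("public_ingress", False)),
    ("backup configured",  lambda spec: "backup_schedule" in spec),
]

def violations(spec):
    """Return the names of every policy the deployment spec violates."""
    return [name for name, check in POLICIES if not check(spec)]

spec = {"encrypted": True, "backup_schedule": "daily"}
print(violations(spec))  # -> [] : compliant, safe to provision automatically
```

Because the same checks run identically on every deployment, the consistency (and therefore the predictability and security) comes from the automation itself rather than from operator discipline.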

Moving forward, the choice of where a particular workload will run will largely depend upon where its data resides and what resources it needs, as well as which co-dependent workloads – databases, applications, web front ends – may influence its provisioning. The underlying infrastructure provisioned for a workload, regardless of whether it's delivered by a hyperscaler or by a software-defined composable data centre running on your premises, will largely depend on the characteristics you define as part of its workload profile: the resources it requires, its priority, SLA, budget and any other metrics deemed relevant.

Modernizing the data centre does not start and end with new compute and storage platforms. Being able to consolidate tools, protect all your workloads against ransomware and enable automation for rapid recovery is a must. CDW partners such as Commvault can help take the guesswork out of protecting your data and workloads to enable a seamless hybrid IT environment.

With 25 years of experience in IT, Michael Traves has a wealth of knowledge in data management, high availability and disaster recovery. As a Principal Solutions Architect at CDW, Michael focusses on solutions in DevOps, AI and cloud.