Over the past 20+ years, virtualization has become a standard infrastructure building block for hundreds of millions of applications, including those on public cloud. It triumphed because of the powerful abstraction it provides over physical hardware, allowing for better utilization, management, automation and security. However, with the introduction of any new technology, it’s natural to question past assumptions and revisit previous decisions. For Kubernetes and containers, we’ve heard some folks ask whether it’s preferable to use bare metal or virtualized infrastructure.
The bare metal (really “Linux as host OS”) argument goes like this: Kubernetes and containerization provide some of the basic properties of virtualization, such as process isolation, application packaging, and abstraction, so virtualization is an unneeded layer that adds complexity and overhead, and the right solution is to “simplify” by removing it. There is certainly something seductive to this argument. However, it misses that problems such as resiliency and availability, end-to-end security, and operations management must still be solved. The question facing organizations is how to get this required functionality: built into the infrastructure, or custom-built on top of it.
Rather than looking at the question as bare metal vs. virtualization, we at VMware feel strongly that the right answer is creating “better” infrastructure. Yes, better infrastructure includes virtualization, but it’s so much more than that. Better infrastructure means infrastructure that delivers everything you need to run Kubernetes in production. It means you don’t have to worry about the undifferentiated heavy lifting and can instead focus on creating business value. Better infrastructure means better Kubernetes.
Last year, we released both VMware Cloud Foundation with Tanzu and vSphere with Tanzu. These offerings deeply integrate Kubernetes into VMware Cloud Foundation and vSphere, respectively, making them enterprise-grade Kubernetes-native platforms. This enables modern Kubernetes-based applications to run side by side with traditional VM-based applications, taking advantage of all the powerful features and capabilities of VMware infrastructure. This is better infrastructure delivering better Kubernetes. Here’s how:
Easier for Operators
You can quickly start running containerized apps on the VMware infrastructure on which most of your workloads already run optimally today, using your existing vSphere ecosystem tools and skillsets. VMware virtualization takes care of the complexity of hardware management and makes management easier by unifying traditional and containerized apps on a single platform. It can drastically reduce your operational overhead by automating deployment of Kubernetes clusters and ongoing “day 2” operations such as backup, migration, patching, and monitoring. In a “less” infrastructure model, solutions to these problems must be custom-built, slowing time to value and increasing cost.
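As a sketch of what automated cluster deployment looks like, a vSphere with Tanzu operator can declare a full Kubernetes cluster as a single manifest. The cluster name, namespace, VM class, and storage class names below are hypothetical placeholders, and exact field names can vary by release:

```yaml
# Hypothetical TanzuKubernetesCluster manifest; names and classes are
# placeholders — check your release's API reference for exact fields.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: team-dev
spec:
  topology:
    controlPlane:
      count: 3                       # HA control plane
      class: best-effort-small       # VM class (placeholder name)
      storageClass: vsan-default     # storage class (placeholder name)
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default
  distribution:
    version: v1.18                   # shorthand version hint
```

Applying a manifest like this hands lifecycle work — provisioning, scaling, and upgrades — to the platform instead of a custom-built pipeline.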
Better Developer Experience
Developers now have self-service access to the whole VMware infrastructure via a unified and native Kubernetes API. Using declarative configuration via the command line or the tool of their choice, they can quickly provision K8s pods, namespaces, clusters, VMs, and even developer services, like databases and S3-compatible object storage, to build modern applications. The virtualization layer provides better flexibility and workload mobility, API-driven automation, and speed to support developer self-service access.
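For instance, that declarative self-service flow can be as simple as applying standard Kubernetes YAML. The namespace, pod, and image names here are illustrative placeholders:

```yaml
# Illustrative example: declaratively provision a namespace and a pod
# through the standard Kubernetes API (all names are placeholders).
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha
---
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  namespace: team-alpha
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

A developer applies this with `kubectl apply -f app.yaml` and the platform reconciles the declared state, no infrastructure ticket required.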
Superior Resiliency and Availability
VMware virtualization seamlessly handles resiliency issues. It typically restarts a failed or problematic Kubernetes node before Kubernetes itself detects the problem. It provides availability of the Kubernetes control plane by utilizing mature heartbeat and partition-detection mechanisms to monitor servers, Kubernetes VMs, and network connectivity, enabling quick recovery in the event of a failure. (Kubernetes control plane issues are particularly operationally challenging in a “less” infrastructure environment!) With proactive failure detection, live migration, automatic load balancing, restart after infrastructure failures, and highly available storage, you can prevent service disruption and performance impacts. This is even more important for your most demanding and critical stateful workloads, such as databases and the Kubernetes control plane(s).
Stronger Isolation and Security
VMware virtualization delivers hardware-level isolation at the Kubernetes cluster, namespace, and even pod level (the customer decides the best approach!). VMware infrastructure also enables the pattern of many smaller Kubernetes clusters, providing true multi-tenant isolation with a reduced fault domain. Smaller clusters reduce the blast radius: a problem in one cluster affects only the pods in that cluster and won’t impact the broader environment. In addition, smaller clusters mean each developer or environment (test, staging, production) can have their own cluster, allowing them to install their own CRDs or operators without risk of adversely affecting other teams.
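The CRD point is worth making concrete: CustomResourceDefinitions are cluster-scoped objects, so two teams sharing one big cluster can collide on them, while per-team clusters cannot. A minimal CRD (group and names are hypothetical) looks like:

```yaml
# Minimal illustrative CRD; the group "example.com" and kind "Backup"
# are placeholders. Note the CRD itself is cluster-scoped, so in a
# shared cluster every tenant sees (and can conflict with) it.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  names:
    kind: Backup
    plural: backups
    singular: backup
  scope: Namespaced        # instances are namespaced, but the definition is not
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Because the definition lives at cluster scope, giving each team its own small cluster lets them version and evolve CRDs like this independently.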
Excellent Performance
VMware virtualization delivers excellent performance that can, perhaps surprisingly, sometimes exceed that of bare metal through powerful resource management and NUMA optimizations that compensate for any virtualization overhead. vSAN Direct Configuration™ empowers performance-sensitive stateful applications by enabling direct access to underlying direct-attached storage hardware. The vSphere Distributed Resource Scheduler (DRS) transparently balances efficiency and performance for any workload in the cluster, thus reducing resource waste and contention while allowing for higher resource utilization of the underlying infrastructure.
Lower Total Cost
One of the principal arguments for “less” infrastructure is lower overall cost. Yet VMware infrastructure delivers the lowest overall TCO through CapEx savings from higher resource utilization and OpEx savings from simpler management. Higher resource utilization comes from enabling resource management and overcommitment in both Kubernetes and VMware infrastructure. By using Kubernetes to manage the application (via pod requests, limits and priorities) and VMware infrastructure to manage the Kubernetes clusters (via resource pools and DRS), you can achieve up to 3x higher resource utilization while simultaneously improving performance! Additionally, VMware handles out of the box much of the functionality that can drive up the cost of a custom solution, such as deployment, patching, and monitoring.
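The Kubernetes half of that equation can be sketched in a pod spec: the scheduler bin-packs on `requests` while `limits` cap bursts, which is what makes safe overcommitment possible. The names, command, and priority class below are hypothetical:

```yaml
# Illustrative pod showing requests/limits/priority; the pod name,
# command, and PriorityClass "batch-low" are placeholders — a
# PriorityClass with that name is assumed to already exist.
apiVersion: v1
kind: Pod
metadata:
  name: reporting-job
spec:
  priorityClassName: batch-low
  containers:
    - name: report
      image: busybox:1.36
      command: ["sh", "-c", "echo running report"]  # placeholder workload
      resources:
        requests:              # guaranteed floor used for scheduling
          cpu: "250m"
          memory: 256Mi
        limits:                # burst ceiling enforced at runtime
          cpu: "1"
          memory: 512Mi
```

Because scheduling decisions use the modest `requests` rather than the peak `limits`, many such pods can share a node, and DRS plays the same role one layer down by balancing the Kubernetes node VMs across hosts.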
For all these reasons and more, if you haven’t looked at VMware infrastructure for building and running your Kubernetes and cloud-native applications, check it out today!
vSphere with Kubernetes top sheet
vSphere with Tanzu free trial