Table of contents
- Why Shift to Containers?
- Comparing Containers vs. Virtual Machines
- Kubernetes (K8s) is the Modern Key to Running Containers
- Why Run Containers in VMware?
- VMware vSphere Integrated Containers
- Components of vSphere Integrated Containers (VIC)
- How to Use vSphere Integrated Containers
- VMware Embraces Kubernetes with vSphere 7
- What is the Cluster API Provider vSphere?
- Is it Finally Time to Make the Switch?
Unquestionably, organizations today are transforming from traditional infrastructure and workloads, including virtual machines, to modern containerized applications. However, making this transition isn’t always easy, as it often requires organizations to rethink their infrastructure, workflows, and development lifecycles, and to learn new skills. Are there ways to take advantage of the infrastructure already used in the data center today to run containerized workloads? For years, many companies have been using VMware vSphere for traditional virtual machines in the data center. So what are your options to run containers in VMware?
Why Shift to Containers?
Before we look at the options available to run containers in VMware, let’s take a quick look at why we are seeing a shift to running containers in the enterprise environment. There are many reasons, but consider a few of the primary drivers behind the change to containerized applications today.
One of the catalysts to the shift to containerized applications is the transition from large monolithic three-tier applications to much more distributed application architectures. For example, you may have a web, application, and database tier in a conventional application, each running inside traditional virtual machines. With these legacy three-tier architectures, the development lifecycle is often slow and requires many weeks or months to deploy upgrades and feature enhancements.
Upgrading such an application means lifting every tier to a new version of code in lockstep, since the application moves as a single monolithic unit. Modern applications, by contrast, are highly distributed, built from microservice components running inside containers. With this architectural design, each microservice can be upgraded separately from the other application elements, allowing much faster development lifecycles, feature enhancements, upgrades, lifecycle management, and many other benefits.
Organizations are also shifting to a DevOps approach to deploying, configuring, and maintaining infrastructure. With DevOps, infrastructure is described in code, allowing infrastructure changes to be versioned like other development lifecycles. While DevOps processes can use virtual machines, containerized infrastructure is much more agile and more readily conforms to modern infrastructure management. So, the shift to a more modern approach to building applications offers benefits from both development and IT operations perspectives. To better understand containers vs. virtual machines, let’s look at the key differences.
Comparing Containers vs. Virtual Machines
Many have used virtual machines in the enterprise data center. How do containers compare to virtual machines? To begin, let’s define each. A virtual machine is a virtual instance of a complete installation of an operating system. The virtual machine runs on top of a hypervisor that virtualizes the underlying hardware, so the guest operating system typically does not know it is running on a virtualized hardware layer.
Virtual machines are much larger than containers as a virtual machine contains the entire operating system, applications, drivers, and supporting software installations. Virtual machines require operating system licenses, lifecycle management, configuration drift management, and many other operational tasks to ensure they are fully compliant with the set of organizational governance policies decided.
Instead of containing the entire operating system, containers only package up the requirements to run the application. All of the application dependencies are bundled together to form the container image. Compared to a virtual machine with a complete installation of an operating system, containers are much smaller. Typical containers can range from a few megabytes to a few hundred megabytes, compared with the gigabytes of installation space required for a virtual machine with an entire OS.
One of the compelling advantages of running containers in VMware is that they can move between container hosts without worrying about the dependencies. With a traditional virtual machine, you must verify all the underlying prerequisites, application components, and other elements are installed for your application. As mentioned earlier, containers contain all the application dependencies and the application itself. Since all the prerequisites and dependencies move with the container, developers and IT Ops can move applications and schedule containers to run on any container host much more quickly.
Virtual machines still have their place. Installing traditional monolithic or “fat” applications inside a container is generally not practical. Virtual machines provide a great solution for interactive environments or other needs that still cannot be satisfied by running workloads inside a container.
Containers have additional benefits related to security. Managing many virtual machines can become tedious and difficult, particularly around lifecycle management and attack surface. Virtual machines present a larger attack surface since they carry a larger software footprint; the more software installed, the greater the possibility of attack.
Lifecycle management is much more challenging with virtual machines since they are typically maintained for the entire lifespan of an application, including upgrades. This can lead to stale software, old installations, and other baggage carried forward with the virtual machine. Organizations also have to stay on top of security updates for virtual machines.
Containers in VMware also help organizations adopt immutable infrastructure: containers running the current version of the application are not upgraded in place once deployed. Instead, businesses deploy new containers with new application versions, resulting in a fresh application environment each time a new container is deployed.
Note the following summary table comparing containers and virtual machines.
| | Containers | Virtual Machines |
| --- | --- | --- |
| Small in size | Yes | No |
| Contains all application dependencies | Yes | No |
| Requires an OS license | No | Yes |
| Good platform for monolithic app installs | No | Yes |
| Reduced attack surface | Yes | No |
| Easy lifecycle management | Yes | No |
| Easy DevOps processes | Yes | No |
It is easy to think that it is either containers or virtual machines. However, most organizations will find that there is a need for both containers and virtual machines in the enterprise data center due to the variety of business use cases, applications, and technologies used. These two technologies work hand-in-hand.
Virtual machines are often used as “container hosts.” They provide the operating system kernel needed to run containers, and as container hosts they can take advantage of hypervisor features such as high availability and resource scheduling.
Kubernetes (K8s) is the Modern Key to Running Containers
Businesses today are looking at running containers and refactoring applications for containerization, and they are looking at doing so using Kubernetes. Kubernetes is the single most important aspect of running containers in business-critical environments.
Simply running your application inside a container does not satisfy the needs of production environments, such as scalability, performance, high availability, and other concerns. For example, suppose you have a microservice running in a single container that goes down. In that case, you are in the same situation as running the service in a virtual machine without some type of high availability.
Kubernetes is the container orchestration platform that allows businesses to run their containers much like they run VMs today, in a highly available configuration. Kubernetes can schedule containers to run on multiple container hosts and reschedule containers from a failed host onto a healthy container host.
While some companies may run simple containers on Docker or containerd and take care of scheduling with homegrown orchestration or other means, most are looking at Kubernetes to solve these challenges. Kubernetes is an open-source solution for managing containerized workloads and services, and it provides modern APIs for automation and configuration management.
Kubernetes provides:
- Service discovery and load balancing – Kubernetes allows businesses to expose services using DNS names or IP addresses. It can also load balance between container hosts and distribute traffic between the containers for better performance and workload balance
- Storage orchestration – Kubernetes provides a way to mount storage systems to back containers, including local storage, public cloud provider storage, and others
- Automated rollouts and rollbacks – Kubernetes provides a way for organizations to perform “rolling” upgrades and application deployments, including automating the deployment of new containers and removing existing containers
- Resource scheduling – Kubernetes can run containers on nodes in an intelligent way, making the best use of your resources
- Self-healing – If containers fail for some reason, Kubernetes provides the means to restart, replace, or kill containers that don’t respond to a health check, and it doesn’t advertise these containers to clients until they are ready to service requests
- Secret and configuration management – Kubernetes allows intelligently and securely storing sensitive information, including passwords, OAuth tokens, and SSH keys. Secrets can be updated and deployed without rebuilding your container images and without exposing secrets within the stack
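To make these capabilities concrete, here is a minimal sketch of a Kubernetes Deployment and Service manifest. All names, the image reference, and the port numbers are hypothetical, chosen purely for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # hypothetical application name
spec:
  replicas: 3                     # Kubernetes self-heals back to 3 running Pods
  selector:
    matchLabels:
      app: web-frontend
  strategy:
    type: RollingUpdate           # automated rollouts: Pods are replaced incrementally
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.4.2   # assumed image reference
          ports:
            - containerPort: 8080
          readinessProbe:         # traffic is withheld until the Pod passes this check
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  selector:
    app: web-frontend             # service discovery and load balancing across the Pods
  ports:
    - port: 80
      targetPort: 8080
```

Applying a manifest like this (for example, with kubectl apply -f) gives Kubernetes the desired state to enforce: it schedules the three replicas across nodes, restarts them if they fail, withholds traffic until the readiness probe passes, and load balances requests across the Pods through the Service. Upgrading the application then becomes a matter of changing the image tag and reapplying the manifest.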
Why Run Containers in VMware?
Why would you want to run containers in VMware when vSphere has traditionally been known for running virtual machines and is aligned more heavily with traditional infrastructure? There are many reasons for looking at running your containerized workloads inside VMware vSphere, and there are many benefits to doing so.
There have been many exciting developments from VMware over the past few years in the container space, with new solutions that allow businesses to keep pace with containerization and Kubernetes effectively. In addition, according to VMware’s numbers, some 70+ million virtual machine workloads are running worldwide inside VMware vSphere.
It helps to get a picture of the vast number of organizations using VMware vSphere for today’s business-critical infrastructure. Retooling and completely ripping and replacing one technology for something new is very costly from a fiscal and skills perspective. As we will see in the following overview of options for running containers in VMware, there are many excellent options available for running containerized workloads inside VMware, one of which is a native capability of the newest vSphere version.
VMware vSphere Integrated Containers
The first option for running containers in VMware is to use vSphere Integrated Containers (VIC). So what are vSphere Integrated Containers? How do they work? The vSphere Integrated Containers (VIC) offering was introduced back in 2016 with the vSphere 6.5 release and was the first offering from VMware to allow organizations to have a VMware-supported solution for running containers side-by-side with virtual machines in VMware vSphere.
It is a container runtime for vSphere that allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Also, vSphere administrators can manage these workloads by using vSphere in a familiar way.
The VIC solution to run containers in VMware is deployed using a simple OVA appliance installation to provision the VIC management appliance, which allows managing and controlling the VIC environment in vSphere. The vSphere Integrated Containers solution is a more traditional approach that uses virtual machines as the container hosts with the VIC appliance. So, you can think of the VIC option to run containers in VMware as a “bolt-on” approach that brings the functionality to traditional VMware vSphere environments.
With the introduction of VMware Tanzu and especially vSphere with Tanzu, vSphere Integrated Containers is not the best option for greenfield installations to run containers in VMware. In addition, August 31, 2021, marked the end of general support for vSphere Integrated Containers (VIC). As a result, VMware will not release any new features for VIC.
Components of vSphere Integrated Containers (VIC)
What are the main components of vSphere Integrated Containers (VIC)? Note the following architecture:
Architecture overview of vSphere Integrated Containers (VIC)
- Container VMs – have the characteristics of software containers, including ephemeral storage, a custom Linux guest OS, persisting and attaching read-only image layers, and automatically configuring various network topologies
- Virtual Container Hosts (VCH) – The equivalent of a Linux VM that runs Docker, providing many benefits, including clustered pool of resources, single-tenant container namespace, isolated Docker API endpoint, and a private network to which containers are attached by default
- VCH Endpoint VM – Runs inside the VCH vApp or resource pool. There is a 1:1 relationship between a VCH and a VCH endpoint VM.
- The vic-machine utility – the utility binary for Windows, Linux, and macOS used to manage the VCHs in your VIC environment
How to Use vSphere Integrated Containers
As an overview of the VIC solution, getting started using vSphere Integrated Containers (VIC) is relatively straightforward. First, you need to download the VIC management appliance OVA and deploy this in your VMware vSphere environment. The download is available from the VMware customer portal.
Download the vSphere Integrated Containers appliance
Let’s look at the deployment screens for deploying the vSphere Integrated Containers appliance. The process to deploy the VIC OVA appliance is the standard OVA deployment process. Choose the OVA file for deploying the VIC management appliance.
Select the OVA template file
Name the VIC appliance.
Name the VIC appliance
Select the compute resource for deploying the VIC appliance.
Select the compute resource for deploying the VIC appliance
Review the details of the initial OVA appliance deployment.
Review the details during the initial deployment
Accept the EULA for deploying the OVA appliance.
Accept the EULA during the deployment of the OVA appliance
Select the datastore to deploy the VIC appliance.
Select the storage for the VIC appliance
Select the networking configuration for the VIC appliance.
Choose your virtual network to deploy the VIC appliance
On the customize template screen, configure the OVA appliance configuration details, including:
- Root password
- TLS certificate details
- Network configuration (IP address, subnet mask, gateway, DNS, DNS search order, and FQDN)
- NTP configuration
- Other configurations
Customize the VIC appliance template configuration
Review and finalize the configuration for the VIC appliance.
Finish the deployment of the VIC appliance
Once the VIC appliance is deployed, you can browse to the hostname you have configured for VIC. You will see the following configuration dialog displayed. Enter your vCenter Server information, connection details, and the password you want to configure for the VIC appliance.
Wizard to complete the VIC appliance installation
Accept the thumbprint for your vCenter Server
Once the installation finishes, you will see the successful installation message. The dashboard provides several quick links to manage the solution. As you can see, you can also go to the vSphere Integrated Containers Management Portal to get started.
Installation of VIC is successful
Once you deploy the VIC appliance, you can download the vSphere Integrated Containers Engine Bundle to deploy your VIC container hosts. Once the container hosts are provisioned, you can run the container workloads you need for development.
The syntax to create the Virtual Container Host in VIC is as follows:
vic-machine-windows create \
  --target vcenter_server_address \
  --user "administrator@vsphere.local" \
  --password vcenter_server_password \
  --bridge-network vic-bridge \
  --image-store shared_datastore_name \
  --no-tlsverify \
  --force
Once you have configured the Virtual Container Host, you can create your Docker containers. For example, you can create a Docker container running Ubuntu with the following:
docker -H <VCH IP address>:2376 --tls run -it ubuntu
To learn more details on how to deploy vSphere Integrated Containers, take a look at the posts here:
- vSphere Virtual Integrated Containers and how to deploy them (altaro.com)
- Getting Started with vSphere Integrated Containers in vSphere 6.7 (altaro.com)
- How to Deploy Containers Using vSphere Integrated Containers (altaro.com)
VMware vSphere Integrated Containers – End of General Support
As noted above, vSphere Integrated Containers is now at the end of general support as of August 31, 2021. Why is VMware ending support? Again, due to the advancement in containerized technologies, including Tanzu, VMware is moving forward without VIC. The official answer from VMware on the End of General Support FAQ page for vSphere Integrated Containers (VIC) notes:
“VMware vSphere Integrated Containers (VIC) is a vSphere feature that VMware introduced in 2016 with the vSphere 6.5 release. It is one of the first initiatives that VMware had in the container space to bring containers onto vSphere.
In the last few years, the direction of both the industry and the cloud-native community has moved to Kubernetes, which is now the de facto orchestration layer for containers. During this time, VMware also made significant investments into Kubernetes and introduced several Kubernetes-related products including vSphere with Tanzu which natively integrates Kubernetes capabilities into vSphere. vSphere with Tanzu enables containers to be a first-class citizen on the vSphere platform with a much-improved user experience for developers, dev-ops (platform Op/SRE) teams and IT admins.
Given both the industry and community shift towards Kubernetes and the launch of vSphere with Tanzu, which incorporated many of the concepts and much of the technology behind VIC with critical enhancements such as the use of the Kubernetes API, we decided that it is time to end our support to VIC as more and more of our customers start moving towards Kubernetes.”
As mentioned on the End of Support FAQ page, VMware sees the direction moving forward with Kubernetes technologies. VMware Tanzu provides the supported solution moving forward, running Kubernetes-driven workloads in VMware vSphere.
VMware Embraces Kubernetes with vSphere 7
Organizations today are keen on adopting Kubernetes as their container orchestration platform. With VMware vSphere 7, VMware took a significant stride forward for native containerized infrastructure with the introduction of VMware Tanzu. VMware vSphere 7 introduces native Kubernetes support, built into the ESXi hypervisor itself. This means running containers orchestrated by Kubernetes is no longer a bolt-on solution; instead, it is a native feature provided by a new component in the ESXi hypervisor.
In addition, vanilla Kubernetes can be challenging to implement. Tanzu provides an integrated and supported way for organizations to use the infrastructure they already run today to implement Kubernetes containers moving forward.
Due to the seamless integration and many other key features of Tanzu, the new Tanzu Kubernetes offering is a far superior way to run containers in VMware in 2022 and beyond. For this reason, VMware is phasing out vSphere Integrated Containers in favor of moving forward with VMware Tanzu.
VMware Tanzu is an overarching suite of solutions first announced at VMworld 2019. It provides solutions allowing organizations to run Kubernetes across cloud and on-premises environments. For example, with vSphere with Tanzu (codenamed Project Pacific), businesses can run Tanzu Kubernetes right in the VMware vSphere hypervisor. However, it extends beyond vSphere with Tanzu and includes the following solutions:
- Tanzu Kubernetes Grid
- Tanzu Mission Control
- Tanzu Application Service
- Tanzu Build Service
- Tanzu Application Catalog
- Tanzu Service Mesh
- Tanzu Data Services
- Tanzu Observability
There are two types of Kubernetes clusters configured with vSphere with Tanzu architecture. These include the following:
- Supervisor cluster – The Supervisor cluster uses the VMware ESXi hosts themselves as worker nodes by way of the Spherelet, essentially the ESXi equivalent of the kubelet. The advantage of the Spherelet is that it does not run inside a virtual machine but natively in ESXi, which is much more efficient.
- Guest cluster – The guest cluster is run inside specialized virtual machines for general-purpose Kubernetes workloads. These VMs run a fully compliant Kubernetes distribution
vSphere with Tanzu architecture
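As a sketch of how a guest (Tanzu Kubernetes) cluster is requested declaratively through the Supervisor cluster, a manifest along these lines can be used. The cluster name, namespace, Kubernetes version, VM class, and storage class names are all assumptions chosen for illustration and will vary by environment:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster                # hypothetical cluster name
  namespace: dev-namespace         # an assumed vSphere Namespace created by the admin
spec:
  distribution:
    version: v1.21                 # assumed Tanzu Kubernetes release version
  topology:
    controlPlane:
      count: 3                     # control plane VMs for the guest cluster
      class: best-effort-small     # assumed VM class name defined in vSphere
      storageClass: vsan-default-storage-policy   # assumed storage policy name
    workers:
      count: 3                     # worker VMs for general-purpose workloads
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

The Supervisor cluster reconciles a manifest like this into the specialized guest cluster virtual machines described above, so provisioning a full Kubernetes cluster becomes a single declarative request.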
To learn more about VMware Tanzu, take a look here:
VMware Tanzu Community Edition (TCE)
VMware Tanzu Community Edition (TCE) is a newly announced VMware Tanzu solution that makes Tanzu-powered containers available to the masses. The project is free and open-source. However, it can also run production workloads using the same distribution of VMware Tanzu available in the commercial offerings. In addition, it is a community-supported project that allows the creation of Tanzu Kubernetes clusters for many use cases, including local development.
You can install VMware Tanzu Community Edition (TCE) in the following environments:
- Docker
- VMware vSphere
- Amazon EC2
- Microsoft Azure
Tanzu Community Edition installation options
Recently, VMware introduced the unmanaged cluster type with the Tanzu Community Edition (TCE) 0.10 release. The new unmanaged cluster cuts the time to deploy a Tanzu Community Edition cluster roughly in half and takes the place of the standalone cluster type found in previous releases.
The new unmanaged cluster is the best deployment option when:
- You have limited host resources available
- You only need to provision one cluster at a time
- A local development environment is needed
- Kubernetes clusters are temporary and are stood up and then torn down
When looking at options to run containers in VMware in 2022, Tanzu Community Edition (TCE) is a great option to consider as it may fit the use cases needed for running containers in VMware environments. In addition, it offers an excellent option for transitioning away from vSphere Integrated Containers (VIC) and allows organizations to take advantage of Tanzu for free. It also provides a great way to use VMware Tanzu Kubernetes for local development environments.
What is the Cluster API Provider vSphere?
Another interesting project to run containers in VMware vSphere is the Cluster API Provider vSphere (CAPV) project. The Cluster API gives organizations a declarative, Kubernetes-style API to manage cluster creation, configuration, and management. The CAPV project implements the Cluster API for vSphere. Since the API is shared, it allows businesses to have a truly hybrid deployment of Kubernetes across their on-premises vSphere environments and multiple cloud providers.
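To illustrate the declarative style the Cluster API enables, a rough sketch of the paired Cluster and VSphereCluster objects is shown below. API versions and field names vary across Cluster API releases, and the names and vCenter address here are hypothetical, so treat this purely as an illustration of the pattern rather than a working configuration:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capv-demo                  # hypothetical cluster name
spec:
  infrastructureRef:               # delegates infrastructure details to the vSphere provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: capv-demo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: capv-demo
spec:
  server: vcenter.example.com      # assumed vCenter Server address
```

Because the same Cluster object shape is used by every infrastructure provider, the vSphere-specific details stay isolated in the VSphereCluster object, which is what makes hybrid deployments across vSphere and public clouds practical.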
You can download the CAPV project for running Kubernetes containers in VMware vSphere here:
Is it Finally Time to Make the Switch?
With the tremendous shift to microservices in modern application architecture, businesses are rearchitecting their application infrastructure using containers. The monolithic three-tier application architecture days are numbered as businesses are challenged to aggressively release enhancements, updates, and other features on short development lifecycles. Containers provide a much more agile infrastructure environment compared to virtual machines. They also align with modern DevOps processes, allowing organizations to adopt Continuous Integration/Continuous Deployment (CI/CD) pipelines for development.
VMware has undoubtedly evolved its portfolio of options to run containers. Many organizations currently use VMware vSphere for traditional workloads, such as virtual machines. Continuing to use vSphere to house containerized workloads offers many benefits. While vSphere Integrated Containers (VIC) has been a popular option for organizations who want to run containers alongside their virtual machines in vSphere, it has reached the end of support status as of August 31, 2021.
VMware Tanzu provides a solution that introduces the benefits of running your containerized workloads with Kubernetes, which is the way of the future. The vSphere with Tanzu solution allows running Kubernetes natively in vSphere 7.0 and higher. This new capability enables organizations to use the software and tooling they have been using for years without retooling or restaffing.
VMware Tanzu Community Edition (TCE) offers an entirely free edition of VMware Tanzu that allows developers and DevOps engineers to use VMware Tanzu for local container development. You can also use it to run production workloads. In addition, both the enterprise Tanzu offering and VMware Tanzu Community Edition can be run outside of VMware vSphere, providing organizations with many great options for running Kubernetes-powered containers for business-critical workloads.