Table of contents
- What is Software-Defined Memory?
- Memory Types Explained (DRAM, PMEM, NVMe)
- The Consumers and Use-cases for Pooled and Tiered Memory
- The Challenges of Exponential Data Growth
- Summary of Modern Memory Challenges
- What is VMware Project Capitola?
- What Project Capitola Means for the Future of Memory Management
VMware has continued to innovate in the enterprise data center with cutting-edge products that are now household names, such as vSAN and NSX. Over the years, it has transformed how we look at compute, storage, and networking by creating an abstraction layer on top of physical hardware that makes these resources much easier to consume, manage, and tier.
While memory and compute are often tied together, VMware has now unveiled a technology set to bring to server memory the same kind of advantages that vSAN brought to server storage. VMware Project Capitola, introduced at VMworld 2021, is a new software-defined memory solution poised to revolutionize how server memory is consumed in the data center.
What is Software-Defined Memory?
With the memory challenges and business problems described throughout this article, the idea of a software-defined memory solution comes into focus. As mentioned at the outset, the natural parallel is VMware vSAN: vSAN takes the physical storage assigned to each physical vSAN host and pools it logically at the cluster level.
This software-defined approach to physical storage provides tremendous advantages in terms of flexibility, scalability, and storage tiering, giving customers the tools needed to solve modern storage problems. However, while VMware has pushed the envelope in most of the major areas of the data center (compute, storage, and networking), memory has so far remained a simple hardware resource: it is virtualized and presented to guest VMs, but it is still assigned directly to the underlying guest operating systems.
What if we had a solution that aggregated both the amount of memory installed across physical ESXi hosts and the different types of memory installed in each host? Software-defined memory would allow organizations to make intelligent decisions about how memory is used across the environment and assigned to various resources. In addition, memory could be pooled and tiered across the environment to satisfy different SLAs and performance use cases, much as VMware vSAN allows for storage today.
Memory Types Explained (DRAM, PMEM, NVMe)
There are three memory technologies widely used in the data center today:
- DRAM
- PMEM
- NVMe
DRAM
DRAM (Dynamic Random-Access Memory) is the standard type of memory found in servers and workstations today. It is very durable and extremely fast in terms of access times and latency. However, it has one major downside: it cannot retain data without power. This characteristic of DRAM is known as volatility.
When DRAM loses power for any reason, the data contained in the DRAM modules is lost and must be retrieved again from physical disk storage.
PMEM
PMEM (Persistent Memory) is a non-volatile memory technology: it retains its data even after a power loss. It is high-density and offers low-latency access times approaching DRAM. PMEM still lacks the raw speed of DRAM, but it is much faster than the flash memory used in SSDs.
Intel® Optane™ is a 3D XPoint memory technology that is gaining momentum at the enterprise server level as an extremely performant memory technology with the advantage of non-volatility. In addition, Intel® Optane™ provides excellent performance even with multiple write operations running in parallel, something that SSDs and other flash-based storage technologies lack. This type of memory is also referred to as "storage-class memory."
At this time, Intel® Optane™ is not meant to be a replacement for DRAM. Instead, it complements existing DRAM, providing excellent performance and high reliability. It is best seen as a secondary tier of memory suited to various use cases, and it is much cheaper than DRAM: whereas DRAM runs around $7-$20/GB, storage-class memory like Intel® Optane™ is around $2-$3/GB.
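To make that price gap concrete, here is a quick back-of-the-envelope calculation in Python using rough midpoints of the $/GB figures above. The 1 TB host capacity and the 50/50 tier split are illustrative assumptions only, not VMware sizing guidance.

```python
# Rough cost comparison: an all-DRAM host versus the same capacity split
# between DRAM and storage-class memory (SCM). Prices are approximate
# midpoints of the $/GB ranges quoted above; capacity and split are
# illustrative assumptions, not sizing guidance.

DRAM_PER_GB = 13.5   # ~midpoint of the $7-$20/GB range
SCM_PER_GB = 2.5     # ~midpoint of the $2-$3/GB range (e.g., Optane)

TOTAL_GB = 1024      # hypothetical per-host memory capacity
SCM_SHARE = 0.5      # hypothetical share served from the cheaper tier

all_dram_cost = TOTAL_GB * DRAM_PER_GB
tiered_cost = (TOTAL_GB * (1 - SCM_SHARE) * DRAM_PER_GB
               + TOTAL_GB * SCM_SHARE * SCM_PER_GB)

print(f"All-DRAM host: ${all_dram_cost:,.0f}")
print(f"Tiered host:   ${tiered_cost:,.0f}")
print(f"Savings:       {1 - tiered_cost / all_dram_cost:.0%}")
```

With these assumed numbers, the tiered configuration lands roughly 40% cheaper for the same raw capacity, which is in the same ballpark as the 30-50% TCO improvement VMware cites for Project Capitola later in this article.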
NVMe
NVMe (Non-Volatile Memory Express) is not a memory technology itself but an interface for accessing SSDs over PCIe. You can therefore think of an NVMe drive as a PCIe-attached SSD, which makes it much faster than a traditional SATA SSD. NVMe storage is becoming mainstream in the data center, especially for high-speed storage devices, and it is fast enough to be used as a slower memory tier in certain use cases.
The Consumers and Use-cases for Pooled and Tiered Memory
Across data center infrastructure, many organizations are finding their applications have become memory-bound. Memory is also a significantly expensive component of physical server infrastructure today; it can comprise as much as 50% of the price of a two-socket physical server.
Data needs are expanding significantly. Many organizations running large database servers find that the memory initially allocated to database workloads grows over time. Many companies are also leveraging in-memory databases, and as these grow, so does the demand for host memory. Some even find this demand doubling every 18-24 months.
In addition, memory is often intentionally over-provisioned at the hardware level to account for maintenance operations. Why? During maintenance, the overall capacity of a virtualization cluster is reduced, and the remaining hosts must absorb the memory footprint of the host in maintenance. Note the comments of an IT admin at a major US airline:
“I am running mission-critical workloads; I need 35% excess memory capacity at the cluster level, which I am not even using most of the time.”
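That 35% figure lines up with simple N+1 arithmetic: to absorb one host entering maintenance, the remaining hosts must together hold that host's memory footprint, so roughly 1/(N-1) of cluster memory has to stay free. A minimal sketch, with arbitrary example cluster sizes:

```python
# Headroom a cluster must keep free so the remaining hosts can absorb the
# memory footprint of one host entering maintenance (simple N+1 reasoning).
# Cluster sizes below are arbitrary examples, not a VMware recommendation.

def maintenance_headroom(hosts: int) -> float:
    """Fraction of cluster memory to keep free to evacuate one host."""
    if hosts < 2:
        raise ValueError("Need at least two hosts to evacuate one")
    # One host's worth of memory must fit into the remaining (hosts - 1).
    return 1 / (hosts - 1)

for n in (3, 4, 5, 8):
    print(f"{n}-host cluster: keep ~{maintenance_headroom(n):.0%} of memory free")
```

A four-host cluster works out to roughly 33% reserved memory, which is essentially the 35% the admin describes paying for but rarely using.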
Even large cloud service providers are challenged by memory contention. Note the comments from one cloud service provider:
“Our cloud deployment instances are also getting memory bound and losing deals due to lack of large memory instances.”
There is no question that organizations across the board are struggling to satisfy the memory requirements of their applications and business-critical workloads while meeting the demands of customers and business stakeholders.
The Challenges of Exponential Data Growth
A trend across the enterprise is that data is growing exponentially. Businesses are collecting, harnessing, and using the power of data for many different use cases. Arguably, data is the most important asset of today's businesses; it has even been referred to as the business world's new "gold" or new "currency."
The reason for the data explosion is that data allows businesses to make better and more effective decisions. For example, pinpointed data helps companies see where they need to invest in their infrastructure, the demographics of their customers, trends in sales, and other essential statistics. The data explosion among businesses is a macro trend that shows no signs of changing.
Data doesn't only support the business internally. Data itself is a commodity that companies buy and sell, and for some it is a primary revenue stream. According to Gartner, by 2022, 35% of large organizations will be sellers or buyers of data via formal online marketplaces, up from 25% in 2020.
Storing the data is only part of the challenge. Businesses also have to make something useful from the data they harvest, and a related trend is that modern organizations want to act on collected data faster, which means data must be processed more quickly. A study by IDC predicts that nearly 30% of global data will be real-time by 2025, underscoring the need for faster processing. Data not processed in time declines in value rapidly.
The challenges around data are driving various customer needs across the board. These include:
- Infrastructure needs to scale to accommodate explosive data growth – This includes scaling compute, memory, storage, and networking. Every hardware area is seeing the demands of data processing grow: more data to process places stress on compute (which is why GPUs are becoming mainstream for data-processing offload), 100 Gbit network connections are becoming mainstream, and all-NVMe storage is being more widely adopted to keep data processing fast.
- In-memory applications are needed for ultra-quick data processing
- Memory is expensive – It is one of the most expensive components in your infrastructure, and customers are challenged to reduce costs while keeping an acceptable level of performance.
- Consistent Day-0 through Day-2 experience – Customers need an acceptable experience from an operations and monitoring perspective.
The digital transformation accelerated by the global pandemic has been a catalyst for the tremendous growth of data seen in the enterprise. Since the beginning of 2020, businesses have had to digitize everything, converting manual processes into fully digital ones to streamline operations and allow them to be completed safely.
Application designs are changing as a result. Organizations are designing applications that must work with ever-increasing datasets across the board. Even though the datasets are growing, the expectation is that applications can process the data faster than ever.
This includes applications that rely on database backends such as SAP, SQL Server, and Oracle. In addition, artificial intelligence (AI) and machine learning (ML) are becoming more mainstream in the enterprise, and SLAs increasingly require these ever-larger data sets to be constantly available.
Virtual Desktop Infrastructure (VDI) remains a business-critical service in the enterprise, but the cost per VDI instance continues to be a challenge. As organizations scale their VDI infrastructure, the demand for memory grows with it. As mentioned, memory is one of the most expensive components in a modern server, so memory consumption is one of the primary cost drivers of VDI infrastructure.
In-memory computing (IMC) is a growing use case for memory consumption. Organizations are accelerating their adoption of memory-based applications such as SaaS platforms and high-velocity time-series workloads. In addition, 5G and IoT mobile edge use cases require real-time data processing that depends on the speed of in-memory processing.
Due to the memory demands of modern applications and the price of standard DRAM, many organizations are turning to alternative technologies for memory use cases. NVMe is being considered, and in some environments used, for this purpose. Although slower than standard DRAM, it can provide a compelling value proposition and ROI in many scenarios.
Summary of Modern Memory Challenges
To summarize, organizations encounter a variety of challenges directly related to memory requirements and constraints:
- Memory is expensive – The cost of memory is a significant part of the overall hardware investment in the data center.
- Deployments are memory-bound – Memory is becoming the resource most in demand and in the shortest supply relative to other system resources.
- Hardware incompatibility and heterogeneity – Up to this point, memory has been tied to and limited by the physical server host. This constraint creates challenges for applications that need more memory than a single physical server can provide.
- Performance SLAs and monitoring – Businesses will continue to have performance demands while needing ever more memory to keep up with applications and data processing.
- Availability and recovery – On top of the performance demands, businesses still need to ensure applications and data are available and can be quickly recovered.
- Operational complexity – To keep up with memory and other resource demands, applications are becoming more complex as they work around memory constraints.
These challenges result in unsustainable costs to meet business needs, both from an infrastructure and application development perspective.
What is VMware Project Capitola?
With the growing memory demands of enterprise workloads, businesses need new ways to satisfy memory requirements for data processing and modern applications. VMware has already redefined the data center in CPU, storage, and networking with products most are familiar with and use today – vSphere, vSAN, and NSX. Now, VMware is working on a solution to help customers solve the modern challenges associated with memory consumption. At VMworld 2021, VMware unveiled a new software-defined memory solution called VMware Project Capitola.
What is VMware Project Capitola? VMware has very much embraced the software-defined approach to solving challenges associated with traditional hardware and legacy data center technologies, and Project Capitola extends that approach to managing and aggregating memory resources. VMware describes the Project Capitola mission as "flexible and resilient memory management built in the infrastructure layer at 30-50% better TCO and scale."
VMware Project Capitola is a technology preview that has been described as the “vSAN of memory” as it performs very similar capabilities for memory management as VMware vSAN offers for storage. It will essentially allow customers to aggregate tiers of different memory types, including:
- DRAM
- PMEM
- NVMe
- Other future memory technologies
It enables customers to implement these technologies cost-effectively and delivers memory intelligently and seamlessly to workloads and applications. Thus, VMware Project Capitola helps address both the challenges faced by operations teams and those faced by application developers:
- Enterprise operations – VMware Project Capitola allows seamless scaling of memory tiers based on demand and unifies heterogeneous memory types into a single platform for consumption.
- Application developers – With VMware Project Capitola, developers can consume the different memory technologies without needing special APIs.
The memory tiers created by VMware Project Capitola are aggregated into logical memory, which allows memory to be consumed and managed across the platform as a capability of VMware vSphere. This increases overall available memory while intelligently using specific tiers for particular workloads and applications, and it avoids simply exhausting one tier before spilling into the next. Instead, tier placement becomes a business decision based on the SLAs and performance required by the applications.
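As a rough mental model only (not VMware's actual interface or internals), you can picture the host's tiers being summed into one logical capacity, with a policy mapping an application's SLA to a preferred backing tier. The tier sizes and SLA labels below are invented for illustration.

```python
# Conceptual model of tier aggregation: several physical memory tiers are
# presented as one logical capacity, and a simple policy maps a workload's
# SLA to a preferred backing tier. All names and sizes are illustrative.

TIERS_GB = {          # tier name -> installed capacity in GB (hypothetical)
    "DRAM": 512,
    "PMEM": 2048,
    "NVMe": 4096,
}

SLA_TO_TIER = {       # hypothetical policy: SLA level -> preferred tier
    "gold": "DRAM",
    "silver": "PMEM",
    "bronze": "NVMe",
}

def logical_capacity_gb() -> int:
    """Total memory the host can advertise across all tiers."""
    return sum(TIERS_GB.values())

def backing_tier(sla: str) -> str:
    """Choose a tier from the workload's SLA, not from whichever tier
    happens to have free space left."""
    return SLA_TO_TIER[sla]

print(f"Logical host memory: {logical_capacity_gb()} GB")
print(f"'gold' workload backed by:   {backing_tier('gold')}")
print(f"'bronze' workload backed by: {backing_tier('bronze')}")
```

The point of the sketch is the shift it represents: tier placement becomes a policy decision driven by SLAs rather than a side effect of which memory modules happen to be installed.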
VMware Project Capitola details currently known
VMware Project Capitola will be tightly integrated with existing vSphere features and capabilities such as Distributed Resource Scheduler (DRS), so its new capabilities are backed by the standard availability and resource scheduling vSphere already provides.
VMware mentions that VMware Project Capitola will be released in phases: it will first be implemented at the ESXi host level, and features will then be extended to the vSphere cluster. VMware also notes that Project Capitola will preserve current vSphere memory management workflows and capabilities, and it will be available in both on-premises vSphere and cloud solutions.
As expected, VMware is working with various partners, including memory and server vendors (Intel, Micron, Samsung, Dell, HPE, Lenovo, Cisco). It is also working with service providers, ISV partners in the ecosystem, and internal VMware business divisions (Hazelcast, GemFire, and Horizon VDI) to integrate the solution seamlessly with native VMware solutions. VMware is collaborating with Intel as its initial lead partner, with technologies such as Intel® Optane™ PMem on Intel® Xeon® platforms.
Value proposition
- Software-defined memory for all applications – Provides frictionless deployments without retooling applications, addresses memory-bound deployments with large memory footprints, and can lead to faster recovery from failures.
- Operational simplicity – No changes to the way vSphere works. It provides flexibility to tune performance and applications while reducing infrastructure customization for specific workloads.
- Technology agnostic – A pay-as-you-grow model that allows you to tune performance as needed for specific applications and to bring pooled and disaggregated memory to your server fabric.
How does VMware Project Capitola work?
In phase 1 of VMware Project Capitola, tiering is local to each host within a cluster. ESXi, installed on top of the physical server hardware, is where the memory tiers are created, while management of the tiering happens at the cluster level. When VMs are created in the environment, they have access to the various memory tiers.
Future capabilities of VMware Project Capitola will undoubtedly include the ability to control memory tiers based on policies, much like vSAN storage policies today. All current vSphere technologies, such as vMotion, will remain available with VMware Project Capitola, and tiering assignments for workloads will be maintained as they move from host to host.
Overview of VMware Project Capitola architecture
In phase 2 releases of VMware Project Capitola, tiering becomes a cluster-wide capability. In other words, if a workload cannot get the requested tier of memory locally on its ESXi host, it will get that memory from another node in the cluster or from a dedicated memory device.
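A minimal sketch of that fallback order, assuming a simple model in which each node exposes per-tier free capacity; the class names, placement order, and sizes are illustrative, not how ESXi actually implements cluster-wide tiering.

```python
# Conceptual phase-2 placement order: try the requested tier locally first,
# then the same tier on another cluster node. Purely illustrative; not a
# description of ESXi internals.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_gb: dict = field(default_factory=dict)   # tier name -> free GB

    def take(self, tier: str, gb: int) -> bool:
        """Reserve capacity from a tier if enough is free."""
        if self.free_gb.get(tier, 0) >= gb:
            self.free_gb[tier] -= gb
            return True
        return False

def place(local: Node, peers: list, tier: str, gb: int) -> str:
    if local.take(tier, gb):
        return f"{gb} GB of {tier} served locally on {local.name}"
    for peer in peers:            # cluster-wide (or memory-device) fallback
        if peer.take(tier, gb):
            return f"{gb} GB of {tier} borrowed from {peer.name}"
    return f"no {tier} capacity available for {gb} GB"

esx1 = Node("esx1", {"DRAM": 64, "PMEM": 512})
esx2 = Node("esx2", {"DRAM": 256, "PMEM": 512})
print(place(esx1, [esx2], "DRAM", 128))   # local DRAM exhausted -> borrow from esx2
```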
VMware Project Capitola enables transparent tiering
The memory tiering enabled by VMware Project Capitola is called transparent tiering. The virtual machine simply sees the memory allocated to it in vSphere and is oblivious to where the actual physical memory comes from on the ESXi host. VMware vSphere takes care of placing memory pages in the appropriate physical tier.
A simple two-tier memory layout may look like:
- Tier 1 – DRAM
- Tier 2 – Cheaper and larger memory (Optane, NVMe, etc.)
The ESXi host sees the sum of all memory available across its tiers. At the host level, host monitoring and tier sizing decide the tier allocation budget given to a particular VM, based on various metrics, including:
- Memory activity
- Memory size
- Other factors
The underlying VMware Project Capitola mechanisms decide when and where active pages sit in faster tiers of memory or slower tiers of memory. Again, the virtual machine is unaware of where memory pages actually reside in physical memory. It simply sees the amount of memory it is allocated. This intelligent transparent tiering will allow businesses to solve performance and memory capacity challenges in ways not possible before.
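A minimal sketch of that idea, assuming a simple rule of "hottest pages go to the fast tier up to a per-VM budget." The page activity counters, budget, and tier names are invented for illustration; ESXi's real placement logic is not public.

```python
# Toy model of transparent tiering: the VM sees one flat allocation, while
# the host keeps its most active pages in the fast tier up to a per-VM
# budget and demotes the rest. Activity counts and budgets are made up.

def place_pages(page_activity: dict, fast_tier_budget: int) -> dict:
    """Map page id -> tier ('fast' or 'slow').

    page_activity: page id -> recent access count (larger = hotter).
    fast_tier_budget: number of pages this VM may keep in the fast tier.
    """
    hottest_first = sorted(page_activity, key=page_activity.get, reverse=True)
    return {
        page: ("fast" if rank < fast_tier_budget else "slow")
        for rank, page in enumerate(hottest_first)
    }

# Example: six pages, a budget of three fast-tier slots for this VM.
activity = {"p0": 42, "p1": 3, "p2": 17, "p3": 0, "p4": 29, "p5": 8}
for page, tier in sorted(place_pages(activity, fast_tier_budget=3).items()):
    print(f"{page}: {tier}")
```

Because the guest only ever sees its configured memory size, promotions and demotions like these can happen underneath the VM without any change to the application.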
What Project Capitola Means for the Future of Memory Management
VMware Project Capitola is set to change how organizations can solve challenging problems in managing and allocating memory across the environment for business-critical workloads and applications. Today, organizations are bound by physical memory constraints related to physical hosts in the data center. VMware Project Capitola will allow customers to pool memory from multiple hosts in much the same way that vSAN allows pooling storage resources.
While it is currently only shown as a technology preview, VMware Project Capitola already looks extremely interesting and will provide powerful features enabling innovation and flexibility for in-memory and traditional applications across the board.