By aggregating the knowledge gained from previous chapters, you now have a good understanding of what to expect from a Hyper-V system. With that information and a few tools, you can begin to plan out your deployment. This chapter will help you explore and understand the utilities that you have at your disposal. To get started, it lays out a generalized strategy for moving into the virtual space.
Understand Where You Are
It’s impossible to successfully chart a course to a destination if you don’t know where you are. The very first thing that must be made clear is the current state of your environment. You’ll need to have a thorough inventory of all your hardware-based server environments, their applications, and the needs of those applications. If you’ve got virtual server environments, perhaps running on earlier or competing hypervisors, you’ll need an inventory of those as well. You’ll want to have an idea of future growth, including any planned or in-progress projects that will involve new server environments. If virtual desktop infrastructure (VDI) is on your roadmap, that will have its own planning needs.
If you intend to virtualize a system, you will need a performance profile so that you know how to plan for it. This should be completed for all virtualization candidates before a single piece of hardware for the virtualization platform is ordered. Guidance will be provided later in this chapter.
Determine What You Will Virtualize
There are two extreme approaches to a virtualization strategy. At one end, you virtualize absolutely everything. At the other end, you draw a line between existing systems and new systems; existing systems are left entirely alone and new systems go into the virtual environment. What you choose will likely be somewhere in the middle.
There are a number of guidelines to consider as you decide what to virtualize:
- Once a virtualization platform is in place, it should be the default location for all new server operating system deployments. Essentially, anyone intending to place a new deployment on dedicated physical hardware should be required to present a compelling business case for why it should not be virtualized; no one should ever be asked to explain why a new deployment should be virtual.
- Converting an existing physical deployment to a virtual environment is a non-trivial task. There should be no automatic requirement to move existing loads into the new virtual space. Be wary of consultants and virtualization vendors that attempt to pressure you into virtualizing most or all of your existing physical deployments. It is often better to wait to virtualize a load until a migration is necessary anyway, such as during a hardware refresh.
- Do not perform physical-to-virtual conversions (P2V) on services or applications that have a clearly defined migration process. Work with application vendors.
- Do not perform P2V on services or applications that can be easily expanded and contracted through existing features. As an example, you can easily add a new Active Directory domain controller to an existing domain and demote the original (see the sketch after this list). This avoids all of the problems associated with conversions. Other examples:
- DHCP
- DNS
- Exchange Server
- What you need today is not what you will need tomorrow.
- As organizations discover the power and simplicity of virtualized environments, they tend to find new ways to use them. This almost invariably leads to more virtual server environments being created than were initially anticipated, a phenomenon known simply as “virtual server sprawl”. Always plan for more capacity than you expect to need.
- In a virtual environment, memory is typically the first resource to be exhausted.
- In a virtual environment, CPU is typically the last resource to be exhausted.
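To illustrate the domain controller example above, here is a minimal PowerShell sketch of the expand-and-contract approach: promote a new virtual domain controller, then demote the original physical one. The domain name, account, and server roles shown are placeholders, not values from this chapter.

```powershell
# Sketch only: promote a new (virtual) domain controller, then demote the old physical one.
# 'corp.example.com' and the account names are placeholders for your own environment.

# On the new virtual machine:
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

Install-ADDSDomainController -DomainName 'corp.example.com' `
    -InstallDns `
    -Credential (Get-Credential 'CORP\Administrator') `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString 'DSRM password')

# After replication has completed and any FSMO roles have been moved,
# run this on the old physical domain controller to demote it:
Uninstall-ADDSDomainController -LocalAdministratorPassword (Read-Host -AsSecureString 'Local admin password')
```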
A few reasons not to virtualize that are automatically acceptable:
- Lack of application vendor support for a virtual environment
- Use of a non-supported server environment (ex: a customized Linux distribution tuned for particular hardware)
- Use of specialized hardware (ex: fax boards, PCI processing cards)
- Applications that can conceivably consume the entirety of one or more resources on a host
A few reasons not to virtualize that are not automatically acceptable but warrant investigation:
- Applications that have a better built-in high availability solution than what Hyper-V provides might be better suited for physical installations; examples are Active Directory, DHCP, DNS, and Exchange Server. Even if such technologies bring a superior solution, Hyper-V can provide another layer of resilience at the operational level. Guest portability and centralized backup are positives for most server installations. Efficient use of hardware through virtualization is a strong benefit for lighter roles such as DHCP. Considerations that can help you weigh this decision:
- How easy is it to rebuild the service or application from scratch when compared to restoring a virtual machine? As an example, deploying a Windows Server 2012 R2 image over PXE, adding the DNS role, and adding it into a larger pool of DNS servers is a very quick process (see the sketch after this list); restoring from a virtual machine backup may not save enough time over rebuilding to justify the overhead of that backup (assuming that at least some of the DNS servers are being backed up).
- How easy is it to rebuild the service or application from an operating system backup as opposed to a virtual machine-level backup?
- What is the impact of losing an instance of the service or application? For example, you may have so many members in an Exchange Database Availability Group that virtualizing them just adds overhead. However, such a deployment might also benefit from the rapid provisioning and distribution capabilities of a virtual environment.
- Applications that are exceedingly sensitive to even minute latencies in memory, CPU, disk, or network access often don’t make good virtualization candidates, but this is not a rule. Delays in memory and CPU access are barely measurable in a virtualized environment. Disk and network latencies might be more noticeable. Network needs might be addressed through the use of higher-end hardware and/or SR-IOV. Even with these mitigations, the best choice for applications that truly require real-time performance might be physical deployments.
- Specialized one-off situations might be better suited to physical deployments. For example, you may have an untrusted layer-2 network that you don’t want to stretch into your datacenter but that requires a single server. You could argue that Hyper-V does a good job of network isolation and bring that network into your host anyway. You could also build a stand-alone Hyper-V host and place it outside the datacenter just for that singular role. However, the overhead of either solution may not be practical. At the same time, be mindful that one specialized one-off situation tends to lead to others. If you find that your organization has accumulated multiple “one-off” situations, it is probably a good idea to work on logically grouping them into Hyper-V.
- Domain controllers do not need to be physical. Many organizations insist on retaining a single physical domain controller, but this practice is outmoded and its supporting reasons have long since been shown to be myths. However, if you already have a hardware-based domain controller and have no compelling reason to retire it, then it’s certainly fine to leave it in service. One possible exception to DC virtualization is the system that holds the PDC emulator flexible single-master operations (FSMO) role. Most organizations use it as their root time server, and in some cases the PDC emulator may struggle to keep accurate time when it is virtualized. This is an uncommon situation that usually signifies overworked hardware.
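As a concrete illustration of the rebuild-versus-restore consideration above, the following PowerShell sketch stands up a replacement DNS server rather than restoring one from backup. The zone name and master server address are hypothetical.

```powershell
# Sketch only: stand up a replacement DNS server instead of restoring one from backup.
# The zone name and master server IP address are hypothetical.
Install-WindowsFeature -Name DNS -IncludeManagementTools

# Pull a copy of an existing zone from one of the surviving DNS servers
Add-DnsServerSecondaryZone -Name 'corp.example.com' `
    -ZoneFile 'corp.example.com.dns' `
    -MasterServers 192.168.10.5
```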
Matching Hardware Selections for Hyper-V to Your Needs
You should not proceed beyond this point until you have a clear idea of what will be virtualized upon deployment of the new system. You should also have some reasonable projections of what will be virtualized within the lifetime of the new deployment, or at least the beginnings of a plan to scale out if future needs exceed capacity.
Concrete Hardware Planning
The most accurate way to plan virtualization is to find out what existing systems are actually doing and build a system that can fulfill those needs. There are a few commercial applications that will automatically perform this sort of analysis for you, although they are typically very expensive if you attempt to acquire them outright. If you’re working with a solutions provider or consultant, they may provide one.
If you’re working within a budget, there are two completely free ways to gather information about your environment to plan for virtualization.
Microsoft Assessment and Planning Toolkit
The Microsoft Assessment and Planning Toolkit is available at no charge from Microsoft. The tool is very straightforward to use. This blog article discusses an older version, but can be a good starting point.
For best results, run MAP from a system that will not be doing anything else and will be left on for the duration of all data collection. For the best information, collection will need to run for an extended period of time. If possible, gather at least a month’s worth.
Performance Monitor
For more detailed, controlled, and accurate information, you’ll need to go deep with Performance Monitor. The graphical portion of this tool is built into all graphical editions and versions of Windows. All versions of Windows include performance counters, although the exact counters that exist will vary between systems.
While many administrators are already accustomed to using Performance Monitor’s real-time viewer, the best way to use Performance Monitor as a planning tool is by configuring Data Collector Sets to gather information in the background over a long period of time. Microsoft provides a great deal of documentation on configuring and using Performance Monitor on TechNet. The linked site states that it applies to Windows Vista/7 and Windows Server 2008 R2 and 2012, but Performance Monitor has not changed in later versions. The two sub-sections of primary interest are “Creating Data Collector Sets” and “Scheduling and Managing Data in Windows Performance Monitor”.
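As a concrete starting point, a counter-based Data Collector Set can be created and started from the command line with the built-in logman.exe tool. In this sketch, the collector name, the target computer embedded in the counter paths (SRV01), the counters themselves, the sample interval, and the output location are all illustrative.

```powershell
# Sketch only: create and start a Data Collector Set on the dedicated monitoring
# machine that polls counters from a remote computer named SRV01.
logman create counter SRV01-Baseline `
    -c '\\SRV01\Processor(_Total)\% Processor Time' '\\SRV01\Memory\Available MBytes' '\\SRV01\LogicalDisk(*)\Avg. Disk sec/Transfer' `
    -si 00:00:30 -o C:\PerfLogs\SRV01-Baseline -f bincirc -max 1024

logman start SRV01-Baseline
```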
A couple of tips on performance monitoring:
- Use a single dedicated computer for the task that will not be rebooted or shut down during the collection period.
- Do not measure performance counters locally on the systems being assessed; local collection can be useful for locating bottlenecks, but it imposes its own load on the system, which skews, and therefore somewhat invalidates, the performance baseline.
- Data collection should occur over a long period of time. Anything less than two weeks is inconclusive. One month is an ideal minimum.
- Do not use a single Data Collector Set to connect to more than one computer. Any issue that occurs on one connection places the entire collector set at risk.
- To cut down on the tedium of building similar collector sets for multiple computers, export one to XML, edit a copy of it to change the target computer name to that of another system that uses the same counters, and import the copy into a new Data Collector Set (see the sketch after this list).
- Remember to check on the collector sets each day of the gathering period to ensure that they are still running. You’ll also want to briefly review each one; if a system that is consuming a lot of resources appears to be experiencing an addressable issue, you’ll want to take care of that as early as possible.
- The primary metrics that you are concerned with involve CPU, disk, and memory. The most commonly tracked counters are mentioned in this blog article.
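Here is one way to script the export-edit-import cloning tip from the list above; the collector set and computer names (SRV01/SRV02) are placeholders.

```powershell
# Sketch only: clone an existing Data Collector Set for a second computer.
# Collector set and computer names (SRV01/SRV02) are placeholders.
logman export SRV01-Baseline -xml C:\PerfLogs\SRV01-Baseline.xml

# Point the copy at the second computer, then import it under a new name
(Get-Content C:\PerfLogs\SRV01-Baseline.xml) -replace 'SRV01', 'SRV02' |
    Set-Content C:\PerfLogs\SRV02-Baseline.xml

logman import SRV02-Baseline -xml C:\PerfLogs\SRV02-Baseline.xml
logman start SRV02-Baseline
```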
Once you’ve collected your data, you need to do two things. First, ignore the brief high spikes. You’ve no doubt encountered the phenomenon in which the very act of opening Task Manager causes the CPU to spike to 100%, so you know that these brief high points don’t really impact system performance. Most of those spikes in your performance logs will mean the same thing. What you mainly want to gather are performance averages. In most cases, you’ll find that these are very low. The second thing to gather is any sustained above-average resource utilization. Some of it will be expected; disk and CPU activity should reflect the periods in which backups run.
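If your collectors write binary (.blg) logs, PowerShell’s Import-Counter cmdlet can provide a rough first pass at separating brief spikes from meaningful averages; the log path and counter filter below are placeholders.

```powershell
# Sketch only: summarize a collected performance log, ignoring brief spikes by
# looking at the average and 95th percentile instead of the maximum.
# The .blg path and counter filter are placeholders.
$samples = Import-Counter -Path 'C:\PerfLogs\SRV01-Baseline.blg' |
    ForEach-Object { $_.CounterSamples } |
    Where-Object { $_.Path -like '*\% Processor Time' }

$values  = $samples.CookedValue | Sort-Object
$average = ($values | Measure-Object -Average).Average
$p95     = $values[[int]([math]::Ceiling($values.Count * 0.95)) - 1]

"Average: {0:N1}%  95th percentile: {1:N1}%" -f $average, $p95
```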
Take note of the fact that typical performance monitoring does not capture one very vital statistic: disk space utilization. While there are counters for this, those might be better suited to very long tracing for the purpose of trending, not for performance baselining. You could just log on to the systems and collect this manually. You could also automate it with PowerShell.
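For example, a minimal PowerShell sketch for gathering disk space remotely might look like the following; the server names are placeholders for your own inventory.

```powershell
# Sketch only: gather disk size and free space from a list of servers.
# The server names are placeholders for your own inventory.
$servers = 'SRV01', 'SRV02', 'SRV03'

Get-CimInstance -ClassName Win32_LogicalDisk -Filter 'DriveType = 3' -ComputerName $servers |
    Select-Object PSComputerName, DeviceID,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 1) } },
        @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } } |
    Sort-Object PSComputerName, DeviceID |
    Format-Table -AutoSize
```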
With performance data and disk space needs in hand, you can make some plans. Remember that new hardware might have a dramatic impact. Newer CPUs process data more efficiently. An existing system may not have enough RAM and is therefore paging a lot of data to disk, which makes it appear to need more disk IOPS than it would with sufficient memory.
Abstract Hardware Planning
While running performance traces on existing systems might provide the most accurate view, it’s not always going to be a practical approach. If you’re going to be placing a new application or a new version of an existing system into your new virtual environment, you obviously won’t have an existing installation to track. Also, even if you can demonstrate a particular performance profile, a service vendor may refuse support if you don’t meet or exceed prescribed configuration minimums.
While less accurate, sometimes substantially so, abstract planning is also much easier. The best place to start is by finding out what the vendor requires and/or recommends for CPU, memory, and disk space/IOPS and architecting with that in mind. Keep in mind that these configurations are almost always designed with physical deployments in mind. Even when they are marked as being specific to a virtual environment, they are still almost always derived from the original physical deployment specifications and not from observed deployments.
As you design around software manufacturer recommendations, remember these two rules of thumb in virtualization:
- CPU needs are almost always overstated, and the CPU is almost never saturated.
- Memory is almost always the first resource you run out of.
Planning Around the Particulars of Virtualization
It may be tempting to plan a virtual deployment in the same fashion as physical deployments: simply add up everything you’re going to need and purchase a system that has at least that capacity or more. The problem with this approach is that the resultant system will almost undoubtedly be dramatically overpowered (and therefore, overly expensive).
With any virtualization system, one of the central terms is over-subscription (or something like it, such as over-commitment). These terms mean that collectively, the virtual machines have claimed more total resources than the virtualization host can provide. Hyper-V does not over-subscribe memory in the traditional sense, but Dynamic Memory does allow for a greater total allowed maximum than is physically present in the host. Dynamically expanding VHDX files treat disk space in a similar fashion. CPU and network are definitely over-subscribed.
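As an illustration of both behaviors, the Hyper-V PowerShell module exposes Dynamic Memory settings and dynamically expanding disks directly. The VM name, memory values, and VHDX path in this sketch are examples only, not recommendations.

```powershell
# Sketch only: enable Dynamic Memory on a guest and create a dynamically expanding VHDX.
# The VM name, memory sizes, path, and virtual disk size are examples, not recommendations.
Set-VMMemory -VMName 'APP01' -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB

# A 100 GB dynamically expanding disk consumes only a small amount of physical
# space until data is actually written into it
New-VHD -Path 'D:\VHDs\APP01-Data.vhdx' -SizeBytes 100GB -Dynamic
```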
When planning a physical deployment for a single system, resource waste is almost a given. The minimums in disk space and CPU power that are available in modern systems are often well beyond what is necessary for the typical server application. Adding a few more gigabytes of RAM to a server is relatively inexpensive. However, prices start to escalate rapidly at the scales necessary to support a multi-guest system. Attempting to architect a virtualization host in a strict linear fashion by adding up the necessary guest resources individually is a practice that is almost guaranteed to break your budget. Such a system would be ridiculously underutilized.
When designing the capacities for your virtualization hosts, use these baselines (the sketch after this list turns them into a rough capacity estimate):
- Expect to use Dynamic Memory in every situation except those that expressly forbid it; expect each Windows Server guest instance to use around 2 GB of memory
- Even though CPU is rarely driven to its potential, it is also the most difficult item to upgrade in an existing system. True CPU usage is very difficult to predict without good baselines, but an old guideline was to use no more than 12 virtual CPUs per physical core.
- The “second” core presented by Intel Hyper-Threading cannot be counted as more than 25% of a true core.
- Plan to use dynamically expanding VHDX files in all situations except those that explicitly preclude it, such as SQL and Exchange servers.
- VDI systems use more resources than server systems. Users are also more aware of, and therefore less tolerant of, delays in their virtual desktops than in server applications.
- Licensing of virtual instances of Windows Server is handled per CPU pair. If you intend to virtualize many instances, it may be cheaper to build a few high-powered hosts than several smaller systems.
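As a rough worked example of how these baselines translate into a host specification, the following sketch applies the 2 GB per guest figure, the 12:1 vCPU-to-core guideline, and the 25% hyper-threading allowance to a hypothetical 40-guest workload. Every input is an assumption to be replaced with your own projections.

```powershell
# Sketch only: turn the baselines above into a rough host estimate.
# Every input value is hypothetical; substitute your own projections.
$guestCount       = 40    # projected Windows Server guests
$memoryPerGuestGB = 2     # Dynamic Memory baseline from above
$vCpusPerGuest    = 2     # typical starting allocation per guest
$vCpuPerCoreRatio = 12    # old guideline: no more than 12 vCPUs per physical core
$hostReserveGB    = 4     # memory held back for the management operating system

$memoryNeededGB = ($guestCount * $memoryPerGuestGB) + $hostReserveGB
$coresNeeded    = [math]::Ceiling(($guestCount * $vCpusPerGuest) / $vCpuPerCoreRatio)

# Hyper-Threading counts for no more than 25% extra, so a 16-core host with HT
# is treated as roughly 20 schedulable cores, not 32.
$candidateCores     = 16
$candidateEffective = $candidateCores * 1.25

"Memory needed: $memoryNeededGB GB; physical cores needed: $coresNeeded; candidate host provides ~$candidateEffective effective cores"
```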
Make Sure Licensing is in Order
Licensing in a virtual environment is often different from licensing in a physical environment. The Windows Server operating system has a well-defined model. I’ve explained the intricacies of Microsoft licensing for virtual environments, along with numerous examples, in this eBook (that link will take you to where you can download a copy). Other operating systems will have their own requirements.
It is vitally important to remember that licensing is a legal agreement and is best handled by subject-matter experts. If you will be purchasing licenses from an authorized reseller, they should be able to put you in contact with someone who is authoritative on the subject. The need to work with these individuals to ensure proper licensing cannot be overstated. Failing to properly license software is considered piracy, even if it’s accidental, and can easily lead to fines of several thousand, or even hundreds of thousands, of dollars.
Experience is the Best Teacher
All of the guidance around planning to virtualize can seem quite nebulous; that’s because it is. No one understands your situation quite as well as you do. Organizations of scale will very quickly adopt a standardized virtualization host build and re-order that same setup when it’s necessary to add capacity. Smaller organizations will commonly purchase a system that fits their budget and never quite use it to capacity. If you’re simply uncertain, the best thing to do is to acquire a little bit more than you think you need. If you accidentally over-purchase, you can employ extended warranties to get more life out of the system at a much lower price point than a hardware refresh.
The best way to learn how to provision is to provision.
Potential Pitfalls
Hyper-V deployment has a fairly steep learning curve if you’re not experienced with it. To help you avoid some of the most common mistakes, there’s an article on our blog: 12 Common Hyper-V Deployment Mistakes.