Table of contents
- 1. Avoid overloading Hyper-V Server
- 2. Avoid creation of multiple Virtual Network Switches
- 3. Configure antivirus software to bypass Hyper-V processes and directories
- 4. Avoid Mixing Virtual Machines that can or cannot use Integration Service components
- 5. Avoid storing system files on drives used for Hyper-V storage
- 6. Use separate volumes for each VM
- 7. Avoid single point of failure for network adapters using NIC Teaming
- 8. Always use Network Isolation Mechanism rather than creating a separate virtual switch
- 9. Install Multiple Network Interface cards on Hyper-V Server
- 10. Always use supported guest Operating Systems
- 11. Always use Generation Type 2 Virtual Machines
- 12. Always change the default location for storing virtual machine configuration and VHD files
- 13. Monitor performance of Hyper-V Server for optimization
- 14. De-fragment Hyper-V Server regularly or before creating a virtual hard disk
- 15. Always install the Integration Components on the supported virtual machines
- 16. When to use Fixed or Dynamic VHD(X) files
- 17. Use Dynamic Memory Feature
- 18. Configure SCSI disks for Data volumes
- 19. Relocate paging file to a SCSI Disk Controller
- 20. Always exclude Paging file for virtual machines participating in Hyper-V Replication
- 21. Implement Hyper-V in Server Core in Production environment
- 22. Close unnecessary windows
- 23. Use Certified Hardware
- Further Reading
Best practices are the most obvious starting point for improving your Hyper-V and Virtual Machine performance and I’ve listed 23 of them for you below. This article was written for Windows Server 2012 and most of these best practices are applicable to later versions.
Did I miss out on any? Leave a comment and let me know!
1. Avoid overloading Hyper-V Server
Do not overload your Hyper-V server. In other words, there is no need to host and run virtual machines that serve no function; you should not configure and install virtual machines just for the sake of it. This is because VMMS.exe needs to maintain the status of every virtual machine, including those that perform no business function.
2. Avoid creation of multiple Virtual Network Switches
VMMS.exe, which runs as the Hyper-V Virtual Machine Management Service, keeps track of every virtual switch created and of the communication between virtual machines. Always use VLAN tagging or another isolation mechanism to separate communication between virtual machines instead of creating an additional virtual switch.
3. Configure antivirus software to bypass Hyper-V processes and directories
Antivirus software performs I/O operations on files accessed by the operating system and by the Hyper-V processes. Alter your antivirus configuration to exclude the main Hyper-V processes and the directories listed below:
- Hyper-V Processes: VMMS.exe and VMWP.exe
- All folders containing the Virtual Machine Hard disk files and configuration.
- Snapshot/checkpoint folders.
- Cluster Shared Volumes under C:\ClusterStorage
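If the host runs Windows Defender (built into Windows Server 2016 and later; for third-party antivirus products use the vendor's console), these exclusions can be scripted with the Defender cmdlets. A minimal sketch, where the storage paths are placeholders for your own environment:

```powershell
# Exclude the main Hyper-V processes from real-time scanning
Add-MpPreference -ExclusionProcess "C:\Windows\System32\Vmms.exe"
Add-MpPreference -ExclusionProcess "C:\Windows\System32\Vmwp.exe"

# Exclude VM storage, checkpoint, and CSV paths (adjust paths to your environment)
Add-MpPreference -ExclusionPath "D:\Hyper-V\Virtual Hard Disks"
Add-MpPreference -ExclusionPath "D:\Hyper-V\Snapshots"
Add-MpPreference -ExclusionPath "C:\ClusterStorage"
```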
4. Avoid Mixing Virtual Machines that can or cannot use Integration Service components
There are two types of virtual machine communication on a Hyper-V server: 1) communication using the VMBus design and 2) communication using emulation. The former is faster and is available only if the Integration Components are installed in the virtual machine. If you need to run a virtual machine whose operating system is not supported by Hyper-V, or in which the Integration Services cannot be installed, it is recommended to follow the guidelines below:
- If you have a free Hyper-V Server, then install such virtual machines on that Hyper-V Server.
- If you do not have a free Hyper-V Server, then connect such virtual machines to a separate Hyper-V virtual switch.
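The second option, a dedicated switch on a spare physical NIC, can be sketched as follows (the adapter, switch, and VM names are placeholders):

```powershell
# Create a separate external switch bound to a spare physical adapter,
# keeping the management OS off it
New-VMSwitch -Name "EmulatedVMs" -NetAdapterName "NIC2" -AllowManagementOS $false

# Move the unsupported VM's network adapter onto that switch
Connect-VMNetworkAdapter -VMName "LegacyVM01" -SwitchName "EmulatedVMs"
```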
5. Avoid storing system files on drives used for Hyper-V storage
Do not store Hyper-V virtual machine files on drives used by the operating system. This is because of I/O contention: drives where the system files are stored are accessed continuously by system processes, and this can delay the processing of Hyper-V tasks.
6. Use separate volumes for each VM
Since the administrative tasks of each virtual machine are controlled by its own worker process (VMWP.exe), keeping several virtual machines on a single volume concentrates the disk I/O from every one of those worker processes on that volume. Hence, it is recommended to use separate volumes for storing virtual machine files (VHD, VHDX, checkpoints, and XML).
7. Avoid single point of failure for network adapters using NIC Teaming
Windows Server 2012 and later operating systems support NIC teaming natively. Ensure that NIC teaming is configured for the host operating system; virtual machines can also be configured to use the NIC teaming feature.
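A team can be built with the in-box LBFO cmdlets; a sketch with placeholder adapter names:

```powershell
# Team two physical adapters; switch-independent mode requires no
# configuration on the physical switch. The Dynamic load-balancing
# algorithm is available from Windows Server 2012 R2 onward.
New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```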
8. Always use Network Isolation Mechanism rather than creating a separate virtual switch
When you come across a networking requirement that needs to be configured on the Hyper-V server, use the order of preference listed below to achieve the configuration you need. The best approach is the "Hyper-V Virtual Network Switch and VLAN Tagging" method. Other methods can also be used depending on your requirements, but consider them in this order:
- Hyper-V Virtual Switch and VLAN Tagging Method
- Hyper-V Virtual Switch Method
- Firewall Method
- Different subnet Method
- Another Physical NIC Method
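The preferred first option, VLAN tagging on an existing virtual switch, is a one-liner per VM (the VM name and VLAN ID below are examples):

```powershell
# Isolate a VM's traffic on VLAN 10 without creating a new virtual switch;
# the physical switch port must be configured as a trunk carrying that VLAN
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 10
```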
9. Install Multiple Network Interface cards on Hyper-V Server
There are multiple types of communication taking place on a Hyper-V server: communication between virtual machines, communication between virtual machines and the parent partition, and traffic used to manage these virtual machines from a management console. It is always recommended to dedicate a network interface card to managing the virtual machines and the Hyper-V host. In larger environments, converged fabric designs are also worth considering.
10. Always use supported guest Operating Systems
VMBus and the VSP/VSC components are part of the Integration Services, which help improve the performance of communication between virtual machines and the parent partition. The Integration Components can be installed only on supported guest operating systems, so please install only operating systems that are supported. A list of supported guest operating systems can be found here: http://support.microsoft.com/kb/954958
11. Always use Generation Type 2 Virtual Machines
The "Generation 2" virtual machine type was introduced with Windows Server 2012 R2. Previously, virtual machines could boot only from an IDE controller, but a Generation 2 virtual machine boots from a SCSI controller, which is much faster. Generation 2 virtual machines use the VMBus and VSP/VSC architecture at the boot level, which improves their overall performance, and they also allow the paging file to be relocated to a SCSI controller.
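The generation must be chosen at creation time and cannot be changed afterwards. A sketch, with placeholder names, paths, and sizes:

```powershell
# Create a Generation 2 VM (requires a 64-bit, UEFI-capable guest OS)
New-VM -Name "Gen2VM" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\Hyper-V\Gen2VM.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "ExternalSwitch"
```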
12. Always change the default location for storing virtual machine configuration and VHD files
By default, when you enable the Hyper-V role for the first time, the Hyper-V server is configured to store the virtual machine configuration and VHD files on local storage under C:\ProgramData\Microsoft\Windows\Hyper-V. Change this location to appropriate drives before the Hyper-V servers are moved to the production environment.
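The host-wide defaults can be changed with Set-VMHost (the paths below are placeholders for your dedicated storage drives):

```powershell
# Point new VM configurations and new VHDs at dedicated storage
Set-VMHost -VirtualMachinePath "D:\Hyper-V" `
           -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks"
```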
13. Monitor performance of Hyper-V Server for optimization
There are several Hyper-V performance counters available that you can use to monitor the performance of the Hyper-V server, virtual machines, network communication, and so on. Make use of these performance counters and fix any performance issues you find.
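For example, host CPU usage is better judged from the hypervisor's own counter than from Task Manager, which only sees the parent partition. A sketch:

```powershell
# Sample the hypervisor's logical-processor counter every 5 seconds,
# 12 times (one minute), to gauge true host CPU load
Get-Counter "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" `
    -SampleInterval 5 -MaxSamples 12
```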
14. De-fragment Hyper-V Server regularly or before creating a virtual hard disk
It is advisable to regularly defragment the Hyper-V server disks where the VHD and virtual machine configuration files are stored, and especially before creating a large virtual hard disk file.
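From an elevated command prompt, a regular pass plus a free-space consolidation pass (useful before creating a large VHD) might look like this, with D: standing in for your VM storage volume:

```
defrag D: /U /V
defrag D: /X
```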
15. Always install the Integration Components on the supported virtual machines
Integration Components provide VMBUS and VSP/VSC design to improve the performance of virtual machines running on the Hyper-V Server. Always install the Integration Components on supported Guest operating systems. For Linux distributions, there are separate IC versions available which you can download from Microsoft site.
16. When to use Fixed or Dynamic VHD(X) files
The recommendation depends on the disk format. With the newer VHDX format, Microsoft recommends dynamically expanding disks: the performance difference versus fixed disks is negligible, and they add resiliency guarantees and space savings. With the older VHD format, or when free space on the hosting volume is not actively monitored, fixed disks remain the safer choice, so that a disk can never fail to expand at run time.
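Both types are created with New-VHD; the paths and sizes below are examples:

```powershell
# Dynamically expanding VHDX (recommended with the VHDX format)
New-VHD -Path "D:\Hyper-V\data.vhdx" -SizeBytes 500GB -Dynamic

# Fixed-size VHD (recommended with the older VHD format); allocation
# of the full size happens up front, so this can take a while
New-VHD -Path "D:\Hyper-V\data.vhd" -SizeBytes 100GB -Fixed
```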
17. Use Dynamic Memory Feature
Although the Dynamic Memory feature does not directly improve virtual machine performance, it allows you to balance the allocation of memory dynamically. It is recommended to configure Dynamic Memory parameters for each critical virtual machine running on a Hyper-V server. Note that memory-hungry workloads such as SQL Server and Exchange are usually better served by fixed memory allocations.
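Dynamic Memory is configured per VM while the VM is off; the name and values below are illustrative and should be sized to the workload:

```powershell
# Enable Dynamic Memory with a floor, a startup amount, a ceiling,
# and a 20% buffer of extra memory kept available for the guest
Set-VMMemory "VM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 8GB -Buffer 20
```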
18. Configure SCSI disks for Data volumes
Since the virtual SCSI controller is faster than emulated IDE, it is recommended to relocate data volume disks to a SCSI controller. For resource-intensive applications like SQL Server, it is always recommended to keep the log and data disk files on separate SCSI controllers.
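Attaching an existing data disk to the virtual SCSI controller is a one-liner (VM name and path are placeholders):

```powershell
# Attach a data VHDX to the VM's first virtual SCSI controller
Add-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI `
    -ControllerNumber 0 -Path "D:\Hyper-V\sql-data.vhdx"
```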
19. Relocate paging file to a SCSI Disk Controller
For applications running inside virtual machines that perform frequent paging operations, always relocate the paging file to a virtual hard drive attached to a SCSI controller. The paging file can be relocated to a SCSI controller if you are running a Generation 2 virtual machine.
20. Always exclude Paging file for virtual machines participating in Hyper-V Replication
If you have enabled Hyper-V Replication for virtual machines, make sure to exclude the paging file from replication. Applications running inside virtual machines may page frequently, and replicating the paging file's contents is unnecessary.
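Disk exclusion is chosen when replication is enabled. A sketch, assuming the paging file has already been moved to its own VHDX (all names and paths are placeholders):

```powershell
# Replicate the VM but leave the paging-file disk out of the replica
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ExcludedVhdPath "D:\Hyper-V\VM01-pagefile.vhdx"
```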
21. Implement Hyper-V in Server Core in Production environment
Since the Server Core installation of Windows Server has no GUI, it consumes fewer resources. It is recommended to enable Hyper-V on a Server Core installation rather than on a full installation of the Windows operating system.
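On Server Core the role is enabled from PowerShell rather than Server Manager:

```powershell
# Enable the Hyper-V role plus its management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```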
22. Close unnecessary windows
Please make sure to close the following windows on the Hyper-V server so that system resources remain available to the Hyper-V processes:
- Virtual Machine Connection window: Always remember to close the Virtual Machine Connection window once your task in the virtual machine is complete. Keeping the connection window open consumes system resources that the hypervisor could otherwise put to use.
- Hyper-V Manager window: Keeping the Hyper-V Manager window open also consumes system resources, so close it when your task is done.
- Any other application window: Close all other application windows on the Hyper-V server so that enough system resources are available to the hypervisor.
23. Use Certified Hardware
Microsoft developers spend a lot of time testing server applications on specific hardware, so it is recommended to use only certified hardware whenever possible. The list can be found here: www.windowsservercatalog.com
Further Reading
109 thoughts on "23 Best Practices to improve Hyper-V and VM Performance"
I’m sorry, but this is a poorly written article.
1. Overloading goes without saying, but it isn’t overloaded until a resource is used up. Running VM’s that aren’t doing anything is hardly taxing, if you need the RAM just turn them off until they are needed. Nothing really to worry about here.
5. Why would this be the case? Once the host OS is up and running it’s use of RAM and storage is very limited. It’s actually usually recommended to run Hyper-V in one big RAID 10 array, put the OS and VM’s all on direct storage on that array and run with it. What the host OS does is negligible, especially so if you run Hyper-V Server instead of a Server OS.
6. I haven’t seen this suggested in ages. I have over 20 VM’s running on a RAID 10 array, and it has no trouble with it. I’m not running software for stock exchange or anything crazy, but at a high point it was running Exchange 2010 for 275 users, 2 VM’s of SQL Server, and a bunch of other file/print/DC/misc other applications. All on the same RAID 10 array as the host OS, and it handled them all fine. Though even back in the Virtual Server 2005 days I ran 13 VM’s on a 4 disk RAID 5 array and it ran beautifully.
16. Microsoft actually recommends running Dynamic most of the time now. I believe it was when 2012 came out they said that the performance difference in Dynamic vs Fixed is negligible in the vast majority of cases, and Dynamic gives you more options.
17. Most VM’s you will want to run with Dynamic Memory, but that is not always the case. Exchange and SQL need to be fixed or they will run away with every bit of RAM they can muster.
21. This one is a little confusing. I think it’s more recommended to run Hyper-V Server instead of Windows Server 2012 R2 in Server Core mode. Less required updates, and little risk of anyone installing unwanted software in your hypervisor. Then when possible run your VM’s in Server Core as well for less OS overhead.
I haven’t had much opportunity to play with the new networking options in 2012 yet, so I’ll leave those alone
Thanks for your feedback Christopher. I’m sorry to hear you didn’t like the article though I appreciate you reaching out with your feedback.
I’ve updated the article with regards to your feedback on first and last points.
As for your other feedback, I don’t necessarily agree.
For example, Dynamic Memory is recommended by Microsoft and Dynamic Disks are not!
You can find more here:
http://technet.microsoft.com/en-us/library/ee941151(v=ws.10).aspx
Thank you,
Nirmal
I’ve been running into a lot of conflicting advice on some of the same points that Christopher brings up. Your link is from 2010, before server 2012 even came out. Doesn’t instill a lot of confidence.
Hi Drew,
Thanks for the note, we’ve updated the documentation to reflect the new best practices for dynamically expanding VHDs.
Thanks,
Symon Perriman
Altaro Editor
Well, I could be slightly incorrect – under VHD fixed is recommended, however under VHDX dynamic is now recommended. See page 125 of the Performance Tuning Guidelines here: http://msdn.microsoft.com/en-us/library/windows/hardware/dn529134
Hi Christopher,
Thanks for the message, we’ve updated the article to the correct recommendation – use dynamic disks with VHDX, it works great.
Best,
Symon Perriman
Altaro Editor
That article applies to Windows Server 2008 R2 only.
Hi Kerry,
These best practices should be applicable to Windows Server 2012 R2. You are correct, Dynamically Expanding VHDs are recommended with Windows Server 2012 and later. We have updated the article and appreciate your feedback.
Thanks,
Symon Perriman
Altaro Editor
That article is for Windows 2008. In Windows 2012 Hyper-V the VHDX files are recommended and they have much better performance in dynamic mode. http://blogs.technet.com/b/askpfeplat/archive/2013/09/09/why-you-want-to-be-using-vhdx-in-hyper-v-whenever-possible-and-why-it-s-important-to-know-your-baselines.aspx
for point # 16 you mention.
It’s more of a licensing thing.
If you have a single 2012 server license and run only the hyper V role you can have two 2012 vm’s using that one license. So basically two servers for once windows license. If you use (free) Hyper V server you would need a separate license for the same two 2012 vm’s. I know this thread is old but maybe someone will find this info useful.
Hi Sean,
#16 focuses on using dynamic memory, and as a few people already commented, dynamic memory is generally accepted and does not have the same limitations as previous versions.
Your comments about licensing are accurate – if you get the free Hyper-V Server edition, you must still acquire licenses for your guest VMs (or you can use a free Linux distribution).
Thank you,
Symon Perriman
Altaro Editor
I don’t agree that your article is poorly written. I think it’s got a lot of great information and am happy to have found it.
However, in regards to #16, according to Microsoft’s “Performance Tuning Guidelines for Windows Server 2012 R2” available as a PDF here: http://go.microsoft.com/fwlink/?LinkId=397832
* When using the VHDX format, we recommend that you use the dynamic type because it
offers resiliency guarantees in addition to space savings that are associated with allocating
space only when there is a need to do so.
* The fixed type is also recommended, irrespective of the format, when the storage on the
hosting volume is not actively monitored to ensure that sufficient disk space is present when
expanding the VHD file at run time.
The technet article you are referencing is for Server 2008 R2 which is old. 🙂
For #11 it would be useful to mention that a type 1 can be migrated to a type 2 with a powershell script like the one here: https://code.msdn.microsoft.com/windowsdesktop/Convert-VMGeneration-81ddafa2
Dynamic disks not recommended? This article is a little outdated, no?
Updated: April 27, 2010
Don’t you think Microsoft have made considerable change in Gen2 VM?
What about ssd dynamic disk performance?
Regards,
Hi Jonathan,
Thanks for the feedback, this was updated for Windows Server 2012 R2. Please let us know if you think anything is still outdated.
Best,
Symon Perriman
Altaro Editor
that applies only to Windows Server 2008 R2 and old VHD format disks
VHDX format disk do not have those problems and Microsoft actually recommend using Dynamic on 2012 r2 with VHDX disks
http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
Thanks for the useful article Nirmal. As a general guide it serves well.
I came across it while looking for the proper defrag switches. Do you recommend the standard defrag.exe (Volume) -h -k -g?
By the way I have to agree with Chris – Dynamic Memory is a big no-no for SQL servers. Also Dynamic Disks are recommended since Server 2012 – your reference is for Server 2008 R2 an a bit out of date; further performance difference between dynamic and fixed disks is negligible; lastly important cmdlets like Optimize-VHD don’t sipport fixed VHD(X).
Hi Boian,
For defrag.exe with Hyper-V you can run a regular defrag ( -d). If your disk is fragmented in an unusual way, you may wish to use some of the advanced commands, such as ( -x) to perform free space consolidation. More information is available here: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/defrag.
Hyper-V dynamic memory is a debated topic… back with older versions there were some challenges, but these have been addressed in later versions with SQL Server 2012 and Windows Server 2012 R2. Here’s the best reference from Microsoft regarding SQL memory management: https://docs.microsoft.com/en-us/sql/relational-databases/memory-management-architecture-guide?view=sql-server-ver15#dynamic-memory-management.
Thanks for the other feedback, we’ve updated the reference about dynamic disks for Windows Server 2012.
Symon Perriman
Altaro Editor
That technet article you reference has a last update date of April 27, 2010 and says it applies to Windows Server 2008 R2.
For Windows Server 2012 R2, Microsoft does recommend dynamic for VHDX disk files.
Hi Marc,
You are correct, Dynamically Expanding VHDs are recommended with Windows Server 2012 and later. We have updated the article and appreciate your feedback.
Thanks,
Symon Perriman
Altaro Editor
for point # 16 you mention.
It’s more of a licensing thing.
If you have a single 2012 server license and run only the hyper V role you can have two 2012 vm’s using that one license. So basically two servers for once windows license. If you use (free) Hyper V server you would need a separate license for the same two 2012 vm’s. I know this thread is old but maybe someone will find this info useful.
That is, in fact, completely wrong.
Yes, you can install Server 2012 R2 on a physical machine, and run two VMs on it with one Standard license.
But you can also just install whatever hypervisor you want, and use the 2 VM licenses on it.
The important rule is, both licenses must be run on the same physical machine. This blog actually has a lot of information on this topic, see for instance https://www.altaro.com/hyper-v/virtual-machine-licensing-hyper-v/ and https://www.altaro.com/hyper-v/choosing-management-os-hyper-v-windows-server/ .
So, using 2 2012 R2 Standard licenses to run 4 2012 R2 VMs on one host under ESXi? Absolutely.
2 licenses for 2 VMs, in a 2 host cluster with Hyper-V Server? Yup, you have a deal.
2 licenses for 4 VMs in a cluster? You’re out of compliance, because VMs are only covered on one host.
Maybe this myth can slowly be put to rest.
Great article. Thanks. I will definitely be taking another look at the running of out systems.
This might not be relevant to this article, but I thought I would mention it anyway.
Running a VM from multiple snapshots, on a permanent basis, might have an impact on performance due to it having to constantly read from multiple files, and can also make your life difficult in certain circumstances when facing disaster recovery.
Thanks again and warm regards.
Even worse, while you have snapshots, every change made to the drive on that VM is recorded in the snapshot file. Even deleting a file from the VM will actually consume more storage space on your host, because the original virtual hard disk is frozen and all changes have to be recorded. These files can get very large in short order in some cases!
Thanks for sharing, David! That’s definitely the case!
I wanted to make a checklist for auditing Hyper V servers, few of your best practices points help me in making a first iteration. Thanks for sharing.
Overall pretty good list of basics, but this could get novice admins in trouble. Dynamic memory is great for Microsoft’s infrastructure services such as AD, ADFS, Print Management, File Server, DFSR, DNS, DHCP, etc. etc. If you are using any I/O-heavy application or applications making extensive use of IIS (or really any other web server platform), then you can get into trouble really fast, unless you specify a properly sized minimum.
A great example would be a SharePoint farm Web or App host. Memory sizing greatly depends on the size of your environment, of course, but for larger farms, a single web or app host may well need 32-64 GB (in some cases, more!) of RAM to provide a top-shelf experience. If you size dynamically and go with a low minimum like 4GB-8GB, SharePoint services actually query the existing RAM when services are started up and use that to allocate memory. While Hyper-V would gladly facilitate a request to grant more RAM, if the application (such as in this case) doesn’t ask for it, it won’t get it. So that will lead to performance issues AND resource overloading, because you’ll see this app host constantly sitting on the minimum and throw more VMs on the physical host.
I used to favor Dynamic for many workloads, but in the past two years, extensive research and benchmarking have revealed that you generally get much better performance with static RAM allocation that is RIGHT-SIZED from the get-go. As most datacenters are either using or migrating to hyper-converged infrastructure, compute and memory are cheaper than ever. If in doubt, OVER-SIZE.
I’ve seen a lot of third-party apps behave like this as well, using the startup RAM quantity to size heap requests — especially web apps and apps using JAVA in any way, shape or form.
Dynamic for low-priority, best-effort services, or test environments, or domain services is fine, but always be mindful of the application you’re virtualizing. When in doubt, test. If you have an app where the documentation calls for 8 GB of ram, but you allocate Dynamic 2-8, and you never see it going above 2-3 GB, check your counters. A lot of headaches in Hyper-V come from being too dogmatic trying to shoehorn all workloads into a handful of canned VM templates.
90% of Hyper-V VM performance is I/O & RAM. I see a lot of people go WAYYYY overboard with CPUs and skimping on storage. A great SAN/network and a mediocre pHost will almost always outperform a mediocre SAN/network and a great pHost.
On a Windows 2012 R2 server running 3 Virtual 2012 servers, Is it advisable to enable the Previous Versions feature in windows on the partition containing VHDs for hyper-v?
Hi Chris,
‘Previous Versions’ is a great feature built into Windows Server which automatically backs up files. While this may work for Hyper-V VMs, it is not optimized nor recommended. Instead, you want to use the Checkpoint feature available in Hyper-V, which provides similar functionality but also gives you more control over merging or restoring different versions. More info available at https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/checkpoints
Thanks,
Symon Perriman
Altaro Editor
Hi,
I’m new to virtualization so forgive me if these questions seem obvious. Regarding point 17, what is the bottom line on SQL and Exchange running on VMs? It makes sense to me that both of these server apps would like to use as much memory as they are allowed. Is it a question of limiting how much RAM they can actually use via high memory thresholds, or is it best to simply avoid using dynamic memory in these cases?
Hi Phil,
Great question about virtualizing these enterprise workloads. This blog provides some great general best practices, but you should follow the specific guidance for running Exchange and/or SQL within Hyper-V. These workloads can be optimized for running inside a Hyper-V VM, depending on which specific roles you are using.
Here’s the guidance for Exchange Server with Hyper-V: https://docs.microsoft.com/en-us/exchange/plan-and-deploy/virtualization?view=exchserver-2019
Here’s the guidance for SQL Server with Hyper-V: https://docs.microsoft.com/en-us/sql/database-engine/sql-server-business-continuity-dr?view=sql-server-ver15
Thanks,
Symon Perriman
Altaro Editor
I would tend to agree with Christopher Owens. They actually do now recommend dynamic VHD(X) over fixed size disks. They were saying that in the most recent TechEd. I actually found on their site somewhere some recommendations to Fixed size disk and dynamic disk types. If it is VHDX you should use Dynamic. If it is VHD, they still lean towards fixed, although I have found notes on some sites that the difference between the two is very marginal in VHD format. It is somewhere around page 140-160 on the link they talk about disk alignment and whether to use fixed or dynamic: http://go.microsoft.com/fwlink/?LinkId=397832
Hi Jamison,
Great comment, we’ve updated the article to the correct recommendation – you can use dynamic disks with VHDX.
Thanks,
Symon Perriman
Altaro Editor
hi, thanks for sharing your experience here.
maybe you could add this:
– CSV CACHE READ
https://blogs.msdn.microsoft.com/clustering/2013/07/19/how-to-enable-csv-cache/
– MAXIMIZE DRIVE SAN FOR BIGGEST IOPS
– Number and speed of Physical NETWORK to respect prerequisites microsoft. ( live migration, heartbeat, admin, backup, ISCSI, etc ..)
regards
maxime
Hi Max,
These are some great tips too!
– CSV cache helps optimize disk read times for frequently accessed Hyper-V files. It is turned on by default in the recent versions of Windows Server.
– Maximizing your storage drives can help, but you want to maximize your network bandwidth too. Consider using Quality of Service (QoS).
- Have a dedicated network for each different type of network traffic. Altaro will be posting a blog on this topic shortly.
Thanks,
Symon Perriman
Altaro Editor
The situation could be frustrating and can go a long way toward slowing down your system’s speed. What do you do at this time? There is a need to delete those unnecessary files.
Hi Marija,
If you are looking for an easy way to automatically delete unnecessary files, consider using Task Scheduler. This built-in utility will allow you to automatically schedule a task, which can be a script which deletes these unnecessary files.
Thanks,
Symon Perriman
Altaro Editor
Man, this helped push me to upgrade all of our 2008 R2 hypervisors to 2012. I didn’t realize how much awesomeness we have been missing out on! Thanks for writing this, I really appreciate it. -Ben
Interesting to read about point 4. In the case of a VM using an OS that does not support the Integration Services, as I have at least 3 Linux servers in load balancing/mail antispam and such, I would performance-wise use another physical NIC and create a virtual switch for those machines (at least that’s the conclusion I made). The other points made just sound like common sense, but use static memory for SQL/Exchange. By the way, since I migrated from Server 2008, to 2008 R2, to 2012 and now 2012 R2, is it possible to convert a Generation 1 to Generation 2 and not make a fubar of the VM?
Hi Robert,
– Today almost every Linux distribution supports Hyper-V, and the integration services are built into the kernel. Here’s the support list from Microsoft: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows. If you are running a different distribution, then you would want to configure monitoring through the vSwitch.
– No, it is not possible to directly convert a Gen1 VM to Gen2. You would want to migrate the service between the VMs, or for some applications you can keep the data on a dedicated data VHD file which you can attach to a Gen2 VM.
Thanks,
Symon Perriman
Altaro Editor
#19 should not exist; if you are using Gen 2 VMs only, then you are already using vSCSI instead of vIDE.
Hi Brad,
Yes, you bring up a good point: with Generation 2 VMs, you must use a virtual SCSI controller to connect to your VHD. Virtual IDE is not available with Gen2 VMs.
Thanks,
Symon Perriman
Altaro Editor