I think Microsoft broke my NETSH! I had built a new Hyper-V 2012 cluster for testing purposes. I followed a similar standard to the one I used for 2008 R2 clusters, which included designating an adapter for iSCSI and enabling jumbo frames. I followed the same basic steps you can find under “Server Core Virtual Machines – Virtual NICs” on this earlier post. That particular post was written for virtual machines, but the same NETSH command works on any type of adapter that has TCP/IP bound to it. Or, maybe I should say, it used to.
I can confirm that I tested the connections using the command as listed on that post:
PING 192.168.50.50 -f -l 8000
The -f switch sets the Don't Fragment bit and -l 8000 sends an 8,000-byte payload, so the ping succeeds only if the entire path can carry jumbo frames.
It worked, and I moved on to other things. A few days later, I decided to test a different configuration. If you read my November commentary, you’ll know that I’ve started to consider new ways that teaming and converged fabric in Hyper-V 2012 can be beneficial. I have five physical adapters in each of this cluster’s systems. Instead of splitting them out into separate roles, in which most cards will sit idle at times when others are working hard, I decided to combine them into a single 5Gbps team. By using QoS, I can guarantee that my iSCSI traffic will never have less than the bandwidth of a full NIC, but most of the time it will have more available network bandwidth than the underlying drive system can put on the wire. Before anyone points it out, I realize that this isn’t as efficient as using MPIO on both ends. (Edit: I have recently come under vitriolic assault for using the term “MPIO on both ends”. This wording is expedient but imprecise. Depending on your setup, you might not actually enable an option called “MPIO” on both sides; on some setups you do. It all depends on what you’re using. I apologize for being less than exact about MPIO in a post that was never intended to be about MPIO.)
Initial Configuration Procedure
If you don’t want the back story, you can just scroll down to the Solution section.
The first thing I did was to configure the relevant ports on my physical switch as a LAG (link aggregation group) in LACP (Link Aggregation Control Protocol, IEEE 802.1AX, formerly 802.3ad) mode. On the newly created LAG, I allowed the VLANs that my adapters will be members of as tagged traffic and set the default VLAN as untagged traffic. If your switch is a smart layer-2 device, consult with your manufacturer on how to perform these steps. If your switch isn’t a smart device, or if it doesn’t have all these features, you can use “Switch Independent” for the “TeamingMode” and/or skip the steps about VLANs. If you use “Switch Independent” mode, you will not have access to the full array of aggregation capabilities. If you don’t use VLANs, your various traffic types will be easier to compromise and your switch ports will have to deal with some broadcast traffic intended for other networks.
After configuring the switches, I fired up PowerShell on the host and created the team with a virtual switch, using extremely original and creative names:
New-NetLbfoTeam -Name "Converged Fabric" -TeamMembers "Onboard", "PCI Top Left", "PCI Top Right", "PCI Bottom Right", "PCI Bottom Left" -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts
New-VMSwitch -Name "Converged Switch" -AllowManagementOS 0 -MinimumBandwidthMode Weight -NetAdapterName "Converged Fabric"
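If your switch can’t do LACP and you go the Switch Independent route mentioned earlier, the team creation changes only in its -TeamingMode value. A minimal sketch, reusing the same adapter names:
# Sketch: identical to the LACP team above except for -TeamingMode.
# No LAG/LACP configuration is needed on the physical switch in this mode.
New-NetLbfoTeam -Name "Converged Fabric" -TeamMembers "Onboard", "PCI Top Left", "PCI Top Right", "PCI Bottom Right", "PCI Bottom Left" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts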
Next, I created virtual adapters on the switch for each of my Hyper-V roles, also with decidedly original and creative names:
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "Converged Switch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV" -SwitchName "Converged Switch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "Converged Switch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName "Converged Switch"
I have smart layer-2 switches, so I placed each adapter except the management adapter into its own VLAN. I left the management adapter alone, which means that its traffic will be untagged and, because of the above switch configuration, will travel on the default VLAN:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster-CSV" -Access -VlanId 15
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Storage" -Access -VlanId 50
As previously mentioned, I’m going to use QoS to set reasonable minimums for my traffic. I certainly don’t mind if you duplicate these settings into your own environment, but be aware that these are the numbers I have determined are correct for my situation. You might need a different balance. You might want to set QoS on protocols instead of adapters. You might want to set maximums. You might have fancy hardware and want to use datacenter bridging instead. Whatever you do, you should keep the sum of your minimums at or below 100. The remaining space in mine will be left for virtual machines:
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Storage" -MinimumBandwidthWeight 40
My next step was to set IP addresses. I used NETSH when I did this initially, but since Microsoft seems to want to get rid of NETSH, I’ll present them here in their PowerShell equivalents:
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.25.11 -DefaultGateway 192.168.25.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster-CSV)" -IPAddress 192.168.15.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.10.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Storage)" -IPAddress 192.168.50.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 192.168.25.5, 192.168.25.6
To wrap up, I used NETSH to set jumbo frames on all but the management adapter. I’ll show one for the sake of illustration, but keep in mind that while this command will be accepted, it doesn’t work:
NETSH INTERFACE IP SET SUBINTERFACE "vEthernet (Storage)" MTU=9000 STORE=PERSISTENT
Symptoms
When I tested, I first got one “Destination Host Unreachable” followed by three “General failure” responses. Usually, that means a firewall or some sort of protection software is in the way. I ruled that out pretty quickly with firewall rules. I tinkered around with a few things and finally resorted to searching the Internet. I found some things indicating that all the adapters in the team need to have their MTU set as well; I had already enabled it on the adapters I had originally planned for jumbo frames, so I used the above command to set the other as well. That didn’t help. I poked around in the registry looking for the VM switch as it was in 2008 R2 and didn’t find anything. I suppose I assumed that Microsoft had just rolled it up into something else and it wouldn’t show up that way. In frustration, I tore down the virtual switch and team and put it back the way it was. With everything restored to the original configuration that had been working, I tried again. I was still getting the same drop! I used the following command to verify that everything was as I expected:
NETSH INTERFACE IP SHOW SUBINTERFACE
It indicated that all adapters had an MTU of 9000. They still wouldn’t pass jumbo frames, though. I even built a crossover cable and directly attached the ports just to rule out my switch.
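As an aside, if you prefer PowerShell over NETSH for that verification step, Get-NetIPInterface exposes the same MTU value as NlMtu. A quick sketch:
# PowerShell equivalent of NETSH INTERFACE IP SHOW SUBINTERFACE:
# shows the current IP-layer MTU for each IPv4 interface.
Get-NetIPInterface -AddressFamily IPv4 | Format-Table InterfaceAlias, NlMtu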
Solution
I’ll spare you most of the gory details of how I solved this problem. I read a lot of Internet blogs and forums and found a bit of helpful information here and there, but finally just had to trudge out mostly on my own to get this fixed. I really need to acknowledge Didier Van Hoye for this post, though. I had to change several things to get his commands to work, and they still didn’t actually fix my problem, but without them I probably would have given up and been out writing nasty letters. Anyway, NETSH just won’t do it anymore. You have to use PowerShell (or WMI, if you’re masochistic). As a side note, during all this I discovered that my onboard adapter doesn’t support jumbo frames, so I had to remove it from the team. I only bring it up in the event that some of the following commands or results seem inconsistent with a five-card team.
First, all the adapters in your team need to have their MTUs set properly. You can do this on multiple cards at once, combining Name strings and employing wildcards. The following is an example:
Get-NetAdapterAdvancedProperty -Name "Onboard", "PCI*" -DisplayName "Jumbo Frame" | Set-NetAdapterAdvancedProperty -RegistryValue "9216"
Besides the fact that you’ll have to modify the “Name” field to match your adapter names, there are two other things to be aware of here. First, your adapter’s setting might not be called “Jumbo Frame”. If that’s the case, you’ll get a fairly obtuse message about “No matching… objects”, which, unfortunately, is also what you’d get if you misspelled your adapter name or if your adapter just doesn’t support jumbo frames. Second, your adapter might have a “Jumbo Frame” setting for which “9216” is not a valid number. That message is much less cryptic and will return something like: “Set-NetAdapterAdvancedProperty : No matching keyword value found. The following are valid keyword values: 1514, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216”. If you get the latter, just rerun the command with the number closest to, but above, 9000. To get a list of all the possible settings and see what name (if any) your vendor uses for jumbo frames, run the previous command up to the “Name” field, such as:
Get-NetAdapterAdvancedProperty -Name "Onboard", "PCI*"
The output from that command is pretty busy, but you should be able to find what you’re looking for. Once you do, reissue the Set-NetAdapterAdvancedProperty with the proper parameters.
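If you’d rather not scan that output by eye, you can narrow it down, assuming your vendor’s setting name contains the word “Jumbo” at all. A sketch:
# Show only jumbo-related advanced properties, plus the values each will accept.
# Adapter names are examples; substitute your own.
Get-NetAdapterAdvancedProperty -Name "Onboard", "PCI*" | Where-Object { $_.DisplayName -like "*Jumbo*" } | Format-Table Name, DisplayName, DisplayValue, ValidDisplayValues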
Next, you have to set the virtual adapters on which you want to enable jumbo frames. I recommend that you exclude the management adapter; enabling jumbo frames on adapters whose traffic crosses switches or reaches computers that don’t support them will actually make that traffic slower:
Get-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)", "vEthernet (Storage)", "vEthernet (Cluster-CSV)" -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue "9014"
Quick version: Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0008 and change *JumboPacket to 9014. Double-check that the ComponentId is “vms_mp” before making this change. Test. If it doesn’t work, reboot and test again.
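If you want to script the quick version, here is a sketch in PowerShell. It assumes the loner adapter landed at index 0008, as it did on my systems; that index may not be the same on yours, so the ComponentId check is there to keep you from touching the wrong entry:
# Sketch: set *JumboPacket on the hidden virtual switch adapter.
# The 0008 index is an assumption; verify ComponentId before changing anything.
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0008"
if ((Get-ItemProperty -Path $key).ComponentId -eq "vms_mp") {
    Set-ItemProperty -Path $key -Name "*JumboPacket" -Value "9014"
}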
Long version: Here’s where it got strange. My packets still weren’t getting through, and I was out of knobs to turn. I still hadn’t set anything for the virtual switch itself, but I couldn’t find it, either. I went into the registry key where all the adapters are tracked (still HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}) and nothing stood out until I slowed down and took a closer look. I noticed that I had five entries with a ComponentId of “vms_mp” and a DriverDesc of “Hyper-V Virtual Ethernet Adapter”. What didn’t jump out at me at first was that I had only created four virtual adapters. Also, four of the five were right next to each other toward the end of the adapter list, but one was up near the top (index 0008 on all three of my systems; what’s yours?). So, I manually changed the loner’s “*JumboPacket” key to 9014. A jumbo ping went through immediately (on my first stumble-through; when I retraced my steps to validate for this post, I had to reboot the server to get it to work).
Notes and Final Thoughts
Of course, you could just hop into the registry and set all of these manually. All the usual disclaimers and scary stuff about how you can wreck your machine by messing with the registry apply.
Fun little fact: on an adapter’s registry node, expand its “Ndi” subtree and then its “params” subtree and you’ll find all the advanced options listed. Expand one, and it will have an “Enum” subkey. There, you’ll find all the values it will accept for that particular setting.
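A quick sketch of how you might read those accepted values out with PowerShell, again assuming the 0008 index from above:
# List the values the adapter at index 0008 will accept for *JumboPacket.
# -LiteralPath avoids wildcard interpretation of the asterisk in the key name.
$base = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0008"
Get-ItemProperty -LiteralPath "$base\Ndi\params\*JumboPacket\Enum"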
The “Set-NetIPInterface” cmdlet has a -NlMtuBytes parameter. Like NETSH, this is a free-form field and will accept any numeric entry up to whatever the interface’s maximum frame size is. It’s a bit easier to use than the above, but it doesn’t operate on hidden adapters (like the ones in your virtual switch) and it has none of the validation that Set-NetAdapterAdvancedProperty does. I didn’t try this to see how its outcome would compare to my other findings.
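For reference only, since I haven’t tested it myself, such a call would look something like this:
# Sketch: set the IP-layer MTU on a visible interface.
# This will not reach the hidden adapters inside the virtual switch.
Set-NetIPInterface -InterfaceAlias "vEthernet (Storage)" -NlMtuBytes 9000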
I am disappointed that it came to this and I’m hoping that I just overlooked something. Microsoft did a lot of work to get us from PowerShell 2.0 in 2008 R2 to PowerShell 3.0 in 2012, and to have us digging in the registry to make that one last setting is disheartening. Scripting a host deployment with registry key modifications is going to be inelegant at best, especially if it turns out that the 0008 index isn’t static. My particular use case may or may not be common, but there are still going to be people who want their guest VMs to be able to use jumbo frames for various reasons, and this will also be their solution. I feel that the virtual switch should just have jumbo frames by default. Let administrators decide if they want to enable them on a per-adapter basis.
15 thoughts on "How To Adjust MTU (Jumbo Frames) on Hyper-V and Windows Server 2012"
Server manager help on Server 2012 has this:
“For iSCSI, you cannot use network adapter teaming (also called load balancing and failover, or LBFO), because they are not supported with iSCSI. You can consider using MPIO software instead.”
A number of iSCSI vendors also do not recommend using LBFO with iSCSI.
That’s correct. However, there is an abstraction layer here that confuses things.
What Server Manager is referring to is teaming two NICs and then making an iSCSI connection directly over them. That team is represented by a single IP address straddling two ports and trying to engage in multi-path conversations with storage. That is not what is happening here.
In this type of setup, if multi-path is desired, an iSCSI connection would be created by using two vNICs, each with their own IPs, with MPIO enabled. This results in a dual-NIC MPIO iSCSI connection to a switch that then connects to another switch over a multi-port LACP connection that then connects to the back-end storage. This is not an unusual configuration in the real world. The only difference is that one of the switches in this case is virtual.
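For illustration only (this is a sketch, not a recommendation, and the vNIC names and addresses are hypothetical), that configuration might be built like this:
# Sketch: two storage vNICs, each with its own IP, for MPIO to use as separate paths.
Add-VMNetworkAdapter -ManagementOS -Name "Storage1" -SwitchName "Converged Switch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage2" -SwitchName "Converged Switch"
New-NetIPAddress -InterfaceAlias "vEthernet (Storage1)" -IPAddress 192.168.50.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Storage2)" -IPAddress 192.168.50.12 -PrefixLength 24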
The setup as presented above does not include MPIO. The lone “Storage” vNIC is the only path to storage and it will never use more than a single pNIC at a time. However, this post is about getting Windows Server 2012 teaming and jumbo frames to work together, not MPIO. I am regretting ever mentioning it.
Awesome find/tip… I spent some time trying to automate this process, since the key shows up in different locations, depending on how many NICs you have installed, and came up with a powershell script that worked for me – posting here in case it might help someone else:
get-childitem -path "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}" -recurse -ea SilentlyContinue | where-object {(get-itemproperty -path $_.PSPath) -match "vms_mp" -and (get-itemproperty -path $_.PSPath) -match "1514"} | foreach-object {set-itemproperty -path $_.PSPath -Name "*JumboPacket" -value "9014"}
Nicely done!
A word of caution, though: you may not want to enable jumbo frames on adapters that have Internet connections or that go through other routers with a standard frame size, as it will impair performance and can sometimes cause transmission failures.
Fantastic post! Short and to the point. I liked the way you made it interesting with storytelling. Though this post is about 2012 teaming and jumbo frames, it also covered the virtual NIC and virtual switch concepts.
After I finished reading, the first thing that popped into my mind was MPIO, but then I scrolled down, and it was already answered. In a way I am happy that the MPIO query came up, because I am sure a lot of us would have had similar confusion. Thanks Eric. Looking forward to more such articles.
If the jumbo frame size on the NIC is 9014, what should the MTU be for the switch port?
Are you talking about the physical switch? Values to use for MTU settings are always defined by the device manufacturer. I can’t really be any more specific than that. Usually though, you don’t have to set MTU directly on a port. You can usually just set jumbo frames on or off for the entire switch.
Hi Eric,
Thank you for your reply.
I think I need a bit of clarification here. Are we talking about the Ethernet MTU or the IP MTU? In my previous comment, I was referring to the physical switch (Ethernet frame MTU).
Physical switches usually talk about it in terms of the Ethernet frame. I’ve never owned a switch that required me to specify a frame size. Jumbo frames were either off or on. If yours is asking you for a specific number, I would use a minimum size of 9022.