In one of the server rooms I manage, I've got a few VMware ESX 3.5 hosts attached to an OpenFiler 2.3 iSCSI SAN through an HP 2848 switch. I wanted to enable jumbo frames along the entire logical storage path to see how it would affect virtual disk speeds. Unfortunately, jumbo frames and NIC load balancing aren't possible with my relatively expensive EMC AX150i iSCSI SAN, only with my cheap-as-dirt OpenFiler boxes. Go figure.
There are a couple of different "standards" for jumbo frames (also known as "large MTU"), and not all vendors support the same frame size. Many vendors advertise 9K frames, but in reality their maximum frame sizes fall anywhere between 9000 and 9220 bytes, so I'm shooting for the smallest of those maximums: 9000 bytes. OpenFiler, for example, apparently supports a maximum MTU of 9000 bytes.
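Before changing anything, it helps to have a way to prove a 9000-byte frame actually makes it across the path. From any Linux box on the storage network, a don't-fragment ping does the trick: 8972 bytes of ICMP payload plus 28 bytes of ICMP/IP headers adds up to a 9000-byte packet. (The target address below is just an example; point it at whatever should be answering on your storage VLAN.)
# 8972 payload + 8-byte ICMP header + 20-byte IP header = 9000 bytes; -M do forbids fragmentation
ping -M do -s 8972 192.168.0.10
If that fails while a normal ping succeeds, something along the path is still stuck at a 1500-byte MTU.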
First, I enabled jumbo frames on the 2848, which on the HP switch means a maximum frame size of 9220 bytes.
Switch# config
Switch(config)# vlan 1 jumbo
Switch(config)# write memory
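To confirm the switch actually took the change, the ProCurve CLI reports a per-VLAN jumbo flag. I'm going from memory on the exact output format, but show vlans is the place to look; VLAN 1 should now list Jumbo as Yes.
Switch# show vlans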
Second, I configured the ESX servers to accept jumbo frames. Because this requires fiddling with the VMkernel NIC, I first migrated running machines off the host being configured.
Configure the vSwitch:
/usr/sbin/esxcfg-vswitch -m 9000 vSwitch1
Remove the VMkernel NIC from the Port Group:
/usr/sbin/esxcfg-vmknic -d -i 192.168.0.1 VMkernel
Add the VMkernel NIC back to Port Group with large MTU:
/usr/sbin/esxcfg-vmknic -a -i 192.168.0.1 -n 255.255.255.0 -m 9000 VMkernel
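Before moving on, it's worth confirming the new MTU stuck. On the service console, the list options of the same esxcfg tools report the MTU for the vSwitch and the VMkernel NIC (the exact column layout may vary a bit between ESX builds); both should show 9000 for vSwitch1 and the VMkernel port group.
/usr/sbin/esxcfg-vswitch -l
/usr/sbin/esxcfg-vmknic -l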
Even though the VMs had been migrated to another host, I still had to pause them before configuring OpenFiler, because every time I fiddle with OpenFiler's network configuration it drops the network interface and I have to use remote management to restart it. If VMs are running off that storage when it happens, that's bad news. I could have migrated them to another storage medium instead, but they're test machines, so no biggie to pause them for a bit.
With all the machines paused, I reconfigured OpenFiler:
Click the System tab
Click the Configure link next to the target NIC
Click Continue past the boot protocol screen (DHCP/Static)
Change the MTU to 9000 and click Confirm
Use the console to restart the system (shutdown -r now)
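Once OpenFiler comes back up, a quick look from the console confirms the interface really returned with the larger MTU (I'm assuming the storage NIC is eth0 here; adjust for your box). The output should include MTU:9000.
ifconfig eth0 | grep MTU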
I then restarted the VMs and enjoyed higher transfer rates and virtual disk speeds! I did find out later that removing and re-adding the VMkernel NIC in ESX means you have to re-enable vMotion on that VMkernel port (if you had it enabled).
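As a final end-to-end check of the whole path (vSwitch, physical switch, and OpenFiler), the service console's vmkping sends pings over the VMkernel interface; I believe it takes a -s size flag like regular ping. The target address below is just a placeholder for the OpenFiler box. Keep in mind that without a don't-fragment option the packets may simply be fragmented and still succeed, so the Linux-side ping test above is the more conclusive one.
vmkping -s 8972 192.168.0.10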