Retroactively enabling Jumbo iSCSI frames for a Windows 2008 R2 Hyper-V cluster - proper order and caveats?
Greetings - I'm not sure which forum is best to post this question in, since it's part iSCSI, part networking, and part Hyper-V clustering, but I'm hoping someone in the Hyper-V forum has had experience with this or knows enough to answer my questions. Moderators - if there is a better forum to post this question to, please move it, but please do so only if you believe this multi-component question would be answered better somewhere else.
Background:
We have a 10-node Windows 2008 R2 Hyper-V cluster with 2 non-teamed 1 Gb iSCSI NICs in each host virtual server, accessing the iSCSI network through MPIO. The quorum and CSV LUNs for the Hyper-V cluster are on iSCSI storage, in addition to the iSCSI LUNs that guest virtual servers utilize directly.
Both of the host virtual server iSCSI NIC ports are shared between the host virtual server and the guest virtual servers in the form of 2 separate but shared Hyper-V networks. The reason we shared the NIC ports between host and guests is that we have guest virtual machines that need direct iSCSI LUN access, and it made sense not to have 4 separate iSCSI NICs on each host virtual server (2 dedicated MPIO iSCSI NICs for the host and 2 more dedicated MPIO iSCSI NICs for the guests). The net result of this configuration is that not only do the guest virtual servers that need access to iSCSI LUNs get virtual iSCSI NICs, but the host virtual server gets virtual iSCSI NICs as well. We realize that not dedicating iSCSI NICs to the host will impact iSCSI performance, but using VMQ is supposed to offset the performance loss of the Hyper-V networks.
We have an iSCSI SAN consisting of multiple storage devices, and both the iSCSI storage devices and the host virtual servers are connected to a dedicated pair of redundant switches in a trunked (not stacked) configuration.
Initially we were unable to change the frame size of the virtual iSCSI NICs (for both host and guest virtual servers) from the default 1500 MTU because we had an iSCSI storage component that did not support jumbo frames, and the vendor's best practice was not to mix and match jumbo frame sizes. Due to time constraints we had to deploy the Hyper-V cluster and iSCSI storage in a non-jumbo frame configuration. We have since removed the component that did not support jumbo frames and would like to enable jumbo frames of 9000 to take advantage of more efficient throughput.
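For reference, the efficiency gain we are after works out roughly like this (assuming standard Ethernet framing overhead of 38 bytes per frame and 40 bytes of IP + TCP headers):

    1500 MTU:  1460 payload bytes / 1538 bytes on the wire  =  ~94.9% efficient
    9000 MTU:  8960 payload bytes / 9038 bytes on the wire  =  ~99.1% efficient

On top of the wire efficiency, jumbo frames mean roughly 6x fewer frames (and interrupts) per byte transferred, which we understand is where most of the real-world gain comes from.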
We are concerned about accidentally cutting off communications between our Hyper-V host virtual servers, our guest virtual servers, and the multiple iSCSI storage devices. Our initial research indicates that jumbo frames are only used when the transmitter, receiver, and every switch port in between all agree that an MTU larger than 1500 is OK, and if they don't agree the connection steps down to an MTU of 1500. That led us to the following order for enabling jumbo frames on the existing 2008 R2 Hyper-V cluster (with a verification check after each step, sketched after the list):
- Enable jumbo frames on the iSCSI switches. This will require a reboot of the switches, but since they are redundant it shouldn't cause an issue.
- Enable jumbo frames on the iSCSI storage devices. This may require a reboot of the storage devices; we are checking with the vendor on that, and it can be scheduled for a maintenance window.
- Enable jumbo frames on the host virtual server physical NICs. If this requires a reboot of the host virtual server it shouldn't be an issue, since the cluster has plenty of elbow room and live virtual machines can be moved to another host.
- Enable jumbo frames on the virtual iSCSI NICs of both the host and guest virtual servers.
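For what it's worth, these are the standard Windows commands we plan to use for the verification check after each step (the target address below is just an example standing in for one of our storage devices):

    rem Show the effective IP MTU on every interface
    netsh interface ipv4 show subinterfaces

    rem Send a full 9000-byte frame (8972 data + 20 IP + 8 ICMP) with Don't Fragment set;
    rem if any hop is still at 1500 this fails with "Packet needs to be fragmented but DF set."
    ping -f -l 8972 192.168.50.10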
We want to make sure we limit the impact to the production Hyper-V environment as much as possible. We are willing to do this in a maintenance window if it will cause an outage, but we need to know whether it will cause an outage either way so we can notify the appropriate people.
Questions:
- Is the order listed above the correct one in which to enable jumbo frames? If not, why?
- At what point should we be concerned about our Hyper-V host virtual servers or guest virtual servers losing access to iSCSI LUNs? For example, can the virtual iSCSI NICs stay at 1500 even though the switches, storage devices, and physical NIC connections have an MTU of 9000 set? If not, we are not sure how to roll this change into production without having a complete outage.
- Are there any other gotchas and/or concerns from people who have done this or are planning on doing it?
Thanks!
Reading your action list brought me back in time a few months, to when I had to do the same thing in the exact same order (better yet, on a 10-node Hyper-V cluster :) )....
It gets a little tricky when enabling jumbo packets on the host and virtual servers when the NIC is shared, but I think your action plan is laid out perfectly.... There will be a split-second disconnection of the network once you apply the settings to a NIC. You have to enable it in this order: 1) the physical NIC, 2) the virtual NIC that corresponds to it, and 3) the emulated NIC within the VM.
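A quick way to confirm each layer actually picked the setting up is netsh, run in the parent partition and again inside the guest (the interface name below is just an example):

    rem The physical NIC, the corresponding Hyper-V virtual NIC and (inside the
    rem guest) the emulated NIC should all report an MTU of 9000
    netsh interface ipv4 show subinterfaces

    rem Note: the jumbo setting itself lives on the driver's Advanced tab ("Jumbo
    rem Packet" or similar); netsh only adjusts the IP-layer MTU on top of it
    netsh interface ipv4 set subinterface "iSCSI-NIC-1" mtu=9000 store=persistent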
You may want to check the driver and firmware levels of the NICs at this point (before enabling jumbo) on the hosts. In some cases I had settings revert or NIC numbering change after such upgrades, so make sure you trace each NIC to the proper virtual network before continuing after these updates.
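One way to snapshot the driver versions before and after the updates is a plain WMI query, e.g.:

    rem List installed network driver versions so reverted settings are easy to spot
    wmic path win32_pnpsigneddriver where "DeviceClass='NET'" get DeviceName,DriverVersion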
When you apply the jumbo settings to the physical NIC and virtual NIC, the cluster node will drop its connection to the quorum (given you are using that NIC for iSCSI quorum as well) and will register an event on the failover cluster saying the server lost communication with the quorum. As I said, move the VMs out of the host before doing any of this and nothing will be affected. No restart is necessary (unless a firmware/driver upgrade requires it).
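For the "move the VMs out first" part, this is roughly what it looks like on 2008 R2 from an elevated prompt (the node and VM group names here are made up for illustration):

    rem Pause the node so the cluster will not place anything on it mid-change
    cluster.exe node HV-NODE1 /pause

    rem Live migrate a clustered VM to another node (FailoverClusters PowerShell module)
    powershell -command "Import-Module FailoverClusters; Move-ClusterVirtualMachineRole -Name 'SQL-VM' -Node 'HV-NODE2'"

    rem Resume the node once the jumbo change is applied and verified
    cluster.exe node HV-NODE1 /resume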
Once you have a healthy host back in the cluster, you can move the VM in question onto that node and enable jumbo packets within the virtual OS on the emulated NIC. Again, there will be a split-second disconnection, so if you are running an Exchange server that has a data drive delivered via iSCSI, you may want to dismount the stores first. In other words, stop I/O activity against the iSCSI source inside the VM to avoid possible data corruption.
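If the guest in question is, say, Exchange 2010, dismounting the store around the change is quick from the Exchange Management Shell (the database name here is just an example):

    # Dismount before touching the emulated NIC...
    Dismount-Database -Identity "MBX-DB01" -Confirm:$false
    # ...apply the jumbo setting on the guest NIC, then remount
    Mount-Database -Identity "MBX-DB01"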
Also, make sure you have the latest version of the Hyper-V integration tools installed on the guest OS.