Configure Software iSCSI Load-balance Multipathing to vSphere Datastores

VMware vSphere 4 gives us the ability to actively use multiple iSCSI paths to reach a single LUN. You will need to use vSphere Client, your iSCSI storage management tools, and an ESX command-line interface (such as ssh, the RCLI or the vSphere Management Assistant) to get it working. This procedure can be used to add up to eight iSCSI paths per datastore, provided each path uses a unique physical NIC and each of those physical NICs has a corresponding NIC on the iSCSI SAN side. In other words, setting up more paths on the VMware side than your iSCSI SAN can actually accommodate would be pointless.

This tutorial assumes that you are familiar with vSphere Client and can find your way around. Read on beyond the break.

The Procedure

This entire procedure must be repeated for each ESX host you want to set up iSCSI multipathing on. If you have a clustered VMware environment, you should set up multipathing on each node in your cluster. Each VMkernel interface you create for the purposes of iSCSI multipathing will need an IP address, so it’s important to make sure you have enough available IP addresses in the appropriate subnet(s) before proceeding.

For convenience, here’s the procedure summary, with links to the tutorial sections.

  1. Create the VMkernel interfaces
  2. Use your iSCSI management tools to allow these new interfaces to connect
  3. Configure the software iSCSI initiator to use the new vmkx-iSCSI ports
  4. Configure Round Robin path selection for all iSCSI datastores
  5. Reboot the ESX host

I. Create the VMkernel interfaces

The first step is to create at least two VMkernel ports, bound to unique physical NICs. This is done in the vSphere Client.

  1. Select the Networking view from the ESX host’s Configuration tab.
  2. Find the vSwitch that is connected to the physical NICs you’d like to use for iSCSI multipathing and click the Properties link.
  3. Click the Add button, select VMkernel and click Next.
  4. Give the port a descriptive network label. I recommend “vmkx-iSCSI”, where x is the actual port number. I’ll be using this naming convention throughout this tutorial. (If you only have one existing VMkernel port, then it is vmk0. The next one you create will be vmk1, the one after that vmk2, and so on.) Do not enable VMotion on this port. Click Next.
  5. Enter the port’s IP address and subnet mask, then click Next and Finish.
  6. Select the new port from the list in the vSwitch Properties window, click the Edit button, and then select the NIC Teaming tab.
  7. Enable Override vSwitch failover order, and then select one vmnic to be in the Active Adapters section and move the rest to the Unused Adapters section. (Make sure you choose a different vmnic for each vmkx-iSCSI port!) Click OK.
  8. Repeat steps 3 – 7 for each additional VMkernel port you wish to create, then close the vSwitch Properties window.
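
Before moving on, it’s worth confirming which vmk number each new port actually received, since you’ll need those names in section III. A quick check from the ESX command line:

    esxcfg-vmknic -l

Each vmkx-iSCSI port group should be listed with its own vmk interface and IP address.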

II. Use your iSCSI management tools to allow these new interfaces to connect

Since all environments are different, I can’t tell you exactly how to do this step. Our iSCSI setup uses IP address access lists, so for this step I add the IP addresses of the VMkernel interfaces created in the previous section to the access list.

III. Configure the software iSCSI initiator to use the new vmkx-iSCSI ports

  1. In vSphere Client, select the Storage Adapters view from the ESX host’s Configuration tab and note the iSCSI Software Adapter device name. It’s vmhba33 in our environment.
  2. Using a command-line interface to the ESX host, add a vmk adapter to the software iSCSI initiator with this command:
    esxcli swiscsi nic add -n vmkx -d vmhba33
  3. Repeat step 2 for each vmkx interface you created in section I.
  4. Back in vSphere Client, right-click on the iSCSI Software Adapter and choose Rescan to discover the additional paths to your storage. After a short wait, the number of paths shown in the Details section should multiply by the number of VMkernel ports you added. For example, if you are attached to 16 datastores and you added two iSCSI paths to them, your path count should increase from 16 to 32.

NOTE: It is common for the path count to be incorrect at this stage. In my case, it showed 48 paths: the original 16 plus the 32 new ones. Rebooting the ESX host will fix this, but you can save the reboot until the entire process is complete.
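
Putting steps 2 and 3 together, here is what the binding looks like with two new ports (a sketch that assumes they came up as vmk1 and vmk2; the list command simply verifies the binding):

    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33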

IV. Configure Round Robin path selection for all iSCSI datastores

To actually take advantage of these multiple iSCSI paths, you need to set your datastores to use the Round Robin path selection method so all paths can be active. The following procedure must be completed for each iSCSI datastore your ESX host accesses. (A CLI alternative is sketched after the steps below.)

  1. In vSphere Client, select the Storage view from the ESX host’s Configuration tab.
  2. Select one of your iSCSI datastores and click the Properties link.
  3. Click the Manage Paths button.
  4. Select Round Robin (VMware) from the Path Selection menu and click Close.
  5. Click Close, and then wait for the task to finish.
  6. Repeat steps 2 – 5 for each iSCSI datastore.
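
If you have a lot of datastores, the same change can be made from the command line instead of clicking through each one (a sketch; <naa.id> is a placeholder for the LUN’s device identifier, which the first command will show):

    esxcli nmp device list
    esxcli nmp device setpolicy --device <naa.id> --psp VMW_PSP_RR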

V. Reboot the ESX host

If your environment is clustered, rebooting an individual host shouldn’t be too much trouble. After the reboot, the correct number of paths to your datastores should be reported.
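
Once the host is back up, you can also sanity-check the total path count from the command line (esxcfg-mpath -L prints one line per path; note the total includes any local paths as well):

    esxcfg-mpath -L | wc -l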

Comments

  1. To make sure you are getting the best performance, change the Round Robin defaults with the following, where <device> is the LUN’s device identifier:

    esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device <device>

  2. For ESX 4.x, you may want to note that if you are planning on using jumbo frames for iSCSI communication, then you have to create the VMkernel ports using the CLI.

    The MTU of a VMkernel port cannot be changed after it is created. Here are the commands to create one with the MTU set to 9000:

    First, list your vSwitches:               esxcfg-vswitch -l
    Set the MTU on the SAN vSwitch:           esxcfg-vswitch -m 9000 vSwitch#
    Create a port group on that vSwitch:      esxcfg-vswitch -A "PortGroupHere" vSwitch#
    Create the VMkernel port with MTU 9000:   esxcfg-vmknic -a -i <IP> -n <netmask> -m 9000 "PortGroupHere"

    This is a great article though!
