In block-based storage environments (iSCSI, Fibre Channel, FCoE), ESXi hosts can use multiple NICs or HBAs to connect to LUNs through (FC) switches and Storage Processors (SPs). Because of this physical design, an ESXi host has multiple paths to the same storage, a technique called multipathing. It provides both failover and load balancing. The following logical diagram from VMware shows a simple multipathing architecture.
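To see those multiple paths on a host, the core storage namespace of esxcli can list them; each LUN typically appears on several runtime paths. The naa device identifier below is a made-up example, substitute one of your own devices:

```shell
# List every storage path the host sees; one LUN usually shows up on
# several runtime paths (vmhba2:C0:T0:L0, vmhba3:C0:T0:L0, ...)
~ # esxcli storage core path list

# Narrow the output down to a single device (example naa identifier)
~ # esxcli storage core path list -d naa.60050768028080fa7c00000000000001
```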
With vSphere 4, VMware introduced a redesigned multipathing storage subsystem inside the VMkernel, called the Pluggable Storage Architecture, also known as PSA. This modular framework, a collection of plug-ins, offers Storage APIs so that 3rd party developers can integrate their storage solutions and capabilities into the VMkernel. It also manages the plug-ins and handles path discovery via scanning.
The Native Multipathing Plugin (NMP) is created by VMware. The NMP manages the sub-plug-ins (SATPs and PSPs), whether written by VMware or by a 3rd party, and provides default support for the storage arrays listed in the VMware Compatibility Guide (aka HCL).
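To see what the NMP has decided for each device it claims, the following read-only command lists every claimed device together with its assigned SATP and PSP:

```shell
# Show every device claimed by the NMP; the output includes the
# Storage Array Type (SATP) and Path Selection Policy (PSP) per device
~ # esxcli storage nmp device list
```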
From the ESXi shell, run the following esxcli command to display the loaded multipathing modules. In this case we have only the NMP.
~ # esxcli storage core plugin list
The NMP has two sub-plug-in types. Both are developed by VMware, but 3rd party versions can be used as well.
Storage Array Type Plugin (SATP)
Monitors each physical path, reports changes, and handles path failover. There are pre-defined SATPs for every storage array that VMware supports.
This command shows the currently loaded SATP plug-ins on the ESXi host:
~ # esxcli storage nmp satp list
Every SATP plug-in has one or more claim rules. The full output is a long list containing claim rules for each supported array; I have filtered it to VMW_SATP_ALUA:
~ # esxcli storage nmp satp rule list -s VMW_SATP_ALUA
For example, the IBM Storwize V7000 (2145 is the Product ID) also has the VMW_SATP_ALUA plug-in assigned, with a specified PSP.
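If an array is not matched by the built-in claim rules, or you want to change the defaults, a custom rule can be added. The sketch below is only an illustration: the Vendor and Model strings and the chosen PSP are example values, so verify the exact strings your array reports before adding such a rule.

```shell
# Sketch: claim devices reporting Vendor "IBM" and Model "2145" with
# VMW_SATP_ALUA and use Round Robin as the PSP.
# (Vendor/Model/PSP values here are examples, not a recommendation.)
~ # esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V IBM -M 2145 \
      -P VMW_PSP_RR -e "Example ALUA rule for IBM 2145"

# Verify that the rule was registered
~ # esxcli storage nmp satp rule list -s VMW_SATP_ALUA
```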
Path Selection Plug-In (PSP)
This is the second module inside the NMP. It is responsible for choosing a path for every I/O request. Based on the loaded SATP plug-in (see esxcli storage nmp satp list above), the NMP selects a PSP automatically for each LUN (but it can be overridden). By default, vSphere supports three PSPs:
~ # esxcli storage nmp psp list
- VMW_PSP_MRU, aka Most Recently Used: the PSP selects the path that was available first. During a failover it chooses another working path, and the I/O remains on this new path even after the original becomes available again; in other words, it does not fail back. This is the default policy for most Active/Passive and ALUA arrays.
- VMW_PSP_RR, aka Round Robin: I/O is rotated through all active, available, optimized paths, providing basic load balancing. By default, Round Robin switches to the next path after 1000 I/O operations. It can be used with Active/Active, Active/Passive, and ALUA arrays.
- VMW_PSP_FIXED, aka Fixed: the ESXi host uses a preferred path, which can be selected manually; otherwise it is the first path found available during ESXi boot. After a path failover caused by an outage, it returns to the original path when that path becomes available again. This is the default policy for most Active/Active arrays. Do not use it with Active/Passive arrays, because path thrashing can occur with this PSP.
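The policy in effect for a given LUN, and the Round Robin switching settings mentioned above, can be inspected without changing anything. The naa identifier below is again an example device:

```shell
# Check which PSP a given device is using (example naa identifier)
~ # esxcli storage nmp device list -d naa.60050768028080fa7c00000000000001

# For a device using VMW_PSP_RR, show the path-switching configuration;
# by default the type is "iops" with an IOOperation Limit of 1000
~ # esxcli storage nmp psp roundrobin deviceconfig get \
      -d naa.60050768028080fa7c00000000000001
```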
Storage vendors can provide their own Multipathing Plug-ins (MPPs), which run in parallel with the NMP. These 3rd party plug-ins completely replace the NMP, SATP, and PSP for the devices they claim. An MPP can optimize path failover and load balancing, and can also provide sophisticated path selection based on, for example, queue depth. Only a few vendors have developed MPPs, such as EMC (PowerPath/VE) or Veritas (Dynamic Multi-Pathing).
In Part 2 we will configure the proper SATP and PSP in the GUI as well as with esxcli and PowerCLI.