Pearson IT Certification

vSphere Pluggable Storage Architecture (PSA)

Date: Sep 26, 2012

This chapter covers PSA (VMware Pluggable Storage Architecture) components. Learn how to list PSA plug-ins and how they interact with vSphere ESXi 5, as well as how to list, modify, and customize PSA claim rules and how to work around some common issues. It also covers how ALUA-capable devices interact with SATP claim rules for the purpose of using a specific PSP.

vSphere 5.0 continues to use the Pluggable Storage Architecture (PSA) introduced with ESX 3.5. This architecture modularizes the storage stack, which makes it easier to maintain and opens the door for storage partners to develop their own proprietary components that plug into it.

Availability is critical, so redundant paths to storage are essential. One of the key functions of the storage component in vSphere is to provide multipathing (when there are multiple paths, deciding which path a given I/O should use) and failover (when a path goes down, failing the I/O over to another path).

VMware, by default, provides a generic Multipathing Plugin (MPP) called Native Multipathing (NMP).

Native Multipathing

To understand how the pieces of PSA fit together, Figures 5.1, 5.2, 5.4, and 5.6 build up the PSA gradually.

Figure 5.1

Figure 5.1. Native MPP

NMP is the component of the vSphere 5 VMkernel that handles multipathing and failover. It exports two APIs, the Storage Array Type Plug-in (SATP) and the Path Selection Plug-in (PSP), both of which are implemented as plug-ins.

NMP performs the following functions (some done with help from SATPs and PSPs):

PSA communicates with NMP for the following operations:

Storage Array Type Plug-in (SATP)

Figure 5.2 depicts the relationship between SATP and NMP.

Figure 5.2

Figure 5.2. SATP

SATPs are PSA plug-ins specific to certain storage arrays or storage array families. Some are generic for certain array classes—for example, Active/Passive, Active/Active, or ALUA-capable arrays.

SATPs handle the following operations:

NMP communicates with SATPs for the following operations:

Examples of SATPs are listed in Table 5.1:

Table 5.1. Examples of SATPs

SATP                   Description
VMW_SATP_CX            Supports EMC CX arrays that do not use the ALUA protocol
VMW_SATP_ALUA_CX       Supports EMC CX arrays that use the ALUA protocol
VMW_SATP_SYMM          Supports the EMC Symmetrix array family
VMW_SATP_INV           Supports the EMC Invista array family
VMW_SATP_EVA           Supports HP EVA arrays
VMW_SATP_MSA           Supports HP MSA arrays
VMW_SATP_EQL           Supports Dell EqualLogic arrays
VMW_SATP_SVC           Supports IBM SVC arrays
VMW_SATP_LSI           Supports LSI arrays and others OEMed from them (for example, the DS4000 family)
VMW_SATP_ALUA          Supports non-specific arrays that support the ALUA protocol
VMW_SATP_DEFAULT_AA    Supports non-specific active/active arrays
VMW_SATP_DEFAULT_AP    Supports non-specific active/passive arrays
VMW_SATP_LOCAL         Supports direct-attached devices

How to List SATPs on an ESXi 5 Host

To obtain a list of SATPs on a given ESXi 5 host, you may run the following command directly on the host or remotely via an SSH session, a vMA appliance, or vCLI:

# esxcli storage nmp satp list

An example of the output is shown in Figure 5.3.

Figure 5.3

Figure 5.3. Listing SATPs

Notice that each SATP is listed in association with a specific PSP. The output shows the default configuration of a freshly installed ESXi 5 host. To modify these associations, refer to the “Modifying PSA Plug-in Configurations Using the UI” section later in this chapter.
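If your storage vendor prescribes a different default PSP for one of these SATPs, the association can also be changed per SATP from the CLI. The following is only a hedged sketch: the SATP and PSP names are examples, and you should confirm the supported pairing with your array vendor before changing it. The new default typically takes effect when devices are claimed again (for example, after a reboot).

esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR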

If you installed third-party SATPs, they are listed along with the SATPs shown in Table 5.1.

Path Selection Plugin (PSP)

Figure 5.4 depicts the relationship between SATP, PSP, and NMP.

Figure 5.4

Figure 5.4. PSP

PSPs are PSA plug-ins that handle path selection policies; they replace the failover policies used by Legacy Multipathing (Legacy-MP) in releases prior to vSphere 4.x.

PSPs handle the following operations:

NMP communicates with PSPs for the following operations:

How to List PSPs on an ESXi 5 Host

To obtain a list of PSPs on a given ESXi 5 host, you may run the following command directly on the host or remotely via an SSH session, a vMA appliance, or vCLI:

# esxcli storage nmp psp list

An example of the output is shown in Figure 5.5.

Figure 5.5

Figure 5.5. Listing PSPs

The output shows the default configuration of a freshly installed ESXi 5 host. If you installed third-party PSPs, they are also listed.

Third-Party Plug-ins

Figure 5.6 depicts the relationship between third-party plug-ins, NMP, and PSA.

Figure 5.6

Figure 5.6. Third-party plug-ins

Because PSA is a modular architecture, VMware provided APIs to its storage partners to develop their own plug-ins. These plug-ins can be SATPs, PSPs, or MPPs.

Third-party SATPs and PSPs can run side by side with VMware-provided SATPs and PSPs.

Providers of third-party SATPs and PSPs can implement their own proprietary functions, specific to their storage arrays, in each plug-in. Some partners implement only multipathing and failover algorithms, whereas others implement load balancing and I/O optimization as well.

Examples of such plug-ins in vSphere 4.x that are also planned for vSphere 5 are

See Chapter 8 for further details.

Multipathing Plugins (MPPs)

Figure 5.7 depicts the relationship between MPPs, NMP, and PSA.

Figure 5.7

Figure 5.7. MPPs, including third-party plug-ins

Multipathing functionality that is not implemented as SATPs or PSPs can be delivered as a separate MPP instead. MPPs run side by side with NMP. An example of that is EMC PowerPath/VE, which is certified with vSphere 4.x and is planned for vSphere 5.

See Chapter 8 for further details.

Anatomy of PSA Components

Figure 5.8 is a block diagram showing the components of the PSA framework.

Figure 5.8

Figure 5.8. NMP components of PSA framework

Now that we have covered the individual components of the PSA framework, let's put the pieces together. Figure 5.8 shows the NMP component of the PSA framework. NMP provides facilities for configuration, general device management, array-specific management, and path selection policies.

The configuration of NMP-related components can be done via ESXCLI or the user interface (UI) provided by vSphere Client. Read more on this topic in the “Modifying PSA Plug-in Configurations Using the UI” section later in this chapter.

Multipathing and failover policy is set by NMP with the aid of PSPs. For details on how to configure the PSP for a given array, see the “Modifying PSA Plug-in Configurations Using the UI” section later in this chapter.

Array-specific functions are handled by NMP via the following:

I/O Flow Through PSA and NMP

In order to understand how I/O sent to storage devices flows through the ESXi storage stack, you first need to understand some of the terminology relevant to this chapter.

Classification of Arrays Based on How They Handle I/O

Arrays can be one of the following types:

Paths and Path States

From a storage perspective, the possible routes to a given LUN through which the I/O may travel are referred to as paths. A path consists of multiple points that start from the initiator port and end at the LUN.

A path can be in one of the states listed in Table 5.2.

Table 5.2. Path States

Active: A path via an Active SP. I/O can be sent to any path in this state.

Standby: A path via a Passive or Standby SP. I/O is not sent via such a path.

Disabled: A path that is disabled, usually by the vSphere Administrator.

Dead: A path that lost connectivity to the storage network. This can be due to an HBA (Host Bus Adapter), Fabric or Ethernet switch, or SP port connectivity loss. It can also be due to HBA or SP hardware failure.

Unknown: The state could not be determined by the relevant SATP.

Preferred Path Setting

A preferred path is a setting that NMP honors for devices claimed by VMW_PSP_FIXED PSP only. All I/O to a given device is sent over the path configured as the Preferred Path for that device. When the preferred path is unavailable, I/O is sent via one of the surviving paths. When the preferred path becomes available, I/O fails back to that path. By default, the first path discovered and claimed by the PSP is set as the preferred path. To change the preferred path setting, refer to the “Modifying PSA Plug-in Configurations Using the UI” section later in this chapter.
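When a device is claimed by VMW_PSP_FIXED, the preferred path can also be inspected or set from the CLI. This is only a hedged sketch: the device ID and path name are placeholders, and you should verify the sub-namespace against your ESXCLI build.

esxcli storage nmp psp fixed deviceconfig get --device <device-id>
esxcli storage nmp psp fixed deviceconfig set --device <device-id> --path vmhba2:C0:T0:L1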

Figure 5.9 shows an example of a path to LUN 1 from Host A (broken line) and Host B (dash-dot line). This path goes through HBA0 to target 1 on SPA.

Figure 5.9

Figure 5.9. Paths to LUN1 from two hosts

Such a path is represented by the Runtime Name naming convention (Runtime Name was formerly known as Canonical Name). It is in the format HBAx:Cn:Ty:Lz—for example, vmhba0:C0:T0:L1—which reads as follows:

vmhba0, Channel 0, Target 0, LUN1

It represents the path to LUN 1, broken down as follows:

  • vmhba0: HBA number 0
  • C0: Channel number 0
  • T0: Target number 0
  • L1: LUN number 1

Flow of I/O Through NMP

Figure 5.10 shows the flow of I/O through NMP.

Figure 5.10

Figure 5.10. I/O flow through NMP

The numbers in the figure represent the following steps:

  1. NMP calls the PSP assigned to the given logical device.
  2. The PSP selects an appropriate physical path on which to send the I/O. If the PSP is VMW_PSP_RR, it load balances the I/O over paths whose states are Active or, for ALUA devices, paths via a target port group whose AAS is Active/Optimized.
  3. If the array returns an I/O error, NMP calls the relevant SATP.
  4. The SATP interprets the error codes, activates inactive paths if necessary, and then fails over to the new active path.
  5. The PSP selects a new active path on which to send the I/O.
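Step 2 above mentions the Round Robin PSP (VMW_PSP_RR). For devices claimed by that PSP, the number of I/Os (or bytes) sent down a path before switching to the next one can be inspected or tuned per device. The following is only a sketch; the device ID is a placeholder, and you should apply such tuning only if your storage vendor recommends it.

esxcli storage nmp psp roundrobin deviceconfig get --device <device-id>
esxcli storage nmp psp roundrobin deviceconfig set --device <device-id> --type iops --iops 1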

Listing Multipath Details

There are two ways to display the list of paths to a given LUN, both of which are discussed in this section:

  • Using the vSphere Client UI
  • Using the command-line interface (CLI)

Listing Paths to a LUN Using the UI

To list all paths to a given LUN in the vSphere 5.0 host, you may follow this procedure, which is similar to the procedure for listing all targets discussed earlier in Chapter 2, “Fibre Channel Storage Connectivity,” Chapter 3, and Chapter 4:

  1. Log on to the vSphere 5.0 host directly or to the vCenter server that manages the host using the VMware vSphere 5.0 Client as a user with Administrator privileges.
  2. While in the Inventory—Hosts and Clusters view, locate the vSphere 5.0 host in the inventory tree and select it.
  3. Navigate to the Configuration tab.
  4. Under the Hardware section, select the Storage option.
  5. Under the View field, click the Devices button.
  6. Under the Devices pane, select one of the SAN LUNs (see Figure 5.11). In this example, the device name starts with DGC Fibre Channel Disk.
    Figure 5.11

    Figure 5.11. Listing storage devices

  7. Select Manage Paths in the Device Details pane.
  8. Figure 5.12 shows details for an FC-attached LUN. In this example, I sorted on the Runtime Name column in ascending order. The Paths section shows all available paths to the LUN in the format:
    • Runtime Name—vmhbaX:C0:Ty:Lz, where X is the HBA number, 0 is the channel number, y is the target number, and z is the LUN number. More on that in the “Preferred Path Setting” section earlier in this chapter.
    • Target—The WWNN followed by the WWPN of the target (separated by a space).
    • LUN—The LUN number that can be reached via the listed paths.
    • Status—This is the path state for each listed path.
    Figure 5.12

    Figure 5.12. Listing paths to an FC-attached LUN

  9. The Name field in the lower pane is a permanent name, in contrast to the Runtime Name listed right below it. It is made up of three parts: the HBA name, the Target name, and the LUN’s device ID, separated by dashes (for FC devices) or commas (for iSCSI devices). The HBA and Target names differ by the protocol used to access the LUN.

Figure 5.12 shows the FC-based path Name, which is comprised of

Figure 5.13 shows the iSCSI-based path Name which is comprised of

Figure 5.13

Figure 5.13. Listing paths to an iSCSI-attached LUN

Figure 5.14 shows a Fibre Channel over Ethernet (FCoE)-based path name, which is identical to the FC-based pathnames. The only difference is that fcoe is used in place of fc throughout the name.

Figure 5.14

Figure 5.14. Listing paths to an FCoE-attached LUN

Listing Paths to a LUN Using the Command-Line Interface (CLI)

ESXCLI provides similar details to what is covered in the preceding section. For details about the various facilities that provide access to ESXCLI, refer to the “Locating HBA’s WWPN and WWNN in vSphere 5 Hosts” section in Chapter 2.

The namespace of ESXCLI in vSphere 5.0 is fairly intuitive! Simply start with esxcli followed by the area of vSphere you want to manage—for example, esxcli network, esxcli software, esxcli storage—which enables you to manage Network, ESXi Software, and Storage, respectively. For more available options, just run esxcli --help. Now, let’s move on to the available commands:

Figure 5.15 shows the esxcli storage nmp namespace.

Figure 5.15

Figure 5.15. esxcli storage nmp namespace

The namespace of esxcli storage nmp is for all operations pertaining to native multipathing, which include psp, satp, device, and path.

I cover all these namespaces in detail later in the “Modifying PSA Plug-ins Using the CLI” section. The relevant operations for this section are

esxcli storage nmp path list

esxcli storage nmp path list -d <device ID>

The first command provides a list of paths to all devices regardless of how they are attached to the host or which protocol is used.

The second command lists the paths to the device specified by the device ID (for example, NAA ID) by using the -d option.

The command in this example is

esxcli storage nmp path list -d naa.6006016055711d00cff95e65664ee011

You may also use the verbose command option --device instead of -d.

You can identify the NAA ID of the device you want to list by running a command like this:

esxcfg-mpath -b |grep -B1 "fc Adapter"| grep -v -e "--" |sed 's/Adapter.*//'

You may also use the verbose command option --list-paths instead of -b.

The output of this command is shown in Figure 5.16.

Figure 5.16

Figure 5.16. Listing paths to an FC-attached LUN via the CLI

This output shows all FC-attached devices. The Device Display Name of each device is listed followed immediately by the Runtime Name (for example, vmhba3:C0:T0:L1) of all paths to that device. This output is somewhat similar to the legacy multipathing outputs you might have seen with ESX Server release 3.5 and older.

The Device Display Name is actually listed after the device NAA ID and a colon.

From the runtime name you can identify the LUN number and the HBA through which they can be accessed. The HBA number is the first part of the Runtime Name, and the LUN number is the last part of that name.

All block devices conforming to the SCSI-3 standard have an NAA device ID assigned, which is listed at the beginning and the end of the Device Display Name line in the preceding output.

In this example, FC-attached LUN 1 has NAA ID naa.6006016055711d00cff95e65664ee011 and that of LUN0 is naa.6006016055711d00cef95e65664ee011. I use the device ID for LUN 1 in the output shown in Figure 5.17.

Figure 5.17

Figure 5.17. Listing pathnames to an FC-attached device

You may use the verbose version of the command shown in Figure 5.17 by using --device instead of -d.

From the outputs of Figure 5.16 and 5.17, LUN 1 has four paths.

Using the Runtime Name, the list of paths to LUN 1 is

vmhba3:C0:T1:L1
vmhba3:C0:T0:L1
vmhba2:C0:T1:L1
vmhba2:C0:T0:L1

This translates to the list shown in Figure 5.18 based on the physical pathnames. This output was collected using this command:

esxcli storage nmp path list -d naa.6006016055711d00cff95e65664ee011 |grep fc
Figure 5.18

Figure 5.18. Listing physical pathnames of an FC-attached LUN

Or use the verbose option:

esxcli storage nmp path list --device naa.6006016055711d00cff95e65664ee011 |grep fc

This output is similar to the aggregate of all paths that would have been identified using the corresponding UI procedure earlier in this section.

Using Table 2.1, “Identifying SP port association with each SP,” in Chapter 2, we can translate the targets listed in the four paths as shown in Table 5.3:

Table 5.3. Identifying SP Port for LUN Paths

Runtime Name       Target WWPN         SP Port Association
vmhba3:C0:T1:L1    5006016941e06522    SPB1
vmhba3:C0:T0:L1    5006016141e06522    SPA1
vmhba2:C0:T1:L1    5006016841e06522    SPB0
vmhba2:C0:T0:L1    5006016041e06522    SPA0

Identifying Path States and on Which Path the I/O Is Sent—FC

Still using the FC example (refer to Figure 5.17), two fields are relevant to the task of identifying the path states and the I/O path: Group State and Path Selection Policy Path Config. Table 5.4 shows the values of these fields and their meanings.

Table 5.4. Path State Related Fields

Runtime Name       Group State    PSP Path Config              Meaning
vmhba3:C0:T1:L1    Standby        non-current path; rank: 0    Passive SP—no I/O
vmhba3:C0:T0:L1    Active         non-current path; rank: 0    Active SP—no I/O
vmhba2:C0:T1:L1    Standby        non-current path; rank: 0    Passive SP—no I/O
vmhba2:C0:T0:L1    Active         current path; rank: 0        Active SP—I/O

Combining the last two tables, we can extrapolate the following: I/O to LUN 1 is currently sent on vmhba2:C0:T0:L1, which goes to SP port SPA0, so SPA is the SP that currently owns this LUN. The other path through SPA (vmhba3:C0:T0:L1, via SPA1) is Active but not currently used, and the paths through the SPB ports are in Standby state because SPB is the passive SP for this LUN.

Example of Listing Paths to an iSCSI-Attached Device

To list paths to a specific iSCSI-attached LUN, try a different approach for locating the device ID:

esxcfg-mpath -m |grep iqn

You can also use the verbose command option:

esxcfg-mpath --list-map |grep iqn

The output for this command is shown in Figure 5.19.

Figure 5.19

Figure 5.19. Listing paths to an iSCSI-attached LUN via the CLI

In the output, the lines were wrapped for readability; each line actually begins with vmhba35. From this output, we have the information listed in Table 5.5.

Table 5.5. Matching Runtime Names with Their NAA IDs

Runtime Name        NAA ID
vmhba35:C0:T1:L0    naa.6006016047301a00eaed23f5884ee011
vmhba35:C0:T0:L0    naa.6006016047301a00eaed23f5884ee011

This means that these two paths lead to the same LUN (LUN 0), whose NAA ID is naa.6006016047301a00eaed23f5884ee011.

Now, get the pathnames for this LUN. The command is the same as what you used for listing the FC device:

esxcli storage nmp path list -d naa.6006016047301a00eaed23f5884ee011

You may also use the verbose version of this command:

esxcli storage nmp path list --device naa.6006016047301a00eaed23f5884ee011

The output is shown in Figure 5.20.

Figure 5.20

Figure 5.20. Listing paths to an iSCSI-attached LUN via CLI

Note that the path name was wrapped for readability.

The output is similar to what you observed with the FC-attached devices, except for the actual pathname, which here starts with iqn instead of fc.

The Group State and Path Selection Policy Path Config fields show similar content as well. Based on that, I built Table 5.6.

Table 5.6. Matching Runtime Names with Their Target IDs and SP Ports

Runtime Name        Target IQN                                  SP Port Association
vmhba35:C0:T1:L0    iqn.1992-04.com.emc:cx.apm00071501971.b0    SPB0
vmhba35:C0:T0:L0    iqn.1992-04.com.emc:cx.apm00071501971.a0    SPA0

To list only the pathnames in the output shown in Figure 5.20, you may append |grep iqn to the command.

The output of the command is listed in Figure 5.21 and was wrapped for readability. Each path name starts with iqn:

esxcli storage nmp path list --device naa.6006016047301a00eaed23f5884ee011 |grep iqn
Figure 5.21

Figure 5.21. Listing pathnames of iSCSI-attached LUNs

Identifying Path States and on Which Path the I/O Is Sent—iSCSI

The process of identifying path states and I/O path for iSCSI protocol is identical to that of the FC protocol listed in the preceding section.

Example of Listing Paths to an FCoE-Attached Device

The process of listing paths to FCoE-attached devices is identical to the process for FC except that the string you use is fcoe Adapter instead of fc Adapter.

A sample output from an FCoE configuration is shown in Figure 5.22.

Figure 5.22

Figure 5.22. List of runtime paths of FCoE-attached LUNs via CLI

The command used is the following:

esxcfg-mpath -b |grep -B1 "fcoe Adapter" |sed 's/Adapter.*//'

You may also use the verbose command:

esxcfg-mpath --list-paths |grep -B1 "fcoe Adapter" |sed 's/Adapter.*//'

Using the NAA ID for LUN 1, the list of pathnames is shown in Figure 5.23.

Figure 5.23

Figure 5.23. List of pathnames of an FCoE-attached LUN

You may also use the verbose version of the command shown in Figure 5.23 by using --device instead of -d.

This translates to the physical pathnames shown in Figure 5.24.

Figure 5.24

Figure 5.24. List of paths names of an FCoE LUN

The command used to collect the output shown in Figure 5.24 is

esxcli storage nmp path list -d naa.6006016033201c00a4313b63995be011 |grep fcoe

Using Table 2.1, “Identifying SP Port Association with Each SP,” in Chapter 2, you can translate the targets listed in the returned four paths as shown in Table 5.7.

Table 5.7. Translation of FCoE Targets

Runtime Name        Target WWPN         SP Port Association
vmhba34:C0:T1:L1    5006016141e0b7ec    SPA1
vmhba34:C0:T0:L1    5006016941e0b7ec    SPB1
vmhba33:C0:T1:L1    5006016041e0b7ec    SPA0
vmhba33:C0:T0:L1    5006016841e0b7ec    SPB0

Identifying Path States and on Which Path the I/O Is Sent—FCoE

Following the same process used in the FC example (refer to Figure 5.17), two fields are relevant to the task of identifying the path states and the I/O path: Group State and Path Selection Policy Path Config. Table 5.8 shows the values of these fields and their meaning.

Table 5.8. Interpreting Path States—FCoE

Runtime Name        Group State    PSP Path Config              Meaning
vmhba34:C0:T1:L1    Standby        non-current path; rank: 0    Passive SP—no I/O
vmhba34:C0:T0:L1    Active         current path; rank: 0        Active SP—I/O
vmhba33:C0:T1:L1    Standby        non-current path; rank: 0    Passive SP—no I/O
vmhba33:C0:T0:L1    Active         non-current path; rank: 0    Active SP—no I/O

Combining the last two tables, we can extrapolate the following: I/O to LUN 1 is currently sent on vmhba34:C0:T0:L1, which goes to SP port SPB1, so SPB is the SP that currently owns this LUN. The other path through SPB (vmhba33:C0:T0:L1, via SPB0) is Active but not currently used, and the paths through the SPA ports are in Standby state because SPA is the passive SP for this LUN.

Claim Rules

Each storage device is managed by one of the PSA plug-ins at any given time. In other words, a device cannot be managed by more than one PSA plug-in.

For example, on a host that has a third-party MPP installed alongside NMP, devices managed by the third-party MPP cannot be managed by NMP unless the configuration is changed to assign those devices to NMP. The process of associating certain devices with certain PSA plug-ins is referred to as claiming and is defined by claim rules. These rules define the correlation between a device and NMP or an MPP. NMP has an additional association between the claimed device and a specific SATP and PSP.

This section shows you how to list the various claim rules. The next section discusses how to change these rules.

Claim rules can be defined based on one or a combination of the following:

  • Vendor string
  • Model string
  • Transport type
  • Driver

MP Claim Rules

The first set of claim rules defines which MPP claims which devices. Figure 5.25 shows the default MP claim rules.

Figure 5.25

Figure 5.25. Listing MP Claim Rules

The command to list these rules is

esxcli storage core claimrule list

The namespace here is the core storage namespace because the MPP definition is done at the PSA level. The output shows that this rule class is MP, which indicates that these rules define the devices’ association with a specific multipathing plug-in.

There are two plug-ins specified here: NMP and MASK_PATH. I have already discussed NMP in the previous sections. The MASK_PATH plug-in is used for masking paths to specific devices and is a replacement for the deprecated Legacy Multipathing LUN masking vmkernel parameter. I provide some examples in the “How to Mask Paths to a Certain LUN” section later in this chapter.

Table 5.9 lists each column name in the output along with an explanation of each column.

Table 5.9. Explanation of Claim Rules Fields

Rule Class: The plug-in class for which this claim rule set is defined. This can be MP, Filter, or VAAI.

Rule: The rule number. This defines the order in which the rules are loaded. Similar to firewall rules, the first match is used and supersedes rules with larger numbers.

Class: The value can be runtime or file. A value of file means that the rule definitions are stored in the configuration files (more on this later in this section). A value of runtime means that the rule was read from the configuration files and loaded into memory; in other words, the rule is active. If a rule is listed with a file entry only and no runtime entry, the rule was just created but has not been loaded yet. Find out more about loading rules in the next section.

Type: The type can be vendor, model, transport, or driver. See the explanation in the “Claim Rules” section.

Plugin: The name of the plug-in for which this rule was defined.

Matches: This is the most important field in the rule definition. This column shows the “Type” specified for the rule and its value. When the specified type is vendor, an additional parameter, model, must be used. The model string must be an exact string match or include an * as a wildcard. You may use a ^ as “begins with” and then the string followed by an *—for example, ^OPEN-*.

The highest rule number in any claim rule set is 65535. It is assigned here to a catch-all rule that claims devices from “any” vendor with “any” model string. It is placed as the last rule in the set to allow lower-numbered rules to claim their specified devices first. If the attached devices have no specific rules defined, they get claimed by NMP.

Figure 5.26 is an example of third-party MP plug-in claim rules.

Figure 5.26

Figure 5.26. Listing EMC PowerPath/VE claim rules.

Here you see that rules 250 through 320 were added by PowerPath/VE, which allows the PowerPath plug-in to claim all the devices listed in Table 5.10.

Table 5.10. Arrays Claimed by PowerPath

Storage Array                           Vendor     Model
EMC CLARiiON Family                     DGC        Any (* is a wild card)
EMC Symmetrix Family                    EMC        SYMMETRIX
EMC Invista                             EMC        Invista
HITACHI                                 HITACHI    Any
HP                                      HP         Any
HP EVA HSV111 family (Compaq Branded)   HP         HSV111 (C) COMPAQ
EMC Celerra                             EMC        Celerra
IBM DS8000 family                       IBM        2107900

Plug-in Registration

New to vSphere 5 is the concept of plug-in registration. (Actually, this existed in 4.x but was not exposed to the end user.) When a PSA plug-in is installed, it gets registered with the PSA framework along with its dependencies, if any, similar to the output in Figure 5.27.
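That registration information can be listed from the CLI. This is a minimal sketch; the exact columns and entries depend on the plug-ins installed on your host.

esxcli storage core plugin registration list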

Figure 5.27

Figure 5.27. Listing PSA plug-in registration

This output shows the following:

SATP Claim Rules

Now that you understand how NMP plugs into PSA, it’s time to examine how SATP plugs into NMP.

Each SATP is associated with a default PSP. The defaults can be overridden using SATP claim rules. Before I show you how to list these rules, first review the default settings.

The command used to list the default PSP assignment to each SATP is

esxcli storage nmp satp list

The output of this command is shown in Figure 5.28.

Figure 5.28

Figure 5.28. Listing SATPs and their default PSPs

The namespace here is storage, then nmp, and finally satp.

Knowing which PSP is the default policy for which SATP is half the story. NMP needs to know which SATP it will use with which storage device. This is done via SATP claim rules that associate a given SATP with a storage device based on matches to Vendor, Model, Driver, and/or Transport.

To list the SATP claim rules, run the following:

esxcli storage nmp satp rule list

The output of the command is too long and too wide to capture in one screenshot, so I have divided it into a set of images, each showing a partial output, with the text of the full output listed in a subsequent table. Figures 5.29, 5.30, 5.31, and 5.32 show the four quadrants of the output.

Figure 5.29

Figure 5.29. Listing SATP claim rules—top-left quadrant of output.

Figure 5.30

Figure 5.30. Listing SATP claim rules—top-right quadrant of output.

Figure 5.31

Figure 5.31. Listing SATP claim rules—bottom-left quadrant of output

Figure 5.32

Figure 5.32. Listing SATP claim rules—bottom-right quadrant of output

To make things a bit clearer, let’s take a couple of lines from the output and explain what they mean.

Figure 5.33 shows the relevant rules for CLARiiON arrays both non-ALUA and ALUA capable. I removed three blank columns (Driver, Transport, and Options) to fit the content on the lines.

Figure 5.33

Figure 5.33. CLARiiON Non-ALUA and ALUA Rules

The two lines show the claim rules for the EMC CLARiiON CX family. Using these rules, NMP identifies the array as a CLARiiON CX when the Vendor string is DGC. If NMP stopped there, it would use VMW_SATP_CX as the SATP for this array. However, this family of arrays can support more than one configuration, and that is where the value in the Claim Options column comes in handy! If that option is tpgs_off, NMP uses the VMW_SATP_CX plug-in, and if the option is tpgs_on, NMP uses VMW_SATP_ALUA_CX. I explain what these options mean in Chapter 6.

Figure 5.34 shows another example that utilizes additional options. I removed the Device column to fit the content to the display.

Figure 5.34

Figure 5.34. Claim rule that uses Claim Options

In this example, NMP uses the VMW_SATP_DEFAULT_AA SATP with all arrays returning HITACHI as the model string. However, the default PSP is selected based on the values listed in the Claim Options column:

Modifying PSA Plug-in Configurations Using the UI

You can modify PSA plug-ins’ configuration using the CLI and, to a limited extent, the UI. Because the UI provides far fewer options for modification, let me address that first to get it out of the way!

Which PSA Configurations Can Be Modified Using the UI?

You can change the PSP for a given device. However, this is done at the individual LUN level rather than at the array level.

Are you wondering why you would want to do that?

Think of the following scenario:

You have Microsoft Clustering Service (MSCS) cluster nodes running in virtual machines (VMs) in your environment. The cluster’s shared storage is physical mode Raw Device Mappings (RDMs), which are also referred to as Passthrough RDMs. Your storage vendor recommends using the Round Robin Path Selection Policy (VMW_PSP_RR). However, VMware does not support using that policy with the shared RDMs of MSCS clusters.

The best approach is to follow your storage vendor’s recommendations for most of the LUNs, but follow the procedure listed here to change just the RDM LUNs’ PSP to their default PSPs.

Procedure to Change PSP via UI

  1. Use the vSphere client to navigate to the MSCS node VM and right-click the VM in the inventory pane. Select Edit Settings (see Figure 5.35).
    Figure 5.35

    Figure 5.35. Editing VM’s settings via the UI

    The resulting dialog is shown in Figure 5.36.

    Figure 5.36

    Figure 5.36. Virtual Machine Properties dialog

  2. Locate the RDM listed in the Hardware tab. You can identify this by the summary column showing Mapped Raw LUN. On the top right-hand side you can locate the Logical Device Name, which is prefixed with vml in the field labeled Physical LUN and Datastore Mapping File.
  3. Double-click the text in that field. Right-click the selected text and click Copy (see Figure 5.37).
    Figure 5.37

    Figure 5.37. Copying RDM’s VML ID (Logical Device Name) via the UI

  4. You can use the copied text to follow Steps 4 and 5 of the same task done via the CLI in the next section. For this section, however, click the Manage Paths button in the dialog shown in Figure 5.37.

    The resulting Manage Paths dialog is shown in Figure 5.38.

    Figure 5.38

    Figure 5.38. Modifying PSP selection via the UI

  5. Click the pull-down menu next to the Path Selection field and change it from Round Robin (VMware) to the default PSP for your array. Click the Change button. To locate which PSP is the default, check VMware HCL. If the PSP listed there is Round Robin, follow the examples listed in the previous section, “SATP Claim Rules,” to identify which PSP to select.
  6. Click Close.

Modifying PSA Plug-ins Using the CLI

The CLI provides a range of options to configure, customize, and modify PSA plug-in settings. I provide the various configurable options and their use cases as we go.

Available CLI Tools and Their Options

New to vSphere 5.0 is the expanded use of esxcli as the main CLI utility for managing ESXi 5.0. The same binary is used whether you log on to the host locally or remotely via SSH, and it is also used by vMA and vCLI. This simplifies administrative tasks and improves the portability of scripts written to use esxcli.
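As a hedged illustration of that portability, the same command body works locally on the host and remotely from vCLI or vMA once connection options are added. The server name and credentials below are placeholders, and the exact connection flags may vary with your vCLI setup (when connecting through vCenter, vCLI typically also accepts a --vihost option to select the managed host).

esxcli storage nmp psp list
esxcli --server esxi01.example.com --username root storage nmp psp list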

ESXCLI Namespace

Figure 5.39 shows the command-line help for esxcli.

Figure 5.39

Figure 5.39. Listing esxcli namespace

The relevant namespace for this chapter is storage. This is what most of the examples use. Figure 5.40 shows the command-line help for the storage namespace:

esxcli storage

Figure 5.40

Figure 5.40. Listing esxcli storage namespace

Table 5.11 lists ESXCLI namespaces and their usage.

Table 5.11. Available Namespaces in the storage Namespace

core: Use this for anything on the PSA level like other MPPs, PSA claim rules, and so on.

nmp: Use this for NMP and its “children,” such as SATP and PSP.

vmfs: Use this for handling VMFS volumes on snapshot LUNs, managing extents, and upgrading VMFS manually.

filesystem: Use this for listing, mounting, and unmounting supported datastores.

nfs: Use this to mount, unmount, and list NFS datastores.
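To illustrate the namespaces in Table 5.11, here is one representative list command per namespace. This is only a sketch; the sub-commands available depend on your ESXi 5.x build.

esxcli storage core device list
esxcli storage nmp device list
esxcli storage vmfs snapshot list
esxcli storage filesystem list
esxcli storage nfs list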

Adding a PSA Claim Rule

PSA claim rules can be for MP, Filter, and VAAI classes. I cover the latter two in Chapter 6.

Following are a few examples of claim rules for the MP class.

Adding a Rule to Change Certain LUNs to Be Claimed by a Different MPP

In general, most arrays function properly using the default PSA claim rules. In certain configurations, you might need to specify a different PSA MPP.

A good example is the following scenario:

You installed PowerPath/VE on your ESXi 5.0 host but then later realized that you have some MSCS cluster nodes running on that host and these nodes use Passthrough RDMs (Physical compatibility mode RDM). Because VMware does not support third-party MPPs with MSCS, you must exclude the LUNs from being managed by PowerPath/VE.

You need to identify the device ID (NAA ID) of each of the RDM LUNs and then identify the paths to each LUN. You use these paths to create the claim rule.

Here is the full procedure:

  1. Power off one of the MSCS cluster nodes and locate its home directory. If you cannot power off the VM, skip to Step 6.

    Assuming that the cluster node is located on Clusters_Datastore in a directory named node1, the command and its output would look like Listing 5.1.

    Listing 5.1. Locating the RDM Filename

    #cd /vmfs/volumes/Clusters_datastore/node1

    #fgrep scsi1 *.vmx |grep fileName

    scsi1:0.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk"
    scsi1:1.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/data.vmdk"

    The last two lines are the output of the command. They show the RDM filenames for the node’s shared storage, which are attached to the virtual SCSI adapter named scsi1.

  2. Using the RDM filenames, including the path to the datastore, you can identify the logical device name to which each RDM maps as shown in Listing 5.2.

    Listing 5.2. Identifying RDM’s Logical Device Name Using the RDM Filename

    #vmkfstools --queryrdm /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk

    Disk /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk is a Passthrough Raw Device Mapping
    Maps to: vml.02000100006006016055711d00cff95e65664ee011524149442035

    You may also use the shorthand version using -q instead of --queryrdm.

    This example is for the quorum.vmdk. Repeat the same process for the remaining RDMs. The device name is prefixed with vml.

  3. Identify the NAA ID using the vml ID as shown in Listing 5.3.

    Listing 5.3. Identifying NAA ID Using the Device vml ID

    #esxcfg-scsidevs --list --device vml.02000100006006016055711d00cff95e65664ee011524149442035 |grep Display

    Display Name: DGC Fibre Channel Disk (naa.6006016055711d00cff95e65664ee011)

    You may also use the shorthand version:

    #esxcfg-scsidevs -l -d vml.02000100006006016055711d00cff95e65664ee011524149442035 |grep Display
  4. Now, use the NAA ID (shown in Listing 5.3) to identify the paths to the RDM LUN.

    Figure 5.41 shows the output of the command:

    esxcfg-mpath -m |grep naa.6006016055711d00cff95e65664ee011 | sed 's/fc.*//'
    Figure 5.41

    Figure 5.41. Listing runtime pathnames to an RDM LUN

    You may also use the verbose version of the command:

    esxcfg-mpath --list-map |grep naa.6006016055711d00cff95e65664ee011 | sed 's/fc.*//'

    This truncates each line of the output from the first occurrence of "fc" to the end of the line. If the protocol in use is not FC, replace that string with "iqn" for iSCSI or "fcoe" for FCoE.

    The output shows that the LUN with the identified NAA ID is LUN 1 and has four paths shown in Listing 5.4.

    Listing 5.4. RDM LUN’s Paths

    vmhba3:C0:T1:L1 
    vmhba3:C0:T0:L1 
    vmhba2:C0:T1:L1 
    vmhba2:C0:T0:L1

    If you cannot power off the VMs to run Steps 1–5, you may use the UI instead.

  5. Use the vSphere client to navigate to the MSCS node VM. Right-click the VM in the inventory pane and then select Edit Settings (see Figure 5.42).
    Figure 5.42

    Figure 5.42. Editing VM’s settings via the UI

  6. In the resulting dialog (see Figure 5.43), locate the RDM listed in the Hardware tab. You can identify this by the summary column showing Mapped Raw LUN. On the top right-hand side you can locate the Logical Device Name, which is prefixed with vml in the field labeled Physical LUN and Datastore Mapping File.
    Figure 5.43

    Figure 5.43. Virtual machine properties dialog

  7. Double-click the text in that field. Right-click the selected text and click Copy as shown in Figure 5.44.
    Figure 5.44

    Figure 5.44. Copying RDM’s VML ID (Logical Device Name) via the UI

  8. You may use the copied text to follow Steps 4 and 5. Otherwise, get the list of paths to the LUN using the Manage Paths button in the dialog shown in Figure 5.44.
  9. In the Manage Paths dialog (see Figure 5.45), click the Runtime Name column to sort it. Write down the list of paths shown there.
    Figure 5.45

    Figure 5.45. Listing the runtime pathnames via the UI

  10. The list of paths shown in Figure 5.45 are
    vmhba1:C0:T0:L1
    vmhba1:C0:T1:L1
    vmhba2:C0:T0:L1
    vmhba2:C0:T1:L1
  11. Create the claim rule.

I use the list of paths obtained in Step 5 for creating the rule from the ESXi host from which it was obtained.

The Ground Rules for Creating the Rule

For example, say you have previously created rules numbered 102–110, and rule 109 cannot be listed prior to the new rules you are creating. If the new rules count is four, you need to assign them rule numbers 109–112. To do that, you need to move rules 109 and 110 to numbers 113 and 114. To avoid having to do this in the future, consider leaving gaps in the rule numbers among sections.

An example of moving a rule is

esxcli storage core claimrule move --rule 109 --new-rule 113
esxcli storage core claimrule move --rule 110 --new-rule 114

or, using the shorthand options:

esxcli storage core claimrule move -r 109 -n 113
esxcli storage core claimrule move -r 110 -n 114

Now, let’s proceed with adding the new claim rules:

  1. The set of four commands shown in Figure 5.46 creates rules numbered 102–105 (a command sketch also appears after this procedure). The rule criteria are
    • The claim rule type is “location” (-t location).
    • The location is specified using each path to the same LUN in the format:
      • -A or --adapter vmhba(X), where X is the vmhba number associated with the path.
      • -C or --channel (Y), where Y is the channel number associated with the path.
      • -T or --target (Z), where Z is the target number associated with the path.
      • -L or --lun (n), where n is the LUN number.
    • The plug-in name is NMP, which means that this claim rule is for NMP to claim the paths listed in each rule created.
  2. Repeat Step 1 for each LUN you want to reconfigure.
  3. Verify that the rules were added successfully. To list the current set of claim rules, run the command shown in Figure 5.47:
    esxcli storage core claimrule list
    Figure 5.47

    Figure 5.47. Listing added claim rules

    Notice that the four new rules are now listed, but the Class column shows them as file. This means that the configuration files were updated successfully but the rules were not loaded into memory yet.

    Figure 5.48 shows a sample command line that implements a wildcard for the target. Notice that this results in creating two rules instead of four and the “target” match is *.

    Figure 5.48

    Figure 5.48. Adding MP claim rules using a wildcard

  4. Before loading the new rules, you must first unclaim the paths to the LUN specified in that rule set. You use the NAA ID as the device ID:
    esxcli storage core claiming unclaim --type device --device naa.6006016055711d00cff95e65664ee011

    You may also use the shorthand version:

    esxcli storage core claiming unclaim -t device -d naa.6006016055711d00cff95e65664ee011
  5. Load the new claim rules so that the paths to the LUN get claimed by NMP:
    esxcli storage core claimrule load
  6. Use the following command to list the claim rules to verify that they were successfully loaded:
esxcli storage core claimrule list

Now you see that each of the new rules is listed twice—once with file class and once with runtime class—as shown in Figure 5.49.

Figure 5.49

Figure 5.49. Listing MP claim rules
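The commands behind Figure 5.46 are not reproduced here, but the four location-based rules described in Step 1 might look like the following sketch. The rule numbers, adapters, targets, and LUN number are the ones used in this example’s FC paths (see Listing 5.4); substitute the values for your own environment.

esxcli storage core claimrule add --rule 102 --type location --adapter vmhba2 --channel 0 --target 0 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 103 --type location --adapter vmhba2 --channel 0 --target 1 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 104 --type location --adapter vmhba3 --channel 0 --target 0 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 105 --type location --adapter vmhba3 --channel 0 --target 1 --lun 1 --plugin NMP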

How to Delete a Claim Rule

Deleting a claim rule must be done with extreme caution. Make sure that you are deleting the rule you intend to delete. Prior to doing so, make sure to collect a “vm-support” dump by running vm-support from a command line at the host or via SSH. Alternatively, you can select the menu option Collect Diagnostics Data via the vSphere client.

To delete a claim rule, follow this procedure via the CLI (locally, via SSH, vCLI, or vMA):

  1. List the current claim rules set and identify the claim rule or rules you want to delete. The command to list the claim rules is similar to what you ran in Step 6 and is shown in Figure 5.49.
  2. For this procedure, I am going to use the previous example and delete the four claim rules I added earlier which are rules 102–105. The command for doing that is in Figure 5.50.
    Figure 5.50

    Figure 5.50. Removing claim rules via the CLI

    You may also run the verbose command:

    esxcli storage core claimrule remove --rule <rule-number>
  3. Running the claimrule list command now results in an output similar to Figure 5.51. Observe that even though I just deleted the claim rules, they still show up on the list. The reason is that I have not yet loaded the modified claim rules; that is why the deleted rules still show runtime in their Class column.
    Figure 5.51

    Figure 5.51. Listing MP claim rules

  4. Because I know from the previous procedure the device ID (NAA ID) of the LUN whose claim rules I deleted, I ran the unclaim command using the -t or --type option with the value device and then specified the -d or --device option with the NAA ID. I then loaded the claim rules using the load option. Notice that the deleted claim rules are no longer listed (see Figure 5.52).
Figure 5.52

Figure 5.52. Unclaiming a device using its NAA ID and then loading the claim rules

You may also use the verbose command options:

esxcli storage core claiming unclaim --type device --device <Device-ID>

You may need to claim the device after loading the claim rule by repeating the claiming command using the “claim” instead of the “unclaim” option:

esxcli storage core claiming claim -t device -d <device-ID>

How to Mask Paths to a Certain LUN

Masking a LUN is a similar process to that of adding claim rules to claim certain paths to a LUN. The main difference is that the plug-in name is MASK_PATH instead of NMP as used in the previous example. The end result is that the masked LUNs are no longer visible to the host.

  1. Assume that you want to mask LUN 1 used in the previous example and that it still has the same NAA ID. First, run a command to list the LUN as seen by the ESXi host, to show the before state (see Figure 5.53).
    Figure 5.53

    Figure 5.53. Listing LUN properties using its NAA ID via the CLI

    You may also use the verbose command option --device instead of -d.

  2. Add the MASK_PATH claim rule, as shown in Figure 5.54.
    Figure 5.54

    Figure 5.54. Adding Mask Path claim rules

    As you see in Figure 5.54, I added rule numbers 110 and 111 to have the MASK_PATH plug-in claim all targets to LUN 1 via vmhba2 and vmhba3. The claim rules are not yet loaded, hence the file class listings and the absence of runtime class listings.

  3. Load and then list the claim rules (see Figure 5.55).
    Figure 5.55

    Figure 5.55. Loading and listing claim rules after adding Mask Path rules

    Now you see the claim rules listed with both file and runtime classes.

  4. Use the reclaim option to unclaim and then claim the LUN using its NAA ID. Check if it is still visible (see Figure 5.56).
Figure 5.56

Figure 5.56. Reclaiming the paths after loading the Mask Path rules

You may also use the verbose command option --device instead of -d.

Notice that after reclaiming the LUN, it is now an Unknown device.
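Taken together, the masking flow shown in Figures 5.54 through 5.56 might look like the following sketch. The rule numbers and the NAA ID are the ones used in this example; omitting the --target option leaves the target as a wildcard.

esxcli storage core claimrule add --rule 110 --type location --adapter vmhba2 --channel 0 --lun 1 --plugin MASK_PATH
esxcli storage core claimrule add --rule 111 --type location --adapter vmhba3 --channel 0 --lun 1 --plugin MASK_PATH
esxcli storage core claimrule load
esxcli storage core claimrule list
esxcli storage core claiming reclaim -d naa.6006016055711d00cff95e65664ee011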

How to Unmask a LUN

To unmask this LUN, reverse the preceding steps and then reclaim the LUN as follows:

  1. Remove the MASK_PATH claim rules (numbers 110 and 111) as shown in Figure 5.57.
    Figure 5.57

    Figure 5.57. Removing the Mask Path claim rules

    You may also use the verbose command options:

    esxcli storage core claimrule remove --rule <rule-number>
  2. Unclaim the paths to the LUN in the same fashion you used while adding the MASK_PATH claim rules—that is, using the -t location option and omitting the -T option so that the target is a wildcard.
  3. Rescan using both HBA names.
  4. Verify that the LUN is now visible by running the list command.

Figure 5.58 shows the outputs of Steps 2–4.

Figure 5.58

Figure 5.58. Unclaiming the Masked Paths

You may also use the verbose command options:

esxcli storage core claiming unclaim --type location --adapter vmhba2 --channel 0 --lun 1 --plugin MASK_PATH
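Steps 3 and 4 can also be performed from the CLI. This sketch assumes the two adapters and the NAA ID used in this example; adjust them for your environment.

esxcli storage core adapter rescan --adapter vmhba2
esxcli storage core adapter rescan --adapter vmhba3
esxcli storage core device list -d naa.6006016055711d00cff95e65664ee011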

Changing PSP Assignment via the CLI

The CLI enables you to modify the PSP assignment per device. It also enables you to change the default PSP for a specific storage array or family of arrays. I cover the former use case first because it is similar to what you did via the UI in the previous section. I follow with the latter use case.

Changing PSP Assignment for a Device

To change the PSP assignment for a given device, you may follow this procedure:

  1. Log on to the ESXi 5 host locally or via SSH as root or using vMA 5.0 as vi-admin.
  2. Identify the device ID for each LUN you want to reconfigure:
    esxcfg-mpath -b |grep -B1 "fc Adapter"| grep -v -e "--" |sed 's/Adapter.*//'

    You may also use the verbose version of this command:

    esxcfg-mpath --list-paths |grep -B1 "fc Adapter"| grep -v -e "--" |sed 's/Adapter.*//'

    Listing 5.5 shows the output of this command.

    Listing 5.5. Listing Device ID and Its Paths

    naa.60060e8005275100000027510000011a : HITACHI Fibre Channel Disk (naa.60060e8005275100000027510000011a)
         vmhba2:C0:T0:L1 LUN:1 state:active fc
         vmhba2:C0:T1:L1 LUN:1 state:active fc
         vmhba3:C0:T0:L1 LUN:1 state:active fc
         vmhba3:C0:T1:L1 LUN:1 state:active fc

    From there, you can identify the device ID (in this case, it is the NAA ID). Note that this output was collected from a Hitachi Universal Storage Platform V (USP V), USP VM, or Virtual Storage Platform (VSP) class array.

    This output means that LUN 1 has device ID naa.60060e8005275100000027510000011a.

  3. Using the device ID you identified, run this command:
esxcli storage nmp device set -d <device-id> --psp=<psp-name>

You may also use the verbose version of this command:

esxcli storage nmp device set --device <device-id> --psp=<psp-name>

For example:

esxcli storage nmp device set -d naa.60060e8005275100000027510000011a --psp=VMW_PSP_FIXED

This command sets the device with ID naa.60060e8005275100000027510000011a to be claimed by the PSP named VMW_PSP_FIXED.
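To confirm that the change took effect, you can list the device’s NMP configuration; the Path Selection Policy field of the output should now show the PSP you set. A quick check using the same device ID:

esxcli storage nmp device list -d naa.60060e8005275100000027510000011a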

Changing the Default PSP for a Storage Array

There is no simple way to change the default PSP for a specific storage array unless that array is claimed by an SATP that is specific for it. In other words, if it is claimed by an SATP that also claims other brands of storage arrays, changing the default PSP affects all storage arrays claimed by the SATP. However, you may add an SATP claim rule that uses a specific PSP based on your storage array’s Vendor and Model strings:

  1. Identify the array’s Vendor and Model strings. You can identify these strings by running
    esxcli storage core device list -d <device ID> |grep 'Vendor\|Model'

    Listing 5.6 shows an example for a device on an HP P6400 Storage Array.

    Listing 5.6. Listing Device’s Vendor and Model Strings

    esxcli storage core device list -d naa.600508b4000f02cb0001000001660000 |grep 'Model\|Vendor'
       Vendor: HP
       Model: HSV340
    • In this example, the Vendor String is HP and the Model is HSV340.
  2. Use the identified values in the following command:
esxcli storage nmp satp rule add --satp <current-SATP-USED> --vendor <Vendor string> --model <Model string> --psp <PSP-name> --description <Description>

In this example, the command would be like this:

esxcli storage nmp satp rule add --satp VMW_SATP_EVA --vendor HP --model HSV340 --psp VMW_PSP_FIXED --description "Manually added to use FIXED"

The command runs silently when it succeeds and returns an error if it fails.

Example of an error:

"Error adding SATP user rule: Duplicate user rule found for SATP VMW_
SATP_EVA  matching  vendor HP model HSV340 claim Options PSP VMW_PSP_
FIXED and PSP Options"

This error means that a rule already exists with these options. I simulated this error by first adding the rule and then rerunning the same command. To view the existing SATP claim rules for all HP storage arrays, you may run the following command:

esxcli storage nmp satp rule list |grep 'Name\|---\|HP' |less -S

Figure 5.59 shows the output of this command (I cropped some blank columns, including Device, for readability):

Figure 5.59

Figure 5.59. Listing SATP rule list for HP devices

You can easily identify non-system rules where the Rule Group column value is user. Such rules were added by a third-party MPIO installer or manually added by an ESXi 5 administrator. The rule in this example shows that I had already added VMW_PSP_FIXED as the default PSP for VMW_SATP_EVA when the matching vendor is HP and Model is HSV340.

I don’t mean to state by this example that HP EVA arrays with HSV340 firmware should be claimed by this specific PSP. I am only using it for demonstration purposes. You must verify which PSP is supported by and certified for your specific storage array from the array vendor.

As a matter of fact, this HP EVA model happens to be an ALUA array, and the SATP must be VMW_SATP_ALUA (see Chapter 6). How did I know that? Let me explain! I listed the SATP claim rules whose Claim Options value is tpgs_on by running the following command:

esxcli storage nmp satp rule list |grep 'Name\|---\|tpgs_on' |less -S

Listing 5.7 shows the output of that command:

Listing 5.7. Listing SATP Claim Rules List

Name                 Device  Vendor   Model     Rule Group Claim Options
-------------------  ------  -------  --------  ----------  ------------
VMW_SATP_ALUA                NETAPP             system      tpgs_on
VMW_SATP_ALUA                IBM      2810XIV   system      tpgs_on
VMW_SATP_ALUA                                   system      tpgs_on
VMW_SATP_ALUA_CX             DGC                system      tpgs_on

What does this mean anyway?

The third rule in Listing 5.7 is a catch-all rule: any array that reports tpgs_on (that is, any ALUA-capable array) and has no more specific rule is claimed by VMW_SATP_ALUA. It means that the claim rule I added for the HSV340 is wrong, because it forces the array to be claimed by an SATP that does not handle ALUA. I must remove the rule that I added and then create another rule that does not violate the default SATP assignment:

  1. To remove the SATP claim rule, use the same command you used to add it, substituting the add option with remove:
    esxcli storage nmp satp rule remove --satp VMW_SATP_EVA --vendor HP --model HSV340 --psp VMW_PSP_FIXED
  2. Add a new claim rule to have VMW_SATP_ALUA claim the HP EVA HSV340 when it reports Claim Options value as tpgs_on:
    esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --vendor HP --model HSV340 --psp VMW_PSP_FIXED --claim-option tpgs_on --description "Re-added manually for HP HSV340"
  3. Verify that the rule was created correctly. Run the same command used in Step 2 in the last procedure:
esxcli storage nmp satp rule list |grep 'Name\|---\|tpgs_on' |less -S

Figure 5.60 shows the output.

Figure 5.60

Figure 5.60. SATP rule list after adding rule

Notice that the claim rule has been added in a position prior to the catch-all rule described earlier. This means that this HP EVA HSV340 model will be claimed by VMW_SATP_ALUA when the Claim Options value is tpgs_on.

Summary

This chapter covered PSA (VMware Pluggable Storage Architecture) components. I showed you how to list PSA plug-ins and how they interact with vSphere ESXi 5. I also showed you how to list, modify, and customize PSA claim rules and how to work around some common issues.

It also covered how ALUA-capable devices interact with SATP claim rules for the purpose of using a specific PSP.
