
Clustering V7000 Unified Storage Controllers

We have been getting this question about clustering the storage controllers on the V7000 Unified (V7kU) more and more as people start expanding their systems beyond their initial controllers. But let’s step back and understand what we are working with first.

  1. V7kU is a mixed-protocol storage platform. It uses Spectrum Scale as the file system and Storwize as the operating system. This matters to people who want to adopt a high-speed, parallel file system with grace and ease: the V7kU comes preloaded, so there is no need to understand the knobs and switches of installing and configuring Spectrum Scale (formerly known as GPFS). The V7kU supports SMB (CIFS), NFS, FC, FCoE, and iSCSI, and it can be used with other building blocks like OpenStack to support object storage too.
  2. V7kU can scale up to 20 disk enclosures per controller. The platform can cluster up to four controllers, giving customers the ability to max out at around 7.5 PB of storage. The best part is that you can mix and match drive types and sizes: you can have flash drives in the same enclosure as SAS and NL-SAS drives.
  3. A single interface is the best part of this solution. You can provision both block and file access from the same GUI/CLI, and data protection such as snapshots and flash copies, replication, and remote copies is handled there as well.
  4. Policy-based data management. One of my favorite parts of the solution is that I can create policies to manage the data on the box. For example, I can create a policy that says if my flash pool becomes 75% full, start moving the oldest data to the NL-SAS pool (a sketch of what such a rule can look like follows this list). Not only does this save me from managing the data movement myself, it frees up the flash pool and extends the buying power of the flash. Since flash is the most expensive part of the storage, I want the best bang for the buck there.
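
On the file side, that sort of tiering is expressed as a Spectrum Scale (GPFS) ILM policy rule. Below is a minimal sketch of such a rule; the pool names 'flashpool' and 'nlsaspool' are placeholders for whatever your tiers are actually called, and the thresholds simply mirror the example above:

    /* Hypothetical Spectrum Scale (GPFS) ILM rule; pool names are placeholders. */
    /* When 'flashpool' reaches 75% occupancy, migrate the least recently       */
    /* accessed files to 'nlsaspool' until occupancy drops back to 60%.         */
    RULE 'MigrateColdOffFlash'
      MIGRATE FROM POOL 'flashpool'
        THRESHOLD(75,60)
        WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
      TO POOL 'nlsaspool'

The WEIGHT clause orders the candidate files so the ones untouched the longest move first, which is what lets the flash pool keep serving the hottest data.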

Now for the question at hand: can we cluster these V7000s to make a bigger pool? Yes, we can. Not only can we cluster the systems (multiple I/O groups), we can mix file and block independently. The best part is that as you add I/O groups you add more performance and capacity, all while managing it from the same single interface.
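
Before adding another controller, it is worth a quick look at how the clustered system is laid out today. A minimal check from the CLI might be (these are standard Storwize commands; the names and counts on your system will differ):

    lsiogrp
    lsnode

lsiogrp shows each I/O group and how many nodes it currently holds (a clustered system tops out at four I/O groups), and lsnode shows the nodes that are already members.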

This was taken from the V7000 Infocenter:

Perform these steps to add a node to a clustered system:

Procedure

  1. Issue this CLI command to list the node candidates:
    lsnodecandidate

    This output is an example of what you might see after you issue the lsnodecandidate command:

     id               panel_name        UPS_serial_number     UPS_unique_id     hardware
     50050768010037DA 104615            10004BC047            20400001124C0107  8G4
     5005076801000149 106075            10004BC031            20400001124C00C1  8G4
  2. Issue this CLI command to add the node:

    addnode -panelname panel_name -name new_name_arg -iogrp iogroup_name

    where panel_name is the name that was noted in step 1 (in this example, the panel name is 104615). The number is printed on the front panel of the node that you are adding into the system. The new_name_arg is an optional name for the new node; iogroup_name is the I/O group that the node is joining (in a service situation, the I/O group that was noted when the previous node was deleted from the system).

    Note: In a service situation, add a node back into a clustered system using the original node name. As long as the partner node in the I/O group has not been deleted too, the default name is used if -name is not specified.

    This example shows the command that you might issue:

    addnode -panelname 104615 -name newnode -iogrp io_grp1

    This output is an example of what you might see:

    Node, id [newnode], successfully added

    Attention: If more than one candidate node exists, ensure that the node that you add into an I/O group is the same node that was deleted from that I/O group. Failure to do so might result in data corruption. If you are uncertain about which candidate node belongs to the I/O group, shut down all host systems that access this clustered system before you proceed. Reboot each system when you have added all the nodes back into the clustered system.
  3. Issue this CLI command to ensure that the node was added successfully:
    lsnode

    This output is an example of what you might see when you issue the lsnode command:

    id name   UPS_serial_number WWNN             status  IO_group_id IO_group_name config_node UPS_unique_id    hardware
    1  node1  1000877059        5005076801000EAA online  0           io_grp0       yes         20400002071C0149 8F2
    2  node2  1000871053        500507680100275D online  0           io_grp0       no          2040000207040143 8F2

Results

All nodes are now online.
