How to Save Money by Buying Dumber Flash

October 19, 2016 Leave a comment

Here's a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: 1) it increases your $/TB, and 2) it locks you into the vendor's platform. Let's dive deeper.

1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will see a wide spectrum not just in the media (eMLC, MLC, cMLC) but also in the features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. As the customer, you will be comparing which AFA has this feature or is missing that one, and it can become an Excel pivot table from hell to manage. The vendor will raise the price per TB on those solutions because more features mean more usable storage or better data protection. But the reality is that you are paying the bills for the developers coding the new shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform if you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or want you to upgrade, it is harder for you to look elsewhere. If you have an outage or something happens and your boss comes in and says "I want these <insert vendor name> out of here", are you going to say that the whole company runs on that platform and it's going to take 12-18 months to do that?

I bet you're thinking, "I need those functions because I have to protect my data," or "I get more storage because I use this function." But what you can do is take those functions away from the media and move them up into a virtual storage layer above it. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, the AFA can be swapped out easily, and you can always shop for the lowest-priced system based solely on performance.

Now you're thinking that the cost of licenses for this function and that feature in the virtualization layer just moves the numbers around, right? Wrong! With IBM Spectrum Virtualize you buy a license for a certain number of TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to increase the number of licenses. For example: you purchase 100TB of licenses and virtualize a 75TB Pure system. Your boss comes in and says he needs another 15TB for a new project coming online next week. You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.
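The arithmetic in that example is trivial, but it is worth seeing how the perpetual license behaves as boxes come and go. A minimal sketch (the function and system names are hypothetical; only the capacities come from the example above):

```python
# Sketch of the perpetual-license example: 100TB licensed, arrays swapped freely.

def remaining_license_tb(licensed_tb, virtualized_systems):
    """Return how many TB of license capacity are still unused."""
    return licensed_tb - sum(virtualized_systems.values())

systems = {"pure_afa": 75}            # the 75TB Pure system already virtualized
print(remaining_license_tb(100, systems))   # 25 -- TB of license still free

systems["dumb_afa"] = 15              # add the 15TB "dumb" AFA for the new project
print(remaining_license_tb(100, systems))   # 10 -- still no new licenses needed
```

The point of the sketch: only the total virtualized capacity counts against the license, not which vendor's box provides it, so arrays can be swapped underneath without touching the license count.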

The cool thing I see with this kind of virtualization layer is the simplicity: you don't have to know how to program APIs, or bring in a bunch of consultants for a long, drawn-out study that ends with them telling you to go to 'cloud'. In a way, this technology creates a private cloud of storage for your data center. But the point here is that not having to buy feature licenses every time you buy a box lowers your $/TB and gives you the true freedom to shop the vendors.


Storwize 7.7 Update

June 1, 2016 Leave a comment

From the press release…

IBM Spectrum Virtualize Software V7.7 delivers new software features for the Spectrum storage family

Spectrum Virtualize software delivers a powerful solution for storage virtualization, offering storage capabilities such as HA, real-time compression, EasyTier data tiering, encryption, distributed RAID, clustering for performance and capacity scalability, and enterprise local and remote copy services.

Spectrum Virtualize V7.7 adds the following new benefits:

  • Improved reliability, availability and serviceability:
    • NPIV (N_Port ID Virtualization) technology enables virtualization of the host port fabric of Spectrum Virtualize hardware which delivers continuous connectivity to hosts
    • Support for encryption protection of data in Distributed RAID (DRAID) volumes
    • GUI support for IP Quorum, for easier setup and management of IP Quorum in HA solutions using Stretched Cluster or HyperSwap
  • Increased external virtualization flexibility through iSCSI, expanding the capability to support a broader spectrum of storage infrastructures
  • Higher performance with extended cache for read operations, potentially increasing cache hits and reducing disk reads
  • IP link compression to improve usage of IP networks for remote-copy data transmission. This reduces the volume of data that must be transmitted during remote copy operations, with compression capabilities similar to those of existing Real-time Compression implementations.

Compression support is formally being added to the External Virtualization software package as a separate feature when used with Storwize V5030 and Spectrum Virtualize V7.6, and later. Support for compression is also included in the Full Feature Bundle under the External Virtualization PID with the Storwize V5030. Compression is not supported on first-generation Storwize V5000 machines.

The IBM SVC software licensing metric is changing to Differential Licensing to help align the cost of SVC licenses with customer use cases. Under Differential Licensing, licensing changes from per terabyte to per storage capacity unit (SCU).

SCU is defined in terms of the category of the storage capacity use case:

  • Category 1: Flash and SSDs
  • Category 2: SAS drives, FC drives, and systems using Category 3 drives with advanced architectures to deliver high-end storage performance
  • Category 3: Near Line SAS (NL-SAS) and SATA drives

Any storage capacity use case that is not listed above is classified as Category 1.
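The announcement does not spell out how many terabytes one SCU covers per category, but a rough sketch shows how a per-SCU count would be computed. The TB-per-SCU ratios below are purely illustrative assumptions, not IBM's published values:

```python
import math

# Hypothetical TB-per-SCU ratios by category -- illustrative only.
TB_PER_SCU = {1: 1.0,   # Category 1: flash and SSDs
              2: 2.0,   # Category 2: SAS/FC drives
              3: 4.0}   # Category 3: NL-SAS and SATA drives

def scus_needed(capacity_by_category):
    """Sum the SCUs for each (category, TB) pair, rounding each up."""
    return sum(math.ceil(tb / TB_PER_SCU[cat])
               for cat, tb in capacity_by_category.items())

# 10TB flash + 40TB SAS + 100TB NL-SAS
print(scus_needed({1: 10, 2: 40, 3: 100}))  # 10 + 20 + 25 = 55 SCUs
```

Because cheaper, slower media earns more TB per SCU, a metric like this rewards tiering: the same total capacity costs fewer SCUs when more of it sits on Category 2 or 3 drives.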

There are no changes to the licensing of SVC FlashCopy, SVC Remote Mirror, or SVC Encryption.

SCU licensing leverages current SW PIDs with new Part Numbers in Passport Advantage and Feature Codes in AAS.

Spectrum Virtualize encryption restrictions were reviewed. Optional Encryption software (PID 5641-B01 or 5725-Y35 for Passport Advantage) is not available in the following countries: Armenia, Belarus, Kazakhstan, Kyrgyzstan, and Russia.

Software requirements

  • SAN Volume Controller V7.7 (5641-VC7, 5641-CP7, 5725-M19, and 5725-H04) has the same software prerequisites as SAN Volume Controller V7.6.
  • Spectrum Virtualize Software for SAN Volume Controller Encryption Software V7.7 (5641-B01 and 5725-Y35) requires PID 5641-VC7.
  • Storwize V7000 V7.7 (5639-CB7, 5639-XB7, 5639-EB7, 5639-VM7, 5639-EV7, 5639-RM7, and 5639-CP7) has the same software prerequisites as Storwize V7000 V7.6.
  • Spectrum Virtualize Software V7.7 is not available for Flex System V7000. Flex System V7000 V7.2 (5639-NZ7, 5639-EX7, 5639-RE7, and 5639-CM7) has the same software prerequisites as Flex System V7000 V7.1. Storwize V5000 V7.7 (5639-CT7, 5639-XT7, and 5639-ET7) has the same software prerequisites as IBM Storwize V5000 V7.6.
  • IBM Spectrum Virtualize Software for Storwize V5010 V7.7 (5639-SV1) requires the base software feature. All other features are optional.
  • Spectrum Virtualize Software for Storwize V5020 V7.7 (5639-SV2) requires the base software feature. All other features are optional.
  • Spectrum Virtualize Software for Storwize V5030 V7.7 (5639-SV3) requires the base software feature. All other features are optional.
  • Spectrum Virtualize Software for Storwize V50x0 Expansion V7.7 (5639-SV4) requires the base software feature. There are no optional software features on the expansions.
  • For all current storage systems and host environments supported with the FlashSystem V840, go to the IBM FlashSystem V840 website.
  • For all current storage systems and host environments supported with the IBM FlashSystem V9000, go to the IBM FlashSystem V9000 website.

Hardware requirements

  • Spectrum Virtualize Software for SAN Volume Controller requires at least one pair of IBM System Storage SAN Volume Controller (2145-DH8, 2145-CF8, or 2145-CG8) storage engines for installation.
  • Spectrum Virtualize Software for SAN Volume Controller Encryption Software V7.7 (5641-B01 and 5725-Y35) requires the Encryption Enablement Feature ACE2.
  • Spectrum Virtualize Software for Storwize V7000 requires at least one Storwize V7000 Control Enclosure (2076-524, 2076-112, 2076-124, 2076-312, or 2076-324) for installation.
  • Spectrum Virtualize Software V7.7 is not available for the Flex System V7000. Storwize Family Software for Flex System V7000 V7.2 requires at least one Flex System V7000 (4939-A49, 4939-H49, or 4939-X49) Control Enclosure for installation.
  • Spectrum Virtualize Software for Storwize V5000 requires at least one Storwize V5000 control enclosure (2077-112, 2077-124, 2077-212, 2077-224, 2077-312, 2077-324, 2078-112, 2078-124, 2078-212, 2078-224, 2078-312, 2078-324) for installation.
  • Spectrum Virtualize Software for Storwize V5010 V7.7 (5639-SV1) requires at least one Storwize V5010 control enclosure (2077-112 or 124, 2078-112 or 124) for installation.
  • Spectrum Virtualize Software for Storwize V5020 V7.7 (5639-SV2) requires at least one Storwize V5020 control enclosure (2077-212 or 224, 2078-212 or 224) for installation.
  • Spectrum Virtualize Software for Storwize V5030 V7.7 (5639-SV3) requires at least one Storwize V5030 control enclosure (2077-312 or 324, 2078-312 or 324) for installation. The compression feature for V5030 requires the 64GB cache feature (#ACHD) on models 312 and 324 for machine types 2077 and 2078.
  • Spectrum Virtualize Software for Storwize V50x0 Expansion V7.7 (5639-SV4) requires at least one Storwize V50x0 expansion enclosure (2077-12F or 24F, 2078-12F or 24F) for installation.
  • FlashSystem V840 Software V7.7 (5639-FS7) requires a FlashSystem V840 Storage System (9846-AC1 and 9846-AE1, or 9848-AC1 and 9848-AE1) for installation.
  • FlashSystem V9000 Software V7.7 (5639-RB7) requires at least two FlashSystem V9000 Control Enclosures (9846-AC2 or 9848-AC2) and at least one FlashSystem V9000 Storage Enclosure (9846-AE2 or 9848-AE2).

Current support summaries, including specific software, hardware, and firmware levels supported, are maintained at their respective support websites.

Planned availability date: June 10, 2016.

Categories: Uncategorized

Building a Hybrid Cloud Using IBM Spectrum Scale

May 11, 2016 Leave a comment

Cloud is changing the storage business in more ways than just price per unit. It is fundamentally changing how we design our storage systems and how we deploy, protect, and recover them. For the most fortunate companies, those just starting out, cloud is an easy task: there are no legacy systems or tried-and-true methods, because everything has always been in the 'cloud'.

For most companies trying to cut their storage costs while keeping some control of their storage, cloud seems to be the answer. But getting there is not an easy task, as most have seen: data to transfer, code to rewrite, systems and processes to change, all just to report back to the CIO that they are using the cloud.

Now there are many ways to get to the cloud, but one that I am excited about uses technology originally deployed back in the late 90s. GPFS (errr, $1 in the naughty jar) Spectrum Scale is a parallel file system that can spread data across many different tiers of storage. From flash to spinning drives to tape, Scale can reduce storage administration through policy-based movement of data. This movement is driven by metadata: data is written, moved, and deleted based on policies set by the storage admin.
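Those policies are written in Scale's SQL-like policy language. A minimal sketch of a rule that tiers cold data might look like the following (the pool names and the 90-day threshold are illustrative assumptions, not values from this post):

```
/* Move files not read in 90 days from the system pool to a lower tier */
RULE 'cold_to_cloud'
  MIGRATE FROM POOL 'system'
  TO POOL 'cloudpool'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
```

The WHERE clause is evaluated against file metadata, which is why the admin never has to touch the files themselves to keep data flowing between tiers.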

So how does this help you get to the cloud? Glad you asked. IBM released a new plug-in for Scale that treats the cloud as another tier of storage. This could be backed by multiple cloud vendors, such as IBM Cleversafe, IBM SoftLayer, Amazon S3, or a private cloud (think OpenStack). The cloud provider is attached to the cloud node over Ethernet, allowing your Scale system either to write directly to the cloud tier or to move data as it ages/cools.


This will do a couple of things for you.

  1. Because we are looking at the last read date, data that is still needed but highly unlikely to be read can be moved to the cloud automatically. If a system needs the file/object, no re-coding has to be done because the namespace doesn't change.
  2. If you run out of storage and need to 'burst' out because of some monthly/yearly job, you can move data around to free up space on-prem or write directly out to the cloud.
  3. Data protection such as snapshots and backups can still take place. This is valuable to many customers: they know the data doesn't change often, but they like that they do not have to change their recovery process every time they add new technology.
  4. Cheap disaster recovery. Scale does have the ability to replicate to another system, but as these systems grow beyond multiple petabytes, replication becomes more difficult. For the most part you are going to need to recover only the most recent (~90 days) data that runs your business. Inside Scale is the ability to create mirrors of data pools, and one of those mirrors could be the cloud tier, where your most recent data is kept in case there is a problem in the data center.
  5. It allows you to start small and work your way into a cloud offering. Part of the problem some clients have is that they want to take on too much too quickly. Because Scale allows customers to have data in multiple clouds, you can start with a larger vendor like IBM, and then when your private cloud on OpenStack is up and running you can use both or just one. Migration is simple because both share the same namespace under the same file system, which frees the client from having to make changes on the application side.

Today this feature is offered as an open beta only. The release is coming soon; they are tweaking and fixing some bugs before it is generally available. Here is the link to the developerWorks page that goes into more detail about the beta and how to download a VM that will let you test these features out.

I really believe this is going to help many of my customers move onto that hybrid cloud platform. Take a look at the video below to see how it can help you as well.

NAS vs. Object: Supporting Next Apps

Here is a great little article talking about how NAS is giving way to object-based storage. Thanks to Mr. Backup for the article. Look for an update this week here on The Storage Tank about the Spectrum Scale product and object-based storage for hybrid clouds. - The Home of Storage Switzerland

Today’s apps aren’t your father’s apps. Applications developed today take for granted things that were unthinkable not that long ago, especially in the storage space. The scale needed by modern-day applications was never envisioned when traditional shared storage was invented. (NFS and SMB were released in 1984, five years before Tim Berners-Lee would invent the World Wide Web.)

It wasn’t that long ago that developers would assume an app would run on a single server and access a single filesystem. Once apps started to run on clusters, most “clusters” were really just active-active pairs, or even active-passive pairs. Of course, there were a few truly clustered applications that ran on several nodes. However, all of these systems assumed one thing: a filesystem or LUN that could be shared by all nodes, or synchronously replicated storage that mimicked that behavior.

Modern day developers assume they can…

View original post 416 more words

Categories: Uncategorized

IBM Assigned Twelve Storage Patents

Yes, IBM is at it again with its storage innovation, receiving 12 new patents for tape systems. What? You thought tape was dead? Again? Tape is very much alive and kicking, and while you may be jaded one way or another, tape is still the cheapest, most reliable long-term storage platform out there.

IBM is known for its innovation and the patents it is awarded every year. For the last 23 years, it has been awarded more patents in the US than any other company. In 2015 alone, IBM was awarded 7355 patents, compared to 7852 for Google, Microsoft, GE, and HP combined. Roughly 40% of the 18172 patents awarded went to IBM.

2015 Patents IBM vs Competitors

When you look at the 12 storage patents (listed here), you notice they are all from 2010 to 2014/15. They range from how the data is written to abrasion checking. The people behind these technologies are brilliant, to say the least, and it shows in the details of the filings. While they are sometimes hard to read, the technology being introduced will save IBM customers time and money down the road.

IBM also uses its patents as a revenue source. Just in the last year, IBM sold patents to both Pure Storage and Western Digital. Since Pure and IBM compete in the all-flash array market, IBM must have gotten a huge sum of money for those patents to offset the ability to crush a competitor. Nonetheless, IBM monetizes its investment in R&D by selling the technology to others who may be spending their money elsewhere (like marketing and selling).

If you want to learn more about the IBM Storage Patents, click over here to read about them in detail.


Categories: Uncategorized

Value of Spectrum Control to Spectrum Scale

April 15, 2016 Leave a comment

Great new Blog from my friend Ravi Prakash. Follow him for all things Spectrum Control!….


Today, if you are a customer in a sector like financial services, retail, digital media, biotechnology, science, or government, and you use applications like big data analytics, gene sequencing, digital media, or scalable file serving, there is a strong possibility that you are already using IBM Spectrum Scale (previously called General Parallel File System, or GPFS).

Spectrum scale

A question foremost in your mind may be: “If Spectrum Scale has its own element manager – the Scale GUI, what would I gain from using Spectrum Control?”
The Spectrum Scale GUI focuses on a single Spectrum Scale cluster. In contrast, the Spectrum Control GUI offers a single pane of glass to manage multiple Scale clusters: it gives you higher-level analytics, a view of the relationships between clusters, and the relationships between clusters and SAN-attached storage. In the future, we expect to extend this support to Spectrum Scale in hybrid cloud scenarios where Spectrum Scale may be backed…

View original post 382 more words

Categories: Uncategorized

Do RDMs need MPIO?

April 11, 2016 Leave a comment

Aussie Storage Blog

I got a great question the other day regarding VMware Raw Device Mappings:

If an RDM is a direct pass though of a volume from Storage Device to VM, does the VM need MPIO software like a physical machine does?

The short answer is NO, it doesn't. But I thought I would show why this is so, and in fact why adding MPIO software may help.

First up, to test this, I created two volumes on my Storwize V3700.


I mapped them to an ESXi server as LUN ID 2 and LUN ID 3.  Note the serials of the volumes end in 0040 and 0041:


On ESX I did a Rescan All and discovered two new volumes, which we know match the two I just made on my V3700, as the serial numbers end in 40 and 41 and the LUN IDs are 2 and 3:


I confirmed that the…

View original post 356 more words

Categories: Uncategorized