
Archive for the ‘SVC’ Category

How to Save Money by Buying Dumber Flash

October 19, 2016

Here is a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: 1) it increases your $/TB, and 2) it locks you into that vendor's platform. Let's dive deeper.

1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will likely see a wide spectrum not just in the media (eMLC, MLC, cMLC) but also in the features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. Meanwhile you, the customer, end up comparing which AFA has this and is missing that, and it can become an Excel pivot table from hell to manage. The vendor will then start raising the price per TB on those solutions, with the justification that the extra features give you more usable storage or better data protection. But the reality is that you are paying the bills for the developers coding the next shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform if you ever want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or push you to upgrade it is harder for you to look elsewhere. If you have an outage or something happens and your boss comes in and says, "I want these <insert vendor name> boxes out of here," are you really going to answer that the whole company runs on that platform and it is going to take 12-18 months to get off it?

I bet you're thinking, "I need those functions because I have to protect my data," or "I get more usable storage because I use this feature." But what you can do is take those functions away from the media and bring them up into a virtual storage layer above the arrays. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher-level functionality into the virtual layer, the AFA can be swapped out easily, and you are free to always pick the lowest-priced system based solely on performance.

Now you're thinking the cost of licenses for this function and that feature in the virtualization layer just moves the numbers around, right? Wrong! With IBM Spectrum Virtualize you buy a license for a certain number of TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to buy more licenses. For example, you purchase 100TB of licenses and you virtualize a 75TB Pure system. Your boss comes in and says he needs another 15TB for a new project coming online next week. You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM FlashSystem. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer and migrate the data to the new flash, and the hosts do not have to be touched.
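To make the arithmetic in that example concrete, here is a minimal sketch of the capacity bookkeeping. The class and method names are made up for illustration (this is not an IBM API): the license is bought once for a fixed number of TBs, and arrays can be added or swapped underneath the virtual layer as long as the total virtualized capacity stays under that number.

```python
# Sketch of perpetual, capacity-based licensing: numbers mirror the example
# above (100 TB license, 75 TB Pure array, 15 TB added later). Names are
# hypothetical, not any vendor's API.

class VirtualizationLayer:
    def __init__(self, licensed_tb):
        self.licensed_tb = licensed_tb   # perpetual license, bought once
        self.arrays = {}                 # array name -> capacity in TB

    def virtualized_tb(self):
        return sum(self.arrays.values())

    def add_array(self, name, capacity_tb):
        """Insert a 'dumb' array under the virtual layer if the license covers it."""
        if self.virtualized_tb() + capacity_tb > self.licensed_tb:
            raise ValueError("would exceed licensed capacity; buy more TB licenses")
        self.arrays[name] = capacity_tb

    def swap_array(self, old_name, new_name, capacity_tb):
        """Retire one array and bring in another; no new license is needed
        as long as the total stays under the licensed capacity."""
        del self.arrays[old_name]
        self.add_array(new_name, capacity_tb)

layer = VirtualizationLayer(licensed_tb=100)
layer.add_array("Pure", 75)                      # initial 75 TB array
layer.add_array("NewAFA", 15)                    # the extra 15 TB project, still covered
layer.swap_array("Pure", "FlashSystem900", 75)   # years later, swap vendors
print(layer.virtualized_tb(), "TB virtualized of", layer.licensed_tb, "TB licensed")
```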

The cool thing I see with this kind of virtualization layer is the simplicity: you do not have to know how to program against APIs, or bring in a bunch of consultants for some long, drawn-out study that ends with them telling you to go to 'cloud'. In a way, this technology creates a private cloud of storage for your data center. But the point here is that not having to buy feature licenses every time you buy a box lowers that $/TB and gives you the true freedom to shop the vendors.

IBM FlashSystem: What's Better, Fast or Flash?

April 15, 2013

By now everyone has seen the great marketing adverts from AT&T where an interviewer sits in a classroom asking young students basic questions like "Would you like more or less?" and my favorite, "Which is better: fast or slow?". Of course all of the kids scream FAST! One little girl even talks about being slow and getting turned into a werewolf, then crying because all it wants is to be human again… Then the tagline: "It's not complicated".
Whether we are talking about mobile phones, sandwich delivery (Jimmy John's ROCKS!) or flash-based arrays, the "how fast can you…" question is real. We can time a person delivering food, we can meter the signal on a mobile phone, and we can measure the response time of a flash array. But what is more important than being fast? Simplicity.
There are many new start-ups entering the flash storage market, and some of the larger vendors are snapping up these little guys like Little Bunny Foo Foo (not sure about the bopping part).

What I have seen is that most of these solutions sound good on paper and in PowerPoint, but once in the sandbox they are complicated and take weeks of tuning to get the performance as advertised. There seems to be a growing "just get it in there and tune it later" mentality in the flash market. This is not surprising given where it is coming from, as those vendors typically keep extra engineers on a customer account to make sure their legacy gear behaves properly.

The IBM FlashSystem 710/720 and 810/820 were announced last week with a huge kick-off meeting in New York and a huge social media push, which I thought was marvelous. There were announcements of IBM setting aside $1 billion (yes, Dr. Evil now works at Almaden Labs) for research and product enhancement of flash technology. Someone on our tweet chat that day asked something like, "Who else in the world could spend that much money on one particular piece of hardware but IBM?", and I think the answer is fairly clear: no one.
What will come from this huge R&D investment, you may ask? I suspect IBM is looking at how to take the technology and sprinkle it throughout the storage portfolio and into the server side as well. There are already some vendors adding flash PCIe cards into the hosts, but they are typically storage companies trying to talk to server people. Not saying server people are different from storage people, but it is a different language at times. Again the question comes up, "Who else but IBM can talk servers, switches, storage and software and offer good pricing and support?"; no one.
I suspect the naysayers will be ramping up their FUD about the IBM FlashSystem and how it is this and not that, but let me tell you:

  • It is fast
  • It is easy to manage
  • It is efficient

One aspect that I see as a HUGE win is having the FlashSystem behind our SAN Volume Controller (SVC) and letting the Easy Tier function figure out what data needs to be on flash while leaving the rest on old, rusty spinning drives. This becomes very interesting because you get better performance out of your storage and increase the usability of the FlashSystem by always keeping the hot data on the fastest device. Per the IBM SSIC (System Storage Interoperation Center), the FlashSystems are supported as devices that SVC can virtualize across different OS types.
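Easy Tier's actual placement algorithm is IBM's own, but the basic idea is easy to picture. Here is a deliberately simplified sketch, with made-up extent names and I/O counts, of what heat-based placement looks like: rank extents by recent I/O activity and keep only the hottest ones on the flash tier.

```python
# Simplified illustration of heat-based tiering: the busiest extents go to
# flash, the rest stay on spinning disk. This is a sketch of the concept,
# not IBM's actual Easy Tier implementation; the extents and I/O counts
# below are invented for the example.

def place_extents(extent_io_counts, flash_capacity_extents):
    """Return (on_flash, on_disk) given per-extent I/O counts."""
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    on_flash = set(ranked[:flash_capacity_extents])   # hottest extents
    on_disk = set(ranked[flash_capacity_extents:])    # everything else
    return on_flash, on_disk

io_counts = {"ext0": 12000, "ext1": 40, "ext2": 9800, "ext3": 5, "ext4": 3100}
flash, disk = place_extents(io_counts, flash_capacity_extents=2)
print("flash tier:", sorted(flash))   # the two hottest extents
print("disk tier: ", sorted(disk))
```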

[Figure 1: SAN Volume Controller]

I believe this is a very interesting time to be in storage as we watch storage designs change. I hope the new FlashSystems get adopted by mainstream customers and help drive the cost down even further, so we can start looking at flash-and-tape-only solutions. Needless to say, there are a lot of ideas and cool things coming out of IBM Labs, and flash is going to be one of the biggest.

Stretching SVC to meet new business availability needs.

January 18, 2013

IBM has been secretly working on a technology that is changing the way administrators will deploy their VMware environments. The SAN Volume Controller (SVC) Split Cluster solution allows live application mobility across data centers based on VMware Metro vMotion. This solution can now mitigate unplanned outages as well as provide additional flexibility.

SVC can now support a split cluster configuration where nodes can be separated by a distance of up to 300 km. This type of configuration does require clustering software at the application and server layer to fail over to a server at the surviving site, so the business can continue and access to the disks can resume. SVC keeps both copies of the storage in sync and mirrors the cache between the nodes, so the loss of one location does not disrupt data access at the other site.
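The key behavior is that a host write is only acknowledged once both site copies have it, so either site can be lost without losing data. Here is a minimal conceptual sketch of that write path, with invented class names; it is not SVC's actual cache-mirroring implementation.

```python
# Conceptual sketch only: a write is acknowledged to the host only after
# both site copies accept it, so losing either site leaves a complete,
# in-sync copy behind.

class SiteCopy:
    def __init__(self, name):
        self.name = name
        self.blocks = {}            # lba -> data

    def write(self, lba, data):
        self.blocks[lba] = data

def mirrored_write(lba, data, site_a, site_b):
    """Write to both copies before acknowledging the host."""
    site_a.write(lba, data)
    site_b.write(lba, data)
    return "ack"                    # host sees success only once both copies are updated

a, b = SiteCopy("DC-A"), SiteCopy("DC-B")
mirrored_write(100, b"payload", a, b)
assert a.blocks[100] == b.blocks[100]   # both sites stay in sync
```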

The next big advantage SVC has is a small quorum disk that can alleviate any split-brain issues. Split-brain problems occur when nodes are no longer able to communicate with each other and each starts allowing writes to its own copy of the data. SVC uses a tie break from a third site to ensure that one location survives and keeps servicing I/O while the other stops, rather than both diverging.
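Here is a small sketch of that tie-break idea, again as a conceptual illustration rather than the actual SVC quorum protocol: once the two data sites can no longer see each other, whichever site reserves the quorum disk at the third site first keeps running, and the other freezes writes instead of diverging.

```python
# Conceptual sketch of a quorum tie break after the sites lose contact with
# each other. Site names are hypothetical; this is not the SVC protocol.

def tie_break(reach_order):
    """reach_order: sites in the order they manage to reserve the quorum disk."""
    if not reach_order:
        return None, []                       # nobody reaches the quorum: all I/O stops
    winner, losers = reach_order[0], list(reach_order[1:])
    return winner, losers                     # winner continues, losers freeze writes

winner, frozen = tie_break(["DC-B", "DC-A"])  # DC-B got to the quorum disk first
print("keeps running:", winner)               # DC-B
print("freezes I/O:  ", frozen)               # ['DC-A']
```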

The SVC Split Cluster configuration uses the concept of a failure domain. This lets the SVC cluster know which components sit inside a given boundary where a failure may occur (e.g. power, fire, flood). The entire SVC Split Cluster must comprise three such domains: two for the storage controllers containing the customer data and a third for the active quorum disk.

Using VMware's VMFS file system, SVC can supply access to all of the vSphere hosts in both locations. During vMotion, the virtual machine can switch to a new physical host while keeping its network identity. By using Metro vMotion to migrate VMs, customers can now plan for disaster avoidance, load balancing, and data center optimization of power and cooling.

This solution is also for customers looking to add high availability within a single data center. Imagine two servers in different parts of the data center: they could be on different power feeds and different SAN fabrics. You can now provide continuous availability to these servers without having an administrator called in the middle of the night.

For more information about SVC Split Cluster and even step by step instructions on how to setup the SVC, VMware and Layer2 network check out this Redbook.

HDS Still Looking up at SVC Bar

February 17, 2012

This week has been so busy I have not had time to sit down and put together my thoughts on a topic that needs some time to explain. There was big news from HDS this week around the new non-disruptive upgrade software for their VSP. It looks like you can now do a non-disruptive upgrade when migrating from the older Hitachi virtualization controllers. Seeing that they have the USP, NSC 55, USP V and USP VM, I can see why they would need something to help people get from one platform to the next.

One has to ask why there are so many different and separate systems. Could it be due to growth in separate lines of business, or just acquisitions that are now being integrated into the VSP line? HDS has a history of telling customers, "I'm sorry, but to get the performance you need, you have to bring in the forklift and buy this new, shinier widget."

But just because you can migrate doesn't mean you can keep those older models alongside the VSP. HDS in the past has not allowed mixing different versions of the virtualization controller (USP), or mixing the USP and VSP. IBM SAN Volume Controller (SVC) allows various models of the hardware engine to be mixed as long as they run the same version of software.

A couple of things I have noticed over the years about the HDS VSP. The system does not scale in throughput beyond the capabilities of the single VSP (or USP) storage array that hosts the virtualized environment. Even though HDS will claim the VSP can scale out to two control chassis, it is really a single global-cache system with an extended switched fabric. The IBM SVC has no such architectural limitation in scale-out clusters, as shown in the latest SPC benchmark with an eight-node cluster.

I also have a problem with HDS requiring all volumes that use SSD drives to be thin provisioned. In addition, they require external storage to be configured as the lowest tier. This doesn't really come into play unless you want to do Dynamic Tiering between two external storage resources. IBM SVC can run the Easy Tier process between internal and external storage with both thin- and thick-provisioned volumes.

When I really started thinking about customers who use Texas Memory RAMSAN or Violin Memory 3200 systems, and how SVC could bring more intelligence to how data moves in and out of those systems, it made the VSP look like HD DVD while SVC is more like Blu-ray.

I also have a customer who wants to do split I/O group clustering to enable active/active data collaboration across a distance of roughly 300 km. With quorum disks, the split I/O group can be used for automated disaster recovery with an RPO and RTO of zero. The USP and VSP don't allow for this capability.

Now, not everything is bad. I did read on Claus Mikkelsen's blog that the VSP supports the mainframe FICON protocol. He goes on to say that the VSP is the only platform that supports virtualization for the mainframe. That might be true, but why would mainframe customers want their storage virtualized? It's not like they want the ERP solution that runs their multi-billion dollar business sitting on the same disk the geek down the hall is using as a Quake server. If you are running a mainframe, it's probably best to keep your data on the proven DS8800. 'Nuff said.

I do applaud HDS for the effort and wish they would spend some more time trying to reach the same standard as the SVC. But with this latest pass they fall a little short; maybe they can spend the time watching videos on their HD DVD player.

 

Categories: SVC