Archive

Posts Tagged ‘SVC’

How to Save Money by Buying Dumber Flash

October 19, 2016 Leave a comment

Here is a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: 1. it increases your $/TB, and 2. it locks you into their platform. Let's dive deeper.

1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will see a wide spectrum not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to pack in as many features as possible to reach a broader customer base. You, the customer, end up comparing which AFA has this feature or is missing that one, and it can become an Excel pivot table from hell to manage. The vendor then raises the price per TB on those solutions, arguing that the extra features give you more usable storage or better data protection. The reality is that you are paying the bills for the developers coding the next shiny feature in some basement, and that added cost is passed straight down into your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform when you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or push you to upgrade, it is harder for you to look elsewhere. If you have an outage, or something happens and your boss comes in and says "I want these <insert vendor name> boxes out of here", are you going to have to answer that the whole company runs on them and it will take 12-18 months to get them out?

I bet you're thinking, "I need those functions because I have to protect my data," or "I get more usable storage because I use this feature." What you can do instead is take those functions away from the media and move them up into a virtual storage layer above the arrays. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. With the higher functionality living in the virtual layer, the AFA underneath can be swapped out easily, and you can always shop for the lowest-priced system based solely on performance.

Now you're thinking about the cost of licenses for this function and that feature in the virtualization layer, and how that is just moving the numbers around, right? Wrong! With IBM Spectrum Virtualize you buy a license for a certain number of TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without increasing the number of licenses. For example: you purchase 100TB of licenses and you virtualize a 75TB Pure system. Your boss comes in and says he needs another 15TB for a new project coming online next week. You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM FlashSystem. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer and migrate the data to the new flash, and the hosts never have to be touched.
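If you want a feel for what that swap looks like from the Spectrum Virtualize side, here is a minimal sketch that just drives the standard CLI over SSH. The cluster address, pool names, MDisk IDs and volume name are made-up placeholders, and the flow is simplified, so treat it as an outline rather than a runbook.

```python
# Hypothetical sketch: non-disruptive volume migration under IBM Spectrum Virtualize.
# Cluster address, pool, MDisk and volume names below are placeholders, not a real config.
import subprocess

SVC = "superuser@svc-cluster.example.com"   # assumed management address

def svc(cmd: str) -> str:
    """Run a Spectrum Virtualize CLI command over SSH and return its output."""
    result = subprocess.run(["ssh", SVC, cmd], capture_output=True, text=True, check=True)
    return result.stdout

# 1. Zone the new "dumb" flash array to the cluster, then rescan for its LUNs.
svc("svctask detectmdisk")
print(svc("svcinfo lsmdisk -filtervalue mode=unmanaged"))

# 2. Put the new MDisks into their own pool (extent size must match your standards).
svc("svctask mkmdiskgrp -name new_flash_pool -ext 1024")
svc("svctask addmdisk -mdisk mdisk8:mdisk9 new_flash_pool")

# 3. Migrate the volume out of the old array's pool; hosts keep running throughout.
svc("svctask migratevdisk -mdiskgrp new_flash_pool -vdisk app_vol01")

# 4. Watch progress until the migration disappears from the list.
print(svc("svcinfo lsmigrate"))
```

Once the migrations drain, the old array's MDisks can be removed from the pool and the box unzoned, all without the hosts noticing.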

The cool thing I see with this kind of virtualization layer is the simplicity: you don't have to know how to program APIs, or bring in a bunch of consultants for a long, drawn-out study that ends with them telling you to go to 'the cloud'. In a way, this technology is creating a private cloud of storage for your data center. But the point here is that by not having to buy feature licenses every time you buy a box, you lower that $/TB and you get the true freedom to shop the vendors.


IBM FlashSystem: What's better, Fast or Flash?

April 15, 2013 Leave a comment

By now everyone has seen the great marketing adverts from AT&T where an interviewer sits in a classroom asking some younger students basic questions like, "Would you like more or less?" and my favorite, "Which is better: fast or slow?". Of course all of the kids scream FAST! Even to the point where one little girl talks about being slow and being turned into a werewolf, then crying because all she wants is to be human again… Then the tagline: "It's not complicated".
Whether we are talking about mobile phones, sandwich delivery (Jimmy John's ROCKS!) or flash-based arrays, the "how fast can you …" question is real. We can time a person delivering food, we can meter the signal on a mobile phone, and we can measure the response time of a flash array. But what is more important than being fast? Simplicity.
There are many new start-ups entering the flash storage market, and some of the larger vendors are snapping up these little guys like Little Bunny Foo Foo (not sure about the bopping part).

What I have seen is that most of these solutions sound good on paper and in PowerPoint, but once they are in the sandbox they are complicated and take weeks of tuning to reach the performance as advertised. There seems to be a growing trend in the flash market of a "just get it in there" and deal with the tuning later mentality. This is not surprising given where it's coming from, as those vendors typically keep extra engineers on site at a customer just to make sure their legacy gear behaves properly.

The IBM FlashSystem 710/720 and 810/820 were announced last week with a huge kick-off meeting in New York and a huge social media push, which I thought was marvelous. IBM also announced it is setting aside $1 billion (yes, Dr. Evil apparently works at Almaden Labs now) for research and product enhancement of flash technology. Someone on our tweet chat that day said something like, "Who else in the world could spend that much money on one particular piece of hardware but IBM?", and I think the answer is fairly clear: no one.
What will come from this huge R&D investment, you may ask? I suspect IBM is looking at how to take the technology and sprinkle it throughout the storage portfolio and into the server side as well. There are already some vendors getting started with adding flash PCIe cards into the hosts, but they are typically storage companies trying to talk to server people. Not saying server people are different from storage people, but it's a different language at times. Again the question comes up: "Who else but IBM can talk servers, switches, storage and software and offer good pricing and support?" No one.
I suspect the naysayers will be ramping up their FUD about the IBM FlashSystem and how it's this and not that, but let me tell you:

  • It is fast
  • It is easy to manage
  • It is efficient

One aspect that I see as a HUGE win is having the FlashSystem behind our SAN Volume Controller (SVC) and letting the Easy Tier function figure out which data needs to be on flash, leaving the rest on old, rusting spinning drives. This becomes very interesting because you get better performance out of your storage and increase the usefulness of the FlashSystem by always keeping the hot data on the fastest device. Per the IBM SSIC, the FlashSystems are supported as devices that SVC can virtualize for different OS types.
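To make the Easy Tier idea a little more concrete, here is a toy sketch of heat-based tiering. This is not IBM's actual algorithm, just an illustration of the principle: count I/Os per extent and keep the hottest extents on the flash tier, up to its capacity.

```python
# Toy illustration of heat-based tiering (not Easy Tier's real algorithm).
# Extents are ranked by recent I/O count; the hottest ones get the flash tier.
from collections import Counter

FLASH_EXTENTS = 1          # assumed flash-tier capacity in extents (tiny for the demo)
io_counts = Counter()      # extent_id -> I/Os observed in the current window

def record_io(extent_id: int) -> None:
    """Called for every host I/O; builds the per-extent heat map."""
    io_counts[extent_id] += 1

def plan_migrations(current_flash: set[int]) -> tuple[set[int], set[int]]:
    """Return (promote, demote): extents to move onto and off the flash tier."""
    hottest = {ext for ext, _ in io_counts.most_common(FLASH_EXTENTS)}
    return hottest - current_flash, current_flash - hottest

# Example: extent 7 is getting hammered, extent 3 has gone cold.
for _ in range(1000):
    record_io(7)
record_io(3)
promote, demote = plan_migrations(current_flash={3})
print("promote:", promote, "demote:", demote)   # promote: {7} demote: {3}
```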


I believe this is a very interesting time to be in storage, as we are seeing a real change in storage designs. I hope the new FlashSystems are adopted by mainstream customers and help drive the cost down even further, so we can start looking at solutions like flash-and-tape only. Needless to say, there are a lot of ideas and cool things coming out of IBM Labs, and flash is going to be one of the biggest.

Stretching SVC to meet new business availability needs.

January 18, 2013 Leave a comment

IBM has been quietly working on a technology that is changing the way administrators will deploy their VMware environments. The SAN Volume Controller (SVC) Split Cluster solution allows live application mobility across data centers based on VMware Metro vMotion. This solution can now mitigate unplanned outages as well as provide additional flexibility.

SVC now supports a split cluster configuration where nodes can be separated by a distance of up to 300 km. This type of configuration does require clustering software at the application and server layer to fail over to a server at the corresponding site in order to continue business and resume access to the disks. SVC keeps both copies of the storage in sync and mirrors the cache between the nodes, so the loss of one location does not disrupt data access at the other site.

The next big advantage SVC has is a small quorum disk that can prevent split-brain issues. Split-brain problems occur when nodes are no longer able to communicate with each other and each starts allowing writes to its own copy of the data. SVC uses a tie-break from a third site to ensure that one location survives and continues to serve data.

The SVC Split Cluster configuration uses the concept of a failure domain. This lets the SVC cluster know which components sit inside a boundary where a common failure may occur (e.g. power, fire, flood). The entire SVC Split Cluster must be made up of three such boundaries: two for the storage controllers containing the customer data and a third for the active quorum disk.
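For a feel of how the third-site quorum breaks the tie, here is a deliberately over-simplified sketch. The real SVC quorum protocol is more involved; the site names and reachability data below are illustrative assumptions only.

```python
# Simplified tie-break illustration (not the real SVC quorum protocol).
# Three failure domains: two data sites plus a quorum site. When the link
# between the data sites fails, whichever site reaches the quorum keeps
# serving I/O; the other stops to avoid split-brain writes.

reachable = {
    "site_A": {"quorum"},        # assumed: A can still reach the quorum site
    "site_B": set(),             # assumed: B is isolated from everything
}

def may_continue(site: str, partner: str) -> bool:
    """A site keeps serving I/O if it sees its partner, or wins the quorum race."""
    if partner in reachable[site]:
        return True                       # no partition: carry on normally
    return "quorum" in reachable[site]    # partitioned: quorum decides the winner

print("site_A continues:", may_continue("site_A", "site_B"))  # True
print("site_B continues:", may_continue("site_B", "site_A"))  # False -> halts I/O
```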

Using VMware's VMFS file system, the SVC can supply access to all of the vSphere hosts in both locations. During vMotion, the virtual machine can switch to a new physical host while keeping its network identity. By using Metro vMotion to migrate VMs, customers can now plan for disaster avoidance, load balancing, and data center optimization of power and cooling.

This solution is also for customers looking to add high availability within a single data center. Imagine two servers in different parts of the data center, on different power feeds and different SAN fabrics. You can now provide continuous availability to those servers without an administrator being called in the middle of the night.

For more information about SVC Split Cluster, including step-by-step instructions on how to set up the SVC, VMware, and the Layer 2 network, check out this Redbook.

Three storage technologies that will help your 2013 budget plan

January 7, 2013 2 comments

As the old saying goes, "out with the old and in with the new": technology keeps rolling down the road. If you are like me, you are always looking for the next thing coming and can't wait until it hits the market. Watching the blogs, keeping up with podcasts, and asking around the labs keeps the buzz going until the day I can see that new shiny thing in the wild. Over the holiday break I was catching up on some things that interest me, and I read a Forbes article that described the storage market for 2013 as flat. The article asked some 'large enterprises' what their spend for 2013 was going to be, and one reported that it needed to make do with what it had.

“…. the CIO believed that his IT department had been on a storage buying binge and it was high time to in fact use more of the raw disk capacity on the machine room floor that was already powered-up and spinning. Utilization rates of between 20 and 30% of total capacity are common for first-line production storage systems. The CIOs point was simply; let’s use more of what we already have before I’ll sign-off on another PO.”

I would agree that there are companies with very low utilization rates, especially on those main production storage systems. We as storage architects have been designing these systems to out-perform, out-last and out-live most of the applications they support. Long gone are the days where we simply put everything that needed performance and uptime in RAID 10; newer technologies such as RAID 6, or even no traditional RAID at all (à la XIV), have been adopted. This has made it possible to keep systems up longer and lower the cost. Yet even with drives getting larger and automatic tiering helping storage administrators get a better bang for the buck, we face the same paradigm we faced five years ago: storage performance is driven by the number of arms and the bandwidth to and from the machine.

This will lead to more small spinning drives, more expensive SSDs, and more system sprawl, which in turn leads to a management nightmare. If we are going to be asked to increase utilization, there are three tried-and-true technologies that can help. The first: much like the server market, the storage market is going to need to embrace virtualization. Virtualization does something very cool with storage; it creates a level playing field and allows trapped capacity to be released. IBM has been selling the SAN Volume Controller for over 10 years and has a very wide interoperability list of storage systems that can sit under the hypervisor. The SVC allows administrators to move volumes from one system to another without disrupting the application or host. Much like a virtual machine moving from one server to the next, the storage admin can use the SVC to take storage from an EMC VMAX and move it to an IBM DS8870 without downtime, with only a couple of mouse clicks. This helps customers take parts of the storage that are under-performing and start 'stacking' the storage pools so a CIO can see utilization start going up. If you are worried about performance, the SVC can add a couple of SSD drives that keep more reads at the virtualization layer rather than on the storage array.

The second technology we look to is flash arrays. You think this is new? Texas Memory Systems has been building solid-state arrays for thirty years. Only very recently has the adoption of larger drives forced companies to look at augmenting their storage arrays with faster, low-latency systems. By integrating these flash systems with arrays of larger drives, we can start putting more data on those big drives without compromising performance. The best analogy someone gave me is that the difference is like going from a carburetor-based engine to a fuel-injected engine: not only will your car run better, it will be more efficient and improve your miles per gallon.

The third is another tried-and-true technology: compression. Yes, I know most of you are shaking your heads and thinking about performance, but this is not the same algorithm you are used to. A few years ago IBM acquired a small company that was doing some very cool compression technology. This new take on an old standard is based on a software engine called RACE, the Random Access Compression Engine, which compresses data in real time. The difference shows up mainly when compressed data has to be changed. Older technology has to uncompress the data, make the change, and compress it again as the write happens. With RACE, changing the data does not require uncompressing it first. That is a big difference when deciding whether or not to use compression on your storage.
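To see why that matters, here is a small sketch of the traditional approach that RACE avoids: with classic fixed-block compression, even a tiny update forces a full decompress, modify and recompress cycle on the whole block. This shows the old scheme only, not how RACE itself works, and the block size is an arbitrary assumption.

```python
# Illustration of classic block compression's update penalty (not how RACE works).
import zlib

BLOCK_SIZE = 64 * 1024                      # assumed compression block size
block = bytes(BLOCK_SIZE)                   # pretend this is a block of user data
stored = zlib.compress(block)               # what actually sits on disk

def update(stored_block: bytes, offset: int, new_bytes: bytes) -> bytes:
    """Change a few bytes inside a compressed block the old-fashioned way."""
    data = bytearray(zlib.decompress(stored_block))   # 1. decompress the whole block
    data[offset:offset + len(new_bytes)] = new_bytes  # 2. apply the small change
    return zlib.compress(bytes(data))                 # 3. recompress the whole block

# A 16-byte write still costs a full 64 KB decompress + recompress:
stored = update(stored, offset=4096, new_bytes=b"X" * 16)
print(len(stored), "compressed bytes rewritten for a 16-byte change")
```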

With these technologies you can find ways to help your company get better utilization out of its storage. Even though we are all looking for that new shiny object, it's actually tried-and-true technology that is going to help us this year. Maybe 2014 will be the year of shiny-object-based storage.

DRAM vs Flash: who wins?

October 6, 2012 Leave a comment

I have spent most of the day looking over the products from TMS (Texas Memory Systems) that IBM just acquired. One of the questions I have always wondered about is how to map performance back to a particular technology. When dealing with DRAM and flash devices, there are trade-offs with each. The first that comes to mind is that DRAM requires some sort of battery backup, as it loses its contents when power is lost. Most DRAM cards and appliances have this under control, either by destaging to SSD or with a battery attached to the I/O card that gives the DRAM enough time to hold its data until power is restored.
DRAM is typically faster than its flash cousin, as well as more reliable and more durable. There is usually less controller latency because there is no wear leveling or garbage collection to manage. DRAM is still more expensive than flash, though, and has the problem of needing power all the time.
When deciding which solution fits your environment, it comes down to price and I/O. The DRAM solutions are usually smaller in capacity but can push more I/O. For example, the TMS 420 is 256GB of storage in a 4U frame that can push 600,000 IOPS. Not bad if you need 256GB of really fast space. This could be used for very high transaction volumes. It can be deployed alongside traditional storage and used for the most frequently accessed database tables and indexes, while lower-I/O tables are thrown over to the spinning hard disk side.
In comparison, the TMS 820 flash array delivers a whopping 24TB in a 1U space and can push a "meek" 450,000 IOPS. This is somewhat incredible: the footprint is small and dense, but it still gives you the punch needed to beef up parts of your infrastructure. I started running the numbers to compare this with, say, a V7000 with all SSD drives, and we can't come close. You could virtualize the system under the SVC code (included in the V7000) and use the Easy Tier function to move hot data to and from the TMS array, which puts the performance where it is needed. I see now why IBM decided to acquire TMS.
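Just to put those two data points side by side, here is a quick back-of-the-envelope comparison using only the capacity and IOPS numbers quoted above; everything else is simple arithmetic.

```python
# Back-of-the-envelope comparison using the figures quoted above.
systems = {
    #                 capacity_tb, rack_units, iops
    "TMS 420 (DRAM)":  (0.256, 4, 600_000),
    "TMS 820 (flash)": (24.0,  1, 450_000),
}

for name, (capacity_tb, rack_units, iops) in systems.items():
    print(f"{name}: {capacity_tb / rack_units:7.3f} TB/U, "
          f"{iops / capacity_tb:10,.0f} IOPS per TB")

# TMS 420 (DRAM):   0.064 TB/U,  2,343,750 IOPS per TB
# TMS 820 (flash):  24.000 TB/U,     18,750 IOPS per TB
```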
So who wins the DRAM-versus-flash discussion? The vendors, of course; they are the ones going after this market aggressively. I think most customers are trying to figure out whether it is worth spending money to move a database from 1,500 disks at sub-20 ms response time to 200 larger disks plus a DRAM or flash device that keeps the latency the same. As an architect, I keep in mind how much floor space and environmentals all of those disks eat up, and having an alternative, even one that costs more up front, is appealing.

Got VNX? IBM can virtualize that.

March 5, 2012 Leave a comment

Anytime Anthony puts the word ‘wags’ in a post, it makes me smile.

When IBM brought out the SAN Volume Controller (SVC) in 2003, the goal was clear: support as many storage vendors and products as possible. Since then IBM has put a huge ongoing effort into interoperation testing, which has allowed them to continue expanding the SVC support matrix, making it one of the most comprehensive in the industry. When the Storwize V7000 was released in 2010 it was able to leverage that testing heritage, allowing it to have an amazingly deep interoperation matrix on launch date. It almost felt like cheating.

However I recently got challenged on this with a simple question: Where is the VNX? If you check out the Supported Hardware list for SVC V6.3 or Storwize V7000 V6.3 you can find the Clariion up to a CX4-960, but no VNX.

The short answer is that while the VNX is not listed there yet, IBM are actively supporting customers using VNX virtualized behind SVC and Storwize V7000. If you have a VNX 5100, 5300, 5500, 5700 or 7500 then ask your IBM pre-sales Technical Support to open an Interoperation Support Request. The majority are being approved very quickly. The official support sites that I referenced above will be updated soon (but don't wait; if you need support, ask for it). IBM is working methodically with EMC to be certain that when a general publication of support is released for VNX (soon!), both companies will agree with the published details.

Read more here

http://aussiestorageblog.wordpress.com/2012/03/05/got-vnx-ibm-can-virtualize-that/


SAN Volume Controller: Best Practices and Performance Guidelines

February 17, 2012 Leave a comment

To go with my earlier rant on why SVC is better than the HDS VSP, here is the SVC best practices Redbook. A good part of the Redbook is about performance and tuning. Have a great weekend!

http://www.redbooks.ibm.com/redpieces/abstracts/sg247521.html?Open
