Archive

Posts Tagged ‘virtualization’

Three storage technologies that will help your 2013 budget plan

January 7, 2013 2 comments

As the old saying goes, “out with the old and in with the new”: technology keeps rolling down the road. If you are like me, you are always looking for the next thing coming and can’t wait until it hits the market. Watching the blogs, keeping up with podcasts, and asking around the labs keeps the buzz going until the day I can see that new shiny thing in the wild.  Over the holiday break I was catching up on some things that interest me and read a Forbes article that described the storage market for 2013 as flat.  The article had asked some ‘large enterprises’ what their spend for 2013 was going to be, and one reported they needed to make do with what they had.

“…. the CIO believed that his IT department had been on a storage buying binge and it was high time to in fact use more of the raw disk capacity on the machine room floor that was already powered-up and spinning. Utilization rates of between 20 and 30% of total capacity are common for first-line production storage systems. The CIOs point was simply; let’s use more of what we already have before I’ll sign-off on another PO.”

I would agree that there are companies with very low utilization rates, especially on those main production storage systems.  We as storage architects have been designing these systems to outperform, outlast and outlive most of the applications they support.  Long gone are the days when we simply put everything that needed performance and uptime on RAID 10; newer technologies like RAID 6 or even no RAID (a la XIV) have been adopted.  This has made it possible to keep systems up longer and lower the cost. Yet even with drives getting larger and automatic tiering helping storage administrators get a better bang for the buck, we are facing the same paradigm we faced five years ago.  Storage performance is driven by the number of arms and the bandwidth to and from the machine.
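To put some rough numbers behind the arms-and-bandwidth point, here is a quick back-of-the-envelope sketch. The per-drive IOPS figures and the 100TB target are my own illustrative assumptions, not vendor specs, but they show why fewer, bigger drives means fewer arms doing the work:

```python
import math

# Back-of-the-envelope: aggregate random IOPS for a fixed usable capacity
# as drive sizes grow. Per-spindle IOPS are rough rules of thumb, not
# measured vendor numbers.

USABLE_TB = 100  # target usable capacity, purely illustrative

drive_options = [
    # (label, capacity in TB, rough random IOPS per spindle)
    ("300GB 15K RPM", 0.3, 180),
    ("600GB 10K RPM", 0.6, 140),
    ("2TB 7.2K NL-SAS", 2.0, 80),
]

for label, cap_tb, iops in drive_options:
    spindles = math.ceil(USABLE_TB / cap_tb)
    print(f"{label:>16}: {spindles:4d} spindles -> ~{spindles * iops:,} aggregate IOPS")
```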

This will lead to more small spinning drives, more expensive SSDs, and more system sprawl, which adds up to a management nightmare.  If we are going to be asked to increase utilization, there are three tried and true technologies that can help.  Much like the server market, the storage market is going to need to embrace virtualization.  Virtualization does something very cool with storage: it gives it a level playing field and allows some of the trapped storage to be released.  IBM has been selling the SAN Volume Controller for over 10 years and has a very wide interoperability list of storage systems that can fit under the hypervisor.  The SVC allows administrators to move volumes from one system to another without disrupting the application or host.  Much like a virtual machine moving from one server to the next, the storage admin can use the SVC to take storage from an EMC VMAX and move it to an IBM DS8870 without downtime, with only a couple of mouse clicks.  This helps customers take parts of the storage that are underperforming and start ‘stacking’ the storage pools so that a CIO can see utilization start going up.  If you are worried about performance, the SVC can increase it with a couple of SSD drives that keep more reads at the virtualization layer and off the storage array.
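To make the “trapped storage” point concrete, here is a tiny sketch with made-up arrays and numbers. Nothing below comes from a real SVC configuration; it just shows how pooling a few half-empty boxes changes the utilization picture a CIO sees:

```python
# Illustrative only: utilization of individual arrays vs. one virtualized pool.
# Array names, capacities and used figures are invented for the example.

arrays = {
    "array_a": {"capacity_tb": 200, "used_tb": 60},
    "array_b": {"capacity_tb": 150, "used_tb": 30},
    "array_c": {"capacity_tb": 100, "used_tb": 45},
}

for name, a in arrays.items():
    print(f"{name}: {a['used_tb'] / a['capacity_tb']:.0%} utilized")

total_cap = sum(a["capacity_tb"] for a in arrays.values())
total_used = sum(a["used_tb"] for a in arrays.values())
print(f"virtualized pool: {total_used / total_cap:.0%} utilized, "
      f"{total_cap - total_used} TB still free before the next PO")
```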

Another technology we can look to is flash arrays.  You think this is new? Texas Memory Systems has been building solid-state arrays for over thirty years.  It is only very recently that the adoption of larger drives has forced companies to look at augmenting their storage arrays with faster, low-latency systems.  By integrating these flash systems with larger-drive arrays, we can start putting more on those drives without compromising performance.  The best analogy someone gave me is that the difference is like going from a carburetor-based engine to a fuel-injected engine.  Not only will your car run better, it will be more efficient and drive up your miles per gallon.
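The fuel-injection analogy can be put into rough numbers. Assuming ballpark latencies of about 0.2 ms for flash and 8 ms for a busy spinning tier (my assumptions, not any product’s spec sheet), the blended read latency falls fast as the hot data lands on flash:

```python
# Rough blended-latency model: a slice of the hot reads is served from a
# flash tier sitting in front of spinning disk. Latencies and hit rates
# are illustrative assumptions, not measurements of any product.

FLASH_MS = 0.2     # assumed flash read latency
SPINNING_MS = 8.0  # assumed busy spinning-disk read latency

for flash_hit_rate in (0.0, 0.5, 0.8, 0.95):
    blended = flash_hit_rate * FLASH_MS + (1 - flash_hit_rate) * SPINNING_MS
    print(f"{flash_hit_rate:.0%} of reads on flash -> ~{blended:.1f} ms average read latency")
```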

The third technology is another tried and true one: compression.  Yes, I know most of you are shaking your heads and thinking about performance, but this is not the same algorithm you are used to.  A few years ago, IBM acquired a small company that was doing some very cool compression technology.  This new take on an old standard is based on a software engine called RACE, the Random Access Compression Engine, which compresses data in real time.  The difference shows up mainly when compressed data has to be changed.  Older technology has to uncompress the data, make the change and then compress it again as the write happens.  With RACE, the data does not need to be uncompressed to apply the change.  That is a big difference when deciding whether or not to use compression in your storage.
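To see why the write path matters, here is a small sketch of the older approach using Python’s zlib as a generic stand-in (this is not the RACE algorithm, just an illustration of traditional chunk compression): changing a few bytes means decompressing the whole chunk, patching it, and compressing it all over again.

```python
import zlib

# Traditional chunk-level compression (zlib as a generic stand-in, not RACE):
# a small update forces inflate -> modify -> deflate of the entire chunk.

chunk = b"customer=acme;balance=000100;" * 100   # pretend contents of one compressed extent
stored = zlib.compress(chunk)

def update_balance(stored_blob: bytes, new_balance: bytes) -> bytes:
    data = bytearray(zlib.decompress(stored_blob))   # 1. decompress the whole chunk
    i = data.index(b"balance=") + len(b"balance=")
    data[i:i + len(new_balance)] = new_balance       # 2. patch the first record in place
    return zlib.compress(bytes(data))                # 3. recompress the whole chunk

stored = update_balance(stored, b"000250")
print(f"{len(chunk)} bytes raw -> {len(stored)} bytes stored after one small update")
```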

With these technologies you can find ways to help your company get better utilization out of its storage.  Even though we are all looking for that new shiny object, it’s actually tried and true technology that is going to help us this year.  Maybe 2014 will be the year of the shiny-object-based storage.


DRAM vs Flash who wins?

October 6, 2012 Leave a comment

I have spent most of the day looking over the products from TMS (Texas Memory Systems) that IBM just acquired. One of the questions I have always wondered about is how to map performance back to a certain technology.  When dealing with DRAM and flash devices there seem to be trade-offs on each. The first that comes to mind is that DRAM requires some sort of battery backup, as it will lose its data contents when power is lost. Most DRAM cards and appliances have this under control, either by destaging to SSD or with a battery attached to the I/O card that gives the DRAM time to hold its contents until power is restored.
DRAM is typically faster than its flash cousin, as well as more reliable and durable. There is usually less controller latency because there is no wear leveling or garbage collection to manage. DRAM is still more expensive than flash and has the problem of needing power all the time.
When looking at systems to decide which solution fits your environment, it comes down to price and I/O. The DRAM solutions are usually smaller in size but can push more I/O. For example, the TMS 420 is 256GB of storage in a 4U frame that can push 600,000 IOPS. Not bad if you need 256GB of really fast space. This could be used for very high transaction volumes.  It can be deployed alongside traditional storage and used for the frequently accessed database tables and indexes, while lower-I/O tables can be thrown over to the spinning-disk side.
In comparison, the TMS 820 flash array delivers a whopping 24TB in a 1U space and can push a meek 450,000 IOPS. This is somewhat incredible, as the footprint is small and dense but still gives you the punch needed to beef up parts of your infrastructure. I started running the numbers to compare this with, say, a V7000 with all SSD drives, and we can’t come close.  You could virtualize the system under the SVC code (included in the V7000) and use the Easy Tier function to move hot data to and from the TMS array, which gives you the performance needed. I see why IBM decided to acquire TMS now.
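Using only the numbers quoted above (not fresh datasheet figures), the density story is easy to put side by side:

```python
# Quick density comparison built from the figures quoted in this post.
systems = [
    # (name, capacity in GB, IOPS, rack units)
    ("TMS 420 (DRAM)", 256, 600_000, 4),
    ("TMS 820 (Flash)", 24_000, 450_000, 1),
]

for name, gb, iops, ru in systems:
    print(f"{name}: {iops / ru:,.0f} IOPS per U, "
          f"{gb / ru:,.0f} GB per U, {iops / gb:,.1f} IOPS per GB")
```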
So who wins in a DRAM versus flash discussion? The vendors, of course; they are the ones going after this market aggressively. I think most consumers are trying to figure out whether it is worth spending the money to move a database from 1500 disks at sub-20 ms response to 200 larger disks plus a DRAM or flash device that keeps the same latency. As an architect, I want to keep in mind how much space and environmentals all of those disks eat up, and having an alternative, even if it costs more up front, is appealing.

IBM V7000 sets the bar with SPC1 all Flash results

June 18, 2012 9 comments

Last week IBM published some very interesting results on the Storage Performance Council website.  Using the SPC-1 test method, IBM raised more than just a few eyebrows.  IBM configured 18 200GB SSDs in a mirrored configuration, which attained 120k IOPS with less than 5 ms response time even at 100% load.

IBM used an all-SSD array that fit into a single 2U space and mirrored the drives in RAID 1 fashion. These were all put into a pool and 8 volumes were carved out for the AIX server. The previous SPC-1 run IBM performed used spinning media and virtualized the systems behind the SAN Volume Controller virtualization engine. That run gave IBM the top spot in the SPC-1 benchmark with over 520,000 IOPS, at a whopping $6.92 per IOP.

This has been compared to the Oracle/Sun ZFS 7420, which published 137k IOPS late last year.  Matched against the IBM V7000 results, we see the V7000 came in at $181,029, roughly $1.50 per IOP, compared to the Sun 7420 at $409,933 and $2.99 per IOP. The V7000 was able to perform 88% of the work at 44% of the price. Oracle has not come back with any type of statement, but I can only bet they are seething over this and will be in the labs trying to find a way to lower their cost while still performing.
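The price/performance claim holds up with simple arithmetic on the published figures:

```python
# Sanity check on the $/IOP math using the published SPC-1 figures above.
v7000_price, v7000_iops = 181_029, 120_000
sun7420_price, sun7420_iops = 409_933, 137_000

print(f"V7000:    ${v7000_price / v7000_iops:.2f} per IOP")
print(f"Sun 7420: ${sun7420_price / sun7420_iops:.2f} per IOP")
print(f"V7000 does {v7000_iops / sun7420_iops:.0%} of the work "
      f"at {v7000_price / sun7420_price:.0%} of the price")
```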

The SPC-1 benchmark tests the performance of a storage array doing mainly random I/O in a business environment. It has been called out for not being a very realistic workload, as it tends to cater to higher-end, cache-heavy systems.  Now we have a midrange box that not only holds the most-IOPS trophy but also wins in the ‘best bang for your buck’ category.

I would have liked to see what the results would have been with the addition of compression to the software feature stack. This is going to be a game changer for IBM, as the inline compression of block data is way beyond what other vendors are doing.  Couple that with the virtualization engine, and now I can compress data on anyone’s storage array. The V7000 is definitely becoming a smarter storage solution for a growing storage market.

 

HDS Still Looking up at SVC Bar

February 17, 2012 Leave a comment

This week has been so busy I have not had time to sit down and put together some thoughts around a topic that I think needs time to explain.  There was big news coming from HDS this week around their new non-disruptive upgrade software for the VSP.  It looks like you can now do a non-disruptive upgrade when migrating from older Hitachi virtualization controllers.  Seeing that they have the USP, NSC 55, USP V and USP VM, I can see why they would need something to help people get from one platform to the next.

One has to ask why there are so many different and separate systems. Could it be due to growth in separate lines of business, or just acquisitions that are now being integrated into the VSP line?  HDS has a history of telling customers, “I’m sorry, in order to get the performance you need, you have to bring in the forklift and buy this new, shinier widget.”

But just because you can migrate doesn’t mean you can keep those older models running alongside the VSP.  HDS in the past has not allowed mixing different versions of the virtualization controller (USP), or mixing the USP and VSP.  The IBM SAN Volume Controller (SVC) allows various models of the hardware engine to be mixed as long as they run the same version of software.

A couple of things I have noticed over the years about the HDS VSP.  The system does not scale in throughput beyond the capabilities of the single VSP (or USP) storage array that hosts the virtualized environment.  Even though HDS will claim the VSP can scale out to two control chassis, it is really a single global cache system with an extended switched fabric.  The IBM SVC has no such architectural limitation in scale-out clusters, as shown in the latest SPC benchmark with a system running 8 nodes in a cluster.

I also have a problem with HDS requiring all volumes that use the SSD drives to be thin provisioned.  In addition, they require external storage to be configured as the lowest tier.  This doesn’t really come into play unless you want to do Dynamic Tiering between two external storage resources.  IBM SVC can run the Easy Tier process between internal and external storage with both thin and thick provisioned volumes.

When I started really thinking about customers who use the Texas Memory RamSan or Violin Memory 3200, and how SVC could bring more intelligence to how data moves in and out of those systems, it made the VSP look like HD DVD while the SVC is more Blu-ray.

I also have a customer who wants to do split I/O group clustering that would enable active/active data collaboration over a distance of roughly 300km.  With quorum disks, the split I/O group can be used for automated disaster recovery with an RPO and RTO of zero. The USP and VSP don’t allow for this capability.
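For a sense of what 300km adds to every synchronous write, a little physics goes a long way; light in fibre covers roughly 200km per millisecond, and switch and protocol overhead are ignored here:

```python
# Rough added latency for synchronous mirroring over 300 km of fibre.
# Only speed-of-light propagation is counted; everything else is ignored.

DISTANCE_KM = 300
KM_PER_MS = 200  # ~200 km per millisecond for light in optical fibre

one_way_ms = DISTANCE_KM / KM_PER_MS
round_trip_ms = 2 * one_way_ms
print(f"one-way propagation:  ~{one_way_ms:.1f} ms")
print(f"round trip per write: ~{round_trip_ms:.1f} ms added to each synchronous I/O")
```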

Now, not everything is bad. I did read on Claus Mikkelsen’s blog that the VSP supports the mainframe FICON protocol. He goes on to say that the VSP is the only platform that supports virtualization for the mainframe.  That might be true, but why do mainframe customers want their storage virtualized?  It’s not like they want the ERP solution that runs their multi-billion dollar business sitting on the same disk the geek down the hall is using as a Quake server. If you are running a mainframe, it’s probably best to keep your data on the proven DS8800.  Nuf said.

I do applaud HDS for the effort and wish they would spend some more time trying to get up to the same standard as the SVC.  With this latest pass they fall a little short, but maybe they can spend that time watching videos on their HD DVD player.

 
