Archive

Posts Tagged ‘V7000’

DRAM vs. Flash: who wins?

October 6, 2012

I have spent most of the day looking over the products from TMS (Texas Memory Systems) that IBM just acquired. One question I have always wondered about is how to map performance back to a particular technology. When dealing with DRAM and Flash devices there seem to be trade-offs with each. The first that comes to mind is that DRAM requires some sort of battery backup, as it loses its contents when power is lost. Most DRAM cards and appliances have this under control, either by destaging to SSD or by attaching a battery to the IO card that lets the DRAM hold its contents until power is restored.
DRAM is typically faster than its flash cousin, as well as more reliable and durable. There is usually less controller latency, because DRAM has none of the complexity of wear leveling and garbage collection. DRAM is still more expensive than Flash, and it has the problem of needing power all the time.
When looking at systems to decide which solution fits your environment, it comes down to price and IO. The DRAM solutions are usually smaller in capacity but can push more IO. For example, the TMS 420 is 256GB of storage in a 4U frame that can push 600,000 IOPS. Not bad if you need 256GB of really fast space for very high transaction volumes. It can be deployed alongside traditional storage and used for the most frequently hit database tables and indexes, while lower-IO tables can be thrown over to the spinning hard disk side.
In comparison, the TMS 820 Flash array delivers a whopping 24TB in a 1U space and can push a meek 450,000 IOPS. This is somewhat incredible: the footprint is small and dense but still gives you the punch needed to beef up parts of your infrastructure. I started running the numbers to compare this with, say, a V7000 full of SSD drives, and it can't come close. You could virtualize the system under the SVC code (included in the V7000) and use the Easy Tier function to move hot data to and from the TMS array, which gives you the performance needed. I see why IBM decided to acquire TMS now.
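To put the two boxes side by side, here is a quick back-of-the-envelope sketch in Python. The figures are just the published spec numbers quoted above, nothing I have measured myself:

# Density comparison of the two TMS boxes discussed above.
# Figures are the published capacity / rack space / IOPS numbers.
systems = {
    "TMS 420 (DRAM)":  {"gb": 256,    "rack_units": 4, "iops": 600_000},
    "TMS 820 (Flash)": {"gb": 24_000, "rack_units": 1, "iops": 450_000},
}

for name, s in systems.items():
    print(f"{name}: {s['iops'] / s['rack_units']:,.0f} IOPS per U, "
          f"{s['gb'] / s['rack_units']:,.0f} GB per U")

# TMS 420 (DRAM): 150,000 IOPS per U, 64 GB per U
# TMS 820 (Flash): 450,000 IOPS per U, 24,000 GB per U

Per rack unit the flash box actually wins on both counts; the DRAM box wins on raw IOPS per system and on latency, which is the whole trade-off in a nutshell.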
So who wins in a DRAM versus Flash discussion? The vendors, of course; they are the ones going after this market aggressively. I think most consumers are trying to figure out whether it's worth the money to move a database from 1,500 disks at sub-20 ms response to 200 larger disks plus a DRAM or Flash device that keeps the same latency. As an architect I want to keep in mind how much space and environmentals all of those disks eat up, so having an alternative, even if it costs more up front, is appealing.


IBM V7000 sets the bar with SPC-1 all-Flash results

June 18, 2012

Last week IBM published some very interesting results on the Storage Performance Council website. Using the SPC-1 test method, IBM raised more than just a few eyebrows. IBM configured eighteen 200GB SSDs in a mirrored configuration, which attained 120k IOPS with less than 5 ms response time even at 100% load.

IBM used an all-SSD array that fit into a single 2U space, with the drives mirrored in RAID 1 fashion. These were all put into a pool, and 8 volumes were carved out for the AIX server. The previous SPC-1 run IBM performed used spinning media and virtualized the systems using the SAN Volume Controller virtualization engine. That run gave IBM the top spot in the SPC-1 benchmark with over 520,000 IOPS, at a whopping $6.92 per IOP.

This has been compared to the Oracle/Sun ZFS 7420, which published 137k IOPS late last year. When matched against the Sun 7420's $409,933 and $2.99 per IOP, the V7000 came in at $181,029 and roughly $1.50 per IOP. The V7000 was able to perform 88% of the work at 44% of the price. Oracle has not come back with any type of statement, but I can only bet they are seething over this and will be in the labs trying to find a way to lower their cost while still performing.
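Those percentages are easy to verify from the published figures; a quick sketch:

# Sanity-checking the price/performance claims from the two SPC-1 submissions.
v7000_iops, v7000_price = 120_000, 181_029
sun_iops, sun_price = 137_000, 409_933

print(f"V7000:    ${v7000_price / v7000_iops:.2f} per IOP")  # ~$1.51
print(f"Sun 7420: ${sun_price / sun_iops:.2f} per IOP")      # ~$2.99
print(f"Relative work:  {v7000_iops / sun_iops:.0%}")        # ~88%
print(f"Relative price: {v7000_price / sun_price:.0%}")      # ~44%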

The SPC-1 benchmark tests the performance of a storage array doing mainly random I/O in a business environment. It has been called out for not being a very realistic workload, as it tends to cater to higher-end, cache-heavy systems. Now we have a mid-range box that not only takes the most-IOPS trophy but also wins in the 'Best Bang for Your Buck' category.

I would have liked to see what the results would have been with compression added to the software feature stack. This is going to be a game changer for IBM, as the inline compression of block data is well beyond what other vendors are doing. Couple that with the virtualization engine, and now I can compress data on anyone's storage array. The V7000 is definitely becoming a smarter storage solution for a growing storage market.
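For readers unfamiliar with the idea, here is a toy sketch of what 'inline' compression means: the block is compressed on the write path before it ever reaches the back end. This is plain zlib in Python purely for illustration; it is not IBM's Real-time Compression engine, which uses its own algorithms:

import zlib

def write_block(block: bytes) -> bytes:
    """Compress a block on the write path, before it hits back-end disk."""
    compressed = zlib.compress(block, 6)
    # Only keep the compressed form if it actually saves space.
    return compressed if len(compressed) < len(block) else block

# A repetitive 4KB database page compresses well; random data would not.
page = b"customer_row_padding" * 205  # roughly 4KB of repetitive data
stored = write_block(page)
print(f"{len(page)} bytes in, {len(stored)} bytes stored")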


Got VNX? IBM can virtualize that.

March 5, 2012

Anytime Anthony puts the word ‘wags’ in a post, it makes me smile.

When IBM brought out the SAN Volume Controller (SVC) in 2003, the goal was clear: support as many storage vendors and products as possible. Since then IBM has put a huge ongoing effort into interoperation testing, which has allowed them to continue expanding the SVC support matrix, making it one of the most comprehensive in the industry. When the Storwize V7000 was released in 2010 it was able to leverage that testing heritage, allowing it to have an amazingly deep interoperation matrix on launch date. It almost felt like cheating.

However I recently got challenged on this with a simple question: Where is the VNX? If you check out the Supported Hardware list for SVC V6.3 or Storwize V7000 V6.3 you can find the Clariion up to a CX4-960, but no VNX.

The short answer is that while the VNX is not listed there yet, IBM are actively supporting customers using VNX virtualized behind SVC and Storwize V7000. If you have a VNX 5100, 5300, 5500, 5700 or 7500 then ask your IBM pre-sales Technical Support to open an Interoperation Support Request. The majority are being approved very quickly. The official support sites that I referenced above will be updated soon (but don't wait, if you need support, ask for it). IBM is working methodically with EMC to be certain that when a general publication of support is released for VNX (soon!), both companies will agree with the published details.

Read more here

http://aussiestorageblog.wordpress.com/2012/03/05/got-vnx-ibm-can-virtualize-that/


New V7000 Unified Code Release Today 1.3.0.3

February 28, 2012

IBM released new code today for the V7000 and V7kU platforms. Here is a snippet of the changes from the release notes:

The following functions are available for use in production environments:
- Use of Asynchronous file replication between Storwize V7000 Unified systems
- Use of Network Data Management Protocol (NDMP) for file backup/restore functions
- Use of the GPFS Information Lifecycle Management functions, for file placement, migration and deletion on internal or external disk
- Use of IBM Tivoli Storage Manager for Space Management as a Hierarchical Storage Manager (HSM), for migrating data to an external TSM server

New features in Storwize V7000 6.3.0.1:
- Support for multi-session iSCSI host attachment
- Language support for Brazilian Portuguese, French, German, Italian, Japanese, Korean, Spanish, Turkish, Simplified Chinese and Traditional Chinese

For more details on this release click here!


When to Gateway and when to really virtualize

January 5, 2012

For the last six years IBM has been selling the N series gateway, and it has been a great tool for adding file-based protocols to traditional block storage. A gateway takes LUNs from SAN storage and overlays its own operating system on top of them. One of the 'gotchas' with the gateway is that the storage has to be net new, meaning it cannot take an existing LUN that already holds data and present it to another device.


Traditionally the gateway was placed in front of older storage to refit the old technology with new features. In the case of N series, a gateway can add features like snapshots, deduplication and replication. In the past few years, we have added the option to use both external and internal disk on a gateway system. The only caveat is that you have to order the gateway license when the system is initially ordered; a filer cannot be changed into a gateway system later.
Another solution we see in the field is when a customer is looking to purchase a new system where most of the requirement is SAN-based and only a small portion is NAS. Putting a gateway in front of an XIV became a very popular solution for many years and still is today. IBM also released the SONAS platform, which can be used as a NAS gateway in front of the V7000, SVC and XIV.


I have seen some architects who wanted to use a gateway in an all-NAS solution with new disks. This only complicates the solution by adding switches and multiple operating systems.
If we look at virtualization of storage, the gold standard has been the SAN Volume Controller (SVC). This system can take new or existing LUNs from other storage systems and present them as LUNs to another host. The data can be moved from one storage system to another without taking the LUN offline. The IBM V7000 has the same virtualization feature, as the code base for both systems is the same. The cool feature IBM has added to the V7000 is the ability to serve both NAS and SAN protocols, which puts it in the same space as the EMC VNX and Netapp FAS systems.
The virtualization in the SVC code is somewhat similar to the gateway code in the N series: both can virtualize a LUN from another storage platform. If you need to keep the data on the older system intact, then an SVC device is needed, and the movement of data between storage systems is much easier with the SVC. On the other hand, the N series gateway has more functionality than the SVC, such as deduplication and easy replication.
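As a rough sketch of what 'keeping the data intact' looks like on the SVC or V7000 command line: an existing LUN is imported as an image-mode volume and can later be migrated to a new pool while the host stays online. The object names below (mdisk8, Pool_legacy, Pool_new, legacy_vol) are made-up placeholders, so check the command reference for your code level before relying on exact syntax:

# List the back-end LUNs the SVC can see (they show up as unmanaged MDisks)
svcinfo lsmdisk

# Import an existing LUN, data intact, as an image-mode volume
svctask mkvdisk -mdiskgrp Pool_legacy -iogrp 0 -vtype image -mdisk mdisk8 -name legacy_vol

# Later, migrate that volume to a new pool without taking it offline
svctask migratevdisk -vdisk legacy_vol -mdiskgrp Pool_new
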
Finally, the SVC code was built by IBM to sit on top of complicated SAN environments. Its robust nature is complemented by an easier-to-use GUI borrowed from the XIV platform. The N series gateway is somewhat easier to set up but is not meant for large, complicated SAN environments.
Both systems are good at what they do, and people try to compare them in the same manner. I would tell them: yes, they both virtualize storage, but they are used in different ways.


Out with the old and in with the new!

December 29, 2011

As is customary for many bloggers at the end of the year, we take time to reflect on the past year and make some predictions for the next. This is always fun because I get a chance to see what people predicted for this year and who was right or wrong. Some people are more right than wrong, but it's fun to guess at what will happen next year nonetheless.

2011 was a great year for storage and IT as a whole.  A couple of highlights I think were important points this year:

  • SSD pricing dropped significantly, to approximately $3 per GB. With the flooding in Thailand, the price for spinning drives went sky high back in October. Since then, prices have started to decline, but not as quickly as the SSDs.
  • Tape is still around and is giving people options. There are only a handful of vendors that even like to talk about tape as another storage tier. Those who do have it in the bag of options leveraged it as something the others cannot provide in a full solution.
  • Archive and backup were debated, debated again, and hopefully the marketing people have learned the difference. I think there are times where backups can be archives, but not the other way around. There are people out there who back up their archives, but that is a whole blog article in itself.
  • Mobile apps were plentiful: everything from Fruit Ninja to Facebook to business apps like Quick Office flooded the marketplace. Not only were people developing for the iPad, iPod and iPoop platforms, but we saw the rise of the Droid (Google) and Blackberry (RIM), and even Windows is now reporting over 50,000 apps available for download.
  • Clouds got a little more defined, and people are starting to see the benefits of having the right 'cloud' for the right job. This time last year the future of clouds was a little cloudier than it looks today. The funny thing I believe most people realized this year is that we have been doing 'cloud' in IT for a long time, just under different names.
  • Social media was the biggest part of IT in 2011, in my opinion. I saw a fundamental shift in how people got information and how that influenced their decisions. From CEOs blogging to Charlie Sheen going up in flames on Twitter, the warlocks and trolls out there were craving something more. Social media is the new dot-com era, and now we wait for the bubble to burst.

Now for the good part.  Here is what I think 2012 will bring to the IT / Storage world.  Note: If any of these do or don’t come true I will deny any involvement.

  • Big Data moves into full swing. If you think you heard a lot of people talking about Big Data in 2011, then prepare yourself for the avalanche of bloggers, researchers, analysts, marketers, sales people, you name it, bombarding you with not just what Big Data is but what to do with it. I suspect technologies like Hadoop and analytics will drive products more than typical structured data storage.
  • Protection of remote data on mobile devices like tablets and phones will be a bigger concern. With the rise of these devices, people have started to move away from the traditional desktop or even laptop. There is already an uptick in the number of viruses in the wild that are designed for mobile users. The more data is generated and stored on these devices, the higher the risk companies face of losing it. I predict companies will either rely on public clouds like Amazon S3 or Dropbox to help protect against data loss, or go private and force users to back their data up to a central repository.
  • Software will continue to drive innovation over hardware. Virtualization was a big part of 2011, and it will only continue to grow in the datacenter. Storage systems are, for the most part, made up of the same parts from the same suppliers; it is the software that drives the efficiency, performance and utilization of the hardware. The storage vendors who can get past the hardware speeds-and-feeds conversation will show you how their solution solves the problems of today. There are still some customers who want to know how many 15K RPM drives are in the solution, and I think there will always be gear-heads who want to know these things, but they are getting fewer.
  • Scale-out grid clustering with a global namespace will dominate the new dynamic of how to deploy units. Netapp should be releasing the long-anticipated Cluster-Mode of Data Ontap sometime in 2012, even though it took them nine years to get a real product to market for general consumption. I hear not everyone will be getting carte blanche on the download, so be ready to be told it's still not prime time. Other vendors like EMC, IBM and HP will all be touting their own version of scale-out / grid / you-name-it as the best way to drive up the efficiency numbers. Do your research and make sure to compare apples to apples; just because they all say the same thing doesn't mean it's done the same way.
  • Tiering will be on everyone's mind. Even with the price of SSDs coming down, they are still a bit pricey for an SSD-only solution. Companies like Violin Memory and Fusion IO will help keep I/O up at the server instead of hitting the storage system. Automatic file-based tiering, like IBM's Active Cloud Engine, will use policies to determine where data is written and move it down the $/GB slope as it ages. IBM also has a great automatic tiering solution called Easy Tier, available on the DS8000, SVC and V7000, that takes the guesswork out of when to put data on faster disks.
  • Consolidation of smaller systems into fewer islands of storage will be a key initiative in 2012. As the economy flatlines on the global operating table, IT budgets will be looking for ways to cut line items out of the cost of running the shop. This is a continuation from 2011, but with the push of 'Big Data', companies will be asked to take on more data demand with the same budget as last year. As customers pool their resources to meet these demands, they will consolidate their storage platforms for better utilization. Look for data migration tools like IBM's SVC to help make these transitions easier.

Finally, I wish you all the best for a new year full of exciting challenges. Happy New Year!