Archive

Posts Tagged ‘Netapp’

When to Gateway and when to really virtualize

January 5, 2012

For the last six years IBM has been selling the N series gateway, and it has been a great tool for adding file-based protocols to traditional block storage.  A gateway takes LUNs from the SAN storage and overlays its own operating system.  One of the ‘gotchas’ with the gateway is that the storage has to be net new, meaning it cannot take an existing LUN that already has data and present it to another device.  The little sketch below illustrates the point.
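
To make the ‘net new’ gotcha concrete, here is a toy sketch in Python.  It is purely illustrative, not Data ONTAP code, and every name in it is made up; the point is simply that a gateway lays its own format over a LUN, so anything already on that LUN would be lost.

    class BackendLun:
        """A LUN presented from the SAN array up to the gateway."""
        def __init__(self, size_gb, has_data=False):
            self.size_gb = size_gb
            self.has_data = has_data  # does a host already have data here?

    class Gateway:
        """Toy model: the gateway overlays its own OS on raw LUNs."""
        def __init__(self):
            self.aggregates = []

        def take_ownership(self, lun):
            if lun.has_data:
                # The gotcha: a gateway cannot preserve existing data.
                raise ValueError("LUN must be net new; existing data would be destroyed")
            lun.has_data = True  # the LUN now holds the gateway's own format
            self.aggregates.append(lun)

    gw = Gateway()
    gw.take_ownership(BackendLun(500))  # fine: an empty, net-new LUN
    try:
        gw.take_ownership(BackendLun(500, has_data=True))
    except ValueError as err:
        print(err)  # LUN must be net new; existing data would be destroyed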


Traditionally, the gateway was put in front of older storage to refit the old technology with new features.  In the case of N series, a gateway can add features like snapshots, deduplication, and replication.  In the past few years, we have added the option to use both external and internal disk with a gateway system.  The only caveat to this solution is that the gateway license has to be ordered when the system is initially ordered; a filer cannot be changed into a gateway system.
Another solution we see in the field is when a customer is looking to purchase a new system where most of the requirement is SAN-based and only a small portion is NAS.  Putting a gateway in front of an XIV became a very popular solution for many years and still is today.  IBM also released the SONAS platform, which can be used as a NAS gateway in front of the V7000, SVC, and XIV.


I have seen some architects who wanted to use a gateway in an all-NAS solution with new disks.  This only complicates the solution by adding switches and multiple operating systems.
If we look at storage virtualization, the gold standard has been the SAN Volume Controller (SVC).  This system can take new or existing LUNs from other storage systems and present them as LUNs to another host.  The data can be moved from one storage system to another without taking the LUN offline.  The IBM V7000 also has this virtualization feature, as the code base for both systems is the same.  The cool feature IBM has added to the V7000 is the ability to do both NAS and SAN protocols, which puts it in the same space as the EMC VNX and Netapp FAS systems.
The virtualization in the SVC code is somewhat similar to the gateway code in the N series: both can virtualize a LUN from another storage platform.  If you need to keep the data on the older system intact, though, an SVC is what you need, and the movement of data between storage systems is much easier with the SVC.  On the other hand, the N series gateway has more functionality, such as deduplication and easy replication, than the SVC.  The sketch below shows the difference that matters most.
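
A companion sketch, again hypothetical Python rather than real SVC code, shows what makes the virtualizer different: an existing LUN can be imported with its data intact, and its contents can later be migrated to another array while the volume stays online.

    class ManagedDisk:
        def __init__(self, name, blocks):
            self.name = name
            self.blocks = blocks  # existing data is kept intact on import

    class VirtualVolume:
        """Host-facing volume that simply points at backend blocks."""
        def __init__(self, mdisk):
            self.backend = mdisk
            self.online = True

        def migrate(self, new_mdisk):
            # Extents are copied in the background; the host keeps doing I/O.
            new_mdisk.blocks = list(self.backend.blocks)
            self.backend = new_mdisk  # the volume never went offline

    old_lun = ManagedDisk("old_array_lun0", ["existing", "host", "data"])
    vol = VirtualVolume(old_lun)  # imported as-is: data preserved
    vol.migrate(ManagedDisk("new_array_lun0", []))
    print(vol.backend.name, vol.backend.blocks, vol.online)
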
Finally, the SVC code was built by IBM to sit on top of complicated SAN environments.  Its robust nature is complemented by the easier-to-use GUI borrowed from the XIV platform.  The N series gateway is somewhat easier to set up but is not meant for large, complicated SAN environments.
Both systems are good at what they do, yet people try to compare them in the same manner.  I tell them: yes, they both virtualize storage, but they are used in different ways.


Out with the old and in with the new!

December 29, 2011

As is customary for many bloggers, at the end of the year we take time to reflect on the past year and make some predictions for the next.  This is always fun because I get a chance to see what people predicted for this year and who was right or wrong.  Some people are more right than wrong, but it’s fun to guess at what will happen next year nonetheless.

2011 was a great year for storage and IT as a whole.  A few highlights I think were important points this year:

  • SSD pricing dropped significantly, to approximately $3 per GB.  With the flooding in Thailand, prices for spinning drives went sky high back in October.  Since then, prices have started to decline, but not as quickly as SSD prices.
  • Tape is still around and is giving people options.  Only a handful of vendors even like to talk about tape as another storage tier.  Those who do have it in their bag of options leveraged it as something the others cannot provide in a full solution.
  • Archive and backup were debated, debated again, and hopefully the marketing people have learned the difference.  I think there are times where backups can be archives, but not the other way around.  There are people out there who back up their archives, but that is a whole blog article in itself.
  • Mobile apps were plentiful: everything from Fruit Ninja to Facebook to business apps like Quickoffice flooded the marketplace.  Not only were people developing for the iPad, iPod, and iPhone platforms, but we saw the rise of Android (Google) and Blackberry (RIM); even Windows is now reporting over 50,000 apps available for download.
  • Clouds got a little more defined, and people are starting to see the benefits of having the right ‘cloud’ for the right job.  This time last year the future of clouds was a little cloudier than it is today.  The funny thing I believe most people realized this year is that we have been doing cloud in IT for a long time, just under different names.
  • Social media was the biggest part of IT in 2011, in my opinion.  I saw a fundamental shift in how people got information and how that influenced their decisions.  From CEOs blogging to Charlie Sheen going up in flames on Twitter, the warlocks and trolls out there were craving something more.  Social media is the new dot-com era, and now we wait for the bubble to burst.

Now for the good part.  Here is what I think 2012 will bring to the IT / Storage world.  Note: If any of these do or don’t come true I will deny any involvement.

  • Big Data moves into full swing.  If you think you heard a lot of people talking about Big Data in 2011, then prepare yourself for the avalanche of bloggers, researchers, analysts, marketers, sales people, you name it, bombarding you with not just what Big Data is but what to do with it.  I suspect technologies like Hadoop and analytics will drive products more than typical structured data storage.
  • Protection of remote data on mobile devices like tablets and phones will be a bigger concern.  With the rise of these devices, people have started to move away from the traditional desktop or even laptop.  There is already an uptick in the number of viruses in the wild designed for mobile users.  The more data that is generated and stored on these devices, the higher the risk companies face of losing data.  I predict companies will either rely on public clouds like Amazon S3 or Dropbox to help protect against data loss, or go private and force users to back their data up to a central repository.
  • Software will continue to drive innovation over hardware.  Virtualization was a big part of 2011, and it will only continue to grow in the datacenter.  Storage systems are, for the most part, made up of the same parts from the same suppliers; it’s the software that drives the efficiency, performance, and utilization of the hardware.  The storage vendors who can get beyond hardware speeds and feeds will show you how their solution solves the problems of today.  There are still some customers who want to know how many 15K RPM drives are in the solution, and I think there will always be gear-heads who want to know these things, but they are getting fewer.
  • Scale-out grid clustering with a global namespace will dominate the new dynamic of how to deploy units (a toy illustration of the global-namespace idea follows this list).  Netapp should be releasing the long-anticipated Cluster-Mode of Data Ontap sometime in 2012.  I hear not everyone will be getting carte blanche on the download, so be ready to be told it’s still not prime time, even though it took Netapp nine years to get a real product to market for general consumption.  Other vendors like EMC, IBM, and HP will all be touting their own version of scale-out / grid / you-name-it as the best way to drive up the efficiency numbers.  Do your research and make sure to compare apples to apples; just because they all say the same thing doesn’t mean it’s done the same way.
  • Tiering will be on everyone’s mind.  Even with the price of SSDs coming down, they are still a bit pricey for an SSD-only solution.  Companies like Violin Memory and Fusion-io will help keep I/O up at the server instead of hitting the storage system.  Automatic file-based tiering, like the Active Cloud Engine from IBM, will use policies to decide where data is written and move it down the $/GB slope as it ages (see the policy sketch after this list).  IBM also has a great automatic tiering solution called Easy Tier, on the DS8000, SVC, and V7000, that takes the guesswork out of when to put data on faster disks.
  • Consolidation of smaller systems into fewer islands of storage will be a key initiative in 2012.  As the economy flatlines on the global operating table, IT budgets will be looking for ways to cut line items out of the cost of running the shop.  This is a continuation from 2011, but with the push of ‘Big Data’, companies will be asked to take on more data demand with the same budget as last year.  As customers pool their resources to meet these demands, they will pool their storage platforms together for better utilization.  Look for data migration tools like SVC from IBM to help make these transitions easier.
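
On the scale-out prediction above: here is the toy illustration of a global namespace promised in that bullet.  It is vendor-neutral Python and entirely made up; the idea is simply that clients see one directory tree while the files actually live on different nodes of the cluster.

    # One path space for the clients; the cluster decides which node serves it.
    cluster = {
        "/projects/alpha/report.doc": "node1",
        "/projects/beta/data.csv": "node3",
        "/home/jsmith/notes.txt": "node2",
    }

    def open_file(path):
        node = cluster[path]  # clients never have to name a node themselves
        return "%s served from %s" % (path, node)

    print(open_file("/projects/beta/data.csv"))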

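And on the tiering prediction: a minimal policy sketch, with tier names and age thresholds invented for illustration.  The point is that a policy engine walks data down the $/GB slope as it ages.

    # Toy age-based placement policy: (max age in days, tier), cheapest last.
    TIERS = [
        (30, "ssd"),
        (180, "fc_15k"),
        (720, "sata"),
        (None, "tape"),  # catch-all: the oldest data lands on the cheapest tier
    ]

    def place(age_days):
        """Return the tier a file of the given age should live on."""
        for max_age, tier in TIERS:
            if max_age is None or age_days <= max_age:
                return tier

    for age in (5, 90, 400, 2000):
        print("%4d days old -> %s" % (age, place(age)))
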
Finally, I wish you all the best for a new year full of exciting challenges.  Happy New Year!

Top 10 Reasons clients choose to go with IBM N series

December 22, 2011



Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from Netapp.  IBM has an OEM agreement with Netapp and rebrands the FAS and V-Series as its N series product line.  They are both made at the same plant, and the only difference between them is the front bezel.  You can even take a Netapp bezel off and stick it on an N series box, and it fits exactly.

 

The software is exactly the same; all we change is the logos and the readme files.  The entire functionality of the product is identical, and IBM does not add or take away any of the features built into the systems.  The only difference is that it takes IBM about 90 days after Netapp releases a product to get it posted online and to change the necessary documents.

 

Support for N series is done both at IBM and at Netapp.  Much like our other OEM partners, Netapp stands behind IBM as the developer while IBM handles the issues.  Customers still call the same 1.800.IBM.SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now.  IBM actually has lower turnover than Netapp in its support division and has won awards for providing top-notch support.  The call-home features that most people are used to still go to Netapp via IBM servers.

 

 

10.  The IBM customer engineer (CE) who is working with you today will be the same person who helps you with your IBM N series system.

9.  The IBM GBS team can provide consultation, installation, and even administration of your environment.

8.  IBM is able to provide financing for clients.

7.  When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage, and software.  This gives you one bill, one place to go if you need anything, and one support number to call.

6.  IBM has two other support offerings to help our clients.  Our Supportline offering allows customers to call in and ask installation or configuration questions.  We also have an Enhanced Technical Support (ETS) team that will assign you a personal engineer who knows everything about your environment and will provide you with everything you need: health checks to be sure the system is running optimally, updates on the latest technology, and a single point of contact in case you need to speak to someone immediately.

5.  IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2.  If your issue cannot be resolved by our Level 2 team, they have a hotline into the Netapp Top Enterprise Account team.  This is a team only a few very large Netapp accounts can afford, and we provide this support to ALL IBM N series accounts, no matter how large or small.

4.  Our support teams from different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue.  We can bring in experts who know the SAN, storage, servers, and software, all under one umbrella.  For those tough cases we assign a coordinator to make sure the client does not have to call all of these resources themselves; this person reaches out to all the teams, assigns duties, and coordinates calls with you, the customer.

3.  All IBM N series hardware and software is reviewed by an Open Source Committee that validates there are no violations, copyright infringements, or patent infringements.

2.  All IBM N series hardware and software is tested in our Tucson facility for interoperability.  We have a team of distinguished engineers who support not only N series but also other hardware and software platforms within the IBM portfolio.

1.  All IBM N series equipment comes with a standard 3-year warranty covering both hardware and software.  The warranty can be extended beyond the three years, as IBM supports equipment well beyond the normal 3-5 year life of a system.

 

When it gets down to it, customers buy because they are happy.  Since the systems are exactly the same, it comes down to what makes them happy.  For some, the Netapp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.

 

For more information about IBM N series, check out our landing page at http://www-03.ibm.com/systems/storage/network/