Archive

Posts Tagged ‘#IBMStorage’

Data Hoarding: How much is it really costing you?

March 13, 2017 Leave a comment

I have a closet in my house where I keep all kinds of computer gear. Most of it is from some fun project I was working on or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra-Wide SCSI interface for an external CD-ROM drive. Why do I keep these things in a box in a closet? Great question, and it usually comes up once a year when some family member sticks their head in there looking for a toy or a coat, or just looking to make a point.
But on more than one occasion I have had to go to the closet of 'junk' to get something that helped me complete a project: a Cat5 cable for my son's computer, an extra wireless mouse when my other one died. Yes, I could go through it all, sort it out and come up with some nice labels, but that takes time. It's just easier to close the container lid and forget about it until I realize I need something, and then it's easy enough to grab.
Now this is not a hoarding issue like those you see on TV, where people fill their house, garage, sheds and barns with all kinds of things. Those people have taken the 'collecting' business to another level, and some call them 'hoarders'. But if you watch shows like "American Pickers" on the History Channel, you will notice that most of the 'hoarders' know what they have and where it is: a metadata knowledge of their antiques.
When you look at how businesses are storing their data today, most are keeping as much as possible in production. The data is no longer serving a real purpose, but storage admins are too gun-shy to hit the delete button on it for fear of some VMware admin calling up to ask why their Windows NT 4 server is not responding. If you have tools that can move data around based on its age or last access time, then you have made a great leap toward savings. But these older ILM systems cannot handle the unstructured data growth of 2017.
Companies want to be able to create a container for the data and not have to worry whether the data is on premises, off premises, on disk or on tape. Set it and forget it is the basic rule of thumb. But this becomes difficult because data has many different values depending on who you ask. A two-year-old invoice is not as valuable to someone in Engineering as it is to the AR person who is using it as the basis for the next billing cycle.
One of the better ways to cut through the issue is to have a flexible platform that can move data from expensive flash down to tape and cloud without changing the way people access the data. If users cannot tell where their data is coming from and do not have to change the way they get to it, then why not look at putting the cold data on something low cost like tape or cloud?
This type of system can be accomplished by using the IBM Spectrum Scale platform. The file system has a global namespace across all of the different types of media and can even use the cloud as a place to store data without changing the way the end user accesses it. File movement is policy based, which lets admins move data to lower-cost tiers as it gets older/colder without having to ask users whether the data is still needed. The best part is that, because of a new licensing scheme, customers only pay the per-TB license for data that is on disk and flash. Any data that sits on tape does not contribute to the overall license cost.
For example: take 500TB of data, 100TB of which is less than 30 days old and 400TB of which is older than 30 days. If it is stored on a Spectrum Scale file system, you only pay for the 100TB sitting on disk, not the 400TB on tape. This greatly reduces the cost to store data while not taking features away from our customers.
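To make that arithmetic concrete, here is a minimal sketch in Python of the pay-for-disk-only idea. The 30-day split comes from the example above; the per-TB license price and the two-bucket data set are made-up placeholders for illustration, not IBM pricing.

    # Hypothetical illustration of age-based tiering and pay-for-disk-only
    # licensing. The per-TB price below is a placeholder, not a list price.
    AGE_THRESHOLD_DAYS = 30
    LICENSE_PRICE_PER_TB = 100.0   # placeholder $/TB for licensed capacity

    datasets = [            # (size_tb, days_since_last_access)
        (100, 10),          # hot data, stays on disk/flash
        (400, 200),         # cold data, migrated to tape by policy
    ]

    disk_tb = sum(size for size, age in datasets if age <= AGE_THRESHOLD_DAYS)
    tape_tb = sum(size for size, age in datasets if age > AGE_THRESHOLD_DAYS)

    print(f"On disk/flash (licensed): {disk_tb} TB")
    print(f"On tape (not licensed):   {tape_tb} TB")
    print(f"License cost: ${disk_tb * LICENSE_PRICE_PER_TB:,.0f}"
          f" instead of ${(disk_tb + tape_tb) * LICENSE_PRICE_PER_TB:,.0f}")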
For more great information on IBM Spectrum Scale, follow this link and catch up.

Building a Hybrid Cloud Using IBM Spectrum Scale

May 11, 2016 Leave a comment

Cloud is changing the storage business in more ways than just price per unit. It is fundamentally changing how we design our storage systems and how we deploy, protect and recover them. For the most fortunate companies, those just starting out, the cloud is an easy choice: there are no legacy systems or tried-and-true methods, because everything has always been in the 'cloud'.

For most companies that are trying to cut their storage cost while keeping some control of their storage, cloud seems to be the answer. But getting there is not an easy task, as most have seen: data has to be transferred, code has to be rewritten, and systems and processes all have to be changed just so they can report back to their CIO that they are using the cloud.

Now there are many ways to get to the cloud, but one that I am excited about uses technology originally deployed back in the late '90s. GPFS (errr, $1 in the naughty jar) Spectrum Scale is a parallel file system that can spread data across many different tiers of storage. From flash to spinning drives to tape, Scale relieves storage administration through policy-based movement of data. The movement is driven by metadata, and data is written, moved and deleted based on policies set by the storage admin.

So how does this help you get to the cloud? Glad you asked. IBM released a new plug-in for Scale that treats the cloud as another tier of storage. The tier could come from multiple cloud vendors such as IBM Cleversafe, IBM SoftLayer, Amazon S3 or a private cloud (think OpenStack). The cloud provider is attached to the cloud node over Ethernet, which allows your Scale system to either write directly to the cloud tier or move data there as it ages/cools.

[Diagram: Spectrum Scale with a cloud tier]

This will do a couple of things for you.

  1. Because we are looking at the last read date, data that is still needed but highly unlikely to be read again can be moved to the cloud automatically. If a system later needs the file/object, there is no re-coding to be done because the namespace doesn't change (see the sketch after this list).
  2. If you run out of storage and need to 'burst' out because of some monthly/yearly job, you can move data around to free up space on-prem or write directly out to the cloud.
  3. Data protection such as snapshots and backups can still take place. This is valuable to many customers: they know the data doesn't change often, and they like that they do not have to change their recovery process every time they add new technology.
  4. Cheap disaster recovery. Scale does have the ability to replicate to another system, but as these systems grow beyond multiple petabytes, replication becomes more difficult. For the most part you are going to need to recover only the most recent (~90 days) of data that runs your business. Scale has the ability to create mirrors of data pools, and one of those mirrors could be the cloud tier, where your most recent data is kept in case there is a problem in the data center.
  5. It allows you to start small and work your way into a cloud offering. Part of the problem some clients have is that they want to take on too much too quickly. Because Scale allows customers to have data in multiple clouds, you can start with a larger vendor like IBM and then, when your private cloud on OpenStack is up and running, use them both or just one. The migration is simple because both share the same namespace under the same file system, which frees the client from having to make changes on the front side of the application.
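Here is a minimal sketch, in Python, of the idea behind that last-read-date policy: the path users see never changes, only the tier recorded behind it. The 90-day threshold, the paths and the catalog structure are all assumptions made up for illustration; this is not IBM's implementation.

    # Toy model of age-based movement to a cloud tier: the namespace stays
    # the same, only the backing tier recorded for each path changes.
    from datetime import datetime, timedelta

    COOL_AFTER = timedelta(days=90)      # illustrative threshold
    catalog = {                          # path -> [tier, last_access]
        "/gpfs/projects/q1_report.csv":  ["disk", datetime(2016, 4, 20)],
        "/gpfs/projects/old_survey.seg": ["disk", datetime(2015, 6, 1)],
    }

    def apply_policy(now):
        """Move anything colder than COOL_AFTER to the cloud tier."""
        for path, entry in catalog.items():
            tier, last_access = entry
            if tier != "cloud" and now - last_access > COOL_AFTER:
                entry[0] = "cloud"
                print(f"migrated {path} -> cloud (same path for users)")

    apply_policy(datetime(2016, 5, 11))
    # Applications keep opening /gpfs/projects/old_survey.seg exactly as
    # before; only the policy knows it now lives in the cloud tier.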

Today this feature is offered as an open beta only. The release is coming soon; they are tweaking and fixing bugs before it is generally available. Here is the link to the developerWorks page that goes into more detail about the beta and how to download a VM that will let you test these features out.

http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html

I really believe this is going to help many of my customers move onto a hybrid cloud platform. Take a look at the video below to see how it can help you as well.

Cloud vs Tape, keep the kittens off your data!

February 4, 2016 2 comments

Currently, I am working with a customer on their archive data, and we are discussing which is the better medium for data that never gets read back into their environment. They have about 200TB sitting on Tier 1 that is not being accessed, ever. The crazy part is that this data is growing faster than the database being accessed by their main application.

This is starting to pop up more and more as unstructured data eats up storage systems while rarely being used. I have heard this called dark data or cold data. In this case it's frozen data.

We started looking at what it would cost them over a 5-year period to store their data on both tape and cloud. Yes, that four-letter word is still a very good option for most customers. We wanted to keep the exercise simple, so we agreed that 200TB would be the size of the data and there would be no recalls of the data. We know most cloud providers charge extra for recalls and the tape system doesn't have that extra cost, so we wanted an apples-to-apples comparison, or as close as we could get.

For the cloud we used Amazon Glacier pricing, which is about $0.007 per GB per month. Our formula for cloud:

200TB x 1,000GB per TB x $0.007 per GB per month x 60 months = $84,000

The tape side of the equation was a little trickier, but we decided to compare just the tape media and tape library. I picked a middle-of-the-road tape library and the new LTO7 media.

TS3200 tape library (street price $10,000) + 48 LTO7 tapes (@ $150 each) = $17,200

We then looked at the ability to scale and what would happen if we factored in their growth rate. They are growing at 20% annually, which translates to 40TB a year. Keeping the same platforms, what would be their 5-year cost? Cloud was:

(200TB + growth of 3.33TB per month) x 1,000GB per TB x $0.007 per GB, summed over 60 months = $125,258

Tape was calculated at:

$10,000 for the library + (396TB / 6TB LTO7 capacity) x $150 per tape = $19,900
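For anyone who wants to check the napkin math, here is a short Python sketch that reproduces both 5-year estimates using only the numbers quoted above:

    # Back-of-the-napkin cost comparison, using only the figures above.
    GB_PER_TB = 1000
    GLACIER_PER_GB_MONTH = 0.007        # $/GB/month
    MONTHS = 60

    # Flat 200TB case
    flat_cloud = 200 * GB_PER_TB * GLACIER_PER_GB_MONTH * MONTHS
    flat_tape = 10_000 + 48 * 150       # TS3200 library + 48 LTO7 cartridges

    # Growth case: 20% of 200TB per year = 40TB/year, about 3.33TB/month
    growth_cloud = sum(
        (200 + 3.33 * month) * GB_PER_TB * GLACIER_PER_GB_MONTH
        for month in range(MONTHS)
    )
    growth_tape = 10_000 + (396 // 6) * 150   # ~396TB at 6TB per LTO7 tape

    print(f"Flat 200TB:  cloud ${flat_cloud:,.0f} vs tape ${flat_tape:,}")
    print(f"With growth: cloud ${growth_cloud:,.0f} vs tape ${growth_tape:,}")
    # Prints roughly $84,000 vs $17,200 and $125,259 vs $19,900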

We all hear how cloud is so much cheaper and easier to scale, but after doing this quick back-of-the-napkin math I am not so sure. I know what some of you are saying: we didn't calculate the server costs or the 4 FTEs it takes to manage a tape system. I agree this is basic, but in this example it is a small-to-medium-sized company that is trying to invest money into getting their product off the ground. The tape library is fairly small and should be a set-it-and-forget-it type of solution. I doubt there will be much more overhead for the tape solution than for the cloud. Maybe not as cool or flashy, but for the $100,000 saved over 5 years they can go out and buy their 5-person IT staff a $100 lunch every day, all five years.

So to those who think tape is a four-letter word, that thing in the corner that no one wants to deal with, I say embrace it and squeeze the value out of it. Most IT shops still have tape and can show their financial teams how to lower their cost without putting their data at risk in the cloud.

 

IBM Bundles Spectrum SDS Licenses

January 27, 2016 Leave a comment

IBM has changed the way it goes to market with the Spectrum Storage family of software-defined storage. Since the initial re-branding of the software formerly known as Tivoli, XIV, GPFS, SVC, TPC and LTFS, the plan has been to create a portfolio of packages that help protect and store data on existing hardware or in the cloud. This lines up with how Big Blue is looking for better margins and cloud-ready everything.

These platforms, based on a heritage of IBM products, are now available as a suite where a customer can order a single per-TB license with unlimited usage across all six offerings. This allows customers to move more rapidly into the SDS environment without a complex license agreement to manage. All of the Spectrum family shares a similar look and feel, and support is all done through IBM.

Clients have to license the software only for production capacity. Since all of the software is part of the suite, clients can also test and deploy different items and mix and match as they see fit. If you need 100TB of data protection, you could have 50TB of Spectrum Protect and 50TB of Spectrum Archive. If you then need to add storage monitoring, i.e. Spectrum Control, your license count doesn't start from 0 but at 100TB. If working with IBM has taught me anything, it's that the more you buy of the same thing, the cheaper per unit it will be in the end.
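As a rough sketch of how that pooled, per-TB counting works (my own illustration of the idea, not IBM's pricing tooling):

    # Toy model of a pooled per-TB suite license: capacity is counted once
    # for the whole suite, no matter which offerings consume it.
    from collections import defaultdict

    usage_tb = defaultdict(float)

    def deploy(offering, tb):
        usage_tb[offering] += tb

    deploy("Spectrum Protect", 50)      # 50TB of backup
    deploy("Spectrum Archive", 50)      # 50TB of archive

    licensed_tb = sum(usage_tb.values())
    print(f"Suite license after deployment: {licensed_tb:.0f} TB")  # 100 TB

    # Pointing Spectrum Control at that same capacity adds no new production
    # TB, so the suite count stays at 100TB rather than starting over.
    print(f"Suite license after adding monitoring: {licensed_tb:.0f} TB")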

For more information on the Spectrum Storage Suite, go to the IBM home page here:

http://www.ibm.com/spectrumstorage

IBM Releases Slew of Storage-Related Systems and Software Features

October 3, 2012 Leave a comment

So many things to talk about but a couple of notes of interest from today:

  1. The DS8870 is a new system, not just an upgrade. IBM moved from the POWER6 server to POWER7, which should give it a huge performance bump. I heard there are some impressive SPC numbers coming soon.
  2. XIV gets a GUI improvement with the Multi-System Manager. This will help drive efficiency in managing environments with larger deployments.
  3. The V7000 Unified gets compression for file. Same story as on block, but now for file objects.

Here are links to the hardware and software announcements from today.

Hardware

 

IBM System Storage DS8870 (Machine type 2423) Models 961 and 96E with three-year warranty

 

IBM System Storage TS1060 Tape Drive offers an Ultrium 6 Tape Drive for the TS3500 Tape Library

 

IBM Virtualization Engine TS7700 supports disk-based encryption

 

IBM System Storage DS8870 (Machine type 2421) Models 961 and 96E with one-year warranty

 

IBM System Storage DS8870 (Machine type 2424) Models 961 and 96E with four-year warranty

 

IBM System Storage DS8000 series high-performance flagship – Function Authorizations for machine type 239x

 

IBM System Storage DS8870 (Machine type 2422) Models 961 and 96E with two-year warranty

 

Software

 

IBM Systems Director Standard Edition for Linux on System z, V6.3 now manages zBX blades

 

IBM Systems Director product enhancements provide tools to better manage virtual and physical networks

 

XIV management is designed to enable more effective XIV deployments into private cloud computing environments and improve multi-system management

 

IBM Storwize V7000 Unified V1.4 includes real-time compression, local authentication server support, four-way clustering, and FCOE support

 

IBM Programmable Network Controller V3.0, when used with OpenFlow-enabled switches, provides architecture for centralized and simplified networking

 

IBM SmartCloud Virtual Storage Center V5.1 offers efficient virtualization and infrastructure management to enable smarter storage

 

IBM Tivoli Storage Manager V6.4 products deliver significant enhancements to manage data protection in virtual environments

 

IBM Infoprint XT for z/OS, V3.1 provides support to transform Xerox data streams and highlight color resources for printing on AFP printers and enhances DBCS support

 

IBM Security zSecure V1.13.1 products and solutions enhance mainframe security intelligence, compliance, administration and integration

 

IBM Tivoli Storage FlashCopy Manager V3.2 extends application-aware snapshot management to IBM N series and NetApp devices and enables seamless disaster recovery

 

IBM SONAS and Paradigm Epos 4 integrated seismic processing solution guide

December 23, 2011 Leave a comment

IBM published a paper this week describing how its scale-out NAS product, SONAS, works with a software package in the seismic-processing space called Paradigm Epos 4. The report goes into detail on both the hardware and software issues surrounding the massive amounts of data associated with finding deposits of fossil fuels in the strata.

The software supports NFS mounts, which is the sweet spot of the Linux-based SONAS system. One of the biggest challenges in the oil and gas industry is the tremendous, rich amount of data.
The cost of drilling varies depending on the depth of the well, the remoteness of the location and the extra services required to get the oil or gas up to the surface. For some deepwater rigs, rates in 2010 were around $420,000 per day, and could be more on higher-performance rigs.
With so much on the line, it is very important to get accurate information quickly so that companies can avoid costly mistakes. IBM has been working in the oil and gas industry for over 50 years. We have experts not only in hardware, software and services, but we also understand the industry and how "big data" is changing it faster than others.
SONAS allows companies to have a large-scale NAS solution with a single file system spanning multiple petabytes of data. SONAS also allows data to move from faster pools to other virtualized systems and down to a tape archive. This increases ROI by keeping the most recently accessed data on the faster drives, and customers can stretch their buying cycles further because they are not spinning old data.
The other variable in this industry is that companies need to scale projects up quickly, and not always with a 1:1 ratio of performance to storage space. SONAS is able to scale both of these variables independently of one another. As new systems are brought online, disks can be added and rebalanced non-disruptively. The same can be done with the interface nodes.
More information about the testing can be found in the report here.

Categories: SONAS

Top 10 Reasons clients choose to go with IBM N series

December 22, 2011 Leave a comment


Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from NetApp. IBM has an OEM agreement with NetApp and rebrands the FAS and V-Series as its N series product line. They are both made at the same plant, and the only difference between them is the front bezel. You can even take a NetApp bezel off and stick it on an N series box and it fits exactly.

 

The software is exactly the same. All we change is the logos and readme files. The entire functionality of the product is identical: IBM does not add or take away any of the features built into the systems. The only difference is that it takes IBM about 90 days after NetApp releases a product to get it put online and change the necessary documents.

 

Support for N series is done both at IBM and NetApp. Much like our other OEM partners, NetApp stands behind IBM as the developer while IBM handles the issues. Customers still call the same 1.800.IBM.SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now. IBM actually has lower turnover than NetApp in its support division and has won awards for providing top-notch support. The call-home features that most people are used to still go to NetApp via IBM servers.

 

 

10.  The IBM customer engineer (CE) who is working with you today will be the same person who helps you with the IBM N series system.

9.  The IBM GBS team can provide consulting, installation and even administration of your environment.

8.  IBM is able to provide financing for clients.

7.  When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage and software.  This gives you one bill, one place to go to if you need anything and one support number to call.

6.  IBM has two other support offerings to help our clients. Our Supportline offering allows customers to call in and ask installation or configuration questions. We also have an Enhanced Technical Support (ETS) team that assigns a personal engineer who knows everything about your environment and will provide you with everything you need. They will help you with health checks to make sure the system is running optimally, updates on the latest technology, and a single point of contact in case you need to speak to someone immediately.

5.  IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2. If your issue cannot be resolved by our Level 2 team, they have a hotline into the NetApp Top Enterprise Account team. This is a team only a few very large NetApp accounts can afford, and we provide this support to ALL IBM N series accounts, no matter how large or small.

4.  Our support teams from different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue. We can bring in experts who know the SAN, storage, servers and software, all under one umbrella. For those tough cases we assign a coordinator to make sure the client does not have to call all of these resources themselves. This person reaches out to all the teams, assigns duties and coordinates calls with you, the customer.

3.  All IBM N series hardware and software goes through an Open Source Committee review that validates there are no violations, copyright infringements or patent infringements.

2.  All IBM N series hardware and software is tested for interoperability in our Tucson testing facility. We have a team of distinguished engineers who support not only N series but also other hardware and software platforms within the IBM portfolio.

1.  All IBM N series equipment comes with a standard 3-year warranty for both hardware and software. This warranty can be extended beyond three years, as IBM supports equipment well past the normal 3-5 year life of a system.

 

When it gets down to it, customers buy because they are happy. Since the systems are exactly the same, it comes down to what makes them happy. For some, the NetApp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.

 

For more information about IBM N series, check out our landing page on http://www-03.ibm.com/systems/storage/network/