Archive for the ‘Software Defined Storage’ Category

Data Hoarding: How much is it really costing you?

March 13, 2017

I have a closet in my house where I keep all kinds of computer gear. Most of it is left over from some fun project I was working on, or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra-Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, and one that usually comes up about once a year when some family member sticks their head in there looking for a toy or a coat, or looking to make a point.
But on more than one occasion I have had to go to that closet of ‘junk’ to get something that helped me complete a project: a Cat5 cable for my son’s computer, an extra wireless mouse when my other one died. Yes, I could go through it all, sort it out, and come up with some nice labels, but that takes time. It’s just easier to close the container lid and forget about it until I realize I need something, and then it’s easy enough to grab.
Now, this is not a hoarding issue like the ones you see on TV, where people fill their house, garage, sheds, and barns with all kinds of things. The people who show up on TV have taken the ‘collecting’ business to another level, and some call them ‘hoarders’. But if you watch shows like “American Pickers” on the History Channel, you will notice that most of these ‘hoarders’ know what they have and where it is: a metadata knowledge of their antiques.
When you look at how businesses store their data today, most are trying to keep as much of it as possible in production, including data that no longer serves a real purpose. Storage admins are too gun-shy to hit the delete button on it, for fear of some VMware admin calling up to ask why their Windows NT 4 server is not responding. If you have tools that can move data around based on age or last access time, you have made a great leap toward real savings. But the older ILM (information lifecycle management) systems cannot handle the growth of unstructured data in 2017.
Companies want to be able to create a container for the data and not have to worry about whether it is on-prem or off-prem, on disk or on tape. Set it and forget it is the basic rule of thumb. But this becomes difficult because data has many different values depending on who you ask. A two-year-old invoice is not as valuable to someone in Engineering as it is to the AR person who is using it as the basis for the next billing cycle.
One of the better ways to cut through the issue is to have a flexible platform that can move data from expensive flash down to tape and cloud without changing the way people access it. If users cannot tell where their data is coming from and do not have to change the way they get to it, then why not look at putting the cold data on something low-cost like tape or cloud?
This type of system can be built using the IBM Spectrum Scale platform. The file system has a global namespace across all of the different types of media and can even use the cloud as a place to store data, without changing the way the end user accesses it. File movement is policy-based, so admins do not have to ask users whether the data is still needed; the system simply moves it to a lower-cost tier as it gets older and colder. The best part is that, because of a new licensing scheme, customers only pay the per-TB license for data that sits on disk and flash. Any data that sits on tape does not contribute to the overall license cost.
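To make that concrete, here is a minimal sketch in plain Python of the kind of age-based placement decision such a policy encodes. Spectrum Scale actually expresses these rules in its own SQL-like policy language; the pool names and day thresholds below are illustrative assumptions, not the product's defaults.

```python
from datetime import datetime, timedelta

def target_pool(last_access, now=None):
    """Pick a storage tier based on how cold a file has gone."""
    now = now or datetime.now()
    age = now - last_access
    if age < timedelta(days=30):
        return "flash"   # hot data stays on the expensive tier
    if age < timedelta(days=180):
        return "disk"    # cooling data moves to cheaper spinning disk
    return "tape"        # cold data lands on tape or cloud

# A file untouched for 400 days would be migrated down to tape:
print(target_pool(datetime.now() - timedelta(days=400)))  # -> tape
```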
For example: take 500 TB of data, 100 TB of which is less than 30 days old and 400 TB of which is older than 30 days. If it is stored on a Spectrum Scale file system, you only have to pay for the 100 TB sitting on disk, not the 400 TB on tape. This greatly reduces the cost to store data while not taking features away from our customers.
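Back-of-the-envelope math makes the saving obvious. The $/TB price below is a made-up placeholder; only the 100/400 split and the "tape is license-free" rule come from the example above.

```python
PRICE_PER_TB = 100            # hypothetical license $/TB, for illustration
disk_tb, tape_tb = 100, 400   # the 500 TB example from above

license_everything = (disk_tb + tape_tb) * PRICE_PER_TB  # license all capacity
license_disk_only  = disk_tb * PRICE_PER_TB              # tape tier is free

print(license_everything, license_disk_only)  # 50000 vs 10000: 80% less
```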
For more great information on IBM Spectrum Scale, go to this link and catch up.

How to Save Money by Buying Dumber Flash

October 19, 2016

Here is a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. Two main reasons: (1) it increases your $/TB, and (2) it locks you into the vendor's platform. Let's dive deeper.

1. If you go out and buy an All-Flash Array (AFA) from one of the 50 vendors selling them today, you will likely see a wide spectrum, not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. As the customer, you end up comparing which AFA has this and which is missing that, and it can become an Excel pivot table from hell to manage. Vendors will keep raising the price per TB on those solutions, arguing that the extra features give you more usable storage or better data protection. But the reality is that you are paying the bills for the developers coding the next shiny feature in some basement. That added cost is passed down to the customer and increases your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform if you want a different system. This is what we call ‘stickiness’. Vendors want you to use their features more and more, so that when they raise prices or push you to upgrade, it is harder for you to look elsewhere. If you have an outage and your boss comes in and says, “I want these <insert vendor name> boxes out of here,” are you really going to reply that the whole company runs on them and it is going to take 12-18 months to get rid of them?

I bet you're thinking, “Well, I need those functions because I have to protect my data,” or “I get more usable storage because I use this feature.” But what you can do is take those functions away from the media and bring them up into a virtual storage layer above the arrays, as sketched below. This way you can move dumb storage hardware in and out as needed, based more on price and performance than on features and functionality. With the higher functionality living in the virtual layer, an AFA can be swapped out easily, and you are free to always shop for the lowest-priced system based solely on performance.
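To make the idea concrete, here is a minimal sketch (plain Python, not any vendor's real API) of a virtual layer that owns the smart features, such as snapshots and migration, while the arrays underneath stay dumb and interchangeable:

```python
# Sketch only: the virtual layer owns the "smart" features, while backends
# are dumb, swappable block stores chosen purely on price/performance.
class DumbArray:
    """Any vendor's array, reduced to what we actually need: blocks."""
    def __init__(self, name):
        self.name, self.blocks = name, {}

    def write(self, lba, data): self.blocks[lba] = data
    def read(self, lba): return self.blocks.get(lba)

class VirtualLayer:
    """Features live here, so swapping the array below costs nothing."""
    def __init__(self, backend):
        self.backend, self.snapshots = backend, []

    def write(self, lba, data): self.backend.write(lba, data)
    def read(self, lba): return self.backend.read(lba)

    def snapshot(self):  # a feature implemented above the hardware
        self.snapshots.append(dict(self.backend.blocks))

    def migrate_to(self, new_backend):  # swap vendors, hosts untouched
        for lba, data in self.backend.blocks.items():
            new_backend.write(lba, data)
        self.backend = new_backend

pool = VirtualLayer(DumbArray("vendor-A"))
pool.write(0, b"hello")
pool.snapshot()
pool.migrate_to(DumbArray("vendor-B"))  # hosts keep talking to `pool`
print(pool.read(0))                     # b'hello'
```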

Now you're thinking about the cost of licenses for this function and that feature in the virtualization layer, and how that is just moving the numbers around. Right? Wrong! With IBM Spectrum Virtualize you buy a license for so many TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to buy more licenses. For example: you purchase 100 TB of licenses and virtualize a 75 TB Pure system. Your boss comes in and says, “I need another 15 TB for a new project coming online next week.” You can go out to your vendors, choose a dumb AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with ZERO downtime you can insert the FlashSystem 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.
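Here is that same example as quick arithmetic, a sketch of the perpetual-license accounting rather than the product's actual metering:

```python
# Numbers come from the example above; the accounting itself is simplified.
LICENSED_TB = 100                 # perpetual Spectrum Virtualize license

virtualized = {"Pure AFA": 75}    # day one: 75 TB under the virtual layer

def used_tb():
    return sum(virtualized.values())

# New project needs 15 TB: add a dumb AFA, no new licenses required.
virtualized["dumb AFA"] = 15
assert used_tb() <= LICENSED_TB   # 90 TB used, still within the 100 TB

# Years later: swap the Pure box for a FlashSystem 900 of the same size.
virtualized["FlashSystem 900"] = virtualized.pop("Pure AFA")
assert used_tb() <= LICENSED_TB   # the swap costs zero additional license
print(used_tb(), "of", LICENSED_TB, "TB licensed capacity in use")
```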

The cool thing I see with this kind of virtualization layer is the simplicity: you do not have to know how to program APIs, or bring in a bunch of consultants for some long, drawn-out study that ends with them telling you to go to ‘cloud’. In a way, this technology creates a private cloud of storage for your data center. But the point here is that not having to buy feature licenses every time you buy a box lowers your $/TB and gives you true freedom to shop the vendors.