Archive

Posts Tagged ‘Cloud’

Building a Hybrid Cloud Using IBM Spectrum Scale

May 11, 2016

Cloud is changing the storage business in more ways than just price per unit. It is fundamentally changing how we design our storage systems and the way we deploy, protect and recover them. For the most fortunate companies, the ones just starting out, the cloud is an easy choice: there are no legacy systems or tried-and-true methods to work around, because everything has always been in the 'cloud'.

For most companies that are trying to cut their storage cost while keeping some control of their storage, cloud seems to be the answer. But getting there is not an easy task, as most have seen: data has to be transferred, code has to be rewritten, and systems and processes all have to be changed, just so they can report back to their CIO that they are using the cloud.

Now there are many ways to get to the cloud, but one that I am excited about uses technology originally deployed back in the late 90s. GPFS (errr, $1 in the naughty jar) Spectrum Scale is a parallel file system that can spread data across many different tiers of storage. From flash to spinning drives to tape, Scale can take the load off storage administration through policy-based movement of data. The movement decisions are driven by file metadata, and data is placed, moved and deleted based on policies set by the storage admin.
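To make that concrete, here is a minimal Python sketch of the kind of condition such a policy expresses. It is only an illustration, not the Scale policy engine: it walks a directory tree and flags anything that has not been read in 90 days as a candidate for a colder tier. The mount point and the 90-day threshold are made-up examples.

```python
import os
import time

# Illustration only: in Spectrum Scale the storage admin writes a policy rule
# and the file system evaluates it against file metadata. This sketch just
# shows the underlying idea: "last access older than N days" marks data as cold.
COLD_AFTER_DAYS = 90  # hypothetical threshold

def find_cold_files(root):
    """Yield (path, days_since_last_access) for files that look cold."""
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                age_days = (now - os.stat(path).st_atime) / 86400
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if age_days > COLD_AFTER_DAYS:
                yield path, age_days

if __name__ == "__main__":
    # /gpfs/fs1 is a hypothetical mount point
    for path, age in find_cold_files("/gpfs/fs1"):
        print(f"candidate for a colder tier: {path} (idle {age:.0f} days)")
```

In Scale itself the same decision is written declaratively as a policy rule and evaluated by the file system, so nothing has to crawl the tree from the outside.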

So how does this help you get to the cloud? Glad you asked. IBM released a new plug-in for Scale that treats the cloud as just another tier of storage. The tier can live with multiple cloud vendors, such as IBM Cleversafe, IBM SoftLayer, Amazon S3 or a private cloud (think OpenStack). The cloud provider is attached to the cloud node over Ethernet, which lets your Scale system either write directly to the cloud tier or move data there as it ages and cools.
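As a rough sketch of what "the cloud as another tier" looks like, the snippet below copies a cold file to an S3-compatible object store under a key that mirrors its filesystem path, and pulls it back on demand. This is just an illustration of the idea using boto3; the real plug-in handles migration and recall transparently inside Scale, and the endpoint and bucket names here are invented.

```python
import boto3  # any S3-compatible endpoint will do for this sketch

# Invented names for illustration only.
ENDPOINT = "https://s3.example-cloud.com"
BUCKET = "scale-cloud-tier"

s3 = boto3.client("s3", endpoint_url=ENDPOINT)

def push_to_cloud_tier(local_path):
    """Copy a cold file to the object tier, keyed by its filesystem path.

    Keeping the key identical to the path is what preserves the namespace:
    an application still asks for /gpfs/fs1/projects/report.dat and never
    has to know the bytes now live in a bucket.
    """
    key = local_path.lstrip("/")
    s3.upload_file(local_path, BUCKET, key)
    return f"s3://{BUCKET}/{key}"

def recall_from_cloud_tier(local_path):
    """Bring the bytes back when something actually needs to read the file."""
    key = local_path.lstrip("/")
    s3.download_file(BUCKET, key, local_path)
```

The point of the sketch is the namespace: the application-facing path never changes, which is exactly why no re-coding is needed when data moves between tiers.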

[Diagram: MCStore / Spectrum Scale cloud tiering]

This will do a couple of things for you.

  1. Because migration looks at the last read date, data that is still needed but highly unlikely to be read again can be moved to the cloud automatically. If a system later needs the file/object, no re-coding is required because the namespace doesn't change.
  2. If you run out of storage and need to 'burst' out because of some monthly or yearly job, you can move data around to free up space on-premises or write directly out to the cloud.
  3. Data protection such as snapshots and backups can still take place. This is valuable to many customers: they know the data doesn't change often, and they like not having to change their recovery process every time they want to add new technology.
  4. Cheap disaster recovery. Scale does have the ability to replicate to another system, but as these systems grow beyond multiple petabytes, replication becomes more difficult. For the most part you are going to need to recover only the most recent (~90 days of) data that runs your business. Scale can create mirrors of data pools, and one of those mirrors could be the cloud tier, where your most recent data is kept in case there is a problem in the data center.
  5. It allows you to start small and work your way into a cloud offering. Part of the problem some clients have is that they want to take on too much too quickly. Because Scale allows customers to keep data in multiple clouds, you can start with a larger vendor like IBM and then, when your private cloud on OpenStack is up and running, use them both or just one. The migration is simple because both share the same namespace under the same file system, which frees the client from having to make changes on the front side of the application.

Today this feature is offered as an open beta only. The release is coming soon; the team is tweaking and fixing bugs before it is generally available. Here is the link to the developerWorks page that goes into more detail about the beta and how to download a VM that will let you test these features out.

http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html

I really believe this is going to help many of my customers move onto a hybrid cloud platform. Take a look at the video below to see how it can help you as well.

Cloud vs Tape, keep the kittens off your data!

February 4, 2016

Currently, I am working with a customer on their archive data and we are discussing which is the better medium for their data that never gets read back into their environment.  They have about 200TB of data that is sitting on their Tier 1 that is not being accessed, ever. The crazy part is this data is growing faster than the database that is being accessed by their main program.

This is starting to pop up more and more as unstructured data eats up storage systems while not being used very frequently. I have heard this called dark data or cold data. In this case it's frozen data.

We started looking at what it would cost them over a 5-year period to store their data on both tape and cloud. Yes, that four-letter word is still a very good option for most customers. We wanted to keep the exercise simple, so we agreed that 200TB would be the size of the data and there would be no recalls on the data. Most cloud providers charge extra for recalls and a tape system doesn't have that extra cost, so leaving recalls out kept the comparison as close to apples-to-apples as we could get.

For the cloud we used Amazon Glacier pricing, which is about $0.007 per GB per month. Our formula for cloud:

200TB x 1,000GB/TB x $0.007 per GB-month x 60 months = $84,000

The tape side of the equation was a little trickier, but we decided to compare just the tape media and the tape library. I picked a middle-of-the-road tape library and the new LTO7 media.

TS3200 tape library at a $10,000 street price + 48 LTO7 tapes (at $150 each) = $17,200

We then looked at the ability to scale and what would happen if they factored in their growth rate. They are growing at 20% annually, which translates to 40TB a year, or about 3.33TB a month. Keeping the same platforms, what would be their 5-year cost? For cloud, summing the monthly bill as the capacity grows from 200TB to roughly 396TB:

(200TB + 3.33TB of growth per month, summed over 60 months) x 1,000GB/TB x $0.007 per GB-month = $125,258

Tape was calculated at:

$10,000 for the library + (396TB / 6TB per LTO7 tape = 66 tapes) x $150 per tape = $19,900
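
If you want to redo this napkin math with your own numbers, the whole comparison fits in a few lines of Python. It uses the same assumptions as above: Glacier at $0.007 per GB-month, a $10,000 library, and $150 per 6TB LTO7 cartridge.

```python
import math

# Assumptions taken from the comparison above.
GLACIER_PER_GB_MONTH = 0.007   # $/GB per month
GB_PER_TB = 1000
MONTHS = 60                    # 5 years

LIBRARY_PRICE = 10_000         # TS3200 street price
TAPE_PRICE = 150               # per LTO7 cartridge
TAPE_CAPACITY_TB = 6           # LTO7 native capacity

def cloud_cost(start_tb, growth_tb_per_month=0.0):
    """Sum the monthly Glacier bill as capacity grows linearly."""
    return sum(
        (start_tb + growth_tb_per_month * month) * GB_PER_TB * GLACIER_PER_GB_MONTH
        for month in range(MONTHS)
    )

def tape_cost(end_tb):
    """One library plus enough LTO7 cartridges to hold the final capacity."""
    tapes = math.ceil(end_tb / TAPE_CAPACITY_TB)
    return LIBRARY_PRICE + tapes * TAPE_PRICE

print(f"Cloud, flat 200TB:           ${cloud_cost(200):,.0f}")        # $84,000
print(f"Cloud, +3.33TB per month:    ${cloud_cost(200, 3.33):,.0f}")  # ~$125,259; the $125,258 above truncates the cents
print(f"Tape, flat (48 cartridges):  ${LIBRARY_PRICE + 48 * TAPE_PRICE:,}")  # $17,200
print(f"Tape, grown to ~396TB:       ${tape_cost(396):,}")            # $19,900
```

Change the growth rate or the per-GB price and the gap moves around, but for write-once, read-almost-never data the tape library stays far cheaper in this model.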

We all hear how the cloud is so much cheaper and easier to scale, but after doing this quick back-of-the-napkin math I am not so sure. I know what some of you are saying: we didn't calculate the server costs or the 4 FTEs it takes to manage a tape system. I agree this is basic, but in this example this is a small to medium size company that is trying to invest money into getting their product off the ground. The tape library is fairly small and should be a set-it-and-forget-it type of solution. I doubt there will be much more overhead for the tape solution than for a cloud. Maybe not as cool or flashy, but for the $100,000 saved over 5 years they can go out and buy their 5-person IT staff a $100 lunch every day, all five years.

So to those who think tape is a four-letter word, that thing in the corner no one wants to deal with, I say embrace it and squeeze the value out of it. Most IT shops still have tape and can show their financial teams how they can lower their cost without putting their data at risk in the cloud with this:

 

IBM releases Smarter Storage

February 5, 2013

The Super Bowl is over, March Madness is only a few weeks away and IBM is releasing more storage goodies. This latest round of software and hardware is the culmination of hard work from the men and women in IBM Research around the world. But this is more than just shiny new objects: IBM is changing the story and becoming smarter with its solutions.
Our clients continue to grow and change the way they do business. We know the forces of cloud, analytics, social business and mobile computing are redefining business and creating IT challenges. Continuing with the promise to make the planet 'smarter', IBM is constantly listening to its clients and trying to solve their problems with real solutions. These solutions are based on 100 years of innovation and the belief that IBM can really make a difference in your business.
Today, IBM is announcing Smarter Computing as the infrastructure that enables a smarter planet. This infrastructure is based on IBM products that are designed to help you transform your IT and meet the needs of what is coming next. There are products from each of our platforms including Power Systems, Pure Systems and of course Storage Systems. These platforms are designed to emphasize what matters most to our clients: Cloud, Data and Security.

Cloud is a term that gets thrown around and can be defined by a variety of the latest trends and jargon. Two aspects I think are important are efficiency and scalability. Data growth, along with the demand to lower CAPEX/OPEX, is fueling cloud acceptance. For businesses, a smarter storage solution demands better efficiency through virtualization and automation.
Two of today's announcements fit this profile. One is IBM SmartCloud Storage Access. This self-service portal enables end users to dynamically provision storage within minutes rather than requiring administrator intervention, which could take days. With a few clicks users can request and receive storage capacity and share files with other users, and your storage administrators can easily monitor and report usage. The end user is guided through a catalog of services and is matched with a storage class that meets their actual needs.
The other cloud offering is based on a new model of the XIV family of storage. Last year IBM released the third generation of the XIV platform, and today it makes that platform more efficient with 15% better energy efficiency. Also included is support for Windows Server 2012 environments, which includes space reclamation. Finally, this new model offers 10GbE host ports that deliver up to a 5x increase in sequential iSCSI throughput compared to the previous generation's 1GbE ports.
Data is the second design point of a smarter storage platform. As your data grows, it is important to begin an automation and self-optimization plan to accommodate that growth. One of the toughest goals for businesses to achieve is increasing the speed at which data is gathered, processed and delivered while reducing costs.
Today IBM announces three pieces of IT infrastructure that will move customers closer to that goal. The first comes from the Real-time Compression Appliance. IBM announced a new model, the STN7800, that doubles the performance of the previous model. This solution has beaten the other compression offerings on the market, and IBM is now adding even more performance on top. With RTCA, customers can save by storing more without having to purchase the storage up front or post-process the compression. It can also update data without having to uncompress it first, which is unique to IBM RTC.
Next is the addition of NFS connectivity to the ProtecTIER platform. This new interface enables users with NAS to back up data and have it deduplicated for better utilization. The GUI was also improved and the upgrade process changed to save time and unnecessary effort.
Last of the data design is the new XIV platform and some improved caching algorithms that our 300 mathematicians came up with. Yes, IBM has the largest (and smartest) math department in the world. This improvement has been clocked at a 4.5x performance bump for random workloads compared to previous versions.
Outside of the Smarter Storage announcements, IBM is releasing products in both the Power Systems and Pure Systems lines. Improving on the Power7 line, Power 750 and 760 Enterprise servers were announced, along with Power entry servers and PowerLinux servers built on Power7+ processors and architecture.
Over on the Pure Systems side, MSP, VDI and interoperability are the major topics of an all-day webcast. Other aspects of the Pure Systems platform include new entry models for x86- and Power-based systems.
IBM is very focused on meeting the needs of the customers with the infrastructure of a Smarter Planet. As customers look for ways to cut costs while dealing with the data growth, IBM is poised to take market share from those who still base their product on a 1992 technology.
To find more information about IBM Smarter Storage go here.

Smarter Fishing

January 10, 2012

Last year the IT industry was saturated with the cloud buzzword. Even Microsoft got in the game and took the cloud mainstream with adverts of suburban wives cropping photos in the 'cloud'.

Every salesperson was challenged with how many times he could bring up the word in a meeting. There are even cloud magazines and cloud soap dispensers that let you keep a just-in-time soap inventory.

OK, that last one is not true, yet, but we did see everyone talking about the cloud and how it could help IT become more productive and increase ROI for companies big and small. I often hear people tell me they are not big enough to use the cloud; some say they are too big. At times people bring up the fact that their systems are too complicated for a cloud infrastructure. I respect their opinions and we continue to talk about other aspects of the IT industry, like flash or the price of hard drives in Thailand. But a video on YouTube recently caught my attention.

This video is about a group of fishermen in Italy near the port of Bari. Bari is on the Adriatic Sea and is the second most important economic center of southern Italy after Naples. It is a relatively small city of only 325,000 people but steeped in culture dating back to the 3rd century BC, when it served as a point of junction between the coast road and the Via Traiana and as a port for trade. The city has also long been a center for fishing, with tons of markets and fishermen who have fished for many, many generations.

In the past, the fishermen went out and caught as many fish as they could, then returned to port to sell them at the market. This created a lot of waste, as not all of the fish were purchased and the overage was thrown away. It also created an imbalance in the market: too many fish saturated the market, which drove prices down and left very little margin for the fishermen.

With the help of the University of Bari, IBM developed a system that allowed this industry to become more efficient. One part of the solution let the fishermen communicate their real-time catch numbers. The other was a virtual fish market to sell fish before the boat returned to the dock.

Now the fishermen catch only what the market is demanding, lowering the amount of waste while also allowing the market to control how much supply is needed. In this example the system allowed the fishermen to increase their income by 25% and decreased the time to market by 70%. Real numbers that had a stunning effect on some of the biggest skeptics.

There is another important side effect of this technology change. The fewer fish being caught, the healthier the fish population. Overfishing in the waters off our coasts has been devastating our food supply and changing the makeup of the fish population as the food chain is disrupted.

Now, did IBM file this under its cloud solutions? You bet we did. But in reality we have been doing this sort of thing for a long time. Was this any different from what we did on the Apollo project for NASA back in the 1960s? Not so much. We have the hardware, the software, the services and the financing to fix the problems of today. During your next meeting with IBM, or the vendor of your choice, don't get hung up on the term cloud; think instead about these fishermen and how their lives have changed.
