Archive

Posts Tagged ‘compression’

IBM releases Smarter Storage

February 5, 2013

The Super Bowl is over, March Madness is only a few weeks away, and IBM is releasing more storage goodies. This latest round of software and hardware is the culmination of hard work by the men and women of IBM Research around the world. But more than just shiny new objects, IBM is changing the story and becoming smarter with its solutions.
Our clients continue to grow and change the way they do business. We know the forces of cloud, analytics, social business and mobile computing are redefining business and creating IT challenges. Continuing with the promise to make the planet ‘smarter’, IBM is constantly listening to its clients and trying to solve their problems with real solutions. These solutions are based on 100 years of innovation and the belief that IBM can really make a difference in your business.
Today, IBM is announcing Smarter Computing as the infrastructure that enables a smarter planet. This infrastructure is built on IBM products designed to help you transform your IT and meet the needs of what is coming next. There are products from each of our platforms, including Power Systems, Pure Systems and, of course, Storage Systems. These platforms are designed to emphasize what matters most to our clients: Cloud, Data and Security.

Cloud is a term that gets thrown around and defined by whatever trend or jargon is current. The two attributes I think matter most are efficiency and scalability. Data growth, along with the demand to lower CAPEX and OPEX, is fueling cloud adoption. For businesses, a smarter storage solution demands better efficiency through virtualization and automation.
Two of today’s announcements fit this profile. One is IBM SmartCloud Storage Access. This self-service portal enables end users to dynamically provision storage within minutes rather than waiting days for administrator intervention. With a few clicks users can request and receive storage capacity and share files with other users, while storage administrators can easily monitor and report on usage. The end user is guided through a catalog of services and matched with a storage class that meets their actual needs.
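To make the catalog idea concrete, here is a purely illustrative sketch of matching a request to a service class. The class names, sizes and IOPS targets are invented and have nothing to do with the actual SmartCloud Storage Access catalog or API.

```python
# Purely illustrative: match a storage request against a small service catalog.
# The class names, sizes and IOPS targets are invented; this is not the
# SmartCloud Storage Access catalog or API.
from dataclasses import dataclass

@dataclass
class StorageClass:
    name: str          # e.g. "bronze", "silver", "gold"
    max_gb: int        # largest request this class will serve
    iops_target: int   # rough performance expectation

CATALOG = [  # ordered from least to most capable
    StorageClass("bronze", max_gb=500, iops_target=1_000),
    StorageClass("silver", max_gb=2_000, iops_target=5_000),
    StorageClass("gold", max_gb=10_000, iops_target=20_000),
]

def match_request(capacity_gb: int, iops_needed: int) -> StorageClass:
    """Pick the least capable catalog entry that still satisfies the request."""
    for sc in CATALOG:
        if capacity_gb <= sc.max_gb and iops_needed <= sc.iops_target:
            return sc
    raise ValueError("request exceeds every class in the catalog")

print(match_request(800, 3_000).name)   # -> silver
```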
The other cloud offering is a new model of the XIV family of storage. Last year IBM released the third generation of the XIV platform, and today it makes that platform more efficient with 15% better energy efficiency. Also included is support for Windows Server 2012 environments, which includes space reclamation. Finally, this new model offers 10GbE host ports that deliver up to a 5x increase in sequential iSCSI throughput compared to the previous generation’s 1GbE ports.
Data is the second design point of a smarter storage platform. As your data grows, it is important to put an automation and self-optimization plan in place to accommodate that growth. One of the toughest things for businesses to achieve is increasing the speed at which data is gathered, processed and delivered while reducing costs.
Today IBM announces three pieces of IT infrastructure that move customers closer to that goal. The first comes from the Real Time Compression Appliance. IBM announced a new model, the STN7800, that doubles the performance of the previous model. This solution has already beaten the other compression products on the market, and IBM is now adding even more performance on top. With RTCA, customers save by storing more without having to purchase the capacity up front and compress it in a post-process step. It can also update compressed data without uncompressing it first, something unique to IBM RTC.
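To see why the inline approach matters for capacity planning, here is a rough sketch of the difference between compressing on the write path and compressing in a later post-process pass. zlib stands in for the compression engine; this is not IBM’s RACE implementation.

```python
# Rough contrast between inline and post-process compression. zlib stands in
# for the compression engine; this is not IBM's RACE implementation.
import zlib

def write_inline(store: list, data: bytes) -> None:
    """Inline: data is compressed on the write path, so only the
    compressed bytes ever consume capacity."""
    store.append(zlib.compress(data))

def write_then_post_process(store: list, data: bytes) -> None:
    """Post-process: the full-size copy must land (and fit) first;
    a later pass rewrites it compressed."""
    store.append(data)

def post_process_pass(store: list) -> None:
    for i, blob in enumerate(store):
        store[i] = zlib.compress(blob)

payload = b"highly compressible " * 1000
inline, post = [], []
write_inline(inline, payload)
write_then_post_process(post, payload)
print(len(inline[0]), len(post[0]))   # compressed vs. raw footprint at write time
post_process_pass(post)
print(len(post[0]))                   # shrinks only after the extra pass
```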
Next is the addition of NFS connectivity to the ProtecTIER platform. This new interface lets NAS users back up data and have it deduplicated for better utilization. The GUI was also improved and the upgrade process streamlined to save time and unnecessary effort.
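Deduplication itself is straightforward to sketch. The toy example below stores each unique chunk once and keeps a “recipe” per backup; ProtecTIER’s actual HyperFactor technology works differently, so treat this only as an illustration of the general idea.

```python
# Toy hash-based deduplication: each unique chunk is stored once and every
# backup keeps only a "recipe" of chunk fingerprints. ProtecTIER's HyperFactor
# works differently; this just illustrates the general idea.
import hashlib

CHUNK = 4096          # fixed-size chunks for simplicity
unique_chunks = {}    # fingerprint -> chunk data
recipes = {}          # backup name -> ordered list of fingerprints

def backup(name: str, data: bytes) -> None:
    fps = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        unique_chunks.setdefault(fp, chunk)   # stored only once
        fps.append(fp)
    recipes[name] = fps

def restore(name: str) -> bytes:
    return b"".join(unique_chunks[fp] for fp in recipes[name])

data = b"A" * 8192 + b"B" * 4096
backup("monday", data)
backup("tuesday", data)        # identical backup adds no new chunks
print(len(unique_chunks))      # -> 2 unique chunks stored for both backups
assert restore("tuesday") == data
```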
Last of the data design points is the new XIV platform and some improved caching algorithms that our 300 mathematicians came up with. Yes, IBM has the largest (and smartest) math department in the world. This improvement has been clocked at a 4.5x performance bump for random workloads compared to previous versions.
Outside of the Smarter Storage announcements, IBM is releasing products in both the Power Systems and Pure Systems lines. Improving on the Power7 line, Power 750 and 760 Enterprise servers were announced along with Power Entry Servers and PowerLinux servers built on the Power7+ processor and architecture.
Over on the Pure Systems side, a huge focus on MSPs, VDI and interoperability makes up the major topics of an all-day webcast. Other additions to the Pure Systems platform include new entry models for x86- and Power-based systems.
IBM is very focused on meeting the needs of its customers with the infrastructure of a Smarter Planet. As customers look for ways to cut costs while dealing with data growth, IBM is poised to take market share from those who still base their products on 1992-era technology.
To find more information about IBM Smarter Storage go here.

Three storage technologies that will help your 2013 budget plan

January 7, 2013

As the old saying goes, “Out with the old and in with the new,” technology keeps rolling down the road. If you are like me, you are always looking for the next thing coming and can’t wait until it hits the market. Watching the blogs, keeping up with podcasts and asking around the labs keeps the buzz going until the day I can see that new shiny thing in the wild. Over the holiday break I was catching up on some things that interest me and read a Forbes article that described the storage market for 2013 as flat. The article had asked some ‘large enterprises’ what their spend for 2013 was going to be, and one reported they needed to make do with what they had.

“…. the CIO believed that his IT department had been on a storage buying binge and it was high time to in fact use more of the raw disk capacity on the machine room floor that was already powered-up and spinning. Utilization rates of between 20 and 30% of total capacity are common for first-line production storage systems. The CIOs point was simply; let’s use more of what we already have before I’ll sign-off on another PO.”

I would agree that there are companies with very low utilization rates, especially on those main production storage systems. We as storage architects have been designing these systems to outperform, outlast and outlive most of the applications they support. Long gone are the days when we simply put everything that needed performance and uptime on RAID 10; newer technologies such as RAID 6, or even no traditional RAID at all (à la XIV), have been adopted. This has made it possible to keep systems running longer and to lower the cost. But even with drives getting larger and automatic tiering helping storage administrators get a better bang for the buck, we are facing the same paradigm we faced five years ago: storage performance is driven by the number of drive arms and the bandwidth to and from the machine.
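The utilization point is easy to see with some back-of-the-envelope math. The sketch below compares usable capacity for the RAID layouts mentioned above; the drive count and size are invented for illustration.

```python
# Back-of-the-envelope usable-capacity math for the RAID layouts mentioned
# above. The drive count and size are invented for illustration.
def usable_tb(drives: int, drive_tb: float, layout: str) -> float:
    if layout == "raid10":
        return drives * drive_tb / 2      # every drive is mirrored
    if layout == "raid6":
        return (drives - 2) * drive_tb    # two drives' worth of parity per array
    raise ValueError(layout)

drives, drive_tb = 24, 2.0
raw = drives * drive_tb
for layout in ("raid10", "raid6"):
    u = usable_tb(drives, drive_tb, layout)
    print(f"{layout}: {u:.0f} TB usable of {raw:.0f} TB raw ({u / raw:.0%})")
```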

This will lead to more, smaller spinning drives, more expensive SSDs, and more system sprawl, which adds up to a management nightmare. If we are going to be asked to increase utilization, there are three tried-and-true technologies that can help. Much like the server market, the storage market is going to need to embrace virtualization. Virtualization does something very cool with storage: it creates a level playing field and allows some of the trapped capacity to be released. IBM has been selling the SAN Volume Controller for over 10 years, and it has a very wide interoperability list of storage systems that can sit under the hypervisor. The SVC allows administrators to move volumes from one system to another without disrupting the application or host. Much like a virtual machine moving from one server to the next, the storage admin can use the SVC to take storage from an EMC VMAX and move it to an IBM DS8870 without downtime, with only a couple of mouse clicks. This helps customers take parts of the storage that are underperforming and start ‘stacking’ the storage pools so that a CIO can see utilization start going up. If you are worried about performance, the SVC can increase it with a couple of SSD drives that keep more reads at the virtualization layer and off the storage array.
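Conceptually, the virtualization layer just keeps the host pointed at a stable virtual volume while the data behind it moves. The sketch below is a minimal illustration of that idea; it is not the SVC implementation or its CLI, and the volume and backend names are hypothetical.

```python
# Minimal sketch of block virtualization: the host keeps addressing the same
# virtual volume while the data behind it is migrated between backend arrays.
# This is not the SVC implementation or CLI; names are hypothetical.
class VirtualVolume:
    def __init__(self, name: str, backend: str):
        self.name = name
        self.backend = backend            # array that currently holds the data

    def read(self, lba: int) -> str:
        # The virtualization layer forwards host I/O to the current backend.
        return f"{self.name}: read LBA {lba} from {self.backend}"

    def migrate(self, new_backend: str) -> None:
        # Extents are copied in the background; the host keeps doing I/O
        # against the same virtual identity the whole time.
        self.backend = new_backend

vol = VirtualVolume("prod_db", backend="EMC_VMAX")
print(vol.read(42))                       # served from the VMAX
vol.migrate("IBM_DS8870")                 # the "couple of mouse clicks"
print(vol.read(42))                       # same volume, new backend, no outage
```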

Another technology we look to is flash arrays. You think this is new? Texas Memory Systems has been building flash-based arrays for thirty years. Only recently has the adoption of larger drives forced companies to look at augmenting their storage arrays with faster, low-latency systems. By integrating these flash systems with larger-drive arrays, we can start putting more on those drives without compromising performance. The best analogy someone told me is that the difference is like going from a carburetor-based engine to a fuel-injected engine. Not only will your car run better, it will be more efficient and drive up your miles per gallon.

The third technology is another tried-and-true one: compression. Yes, I know most of you are shaking your heads and thinking about performance, but this is not the same algorithm you are used to. A few years ago, IBM acquired a small company that was doing some very cool compression technology. This new take on an old standard is based on a software engine called RACE, the Random Access Compression Engine, which compresses data in real time. The difference shows up mainly when compressed data has to be changed. Older technology has to uncompress the data, make the change and then compress it again as the write happens. With RACE, the data does not need to be uncompressed to make a change. That is a big difference when deciding whether or not to use compression in your storage.
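A toy way to picture the random-access idea: compress data in fixed-size chunks so that an update touches only the affected chunk instead of inflating and recompressing everything. zlib stands in for IBM’s actual algorithm here, and the 4 KB chunk size is arbitrary.

```python
# Toy illustration of random-access compression: data is compressed in
# fixed-size chunks, so an update recompresses only the affected chunk instead
# of inflating and recompressing the whole object. zlib stands in for IBM's
# actual algorithm and the 4 KB chunk size is arbitrary.
import zlib

CHUNK = 4096

def compress_chunks(data: bytes) -> list:
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def update(chunks: list, index: int, new_plaintext: bytes) -> None:
    """Only the chunk being changed is touched; its neighbours stay compressed."""
    chunks[index] = zlib.compress(new_plaintext)

def read_all(chunks: list) -> bytes:
    return b"".join(zlib.decompress(c) for c in chunks)

data = b"A" * CHUNK + b"B" * CHUNK + b"C" * CHUNK
chunks = compress_chunks(data)
update(chunks, 1, b"X" * CHUNK)           # rewrite the middle 4 KB only
assert read_all(chunks) == b"A" * CHUNK + b"X" * CHUNK + b"C" * CHUNK
```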

With these technologies you can find ways to help your company get better utilization out of its storage. Even though we are all looking for that new shiny object, it’s actually tried-and-true technology that is going to help us this year. Maybe 2014 will be the year of shiny object-based storage.

IBM V7000 sets the bar with SPC-1 all-flash results

June 18, 2012

Last week IBM published some very interesting results on the Storage Performance Council website. Using the SPC-1 test method, IBM raised more than a few eyebrows. IBM configured 18 200GB SSDs in a mirrored configuration that attained 120k IOPS with less than 5 ms response time, even at 100% load.

IBM used an all-SSD array that fits into a single 2U space and mirrored the drives in RAID 1 fashion. These were all put into a pool and 8 volumes were carved out for the AIX server. The previous SPC-1 run IBM performed used spinning media and virtualized the systems with the SAN Volume Controller virtualization engine. That gave IBM the top spot in the SPC-1 benchmark with over 520,000 IOPS, at a whopping $6.92 per IOP.

This has been compared to the Oracle/Sun ZFS 7420, which published 137k IOPS late last year. Putting it next to the IBM V7000 results, we see the V7000 came in at $181,029, roughly $1.50 per IOP, compared to the Sun 7420 at $409,933 and $2.99 per IOP. The V7000 was able to perform 88% of the work at 44% of the price. Oracle has not come back with any kind of statement, but I can only bet they are seething over this and will be in the labs trying to find a way to lower their cost while still performing.
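The price-performance math is easy to check from the two published results quoted above:

```python
# Checking the price-performance numbers quoted above.
v7000_price, v7000_iops = 181_029, 120_000
sun_price, sun_iops = 409_933, 137_000

print(f"V7000:    ${v7000_price / v7000_iops:.2f} per IOP")      # ~ $1.51
print(f"Sun 7420: ${sun_price / sun_iops:.2f} per IOP")          # ~ $2.99
print(f"work:  {v7000_iops / sun_iops:.0%} of the Sun result")   # ~ 88%
print(f"price: {v7000_price / sun_price:.0%} of the Sun price")  # ~ 44%
```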

The SPC-1 benchmark tests the performance of a storage array doing mainly random I/O in a business environment. It has been called out for not being a very realistic workload, as it tends to cater to higher-end, cache-heavy systems. Now we have a midrange box that not only holds a most-IOPS trophy but also wins in the ‘Best Bang for your Buck’ category.

I would have liked to see what the results would have been with compression added to the software feature stack. This is going to be a game changer for IBM, as its inline compression of block data is way beyond what other vendors are doing. Couple that with the virtualization engine and now I can compress data on anyone’s storage array. The V7000 is definitely becoming a smarter storage solution for a growing storage market.