Three storage technologies that will help your 2013 budget plan
As the old saying goes, "out with the old and in with the new," and technology keeps rolling down the road. If you are like me, you are always looking for the next thing coming and can't wait until it hits the market. Watching the blogs, keeping up with podcasts, and asking around the labs keeps the buzz going until the day I can see that new shiny thing in the wild. Over the holiday break I was catching up on some things that interest me, and I read a Forbes article that described the storage market for 2013 as flat. The article had asked some 'large enterprises' about what their spend for 2013 was going to be, and one reported they needed to make do with what they had.
“…. the CIO believed that his IT department had been on a storage buying binge and it was high time to in fact use more of the raw disk capacity on the machine room floor that was already powered-up and spinning. Utilization rates of between 20 and 30% of total capacity are common for first-line production storage systems. The CIOs point was simply; let’s use more of what we already have before I’ll sign-off on another PO.”
I would agree that there are companies with very low utilization rates, especially on those main production storage systems. We as storage architects have been designing these systems to outperform, outlast and outlive most of the applications they support. Long gone are the days when we simply put everything that needed performance and uptime in RAID 10; newer technologies like RAID 6, or even no RAID (a la XIV), have been adopted. This has made it possible to keep systems up longer and lower the cost. But even with drives getting larger and automatic tiering helping storage administrators get a better bang for the buck, we are facing the same paradigm we faced five years ago: spinning storage performance is driven by the number of drive arms and the bandwidth to and from the machine.
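The arms-and-bandwidth point can be seen with some back-of-the-envelope math. The IOPS figure below is an illustrative assumption (a 15K RPM drive delivering roughly 175 random IOPS, regardless of capacity), not a vendor spec, but it shows why bigger drives mean less performance per terabyte:

```python
# Illustrative sketch: a spinning drive's random IOPS is bounded by its
# single actuator arm, so IOPS stays roughly flat as capacity grows.
# 175 IOPS per 15K drive is an assumed ballpark figure.

def iops_per_tb(drive_tb, drive_iops=175):
    """Random IOPS available per usable TB for a given drive size."""
    return drive_iops / drive_tb

for size in (0.3, 0.6, 1.0, 3.0):
    print(f"{size:>4} TB drive: {iops_per_tb(size):6.0f} IOPS/TB")
```

A 3 TB drive offers roughly a tenth of the IOPS per terabyte of a 300 GB drive, which is exactly why filling big drives up without a plan tanks performance.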
This will lead to more, smaller spinning drives, more expensive SSDs, and more system sprawl, which in turn becomes a management nightmare. If we are going to be asked to increase utilization, there are three tried and true technologies that can help. Much like the server market, the storage market is going to need to embrace virtualization. Virtualization does something very cool with storage: it creates a level playing field and allows some of the trapped storage to be released. IBM has been selling the SAN Volume Controller (SVC) for over 10 years, and it has a very wide interoperability list of storage systems that can sit under the hypervisor. The SVC allows administrators to move volumes from one system to another without disrupting the application or host. Much like a virtual machine moving from one server to the next, the storage admin can use the SVC to take storage from an EMC VMAX and move it to an IBM DS8870 without downtime, with only a couple of mouse clicks. This helps customers take the parts of the storage estate that are underperforming and start 'stacking' the storage pools so that a CIO can see utilization start going up. If you are worried about performance, the SVC can increase performance with a couple of SSD drives that keep more reads at the virtualization layer and not on the storage array.
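Here is a quick sketch of why pooling frees trapped capacity. The array names, sizes, and the 80% comfort ceiling are all illustrative assumptions (this is not SVC code), but the arithmetic is the point: siloed, a growing workload can only use the headroom of the single array it lives on; pooled under a virtualization layer, it can use the free space everywhere.

```python
# Conceptual sketch (illustrative numbers, not SVC code): pooling capacity
# behind a virtualization layer lets stranded free space on one array
# absorb growth from another.

arrays = {              # (usable TB, TB allocated) -- assumed figures
    "VMAX":    (100, 30),
    "DS8870":  (100, 20),
    "midtier": (100, 25),
}

CEILING = 0.80          # assumed comfort limit before signing a new PO

total = sum(cap for cap, _ in arrays.values())
used  = sum(alloc for _, alloc in arrays.values())

# Siloed: a growing workload is stuck with its own array's headroom.
siloed_headroom = max(CEILING * cap - alloc for cap, alloc in arrays.values())
# Pooled: the same workload can draw on free space across every array.
pooled_headroom = CEILING * total - used

print(f"siloed: {siloed_headroom:.0f} TB, pooled: {pooled_headroom:.0f} TB")
```

Same hardware, same data, but the pooled view nearly triples the capacity you can grow into before the next purchase order.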
Another technology to look at is the flash array. You think this is new? Texas Memory Systems has been building solid-state arrays for over three decades. Only very recently has the adoption of larger drives forced companies to look at augmenting their storage arrays with faster, low-latency systems. By integrating these flash systems with larger-drive arrays, we can start putting more on those big drives without compromising performance. The best analogy someone gave me is that the difference is like going from a carbureted engine to a fuel-injected engine: not only will your car run better, it will be more efficient and drive up your miles per gallon.
The third technology is another tried and true one: compression. Yes, I know most of you are shaking your heads and thinking about performance, but this is not the same algorithm you are used to. A few years ago, IBM acquired a small company that was doing some very cool compression technology. This new take on an old standard is based on a software engine called RACE, the Random Access Compression Engine, which compresses data in real time. The difference shows up mainly when compressed data has to be changed. Older technology has to uncompress the data, make the change, and then compress it all again as the write happens. With RACE, the data does not need to be fully uncompressed to apply a change. That is a big difference when deciding whether or not to use compression on your storage.
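To make the idea concrete, here is a toy sketch of the general random-access-compression concept. This is emphatically not IBM's RACE algorithm, just an illustration of the principle: store data in independently compressed chunks so that an update only decompresses and recompresses the one chunk it touches, instead of the whole stream.

```python
# Toy illustration of random-access compression -- NOT IBM's RACE
# algorithm. Data is stored as independently compressed chunks, so a
# small write touches one chunk rather than the whole compressed stream.
import zlib

CHUNK = 4096  # bytes per independently compressed chunk (assumption)

def compress_chunks(data: bytes) -> list:
    """Split data into fixed-size chunks and compress each on its own."""
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def update(chunks: list, offset: int, new: bytes) -> None:
    """Overwrite bytes at `offset`; only the affected chunk is recompressed."""
    idx = offset // CHUNK
    plain = bytearray(zlib.decompress(chunks[idx]))
    pos = offset % CHUNK
    plain[pos:pos + len(new)] = new
    chunks[idx] = zlib.compress(bytes(plain))

data = b"A" * 16384
chunks = compress_chunks(data)
update(chunks, 5000, b"hello")     # decompresses chunk 1 only, not all 4
restored = b"".join(zlib.decompress(c) for c in chunks)
```

With a whole-stream format, that five-byte write would have meant decompressing and recompressing all 16 KB; here three of the four chunks are never touched, which is the property that makes inline compression viable for primary storage.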
With these technologies you can find ways to help your company get better utilization out of its storage. Even though we are all looking for that new shiny object, it's actually tried and true technology that is going to help us this year. Maybe 2014 will be the year of the shiny object-based storage.