IBM Signs Software Reseller Agreement with Panzura

May 15, 2019 Leave a comment

Modernize Your OLD NAS: IBM Signs Software Reseller Agreement with Panzura

Traditional NAS has served us well, but with the huge growth in unstructured data, those systems simply can't keep up. Protecting that data through snapshots, backups, and replication adds even more cost and burden. One way to simplify your protection scheme is to move to a platform that doesn't require multiple copies to protect data. IBM Cloud Object Storage (COS) uses a method called geo-dispersal to create a platform with 12 nines of availability, with no snaps, backups, or replication.
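To see why geo-dispersal can avoid extra copies, here is a conceptual sketch: data is erasure-coded into n slices spread across sites, and any k slices are enough to rebuild it. The 12/7 width/threshold and 99.9% per-slice availability below are hypothetical illustration values, not IBM's actual configuration.

```python
from math import comb

def availability(n: int, k: int, p_slice_up: float) -> float:
    """Probability that at least k of n independent slices are reachable."""
    return sum(comb(n, i) * p_slice_up**i * (1 - p_slice_up)**(n - i)
               for i in range(k, n + 1))

# With 12 slices, a read threshold of 7, and each slice 99.9% available,
# the object as a whole is available far more often than any single copy,
# with far less overhead than three full replicas.
print(f"{availability(12, 7, 0.999):.15f}")
```

The intuition: losing an object requires six or more slices to be unreachable at once, which is astronomically less likely than losing one full copy.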

Panzura and IBM have inked a new deal under which IBM can sell the Freedom cloud NAS with IBM's object storage platform, COS. The agreement gives IBM not just a NAS front end but a way to migrate customers off expensive NetApp and Isilon file shares, and an easy path to move file storage to IBM COS without interrupting workflows or applications. Panzura makes file-based apps cloud-capable and available on COS, no rewrite needed. Once deployed, the combination of Freedom NAS and COS provides a single source of truth for converged primary and secondary file data while eliminating costly backup and data recovery workflows.

Panzura Freedom NAS and IBM COS deliver: 

  • High performance NFS/SMB file services
  • Integrated backup, cloud DR and data protection
  • Encrypted data in flight and at rest
  • Infinite scale and durability of IBM COS
  • Automated, API-driven workflows
  • Search and analytics

“With unstructured data accounting for 80% of all data created by 2020, enterprise companies looking to unlock the power of that data require the scale and durability of a modern object store with the performance and features of a traditional NAS solution,” said Patrick Harr, CEO, Panzura. “Panzura is the only enterprise high-performance file services solution built for IBM COS. Our partnership with IBM enables enterprise customers to easily migrate their file-based applications without rewrite, converge their primary and secondary storage and collaborate globally from a single, scalable platform.”

Use cases for the combination of Freedom NAS and COS include:

  • Rendering and content creation for media and entertainment
  • Processing genomics for life sciences
  • Running simulations and archiving data for financial services
  • Seismic data processing for oil and gas
  • Medical imaging storage and analytics for healthcare
  • EDA simulations for engineering
  • Cross-site collaboration for CAD/CAM
  • File shares and home directories
  • Database and VM backups in the cloud
  • Cloud DR and long-term archival of critical file data

For more information on the Panzura Freedom NAS go here:

For more information on COS go here:


Categories: Cloud

IBM Announces Renewed V5000 for NVMe and more…

April 3, 2019 Leave a comment

IBM is churning out the announcements, and as you might have seen in the past, IBM Storage is a major component of today's releases. If you are looking for a mid-tier all-flash array that comes fully featured, you may want to look at the Storwize V5000 product line.


The new generation delivers up to 4x the throughput of previous generations. If you start with a V5010, you can upgrade to the V5030 non-disruptively. These systems can scale up to 23 PB of flash, or 32 PB with two-way clustering. On the larger systems that support data reduction, IBM claims there is no performance impact, using the same algorithms that have been in the field on the flagship FS9100.

For security, these systems now come with hardware-assisted encryption. IBM is also touting FIPS 140-2 certification and the IBM FlashWatch guarantee.

One thing I like about the IBM FlashCore modules is that they include hardware compression in each module. This is done without involving the system processor, so there is no performance hit for data coming into the system. The other thing I like is that data is compressed high in the software stack, meaning the data that hits the flash cells is much smaller than on other systems. In a typical environment (70/30 read/write), your flash will live longer because you are using fewer cells up front.
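The endurance argument boils down to simple arithmetic: if only compressed bytes hit the cells, the same terabytes-written (TBW) budget lasts proportionally longer. The TBW budget, daily write volume, and 2:1 ratio below are illustrative assumptions, not IBM figures.

```python
def flash_lifespan_years(tbw_budget_tb: float, daily_host_writes_tb: float,
                         compression_ratio: float) -> float:
    """Years until the TBW budget is exhausted, given that only
    compressed bytes are physically written to the flash cells."""
    daily_physical_tb = daily_host_writes_tb / compression_ratio
    return tbw_budget_tb / daily_physical_tb / 365

# Hypothetical module: 10,000 TB write budget, 5 TB of host writes per day.
print(round(flash_lifespan_years(10_000, 5, 1.0), 1))  # 5.5 years, no compression
print(round(flash_lifespan_years(10_000, 5, 2.0), 1))  # 11.0 years at 2:1
```

Doubling the compression ratio doubles the lifespan, which is why compressing high in the stack (before the cells) matters.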


Another innovation of the FlashCore modules is variable voltage levels on writes. As a flash cell ages, each block is analyzed to determine its health, and the ideal voltage level is set to minimize errors. Other flash systems would simply mark that cell unusable, which means their flash will eventually error out and have to be replaced sooner than IBM's FlashCore modules.

  • Dynamic read-level shifting over the life of flash blocks ensures the cells have the longest possible lifespan
  • Predictive techniques adjust internal flash settings in advance, minimizing the probability of uncorrectable errors
  • Using an advanced characterization lab, IBM FlashSystem developers determined the best voltage levels to set for a block proactively as it ages
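The idea behind read-level shifting can be sketched in a few lines. This is a conceptual illustration, not IBM's algorithm: as a block ages, probe a few candidate read-voltage offsets, and keep whichever one produces the fewest bit errors instead of retiring the block.

```python
def best_read_offset(bit_errors_at_offset: dict[int, int]) -> int:
    """Pick the read-voltage offset (in arbitrary DAC steps) that
    minimizes the measured bit-error count for an aging block."""
    return min(bit_errors_at_offset, key=bit_errors_at_offset.get)

# Hypothetical measured error counts for one block at each candidate offset:
probe = {-2: 41, -1: 12, 0: 30, 1: 77}
print(best_read_offset(probe))  # -1
```

A block that would be "worn out" at the default read level (offset 0) stays usable at the shifted level, which is the lifespan advantage described above.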


These FCMs come in 4.8, 9.6, and 19.2 TB sizes on the V5100, V7000, and of course the flagship FS9100 systems. These systems will also support the upcoming Storage Class Memory in the near future.

These new V5s join the existing Storwize line alongside the V7000 and FS9100, using the IBM Spectrum Virtualize code that has been in the field for more than 15 years. The list of enterprise options is fairly long, and most are table stakes in this fast-changing world of data management.

If you have an older V5000, you cannot reuse the older generation's disks under the new one. This has been the case for many of the Storwize products, and it is something IBM should look into in order to deliver better value to its customers.

The V5010E does not have data reduction pools, which means no compression or dedupe on the smallest system. The V5030 and V5100 come with enough horsepower to accommodate these tasks. A customer could start with the V5010 and move up to the V5030 non-disruptively if they needed more horsepower, dedupe, and so on.

32Gb FC is available on the V5100F, V5100, V7000 Gen3, and FS9100. 25Gb RoCE and iWARP cards are available for those looking to move to an Ethernet block storage solution.

All in all, a great announcement and enhancement to IBM's entry-level storage array, and a great fit for small to medium-sized customers looking for enterprise solutions, great performance, and lower costs. I will be putting up more blogs in the next few days around IBM Spectrum Virtualize for Public Cloud and more.


Categories: Uncategorized

Ubiquity is everywhere for DevOps using Docker and Kubernetes

August 14, 2017 Leave a comment

This is a repost of an article I wrote on LinkedIn.

Project Ubiquity is gaining speed in helping customers manage persistent storage when rolling out containers. One of the biggest issues in container environments is how to manage persistent storage that lets you snapshot, provision, and move data around as applications move in and out of your DevOps pipeline.

IBM's open source release enables persistent storage for Docker and Kubernetes. It initially covered IBM Spectrum Scale only, but now enables block storage as well. Clients can test the pre-release project using their IBM Spectrum Virtualize and IBM Spectrum Accelerate based systems as the storage back end for stateful containers.

This flexible framework allows businesses to deploy not just IBM storage like the new-generation A9000, but also our storage virtualization solution Spectrum Virtualize or the high-performance file system Spectrum Scale. All of these solutions can be deployed as appliances or as software on your storage-rich servers. With Spectrum Virtualize you can even reuse some of your older storage that is underutilized today.

Here is more information and a link to the project on GitHub.

Categories: Uncategorized

Data Hoarding: How much is it really costing you?

March 13, 2017 Leave a comment

I have a closet in my house where I keep all kinds of computer gear. Most of it is from some fun project I was working on, or a technology that is past its prime. There is everything from Zip drives to coax terminators to an Ultra Wide SCSI interface for an external CD-ROM. Why do I keep these things in a box in a closet? Great question, and one that usually comes up once a year from some family member who sticks their head in there looking for a toy or a coat, or looking to make a point.
But on more than one occasion I have had to go to that closet of 'junk' to get something that helped me complete a project: a Cat5 cable for my son's computer, an extra wireless mouse when my other one died. Yes, I could go through it all, sort it out, and come up with some nice labels, but that takes time. It's just easier to close the container lid and forget about it until I need something, and then it's easy enough to grab.
Now, this is not a hoarding issue like those you see on TV, where people fill their houses, garages, sheds, and barns with all kinds of things. The people who show up on TV have taken the 'collecting' business to another level, and some call them 'hoarders'. But if you watch shows like "American Pickers" on the History Channel, you will notice that most of those 'hoarders' know what they have and where it is: a metadata knowledge of their antiques.
When you look at how businesses store their data today, most are looking to keep as much as possible in production. Data may no longer serve a real purpose, but storage admins are too gun-shy to hit the delete button on it for fear of some VMware admin calling up to ask why their Windows NT 4 server is not responding. If you have tools that can move data around based on age or last access, you have made a great leap toward savings. But those older ILM systems cannot handle the unstructured data growth of 2017.
Companies want to be able to create a container for the data and not have to worry whether the data is on-prem or off-prem, on disk or on tape. Set it and forget it is the basic rule of thumb. But this becomes difficult because data has many different values depending on who you ask. A two-year-old invoice is not as valuable to someone in Engineering as it is to the AR person who is using it as the basis for the next billing cycle.
One of the better ways to cut through the issue is a flexible platform that can move data from expensive flash down to tape and cloud without changing the way people access it. If users cannot tell where their data is coming from and do not have to change how they get to it, why not look at putting the cold data on something low cost like tape or cloud?
This type of system can be built using the IBM Spectrum Scale platform. The file system has a global namespace across all of the different types of media and can even use the cloud as a place to store data without changing how the end user accesses it. File movement is policy based, so admins do not have to ask users whether data is needed; it simply moves to lower cost tiers as it gets older and colder. The best part is that, thanks to a new licensing scheme, customers pay the per-TB license only for data that sits on disk and flash. Data that sits on tape does not count toward the overall license cost.
For example: take 500 TB of data, 100 TB of it less than 30 days old and 400 TB older than 30 days. If stored on a Spectrum Scale file system, you only have to pay for the 100 TB being stored on disk, not the 400 TB on tape. This greatly reduces the cost to store data while not taking features away from customers.
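The licensing arithmetic from that example can be sketched directly. The $100/TB list price below is a hypothetical placeholder; the real figure depends on your IBM contract.

```python
def scale_license_cost(total_tb: float, hot_tb: float,
                       price_per_tb: float) -> float:
    """License cost under the scheme described above: only data resident
    on disk/flash counts; tape-resident data is excluded."""
    assert 0 <= hot_tb <= total_tb
    return hot_tb * price_per_tb

# 500 TB total, 100 TB hot (disk/flash), hypothetical $100/TB license:
print(scale_license_cost(500, 100, 100.0))   # 10000.0
# versus licensing the full capacity:
print(scale_license_cost(500, 500, 100.0))   # 50000.0
```

In this sketch the tiered scheme cuts the license bill by 80%, which tracks the 100-of-500 TB split in the example.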
For more great information on IBM Spectrum Scale, follow this link and catch up.

What does the perfect match sound like?

February 15, 2017 Leave a comment

The day after Valentine's Day is also known as "Singles Awareness Day," and people flock to the stores for the leftover candy, now priced down to move. Some of those still looking will go to a website to find their love 'match'. But how does that really work in the background?

One website, "Plenty of Fish," claims to be the largest online dating site, with over 90 million registered users. Every day, 55,000 people from every corner of the Earth sign up on the website for help finding that special person.

Here are some staggering facts from their website:

  • PlentyOfFish has over 55,000 signups every day
  • 80% of usage on PlentyOfFish takes place via a mobile phone
  • Over 3.6 million people log on to PlentyOfFish every day
  • Every 2 minutes a couple confirms they met on PlentyOfFish
  • There are over 10 million conversations every day on PlentyOfFish
  • PlentyOfFish creates 1 million relationships every year

Now, this is not a promo for the site, but the data points are interesting. Whether you are looking for someone to spend the rest of your life with or just trying to make friends, among the factors in your paying this site would be the speed and accuracy of its matchmaking algorithm. When I talk to clients about technology and their pain points, we talk a lot about making decisions faster with more accurate data. This is very similar to the data points above.

I saw this video on YouTube and thought it was pretty cool.

Categories: Flash, Uncategorized

IBM Spectrum Accelerate Upgrade V11.5.4

February 14, 2017 Leave a comment

IBM Storage today announced an update to its Spectrum Accelerate platform. This is part of the Spectrum family, which IBM has been selling under different names for over 20 years. The 'Accelerate' portion is based on XIV grid technology and runs on VMware servers in a converged or even hyper-converged environment.

IBM Spectrum Accelerate V11.5.4 delivers the following functions with this release:

  • Data-at-rest encryption enables the use of SEDs to protect user data from exposure to unauthorized personnel
  • Hot encryption enables you to activate encryption at a later stage of the system lifecycle instead of at the time of installation
  • Support for VMware vCenter/ESXi 6.0 enables VMware-based systems to upgrade to a newer version of vSphere
  • Support for 450 GB and 15K rpm drives increases the diverse inventory of hard drives you can install
  • Hyper-Scale Manager is the next-generation, web-based GUI to deliver a holistic and intuitive user experience while sharing the same pane of glass with the whole A-line family: A9000, A9000R, and XIV Gen3

For more information about this release click here.

For a great resource on all things Accelerate, check out these Redbooks.

Categories: Uncategorized

Software Defined Storage: Client Case

February 6, 2017 Leave a comment
Categories: Uncategorized

IBM Hyper Scale Manager: Where Can I Put My Data?

January 27, 2017 Leave a comment
Categories: Uncategorized

IBM Hyper Scale Manager: Volume Creation

January 26, 2017 Leave a comment
Categories: Uncategorized

How to Save Money by Buying Dumber Flash

October 19, 2016 Leave a comment

Here's a quick and simple new way to look at storage: stop buying flash arrays that offer a bunch of bells and whistles. There are two main reasons: 1) it increases your $/TB, and 2) it locks you into the vendor's platform. Let's dive deeper.

1. If you go out and buy an All Flash Array (AFA) from one of the 50 vendors selling them today, you will see a wide spectrum, not just in the media (eMLC, MLC, cMLC) but also in features and functionality. These vendors are all scrambling to pack in as many features as possible in order to reach a broader customer base. You, the customer, will be checking which AFA has this or is missing that, and it can become an Excel pivot table from hell to manage. Vendors raise the price per TB on those solutions on the argument that more features mean more usable storage or better data protection. But the reality is that you are paying the bills for the developers coding the new shiny feature in some basement. That added cost is passed down to the customer and does increase your purchase price.

2. The more features you use on a particular AFA, the harder it is to move to another platform if you want a different system. This is what we call 'stickiness'. Vendors want you to use their features more and more, so that when they raise prices or push you to upgrade, it is harder for you to look elsewhere. If you have an outage, or something happens and your boss comes in and says, "I want <insert vendor name> out of here," are you going to tell him the whole company runs on it and migrating will take 12 to 18 months?

I bet you're thinking you need those functions because you have to protect your data, or because you get more effective storage out of them. But what you can do is take those functions away from the media and bring them up into a virtual storage layer above it. This way you can move 'dumb' storage hardware in and out as needed, based more on price and performance than on features and functionality. By moving the higher functionality into the virtual layer, an AFA can be swapped out easily, letting you always shop for the lowest-priced system based solely on performance.

Now you're thinking that the cost of licenses for this function and that feature in the virtualization layer just moves the numbers around, right? Wrong! With IBM Spectrum Virtualize you buy a license for so many TBs, and that license is perpetual. You can move storage in and out of the virtualization layer without having to increase the number of licenses. For example: you purchase 100 TB of licenses and virtualize a 75 TB Pure system. Your boss comes in and says he needs another 15 TB for a new project coming online next week. You can go out to your vendors, choose a 'dumb' AFA, and insert it into the virtual storage layer, and you still get all of the features and functions you had before. Then a few years go by and you want to replace the Pure system with a nice IBM flash system. No problem: with zero downtime you can insert the FlashSystem 900 under the virtual layer, migrate the data to the new flash, and the hosts do not have to be touched.
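The perpetual license accounting in that example can be sketched as a simple capacity pool. The class and method names here are hypothetical, for illustration only; the point is that arrays can come and go while the licensed total never needs to grow.

```python
class VirtualizeLicense:
    """Toy model of a perpetual per-TB virtualization license pool."""

    def __init__(self, licensed_tb: int):
        self.licensed_tb = licensed_tb
        self.arrays = {}  # array name -> TB virtualized under the layer

    def used_tb(self) -> int:
        return sum(self.arrays.values())

    def add_array(self, name: str, tb: int) -> None:
        if self.used_tb() + tb > self.licensed_tb:
            raise ValueError("would exceed licensed capacity")
        self.arrays[name] = tb

    def remove_array(self, name: str) -> None:
        # Capacity freed by retiring an array returns to the pool.
        del self.arrays[name]

lic = VirtualizeLicense(100)
lic.add_array("Pure", 75)            # existing array virtualized
lic.add_array("dumb-AFA", 15)        # boss's new 15 TB project, still under 100
lic.remove_array("Pure")             # retire the Pure system...
lic.add_array("FlashSystem900", 75)  # ...swap in IBM flash, no new licenses
print(lic.used_tb())  # 90
```

Every swap in the walkthrough above fits inside the original 100 TB license, which is the whole economic argument.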

The cool thing I see with this kind of virtualization layer is the simplicity: no need to know how to program APIs, or to bring in a bunch of consultants for a long, drawn-out study that ends with "go to the cloud." In one sense, this technology creates a private cloud of storage for your data center. But the point is that not having to buy feature licenses every time you buy a box lowers your $/TB and gives you the true freedom to shop the vendors.