From The Reg
Looks like a problem in a chip on the cache cards. NetApp is aware of the issue and is working to fix it.
This IBM® Redpaper® publication provides a basic introduction to IBM System Storage® N series virtualization using the Virtual Storage Console (VSC) 2.0 in VMware vSphere 4.x environments. It explains how to use the Virtual Storage Console with VMware vSphere 4 environments and the benefits of doing so. Examples are given of how to install and set up VSC, a significant N series software product that works with VMware.
VSC provides local backup and recovery capability. You have the option to replicate backups to a remote storage system by using SnapMirror relationships. Backups can be performed on individual virtual machines or on datastores, with the option of updating the SnapMirror relationship as part of the backup on a per-job basis. Similarly, restores can be performed at the datastore level or at the individual virtual machine level.
IBM System Storage N series, in conjunction with VMware vSphere 4, helps complete the virtualization hierarchy by providing both a server and a storage virtualization solution. Although this configuration can further assist with other areas of virtualization, such as networks and applications, those areas are not covered in detail in this paper. This is a companion IBM® Redpaper® to “IBM System Storage N series with VMware vSphere 4.1”, SG24-7636.
As with movies, IBM Storage announces new releases on Tuesdays, and today is no different. There is a ton of good information in today's release, and I have summarized most of it below. Like a Thanksgiving dinner, we have a little of this and a little of that for our IBM Storage family. I am super excited to see IBM continue to partner with NetApp on both hardware and software. I know most of the NetApp team that focuses solely on IBM, and they work really hard to keep the IBM machine supplied with information. I have heard the new entry-level N series systems are rocking with performance and are cluster-mode ready. I am also trying to include more from our TSM software. In the future, it's going to be the software that helps drive better utilization of storage and lower CAPEX/OPEX.
So grab a bag of popcorn, a box of Junior Mints, and an ice-cold Coke and enjoy the show. The only difference is we don't charge you $15 a ticket for the show.
IBM N series
IBM keeps investing in the NetApp technology and is releasing the anticipated N3220 and N3240. The N3220 is an entry-level system that uses internal SAS disk in a 2U frame. Clients can choose from 12 to 24 SFF SAS disk drives and expand to a max of 144 spindles. The maximum raw storage for the system is 374 TB, not too shabby for a 2U box.
The big brother to the N3220 is the N3240. This system comes in at 4U and allows for the use of LFF SATA drives. Just like the smaller version, external disk expansion can grow to a max of 144 spindles and top out at 432 TB of storage. Both systems offer a single-controller (A12/A14) or dual-controller (A22/A24) setup to accommodate even the choosiest clients. The single controller can be upgraded to a dual controller, but this does require a disruption in service.
A couple of improvements over the N3400 are the number of ports and I/O slots. These new systems have eight 1 GbE onboard ports, four SAS ports, and two I/O slots that can be used to add 10 Gb Ethernet or more disk expansion. Another upgrade is the amount of memory these systems come with compared to the older N3400: the new systems have 12 GB of memory in the dual-controller configuration versus the 8 GB in the N3400. This can only lead to better performance from both a throughput and a cache standpoint.
The systems carry the full protocol portfolio (FC, NFS, SMB, iSCSI) and the Data ONTAP operating system that NetApp has been using for the last 10 years or so. These systems can be used in small remote offices or test environments where protocol flexibility is needed. General availability for both systems will be March 9th.
Keeping with the N series, IBM also announced today support for Full Disk Encryption on the EXN3500. This feature (FC #4201) allows for encryption of data at rest on self-encrypting drives. A few key points:
Implements full disk encryption at the hardware level
Prevents access to data until the drive is unlocked by an authorized administrator
Supports storage efficiency: deduplication and storage compression
Supports IDP: Backup/recovery, SnapMirror®, SnapProtect™, and SnapVault®
File system and network independent
General availability will be March 9th.
IBM Storage also announced today that Tivoli Storage Manager has updated FastBack for Workstations to version 6.3. FastBack backs up files for laptop and desktop users with real-time protection. This means you can back up your most important files the moment they are saved, or you can back up files on a traditional schedule. The other cool thing about FastBack is that you don't have to be online to back up your data. The system can create multiple versions of a file on the local system for quick recovery, then send an additional copy to a remote storage device for DR.
Here is what is new in FastBack version 6.3:
A wizard to help simplify recoveries when a complete system rebuild is required
Automatic expiration of deleted files from remote storage
Significant increase to the include and exclude list capacity
Simplified email backup configuration
Support for using generic IPv6 addresses for remote backup configuration
Integration with IBM Tivoli® Storage Manager V6.3 as a remote system
General availability is planned for March 16th for those who want to download the software; if you require a CD/DVD/USB stick, you will have to wait until April 20th.
As in any release, there are things that make a big splash, some that are pretty cool, and then come the others. 'Hangers-on' I think is what they are called. First up is a system built to back up IBM server systems. The 7226 is a multi-media system that attaches to Power6, Power7, System x, and BladeCenter servers via four SAS ports. It includes a half-high LTO5 tape drive that can back up data at up to 280 MB/s. The system also comes equipped with a DVD-RAM that uses a SAS slim optical drive with both USB and SAS interface options.
IBM Storage also announced support for the 8 Gbps FC Extended Reach 40 km SFP+ on the Cisco MDS 9148. This SFP is designed to provide auto-sensing FC connectivity for 2, 4, and 8 Gbps ports. The maximum supported distance is now extended to 31 km. Available March 2nd.
To find more information on this release and all of the releases, click here.
My father is a retired teacher who loves to work with his hands. I can remember, very early on in my upbringing, him teaching me that it is good to measure twice and cut once. Whether it was building a deck or just a birdhouse, the point was that it took more time to cut something wrong and then have to re-cut the board shorter, or even waste the old board and cut a whole new one.
When I was preparing this article, I remembered having to learn that lesson the hard way, and how much effort really goes into that second cut. The problem in the storage industry is partitions misaligned by the move from 512-byte sectors to new 4096-byte sectors. This has to be one of the bigger performance issues with virtualized systems and new storage.
Disk drives in the past were limited to 512-byte sectors. This was OK when you had a 315 MB drive, because the number of 512-byte blocks was not nearly as large as in a 3 TB drive in today's systems. Newer versions of Windows and Linux will transfer 4096-byte data blocks that match the native hard disk drive sector size. But during migrations, even new systems can have an issue.
There is also something called 512-byte sector emulation, where each 4K sector on the hard disk is remapped as eight 512-byte logical sectors, and every read and write is issued in those 512-byte units.
When an older OS is installed or migrated, it may or may not align the first block of the eight-block group with the beginning of a 4K sector, leaving the partition offset by one or more 512-byte blocks. As reads and writes are laid down on the disk, this misalignment of the logical sectors from the physical sectors means the eight 512-byte blocks now occupy two 4K sectors.
This forces the disk to perform additional reads and/or writes across two physical 4K sectors. It has been documented that sector misalignment can cause a reduction in write performance of at least 30% on a 7200 RPM hard drive.
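The arithmetic behind this is simple enough to sketch. The short Python below (my own illustration, not any vendor's tool) counts how many physical 4K sectors a 4 KiB I/O touches on a 512-byte-emulated drive; the start LBAs used in the examples (63 and 2048) are the classic legacy and modern partition start sectors.

```python
# Illustrative sketch: 512-byte logical sectors emulated on 4096-byte
# physical sectors (512e). A 4 KiB I/O that starts on a 4K boundary
# touches one physical sector; a misaligned one straddles two.

LOGICAL = 512    # logical (emulated) sector size in bytes
PHYSICAL = 4096  # physical sector size in bytes

def physical_sectors_touched(start_lba: int, num_sectors: int) -> int:
    """Count the 4K physical sectors covered by an I/O beginning at
    logical sector start_lba and spanning num_sectors 512-byte sectors."""
    first_byte = start_lba * LOGICAL
    last_byte = first_byte + num_sectors * LOGICAL - 1
    return last_byte // PHYSICAL - first_byte // PHYSICAL + 1

# An aligned 4 KiB write (start LBA divisible by 8) hits one physical sector:
print(physical_sectors_touched(2048, 8))  # -> 1
# The legacy DOS start sector of 63 is misaligned and straddles two:
print(physical_sectors_touched(63, 8))    # -> 2
```

Every one of those straddling writes turns into a read-modify-write on two physical sectors, which is where the performance penalty comes from.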
This issue is only magnified when other file systems are layered on top of the misalignment. When using a hypervisor like VMware or Hyper-V, the virtual disk image itself can be misaligned, causing even further performance degradation.
There are hundreds of articles and blogs written on how to check your disk alignment. A simple Google search for "disk sector alignment" will show you this has been a very popular topic. Different applications have different ways of checking, and possibly realigning, the sectors.
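The core check those articles describe boils down to one rule: a partition is 4K-aligned when its starting byte offset is a multiple of 4096. A minimal sketch, with example start sectors rather than values read from a real disk:

```python
# Hedged sketch of the basic alignment test. The start sectors below are
# well-known examples, not queried from actual hardware.

def is_4k_aligned(start_sector: int, sector_size: int = 512) -> bool:
    """True if the partition's first byte falls on a 4096-byte boundary."""
    return (start_sector * sector_size) % 4096 == 0

# Legacy DOS/Windows XP partitions started at sector 63 -> misaligned:
print(is_4k_aligned(63))    # False
# Windows Vista/7 and modern Linux default to sector 2048 -> aligned:
print(is_4k_aligned(2048))  # True
```

On a live system you would feed this the start sector reported by your partitioning tool of choice; the tool described next automates both the check and the fix.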
One application that can help you identify and fix these issues is the Paragon Alignment Tool. This tool is easy to use and will automatically determine whether a drive's partitions are misaligned. If there is misalignment, the utility properly realigns the existing partitions, including boot partitions, to 4K sector boundaries.
I came across this tool when looking for something to help N series customers who have misalignment issues in virtual systems. One of the biggest advantages I saw is that this tool can align partitions while the OS is running and does not require snapshots to be removed. It can also align multiple VMDKs within a single virtual machine.
For more information on this tool and alignment check out the Paragon Software Group website.
In the end, your alignment will affect how much disk space you have, how much you can dedupe, and the overall performance of your storage system. It pays to check this before you start having issues, and if you are already seeing problems, I hope this can help.
For the last six years, IBM has been selling the N series gateway, and it has been a great tool for adding file-based protocols to traditional block storage. A gateway takes LUNs from SAN storage and overlays its own operating system. One of the 'gotchas' with the gateway is that the storage has to be net new, meaning it cannot take an existing LUN that holds data and present it to another device.
Traditionally, a gateway was put in front of older storage to refit the old technology with new features. In the case of N series, a gateway can add features like snapshots, deduplication, and replication. In the past few years, we have added the option to use both external and internal disk with a gateway system. The only caveat to this solution is that you have to order the gateway license when the system is initially ordered. A filer cannot be changed into a gateway system.
Another scenario we see in the field is a customer looking to purchase a new system where most of the requirement is SAN based and only a small portion is NAS. Putting a gateway in front of an XIV became a very popular solution for many years and still is today. IBM also released the SONAS platform, which can be used as a NAS gateway in front of the V7000, SVC, and XIV.
I have seen some architects who wanted to use a gateway in an all-NAS solution with new disks. This only complicates the solution by adding switches and multiple operating systems.
If we look at storage virtualization, the gold standard has been the SAN Volume Controller (SVC). This system can take new or existing LUNs from other storage systems and present them as LUNs to another host. The data can be moved from one storage system to another without taking the LUN offline. The IBM V7000 also has this virtualization feature, as the code base for both systems is the same. The cool feature that IBM has added to the V7000 is the ability to do both NAS and SAN protocols. This now competes in the same space as the EMC VNX and NetApp FAS systems.
The virtualization in the SVC code is somewhat similar to the gateway code in the N series: both can virtualize a LUN from another storage platform. If you need to keep the data on the older system intact, then an SVC device is needed, and the movement of data between storage systems is much easier with the SVC. On the other hand, the N series gateway has more functionality, like deduplication and easy replication, than the SVC.
Finally, the SVC code was built by IBM to sit on top of complicated SAN environments. Its robust nature is complemented by an easier-to-use GUI borrowed from the XIV platform. The N series gateway is somewhat easier to set up but is not meant for large, complicated SAN environments.
Both systems are good at what they do, and people try to compare them in the same manner. I would tell them: yes, they both virtualize storage, but they are used in different ways.