Most storage professionals consider performance, scalability and availability to be the primary considerations. Those issues remain important, and in our follow-up to this article we will cover scalability, availability, resilience, advanced DR and QoS management in detail. For now, though, we suggest that usability and efficiency be added to the consideration list alongside these capabilities.



Look to Alternative Vendors


Obviously the incumbent storage vendor is going to be considered a potential candidate for the storage refresh, but don’t take a ‘big three’ mentality toward vendor selection. Look to newer vendors. For tier one this typically means start-ups will be excluded, but public companies that have been in business for five or six years are well worth considering. The traditional tier one storage vendors have all become distracted from offering just primary storage; they now often offer backup, security and other services, which may have slowed their technology development in primary storage. Newer vendors tend to be focused squarely on primary storage and, as a result, are delivering more advanced technology at a rapid pace.



Look for Easy


Tier one storage should no longer require a team of professional services people to install the product and get it running, nor should it require weeks to bring into production. While predicting the future is certainly difficult, it’s safe to assume that most IT organizations will not see dramatic increases in their budgets, nor dramatic decreases in storage demand. Something that’s difficult to install is often difficult to operate, and will require professional services during that phase as well. Look for a system that’s easy enough to be installed quickly with minimal outside help and operated with no outside help at all.



Look to Shed Capacity


The primary expectation, and probably the most surprising, is that the storage manager should plan to purchase less capacity in the refresh than is already on the data center floor. The traditional plan of buying twice as much storage as is currently installed no longer applies in the new decade. In reality, the current economy probably won’t allow the purchase of that much storage anyway, and neither will concerns about power consumption or floor space.


As we discuss in our Thin Provisioning White Paper, as much as 75-80% or more of allocated disk capacity is not actually being used. That storage is captive to a server and its application. Thin provisioning can reduce this waste: it allows a storage manager to create volumes from a global storage pool at whatever size the application owner demands, while the capacity is actually consumed only as data is written to disk.
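To make the concept concrete, here is a minimal Python sketch of the thin provisioning idea: a volume presents whatever logical size was requested, but physical pages are drawn from the shared pool only on first write. The class names, page size and pool mechanics are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of thin provisioning: logical volume size is decoupled
# from physical allocation; shared pool pages are consumed only on first write.
# All names and sizes here are illustrative, not a vendor API.

PAGE_SIZE = 16 * 1024  # allocation granularity in bytes (illustrative)

class ThinPool:
    def __init__(self, physical_pages):
        self.free_pages = physical_pages          # shared physical capacity

    def allocate_page(self):
        if self.free_pages == 0:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.free_pages -= 1

class ThinVolume:
    def __init__(self, pool, logical_size):
        self.pool = pool
        self.logical_size = logical_size           # what the application owner asked for
        self.mapped = {}                           # logical page -> backing bytes

    def write(self, offset, data):
        page = offset // PAGE_SIZE
        if page not in self.mapped:                # first touch: draw from the pool
            self.pool.allocate_page()
            self.mapped[page] = bytearray(PAGE_SIZE)
        start = offset % PAGE_SIZE
        self.mapped[page][start:start + len(data)] = data

# A 2 TB "volume" consumes no pool capacity until the application writes to it.
pool = ThinPool(physical_pages=1_000_000)
vol = ThinVolume(pool, logical_size=2 * 1024**4)
vol.write(0, b"hello")   # only now is a single pool page consumed
```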


The challenge with thin provisioning, however, is that in its original or ‘1.0’ form it only applied to net-new projects with net-new data. In other words, the thin provisioning capability had to be in place before new data was written. While this certainly still had value, it made the capability useless for migrations from an existing storage platform, because most primary storage migration tools perform very high speed block image transfers and the typical receiving storage platform cannot differentiate between blocks of free space and blocks that contain written data. The only way to avoid this is for the IT team to restore the entire environment from tape, which most enterprises would not even consider given the time required. The result would be a migration that couldn’t take advantage of the thin technology, and a storage refresh that would still need to be larger than the storage in place.


Companies like 3PAR have advanced thin provisioning to be ‘thin aware’ during migrations by adding a zero-block detect capability. Most operating systems can zero out free space, or third-party utilities can add that capability. With zero-block detection in place, blocks can be analyzed as they are transferred to the new storage system: if a block contains only zeros, the thin provisioned system knows not to store it and, as a result, reduces the amount of capacity required up front.
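The sketch below illustrates the zero-detect idea in Python terms: during a block-image migration, each transferred block is inspected, and all-zero blocks (previously zeroed free space) are simply skipped rather than written to the thin target. The block size and function names are assumptions for illustration only.

```python
# Sketch of zero-block detection during a block-image migration:
# blocks that are entirely zero are not written to the thin target,
# so the new array starts out "thin". Sizes and names are illustrative.

BLOCK_SIZE = 64 * 1024
ZERO_BLOCK = bytes(BLOCK_SIZE)

def migrate(source_blocks, target):
    """Copy a stream of fixed-size blocks, skipping all-zero blocks."""
    stored = skipped = 0
    for lba, block in enumerate(source_blocks):
        if block == ZERO_BLOCK:          # zero-detect: nothing worth storing
            skipped += 1
            continue
        target.write_block(lba, block)   # only real data consumes capacity
        stored += 1
    return stored, skipped
```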


In addition, advanced thin provisioning should also help maintain a volume’s “thinness”. Traditional thin provisioning cannot release data blocks once they’ve been deleted; the technology only aids the initial provisioning, and once data is written the capacity is never given back. Advanced thin provisioning technologies allow zeroed blocks to be released to the global storage pool, making that capacity available to other connected servers.
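Continuing the thin provisioning sketch above, a hedged illustration of “staying thin”: when the host zeroes out deleted data, the array can unmap those pages and return them to the shared pool. In real arrays this is typically driven by mechanisms such as SCSI UNMAP or zero detection; the function and field names below are illustrative.

```python
# Sketch of thin reclamation: fully-zeroed pages are unmapped from the volume
# and returned to the global pool for other servers to use.
# Builds on the illustrative ThinPool/ThinVolume classes sketched earlier.

PAGE_SIZE = 16 * 1024  # must match the provisioning sketch's page size

def reclaim_zero_pages(volume, pool):
    """Release any fully-zeroed mapped pages back to the global pool."""
    zero_page = bytes(PAGE_SIZE)
    for page, data in list(volume.mapped.items()):
        if bytes(data) == zero_page:
            del volume.mapped[page]      # drop the mapping for this page
            pool.free_pages += 1         # capacity is now available pool-wide
```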


Advanced thin provisioning requires compute cycles from the storage system. Be aware that unless the manufacturer has provided additional storage compute power, these capacity savings may come at the expense of performance. 3PAR, as an example, has addressed this challenge by developing a special ASIC to handle the thin reclamation task, among other things.


Part of the reduction in storage should also come from systems that have a reservation-less snapshot capability. A snapshot allows the storage manager to see a volume as it appeared at the moment the snapshot was taken. What makes snapshots powerful is that the system does not need a separate copy of the volume to accomplish this: by preserving the snapshot images and then tracking just the changes to the volume, the system can present both an active volume and a preserved volume from largely the same data set.


In legacy systems, anywhere from 20-100% of capacity has to be (or is strongly recommended to be) set aside to manage these snapshots and track the changed data. More modern storage systems like 3PAR’s can offer unlimited snapshot capability without the need for reservations. 3PAR’s thin copy software is built on the same core technology as its thin provisioning, drawing capacity only as writes occur (in this case to the base volume or to a writeable snapshot). No capacity ever needs to be dedicated up front to a snapshot.
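As a rough illustration of why no reservation is needed, the sketch below models a reservation-less, copy-on-write style snapshot in Python: the snapshot is empty at creation and records a block only when the base volume overwrites it. This is a generic sketch of the concept, not 3PAR’s actual thin copy implementation, and the names are assumptions.

```python
# Sketch of a reservation-less snapshot: creation copies and reserves nothing;
# a block is preserved only the first time the base volume overwrites it.

class SnapVolume:
    def __init__(self):
        self.blocks = {}        # live data: block number -> bytes
        self.snapshots = []     # each snapshot holds only diverged blocks

    def create_snapshot(self):
        snap = {}               # empty: consumes no capacity up front
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        for snap in self.snapshots:
            # preserve the old contents only on first overwrite after the snapshot
            if block_no not in snap and block_no in self.blocks:
                snap[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        # a snapshot presents preserved blocks plus unchanged live blocks
        return snap.get(block_no, self.blocks.get(block_no))

vol = SnapVolume()
vol.write(0, b"original")
snap = vol.create_snapshot()        # instant, zero capacity consumed
vol.write(0, b"updated")            # old block preserved only now
assert vol.read_snapshot(snap, 0) == b"original"
```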


Look for Universally Improved Performance


True wide striping will allow you to get the performance you need with far less capacity. One aspect of fine grained virtualization is eliminating the need to decide how best to build RAID groups to balance performance against capacity management. On traditional systems that build-out is a time consuming process storage managers use to extract better performance: certain servers are assigned to high drive-count but low drive-capacity RAID groups. This, however, increases the wasteful allocation of disk capacity and starves the performance potential of servers not assigned to the group.


Fine grained virtualization allows data to be wide striped, meaning that every drive in the system can store a portion of each volume. The result is lower drive counts and higher utilization than the manual RAID build-out method, along with performance that is universally high across all servers.
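A minimal sketch of the wide striping idea follows: a volume is broken into small chunks that are distributed round-robin across every drive in the system, rather than being confined to one hand-built RAID group. The chunk size, drive count and round-robin placement are illustrative assumptions; real systems layer RAID protection and more sophisticated placement on top of this.

```python
# Sketch of wide striping: each volume is broken into small chunklets spread
# across every drive, so every volume sees the aggregate spindle count.
# Chunk size, drive count and placement policy are illustrative.

CHUNKLET = 256 * 1024 * 1024   # 256 MB chunklet (illustrative)

def stripe_volume(volume_size, drive_count):
    """Return a mapping of chunklet index -> drive, round-robin across all drives."""
    chunklets = -(-volume_size // CHUNKLET)       # ceiling division
    return {i: i % drive_count for i in range(chunklets)}

# A 1 TB volume striped over 160 drives touches every spindle,
# instead of the handful of drives in a manually built RAID group.
layout = stripe_volume(1 * 1024**4, drive_count=160)
print(f"chunklets: {len(layout)}, drives touched: {len(set(layout.values()))}")
```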



Look for Fine Grained Storage Virtualization


Storage virtualization is a term used to describe different capabilities within the storage infrastructure, and there are two predominant approaches. Storage system virtualization is the ability to take different storage vendors’ hardware and manage it under one umbrella. Essentially, this approach peels the storage management software off one of the storage systems and lets it manage the others by filtering out their own value-added features; it effectively dumbs the arrays down to JBODs and RAID controllers.


While this may have some value in environments looking to repurpose their storage, for those looking to refresh primary storage it may not be appropriate. Instead, they should look to a fine grained virtualization capability, which works more like server virtualization: from inside the device. With this technique the storage manager is no longer shackled to the management of LUNs, RAID groups and the like. Fine grained storage virtualization virtualizes all resources within the array, allowing the storage to be managed as a single pool, without losing the intrinsic information about the underlying component resources (disks, ports, CPUs, cache), which is then allocated to a server.


In part two of this series we will explore the more intangible aspects to look for when refreshing tier one storage. These intangibles maintain cost efficiency while at the same time extending the life of the storage system.

George Crump, Senior Analyst

This Article Sponsored by 3PAR

- Part One