NFS-based NAS devices, when fully leveraged, become a virtual storage area for potentially the widest variety of data types and access patterns. Storage managers are no longer forced to buy a block storage device for database applications, another for messaging, one more for server virtualization, and then a separate NAS device for file sharing. Instead, thanks to the capabilities of the modern NAS, they can move almost all data types to a single storage platform. Unification has truly arrived, not by hosting Fibre Channel, iSCSI, and NFS separately on a single box, but by having NFS emerge as the storage protocol for all data access. This single-protocol, single-platform approach greatly simplifies management and improves purchasing efficiencies.


However, these widely mixed workloads create a new storage I/O profile that is more random and IOPS intensive than ever. Most data centers consider the storage infrastructure supporting a virtual environment like VMware their most random I/O generator. Add several standalone database servers running Oracle or MySQL, combine them with file-centric environments like home directories, sequential processing applications, or analytics processing, and you have a ‘random I/O monster’, one that storage managers are looking to tame so they can reap the cost benefits while still delivering on users’ performance expectations.



NFS Acceleration Options


Many options are available to the data center manager trying to accelerate NAS system performance. Most managers start by increasing hard drive spindle count, on the rationale that the more spindles the NAS head has to work with, the faster it can perform read and write operations. There are several challenges with this option. The first is that the application has to generate enough simultaneous storage I/O requests to keep the added spindles busy. If it can’t, adding spindles provides no performance improvement and latency becomes the issue instead. Latency is the time from a client requesting data to when the data arrives, and for a mechanical hard drive it includes the time for the drive head to seek to the correct track and for the platter to rotate the requested sector under the head.
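As a rough illustration of why spindle count only helps when the workload can keep every drive busy, consider the back-of-the-envelope model below. The per-drive seek and rotation figures are illustrative assumptions, not measurements from any particular product.

```python
# Rough model: aggregate IOPS from a spindle pool is capped both by the
# number of drives and by how many I/O requests are outstanding at once.
# The timing figures are illustrative assumptions, not vendor specs.

AVG_SEEK_MS = 4.0      # assumed average seek time for a fast mechanical drive
AVG_ROTATION_MS = 2.0  # assumed average rotational delay (half a revolution)

def per_drive_iops(seek_ms=AVG_SEEK_MS, rotation_ms=AVG_ROTATION_MS):
    """Random-I/O operations per second one mechanical drive can sustain."""
    service_time_ms = seek_ms + rotation_ms
    return 1000.0 / service_time_ms

def pool_iops(spindles, outstanding_requests):
    """Only as many drives as there are concurrent requests can be busy."""
    busy_drives = min(spindles, outstanding_requests)
    return busy_drives * per_drive_iops()

# Doubling spindles from 48 to 96 changes nothing if the application
# only ever has 32 requests in flight at a time.
print(pool_iops(48, 32))   # ~5,300 IOPS
print(pool_iops(96, 32))   # still ~5,300 IOPS
print(pool_iops(96, 128))  # ~16,000 IOPS once concurrency catches up
```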


Interestingly, even if the application or environment can generate a large number of simultaneous storage I/O requests, the ‘more spindles’ approach may still not be feasible. The cost to add enough hard drive spindles to match those requests can be exorbitant, and the space, power, and cooling that shelves of disk drives require only worsen the economics. Finally, mechanical drives, no matter the count, are limited by simple physics: they can only rotate and seek so fast. Any investment in mechanical drive technology is fundamentally constrained.


Due to these limitations, many data centers are turning their attention to solid state storage. The near-zero latency, non-rotational nature of these devices makes them ideal candidates for increasing storage I/O performance. Haphazard implementation of solid state storage, however, creates its own issues. The technology is so fast that understanding how to best use it, especially in an NFS environment, can confuse both customers and suppliers.


Early attempts at NFS acceleration consisted primarily of inserting a tray of solid state disks (SSDs) directly into the NAS system. The user then had to decide which data needed to be on that high-performance tier and when. Data migration to the SSD tier was manual and often required that the application needing the performance be taken offline while the data was moved. This did provide fixed acceleration for a particular application, but the process wasn’t dynamic: it required manual identification of the data to accelerate and then manual movement of that data into the solid state tier. And in the end, the process often did not make the best use of the premium SSD resource anyway. The data originally moved to the SSD tier was typically never removed, and in many cases the SSD tier was not regularly refreshed. To maximize the investment, SSDs should be completely full of the most important data at all times, and their contents should change constantly to match the needs of the environment.


To alleviate these issues, some vendors added the ability to automatically promote active files, or blocks of files, to the SSD tier. While this solved the problem of manual data migration, the automated tiering technology used relatively simple promotion algorithms. It could take days to promote data to the SSD tier, and there was no way to force promotion of a particular data set. Gaining access to this capability required, at a minimum, that the NAS be upgraded; in most cases the NAS appliance had to be replaced, because driving the SSD tier and managing the promotion process placed additional work on the NAS controller. Finally, many NFS environments have a mixture of NAS systems from a variety of vendors, which requires not only upgrading each system but also learning how each vendor delivers automated tiering, something that can be quite complex and problematic.
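To make the limitation concrete, a first-generation promotion scheme of the kind described above might look something like the following sketch. The scan interval, threshold, and capacity figures are assumptions for illustration, not any vendor’s actual algorithm.

```python
# Naive automated tiering: a periodic scan promotes the hottest blocks.
# Because promotion happens only once per scan cycle, a newly hot data
# set can wait a long time before it reaches the SSD tier, and there is
# no way for an administrator to force it there sooner.
from collections import Counter

SCAN_INTERVAL_HOURS = 24        # assumed: promotion evaluated once a day
PROMOTION_THRESHOLD = 1000      # assumed: accesses per scan window
SSD_TIER_CAPACITY_BLOCKS = 10_000

access_counts = Counter()       # block_id -> accesses this window
ssd_tier = set()                # block_ids currently on SSD

def record_access(block_id):
    access_counts[block_id] += 1

def run_promotion_scan():
    """Executed once per SCAN_INTERVAL_HOURS by the NAS controller."""
    hot = [b for b, n in access_counts.most_common()
           if n >= PROMOTION_THRESHOLD]
    for block in hot[:SSD_TIER_CAPACITY_BLOCKS]:
        ssd_tier.add(block)     # extra controller work: copy block to SSD
    access_counts.clear()       # start counting the next window

# Example: heavy reads of block 42 today don't land it on SSD until the
# next scheduled scan runs, no matter how urgent the workload is.
for _ in range(1500):
    record_access(42)
run_promotion_scan()
print(42 in ssd_tier)           # True, but only after the scan interval
```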



NFS Caching


There is a technology that has been used in data centers for years that can be an ideal fit for the NFS random I/O monster: caching. Caching analyzes data access to make sure the most active portion is available in memory for faster retrieval. Classic caching, however, has been limited to very small amounts of memory, often integrated into expensive storage controller technology, and with a cache that small the likelihood of a cache miss on a highly random, busy NAS appliance is high. With the advent of more economical DRAM and high-capacity, low-cost flash storage, extremely high capacity, high performance caching solutions that seldom miss are now possible.
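A minimal sketch of the underlying idea, assuming a simple least-recently-used (LRU) policy, shows why cache capacity matters so much for random workloads: when the cache is far smaller than the active data set, most lookups miss. The block counts and request counts below are assumptions chosen only to make the effect visible.

```python
# Minimal LRU read cache illustrating why cache capacity drives the hit
# ratio on random workloads. Sizes are illustrative assumptions.
import random
from collections import OrderedDict

def hit_ratio(cache_blocks, working_set_blocks, requests=200_000):
    cache = OrderedDict()          # block_id -> data, ordered by recency
    hits = 0
    for _ in range(requests):
        block = random.randrange(working_set_blocks)  # uniform random I/O
        if block in cache:
            hits += 1
            cache.move_to_end(block)                  # mark most recent
        else:
            cache[block] = object()                   # fetch from disk
            if len(cache) > cache_blocks:
                cache.popitem(last=False)             # evict least recent
    return hits / requests

# A cache that is tiny relative to the active data misses almost every
# time; a DRAM-plus-flash cache sized near the working set rarely does.
print(hit_ratio(cache_blocks=100, working_set_blocks=10_000))    # ~0.01
print(hit_ratio(cache_blocks=9_000, working_set_blocks=10_000))  # ~0.9
```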


Some NAS vendors have begun to provide cache cards in their systems to improve performance. While these cards are more cost effective per gigabyte than the DRAM-based cache in a traditional controller architecture, they can only accelerate the particular NAS in which they are installed. This means every NAS in the environment would need its own cache card, and that cache memory could not be shared across NAS heads, once again lowering utilization of a premium-priced resource.


A better solution may be to leverage a solid state caching appliance, like those offered by Cache IQ, where the memory is installed in an external cluster of appliances. These devices sit in-line between the NAS heads and the connecting servers or clients, and all data is analyzed for cache appropriateness. In default operation the most active data is stored in a combination of DRAM and solid state storage within Cache IQ’s appliance. For data that is business critical but not the most active, simple policies can be written for the Cache IQ appliance to guarantee that the most important data is always served from high-performance cache. A cache appliance brings a universal deployment capability, so multiple NAS heads, even from different vendors, can be accelerated from a single NFS cache.
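As a purely hypothetical illustration of how such policies can layer on top of activity-based caching, the decision logic reduces to something like the sketch below. The export paths, threshold, and policy format are invented for this example and are not Cache IQ’s actual interface.

```python
# Hypothetical caching decision: data matching a pinning policy is always
# kept in cache; everything else competes on recent activity. Paths and
# thresholds are invented for illustration only.

PINNED_EXPORTS = ["/vol/oracle_redo", "/vol/vmware_gold_images"]  # assumed
ACTIVITY_THRESHOLD = 5   # assumed: reads in the recent window to qualify

def should_cache(path, recent_reads):
    """Return True if this file's blocks belong in the DRAM/flash cache."""
    if any(path.startswith(export) for export in PINNED_EXPORTS):
        return True                      # business-critical: always cached
    return recent_reads >= ACTIVITY_THRESHOLD  # otherwise, most active wins

print(should_cache("/vol/oracle_redo/log01.dbf", recent_reads=0))  # True
print(should_cache("/vol/home/alice/notes.txt", recent_reads=2))   # False
```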


This approach allows for a non-disruptive implementation of SSD technology and provides an automated best use of that technology. It is also one of the safest ways to get started with cache-based solid state storage. Since most cache appliances act only on reads, all writes are sent directly to the NAS device. This means that snapshots, replication, and backup processes are all unaffected by the use of the cache, and data protection is assured. Finally, if a cache failure does occur there is no data loss and operations continue reliably, although performance is limited to that of the NAS device.
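A sketch of this read-only, write-through behavior makes the safety argument concrete: because the NAS always holds the authoritative copy, the cache can be lost at any moment without losing data. The in-memory dictionaries below stand in for the NAS back end and the caching appliance; they are illustrative, not a real protocol implementation.

```python
# Read cache in front of an authoritative NAS store. Writes always go to
# the NAS (so snapshots, replication, and backups see every change); the
# cache only serves reads and can be discarded at any time without loss.

nas_store = {}    # stands in for the NAS device (authoritative copy)
read_cache = {}   # stands in for the caching appliance (disposable copy)

def write(path, data):
    nas_store[path] = data        # write-through: NAS is always current
    read_cache.pop(path, None)    # invalidate any stale cached copy

def read(path):
    if path in read_cache:        # cache hit: served from DRAM/flash
        return read_cache[path]
    data = nas_store[path]        # cache miss: fall back to the NAS
    read_cache[path] = data
    return data

write("/vol/home/report.doc", b"v1")
assert read("/vol/home/report.doc") == b"v1"
read_cache.clear()                # simulate losing the cache appliance
assert read("/vol/home/report.doc") == b"v1"  # no data lost, just slower
```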


The impact of this approach is significant. Most read operations are now served from memory-based storage, so applications and environments see a tremendous performance increase. Write performance also increases, because the large majority of read operations have been offloaded from the NAS controller to an external cache. In the classic 80/20 read/write workload this means roughly 80% of the controller’s processing requests are now offloaded to the cache appliance, and the bulk of its horsepower can be focused on write traffic. Finally, a caching appliance doesn’t require an upgrade of the current NAS systems; it actually extends their lives and usefulness. They primarily become mass storage areas where data is held and protected. In short, the existing NAS becomes the data services tier, providing functions like snapshots and replication, while the cache becomes the performance tier responsible for data delivery.
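The arithmetic behind that claim is straightforward. Assuming an 80/20 read/write mix and a high cache hit ratio (the 95% figure in the sketch is an illustrative assumption), the share of work left on the NAS controller drops to little more than the write traffic:

```python
# Fraction of NAS controller work offloaded to the cache appliance, as a
# function of the read share of the workload and the cache hit ratio.
# The 80/20 mix and 95% hit ratio are illustrative assumptions.

def offloaded_fraction(read_share, cache_hit_ratio):
    return read_share * cache_hit_ratio

def remaining_on_nas(read_share, cache_hit_ratio):
    return 1.0 - offloaded_fraction(read_share, cache_hit_ratio)

print(offloaded_fraction(0.80, 0.95))  # 0.76 -> ~76% of requests offloaded
print(remaining_on_nas(0.80, 0.95))    # 0.24 -> writes plus the few misses
```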



Summary


NAS systems are being asked to perform new tasks like hosting virtual server images and databases, while at the same time their legacy role of hosting discrete files is becoming more demanding. The result is a storage performance gap that most systems can’t address, or that their users can’t afford to address. Caching appliances like those offered by Cache IQ are an ideal way to deliver high storage performance to those NAS systems without forcing an expensive upgrade. They allow NAS systems to reach their full potential as a universal storage area for data sets of all types. Not all NFS cache appliances are created equal, however; in the next article Storage Switzerland will discuss what to look for in an NFS caching appliance.

George Crump, Senior Analyst

Cache IQ is a client of Storage Switzerland