Change is being forced on NAS because its use cases are changing. The traditional file server use case is only part of the story now, and even that has changed significantly over the past few years. In the mid to late 1990s, when the NAS concept first began to take hold, a file server delivered a relatively small number of files, by today's standards, to a small number of users. The volume, however, was enough to justify the move from a general-purpose operating system to a dedicated operating system for serving files.


Initially, NAS systems were customized or modified file systems designed to run as a standalone instance. There was a high degree of customization at the NAS OS level, and the hardware used was often custom as well.


Later, as the systems matured and were used in more demanding enterprise environments, high availability was added in the form of two-node clusters. This was implemented either as two nodes attached to the same storage or as two separate NAS systems that could assume each other's identity in the event of a failure.


As the performance capabilities of standalone NAS systems 'rode the Intel wave', albeit with a lag due to the hardware release cycles of the NAS vendors, the NAS use case began to move beyond simple file serving, and NFS-mounted application data became a reality. That started with Oracle data being served successfully from NFS volumes, and then other applications moved their data to NFS as well. This brought good performance and greatly simplified management for these data sets.


In recent years, using NAS via NFS to serve virtual machine images in server virtualization environments has become very popular. Similar to the Oracle use case, the systems performed surprisingly well and greatly simplified the storage management challenges that server administrators were facing, since the virtual server images could be managed as large files. Nexenta and a handful of other NAS vendors have written to the APIs of the leading virtualization vendors to enable storage administrators to establish per-VM storage policies, and some allow the NAS to handle cloning and other tasks that perform better at the storage layer than on the virtualization hosts.
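

To make the idea concrete, the Python sketch below shows how a per-VM storage policy and a storage-offloaded clone might look from the virtualization layer's point of view. The NasApi class and its methods are purely illustrative stand-ins, not any vendor's actual API.

```python
# Hypothetical sketch: applying a per-VM storage policy and offloading a
# clone operation to the NAS instead of copying the image on the host.
# The NasApi class and its methods are illustrative, not a real vendor API.

class NasApi:
    """Stand-in for a NAS management API exposed to the virtualization layer."""

    def set_policy(self, volume: str, vm: str, policy: dict) -> None:
        print(f"policy for {vm} on {volume}: {policy}")

    def clone(self, volume: str, source: str, target: str) -> None:
        # A snapshot-based clone on the array is near-instant and avoids
        # reading and rewriting the whole image through the hypervisor host.
        print(f"cloning {source} -> {target} on {volume} (storage side)")


def deploy_vm(nas: NasApi, volume: str, template: str, vm_name: str) -> None:
    nas.clone(volume, source=template, target=f"{vm_name}.vmdk")
    nas.set_policy(volume, vm_name, {"snapshots": "hourly", "replication": "async"})


if __name__ == "__main__":
    deploy_vm(NasApi(), volume="vmstore01", template="golden-template.vmdk", vm_name="web-42")
```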


Needless to say, the original NAS use case, file serving, has evolved significantly in recent years. First, the size and number of user-generated files has exploded. Even the smallest of data centers is now dealing with millions of files, and larger data centers are dealing with billions. Also, with the growth in cloud storage and web 2.0 services, these technically advanced users have developed web-technology-based solutions that extend the capabilities and improve the manageability of traditional NAS architectures. A notable example of this trend is the 'Haystack' solution developed by Facebook to improve the performance and management of its NAS, allowing it to confidently substitute OpenStorage 'bricks' for its remaining proprietary NAS solutions.
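

For readers unfamiliar with Haystack, the simplified Python sketch below illustrates the core idea: pack many small files into one large append-only volume and keep an in-memory index of offsets, so a read costs a single disk access rather than several file system metadata lookups. It is a toy illustration of the concept, not Facebook's implementation.

```python
# Simplified illustration of the idea behind Facebook's Haystack: many small
# files are packed into one large append-only volume file, and an in-memory
# index of (offset, size) lets a read be served with a single disk access.

import os

class HaystackVolume:
    def __init__(self, path: str):
        self.path = path
        self.index: dict[str, tuple[int, int]] = {}  # key -> (offset, size)
        open(path, "ab").close()  # ensure the volume file exists

    def put(self, key: str, data: bytes) -> None:
        with open(self.path, "ab") as f:
            f.seek(0, os.SEEK_END)   # record where this blob begins
            offset = f.tell()
            f.write(data)
        self.index[key] = (offset, len(data))

    def get(self, key: str) -> bytes:
        offset, size = self.index[key]  # in-memory lookup, no metadata I/O
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)

if __name__ == "__main__":
    vol = HaystackVolume("photos.vol")
    vol.put("img_001", b"...jpeg bytes...")
    print(vol.get("img_001"))
    os.remove("photos.vol")
```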


Finally, NAS systems have evolved beyond just providing file sharing services. These systems first added the capability to provide block-based storage via iSCSI and eventually supported Fibre Channel protocols as well. After block I/O, NAS systems started adding other services such as compression, deduplication and automated data tiering (moving data to different classes of storage based on access patterns).
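

The short Python sketch below illustrates what an automated tiering policy of this kind might look like; the tier names and the 30-day inactivity threshold are assumptions chosen for the example.

```python
# Illustrative sketch of automated data tiering: files not read within a
# threshold are demoted to a cheaper tier, recently active files stay on SSD.
# Tier names and the 30-day threshold are example assumptions.

import time

THRESHOLD_SECONDS = 30 * 24 * 3600  # demote after ~30 days without access

def choose_tier(last_access: float, now: float) -> str:
    return "ssd" if now - last_access < THRESHOLD_SECONDS else "nearline"

def plan_moves(catalog: dict[str, dict]) -> list[tuple[str, str, str]]:
    """Return (path, current_tier, target_tier) for every file that should move."""
    now = time.time()
    moves = []
    for path, meta in catalog.items():
        target = choose_tier(meta["last_access"], now)
        if target != meta["tier"]:
            moves.append((path, meta["tier"], target))
    return moves

if __name__ == "__main__":
    catalog = {
        "/vol1/reports/q1.pdf": {"tier": "ssd", "last_access": time.time() - 90 * 24 * 3600},
        "/vol1/db/active.dbf":  {"tier": "nearline", "last_access": time.time() - 3600},
    }
    for move in plan_moves(catalog):
        print("move", *move)
```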


This ever-increasing workload has caused management and scalability concerns for the single-system NAS solution, particularly as many NAS solutions were not designed with today's multi-core processors and relatively inexpensive solid state disks (SSDs) in mind. For example, while disk drive I/O has increased only slightly since NAS solutions reached enterprise adoption 15 years ago, processor, SSD and even Ethernet price/performance have improved by 700 to 800 times. It is not surprising that solutions designed 15 years ago cannot easily scale to take advantage of these changes. Conversely, modern file systems such as ZFS were designed to leverage the multi-core capabilities of Solaris (natively able to leverage up to 256 cores); the same cannot be said of many now-dated NAS solutions.


Once the maximum performance or capacity of an individual NAS has been reached, the storage manager is forced to either upgrade that system or add an additional one. The upgrade option brings potential disruption and cost, although many NAS providers today allow the data to stay in place, simply connecting the existing drives to the new NAS head. However, almost all vendors require the customer to purchase any additional capacity from that vendor. When an additional NAS is added, it must typically be provisioned individually as needed. The result, over time, can be a proliferation of NAS solutions, with some enterprises having 500 or more instances, all of which must be managed and provisioned in concert to provide storage services to the enterprise.


The inherent management and performance limitations of vertically scaling NAS solutions have led many to consider a more clustered or grid approach in order to achieve scale. There are two horizontal scaling methodologies available for consideration. The first is a 'tightly coupled' cluster, where each node is dependent on the others and data is striped across the nodes in the cluster via a clustered file system. In this configuration, performance scales as capacity is added via additional nodes. This type of cluster does well in high-bandwidth operations where large files are being accessed simultaneously, because those large files can be served up by multiple nodes at once. As a result, there is less concern about a single node in the environment becoming a bottleneck.
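

The Python sketch below illustrates the striping idea: fixed-size chunks of a large file are placed round-robin across the nodes, so several nodes can serve the same file in parallel. The stripe size and node names are arbitrary example values.

```python
# Illustrative sketch of how a tightly coupled cluster stripes one large file
# across nodes: fixed-size chunks are placed round-robin, so reads of a big
# file can be serviced by several nodes in parallel.

STRIPE_SIZE = 1 * 1024 * 1024  # 1 MiB stripes (example value)

def stripe_placement(file_size: int, nodes: list[str]) -> list[tuple[int, str]]:
    """Return (byte_offset, node) for each stripe of the file."""
    placement = []
    offset = 0
    stripe = 0
    while offset < file_size:
        placement.append((offset, nodes[stripe % len(nodes)]))
        offset += STRIPE_SIZE
        stripe += 1
    return placement

if __name__ == "__main__":
    nodes = ["node-a", "node-b", "node-c", "node-d"]
    for offset, node in stripe_placement(file_size=8 * 1024 * 1024, nodes=nodes):
        print(f"stripe at {offset:>8}: served by {node}")
```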


A downside to a tightly coupled cluster is the initial and ongoing investment the cluster itself requires. First, in most cases there is a minimum number of nodes that must be purchased to build the cluster and to support some sort of data protection scheme. That minimum is, in many cases, three nodes, sometimes six. As a result, it is hard to start small in these environments.


The second challenge with tightly coupled clusters is that the integration between the nodes is very tight and the underlying technology has almost always been proprietary. Open source and standards-based alternatives such as pNFS have yet to gain wide acceptance. The proprietary nature of these solutions means that choosing to scale with such a technology requires a level of vendor commitment that many enterprises refuse to make in the rest of their IT stack.


Another downside of some tightly coupled clusters running a scale-out file system is that while such solutions work well on large files, they often perform less well on smaller files. These clusters typically use a metadata server to direct reads and writes appropriately, and accessing this metadata server becomes a significant 'I/O tax' when the files are small. Some tightly coupled clusters overcome this challenge by spreading the metadata load across the cluster.
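

A back-of-the-envelope calculation shows why this 'I/O tax' matters. Assuming each file access costs one round trip to the metadata server plus the data I/Os themselves, with an assumed 1 MiB transfer unit, the metadata trip is half of all I/Os for a 4 KiB file but barely registers for a 1 GiB file:

```python
# Back-of-the-envelope illustration of the metadata 'I/O tax': every file
# access costs one trip to the metadata server plus the data I/Os themselves.
# For small files the metadata trip dominates; for large files it is noise.

IO_SIZE = 1024 * 1024  # assumed bytes moved per data I/O

def metadata_overhead(file_size: int) -> float:
    data_ios = max(1, -(-file_size // IO_SIZE))  # ceiling division
    metadata_ios = 1
    return metadata_ios / (metadata_ios + data_ios)

if __name__ == "__main__":
    for size, label in [(4 * 1024, "4 KiB file"), (1024 ** 3, "1 GiB file")]:
        print(f"{label}: {metadata_overhead(size):.1%} of I/Os are metadata lookups")
```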


Finally, a tightly coupled cluster can further limit management flexibility. The nodes in the cluster must typically be identical in processor type and often in capacity configuration: for example, you can't mix a node with four 1TB drives into an existing set of nodes with four 500GB drives without suffering a loss in capacity. In addition, these clusters require a special-purpose internal network for the cluster itself. Often this is a separate IP network, although clusters can also use other interconnect protocols like InfiniBand.
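

The worked example below quantifies that capacity penalty, assuming the cluster stripes uniformly and therefore can use no more capacity per node than the smallest node provides:

```python
# Worked example of the capacity penalty described above, assuming the cluster
# stripes uniformly and can only use as much capacity per node as the
# smallest node provides.

def usable_capacity(node_sizes_tb: list[float]) -> float:
    return min(node_sizes_tb) * len(node_sizes_tb)

if __name__ == "__main__":
    existing = [4 * 0.5] * 3          # three existing nodes, four 500 GB drives each
    mixed = existing + [4 * 1.0]      # add one node with four 1 TB drives
    raw = sum(mixed)
    usable = usable_capacity(mixed)
    print(f"raw: {raw} TB, usable: {usable} TB, stranded: {raw - usable} TB")
```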


The other option is the 'loosely coupled cluster'. In this configuration, nodes act independently of each other while the management of the nodes is centrally controlled. The goal of loosely coupled clustering is to reduce the management burden so that enterprises with dozens or even hundreds of NAS solutions can manage them all at a higher level, from a central solution.


Users looking to leverage a loosely coupled cluster typically won't have the large, bandwidth-intensive data sets that users of a tightly coupled cluster would. Instead, they may look more like today's web 2.0 and cloud storage users, who often have a variety of file types and sizes with millions of different possible permutations. A loosely coupled cluster essentially consists of a policy-based abstraction of the data path that makes the dozens or hundreds of underlying NAS solutions look and feel like a single file system to administrators and end users. Something more is needed, however: the ability to abstract the policy management of the system.
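

The Python sketch below illustrates the data-path abstraction in its simplest form: one namespace is presented to users while a deterministic rule decides which underlying NAS holds each path. Real products layer policies, rebalancing and failover on top of this idea; the node names and hashing rule here are just examples.

```python
# Illustrative sketch of a data-path abstraction: clients see one namespace,
# and a simple deterministic rule (here, a hash of the directory) decides
# which underlying NAS actually holds each path.

import hashlib

NAS_NODES = ["nas-01", "nas-02", "nas-03"]  # example back-end systems

def resolve(path: str) -> str:
    """Map a path in the global namespace to the NAS that stores it."""
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.sha1(directory.encode()).hexdigest()
    return NAS_NODES[int(digest, 16) % len(NAS_NODES)]

if __name__ == "__main__":
    for p in ["/projects/alpha/spec.doc", "/projects/alpha/plan.xls", "/home/amy/notes.txt"]:
        print(f"{p} -> {resolve(p)}")
```

Note that files in the same directory land on the same NAS, which keeps a directory's contents together while still spreading the overall namespace across systems.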


These policy-based abstraction layers have so far been built by web 2.0 and ISP storage users themselves, although a number of leading vendors have adapted their multi-system management solutions to provide similar capabilities. This is where the second method of forming a loosely coupled cluster begins to show its value. As the abstraction of the data path becomes more prevalent, so does the abstraction of the management path, that is, the actual tasks involved in managing storage beyond just physical placement. This method can work in conjunction with the data path method to provide a combined solution.


Vendors like Nexenta are adding a web-service-based API and a workflow-oriented user interface called Pomona to their existing NAS infrastructure to abstract the management tasks of individual NAS systems. These APIs are essentially a way for the storage manager to execute management commands once across all the systems in the environment. For example, the storage administrator can use these policies to auto-provision storage from the least utilized NAS in terms of available capacity and I/O processing.
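

As a hypothetical illustration of the 'provision from the least utilized NAS' policy, the sketch below ranks systems by free capacity and spare I/O headroom and places a new share on the winner. The inventory fields and the provision() helper are assumptions made for the example, not the actual vendor API.

```python
# Hypothetical sketch of the 'provision from the least utilized NAS' policy.
# The inventory data and the provision() helper stand in for a web-service
# management API; none of the names here are a vendor's actual API.

def least_utilized(inventory: list[dict]) -> dict:
    # Prefer the system with the most free capacity, breaking ties by spare I/O headroom.
    return max(inventory, key=lambda n: (n["free_tb"], 1.0 - n["io_utilization"]))

def provision(share_name: str, size_tb: float, inventory: list[dict]) -> str:
    target = least_utilized(inventory)
    print(f"creating {share_name} ({size_tb} TB) on {target['name']}")
    target["free_tb"] -= size_tb
    return target["name"]

if __name__ == "__main__":
    inventory = [
        {"name": "nas-01", "free_tb": 12.0, "io_utilization": 0.80},
        {"name": "nas-02", "free_tb": 30.0, "io_utilization": 0.35},
        {"name": "nas-03", "free_tb": 18.0, "io_utilization": 0.20},
    ]
    provision("eng-builds", 2.0, inventory)
```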


Pomona could also be used to enable capabilities environment-wide. For example, if a backup needs to be taken, a snapshot can be issued across all NAS heads from a single command, along with the operation to back that data up. Further, policies could be triggered by specific events: any volume that drops below a certain level of activity could have compression enabled to save space, and that policy could work in conjunction with other capabilities to migrate the volume to a less expensive or power-managed tier of storage.
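

The sketch below illustrates how such environment-wide, event-driven policies might be expressed: a single call fans a snapshot out to every NAS head, and a simple activity threshold decides which volumes get compression enabled and are queued for migration. The NasHead class and the 5 IOPS threshold are assumptions made for the example.

```python
# Illustrative sketch of environment-wide, policy-driven operations: one call
# snapshots every NAS head, and a low-activity rule enables compression and
# flags volumes for migration to a cheaper tier.

LOW_ACTIVITY_IOPS = 5  # assumed threshold for 'low activity'

class NasHead:
    def __init__(self, name: str, volumes: dict[str, float]):
        self.name = name
        self.volumes = volumes  # volume name -> average IOPS

    def snapshot_all(self, label: str) -> None:
        for vol in self.volumes:
            print(f"{self.name}: snapshot {vol}@{label}")

    def enable_compression(self, vol: str) -> None:
        print(f"{self.name}: compression enabled on {vol}")

def nightly_policy(heads: list[NasHead]) -> None:
    for head in heads:
        head.snapshot_all("backup-2300")          # single command, every head
        for vol, iops in head.volumes.items():
            if iops < LOW_ACTIVITY_IOPS:
                head.enable_compression(vol)
                print(f"{head.name}: queue {vol} for migration to a cheaper tier")

if __name__ == "__main__":
    nightly_policy([
        NasHead("nas-01", {"finance": 120.0, "archive-2009": 0.4}),
        NasHead("nas-02", {"home-dirs": 45.0}),
    ])
```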


As capacity has increased, the ratio of NAS administrators to capacity has not kept pace. Many organizations have not been able to hire additional staff to close this gap, and there is increasing awareness that the boom in file-based data is only accelerating. As a result, enterprises have either thrown hardware at the problem, typically in the form of more capacity, or they have had to endure a lower quality of service. Either work-around leads to bigger problems down the road. Abstracting data management from individual NAS systems will allow organizations not only to fill this gap but also to reduce wasteful capital expenditures, the 'throwing hardware at the problem' approach. The state of NAS has evolved to allow the storage administrator to beat back out-of-control storage costs and to manage that storage more effectively.


Clustered NAS storage is clearly the present and future of NAS. Depending on the organization's needs, both tightly coupled and loosely coupled systems have a role to play in the data center; it is up to the customer to assess which solution is the best fit. In short, the hardware components of NAS systems have become increasingly commoditized. The differentiation now lies in the higher-level file systems and management solutions that together promise to address the booming demand for file-level storage.

George Crump, Senior Analyst

This Article Sponsored by Nexenta