Many storage systems start out simply enough, but become complex when it’s time to upgrade. The most common reason to upgrade a storage system is to secure additional capacity, typically driven by the need to support more users, files, applications or attached servers. As these servers are added, they don’t just need capacity; they also place additional demands on the other storage resources.


Beyond capacity, there are two other storage resources that must be available to meet a growing organization’s demands: I/O bandwidth and compute power. Without enough I/O bandwidth, connected servers and users can become bottlenecked, requiring sophisticated storage tuning to maintain reasonable performance. The storage software, among its other functions, needs compute resources to provide data services like snapshots, replication and volume management. Without enough compute power, the system may have to limit the additional services it can provide. For example, some systems place a hard limit on the number of snapshots that can be maintained or on the capacity to which a volume is allowed to scale.


The problem with legacy or scale-up storage is that as capacity is added to the existing system, bandwidth and compute power are not increased as well. The result is that a scale-up storage system performs best on its first day. As capacity is subsequently added, the system reaches a peak level of performance and then starts to degrade. The only way to delay this point is to overbuy on storage compute power and bandwidth upfront, which is, of course, a waste of capital.


With scale-up storage, once that inevitable performance peak is reached, the next option is either to replace the compute and bandwidth capabilities of the storage engine or to purchase additional stand-alone storage systems. The first option is an expensive forklift upgrade; the second becomes a management nightmare, because many environments end up with several (even double-digit numbers) of the same storage system deployed to work around this performance scaling issue. Each additional system makes the management burden heavier and the likelihood of having to add IT staff greater. Scale-out storage is designed to eliminate these issues.



The Architecture Of Scale-Out Storage


Scale-out storage systems are typically made up of individual storage components called “nodes”. These nodes often contain capacity (in the form of four or more drive spindles), processing power and storage I/O bandwidth. As a node is added to the storage system, the aggregate of each of these three resources in the system is upgraded simultaneously. Capacity is often the key driver of storage expansion but, with scale-out storage, as capacity increases, compute processing power and storage I/O bandwidth increase as well. The nodes are typically interconnected via a high-speed backplane or network that enables them to communicate with each other. This means that, in contrast to scale-up, scale-out storage systems become faster as capacity is added to the storage infrastructure.
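
To make the idea concrete, the sketch below (Python, with made-up per-node figures chosen purely for illustration, not any vendor’s specifications) shows how capacity, compute and bandwidth all rise together each time a node joins the cluster.

```python
# Illustrative only: hypothetical per-node resource figures.
from dataclasses import dataclass

@dataclass
class Node:
    capacity_tb: float = 12.0      # e.g. four or more drive spindles
    compute_cores: int = 4         # CPU available for data services
    bandwidth_gbps: float = 2.0    # storage I/O bandwidth

def cluster_totals(nodes: list) -> dict:
    """Aggregate the three resources across every node in the cluster."""
    return {
        "capacity_tb": sum(n.capacity_tb for n in nodes),
        "compute_cores": sum(n.compute_cores for n in nodes),
        "bandwidth_gbps": sum(n.bandwidth_gbps for n in nodes),
    }

# Adding a node grows capacity, compute and bandwidth at the same time.
cluster = [Node() for _ in range(3)]
print(cluster_totals(cluster))   # three-node totals
cluster.append(Node())
print(cluster_totals(cluster))   # all three figures rise together
```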


Each node may individually have less processing power and capacity than a typical enterprise-class array, but this is by design. It allows the individual nodes to be less expensive than an equivalent scale-up storage system and, more importantly, allows for granular expansion of the various storage resources. Instead of overbuying, this granularity allows for the purchase of only the resources that are needed at the time.



Scale-Out Storage Software


Of course, if each of these nodes had to be accessed as a separate machine, the result would be no different from scale-up storage. The key ingredient is the software that enables the nodes to be interconnected and presented as a single object to storage administrators and connecting servers, essentially turning the nodes into a single cluster or grid.


To accomplish this feat, the storage cluster software should manage the writing of data across all the nodes in the scale-out storage infrastructure. This spreads the load of data writes, and of the subsequent data reads, across more processors and I/O connections in the cluster. It’s also important that any node in the cluster be able to act as the control node at any time. If all the I/O has to be routed through a single control node, the cluster reverts to something similar to a scale-up system, with many of the same limitations. The storage cluster software should also be able to take advantage of the additional RAM that each node brings to the cluster and leverage it to further enhance performance.
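
As an illustration of the principle, and not of any particular vendor’s implementation, the following Python sketch hashes each block of a hypothetical volume onto one of four hypothetical nodes, so reads and writes naturally spread across the whole cluster rather than funnelling through one controller.

```python
# A minimal sketch of spreading I/O across nodes; real cluster file systems
# use far more sophisticated placement and protection schemes.
import hashlib

NODES = ["node-1", "node-2", "node-3", "node-4"]   # hypothetical node names

def place_block(volume: str, block_number: int, nodes: list) -> str:
    """Pick the node that stores a given block by hashing its identity.

    Because the hash spreads blocks evenly, reads and writes for a single
    volume land on every node's disks, CPUs and network ports instead of
    being routed through one control node.
    """
    key = f"{volume}:{block_number}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return nodes[digest % len(nodes)]

# Any node can answer for any block, so there is no single bottleneck.
for block in range(8):
    print(block, place_block("vol0", block, NODES))
```

A production system would use a more elaborate placement scheme (consistent hashing, protection stripes and so on) so that adding a node moves only a fraction of the existing data, but the principle of spreading the load is the same.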


Another key function of the storage software is the ability to support very large volumes. The goal is a single volume that can scale to practically any size and support a variety of application types, ranging from user home directories to sequential processing tasks and virtual machine images. The software has to be cluster-aware and designed to take advantage of all the additional storage compute and I/O resources available so it can support this wide range of workload types. Finally, the cluster storage software has to provide all of the data services that IT professionals expect from a storage system, including snapshots, thin provisioning, cloning, replication and even newer features like automated storage tiering and advanced metadata acceleration. With scale-out storage, the storage manager’s responsibility is simply to manage the data, not the storage hardware.
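
As a toy example of one of these data services, the sketch below implements a greatly simplified snapshot of a volume’s block map. It is illustrative only and does not reflect how any shipping product implements snapshots.

```python
# Simplified snapshot sketch: freezing a volume's block map.  Real products
# track changes at the file-system or extent level with protection built in.

class Volume:
    def __init__(self):
        self.blocks = {}      # block number -> immutable data bytes
        self.snapshots = []   # list of frozen block maps

    def write(self, block, data):
        self.blocks[block] = data

    def snapshot(self):
        # Freeze the current block map.  Only the map is copied; the block
        # data itself is shared between the live volume and the snapshot.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def read(self, block, snapshot_id=None):
        source = self.blocks if snapshot_id is None else self.snapshots[snapshot_id]
        return source[block]

vol = Volume()
vol.write(0, b"original")
snap = vol.snapshot()
vol.write(0, b"changed")
print(vol.read(0, snap), vol.read(0))   # b'original' b'changed'
```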



The Modern Data Center and Scale-Out Storage


The modern data center may be the ideal environment for scale-out storage. Never before has there been such a mixture of workloads and I/O demands. User files are larger and more numerous, sequential processing is being applied more frequently across a wider range of business types, and most data centers are moving to server virtualization. The overriding factor is that there is also an unprecedented need to keep costs down and make IT staff more efficient.


Scale-out storage is uniquely positioned to address this need. Since I/O bandwidth and storage compute resources are added as capacity is increased, all three grow continuously and simultaneously as more nodes are put into the storage environment. As a result there is little need to fine-tune the system for special use cases. The default system configuration, including placing all of these data types into a single volume, typically delivers enough performance. This allows the environment to scale without needing specialized storage managers. Many scale-out customers report managing very large data stores with fewer personnel than they had before.


The response from scale-up storage vendors has included attempts to bolt on capabilities, management interfaces and cross-system file systems. While each of these may alleviate the problems in the short term, they only mask the real challenges with data services. For example, if an additional server is needed in a scale-up storage environment, each system and volume has to be inspected to determine which resource is the best candidate to support that server’s workload. In a scale-out system there is one system and one volume: the new server simply attaches to the storage and begins using it. If that new server creates the need for more storage, a new node is added, which brings more capacity, I/O and compute resources.


Scale-out storage may be the ideal solution for the modern data center, which is besieged by a wide variety of workloads yet needs to keep storage management costs under control. Most importantly, it allows the business to focus on what it does best, not on dealing with storage management issues.

George Crump, Senior Analyst

Isilon Systems is a client of Storage Switzerland