The second challenge is that this memory is volatile: if the server fails or loses power, anything in memory is lost. This becomes a potentially dangerous problem when write caching is enabled and a large amount of memory is dedicated to it; the data loss could be significant. The third challenge is that the operating system and/or application may not be optimized to take advantage of the additional memory. In many cases adding memory to a server becomes an exercise in diminishing returns; 16GB may help performance significantly, while 64GB may make no difference at all. The final challenge is that the typical server-class machine can only scale to about 64GB, or in some cases 128GB, of memory. While 128GB may be plenty for operating system temp files, for databases with a highly random read requirement it may be too small to make a performance difference.

Building out server internal memory is a simple and cost-effective first step. As the need for more memory grows, however, the limitations and volatility of server memory become problematic. The next step is to explore PCIe Flash SSDs. These solutions are essentially SSDs on PCI Express cards installed inside a server. Texas Memory Systems, for example, provides a 450GB Flash-based PCIe SSD for less than $15,000. While PCIe-based Flash has the same sharing limitations as server RAM, it is not volatile: data stored on the Flash drive will survive a power or application failure. Some PCIe-based systems also have a form of RAID-like data protection built into them. Finally, they offer the capacity of Flash memory, often tripling what server RAM can provide. Instead of serving as cache, this memory is actual storage; rather than moving only the active parts of the database to a volatile cache, entire temp files or even the entire database can be moved to the PCIe SSD. No performance is lost moving data to and from cache, nor is there any risk of a performance-sapping cache miss.
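The cache-miss penalty described above can be sketched with a simple effective-access-time calculation. The latency figures below are illustrative assumptions, not measured or vendor-supplied numbers:

```python
# Effective access time for a cached workload:
#   t_eff = hit_rate * t_cache + (1 - hit_rate) * t_backing
# Illustrative latencies (assumptions, not measured figures):
T_RAM_US = 0.1      # server RAM cache hit, microseconds
T_FLASH_US = 50.0   # PCIe Flash read, microseconds
T_DISK_US = 5000.0  # mechanical disk read, microseconds

def effective_latency_us(hit_rate, t_cache_us, t_backing_us):
    """Average read latency when misses fall through to backing storage."""
    return hit_rate * t_cache_us + (1.0 - hit_rate) * t_backing_us

# Even a 95%-effective RAM cache in front of mechanical disk averages
# roughly 250us per read, because the 5% of misses pay the full disk
# penalty. Data placed directly on Flash is a flat 50us with no misses.
cached = effective_latency_us(0.95, T_RAM_US, T_DISK_US)
direct = T_FLASH_US
```

This is why moving the entire database onto the PCIe SSD, rather than caching its hot spots in RAM, removes a whole class of performance variability: there is no miss path left to pay for.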

PCIe-based Flash technologies derive their cost savings from not having to build the rest of a storage system around them; they count on the host server to provide power and redundancy. This also makes them space and power efficient: they require no additional rack space and place only a limited extra power load on the servers they run in. Finally, PCIe-based Flash may also have the "cleanest" access to data. There are no storage protocols to negotiate, nor are there realistic bandwidth limitations, since a storage request travels near-directly from the PCIe bus to the processor.

PCIe-based Flash systems, however, have their limitations. The first is capacity. While multiple cards can be installed in a server, there are a limited number of slots available to hold them, and there is the added challenge of concatenating these separate "drives" into a single volume the system can use. The other major limitation is a lack of sharing: Flash-based PCIe devices are essentially server- and single-application-specific. While there is an interesting use case for placing these products in servers running software-based NAS or storage virtualization, most users looking to share the SSD investment across multiple applications will require a more natively shared storage platform.

The next step is to consider shared SSD systems that are external: SAN-attached appliances that can be added to an existing storage infrastructure. These can be RAM- or Flash-based systems, although Flash tends to be the predominant choice for most customers. RAM-based systems are appropriate for situations where there is a very heavy write workload; for almost all other workloads, Flash-based SSDs are perfectly suitable.

Texas Memory, as an example, offers the Flash-based version of these appliances at starting prices typically less than $100,000 and capacities from 128GB to over 4TB. This capacity range allows multiple workloads to share the performance capabilities of SSD, and since the appliances are SAN-attached they can be shared across multiple servers and applications. Compared to the cost of buying ten or more Flash-based PCIe SSDs, these systems offer more flexibility at a lower total price. In many cases they can be purchased to boost the performance of a few easily justified applications and then be leveraged to provide a similar boost to applications that are less mission-critical but still important to the enterprise.
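The break-even point between per-server PCIe cards and a shared appliance follows directly from the prices quoted in this article ($15,000 per 450GB PCIe card, appliances starting under $100,000); the comparison below treats those as rough list prices, and real pricing will vary:

```python
# Rough total-cost comparison using the prices quoted in this article
# (list-price assumptions; actual configurations and discounts vary).
PCIE_CARD_PRICE = 15_000    # ~450GB PCIe Flash SSD
APPLIANCE_PRICE = 100_000   # SAN-attached Flash appliance, starting price

def pcie_fleet_cost(server_count, cards_per_server=1):
    """Cost of giving each server its own (unshared) PCIe Flash card."""
    return server_count * cards_per_server * PCIE_CARD_PRICE

# Ten servers with one card each already cost more than the shared
# appliance, and the appliance can serve all ten over the SAN.
fleet = pcie_fleet_cost(10)
shared_is_cheaper = APPLIANCE_PRICE < fleet
```

The crossover arrives quickly because the PCIe investment is stranded in each server, while the appliance's capacity and performance are pooled.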

The challenge with external, SAN-attached SSDs is that they are another drive type for the storage manager to be aware of; in most cases they will not integrate into the existing storage product. This downside can be easily overcome through the use of built-in OS mirroring or third-party products like Symantec Volume Manager, as discussed in our article "Do you Adapt or Replace your Backup Application?".

Some customers will look to their traditional storage suppliers to provide an integrated SSD solution. This, as it stands right now, is one of the more expensive options available to the storage manager, with prices typically starting at $125,000. The advantage of these systems is that it's safe to assume some level of integration with the existing storage, meaning they should be somewhat easier to manage. The downside is that many of these vendors are new to the SSD market and some are still working out best practices for their customers. They may also have performance issues. Most use SSDs packaged in the same form factor as hard drives. While this is not in and of itself a disadvantage, it is something to be aware of. First, RAID is not built in per drive as it is with an external SAN-attached appliance, which means that in a RAID configuration an entire drive must be given over to data protection. In the mechanical drive world that's not an issue; in the premium-priced world of SSD, it can be a major one.
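The cost of dedicating whole drives to RAID protection is easy to quantify. The drive size and price below are hypothetical round numbers chosen only to show the arithmetic, not figures from any vendor:

```python
# Usable capacity and cost per usable GB when whole SSDs are consumed
# for RAID protection (hypothetical drive price/size for illustration).
DRIVE_GB = 200
DRIVE_PRICE = 8_000  # assumed premium-priced Flash drive

def usable_gb(drives, parity_drives):
    """Capacity left after dedicating whole drives to protection (RAID 5 style)."""
    return (drives - parity_drives) * DRIVE_GB

def cost_per_usable_gb(drives, parity_drives):
    """Total spend divided by the capacity applications can actually use."""
    return drives * DRIVE_PRICE / usable_gb(drives, parity_drives)

# In a 4+1 RAID 5 set, one full drive's cost buys no usable capacity:
# the effective price rises from $40/GB raw to $50/GB protected.
raw = DRIVE_PRICE / DRIVE_GB
protected = cost_per_usable_gb(5, 1)
```

A 25% premium per usable gigabyte is negligible on mechanical drives but material at Flash prices, which is why per-drive, built-in protection (as in the appliances discussed earlier) matters more for SSD.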

The second performance issue is that the combined I/O capabilities of the drives, once inserted into the drive shelf, may outstrip the performance capabilities of the storage controller itself. Be aware that the typical storage system has many responsibilities beyond simply reading and writing data: it must perform the aforementioned RAID calculations, plus handle snapshots, thin provisioning and replication, among other tasks. In the mechanical drive world, storage controllers have more time available to perform these other tasks than they will in the instant-response world of SSD.
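How quickly a controller saturates can be estimated with back-of-the-envelope IOPS math. Both figures below are assumptions for illustration; real per-drive and per-controller numbers vary widely by product:

```python
# Whether a shelf of Flash drives can outstrip the storage controller
# (illustrative IOPS figures; assumptions, not product specifications).
SSD_IOPS = 30_000          # assumed per-drive Flash read IOPS
CONTROLLER_IOPS = 200_000  # assumed controller ceiling after RAID,
                           # snapshot, and replication overhead

def shelf_iops(drive_count, per_drive_iops=SSD_IOPS):
    """Aggregate raw IOPS the drives could deliver if nothing throttled them."""
    return drive_count * per_drive_iops

def controller_bound(drive_count):
    """True once the drives collectively exceed the controller's ceiling."""
    return shelf_iops(drive_count) > CONTROLLER_IOPS

# Under these assumptions the controller saturates at just seven Flash
# drives, whereas a shelf of ~200-IOPS mechanical drives would need
# hundreds of spindles to reach the same ceiling.
```

The point is not the specific numbers but the ratio: Flash compresses into a handful of drives the aggregate I/O load a controller was designed to receive from an entire mechanical array.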

That said, Flash drives integrated into storage systems have their place; simply be careful to understand where and when they should be deployed. At the same time don't rule out other technologies simply because they're not formally integrated.

Budget is always a driving factor in IT decision making and with SSDs, because of their premium price, it's even more so. The job of the storage manager is to weigh the available SSD options as outlined in this series, compare the capabilities, strengths, and weaknesses of those systems, and select the most appropriate SSD solution available while understanding the budget realities.

George Crump, Senior Analyst

This Article Sponsored by Texas Memory Systems
