Both external SAN-attached SSDs and SSDs integrated into an existing storage system offer high availability at the hardware level. These units typically have redundant power supplies, redundant storage controllers, media redundancy, and redundant paths to the data, meaning that if one HBA fails the others can pick up the workload. Additionally, external SSDs are often small enough that complete systems can be mirrored for the ultimate in high availability.
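
To make the redundant-path idea concrete, here is a minimal sketch of path failover: an I/O that fails down one HBA path is retried on a surviving path. The path object and names are invented for illustration; real multipath drivers do this inside the operating system or HBA driver.

```python
class PathFailedError(Exception):
    """Raised when an I/O cannot complete down a given path."""

def read_block(paths, lba):
    """Try each redundant path in order until one completes the read."""
    last_error = None
    for path in paths:
        try:
            return path.read(lba)   # hypothetical HBA path object
        except PathFailedError as err:
            last_error = err        # this path is down; try the next
    if last_error is None:
        last_error = PathFailedError("no paths configured")
    raise last_error                # every redundant path has failed
```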


All three implementation methods (PCI Express attached, external purpose-built systems, and solutions integrated into an existing array) can benefit from the high availability features built into the software application, the file system, or the operating system that the application runs on. For example, in our recent article “Integrating SSD and Maintaining Disaster Recovery” we discussed using the preferred read capability within Symantec’s Storage Foundation to achieve high availability. This approach works across the different SSD deployment methods.
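
The idea behind preferred read is simple: the volume manager mirrors every write to both the SSD and a mechanical-drive copy, but directs reads at the faster SSD copy. The sketch below models that behavior; the class and method names are ours, not Storage Foundation’s actual API.

```python
class MirroredVolume:
    """Mirror writes to both copies; serve reads from the SSD copy."""

    def __init__(self, ssd_plex, hdd_plex):
        self.plexes = [ssd_plex, hdd_plex]
        self.preferred = ssd_plex          # the fast copy for reads

    def write(self, lba, data):
        for plex in self.plexes:           # every write lands on both
            plex.write(lba, data)          # mirrors, so either survives

    def read(self, lba):
        if self.preferred.healthy():       # preferred read: use the SSD
            return self.preferred.read(lba)
        return self.plexes[-1].read(lba)   # SSD lost: fall back to HDD
```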


Where PCIe-based SSDs have a data availability weakness, however, is if the server itself fails: if the server goes down, the PCIe-based SSD goes with it. Of course, since PCIe SSDs are flash-based, they have the advantage of not losing the data, just access to it. Be careful here, though, as some PCIe SSD products will lose all their metadata and onboard cache on a power failure, so recovery is not entirely straightforward. This is also the ultimate shortcoming of RAM-based cache memory within a server: if the server fails, all the data that was in cache is lost.
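
Why some products lose data on a power failure comes down to cache policy: a write-back design acknowledges a write while the data still sits only in volatile RAM, whereas write-through persists it to flash first. A toy model of the difference (ours, not any vendor’s design):

```python
class CachedSSD:
    """Toy model: volatile RAM cache in front of persistent flash."""

    def __init__(self, write_back=True):
        self.ram_cache = {}       # volatile: cleared by a power failure
        self.flash = {}           # persistent: survives a power failure
        self.write_back = write_back

    def write(self, lba, data):
        self.ram_cache[lba] = data
        if not self.write_back:
            self.flash[lba] = data    # write-through: persist before ack

    def destage(self):
        self.flash.update(self.ram_cache)  # flush dirty cache to flash

    def power_fail(self):
        self.ram_cache.clear()    # anything not yet destaged is gone
```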



Data Management


One of the challenges that some SSD solutions pose is that, with the exception of the integrated solutions, they present themselves as separate storage systems. This can be a problem for storage managers, but it’s important to put it in the proper perspective, as some manufacturers make it out to be a bigger problem than it actually is.


For the most part, the integration aspect of the integrated systems refers to their ability to use the same data services across different types of storage. An example would be the ability to use the same snapshot commands on SSD that you would on mechanical drives. More advanced concepts, like automatically moving data to SSD based on what the storage controller knows about that data, are still maturing; the technology is in its infancy.
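
In principle, that kind of controller-driven movement amounts to tracking access patterns and promoting hot data to the SSD tier. A simplified sketch of such a policy follows; the threshold, extent granularity, and names are assumptions for illustration only.

```python
from collections import Counter

HOT_THRESHOLD = 100   # assumed access count before an extent is "hot"

class TieringController:
    """Count accesses per extent; promote hot extents to the SSD tier."""

    def __init__(self):
        self.access_counts = Counter()
        self.ssd_resident = set()

    def record_access(self, extent_id):
        self.access_counts[extent_id] += 1
        if (self.access_counts[extent_id] >= HOT_THRESHOLD
                and extent_id not in self.ssd_resident):
            self.promote(extent_id)

    def promote(self, extent_id):
        # A real array would copy the extent's blocks to SSD and update
        # its mapping table; here we only track where the extent lives.
        self.ssd_resident.add(extent_id)
```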


Integrated storage services present two distinct trade-offs that affect performance. First, the data services themselves can bottleneck the performance of the SSD. The storage software, which previously was waiting on mechanical drives to respond, now becomes a hindrance to getting maximum performance from the SSD investment.
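
To see why, consider a fixed per-I/O software cost that was invisible next to mechanical latency. With illustrative numbers (all assumptions, not measurements):

```python
SOFTWARE_OVERHEAD_US = 200    # assumed per-I/O data-services overhead
HDD_LATENCY_US = 5000         # ~5 ms for a mechanical seek and rotate
SSD_LATENCY_US = 100          # ~100 microseconds for a flash read

for name, media_us in (("HDD", HDD_LATENCY_US), ("SSD", SSD_LATENCY_US)):
    total = media_us + SOFTWARE_OVERHEAD_US
    share = SOFTWARE_OVERHEAD_US / total
    print(f"{name}: {media_us} us media + {SOFTWARE_OVERHEAD_US} us "
          f"software = {total} us ({share:.0%} software overhead)")
# HDD: software is ~4% of the service time; on SSD the same code
# path accounts for ~67%, and the software becomes the bottleneck.
```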


Second, most of the storage systems on the market today use standard storage shelves that were originally designed to house mechanical drives. The throughput of these shelves was designed to match the performance characteristics of those drives, while SSDs deliver an order of magnitude better performance. Often, just a few SSDs can deliver more I/O than the shelf can handle, and a few of these shelves together can deliver more I/O than the storage controller itself can handle. As a result, a significant gap can develop between the SSDs’ potential performance and the total throughput the system actually delivers.
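
A quick back-of-the-envelope calculation (every figure here is an assumption for illustration) shows how few SSDs it takes to saturate a shelf sized for mechanical drives:

```python
SHELF_LINK_MBPS = 1200    # e.g. a 4-lane 3Gb/s SAS shelf uplink
HDD_MBPS = 75             # assumed sustained throughput per 15K drive
SSD_MBPS = 500            # assumed sustained throughput per SSD

hdds = SHELF_LINK_MBPS / HDD_MBPS
ssds = SHELF_LINK_MBPS / SSD_MBPS
print(f"Drives to saturate the shelf uplink: "
      f"{hdds:.0f} HDDs vs {ssds:.1f} SSDs")
# Sixteen mechanical drives fill the link; fewer than three SSDs do.
```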


Alternatively, an externally attached SSD system can be used, like those from Texas Memory Systems, which are designed to extract maximum performance from the SSD memory. To do this, they sacrifice the “integrated” data services mentioned above. In most cases where SSD is used, the data set being moved to it is well understood and of high value; managing it separately either does not create a substantial management burden or, if it does, is well worth the extra effort to obtain maximum performance. The reality is that there are already multiple storage systems with multiple data service interfaces in the data center. Adding one more that can significantly increase performance is not going to cause a major reduction in administrator productivity.


Interestingly, there are solutions that allow the integration of these different systems. Storage virtualization solutions allow multiple vendors’ hardware to be managed under the same data services model. For those looking for one solution to provide all the data services, like snapshots and replication, these products can provide the best of both worlds: focused performance for SSD yet unified data services for easier management. They can also consolidate different hardware platforms, including those that are not SSD-based.


In most cases, systems that can justify SSD are also systems that need high availability (HA). The level of availability required will vary from data center to data center, and each of the implementation methods offers some degree of HA. SSD, like any other storage technology, has its challenges when it comes to integrating with an overall storage management infrastructure. There are options available, for either externally based SSD solutions or integrated solutions, to obtain the right balance of performance and ease of management. That balance, however, will be different for each data center, and each should weigh its own needs.


Our recently updated SSD Resource page can guide you through learning and selecting the right SSD for your data center.

George Crump, Senior Analyst

This Article Sponsored by Texas Memory Systems
