These needs will often be at odds with each other and must be prioritized. In almost every case there will be some compromise between competing needs.



SSD Deployment Options


SSDs, for the most part, started as external SCSI-attached devices, similar to mechanical hard drives. The SCSI interface was later replaced by Fibre Channel, which provided greater I/O bandwidth and the ability to share the SSD investment across multiple applications.


For years there was a close partnership between traditional storage vendors and manufacturers of SSDs. At the time SSDs, while very high performance, were mostly RAM-based and expensive, relegating them to niche applications. Then, as the prices of these systems came down and capacities went up, especially with the emergence of flash memory, the traditional manufacturers began to look at integrating SSD technology into their systems. Several suppliers developed flash SSD systems packaged in the same form factor as a mechanical drive, allowing this new technology to be inserted into existing storage systems.


Over the past few years another method was developed that allows SSDs to be installed directly in the application server via a PCIe card. PCI SSDs are recognized by a server in much the same fashion as an internal hard drive. Leveraging the server’s bus and power supply meant that SSDs could break new price barriers.


As stated earlier, each of these SSD deployment methods has its pros and cons, and selection should be based on how well those characteristics meet users’ needs. Ideally, when reviewing their options, storage managers should consider an SSD supplier that can offer more than one category of SSD solution. This avoids the “if all you have is a hammer, every problem looks like a nail” scenario that many storage manufacturers place themselves in.



Performance


The primary reason for investing in any form of SSD is its ability to improve the performance of a particular application or an entire environment. That said, for the performance advantages of SSD to be realized, the applications do need to generate storage I/O demands that surpass what traditional mechanical-drive-based systems can deliver. As we explain in our Visual SSD Guide, this often comes down to the number of pending I/O requests that an application can generate, also known as queue depth: the number of near-simultaneous requests that can be outstanding at once.
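To make the queue depth point concrete, Little’s Law ties these quantities together: achievable IOPS is roughly the number of outstanding I/Os divided by device latency. The short sketch below is a minimal illustration of that arithmetic; the latency figures are hypothetical assumptions, not vendor measurements.

# Little's Law for storage: achievable IOPS ~= queue depth / latency.
# If an application keeps only a few I/Os in flight, even a very fast
# device cannot be driven to its rated IOPS.
# Latency figures are hypothetical, for illustration only.

def achievable_iops(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

for qd in (1, 4, 32):
    disk = achievable_iops(qd, 0.005)      # ~5 ms mechanical drive
    flash = achievable_iops(qd, 0.000_1)   # ~100 us flash SSD
    print(f"queue depth {qd:2d}: ~{disk:7,.0f} IOPS on disk, "
          f"~{flash:9,.0f} IOPS on flash SSD")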


If the application or use case can generate these demands, then all of the SSD deployment choices will result in some performance improvement. However, each has its own limitations. It is important to compare the bandwidth, latency and I/O limits of the various solutions. It is simplistic to say that external systems will be slower because they sit outside the server. It is more important to understand that different vendors have different architectures that impact these performance metrics. Some PCI devices are slower than some external systems; some external systems offer higher bandwidth than is available on the PCI bus. It is also important to characterize the reason SSD is being used. If SSD is used for database logs, for example, users should be more concerned with the mix of latency and IOPS than with bandwidth.
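The database log example is worth a quick back-of-the-envelope check. The sketch below uses purely hypothetical workload numbers to show why even a demanding log workload consumes little bandwidth, leaving write latency as the factor that actually bounds commit rate.

# Hypothetical database log workload: each commit waits on one small
# log write. All numbers are illustrative assumptions, not measurements.

COMMITS_PER_SEC = 10_000
LOG_WRITE_BYTES = 4 * 1024  # 4 KB per commit

bandwidth_mb_s = COMMITS_PER_SEC * LOG_WRITE_BYTES / 1e6
print(f"Bandwidth consumed: ~{bandwidth_mb_s:.0f} MB/s")  # far below any interface limit

# With one log write outstanding at a time, the serialized commit rate
# is bounded by write latency, not by bandwidth:
for name, latency_s in [("flash SSD", 0.000_2), ("RAM SSD", 0.000_02)]:
    print(f"{name}: at ~{latency_s * 1e6:.0f} us per write, "
          f"max ~{1 / latency_s:,.0f} serialized commits/sec")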


There will always be a performance bottleneck somewhere in the I/O chain. The key with SSD performance is to make sure that the selected implementation method and its connectivity are not that bottleneck. The goal with SSD is to make the application or the server itself the bottleneck; from there, a decision can be made about whether to address that issue.



Shared SSD


During the performance needs analysis, if multiple applications on multiple servers require acceleration, or a single SSD needs to provide data to an application executing on multiple servers (such as a clustered database), then an external SSD that can be shared is essential. These systems can leverage a SAN and allow multiple applications to use the SSD to accelerate performance. For these applications, a PCI-based SSD is typically less desirable.


Sharing SSD is an ideal way to spread the cost of the solution across multiple applications. Unlike traditional mechanical storage, SSD won’t typically suffer a performance loss when supporting data sets from multiple applications; there are no moving parts that need to be repositioned to service random read or write requests. Finally, because of SSD’s higher cost, it’s ideal to use 90% or more of the available capacity to extract maximum value from the investment.
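The utilization point is simple arithmetic: the effective cost per used gigabyte is the purchase price divided by the capacity actually consumed. The sketch below uses a purely hypothetical price and capacity to show how that cost climbs as utilization falls.

# Effective cost per used GB = price / (capacity * utilization).
# Price and capacity are hypothetical placeholders.

PRICE_USD = 50_000
CAPACITY_GB = 500

for utilization in (0.5, 0.7, 0.9):
    cost = PRICE_USD / (CAPACITY_GB * utilization)
    print(f"{utilization:.0%} utilized: ${cost:,.0f} per used GB")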


If just a single application needs to be accelerated and that application does not require shared storage, PCI SSD may be a viable consideration. For example, some Microsoft SQL Server databases are clustered but the cluster is “shared nothing”, meaning that while multiple servers are used to provide redundancy, storage is not shared. In these cases, PCI SSD can be used to augment the storage of each server in the SQL Server cluster without decreasing cluster efficiency. PCI-based SSDs are also well positioned for applications that are already server-centric. In most cases, PCI SSD makes sense when more performance is needed but the server is already at maximum RAM and the limited number of internal disk drives cannot push application I/O high enough.



Capacity


Along with the number of qualifying applications, a determination has to be made about how much SSD capacity will be required. If that number is relatively small, less than 128GB for example, then either a RAM-based, externally attached system or additional server memory should be considered. In instances where the data set is small and read-intensive, server RAM should be the first approach to improving application performance. Where the data set is small and write-intensive, external RAM SSD, or in some cases internal PCI SSD, can be considered (as long as the application’s availability model is not impacted).


As the data set moves beyond 128GB, flash SSD should be considered. While there is some decrease in write-heavy performance, as we explain in our article “Pay Attention to Flash Controllers When Comparing SSD Systems”, these shortcomings are being addressed.


From 128GB to about 500GB, all varieties of flash SSD are typically worthy of consideration. At approximately 500GB, most PCI-based flash SSDs will need a second card added to the system. Regardless of vendor, there must be additional physical slots available to install the cards. Then, depending on the vendor and the quality of their flash controller software, adding another PCI card may consume additional server resources that can affect performance. Make sure to ask the vendor what, if any, server resources their PCI SSDs will require.


As capacity grows beyond 500GB, strong consideration should be given to external flash-based SSDs (both dedicated SSD systems and integrated flash-and-disk solutions). At this capacity level there will likely be multiple applications that can take advantage of SSD performance gains, further supporting a shared environment.


If the environment can justify more than a couple of terabytes of SSD storage, then consider removing the integrated solutions from the candidate list. The issue is not capacity scaling; the integrated systems can add SSD as long as there are drive slots to accommodate the units. The challenge goes back to the performance consideration: at some point the raw I/O potential of the SSDs will exceed the I/O capabilities of the storage shelf and/or the storage compute engine itself. Typically, externally attached SSD systems, since they are dedicated to memory-based I/O, will be able to scale both capacity and performance significantly higher.
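The capacity guidance in this section boils down to a handful of rules of thumb. The sketch below simply codifies the thresholds discussed above (128GB, 500GB, a couple of terabytes) as an illustration of the decision flow; it is not a sizing tool, and the function name and cutoffs are our own shorthand.

# Rules of thumb from this section, codified for illustration only.

def suggest_ssd_deployment(capacity_gb: float, write_intensive: bool,
                           shared_storage_required: bool) -> str:
    if shared_storage_required:
        return "external shared SSD (SAN-attached)"
    if capacity_gb < 128:
        if write_intensive:
            return "external RAM SSD, or in some cases internal PCI SSD"
        return "additional server RAM first"
    if capacity_gb <= 500:
        return "any variety of flash SSD (PCI or external)"
    if capacity_gb <= 2_000:
        return "external flash SSD (dedicated system or integrated with disk)"
    return "dedicated external SSD system"

print(suggest_ssd_deployment(64, write_intensive=False, shared_storage_required=False))
print(suggest_ssd_deployment(3_000, write_intensive=True, shared_storage_required=True))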


Beyond performance, application workloads, and capacity, consideration should also be given to data availability, data management, and the physical space consumed. We will cover these aspects in part two of this series. Then in part three we will focus on the factor that may trump all the rest: budget.

George Crump, Senior Analyst

This Article Sponsored by Texas Memory Systems