The storage systems that house these controllers are differentiated today more by storage management functions and features than by performance. One reason could be the parity in performance among these systems, a result of the hardware similarity previously discussed. While it's common for some standard, repetitive functions like RAID operations to be put into dedicated hardware, most of the higher-level features are not. Features like snapshots, deduplication, replication and encryption are usually software based and are performed by the CPU in the array controller.

Disk drives are the bottleneck

These two factors - overmatched array processors and an extra CPU load - could mean that array controllers won't keep up with the IOPS demand of typical servers. While this looks to be true 'on paper', it's not true in practice. The reason is that while array controller designs are effectively bottlenecked by their CPUs, they're still faster than hard disk drive I/O. At 200 IOPS per drive or less, back-end disk drive performance is even lower than that of the array processors. This means the bottleneck in the system is the back-end disks, not the array controller.
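The arithmetic behind this can be sketched in a few lines: the system delivers only as many IOPS as its slowest stage supports. The 200 IOPS per-drive figure comes from the text; the drive count and controller ceiling below are hypothetical numbers chosen purely for illustration.

```python
# Back-of-the-envelope bottleneck check: the system delivers the minimum
# IOPS of its stages. Drive count and controller ceiling are assumptions.
HDD_IOPS = 200             # per-drive figure cited in the text
DRIVE_COUNT = 24           # hypothetical shelf of disk drives
CONTROLLER_IOPS = 100_000  # hypothetical array-controller ceiling

backend_iops = HDD_IOPS * DRIVE_COUNT          # aggregate disk performance
system_iops = min(backend_iops, CONTROLLER_IOPS)

print(f"back end: {backend_iops} IOPS, controller: {CONTROLLER_IOPS} IOPS")
print("bottleneck:", "disks" if system_iops == backend_iops else "controller")
```

Even with dozens of drives, the aggregate disk performance sits far below the controller's ceiling, which is why the controller's CPU limitations stay hidden in an all-HDD array.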

Putting drive form factor SSDs into these legacy arrays quickly shifts the bottleneck away from the back-end storage media and onto the array controllers. Now the 'many to one' ratio of host server CPUs to array controller CPUs discussed earlier can cause real performance issues, and the shortcomings of using general purpose server hardware components to keep costs down become painfully obvious. Making matters worse, these controllers are further loaded with the overhead of array storage management functions. Operations like tracking multiple copies of data, updating metadata indices, comparing data blocks for changes and reverting to previous versions are added to the regular I/O operations that array controllers have to process. The result can be a storage array upgraded with SSDs that shows little real improvement over the performance applications saw with hard disk drives alone.


With a single drive form factor SSD producing in the neighborhood of 50,000 IOPS, and a storage array holding a dozen or more SSDs, it's clear that legacy hard disk drive array systems simply can't support SSDs effectively. SSDs require a different array controller architecture, one designed for fast I/O rather than built from mass market components to save product costs. Controllers in the RamSan family of SSD arrays by Texas Memory Systems use a simplified data path design and a massively parallel architecture to achieve this performance.
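Running the same bottleneck arithmetic with the SSD figures from the text shows the flip. The 50,000 IOPS per-SSD and dozen-drive numbers are from the article; the controller ceiling is a hypothetical figure for illustration.

```python
# Same back-of-the-envelope check with SSDs: the bottleneck moves to
# the controller. The controller ceiling is an illustrative assumption.
SSD_IOPS = 50_000          # per-SSD figure cited in the text
SSD_COUNT = 12             # "a dozen or more" SSDs
CONTROLLER_IOPS = 100_000  # hypothetical array-controller ceiling

backend_iops = SSD_IOPS * SSD_COUNT       # aggregate flash performance
delivered = min(backend_iops, CONTROLLER_IOPS)
utilization = delivered / backend_iops    # fraction of flash capability used

print(f"SSDs can supply {backend_iops} IOPS; controller delivers {delivered}")
print(f"fraction of flash performance reaching hosts: {utilization:.0%}")
```

Under these illustrative numbers, the hosts see only a small fraction of what the flash can supply, which is exactly the "little real improvement" outcome described above.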

Simplifying the data path refers to a 'flattening' of the process, or reducing the number of steps required for each data transaction. A simplified process such as this doesn't need the sophistication of a server CPU to carry out its instructions and can instead use a programmable gate array architecture. Without a CPU, the SSD controller must also be stripped of the storage management functions that are resident on most array controllers today, eliminating another potential performance drain. Without these data functions, or the complexity of a centralized data path architecture, these SSD controllers don't need to deal with issues like data consistency, coherency or locking. There's also no need for memory or cache space to hold hash tables, snapshots or other copies of the data sets to which these features are applied.

Massively parallel design

In order to scale performance, this flatter, distributed process would also need to be more parallel, meaning many simplified data paths running between end points independently and simultaneously. Individual flash controllers would be designed to handle a couple dozen flash chips each, a much simpler task than designing a controller to handle potentially hundreds of disk drives. Data could be striped across these individual flash controllers, combining the performance of hundreds of flash chips to produce the IOPS expected of SSD arrays.
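The striping idea above can be sketched as a simple address-to-controller mapping: successive fixed-size chunks rotate across the controllers, so a stream of I/O fans out over all of them in parallel. The controller count, chunk size and function names below are hypothetical; real designs vary.

```python
# A minimal sketch of striping across independent flash controllers,
# assuming a fixed chunk size rotated round-robin. All parameters are
# illustrative assumptions, not a description of any specific product.
N_CONTROLLERS = 16
STRIPE_SIZE = 4096  # bytes per chunk before moving to the next controller

def route(byte_offset: int) -> int:
    """Return the index of the flash controller serving this byte offset."""
    chunk = byte_offset // STRIPE_SIZE
    return chunk % N_CONTROLLERS

# Sequential I/O spreads across all controllers, so their throughput adds up.
offsets = [i * STRIPE_SIZE for i in range(32)]
controllers_hit = {route(off) for off in offsets}
print(f"32 sequential chunks touched {len(controllers_hit)} controllers")
```

Because each controller works its own chunks independently, aggregate IOPS grows roughly with the number of controllers rather than being funneled through one central data path.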

Solid state storage technology offers enormous potential performance improvements over traditional HDDs. When packaged in a disk drive form factor, SSDs provide orders of magnitude better IOPS than HDDs. Although replacing mechanical drives with SSDs is a simple operation and certainly makes intuitive sense, it may not lead to the anticipated performance improvement. Because of the IOPS limitations of HDDs, existing array controllers weren't designed to handle the much higher I/O levels that SSDs can support. A new controller design is required, one which simplifies the data path, leverages hardware instead of software in the I/O process and eliminates high-overhead 'storage management' functions. This distributed architecture can better support the performance of solid state storage and enable performance expectations to be met.

Texas Memory Systems is a client of Storage Switzerland

Eric Slack, Senior Analyst
