The storage systems that house these controllers are differentiated today more by storage management functions and features than by performance. One reason could be the parity in performance across these systems, a result of the similarity in hardware discussed previously. While it’s common for some standard, repetitive functions like RAID operations to be put into dedicated hardware, most of the higher-level features are not. Features like snapshots, deduplication, replication and encryption are usually software based and are performed by the CPU in the array controller.



Disk drives are the bottleneck


These two factors - overmatched array processors and an extra CPU load - could mean that array processors won’t keep up with the IOPS demand of typical servers. While this looks to be true ‘on paper’, it’s not true in practice. The reason is that while array controller designs are effectively bottlenecked by their CPU, they’re still faster than hard disk drive I/O. At 200 IOPS per drive or less, back-end disk drive performance is even lower than that of the array processors. This means the bottleneck in the system is the back-end disks, not the array controller.
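As a back-of-the-envelope illustration of that point, the sketch below multiplies the 200 IOPS per-drive figure above by an assumed drive count and compares the result against an assumed controller capability. Both the drive count and the controller figure are illustrative assumptions, not measurements of any particular array.

```python
# Back-of-the-envelope sketch: hypothetical figures, for illustration only.
HDD_IOPS = 200             # per-drive figure cited in the text
DRIVE_COUNT = 48           # assumed mid-size shelf of hard disk drives
CONTROLLER_IOPS = 100_000  # assumed capability of a typical array controller CPU

backend_iops = HDD_IOPS * DRIVE_COUNT
print(f"Aggregate HDD back end: {backend_iops:,} IOPS")                     # 9,600 IOPS
print(f"Controller headroom:    {CONTROLLER_IOPS - backend_iops:,} IOPS")   # plenty to spare
```

With numbers in this range, the spinning disks set the ceiling long before the controller does.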


Putting drive form factor SSDs into these legacy arrays quickly shifts the bottleneck away from the back-end storage media and onto the array controllers. Now the ‘many to one’ ratio of host server CPUs to array controller CPUs discussed earlier can cause real performance issues, and the shortcomings of using general purpose server hardware components to keep costs down become painfully obvious. Making matters worse, these controllers are further loaded with the overhead of array storage management functions. Operations like tracking multiple copies of data, updating metadata indices, comparing data blocks for changes and reverting to previous versions are added to the regular I/O operations that array controllers have to process. The result can be a storage array upgraded with SSDs that shows little real improvement over the performance its applications saw with hard disk drives alone.



Solution


With a single drive form factor SSD producing in the neighborhood of 50,000 IOPS, and a storage array holding a dozen or more SSDs, it’s clear that legacy hard disk drive array systems simply can’t support SSDs effectively. SSDs require a different array controller architecture, one designed for fast I/O rather than simply constructed with mass market components to hold down product costs. Controllers in the RamSan family of SSD arrays from Texas Memory Systems use a simplified data path design and a massively parallel architecture to achieve this performance.
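Repeating the same back-of-the-envelope arithmetic with the figures from this paragraph shows how far the bottleneck moves. The controller capability is again an assumed, illustrative number.

```python
# Same sketch, swapping SSDs into the back end (per-drive figure from the text, rest assumed).
SSD_IOPS = 50_000          # per-drive figure cited in the text
SSD_COUNT = 12             # "a dozen or more" SSDs
CONTROLLER_IOPS = 100_000  # same assumed controller capability as before

backend_iops = SSD_IOPS * SSD_COUNT
print(f"Aggregate SSD back end: {backend_iops:,} IOPS")                     # 600,000 IOPS
print(f"Controller shortfall:   {backend_iops - CONTROLLER_IOPS:,} IOPS")   # demand the controller can't serve
```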


Simplifying the data path refers to a ‘flattening’ of the process, or reducing the number of steps required for each data transaction. A process this simple doesn’t need the sophistication of a server CPU to carry out its instructions and can instead run on a programmable gate array architecture. Without a CPU, the SSD controller would have to be stripped of the storage management functions that are resident on most array controllers today, eliminating another potential performance drain. Without these data functions, or the complexity of a centralized data path architecture, these SSD controllers don’t need to deal with issues like data consistency, coherency or locking. There’s also no need for memory or cache space to hold the hash tables, snapshots or other copies of data sets that those features would otherwise operate on.
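Purely as a conceptual sketch of what ‘flattening’ removes, the snippet below contrasts a write that pays for a software feature stack with one that goes straight to the media. The functions and data structures are hypothetical stand-ins and are not a description of any vendor’s firmware.

```python
# Conceptual sketch only: a feature-laden write path versus a flattened one.
# Dictionaries stand in for back-end media and feature metadata.
import hashlib

media = {}            # block address -> data
snapshot_log = []     # copy-on-write bookkeeping
dedupe_index = {}     # content hash -> block address

def legacy_write(block, data):
    """Every write pays for the software feature stack before touching media."""
    snapshot_log.append((block, media.get(block)))   # preserve the old version
    digest = hashlib.sha256(data).hexdigest()
    if digest in dedupe_index:                       # compare against existing blocks
        media[block] = media[dedupe_index[digest]]
        return
    dedupe_index[digest] = block
    media[block] = data                              # finally, the actual I/O

def flattened_write(block, data):
    """The simplified path: hand the request straight to the media."""
    media[block] = data

legacy_write(0, b"hello")
flattened_write(1, b"world")
```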



Massively parallel design


In order to scale performance, this flatter, distributed process would also need to be more parallel, meaning many simplified data paths running between end points independently and simultaneously. Individual flash controllers would each be designed to handle a couple dozen flash chips, a much simpler task than designing a controller to handle potentially hundreds of disk drives. Data could be striped across these individual flash controllers, combining the performance of the hundreds of flash chips needed to produce the IOPS expected of SSD arrays.
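A minimal sketch of that striping idea, assuming a hypothetical set of independent flash controllers; the controller count and the modulo placement rule are illustrative choices, not a description of the RamSan design.

```python
# Illustrative striping across independent flash controllers.
from concurrent.futures import ThreadPoolExecutor

class FlashController:
    """Stand-in for one of many independent flash controllers."""
    def __init__(self):
        self.chips = {}                 # logical block -> data

    def write(self, block, data):
        self.chips[block] = data        # each controller handles its own I/O

NUM_CONTROLLERS = 16                    # assumed controller count
controllers = [FlashController() for _ in range(NUM_CONTROLLERS)]

def striped_write(block, data):
    """Block address picks the controller; writes to different controllers run in parallel."""
    controllers[block % NUM_CONTROLLERS].write(block, data)

# Issue many writes concurrently; each lands on an independent controller.
with ThreadPoolExecutor(max_workers=NUM_CONTROLLERS) as pool:
    for blk in range(1024):
        pool.submit(striped_write, blk, b"payload")
```

Because no single CPU sits in the middle of every transaction, adding controllers adds data paths, and aggregate IOPS scales with them.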


Solid state storage technology offers enormous potential performance improvements over traditional HDDs. When packaged in a disk drive form factor, SSDs provide orders of magnitude better IOPS than HDDs. Although replacing mechanical drives with SSDs is a simple operation and certainly makes intuitive sense, it may not deliver the anticipated performance improvement. Because of the IOPS limitations of HDDs, existing array controllers weren’t designed to handle the much higher I/O rates that SSDs can support. A new controller design is required, one which simplifies the data path, leverages hardware instead of software in the I/O process and eliminates high-overhead ‘storage management’ functions. This distributed architecture can better support the performance of solid state storage and enable performance expectations to be met.

Texas Memory Systems is a client of Storage Switzerland

Eric Slack, Senior Analyst

 