George Crump, Senior Analyst

Automation, especially in the form of caching, is quickly becoming the universal answer. At the Flash Memory Summit, just on day one, we were briefed by LSI, VeloBit, FlashSoft, STEC and Marvell on their cache solutions. Look for more detailed briefing notes on each of these solutions in the coming days, but needless to say, we are in the year of the cache. As you can see from the link panel on the right, we have already been writing a lot about flash-based cache this year. Expect that pace to increase.

Without cache or some sort of tiering technology, you have to analyze the applications in your enterprise and decide which ones, or which sections of those applications, should be promoted to SSD. This takes time. I don't speak to many IT people who have spare time; they are stretched thin just keeping things running. You also need to do this analysis on an ongoing basis, since things change in the data center and what is hot data today may be superseded by even hotter data in the future. If you don't have time to do the analysis once, you certainly don't have time to do it five or six times a year.

Enter caching: it constantly, second by second, analyzes your data to make sure the most appropriate data is on SSD at the right moment in time. Of course, caching is nothing new; we have had the technology seemingly forever. What has changed is how flash SSDs enhance the technology. Caching is only of benefit if the data is actually in cache when the application requests it. Instead of a 64MB RAM cache, you can easily have a 640GB flash cache, so the chances of data being in cache go up significantly. The other change is that vendors have continued to refine cache algorithms to improve cache accuracy, and many are now caching writes as well, which accelerates the slowest of I/O operations.
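The promote-and-evict behavior described above can be illustrated with a simple least-recently-used (LRU) policy. This is only a minimal sketch of one common baseline algorithm, not any particular vendor's implementation; as noted, the products being briefed use more refined algorithms than plain LRU:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU read cache: keeps the most recently accessed
    blocks and evicts the least recently used block when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data

    def read(self, block_id, fetch_from_disk):
        if block_id in self.blocks:
            # Cache hit: mark this block as most recently used.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        # Cache miss: fetch from the backing store and promote into cache.
        data = fetch_from_disk(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            # Evict the least recently used block (front of the dict).
            self.blocks.popitem(last=False)
        return data
```

The point of the sketch is the size argument made above: the bigger the cache relative to the working set, the less often the eviction branch fires and the higher the hit rate, which is exactly what moving from a 64MB RAM cache to a multi-hundred-gigabyte flash cache buys you.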

Cache Wars

Caching, potentially more so than automated tiering, is going to propel wider flash acceptance. It is simple, cost-effective and gives you much of the performance boost that a pure SSD solution provides. The looming cache war is going to be over where the cache is located (in the server, in the network or on the storage) and how the cache will be implemented (all in hardware, as a software driver or some combination).

Just like everything else in technology, each implementation strategy has its advantages and disadvantages. Which one you choose will depend largely on your environment; as is always the case, there is no perfect solution that covers every use case. The alternative to cache-enhanced mechanical storage is a pure solid state storage system. Compared to caching or tiering, it can have some advantages as well, and it is something we will explore in an upcoming entry.

Flash Memory Summit Briefing Note