The Storage Performance Issue


With VDI, the data center is now responsible for the end user experience. Three key factors differentiate performance issues in a VDI environment from those of a data center server virtualization project: the number of end-points (desktops vs. servers) accessing the storage system, the need for more storage capacity, and large numbers of simultaneous logins and logouts.


First, there is the issue of the sheer number of storage end-points. An end-point accesses storage either directly or through another layer. A server virtualization infrastructure introduces challenges when compared to the legacy one-application-per-server design. While the infrastructure may use fewer physical servers, each virtualization host may have dozens of VMs, each with its own I/O demands. Tuning performance for these VMs is critical, something that Storage Switzerland has covered in "VMware on NAS". To say that server virtualization acts as an “I/O mixer” is an understatement.


While a VDI environment may have an equal number of physical servers, its VM density per physical host is significantly greater than in the traditional server virtualization environment. A VDI environment takes the I/O mixer and turns it to puree, with an even greater impact. The performance needs are also different. The need for high performance from each end-point VM is replaced by the need for consistent performance for each end-point, something that’s particularly challenging when hundreds or thousands of end-points can, in aggregate, create a massive load on the storage system.
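
To make that scale concrete, consider a rough back-of-the-envelope estimate. The desktop count and per-desktop IOPS figures below are illustrative assumptions, not measurements from any particular deployment:

```python
# Back-of-the-envelope VDI load estimate; all figures are assumptions.

desktops = 1000            # virtual desktops in the deployment
steady_state_iops = 10     # assumed per-desktop IOPS while a user works
peak_iops = 100            # assumed per-desktop IOPS during login/boot

print(f"Steady-state aggregate: {desktops * steady_state_iops:,} IOPS")
print(f"Worst-case simultaneous peak: {desktops * peak_iops:,} IOPS")
```

Even modest per-desktop activity multiplies into tens of thousands of IOPS, which is why consistency across end-points, rather than raw speed for any one of them, becomes the design goal.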


One of the best ways to address the challenge of this high end-point count may be to use a NAS storage system to support the VDI storage infrastructure. NAS is designed from the start to handle shared access to files, which is essentially what desktops become once virtualized. Also, current versions of desktop virtualization solutions suffer from iSCSI locking, limiting the usefulness of iSCSI for larger VDI deployments. Storage systems like Nexenta's NexentaStor, as an example, automatically pull frequently accessed blocks, such as the image of a desktop in a VDI deployment, onto SSDs and into memory as I/O demands rise or fall. This is especially important in the VDI environment, where a user’s desktop may sit idle for long periods throughout the day but then suddenly need improved storage I/O.
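
As a rough illustration of the idea, not NexentaStor's actual implementation, the sketch below promotes blocks to an SSD tier once their recent access count crosses a threshold and evicts them as demand fades; the threshold and decay policy are arbitrary assumptions:

```python
from collections import Counter

PROMOTE_AFTER = 5          # assumed reads before a block moves to SSD
access_counts = Counter()  # block id -> recent access count
ssd_cache = set()          # block ids currently held on SSD

def read_block(block_id):
    access_counts[block_id] += 1
    if block_id in ssd_cache:
        return "served from SSD"
    if access_counts[block_id] >= PROMOTE_AFTER:
        ssd_cache.add(block_id)          # hot block: promote to flash
    return "served from disk"

def decay():
    """Periodically age counters so a desktop idle all day gets evicted."""
    for block_id in list(access_counts):
        access_counts[block_id] //= 2
        if access_counts[block_id] == 0:
            del access_counts[block_id]
            ssd_cache.discard(block_id)  # demand fell off: free the flash
```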


Some systems, like NexentaStor, can also migrate virtual desktops between separate NAS servers automatically, via software that interacts with the APIs of all the major virtualization solutions. If the bottleneck turns out to be the NAS hardware itself, a group of virtual desktops can be triggered to move to another NAS server, as sketched below. This again illustrates the value of a NAS system: even when multiple NAS servers need to be connected, their setup can be accomplished easily and then managed through the virtualization software.
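
A minimal sketch of that trigger logic might look like the following; the 80% threshold, the per-server load metric, and the migrate() callback are all hypothetical stand-ins for whatever the virtualization platform's API actually exposes:

```python
HEADROOM = 0.80   # assumed ceiling on a NAS server's I/O utilization

def rebalance(nas_servers, migrate):
    """Move a batch of desktops off any NAS head running too hot."""
    for busy in [s for s in nas_servers if s["load"] > HEADROOM]:
        target = min(nas_servers, key=lambda s: s["load"])
        if target is not busy:
            migrate(source=busy, dest=target, count=25)  # one batch of VMs

# Example: two heads, one saturated.
servers = [{"name": "nas1", "load": 0.92}, {"name": "nas2", "load": 0.40}]
rebalance(servers, lambda source, dest, count:
          print(f"move {count} desktops {source['name']} -> {dest['name']}"))
```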



The Storage Capacity Issue


A second storage management issue that VDI creates is the need for additional storage capacity. VDI can make a significant impact on the SAN storage budget. To address this issue, NAS solutions should employ at least four storage optimization techniques. The first is thin provisioning, which allows virtual desktops to be created without hard-allocating all of their storage up front. While many VDI software applications offer this capability, having the storage system perform the function removes the I/O load from the server while also reducing capacity requirements. The second technique is writable snapshots. This allows a ‘golden master’ desktop to be created and used repeatedly, instead of creating a new installation for each new desktop. Because the snapshots are writable, each copy can be updated so that customization can be applied to individual desktops as needed.
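
The copy-on-write mechanics behind writable snapshots can be sketched in a few lines. This is a toy model, not any vendor's on-disk format; a clone shares the golden master's blocks and consumes new space only for the blocks it changes:

```python
golden_master = {0: "os", 1: "apps", 2: "default settings"}  # block -> data

class DesktopClone:
    """A writable snapshot: shares the master's blocks until written."""
    def __init__(self, master):
        self.master = master
        self.delta = {}                 # only blocks this desktop changed

    def read(self, block_id):
        return self.delta.get(block_id, self.master.get(block_id))

    def write(self, block_id, data):
        self.delta[block_id] = data     # copy-on-write: master untouched

desktops = [DesktopClone(golden_master) for _ in range(1000)]
desktops[0].write(2, "custom settings")
# 1,000 desktops, but new space is consumed only for one changed block.
print(sum(len(d.delta) for d in desktops))   # -> 1
```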


The final two optimization technologies that a NAS solution should deploy are deduplication and compression. Deduplication eliminates the redundant copies of data that accumulate after the snapshot of the golden master is taken; duplicate data inevitably creeps into the environment over time, and deduplication finds and eliminates it as it occurs. Compression is also essential. Deduplication, while very space efficient, is only effective on redundant data. Compression, on the other hand, works across all data types and brings space savings to data that’s not redundant.
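
A minimal sketch of how the two combine, using content hashing for dedup and zlib for compression (real systems operate on fixed-size blocks with far more bookkeeping):

```python
import hashlib
import zlib

store = {}      # sha256 digest -> compressed block (unique data only)
refcount = {}   # digest -> number of logical references

def write_block(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:                  # unseen content only
        store[digest] = zlib.compress(data)  # compression helps non-dupes
    refcount[digest] = refcount.get(digest, 0) + 1
    return digest                            # a duplicate costs a refcount

def read_block(digest: str) -> bytes:
    return zlib.decompress(store[digest])

a = write_block(b"shared system file" * 100)
b = write_block(b"shared system file" * 100)   # duplicate write
assert a == b and len(store) == 1              # stored exactly once
```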


It is rare to find all four space-saving capabilities within a single NAS solution. When all can be applied, as is the case with NexentaStor, the combined effect can reduce capacity consumption by as much as 90% compared to the stand-alone desktop environment. Considering that shared storage is typically 2X to 3X the cost of direct attached storage, the accumulated savings can be significant.
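
A worked example makes the economics clearer; the fleet size, image size, and per-GB costs below are illustrative assumptions, with only the 90% reduction and the 2X to 3X cost multiple taken from the discussion above:

```python
raw_gb = 1000 * 50                  # assumed: 1,000 desktops x 50 GB each
optimized_gb = raw_gb * (1 - 0.90)  # up to 90% reduction with all four

shared_per_gb = 2.50  # assumed $/GB, ~2.5X a $1/GB direct attached baseline

print(f"Capacity: {raw_gb:,} GB -> {optimized_gb:,.0f} GB")
print(f"Shared storage spend: ${raw_gb * shared_per_gb:,.0f} "
      f"-> ${optimized_gb * shared_per_gb:,.0f}")
```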



The Boot Storm Issue


The final challenge, compared to server virtualization, is that while VDI shares the random nature of storage I/O, it also adds a very specific high-demand moment: when the bulk of the workforce logs in for the day. Known as a ‘login storm’ or ‘boot storm’, this peak I/O moment can dramatically affect performance. While boot storms can be addressed by phasing in power-ons before the workforce arrives, there is little that can be done to work around the I/O demands of a login storm other than accelerating storage I/O performance.
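
Phasing in the power-ons is straightforward to automate. The sketch below spreads an assumed fleet of 1,000 desktops across timed batches that all launch before the workday starts; the batch size, gap, and start time are illustrative assumptions:

```python
from datetime import datetime, timedelta

def boot_schedule(desktop_ids, workday_start, batch_size=50, gap_min=5):
    """Yield (power_on_time, batch) pairs, all before workday_start."""
    batches = [desktop_ids[i:i + batch_size]
               for i in range(0, len(desktop_ids), batch_size)]
    first = workday_start - timedelta(minutes=gap_min * len(batches))
    for n, batch in enumerate(batches):
        yield first + timedelta(minutes=gap_min * n), batch

for when, batch in boot_schedule(list(range(1000)),
                                 datetime(2012, 6, 4, 8, 30)):
    print(when.strftime("%H:%M"), f"power on {len(batch)} desktops")
```

Note that this only helps with the boot storm; the login storm still arrives with the users themselves.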


This challenge is addressed by leveraging both the auto-tiering capabilities and the space optimization of some NAS software applications. Solid State Storage (SSS) is an ideal answer to the login storm problem because it offers extremely good read performance, which is exactly what a virtual desktop needs during a login. If those logins are all served from SSS, there will usually be a negligible, if any, performance impact.


The challenge is the cost associated with moving a thousand virtual desktops to SSS, as well as the performance impact of the move itself. While auto-tiering could be leveraged, the movement of all of those images would be time consuming, since flash SSS is significantly slower at writes than it is at reads. However, if the capacity optimization techniques discussed above were leveraged, the amount of SSS required to host the virtual desktops would be minimal. The administrator could either auto-tier the master image to SSS ahead of the typical login window or, given how small the deduplicated image is, simply leave it on SSS permanently. One remaining unique component would be the user profiles; these could be migrated to SSS during login and migrated back to mechanical hard drives after the login period is over. The other remaining unique component, individual user data, could be migrated to and from SSS as needed, since most users only work on a few files at a time. Ideally the auto-tiering would be done at the block level and would treat the SSS as an extension of cache, so that all of these decisions could be made in real time, within the file system itself.
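
One way to picture the placement policy described above is as a simple time- and type-aware tiering rule. The window boundaries and tier names below are assumptions for illustration; a real implementation would live inside the file system and react to actual I/O rather than the clock alone:

```python
from datetime import time

LOGIN_WINDOW = (time(7, 30), time(9, 30))    # assumed morning login period

def preferred_tier(now, kind):
    """Master image pinned to flash; profiles on flash only around login."""
    if kind == "master_image":
        return "ssd"                 # deduplicated image is tiny: pin it
    if kind == "user_profile":
        start, end = LOGIN_WINDOW
        return "ssd" if start <= now <= end else "hdd"
    return "hdd"                     # bulk user data promoted on demand

print(preferred_tier(time(8, 0), "user_profile"))    # -> ssd
print(preferred_tier(time(14, 0), "user_profile"))   # -> hdd
```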


The net result is that with a NAS software application that has the above capabilities, including in-line deduplication, SSS-based caching, thin provisioning, and integration with the virtualization environments, users can experience performance equal to what they had with their own local processing power. VDI implementations typically cost 1.5X to 2X as much as traditional implementations, thanks in large part to the extreme cost of providing the additional storage capacity and performance required; as a result, most enterprises have a hard time justifying the ROI for VDI. Only a storage solution that performs extremely well while reducing CapEx and OpEx can unlock the promise of VDI.

George Crump, Senior Analyst

Nexenta is a client of Storage Switzerland