The first step to maximizing the FC investment is to understand the current utilization level of the infrastructure. Optimization requires accurate, line-level data collection. This is unfortunately where the process breaks down. The default vendor response is to throw more hardware at perceived network growth needs (or to throw IP protocols at it) instead of using what is already installed more effectively. Most storage vendors are armed with their own software-only monitoring tools that provide a relatively simple ‘working / broken’ analysis. In addition to being vendor supplied, these tools tend to take a myopic view of the environment. What’s needed for maximum understanding and utilization is hardware-level fiber-optic network tapping like that offered by Virtual Instruments, which collects data on SAN latency at the ‘scene of the crime’ instead of getting it second- or third-hand. Hardware-layer data collection provides a detailed analysis of actual utilization, in realtime. Realtime analysis also becomes important later, to manage the narrower resource headroom created when optimizing the environment.
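To make the idea of line-level, realtime collection concrete, below is a minimal sketch that rolls hypothetical per-port samples (port name, exchange completion time, megabytes transferred) into a simple utilization and headroom view. The field names, window size, and bandwidth figure are assumptions for illustration only, not an actual vendor data model.

```python
# Hypothetical roll-up of hardware-tap samples into a realtime utilization view.
# Field names and numbers are illustrative only, not an actual vendor data model.
from collections import defaultdict
from statistics import mean

# Each sample: (port, exchange_completion_time_ms, megabytes_transferred)
samples = [
    ("switch1/port7",  0.8, 12.0),
    ("switch1/port7",  2.4,  6.5),
    ("switch2/port12", 0.5, 48.0),
]

WINDOW_SECONDS = 1.0          # size of the rolling window the samples cover
LINK_CAPACITY_MBPS = 800.0    # ~8Gb FC usable payload bandwidth (approximate)

per_port = defaultdict(list)
for port, latency_ms, mbytes in samples:
    per_port[port].append((latency_ms, mbytes))

for port, rows in per_port.items():
    avg_latency = mean(lat for lat, _ in rows)
    throughput = sum(mb for _, mb in rows) / WINDOW_SECONDS
    headroom = 1.0 - (throughput / LINK_CAPACITY_MBPS)
    print(f"{port}: {avg_latency:.1f} ms avg latency, "
          f"{throughput:.0f} MB/s, {headroom:.0%} headroom")
```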


With this data readily available, the environment can now be optimized. One of the first steps is to correct obvious storage configuration issues that often occur in the rush to get new projects rolled out. These are simple configuration problems that don’t require an FC certification to fix. For example, in almost every implementation, the Virtual Instruments team will find misconfigured ports or inter-switch links (ISLs) that aren’t operating at the correct speed. Again, while relatively simple to fix, identifying these problems, especially in a large data center with thousands of ports, is very difficult. Many customers can correct enough of these miscues that this step alone pays for the investment in the analysis. With the performance gains from these simple fixes, the SAN is often ready to handle several times the server and storage capacity that was originally thought possible.
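As a rough illustration of the kind of check involved, the sketch below scans a hypothetical port inventory and flags ports or ISLs that negotiated a lower speed than they are rated for. The inventory structure and names are invented for the example; a real fabric would be queried through the switch vendor's management interface.

```python
# Flag ports/ISLs that negotiated a lower speed than they are rated for.
# The inventory structure and values are hypothetical, for illustration only.
ports = [
    {"name": "dcx1/1/7",  "type": "ISL",  "rated_gb": 8, "negotiated_gb": 4},
    {"name": "dcx1/2/12", "type": "host", "rated_gb": 8, "negotiated_gb": 8},
    {"name": "dcx2/4/3",  "type": "ISL",  "rated_gb": 8, "negotiated_gb": 2},
]

suspect = [p for p in ports if p["negotiated_gb"] < p["rated_gb"]]

# Report the worst mismatches first.
for p in sorted(suspect, key=lambda p: p["rated_gb"] - p["negotiated_gb"], reverse=True):
    print(f'{p["name"]} ({p["type"]}): rated {p["rated_gb"]}Gb, '
          f'running at {p["negotiated_gb"]}Gb')
```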


After basic configuration issues are addressed, the next step is typically performance-based tiering, which differs from ‘frequency-of-access’ tiering. This is not moving old data to less expensive storage; it is making sure that current data is on performance-correct storage, true optimization of the storage infrastructure. For example, if a file server’s volume is on a LUN built on 15k RPM hard disk drives, hardware-based analysis will often determine that the file server never comes close to using the potential bandwidth of those drives. Consequently, the volume can be moved to another drive type, either slower Fibre Channel drives or even SATA drives, freeing up the more expensive storage for applications that can take advantage of it. The other advantage of this approach is that it doesn’t impact users or applications, which still see their storage on "Drive G". It’s just that "Drive G" has moved to another location, with the translation handled by the OS. This is especially powerful in virtualized environments using VMware. Products like Virtual Instruments' VirtualWisdom software can correlate the hardware-layer analysis with the abstracted virtual machine layer to provide a complete view of the environment. This allows analysis of individual VMs to see whether they too are on correctly performing storage, or whether particular storage spindles are overloaded with VMs. Then, using VMware's Storage VMotion, VMs can be moved in realtime to make more efficient use of the storage resources.
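To make the tiering decision concrete, here is a minimal sketch of the kind of comparison involved: observed peak throughput per LUN against rough bandwidth figures per drive tier. The tier names, bandwidth numbers, and LUN data are assumptions for the example, not measured values or any vendor's algorithm.

```python
# Recommend a storage tier based on observed peak throughput per LUN.
# Tier bandwidth figures and LUN data are illustrative assumptions.
TIERS_MBPS = {            # rough usable MB/s per tier
    "15k_fc": 600.0,
    "10k_fc": 400.0,
    "sata":   150.0,
}

luns = {                  # LUN -> observed peak MB/s from the hardware-layer data
    "fileserver_g": 35.0,
    "oltp_db_01":  520.0,
    "vmfs_ds_04":  120.0,
}

def recommend(peak_mbps, utilization_target=0.7):
    """Pick the slowest (least expensive) tier that still leaves headroom."""
    for tier, capacity in sorted(TIERS_MBPS.items(), key=lambda kv: kv[1]):
        if peak_mbps <= capacity * utilization_target:
            return tier
    return max(TIERS_MBPS, key=TIERS_MBPS.get)   # fall back to the fastest tier

for lun, peak in luns.items():
    print(f"{lun}: peak {peak:.0f} MB/s -> {recommend(peak)}")
```

In this sketch the lightly used file server volume lands on SATA while the busy database LUN stays on 15k drives, which is the reallocation the paragraph describes.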


The same process can be applied to switch utilization as well. Most environments are in a ‘perpetual upgrade state’ as they move through different generations of technology, and won’t completely sweep the floor of 4Gb switches in favor of 8Gb switches; there will be a mix. As 8Gb switches are implemented, it makes sense to move only those servers that have the highest probability of using the extra bandwidth onto them. Hardware-level monitoring and analysis can provide that information.
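A hedged sketch of that selection, assuming per-server peak throughput figures taken from the same hardware-layer monitoring: servers already saturating their 4Gb links are the ones ranked for migration. Server names, throughput numbers, and the 80% threshold are illustrative.

```python
# Rank servers for migration to 8Gb switch ports based on observed peak throughput.
# Server names, numbers, and the threshold are illustrative assumptions.
FOUR_GB_USABLE_MBPS = 400.0   # approximate usable payload bandwidth of a 4Gb FC port

servers = {                   # server -> observed peak MB/s on its current 4Gb port
    "esx-cluster-a1": 390.0,
    "backup-media-1": 370.0,
    "web-frontend-3":  45.0,
    "file-server-g":   30.0,
}

# Servers already pushing their 4Gb links are the most likely to use 8Gb.
for name, peak in sorted(servers.items(), key=lambda kv: kv[1], reverse=True):
    saturation = peak / FOUR_GB_USABLE_MBPS
    action = "move to 8Gb" if saturation > 0.8 else "leave on 4Gb"
    print(f"{name}: {saturation:.0%} of 4Gb link -> {action}")
```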


As the speed of the infrastructure continues to increase, fewer servers will be able to take advantage of those increases. The potential then exists to "stack" multiple servers onto a single card with I/O Virtualization (IOV). As we described in our article "What is IOV?", companies like Virtensys make a physical gateway in which high-performing I/O cards can be shared across multiple physical servers. The knowledge of which servers should share these cards once again comes from the physical-layer analysis that a product like VirtualWisdom provides.
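As a rough sketch of that consolidation decision (again with invented numbers), a simple first-fit-decreasing pass can pack servers onto shared I/O cards so that their combined peak bandwidth stays within each card's capacity. The card capacity and per-server figures are assumptions, not a description of any vendor's placement logic.

```python
# First-fit-decreasing packing of servers onto shared I/O cards (IOV).
# Card capacity and per-server peak bandwidth figures are illustrative.
CARD_CAPACITY_MBPS = 800.0    # assumed usable bandwidth of one shared 8Gb card

server_peaks = {              # server -> observed peak MB/s
    "db-01": 450.0, "db-02": 300.0, "app-01": 120.0,
    "app-02": 90.0, "web-01": 60.0, "web-02": 40.0,
}

cards = []                    # each card is a list of (server, peak) assignments
for server, peak in sorted(server_peaks.items(), key=lambda kv: kv[1], reverse=True):
    for card in cards:
        if sum(p for _, p in card) + peak <= CARD_CAPACITY_MBPS:
            card.append((server, peak))
            break
    else:
        cards.append([(server, peak)])   # no existing card fits, start a new one

for i, card in enumerate(cards, start=1):
    total = sum(p for _, p in card)
    names = ", ".join(s for s, _ in card)
    print(f"card {i}: {names} ({total:.0f} MB/s of {CARD_CAPACITY_MBPS:.0f})")
```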


If a move away from Fibre Channel is intended to address a performance bottleneck or a cost concern, proper analysis of the environment will often show that the infrastructure is far more robust than originally thought. With the right insight and optimization, many more servers and data sets can usually be supported. Doing so will actually save money since, in most cases, the infrastructure has already been bought and paid for. The final objection may be the manageability of the environment versus IP, but as Storage Switzerland will discuss in an upcoming article, that too can be made significantly easier with hardware-level inspection and reporting.

Virtual Instruments is a client of Storage Switzerland

George Crump, Senior Analyst
