The option to explore an IP-based technology for storage networking usually arises when the Fibre Channel (FC) infrastructure has reached a capacity threshold and more must be added, either with another switch or an upgrade to a director-class product. It can also come up when a technology refresh is on the horizon. Right now most data center FC SANs run at 2Gb or 4Gb, with 8Gb FC the upgrade under consideration, while IP now has a fast alternative in 10GbE. Some data center managers may decide that instead of upgrading to the next generation of FC (8Gb or eventually 16Gb), the time has come to switch to an IP-based storage protocol.


Before even entering a discussion about the merits of the various protocols, the IT manager should ask whether they are being compared fairly. FC is the clear market leader, especially in the large enterprise data center, which can lead to a large FC implementation being compared against a much smaller IP SAN. In that light, FC's alleged complexity may simply reflect a large enterprise data center pushing storage technology as a whole to its limits, set against a simpler IP example. Any protocol that has to scale to support large numbers of connections and deliver very high performance will bring additional cost with it. In some ways these challenges may also stem from the fact that FC 'got there first' and consequently ran into issues that IP as a storage technology has not yet faced. There is still value in comparing FC with the IP options, because the result may lead to a better, more optimized FC infrastructure rather than an outright replacement. It is important, however, to judge FC against a like-sized IP environment.


There are three key questions to ask when considering this decision. First, "will the move to an IP-based protocol really be a one-time, complete conversion?" In almost all cases it will instead be a very gradual transition from FC to iSCSI or NAS. This means that for the foreseeable future, probably measured in years, both protocols will have to be supported. As a result, the potential cost savings will be greatly diminished; in fact, costs may even increase as two protocols are managed, monitored and maintained.


The second question is "will the IP-based protocol be able to deliver on the performance requirements of the environment, or will the amount of tuning required place a significant cost burden on it?" As discussed above, some of the perception issues that surround FC may simply be the result of its 'first in' status, since it was the first protocol to be scaled and pushed to a truly enterprise level. IP-based protocols will likely develop the same problems as they are scaled to the same proportions. Or it may turn out that IP-based protocols cannot deliver the performance the entire data center needs. While they may be fine for the large majority of workloads, the 80/20 rule applies to I/O just as it does everywhere else: 80% of the data center's I/O resource consumption comes from less than 20% of the servers. That 20% may not be served well by IP-based protocols, and FC may need to stay in place to ensure high I/O service to mission-critical servers, making a mixed environment a permanent reality. As in the transition scenario above, a mixed environment may be more expensive than a single one. While FC's upfront costs may be higher than those of the IP-based protocols, its scalability is built in. Supporting one protocol that can address all of the data center's needs may be less expensive than supporting two, even if the second protocol saves some upfront expense. The goal with FC is to make sure that its support costs are controlled, its utilization maximized, and its performance optimized.
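The one-protocol-versus-two cost question above can be made concrete with a minimal back-of-the-envelope model. Every dollar figure and server count below is a hypothetical assumption for illustration, not vendor pricing: each protocol carries a fixed annual overhead (tooling, training, monitoring) plus a per-server operational cost, so a mixed environment pays the fixed overhead twice.

```python
# Sketch of the single- vs dual-protocol support-cost question.
# All figures are hypothetical assumptions, not real pricing data.

def annual_support_cost(protocols):
    """Sum per-protocol fixed overhead plus per-server operational
    cost for each (fixed, per_server, servers) tuple in use."""
    return sum(fixed + per_server * servers
               for fixed, per_server, servers in protocols)

# One FC fabric serving all 500 servers
fc_only = annual_support_cost([(120_000, 300, 500)])

# Mixed: FC retained for the I/O-heavy 20%, iSCSI for the other 80%
mixed = annual_support_cost([
    (120_000, 300, 100),  # FC tier for mission-critical servers
    (60_000, 180, 400),   # iSCSI tier, cheaper per server
])

print(f"FC only: ${fc_only:,}  Mixed: ${mixed:,}")
```

With these assumed numbers the mixed environment comes out more expensive despite iSCSI's lower per-server cost, because the duplicated fixed overhead outweighs the savings; with different assumptions the balance could of course tip the other way.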


The final, and potentially most important, consideration is to make sure that the reason for the upgrade is valid. In other words, has the current FC infrastructure really reached its limit? Storage Switzerland and SAN optimization specialist Virtual Instruments have repeatedly found the answer to be "no". Most large FC SAN network infrastructures are massively underutilized, often running at only 3% to 5% of capacity. This means the return on an investment that has already been made, money already spent, is nowhere near what it could be. If that is accurate, additional load can be placed on the existing infrastructure at little cost, addressing the most common complaint about FC SANs: their expense. And if a tool like Virtual Instruments' VirtualWisdom is used, the ongoing cost of operations can be significantly reduced as well. The hard costs of FC come from the host bus adapters, the switches and the cabling infrastructure. If even another 50% of load can be placed on that SAN, its cost disadvantage can disappear, making FC not only the highest performing choice but also the least expensive one.
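The utilization argument above reduces to simple division: the hard cost of the fabric is largely fixed, so the effective cost per utilized port falls sharply as more load is placed on it. The sketch below uses hypothetical figures (the fabric cost and port count are assumptions; only the 5% and "another 50%" utilization points come from the discussion above).

```python
# Illustration of how SAN cost per utilized port drops as utilization
# rises. The $500k fabric cost and 200-port count are hypothetical.

def cost_per_utilized_port(infrastructure_cost, port_count, utilization):
    """Effective cost of each actively used port, where utilization is
    the fraction of available capacity actually consumed."""
    return infrastructure_cost / (port_count * utilization)

san_cost = 500_000  # assumed sunk cost of the FC fabric
ports = 200

low = cost_per_utilized_port(san_cost, ports, 0.05)   # 5% utilization
high = cost_per_utilized_port(san_cost, ports, 0.55)  # +50 points of load

print(f"${low:,.0f} per utilized port at 5% vs ${high:,.0f} at 55%")
```

At the assumed 5% utilization each utilized port effectively costs $50,000; raising utilization by another 50 points drops that to roughly $4,545, an eleven-fold improvement with no new hardware spend.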


When considering a move away from an existing FC investment, it is important to factor in all of the elements of that decision and not be swayed by an IP protocol that merely seems less expensive and easier to use. In many cases the 'beauty' of IP is only skin deep. In part two of this article, we'll explore ways to maximize an existing FC investment by increasing storage and network utilization, which decreases per-GB costs.

Virtual Instruments is a client of Storage Switzerland

George Crump, Senior Analyst
