Just as data centers did when they first deployed data deduplication, you should deploy FCoE where it will deliver the most return on investment. With deduplication, that meant using the technology as a target for backup data, where successive full backup jobs produced very high storage efficiencies; storing 50TB worth of backup data on 5TB of physical storage, for example, delivered a high return on investment.


Where FCoE could deliver the highest return on investment is in reducing the cable count going into physical hosts in a virtual server infrastructure, as well as in decreasing management complexity. The typical virtual host has at least two quad-port Ethernet cards and two dual-port Fibre Channel SAN cards. That is 12 cables per host, which, in a fully built-out rack, can add up to roughly 100 cables just for server connectivity. It becomes very challenging (and expensive) to track which cables go to which servers and to which storage arrays. In short, it becomes a time-consuming mess.


Alternatively, with FCoE in that same configuration, we can reduce the count to two cables per server and potentially 10 to 20 for the entire rack. A single cable pair, kept as a pair for redundancy, would carry the storage traffic alongside the regular network traffic. With the 10Gb Ethernet bandwidth available to FCoE, we can eliminate the need for quad-port Ethernet cards altogether. And of course, FCoE already has the storage protocol built in.
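

To make the cabling math concrete, here is a minimal sketch comparing the two approaches for a hypothetical rack; the eight-host rack size and the per-card port counts are illustrative assumptions, not figures from any specific deployment.

```python
# Rough cable-count comparison for one rack of virtualization hosts.
# Port counts mirror the example in the text; the eight-host rack is assumed.

HOSTS_PER_RACK = 8  # assumption for a fully built-out rack

def legacy_cables_per_host(quad_port_nics=2, dual_port_hbas=2):
    """Two quad-port Ethernet cards plus two dual-port FC cards = 12 cables."""
    return quad_port_nics * 4 + dual_port_hbas * 2

def fcoe_cables_per_host(cnas=2):
    """One cable per CNA; the redundant pair carries both IP and storage traffic."""
    return cnas

legacy_total = legacy_cables_per_host() * HOSTS_PER_RACK
fcoe_total = fcoe_cables_per_host() * HOSTS_PER_RACK

print(f"Legacy: {legacy_cables_per_host()} cables per host, {legacy_total} per rack")
print(f"FCoE:   {fcoe_cables_per_host()} cables per host, {fcoe_total} per rack")
# With 8 hosts this works out to 96 versus 16 cables, in line with the
# "roughly 100" and "10 to 20" figures above.
```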


For these reasons, when you are ready to start deploying FCoE, the best way to begin is a rack at a time, as virtualized server infrastructures are started or expanded. As a new rack is built out, equip each new virtualized host with two converged network adapters (CNAs) for redundancy. Then run FCoE-quality cables from those servers to a top-of-rack (TOR) switch. From that TOR switch, make the connections out to the main IP infrastructure as well as to the Fibre Channel storage infrastructure. The result: only two cables per server run down the rack, and the cable cluster is confined to the TOR switch itself.
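

As a sketch of that wiring plan, the snippet below models one rack with two CNAs per host, each cabled to a different TOR switch, and checks that every host keeps a redundant path. The switch and host names are hypothetical placeholders, not references to any particular product.

```python
# Minimal model of the per-rack wiring plan described above. Each host has two
# CNAs, each cabled to a different top-of-rack (TOR) switch; the TOR switches
# then uplink to the core IP network and the FC SAN. All names are placeholders.

rack = {
    "tor_switches": {
        "tor-a": {"uplinks": ["ip-core-1", "fc-san-fabric-a"]},
        "tor-b": {"uplinks": ["ip-core-2", "fc-san-fabric-b"]},
    },
    "hosts": {
        f"virt-host-{i:02d}": {"cna1": "tor-a", "cna2": "tor-b"}
        for i in range(1, 9)
    },
}

def check_redundancy(rack):
    """Confirm each host's two CNAs land on different TOR switches."""
    for host, cnas in rack["hosts"].items():
        if cnas["cna1"] == cnas["cna2"]:
            print(f"WARNING: {host} has both CNAs cabled to {cnas['cna1']}")
        else:
            print(f"{host}: redundant paths via {cnas['cna1']} and {cnas['cna2']}")

check_redundancy(rack)
```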


As this rack is built out, there may be a server that, for some reason, can't go into the converged infrastructure. For example, it may need dedicated 8Gb Fibre Channel performance or the performance of higher-end, dedicated IP network cards.


This is one reason it is important that your infrastructure providers remain fully committed to both traditional IP and Fibre Channel technologies. Brocade, for example, has a stated commitment to 16Gb FC and beyond. Mixing standard FC with FCoE should not be a problem, because FCoE and FC are compatible: the CNA in that one server can be removed and a specialized card added in its place. While that does mean a separate cable run to that particular server, the overall result in the rack is still quite beneficial. Flexibility as the environment changes is critical. Vendors like Brocade offer the choice of when to deploy FCoE: they continue to support traditional Fibre Channel while innovating in FCoE. The decision of when to move is yours, and with it comes the flexibility to use dedicated high-speed Fibre Channel (16Gb) or optimized Ethernet where each fits best.


For the initial rack build-out of this server group, the ideal is to put all the servers on the same type of infrastructure. The paybacks of this rollout strategy can be large. First, it greatly reduces cable count: as stated earlier, from 12 cables per host to 2. It should also reduce heat in each server, because there are only 2 cards instead of potentially 4 or more. Fewer cards mean better airflow, and better airflow means cooler server temperatures. That heat reduction, in turn, means lower data center temperatures and lower overall cooling costs.
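

To illustrate the heat argument, here is a rough per-host power comparison. The per-card wattages are purely assumed for illustration and will vary by vendor and model.

```python
# Rough per-host adapter power comparison. The wattages are assumptions chosen
# only to illustrate the direction of the savings; actual figures vary by
# vendor and card model.

WATTS_QUAD_PORT_NIC = 15   # assumed draw per quad-port Ethernet card
WATTS_DUAL_PORT_HBA = 12   # assumed draw per dual-port FC HBA
WATTS_CNA = 18             # assumed draw per converged network adapter

legacy_watts = 2 * WATTS_QUAD_PORT_NIC + 2 * WATTS_DUAL_PORT_HBA  # 4 cards
fcoe_watts = 2 * WATTS_CNA                                        # 2 CNAs

print(f"Legacy adapters: {legacy_watts} W per host")
print(f"FCoE CNAs:       {fcoe_watts} W per host")
print(f"Estimated reduction: {legacy_watts - fcoe_watts} W per host")
# Every watt not drawn inside the server is also a watt the cooling plant does
# not have to remove, which is where the facility-level savings come from.
```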


As the technology continues to improve and mature, in addition to cost reductions on the CNA and cabling side, the data center should also begin to see cost reductions from simplified management. These come from being able to manage both the IP infrastructure and the storage infrastructure from a single pane of glass, and from addressing the staffing shortfall common to both infrastructure teams.


Today, for example, there is typically a separate network infrastructure team and storage infrastructure team. Over time, these could be combined into a single team that is more cost effective, and potentially more efficient, than two separate teams.


If you are starting or expanding a server virtualization project right now, standard FC more than likely has some compelling advantages. As we discussed in our recent article "Using NPIV to Optimize Server Virtualization's Storage", Fibre Channel today allows companies like Brocade to leverage the combined NPIV support in their HBA and switch products to enhance storage control over the virtual environment. Because of FCoE's close compatibility with Fibre Channel, the techniques referenced in that article remain applicable when your FCoE implementations begin. Further, as FCoE continues to develop, it will allow even greater exploitation of enhanced Ethernet and should eventually lead to CNA-based quality of service.
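

As a conceptual illustration of the NPIV idea, the sketch below gives each virtual machine its own virtual WWPN behind a shared physical port, which is what makes per-VM zoning and LUN masking possible. The VM names and WWPN values are illustrative only and are not tied to any particular HBA, switch, or hypervisor.

```python
# Conceptual sketch of NPIV: one physical N_Port registers multiple virtual
# ports, each with its own WWPN, so the fabric can zone and mask storage per
# virtual machine. The WWPNs generated here are made-up placeholders.

import itertools

PHYSICAL_PORT_WWPN = "20:00:00:00:c9:aa:bb:01"  # hypothetical physical port

_next_id = itertools.count(1)

def allocate_virtual_wwpn(vm_name):
    """Hand out a distinct (made-up) virtual WWPN for a VM behind the shared port."""
    return f"28:00:00:00:c9:aa:bb:{next(_next_id):02x}"

vms = ["vm-web-01", "vm-db-01", "vm-app-01"]
fabric_logins = {vm: allocate_virtual_wwpn(vm) for vm in vms}

for vm, wwpn in fabric_logins.items():
    print(f"{vm}: virtual WWPN {wwpn} behind physical port {PHYSICAL_PORT_WWPN}")
# Because each VM presents its own WWPN to the fabric, zoning and LUN masking
# can follow the VM rather than the physical server it happens to run on.
```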


The net effect of planning for FCoE now is a cleaner infrastructure that runs cooler and is simpler to manage. Initially, there may not be hard-dollar cost savings, since a pair of CNAs may cost as much as the combined Ethernet and Fibre Channel cards they replace, and the higher-quality FCoE cabling may cost as much as the multiple low-cost Ethernet cables it eliminates. Eventually, however, the costs should be about the same.
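

A simple way to reason about that cost crossover is a per-host comparison like the one below. Every price in it is a hypothetical placeholder, since actual CNA, HBA, NIC, and cable prices vary widely and continue to fall.

```python
# Hypothetical per-host acquisition-cost comparison. Every price below is a
# placeholder meant only to show how the comparison is structured, not a real
# market price.

legacy_parts = {
    "quad_port_nic":  (2, 400),   # (quantity, assumed unit cost in dollars)
    "dual_port_hba":  (2, 800),
    "ethernet_cable": (8, 10),
    "fc_cable":       (4, 30),
}

fcoe_parts = {
    "cna":        (2, 1200),
    "fcoe_cable": (2, 60),        # higher-quality cabling, assumed pricier per run
}

def total_cost(parts):
    return sum(qty * unit for qty, unit in parts.values())

print(f"Legacy per-host cost: ${total_cost(legacy_parts)}")
print(f"FCoE per-host cost:   ${total_cost(fcoe_parts)}")
# With these placeholder numbers the two come out roughly even, which matches
# the point above: the early savings are in cabling, cooling, and management
# rather than in the adapters themselves.
```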


Planning for FCoE allows you to begin building an FCoE infrastructure that, over the next year or so, will become progressively less expensive on a per-rack basis as market adoption increases. By preparing for FCoE now, your data center will be better positioned for a more aggressive rollout, as prices come down and capabilities increase, than data centers that wait and don't think about FCoE at all for the next few years.