What is an IOV Architecture?


An IOV architecture typically has three key components. The first is an appliance, often called an I/O gateway, which is usually installed at the top of a server rack; it has inbound connections for the servers in the rack and outbound connections to the storage and network infrastructures. The second is the set of I/O cards installed inside the appliance, the cards that would traditionally have been installed in each of that rack’s servers. These cards are now shared between the servers in the rack, with the gateway managing, or ‘brokering’, access to them. The third is the connection from each server to the gateway, typically some form of high-speed link such as PCIe, Ethernet or InfiniBand.
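
To make the brokering role concrete, here is a minimal Python sketch of a gateway handing shared cards out to servers on request. The class, method and card names are hypothetical illustrations, not any vendor's actual interface:

# Minimal sketch of the brokering role an I/O gateway plays; all names
# here are hypothetical illustrations, not a real IOV product's API.

class IOGateway:
    """Top-of-rack appliance holding a pool of shared I/O cards."""

    def __init__(self, shared_cards):
        # e.g. ["10GbE-0", "10GbE-1", "8GbFC-0"]
        self.shared_cards = shared_cards
        self.assignments = {}  # card -> server currently using it

    def request_card(self, server, card_type):
        """Broker a free card of the requested type to a server."""
        for card in self.shared_cards:
            if card.startswith(card_type) and card not in self.assignments:
                self.assignments[card] = server
                return card
        raise RuntimeError(f"no free {card_type} card for {server}")

    def release_card(self, card):
        """Return a card to the shared pool when the server is done."""
        self.assignments.pop(card, None)


gateway = IOGateway(["10GbE-0", "10GbE-1", "8GbFC-0"])
nic = gateway.request_card("server-03", "10GbE")  # server-03 gets 10GbE-0
hba = gateway.request_card("server-07", "8GbFC")  # server-07 gets 8GbFC-0
gateway.release_card(nic)                         # card returns to the pool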



Why IOV?


The premise of IOV is that with the coming of 8Gb Fibre Channel, 10GbE and 10Gb Fibre Channel over Ethernet (FCoE), there will be more than enough bandwidth for most servers in the environment. Even with today's bandwidth, most infrastructures are designed for their peak performance demands, not their typical steady-state I/O levels. The result is that much of the bandwidth that is designed in and paid for goes unused.


The goal of IOV is to reduce the cost and increase day-to-day utilization rates by sharing that bandwidth across all the servers in a rack. IOV allows infrastructures to be built to handle the peak load on a rack basis instead of a server basis. Essentially the rack has enough excess bandwidth to handle the peak needs of any one or two servers. IOV enables all servers in each rack to tap this extra I/O capacity when needed, something that’s much more efficient than building this extra bandwidth into every server’s connection.
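
A back-of-the-envelope comparison shows why rack-level provisioning wins. The figures in this sketch are illustrative assumptions, not measured numbers:

# Per-server vs. per-rack peak provisioning; all figures are
# illustrative assumptions, not measured data.

servers_per_rack = 16
steady_state_gbps = 1    # typical per-server I/O load
peak_gbps = 8            # worst-case per-server burst
concurrent_peaks = 2     # servers that realistically burst at once

# Per-server design: every server is provisioned for its own peak.
per_server_total = servers_per_rack * peak_gbps

# Per-rack design via IOV: steady state for everyone, plus headroom
# for the one or two servers that peak at the same time.
per_rack_total = (servers_per_rack * steady_state_gbps
                  + concurrent_peaks * (peak_gbps - steady_state_gbps))

print(f"per-server design: {per_server_total} Gbps purchased")   # 128 Gbps
print(f"per-rack design:   {per_rack_total} Gbps purchased")     # 30 Gbps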


In addition to bandwidth sharing, IOV can be used to share other functionality. If specialized cards are needed in the environment, or if there is interest in sharing PCIe-based SSDs or SAS controllers attached to disk arrays, an IOV appliance can provide those capabilities to all the servers in the rack as well.



The $ Impact of IOV


Looking at the per-rack impact of IOV, there are significant savings to be gained from its implementation. Instead of buying two high-speed Ethernet cards and two Fibre Channel storage cards for each server (four cards per server), only one set of those cards needs to be installed in the IOV gateway. The cards then used to connect each server to the gateway are still high speed, yet typically less expensive than traditional network cards. This can amount to $3,000 to $5,000 in savings per server, plus the added flexibility of being able to provision bandwidth as needed throughout the rack.
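
As a rough sanity check on those numbers, here is a sketch of the per-rack math. The card prices are placeholder assumptions chosen to land within the $3,000 to $5,000 range above, not quoted figures:

# Rough per-rack cost comparison for the card consolidation described
# above; prices are placeholder assumptions, not quotes.

servers = 16
nic_cost, hba_cost = 800, 1500   # assumed per-card prices
fabric_adapter_cost = 400        # assumed cheaper gateway-attach card

# Traditional design: 2 Ethernet NICs + 2 Fibre Channel HBAs per server.
traditional = servers * (2 * nic_cost + 2 * hba_cost)

# IOV design: one shared card set in the gateway, plus one fabric
# adapter per server (gateway chassis cost ignored for simplicity).
iov = (2 * nic_cost + 2 * hba_cost) + servers * fabric_adapter_cost

print(f"traditional: ${traditional:,}  IOV: ${iov:,}  "
      f"savings: ${traditional - iov:,}")
# traditional: $73,600  IOV: $11,000  savings: $62,600 (~$3,900/server)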



The Blade Server Impact


The above benefit of consolidating network cards to save money is just one example. Where IOV is set to take off is in the blade server market. A blade server's biggest limitation is its inability to expand because of its lack of I/O slots. When blade server manufacturers begin to embed IOV connectivity onto the blade itself, blades will have access to all the connectivity that network cards provide to traditional servers today.



What's Needed?


The good news is that much of what is needed for IOV is available now. Systems can be deployed and are certainly ready for evaluation. The final piece that will make IOV truly valuable is Single Root I/O Virtualization, or SR-IOV, a PCI-SIG specification that allows a single card to be subdivided into multiple virtual functions that can be shared among the virtual machines on a host; its companion specification, Multi-Root IOV (MR-IOV), extends that sharing across multiple physical hosts. Storage Switzerland expects most major suppliers of I/O cards to support SR-IOV before the end of 2010.
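
For a sense of what carving one card into virtual functions looks like in practice, here is a minimal sketch using Python, assuming a Linux kernel that exposes the standard sriov_numvfs sysfs attribute for SR-IOV devices; the PCI address is a placeholder:

# Sketch of enabling SR-IOV virtual functions via Linux sysfs.
# The PCI address is a placeholder; requires root and SR-IOV hardware.

from pathlib import Path

device = Path("/sys/bus/pci/devices/0000:01:00.0")  # hypothetical SR-IOV NIC

total_vfs = int((device / "sriov_totalvfs").read_text())
print(f"card advertises up to {total_vfs} virtual functions")

# Carve the physical card into 8 virtual functions, each of which can
# be handed to a different virtual machine.
(device / "sriov_numvfs").write_text("8")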


IOV has the same potential as other forms of virtualization to drive out costs by increasing resource utilization while improving flexibility. From the data center view, at a minimum, IOV is something that should be part of infrastructure planning meetings today. And, for many environments, initial evaluations of the technology should be considered this year.

George Crump, Senior Analyst

 
