Expectations


As server virtualization has grown, so have the expectations used to justify its expense. But the objective of consolidating individual physical servers and reducing costs must be balanced against the expanded demands consolidation places on the networked storage pool if performance is to be maintained. As a data point, VMware recently stated that over 80% of installations are deployed with Fibre Channel SANs and that over 90% of performance problems are related to storage I/O issues. Clearly, the traditional knobs and dials a server admin can tweak are not sufficient to address most performance issues. As the virtual server infrastructure grows and becomes more complex, this balancing act gets harder.



The perfect storm


Server virtualization has been made simple for the end user, but under the covers it’s a complex process, and managing the large, dynamic storage I/O infrastructure it requires can be difficult. The traditional storage management processes IT administrators are used to are not always sufficient. Virtual server infrastructures include a number of factors that combine to create a very challenging environment for performance and resource optimization - a perfect storm, so to speak.


Virtualization makes it easy (some say too easy) to create server instances. This can result in VM sprawl, which increases management tasks and the overall noise level that accompanies day-to-day problems. The rising VM count also increases the density of server instances on each physical host, which can cause contention for shared storage and SAN resources that didn’t exist in the physical world.
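To put illustrative numbers on that contention (these figures are hypothetical, for scale only): a host with a single 4Gb Fibre Channel HBA has roughly 400MB/s of bandwidth to storage. Ten VMs averaging 30MB/s of storage I/O each leave comfortable headroom; consolidate twenty such VMs onto the same host and the 600MB/s of aggregate demand oversubscribes the link by 50%, producing the queuing and contention that none of those workloads experienced on dedicated physical servers.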


In the new world of virtual servers, resources are abstracted from the server instances themselves. Unlike the ‘old world’, where a storage admin could walk through the data center and see the storage attached to each application server, virtualization hides these connections, making it easy to lose track of which resources belong to which consumer. The dynamic nature of allocation and the faster setup and decommissioning of server instances also lead to potential waste and inefficiency as server and storage admins struggle to keep up.


Another issue is the lack of resource ‘headroom’ in most virtualized data centers. Since a primary objective of virtualization is to increase utilization and save money, ‘spare’ resources are continuously targeted for reallocation, something made easier by the ability to move both VMs and storage around. This drive to squeeze every available resource out of the common pool quickly evaporates that headroom, severely reducing the margin for error needed to maintain the critical allocation / consumption balance.


The virtualized server environment is complex, and maintaining application performance while effectively managing the resources it consumes is a job that requires more than a traditional ‘IT generalist’. Like database applications, networking and storage, the virtual server infrastructure needs a specialist. This doesn’t mean adding headcount to the IT organization; it’s more a case of upgrading the skill set of an existing senior server or storage administrator.


Unfortunately, IT organizations don’t typically have the information this specialist would need to effectively optimize the virtualized infrastructure. And, unlike the physical world, where storage, server and network management are often handled by separate teams, a cross-domain approach is needed. The dynamic, interrelated nature of the virtual environment requires managers to have access to all the resources that relate to this infrastructure. Becoming a specialist in the performance and utilization of the virtualized server and SAN I/O infrastructure will require this new approach, and a different set of tools.



The Virtualization Performance Specialist


This would be a cross-domain role that interfaces with the storage, networking and server teams on a day-to-day basis. This specialist would understand the virtual infrastructure, the unique challenges it places on resource management and how to troubleshoot performance issues in the virtual domain effectively. They would also be able to articulate what information is needed to do this job and which tools can provide that data fast enough to stay ahead of this dynamic environment. In most organizations, the existing monitoring solutions are not designed to provide the data required; instead they deliver ‘stove-piped’ data with little or no cross-correlation capability.


To begin with, this person would need real-time I/O information. Most traditional monitoring tools poll devices and collect data at intervals 5-15 minutes apart - or longer. Solutions like VirtualWisdom from Virtual Instruments, which uses Fibre Channel network taps to gather real-time information about the infrastructure, could be the cornerstone management platform for this new IT focus. Instead of snapshots of each element, optimization requires dynamic information about the interaction between the elements contributing resources to the virtual environment. The ability of these physical- and virtual-layer tools to isolate server/storage pairs is essential to identifying root causes and effectively troubleshooting issues like application latency in this abstract, dynamic environment.
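To make the difference concrete, here is a minimal sketch - in Python, with an entirely hypothetical record format and threshold, not VirtualWisdom’s actual schema - of why pair-level, tap-derived latency data matters. Grouping exchange completion times by server/storage pair isolates a spike to one host-to-array path, something a fabric-wide average polled every few minutes would smooth away:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical tap-derived records: (initiator, target, exchange
    # completion time in ms). The layout is illustrative only - it is
    # not VirtualWisdom's actual data model.
    records = [
        ("esx-host-01", "array-a:port-3", 1.8),
        ("esx-host-01", "array-a:port-3", 2.1),
        ("esx-host-02", "array-a:port-3", 1.9),
        ("esx-host-02", "array-b:port-1", 46.0),
        ("esx-host-02", "array-b:port-1", 52.5),
        ("esx-host-03", "array-b:port-1", 2.2),
    ]

    # Group latencies by server/storage pair so a spike on one pair
    # isn't averaged away across the whole fabric.
    by_pair = defaultdict(list)
    for initiator, target, ect_ms in records:
        by_pair[(initiator, target)].append(ect_ms)

    # Flag any pair whose worst-case latency crosses an (arbitrary,
    # illustrative) 30ms threshold - the kind of outlier that a
    # fabric-wide, interval-polled average would hide.
    for (initiator, target), ects in sorted(by_pair.items()):
        avg, worst = mean(ects), max(ects)
        flag = "  <-- investigate" if worst > 30.0 else ""
        print(f"{initiator} -> {target}: avg {avg:.1f} ms, max {worst:.1f} ms{flag}")

Note that in this made-up data the same array port serves another host with normal latency, which points the investigation at one server/storage path rather than at the device itself - exactly the isolation described above.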



Data independence


Monitoring tools that come from array vendors can provide a one-sided, device-specific view of the infrastructure. What’s needed is unbiased information, the kind that comes from vendor-independent monitoring platforms. Given that suppliers’ advice is often to ‘throw more spindles’ at a performance issue, it’s not surprising that most enterprise SAN utilization rates are in the 5-10% range. Data independence is needed for the Virtualization Performance Specialist to keep SAN and storage vendors honest and to identify the real problem devices. At the end of the day, expectations won’t be met unless performance and resource utilization improve and investments of time and money go down. The ‘full data path awareness’ that solutions like VirtualWisdom provide can give IT the transactional data needed to resolve application performance issues without buying more storage.


In a virtual server environment, maintaining the balance between minimal resource use and acceptable performance is a difficult task. The sheer number of VMs and the complexity of most virtual server infrastructures have created the need for a new focus in IT administration - the Virtualization Performance Specialist. For most larger IT organizations, maximizing virtualized application performance will require real-time information that’s simply not available from existing virtual server performance monitoring or SRM tools. They’ll need transaction-based data, collected from physical-layer network devices, about the latency between storage and application servers in order to resolve performance issues and improve resource utilization.

Virtual Instruments is a client of Storage Switzerland

Eric Slack, Senior Analyst