In two prior articles we discussed doing more in less space and doing more in less time. These articles focused on operational efficiencies. In the next two articles we will discuss doing more with less money and doing more with less power. These two articles will focus on improving capital efficiencies.


In the data center, doing more with less money requires being efficient across the entire environment. It takes a holistic approach and tool set, one that can examine servers, applications, virtual infrastructures and storage. Holistic tools that drive these efficiencies include products like Tek-Tools' Profiler Series of software solutions.



Doing More with Less Storage Money


Storage seems to be a never-ending expense. One of the most common areas of waste is the creation of ‘Free LUNs’: LUNs that have been created on the disk array but never assigned to a host. While most data centers have some form of storage assignment policy to prevent this, the undeniable fact is that it does happen, often to the surprise of the storage managers.


The common reaction is to point to thin provisioning as the solution. The challenge is that for many data centers the pressure is on to save money now, without spending precious capital on a new storage system. Additionally, less than 10% of the midrange-to-enterprise storage systems now installed have thin provisioning capabilities, and of those that do, 40% have hard-provisioned LUNs. The problem of Free LUNs is going to be with us for a while.


To catch these today, storage administrators have to manually audit the storage environment and compile the data into a spreadsheet. This can take weeks and is often out of date the moment it’s complete. The better alternative is to use a tool like Storage Profiler to run a ‘Free LUNs Report’ once per week that examines the storage systems for LUNs not allocated to any server. The storage manager can then take the appropriate steps to free up this capacity and postpone future storage purchases.
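
To make the idea concrete, here is a minimal sketch of what such a weekly check boils down to, assuming you can export two lists from your array or reporting tool: every LUN with its capacity, and the set of LUNs currently mapped to a host. The LUN names and sizes below are hypothetical placeholders; a product like Storage Profiler gathers this inventory automatically across arrays.

```python
# Hypothetical array export: LUN ID -> capacity in GB
all_luns = {"lun-001": 500, "lun-002": 250, "lun-003": 1024, "lun-004": 750}

# Hypothetical host-mapping export: LUN IDs currently presented to a server
mapped_luns = {"lun-001", "lun-003"}

# Any LUN that exists on the array but has no host mapping is a "Free LUN"
free_luns = {lun: size for lun, size in all_luns.items() if lun not in mapped_luns}

for lun, size in sorted(free_luns.items(), key=lambda item: item[1], reverse=True):
    print(f"{lun}: {size} GB never assigned to a host")
print(f"Total reclaimable capacity: {sum(free_luns.values())} GB")
```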


It is also critical that this tool be able to examine storage as it relates to virtual server environments like VMware. VMware is a great ‘waster’ of storage capacity. For example, VMware templates typically set a default size for the virtual server’s disk image. Often this image is set to some ‘safe’ number and never looked at again. While templates are a great tool for rapidly deploying servers, they can also rapidly consume disk capacity. With a management tool that provides real-time information about VMware storage utilization, you would know the moment over-allocated VM storage is forcing a premature storage purchase and could simply down-size those VMDKs instead of buying more storage.
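
The math behind spotting those candidates is simple once the data is collected. The sketch below assumes you can export, per virtual disk, the provisioned size and the space actually in use, figures a VMware-aware monitoring tool would report; the VM names, sizes and the 30% threshold are hypothetical.

```python
vm_disks = [
    # (vm name, provisioned GB, used GB) - hypothetical export
    ("web01", 200, 35),
    ("db02", 500, 410),
    ("app03", 200, 22),
]

THRESHOLD = 0.30  # flag disks using less than 30% of what was provisioned

for vm, provisioned, used in vm_disks:
    utilization = used / provisioned
    if utilization < THRESHOLD:
        reclaim = provisioned - used
        print(f"{vm}: {used} GB used of {provisioned} GB ({utilization:.0%}); "
              f"~{reclaim} GB could be reclaimed by down-sizing the VMDK")
```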


Another area where VMware causes storage waste is VM snapshots, one of the downsides of VMware’s ‘simplifying everything’ approach. For example, when databases need to be upgraded they are seldom backed up via their dump commands. It is easier for the IT administrator to power down the VM, take a snapshot of it and then power it back on. The whole process takes just a few minutes and is one of the hidden values of VMware.


The problem, however, is that the snapshots of the prior database versions are rarely pruned from storage. This often results in terabytes of stale snapshots, and a monitoring tool that exposes them can provide an instant ROI.
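
As a rough sketch of what that report looks like, assume a snapshot inventory with a creation date and size per snapshot (data a monitoring tool would collect across hosts); the entries and the 30-day age threshold below are hypothetical.

```python
from datetime import date

snapshots = [
    # (vm name, snapshot name, created, size in GB) - hypothetical inventory
    ("db01", "pre-upgrade-9.1", date(2009, 3, 2), 220),
    ("db01", "pre-upgrade-9.2", date(2009, 8, 15), 260),
    ("erp02", "pre-patch", date(2010, 1, 4), 90),
]

MAX_AGE_DAYS = 30
today = date.today()

stale = [s for s in snapshots if (today - s[2]).days > MAX_AGE_DAYS]
print(f"Snapshots older than {MAX_AGE_DAYS} days:")
for vm, name, created, size in stale:
    print(f"  {vm}/{name}: {size} GB, created {created}")
print(f"Reclaimable: {sum(s[3] for s in stale)} GB")
```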


Another way to save storage money is to determine whether old data can be deleted or moved to secondary storage instead of buying new capacity. The longer you can put off spending a dollar on additional capacity, the more capacity that dollar will buy in the future, thanks to Moore’s Law. When more capacity is needed it’s almost always an emergency; it has to happen now, and storage admins are sent scrambling. Ideally, you would delete the largest and least-accessed files to solve the problem quickly, which requires an IT management tool that can group categories of files by age and size.


For example, if you had a report of the 100 largest files over 1GB, you could quickly reclaim hundreds of gigabytes of space. You could also group by type; PST, audio and video files are all ideal candidates. For example, you could identify all the PST files on primary storage that belong to users no longer with the company, and then decide whether to delete them or move them to a more cost-effective tier of storage.
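
For a single file share, the ‘100 largest files over 1GB’ report can even be approximated with the standard library, as in the sketch below. The root path and list of interesting extensions are placeholders; a tool like Storage Profiler scans many servers at once and also ties files back to their owners, which a script like this cannot do.

```python
import os

ROOT = "/data"                     # hypothetical file share to scan
MIN_SIZE = 1 * 1024 ** 3           # 1 GB in bytes
TYPES_OF_INTEREST = {".pst", ".mp3", ".wav", ".avi", ".mov", ".mpg"}

candidates = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue               # skip files we cannot stat
        if st.st_size >= MIN_SIZE:
            candidates.append((st.st_size, st.st_atime, path))

# Report the 100 largest files over 1GB, biggest first
for size, atime, path in sorted(candidates, reverse=True)[:100]:
    ext = os.path.splitext(path)[1].lower()
    tag = "candidate" if ext in TYPES_OF_INTEREST else ""
    print(f"{size / 1024 ** 3:6.1f} GB  {path}  {tag}")
```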



Doing More with Less Virtualization Money


Server virtualization has been wildly successful for many data centers that have deployed it. Yet overall CPU utilization is often down after virtualization, because most of those projects migrated physical machines with relatively low resource requirements onto newly purchased physical hosts. Those virtualization hosts now need to be packed more densely, with more virtual systems. The challenge is identifying which of the remaining physical machines should be in the next wave of systems to be migrated.


This group is more resource-demanding; as a result, one runaway virtual machine could starve the others. As virtual machine density increases, more care is needed in choosing which servers are in the next migration wave and, more importantly, which hosts they should be migrated to.


With tools like Tek-Tools' VMware module, you can get real-time analytics on the candidate machines. The machines can then be prioritized and, most importantly, the impact of virtualizing them can be simulated, so you can see the effect on both the physical host and the new virtual machine itself. Denser virtualization saves money by retiring more physical hardware, which reduces power consumption, further streamlines IT administration and reduces the data center footprint.
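
At its simplest, that ‘what if’ placement math looks like the sketch below, assuming you have measured peak CPU and memory demand for each candidate and know the headroom left on each host. Real tools simulate against full time-series data rather than single peaks; the host names, candidates and figures here are hypothetical.

```python
hosts = {
    # host -> (free CPU GHz, free memory GB) - hypothetical headroom
    "esx01": (8.0, 24),
    "esx02": (5.0, 16),
}

candidates = [
    # (server, peak CPU GHz, peak memory GB) - hypothetical measurements
    ("sql-reporting", 4.5, 12),
    ("exchange-fe", 2.0, 8),
    ("build-server", 3.5, 16),
]

# Greedy check: place each candidate, largest first, on the first host
# that still has enough CPU and memory headroom for its peak demand.
for name, cpu, mem in sorted(candidates, key=lambda c: c[1], reverse=True):
    placed = False
    for host, (free_cpu, free_mem) in hosts.items():
        if cpu <= free_cpu and mem <= free_mem:
            hosts[host] = (free_cpu - cpu, free_mem - mem)
            print(f"{name}: fits on {host}")
            placed = True
            break
    if not placed:
        print(f"{name}: no host has headroom; defer or add capacity")
```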



Doing More with Less Backup Money


The backup infrastructure is a costly investment for most organizations, but with the proper analytics that investment can be delayed, or at least minimized. The first step is to analyze the nature of the data being backed up. If much of it has not changed, it can be archived and moved to a storage area that is not backed up as frequently. Another strategy with infrequently changing data is to do fewer full backups, especially if disk-to-disk backups are deployed and the number of tape mounts is less of an issue.
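
A first pass at that analysis can be as simple as measuring how much data has not been modified in a given window, as in the sketch below. The path and the 90-day cutoff are placeholders, and modification time is only a rough proxy for change; backup reporting tools refine this with job-level statistics.

```python
import os
import time

ROOT = "/data"                         # hypothetical backup source
CUTOFF = time.time() - 90 * 86400      # unchanged for 90+ days

unchanged_bytes = total_bytes = 0
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue
        total_bytes += st.st_size
        if st.st_mtime < CUTOFF:
            unchanged_bytes += st.st_size

if total_bytes:
    pct = unchanged_bytes / total_bytes
    print(f"{pct:.0%} of {total_bytes / 1024 ** 3:.0f} GB has not changed in 90 days "
          f"and is a candidate for archiving or less frequent full backups")
```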


Understanding and moving this data can reduce the ongoing investment in the backup-to-disk area as well. Without this understanding, the disk backup target has to be upgraded every time primary disk capacity is added.


Finally, metrics can be run on these targets to make sure they are being used to their full potential. Conventional wisdom is often to add more backup targets to the infrastructure to improve backup performance, but this can lead to diminishing returns. With the proper information, the IT administrator may be able to rearrange backup jobs to get the maximum potential out of the existing backup target devices.
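
One simple way to think about that rearrangement is spreading job sizes evenly across the existing targets before buying another one, as in the greedy sketch below. The job names, sizes and target names are hypothetical, and real scheduling also has to weigh backup windows and per-target throughput, not just volume.

```python
import heapq

jobs = [("fileserver", 900), ("exchange", 600), ("sql", 450),
        ("vm-images", 1200), ("home-dirs", 300)]   # hypothetical job sizes in GB
targets = ["b2d-target-1", "b2d-target-2"]

# Min-heap of (assigned GB, target); always give the next-largest job
# to the least-loaded target so no single target becomes the bottleneck.
load = [(0, t) for t in targets]
heapq.heapify(load)
plan = {t: [] for t in targets}

for name, size in sorted(jobs, key=lambda j: j[1], reverse=True):
    assigned, target = heapq.heappop(load)
    plan[target].append(name)
    heapq.heappush(load, (assigned + size, target))

for target, assigned_jobs in plan.items():
    total = sum(size for name, size in jobs if name in assigned_jobs)
    print(f"{target}: {total} GB -> {', '.join(assigned_jobs)}")
```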


In each category, storage, virtualization and backup, IT organizations can do more with less money if they are armed with the right tools, tools that provide accurate, real-time information about their environment.

George Crump, Senior Analyst