Bandwidth and performance

How will this solution get enough data to and from the cloud, fast enough to support my applications? Application performance is affected by available bandwidth, latency, and packet loss, and these have been a concern since storage was first put on a network. It’s certainly no surprise that this question persists when the network is a WAN instead of a LAN and storage is no longer in the data center.


Control

Control refers to the primary concern most people have when outsourcing anything: they’re letting someone else do a portion of their work, but they’re still responsible for the outcome. Outsourcing data storage and related services to a cloud provider requires a measure of trust before it can be an acceptable alternative to keeping storage in-house. Users want to know whether their data will be available when they need it, at the performance levels they expect; whether it will be protected from loss and unreasonable downtime; and whether it’s secure from compromise. Unlike other corporate functions, outsourcing data storage doesn’t just put the day’s output at risk; it puts existing corporate assets at risk. Moving a resource like storage to an outside party involves a certain amount of vulnerability, and a cloud solution must address these largely subjective issues.


Implementation

The devil’s in the details. How a cloud solution provider gets their product installed and running with a minimum of disruption to the business process is a key question that needs to be answered. Although unstructured data is growing, most applications still need or prefer access to block storage: SCSI DAS, iSCSI or FC. Some cloud storage providers offer file protocol interfaces, but most make their service accessible through APIs based on SOAP or HTTP. So, unlike consumer-grade ‘backup over the internet’ solutions, business-grade cloud storage isn’t a ‘plug and play’ proposition. There are also questions about how cloud storage will affect existing data protection processes and what kinds of storage services will be available. With features like snapshots, replication, deduplication and virtualization becoming standard on in-house storage systems, users wonder what their experience will be like when data and data management are outsourced.

The cloud as a storage solution is fairly intangible, even by IT standards, which may explain some of these questions. However, part of the appeal of cloud storage is that the infrastructure is outsourced, especially the ‘care and feeding’ of that infrastructure. The best solution may combine strategies: a little of the in-house model and a little of the outsourced model. For organizations that want the benefits of outsourcing their infrastructure but still have questions about how to do it, a hybrid cloud architecture, like that offered by StorSimple, may be the answer. StorSimple provides a hybrid storage solution that blends the best of on-premises storage with WAN optimization, security, and an on-demand cloud storage model.

A hybrid solution

A hybrid cloud solution includes an appliance or VM installed on-site at the client data center with internal/direct-attached or network-connected storage (disk drives and/or SSDs). It provides a ‘local’ storage tier with capacity for servers, plus an interface to the cloud provider’s infrastructure. Like a traditional storage system, this appliance can provide storage services such as volume management, deduplication, virtualization and snapshots, plus security and data protection features. The storage capacity presented to servers is actually tiered across internal disks and cloud storage.
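To make the deduplication idea concrete, here is a minimal sketch of content-based dedupe, assuming fixed-size chunking and an in-memory chunk store (real appliances typically use variable-size chunking and far more robust metadata; all names below are illustrative, not any vendor’s API):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real systems often chunk on content boundaries

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk once (keyed by its
    SHA-256 digest), and return the digest list that recreates the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # identical chunks are stored only once
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)
```

Writing 12 KB containing two identical 4 KB chunks results in a three-entry recipe but only two stored chunks, which is exactly the reduction that matters before data crosses the WAN.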

The hybrid appliance can address the bandwidth, latency, and packet loss issues by optimizing TCP WAN traffic and by reducing the data sent to the cloud with techniques like deduplication and compression. It also provides a local storage tier and migrates data to and from the cloud on the back end. Obviously, the appliance can’t store as much as a large, traditional storage array, but it can keep the most active subset of data local. In most storage applications the ’80/20 rule’ applies: most of the activity involves a relatively small percentage of the data. This means a local storage device only needs enough capacity to hold this working set in order to satisfy local applications and reduce concerns about performance.
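The working-set idea can be sketched as a small LRU cache in front of a cloud back end. The names below (LocalTier, the cloud dict) are assumptions for illustration, not any appliance’s actual design:

```python
from collections import OrderedDict

class LocalTier:
    """Toy tiering layer: hot blocks stay in a fixed-size local LRU cache;
    cold blocks are evicted to (a stand-in for) cloud storage."""

    def __init__(self, capacity_blocks: int, cloud: dict):
        self.capacity = capacity_blocks
        self.local = OrderedDict()   # block_id -> data, kept in LRU order
        self.cloud = cloud           # stand-in for the provider's object store

    def write(self, block_id: int, data: bytes):
        self.local[block_id] = data
        self.local.move_to_end(block_id)          # mark as most recently used
        while len(self.local) > self.capacity:    # evict the coldest block(s)
            cold_id, cold_data = self.local.popitem(last=False)
            self.cloud[cold_id] = cold_data

    def read(self, block_id: int) -> bytes:
        if block_id not in self.local:            # miss: fetch from the cloud
            self.write(block_id, self.cloud.pop(block_id))
        self.local.move_to_end(block_id)
        return self.local[block_id]
```

If the 80/20 rule holds, the cache stays large enough that almost all reads are served locally and only cold data ever waits on the WAN.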

Local storage is familiar

Having data reside on a local storage appliance gives users the ability to ‘touch and see’ their data and keep some of it on-site. A local copy also protects against the (admittedly unlikely) possibility of cloud infrastructure downtime and provides a measure of confidence amid the uncertainties of outsourcing, especially for the most mission-critical data. Hybrid appliances can also be configured to store data with multiple cloud providers or locations, further increasing the comfort level for customers who want this. In addition, hybrid appliances can provide local encryption with local key management, securing data before it’s transferred to the cloud without sharing the keys with the cloud provider.

Built-in cloud interface

Cloud storage is usually accessed via RESTful (Representational State Transfer) or SOAP (Simple Object Access Protocol) APIs. Hybrid cloud appliances handle the protocol translation between these cloud APIs and local application servers that need block storage, moving data between internal and cloud storage without the servers being aware. As on-site physical storage devices, they also provide a level of familiarity and a logical first step for a company that’s new to outsourcing data storage. As a local storage device, a hybrid appliance offers flexibility in how a company uses the cloud: it can designate volumes as ‘on-site only’ and send nothing to the cloud; it can store data on-site and send backups of that data to the cloud; or it can store volumes in the cloud with a subset cached locally. This flexibility allows organizations to ‘evolve’ their use of cloud storage as their environments’ needs change and their familiarity with the outsourced model increases.
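The protocol translation can be illustrated with a toy gateway that presents a block read/write interface while persisting data as objects. The object-per-block layout and the dict standing in for the object store are simplifications for the sketch; a real gateway batches, caches and compresses, and speaks the provider’s actual REST API:

```python
class BlockToObjectGateway:
    """Toy gateway: servers issue block reads/writes; the gateway maps each
    logical block address (LBA) to an object key and performs the equivalent
    of a REST PUT/GET against an object store (here, a plain dict)."""

    BLOCK_SIZE = 512

    def __init__(self, volume: str, object_store: dict):
        self.volume = volume
        self.store = object_store

    def _key(self, lba: int) -> str:
        return f"{self.volume}/block/{lba}"   # one object per logical block

    def write_block(self, lba: int, data: bytes):
        assert len(data) == self.BLOCK_SIZE
        self.store[self._key(lba)] = data     # like: PUT /<volume>/block/<lba>

    def read_block(self, lba: int) -> bytes:
        # like: GET /<volume>/block/<lba>; unwritten blocks read back as zeros
        return self.store.get(self._key(lba), b"\x00" * self.BLOCK_SIZE)
```

The application server sees ordinary block semantics; only the gateway knows the data actually lives as named objects behind an HTTP API.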

Data protection can actually be simplified with this technology, since point-in-time copies or clones of locally stored data can be created and sent to the cloud. These copies can then be restored using existing data protection software, without the need for on-site tape or disk backup infrastructure. It also puts customers in a position to use a cloud provider that offers virtual machines in addition to the storage service as a backup data center, rather than having to build their own.
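A point-in-time copy of this kind can be sketched as a copy-on-write snapshot whose frozen block map is then shipped to the cloud. This is a simplified illustration of the general technique, not StorSimple’s actual mechanism:

```python
class Volume:
    """Toy copy-on-write volume: a snapshot freezes the current block map;
    later writes replace entries in the live map only, so the snapshot
    remains an unchanging point-in-time image."""

    def __init__(self):
        self.blocks = {}      # lba -> bytes (treated as immutable once written)
        self.snapshots = {}   # snapshot name -> frozen block map

    def write(self, lba: int, data: bytes):
        self.blocks[lba] = data

    def snapshot(self, name: str):
        # copying the map is enough because block contents are never mutated
        self.snapshots[name] = dict(self.blocks)

    def send_snapshot(self, name: str, cloud: dict):
        # ship the frozen copy off-site; a restore is simply the reverse
        cloud[name] = dict(self.snapshots[name])
```

Because the snapshot captures the map rather than re-copying every byte, taking it is cheap, and the WAN transfer can happen in the background while the live volume keeps changing.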

Cloud storage as a technology is compelling, with benefits that are becoming better defined. But the adoption of outsourced primary and secondary storage into mainstream IT has been hindered by practitioners’ concerns about performance, availability, security, data protection and implementation. A hybrid cloud architecture may alleviate these concerns by putting a storage appliance on-site to provide local capacity, storage services and familiarity, while serving as an efficient interface to the cloud.

StorSimple is a client of Storage Switzerland

Eric Slack, Senior Analyst

- Hybrid storage solutions provide answers