Solving the Cloud’s Elasticity Problem

New factors such as containerization require more flexible storage in the cloud.

A wide variety of customers with a broad range of requirements and application profiles use public clouds. Because those requirements are so diverse, it has historically made little sense to charge customers for storage as a single service. Instead, public cloud providers offer multiple storage services with different features suited to different needs, including block storage, file storage, and object stores, among others.

These have typically been siloed -- each made available separately with little to no synergy between them -- so users would have to choose the tier or type of storage when setting up their installation. Some data would be stored in hot storage, other data would be put in cold storage, and so on.

Nor do private clouds avoid the problem. An organization that owns, operates, and uses all of the resources in a private cloud will focus mainly on overall efficacy and global optimization of resources. Manually configuring which individual services to use for an application is unnecessary overhead. Similarly, breaking the infrastructure down into individually managed pools reduces resource efficiency and increases complexity. Infrastructure and applications should instead be managed through quality of service (QoS), classes of service, and APIs, with everything else automated and globally optimized.
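
To make the class-of-service idea concrete, here is a minimal sketch in Python. The service classes, the QoS figures, and the provision function are hypothetical rather than any particular vendor's API; the point is simply that an application asks for a class of service and the platform decides how to satisfy it.

```python
from dataclasses import dataclass

# Hypothetical classes of service; in practice these would be defined
# centrally by the platform operator, not by each application team.
CLASSES_OF_SERVICE = {
    "gold":   {"min_iops": 20_000, "max_latency_ms": 1,  "replicas": 3},
    "silver": {"min_iops": 5_000,  "max_latency_ms": 5,  "replicas": 2},
    "bronze": {"min_iops": 500,    "max_latency_ms": 50, "replicas": 1},
}

@dataclass
class VolumeRequest:
    name: str
    size_gib: int
    class_of_service: str   # the only storage decision the application makes

def provision(request: VolumeRequest) -> dict:
    """Resolve a class-of-service request into concrete QoS targets.

    A real control plane would place the volume on whichever pool can meet
    these targets and rebalance it later; this sketch only returns the spec.
    """
    qos = CLASSES_OF_SERVICE[request.class_of_service]
    return {"volume": request.name, "size_gib": request.size_gib, **qos}

# The application never names a backend, array, or tier.
print(provision(VolumeRequest(name="orders-db", size_gib=500, class_of_service="gold")))
```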

The Need for Greater Elasticity

Once you have this siloed approach in place, you start to encounter issues. What do you do when something that was in cold storage suddenly “heats up”? How do you move that data? Does it have to be done manually? Is that going to take a long time?

What’s needed is an elastic, self-adapting storage flow that pools all the different tiers into one coherent storage pool and then handles the management chores for you. This is elastic computing. Elastic computing enables organizations to meet their changing storage needs by adjusting the amount of processing, memory, and storage resources a system uses. Elasticity frees IT professionals from concerns about engineering for peak usage or capacity planning.

Elastic computing is usually controlled by system monitoring tools to provide the required resources at any given time without disrupting operations. Having this ability in the cloud enables organizations to save money by paying only for the capacity and resources they need.
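
As a rough illustration of how monitoring-driven tiering might work, the sketch below promotes data whose recent access rate crosses a threshold and demotes data that has gone idle. The thresholds, tier names, and the shape of the access log are illustrative assumptions, not a real product's interface.

```python
import time

HOT_THRESHOLD = 100   # reads in the last hour that make an object "hot" (illustrative)
COLD_THRESHOLD = 1    # reads in the last hour below which an object is demoted

def retier(access_log: dict[str, list[float]], placement: dict[str, str]) -> list[tuple[str, str]]:
    """Return (object_id, target_tier) moves based on the last hour of accesses."""
    one_hour_ago = time.time() - 3600
    moves = []
    for obj, accesses in access_log.items():
        recent = sum(1 for t in accesses if t >= one_hour_ago)
        if recent >= HOT_THRESHOLD and placement.get(obj) != "hot":
            moves.append((obj, "hot"))    # cold data has "heated up" -- promote it
        elif recent <= COLD_THRESHOLD and placement.get(obj) != "cold":
            moves.append((obj, "cold"))   # idle data no longer needs the hot tier
    return moves

# Example: an object sitting in cold storage that suddenly received 150 reads
log = {"report-q3": [time.time() - i for i in range(150)]}
print(retier(log, placement={"report-q3": "cold"}))   # [('report-q3', 'hot')]
```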

Automation is a primary benefit of elastic computing. There’s no need to depend on human IT staff monitoring storage needs 24/7, which again saves money. Elasticity is more efficient, side-stepping service interruptions and slowdowns to provide continuous availability of services.

Adding Containers to the Mix

An additional factor is the rapid adoption of containerization. Gartner predicts that by 2022, over 75 percent of global organizations will be running containerized applications in production. However, public cloud storage was designed before the advent of containerization/Kubernetes -- it was designed for legacy applications.

If you are using Kubernetes, there are two ways of dealing with this: container-ready or container-native. A container-ready or container-attached approach uses existing traditional storage -- typically external arrays -- attached to the Kubernetes environment using software shims. This may make sense as an initial bridge to the container environment, and it can be effective for those experimenting with containerization or planning to do so at a smaller scale. However, it also adds friction that limits the full benefits of Kubernetes’ inherent agility, reduced operational complexity, and lower cost. The separate storage and data management required for container-attached approaches can’t scale, can’t adapt, and can’t respond at the speed required for Kubernetes. It may work acceptably in a very small cluster of two or three nodes, but as soon as you start to scale, you’ll realize the limitations.
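
For concreteness, here is what the container-ready (container-attached) pattern often looks like in practice: a LUN is carved out on an external array ahead of time and registered by hand with the cluster as a PersistentVolume. This sketch uses the Kubernetes Python client; the array portal, IQN, and LUN number are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A pre-provisioned LUN on an external array, registered manually with the
# cluster -- the administrative "shim" step that container-attached storage needs.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="array-lun-42"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        iscsi=client.V1ISCSIPersistentVolumeSource(
            target_portal="10.0.0.10:3260",             # placeholder array portal
            iqn="iqn.2021-01.com.example:array.lun42",   # placeholder target IQN
            lun=42,
            fs_type="ext4",
        ),
    ),
)
core.create_persistent_volume(pv)
```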

The other option is container-native storage, which is built for the Kubernetes environment. It overcomes those limitations by allowing application data to move at the speed the application requires. Full volumes, regardless of size, can be transported across clouds or across the world in under a minute, enabling rapid movement of data to and from any cluster and giving users instant access to data at any point.
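
By contrast, with container-native storage the application simply claims capacity against a storage class and the platform's CSI driver provisions and places the volume dynamically. Again a sketch with the Kubernetes Python client; the storage class name "container-native-fast" is hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The application only claims capacity against a class; the container-native
# (CSI) driver behind the class provisions the volume and follows the workload.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="container-native-fast",   # hypothetical CSI-backed class
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```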

Storage Today

The public cloud offers tremendous economies of scale and convenience, but the downside has been a siloed approach to the diverse storage requirements each organization has -- until the advent of elastic computing. This model pools the various tiers of storage into one coherent storage pool, creating greater efficiency and cost savings. With containerization now a factor, elasticity becomes a top priority. Today’s changing storage demands require a flexible, elastic solution that can cover all the bases and address all of an organization’s needs.

 

About the Author

Or Sagi is chief architect and vice president, technology, at ionir. Or is responsible for the architecture, design, and core technology of Ionir’s platform. Prior to Ionir, Or was chief system architect for Reduxio’s storage platform, and principal engineer with IBM XIV, where he founded the NAS development team. Other highlights include developing the OS infrastructure for Texas Instruments’ next-generation cable modems, developing PCI device pass-through virtualization for Qumranet (acquired by Red Hat), implementing Infiniband-based message passing systems at Voltaire, and introducing high availability into the Exanet distributed file system. Or holds several patents and patents-pending in the areas of storage architecture, encryption, and data consistency in distributed systems. You can reach the author via LinkedIn.

