For many, the most attractive aspect of the cloud is its ability to expand the possibilities of what organizations — particularly those at the enterprise scale — can do. This extends to their data, the essential applications driving their operations, the development of new apps and much more.
The central concept behind that notion of ever-expanding, ever-evolving capabilities in the cloud is sometimes referred to as "cloud elasticity." Given the pace at which cloud computing and its possibilities are growing, it's critical for company leaders to develop a comprehensive understanding of this principle and what it can mean for their organizations.
What is cloud elasticity?
In a nutshell, cloud elasticity describes the ability of enterprises to add or remove cloud computing resources within their deployments as needed — based on shifting workload demands — without causing any downtime or other significant disruptions to the cloud service. Such resources include RAM, input/output bandwidth, CPU processing capability, and storage capacity. Automation built into the cloud platform drives elastic cloud computing.
Elasticity is typically delineated into two categories:
- Scale out/in elasticity: Adding or removing instances, such as servers or VMs, to expand or shrink the total capacity of the cloud infrastructure. This is also known as horizontal scaling.
- Scale up/down elasticity: Adding or removing resources, such as CPU and memory, within existing instances to adjust performance to workload needs. This is also known as vertical scaling.
Essentially, the difference between the two is adding more cloud instances as opposed to making the instances larger.
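To make that distinction concrete, here's a minimal Python sketch of the decision; the ClusterState class, thresholds, and numbers are illustrative assumptions rather than any provider's API.

```python
import math
from dataclasses import dataclass

# Hypothetical capacity model; a real deployment would pull these values from
# its provider's monitoring and instance catalog.
@dataclass
class ClusterState:
    instance_count: int         # how many identical instances are running
    vcpus_per_instance: int     # size of each instance
    avg_cpu_utilization: float  # 0.0-1.0, averaged across the cluster

def plan_scaling(state: ClusterState, target: float = 0.6) -> str:
    """Return a simplified scale-out/in decision for the cluster."""
    if state.avg_cpu_utilization > target:
        # Scale out: keep the instance size, add more instances.
        needed = math.ceil(state.instance_count * state.avg_cpu_utilization / target)
        return f"scale out to {needed} instances of the same size"
    if state.avg_cpu_utilization < target / 2 and state.instance_count > 1:
        # Scale in: drop an instance when the cluster is mostly idle.
        return f"scale in to {state.instance_count - 1} instances"
    # Scaling up/down would instead change vcpus_per_instance (a bigger or
    # smaller instance type) while leaving instance_count alone.
    return "no change"

print(plan_scaling(ClusterState(instance_count=4, vcpus_per_instance=8,
                                avg_cpu_utilization=0.9)))
# -> "scale out to 6 instances of the same size"
```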
Cloud elasticity vs. cloud scalability
Because these two terms describe similar behavior, they're often used interchangeably, but they aren't synonymous. Rather, they're intertwined: an elastic cloud must also be scalable, both up and out.
- Scalability in the cloud refers to adding or subtracting resources as needed to meet workload demand, while being bound by capacity limits within the provisioned servers hosting the cloud.
- Elasticity differs in that it isn't bound by those limits: if a server reaches full capacity and more resources are needed, additional capacity can be deployed by spinning up a virtual machine (VM), or several if need be. The VMs can then be spun down when demand settles.
Basically, scalability is about building up or down, as you would with, say, a Lego set. Elasticity, meanwhile, entails stretching the boundaries of a cloud environment, like you would stretch a rubber band, so end users can do everything they need even in periods of immensely high traffic. When traffic subsides, you release the resources, much as you would let the rubber band go slack. Achieving cloud elasticity means you don't have to meticulously plan resource capacities or spend time engineering the cloud environment to account for upscaling or downscaling.
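As a rough illustration of that rubber-band behavior, here's a minimal Python sketch of an elastic reconcile loop. The spin_up_vm, spin_down_vm, and current_demand functions are hypothetical stand-ins for a cloud provider's SDK and monitoring service, and the capacity figure is an assumption.

```python
import itertools
import random

# Hypothetical stand-ins for a provider SDK and monitoring; a real implementation
# would launch/terminate actual VMs and read demand from a metrics service.
_vm_ids = itertools.count(1)

def spin_up_vm() -> str:
    return f"vm-{next(_vm_ids)}"

def spin_down_vm(vm_id: str) -> None:
    print(f"released {vm_id}")

def current_demand() -> int:
    return random.randint(200, 3000)  # simulated requests per second

CAPACITY_PER_VM = 500  # assumed requests/second a single VM can handle

def reconcile(fleet: list[str]) -> list[str]:
    """Stretch or relax the VM fleet so capacity tracks demand."""
    demand = current_demand()
    needed = max(1, -(-demand // CAPACITY_PER_VM))  # ceiling division; keep at least one VM
    while len(fleet) < needed:    # demand spike: spin up more VMs
        fleet.append(spin_up_vm())
    while len(fleet) > needed:    # demand settles: spin the extras back down
        spin_down_vm(fleet.pop())
    return fleet

fleet: list[str] = []
for _ in range(3):
    fleet = reconcile(fleet)
    print(f"handling current demand with {len(fleet)} VM(s)")
```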
All of the major public cloud providers, including AWS, Google Cloud, and Microsoft Azure, offer elasticity as a key value proposition of their services. Typically, it occurs automatically and in real time, which is why it's often called rapid elasticity. In the National Institute of Standards and Technology (NIST) formal definition of cloud computing, rapid elasticity is listed as one of the five essential characteristics of any cloud.
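On AWS, for instance, this automation can be expressed as a declarative autoscaling policy. The sketch below uses the boto3 SDK to attach a target-tracking policy to a hypothetical EC2 Auto Scaling group named "web-asg", after which the platform itself adds and removes instances to hold average CPU near 60%; the group name and target value are assumptions for illustration.

```python
import boto3  # AWS SDK for Python

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: EC2 Auto Scaling launches or terminates instances on
# its own to keep the group's average CPU near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical, pre-existing group
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```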
How elasticity affects cloud spend
The capabilities of the cloud are invaluable to any enterprise. But at the scale required for even a "smaller" enterprise-level organization to make the most of its cloud system, the costs can add up quickly if you aren't mindful of them.
Scaling up your cloud instances when you don't really need to is an unnecessary expense that leaves you with idle resources; this is commonly called overprovisioning. Conversely, failing to scale up when business needs dictate that you should, known as underprovisioning, is also costly: if your infrastructure can't handle high levels of application traffic from cloud users, high latency and outages may follow. Those issues have to be mitigated by IT staff, pulling their attention away from more productive work, and if such failures become common, you might even lose customers.
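A simplified back-of-the-envelope comparison shows how quickly that gap adds up; the hourly rate and daily traffic profile below are made-up figures used only to illustrate the overprovisioning penalty.

```python
# Hypothetical numbers for illustration only: a fictional $0.40/hour instance
# and a daily traffic curve with a short evening peak.
HOURLY_RATE = 0.40                                        # cost per instance-hour (assumed)
hourly_instances_needed = [2] * 18 + [10] * 4 + [2] * 2   # 24 hours of demand

# Overprovisioning: run enough instances for the peak hour all day long.
overprovisioned_cost = max(hourly_instances_needed) * 24 * HOURLY_RATE

# Elastic: run only what each hour actually needs.
elastic_cost = sum(hourly_instances_needed) * HOURLY_RATE

print(f"Overprovisioned: ${overprovisioned_cost:.2f}/day")   # $96.00
print(f"Elastic:         ${elastic_cost:.2f}/day")           # $32.00
```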
When you have true cloud elasticity, you can avoid underprovisioning and overprovisioning. Moreover, the efficiency you're able to achieve in everyday cloud operations helps stabilize costs. Cloud elasticity enables software as a service (SaaS) vendors to offer flexible cloud pricing plans, creating further convenience for your enterprise.
Cloud elasticity in action: Major use cases
Elasticity benefits any enterprise that regularly experiences fluctuations in traffic and workload, which is to say most enterprises. But certain situations illustrate it particularly well:
- Seasonal business spikes: These are most common in retail. But regardless of sector, any organization that sees massive jumps in traffic for holidays or other specific times of year will significantly benefit from an elastic cloud deployment that allows for resources to be added during these upticks and scaled back when no longer needed.
- Media file hosting: Video streaming services like Netflix and Hulu experience constant fluctuations in workload due to viewer behavior changes. Cloud elasticity enables these services to stay online and minimize crashes.
- DevOps: Developing new apps can be a time of trial and error, during which raw computing demands and cloud traffic experience major surges. Stretching the capabilities of the cloud to accommodate these needs is an invaluable advantage.
- Growing number of data sources: When an enterprise creates new business units, these all require their own unique sources of data. Combined with existing data sources from established departments, that's quite a lot of traffic flowing into data warehouses and data lakes. Using a pay-as-you-grow elastic cloud model can prevent this expansion from impairing daily operations.
Increases in data sources, user requests, concurrency, and the complexity of analytics all demand cloud elasticity, and they also require a data analytics platform that's just as flexible. Before blindly scaling out cloud resources, which increases cost, you can use Teradata Vantage's dynamic workload management to ensure critical requests get critical resources to meet demand. Pairing effortless cloud elasticity with Vantage's workload management gives you the best of both and provides an efficient, cost-effective solution.
Check out our blog to learn more about how Teradata elasticity can help you improve performance even in the midst of rapid operational expansion, or contact us to learn about everything Vantage has to offer.