As I've watched and written about the evolution of cloud infrastructures, I tend to fall back on the NIST definition of cloud computing, which highlights five essential characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
From an industry perspective, some of these essential characteristics were introduced (and have matured) earlier than others. The concept of resource pooling, for example, has been maturing for several years, while characteristics such as rapid elasticity (e.g., the "cloudbursting" use case) still have more maturing to do.
Today's VMAX Cloud Edition release speaks to the continued evolution of the "on-demand self-service" characteristic. While there are many features to consider in the new release, I'd like to focus on a tenant using the VMAX and the on-demand self-service capabilities that are now available.
A tenant that understands their application workload can usually map that workload to a catalog of choices that emphasizes either performance or capacity. The pre-cloud model required the manual provisioning of switch zoning, port configuration, cache settings, LUN creation, and security masking (among other things). These settings are all important to meet the needs of a given workload. Maintaining and monitoring this configuration in the face of other tenants sharing the infrastructure (each with a different workload) is hugely challenging.
Let's focus on just one area of manual provisioning that can be highly challenging: assigning disk tiers to specific workloads.
- A high-capacity workload (e.g., long-term archiving) will favor the provisioning of a large number of SATA disks (higher capacity at lower performance).
- A high-performance workload (e.g., HPC or specialty applications) will instead lean toward a large deployment of flash.
These two use cases sit at opposite ends of the performance/capacity spectrum, and for them, on-demand self-service is indeed straightforward.
As the capacity and performance trade-offs blur, however, the choices become less obvious. An OLTP workload, for example, may not need as much flash as HPC but will certainly require some level of FC/SAS capacity, and certain workloads may even care about the specific kind of FC/SAS (e.g., 10K or 15K RPM). A collaboration workload will similarly want to avoid an all-SATA configuration and mix in some level of FC/SAS.
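These trade-offs can be pictured as a small service catalog mapping workload types to tier mixes. The workload names and percentages below are purely illustrative assumptions on my part, not actual VMAX Cloud Edition catalog values:

```python
# Hypothetical service catalog: workload type -> fraction of capacity per tier.
# All numbers are illustrative, not actual VMAX Cloud Edition values.
CATALOG = {
    "long_term_archive": {"flash": 0.00, "fc_sas": 0.05, "sata": 0.95},
    "hpc":               {"flash": 0.80, "fc_sas": 0.20, "sata": 0.00},
    "oltp":              {"flash": 0.25, "fc_sas": 0.60, "sata": 0.15},
    "collaboration":     {"flash": 0.05, "fc_sas": 0.45, "sata": 0.50},
}

def tier_allocation(workload: str, capacity_gb: int) -> dict:
    """Split a requested capacity across disk tiers per the catalog entry."""
    mix = CATALOG[workload]
    return {tier: round(capacity_gb * pct) for tier, pct in mix.items()}
```

For example, `tier_allocation("oltp", 1000)` splits 1 TB into 250 GB of flash, 600 GB of FC/SAS, and 150 GB of SATA, while the same request for a long-term archive lands almost entirely on SATA.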
Let's put it this way: VMAX has long been quite advanced in its capability to deliver resource pooling in a cloud-like manner (I wrote a post about managing VMAX at scale nearly four years ago). Flash, FC/SAS, and SATA drives are grouped into pools, and portions of these pools can be manually mapped to any given application (tenant). The tenant can then directly measure the quality of service delivered by those settings and dynamically rebalance the allocation across the different resource pools.
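The pool model just described can be sketched in miniature: a pool per disk tier, carved into per-tenant slices that can grow or shrink as the measured service quality dictates. The class and method names here are my own illustration, not VMAX internals:

```python
class TierPool:
    """A pool of one disk tier (e.g. flash), carved into per-tenant slices (GB).

    Illustrative sketch of the pooling concept only; not how VMAX
    implements it internally.
    """
    def __init__(self, tier: str, total_gb: int):
        self.tier = tier
        self.total_gb = total_gb
        self.slices = {}  # tenant -> allocated GB

    def free_gb(self) -> int:
        return self.total_gb - sum(self.slices.values())

    def allocate(self, tenant: str, gb: int) -> None:
        """Grow a tenant's slice of this tier, if capacity allows."""
        if gb > self.free_gb():
            raise ValueError(f"pool {self.tier}: only {self.free_gb()} GB free")
        self.slices[tenant] = self.slices.get(tenant, 0) + gb

    def release(self, tenant: str, gb: int) -> None:
        """Shrink a tenant's slice, e.g. when rebalancing toward another tier."""
        self.slices[tenant] = max(0, self.slices.get(tenant, 0) - gb)
```

Rebalancing a tenant from SATA toward flash is then just a `release` on one pool followed by an `allocate` on another, which is exactly the kind of manual bookkeeping the tenant is exposed to.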
I am using disk drives as a way of highlighting the complexities that the tenant is exposed to. As mentioned previously, I have left out similarly complex configuration choices (such as cache and zoning).
How does VMAX Cloud Edition bridge the gap between tenant workload and the details of the computing infrastructure? The improvement comes via the introduction of a service catalog, which leaves two provisioning choices:
- The use of traditional management tools to configure LUNs, cache, and pathing on the traditional VMAX 10K, 20K, and 40K arrays.
- A new self-service portal or API to provision choices from a service catalog via the VMAX Cloud Edition.
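To give a feel for choice #2, here is a sketch of the kind of request a self-service portal or script might issue against a provisioning API. The endpoint path, field names, and service-level identifiers are all hypothetical; the actual VMAX Cloud Edition API may look quite different:

```python
import json

# Hypothetical self-service provisioning request. The endpoint path and
# field names are illustrative, not the actual VMAX Cloud Edition API.
def provision_request(tenant_id: str, service_level: str, capacity_gb: int) -> dict:
    """Build the REST request a portal might POST to provision storage."""
    return {
        "method": "POST",
        "path": f"/api/v1/tenants/{tenant_id}/volumes",
        "body": json.dumps({
            "service_level": service_level,  # a catalog entry, e.g. "oltp"
            "capacity_gb": capacity_gb,
        }),
    }
```

The point of the catalog is visible in the payload: the tenant names a service level and a capacity, and the zoning, cache, LUN, and tiering details are resolved behind the API rather than provisioned by hand.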
In a future post I will dive a bit deeper into the details of choice #2: provisioning from a service catalog using the VMAX Cloud Edition. In the meantime, more detail on today's release can be found here.
Steve
Twitter: @SteveTodd
EMC Fellow