In the past several years, data center admins have steadily deployed virtualization software onto their servers. My own company's IT department (EMC IT) actually tracked the percentage of virtualized versus non-virtualized servers. They assigned thresholds to these percentages that, when crossed, form a three-phase guide to implementing IT-as-a-Service. I wrote two articles on this approach, which can be found here.
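To make the threshold idea concrete, here is a minimal sketch. The actual percentages EMC IT used are not given above, so the 30% and 70% cutoffs below are hypothetical placeholders, as are the phase descriptions in the comments:

```python
def it_as_a_service_phase(virtualized_pct: float) -> int:
    """Map a data center's virtualization percentage to a phase (1-3).

    The cutoffs (30 and 70) are illustrative only; EMC IT's real
    thresholds are described in the articles referenced above.
    """
    if not 0 <= virtualized_pct <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if virtualized_pct < 30:   # mostly physical servers
        return 1
    if virtualized_pct < 70:   # mixed physical/virtual estate
        return 2
    return 3                   # mostly virtualized

print(it_as_a_service_phase(85))  # → 3
```

The point is simply that crossing a measurable threshold triggers the next phase of the journey, rather than relying on a subjective judgment call.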
Rarely will a data center achieve 100% server virtualization. Some workloads are simply too hard to virtualize. Other workloads have security considerations (e.g., compliance or availability requirements) that are easier to satisfy in a more static, non-virtualized environment.
As a result, many data centers have at least two discrete pools of compute resources to satisfy two kinds of workloads:
- Transitory workloads: workloads whose demand varies over time. They have unpredictable peaks and valleys in their use of compute resources, which require a more elastic infrastructure. Transitory workloads are often isolated from fixed workloads within a data center (sometimes physically).
- Fixed workloads: applications that run in this environment have more well-defined computing needs, their performance maximums are often bounded, and they are typically more "locked-down" from a security standpoint.
These two pools of compute resources may be completely separated from each other, or in some cases united by a SAN. In a shared-SAN case, the transitory workloads may connect via iSCSI (for improved flexibility during mobility events), while the fixed workloads often connect via Fibre Channel.
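The two-pool split described above amounts to a simple placement policy. The sketch below is illustrative only, with hypothetical workload attributes and names; real placement decisions involve far more inputs than two booleans:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    variable_demand: bool  # unpredictable peaks and valleys?
    locked_down: bool      # strict compliance/security posture?

def assign_pool(w: Workload) -> tuple[str, str]:
    """Return (compute pool, storage transport) for a workload.

    Transitory workloads land in the elastic pool on iSCSI, which
    tolerates mobility events; everything else stays in the fixed
    pool on Fibre Channel.
    """
    if w.variable_demand and not w.locked_down:
        return ("transitory", "iSCSI")
    return ("fixed", "Fibre Channel")

print(assign_pool(Workload("web-frontend", True, False)))  # → ('transitory', 'iSCSI')
print(assign_pool(Workload("payments-db", False, True)))   # → ('fixed', 'Fibre Channel')
```

Even in this toy form, the policy makes the management problem visible: every new workload attribute (analytics? cloud-burstable?) multiplies the placement cases.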
My EMC colleague Ken Durazzo has spent some time educating me on these use cases. He uses the following diagram to describe this scenario:
Although the picture seems simple, managing this configuration is not, and securing it is harder still, especially if the transitory workloads depend upon the fixed workloads. In general, however, this scenario can be and has been made to work.
Three additional technology trends, however, will require careful navigation:
- Big Data workloads often result in a separate analytic silo, completely isolated from both the transitory and fixed pools. Many data centers now have three separate silos, each managed with disparate tools.
- Software-defined networks (SDNs) are attractive for their programmability, but where, how, and when are they introduced into this environment?
- Public clouds can provide cost-effective, value-added services to augment this type of data center, but how are they managed and secured?
I'll discuss the options for navigating this real-world situation as the industry progresses toward the vision of technologies such as software-defined storage and the software-defined data center (SDDC). Over the next few weeks I hope to publish some articles for just such a discussion.