Several months ago I used a Data Loch Ness analogy to highlight a backup and recovery strategy for next-generation infrastructure. I used the image shown here on the left to stress that large amounts of data shouldn't be brought to the surface unnecessarily. The point of the article was straightforward: as data volumes increase, traditional backup and recovery (i.e., bringing data up to the backup server and back down to the protection storage) won't work anymore.
I made a similar point in a different article called How to Get the Wrong Answer. That article described an analytic use case: don't bring your data up to the surface where your algorithms live; bring your algorithms down into the data tier instead.
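To make that concrete, here's a minimal sketch in Python (using the built-in sqlite3 module; the database, table, and column names are invented for illustration) contrasting the "bring the data to the algorithm" anti-pattern with pushing the computation down to the data tier:

```python
import sqlite3

# Hypothetical database and table, created here so the sketch runs standalone.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (reading REAL)")
conn.executemany("INSERT INTO measurements VALUES (?)", [(1.0,), (2.0,), (3.0,)])

# The "wrong answer" path: haul every row up to the application tier,
# then compute the average there. Data movement grows with table size.
rows = conn.execute("SELECT reading FROM measurements").fetchall()
avg_in_app = sum(r[0] for r in rows) / len(rows)

# The push-down path: ship the algorithm (here, a simple aggregate) to the
# data tier and bring back only the one-row answer.
avg_in_db = conn.execute("SELECT AVG(reading) FROM measurements").fetchone()[0]

assert avg_in_app == avg_in_db == 2.0
```

Both paths produce the same answer; the difference is how much data crosses the wire to get it.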
Both of these points highlight a new constraint that data center architects have to deal with: minimizing data movement.
There is a third scenario that should also be avoided at all costs: placing an application on the wrong infrastructure. Of the three, this last one is probably the most critical, for two reasons:
- Data movement takes a lot of time, requires manual effort, and is error-prone. The cost of the operation (and of recovering from potential mistakes) is an expense that could have been avoided, and it hurts the profitability of the overall business.
- If the application data is placed on the wrong infrastructure, the business has either over-invested (e.g., putting all applications on mission-critical infrastructure) or under-invested (e.g., putting critical applications on slow, unreliable infrastructure).
What's the best way to ensure that the application is running on the right infrastructure? And if the business wants to run a specific application in a public cloud environment, which provider is the right one?
In recent posts I have introduced EMC's Adaptivity solution that effectively answers these questions and can move a business towards an IT Maturity Model that automatically puts the apps in the right place at the right time (and keeps them there).
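Adaptivity's model is business-based and far richer than anything I can show in a few lines, but as a toy illustration of the over/under-investment trade-off, consider a placement function that picks the cheapest infrastructure tier that still meets an application's availability requirement (every tier name, attribute, and number below is invented for this sketch):

```python
# Toy illustration only -- not Adaptivity's actual model.
TIERS = {
    "mission_critical": {"availability": 0.99999, "cost_per_month": 5000},
    "business_standard": {"availability": 0.999, "cost_per_month": 1200},
    "best_effort": {"availability": 0.99, "cost_per_month": 300},
}

def cheapest_adequate_tier(required_availability):
    """Pick the lowest-cost tier that still meets the availability need,
    avoiding both over-investment and under-investment."""
    adequate = {name: tier for name, tier in TIERS.items()
                if tier["availability"] >= required_availability}
    return min(adequate, key=lambda name: adequate[name]["cost_per_month"])

print(cheapest_adequate_tier(0.999))  # -> "business_standard"
```

The point of the sketch: putting every app on "mission_critical" over-invests, and putting a 0.999-availability app on "best_effort" under-invests. The real work is in deriving those requirements from the business, which is where Adaptivity's approach comes in.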
Over the next series of posts, I'd like to explain Adaptivity's approach to mapping applications onto the correct infrastructure. Their approach is business-based and leans heavily on business value chains such as Porter's Value Chain, depicted below and well described here.
The model above is cited here.
I have found that one of the keys to understanding Adaptivity's approach is their decision to rearrange the five primary activities listed above into a diagram that can be correlated with the information flow cycles in an IT infrastructure. The diagram that Adaptivity technologists use appears below:
The arrows that flow across this diagram represent business scenarios: applications talking to each other and pushing data between different business activities.
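If it helps to see this programmatically, here's an illustrative sketch that treats Porter's five primary activities as nodes in a directed graph and the arrows as data flows between them (the specific edges are my invention for the example, not a transcription of Adaptivity's diagram):

```python
# Five primary activities as graph nodes; each directed edge represents an
# illustrative business scenario in which one application pushes data to
# another (the edge choices here are hypothetical).
FLOWS = {
    "inbound_logistics": ["operations"],
    "operations": ["outbound_logistics"],
    "outbound_logistics": ["marketing_and_sales"],
    "marketing_and_sales": ["service"],
    "service": ["inbound_logistics"],  # feedback closes the information cycle
}

def walk_cycle(start, hops):
    """Follow the information flow for a fixed number of hops."""
    path, node = [start], start
    for _ in range(hops):
        node = FLOWS[node][0]
        path.append(node)
    return path

print(" -> ".join(walk_cycle("inbound_logistics", 5)))
```

Thinking of the diagram as a graph like this is what makes the next step possible: overlaying applications onto the nodes and edges.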
This is the context on which applications are overlaid. In my next post I will discuss these application overlays and their relationship to the appropriate IT infrastructure.
Steve
Twitter: @SteveTodd
EMC Fellow