I've been thinking about the steps required to build an architecture that supports data valuation (assigning economic or business value to data). In my last post I introduced a greenfield approach for building this architecture from scratch.
In this post I'd like to begin exploring some ideas for transforming an existing data center into an architecture where application data is automatically placed and managed based on business value.
As I suggested previously, I think application decommissioning is a great place to start. I'd summarize it this way:
- Discover which applications are running on mission-critical infrastructure unnecessarily
- Manually assess the business value and determine which archival features are required (e.g. retention, searchability)
- Sunset the application onto the appropriate level of trusted infrastructure
- Record the perceived value of the application, and its target infrastructure, in a Metadata Lake
The first step in this process is "discover". There are programmatic methods for discovering (a) how many applications are currently running on mission-critical infrastructure, and (b) how many should be retired.
One of the key tools that supports this functionality is Adaptivity. I've used the diagram below to highlight how Adaptivity, broadly stated, can classify applications into three different buckets: sunset, enhance, and modernize.
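Adaptivity's actual scoring methodology is proprietary, but the three-bucket idea can be illustrated with a toy classifier. The two input scores below (business value and technical fitness) are hypothetical stand-ins for the kinds of signals an assessment tool would produce, not real Adaptivity outputs:

```python
def classify(business_value: float, technical_fitness: float) -> str:
    """Toy three-bucket classifier (illustrative only).

    Both scores are assumed to be in [0, 1]; the thresholds are
    arbitrary choices for the sketch.
    """
    if business_value < 0.3:
        return "sunset"      # low value: retire to an archival tier
    if technical_fitness < 0.5:
        return "modernize"   # valuable, but poorly suited to its platform
    return "enhance"         # valuable and well placed: keep investing

print(classify(0.2, 0.9))  # sunset
print(classify(0.8, 0.3))  # modernize
print(classify(0.8, 0.9))  # enhance
```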
I discussed the process of decommissioning applications to an archival tier in a previous post. This archival tier is typically less expensive, slower, and highly trusted. By "highly trusted" I mean that the tier typically surfaces a large number of trust attributes such as:
- Chain of custody
- Retention management
- Legal holds
- Controlled destroy
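One way to make "highly trusted" concrete is as a set of required trust attributes that a candidate tier must surface. The attribute names below come from the list above; the matching function and the two example tiers are illustrative assumptions, not a real product interface:

```python
# Trust attributes an archival tier must surface to qualify as "highly trusted"
REQUIRED_TRUST_ATTRIBUTES = {
    "chain_of_custody",
    "retention_management",
    "legal_holds",
    "controlled_destroy",
}

def is_highly_trusted(tier_attributes: set) -> bool:
    """A tier qualifies only if it surfaces every required attribute."""
    return REQUIRED_TRUST_ATTRIBUTES <= tier_attributes

# A hypothetical archival tier vs. a bare commodity object store
archive_tier = REQUIRED_TRUST_ATTRIBUTES | {"searchability"}
object_store = {"retention_management"}

print(is_highly_trusted(archive_tier))  # True
print(is_highly_trusted(object_store))  # False
```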
One such archival product, depicted below, is InfoArchive.
As this move occurs (shown above as a move from a 2nd platform infrastructure on the right to the InfoArchive tier on the left), rich archival metadata is appended to the content. This enables the system to enforce specific business policies that are in line with the corresponding level of data value.
How can this system be augmented to position the infrastructure for ongoing and automated data valuation?
Simply put, the team in charge of the migration should manually record any statements made about the business value of the data (e.g. data policies), combine that information with the capabilities of the InfoArchive infrastructure (trust services), and store both of them into a Metadata Lake. In a previous post I depicted the end result as follows:
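The "combine and record" step can be sketched as building one Metadata Lake entry per decommissioned application. The schema, field names, and example application below are hypothetical illustrations, not an actual Metadata Lake or InfoArchive API:

```python
from dataclasses import dataclass, asdict

@dataclass
class MetadataLakeEntry:
    """Illustrative record pairing the perceived value of the data
    with the trust services of the archival infrastructure."""
    application: str
    business_value: str          # manually recorded statement of value
    data_policies: list          # policies stated by the business
    target_infrastructure: str   # where the application was sunset to
    trust_services: list         # capabilities the target tier surfaces

entry = MetadataLakeEntry(
    application="claims-reporting",  # hypothetical application name
    business_value="reference only; rarely queried",
    data_policies=["retain 7 years", "searchable by claim id"],
    target_infrastructure="InfoArchive",
    trust_services=["chain of custody", "retention management",
                    "legal holds", "controlled destroy"],
)

# Storing the entry is reduced here to serializing it; a real
# Metadata Lake would index it for later valuation queries.
record = asdict(entry)
print(record["target_infrastructure"])  # InfoArchive
```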
How does this step enable the eventual nirvana of a more automated valuation architecture? The diagram below highlights the target state of a governed placement service (which I have described more fully in a previous post).
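In that target state, placement becomes a lookup rather than a manual decision: a governed placement service consults the Metadata Lake for an application's recorded value and returns the matching tier. The valuation scores, tier names, and thresholds below are all illustrative assumptions:

```python
# Hypothetical valuation scores previously recorded in the Metadata Lake
METADATA_LAKE = {
    "claims-reporting": 0.2,
    "order-entry": 0.9,
}

def place(application: str) -> str:
    """Sketch of a governed placement service: map recorded
    data value to an infrastructure tier (thresholds are arbitrary)."""
    value = METADATA_LAKE[application]
    if value >= 0.7:
        return "mission-critical"
    if value >= 0.4:
        return "general-purpose"
    return "archival"  # low value: an InfoArchive-style trusted tier

print(place("order-entry"))       # mission-critical
print(place("claims-reporting"))  # archival
```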
When it comes to building a data valuation architecture, I believe that application decommissioning is a great place to start for a number of reasons.
First, decommissioning frees up valuable mission-critical resources.
Second, decommissioning onto an archival tier is a manual activity that aligns the data value of the application with the right level of trusted infrastructure underneath it. This is the first step towards the goal state.
Third, it introduces a foundational architectural component for data valuation: the Metadata Lake. All future pieces of a valuation framework will build on it.
Finally, it is not uncommon for Adaptivity to classify 20-30% of business applications as candidates for decommissioning. If a corporation can commit to the discipline of application inventory, it will be well on its way towards a valuation architecture.
This approach is analogous to the discipline that many customers undertook to increase "percent virtualized" as they moved towards a private cloud architecture.
In my next post I will cover the second phase on the journey to data valuation: Application Mapping.