At lunch during VMworld this week I had the opportunity to sit in on a CTO Roundtable kicked off by EMC CTO John Roese. The lunch was attended by a handful of customers: Costco, Lumenate, the NNSA, Secure-24, AIG, and the University of Florida. The session was organized by Ed Walsh and moderated by Sheryl Chamberlain (both CTO Office employees).
The goal, as part of the CTO Office VISION-X program, was to give select VMware customers direct access to the vision and strategy messaging of EMC's CTO. In addition, Ed and Sheryl arranged to have three technologists dive down into specific areas of conversation:
- Scott Lowe discussed VMware's Network Virtualization point of view.
- Robin Ren discussed the emergence of the all-flash array.
- Andrew Aitken of Olliance Consulting shared the continued evolution of Open Source strategies.
Below are some of the highlights of the discussion in these three areas.
Network Virtualization and SDN
John Roese kicked things off with the message that software-defined does not mean throwing away or de-emphasizing hardware. It simply means that the hardware becomes less specific to the software above it: tight coupling between the two makes the whole stack slower to respond to change. Here at VMworld, we see a company (VMware) that removed that tight coupling at the server layer, and now it is moving on to the network. SDN is not a radical replacement of the networking infrastructure; it's simply a more agile way to manage it.
Scott Lowe, currently working in VMware's Nicira team, then led a discussion exactly along those lines. Scott's main message about Network Virtualization was the following:
Automate the complex stuff.
In order to emphasize this and clarify VMware's view on Network Virtualization, Scott referred back to Martin Casado's earlier comment at VMworld:
Martin mentioned Nick McKeown during the keynote. The term SDN was meant to apply strictly to the separation of control plane and data plane. Separate the control path. Part of the reason why we run into network silo issues in the data center is because people don’t understand SDN and what it means to them. SDN doesn't mean the networking team is not relevant because we have a new type of control plane. The issue is not the elimination of hardware, or a positioning of hardware versus software. The issue is that we need to separate out the stuff that is complex and slows down provisioning and other things. We will still need human intervention but the complex stuff becomes automated. Allow the hardware to focus on the transport in the same way that the compute virtualization pulled the complex stuff out and let the CPU focus on crushing the bits.
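The control/data-plane split Martin describes can be pictured with a toy sketch (my own illustration, not Nicira/NSX code): the controller decides once where traffic should go and installs rules; the switch then does nothing but fast table lookups per packet.

```python
# Minimal sketch of control-plane/data-plane separation (illustrative
# only; class names and topology format are invented for this example).

class Controller:
    """Control plane: decides where traffic for each destination goes."""
    def __init__(self, topology):
        self.topology = topology  # e.g. {"10.0.0.2": "port-1"}

    def install_flows(self, switch):
        # Push precomputed forwarding rules down to the data plane.
        for dst, port in self.topology.items():
            switch.flow_table[dst] = port

class Switch:
    """Data plane: forwards packets using the installed flow table."""
    def __init__(self):
        self.flow_table = {}

    def forward(self, packet):
        # Fast path: a simple lookup, no per-packet decision logic.
        return self.flow_table.get(packet["dst"], "drop")

controller = Controller({"10.0.0.2": "port-1", "10.0.0.3": "port-2"})
switch = Switch()
controller.install_flows(switch)

print(switch.forward({"dst": "10.0.0.2"}))  # port-1
print(switch.forward({"dst": "10.9.9.9"}))  # drop (no rule installed)
```

The complex, slow-moving logic lives in the controller and can be automated; the hardware is left to "crush the bits."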
At this point the customers in the room chimed in with their point of view on Scott's statements:
- The biggest challenges with SDN and Network Virtualization are skillset challenges. Many organizations are not prepared and don't have the knowledge to connect the hypervisor to the storage. It requires someone with that knowledge, and where do you find them? How do you train them? Scott's approach is the right one, but it is not an easy path from a skills perspective.
- The mid-market (i.e., SMBs) views technologies like SDN as complexity, not automation. Adoption will be difficult there.
- The Nicira approach in regards to the integration with security is a differentiator. When you combine Nicira with security you can do things that you can’t do today. It allows the data center to self-defend and remediate security risks that can’t be remediated today.
- Mainframe is still a large part of many organizations and we need an answer there as well.
Enhancing the Storage Layer with Flash
John Roese also shared his thoughts on the expanding heterogeneity of storage architectures. Block and file systems continue to be the place where vital data is stored, but new storage architectures (such as flash) are emerging because of a simultaneous expansion in the number of applications. If we don't do something about the expansion of applications and storage platforms, we'll have an n-squared problem in the manageability of the overall data center, and the OPEX issue will become unbearable. John mentioned evolving to an "hourglass" model where application expansion funnels down to the software-defined storage layer as the arbitrage point for the underlying storage infrastructure. This allows customers to deploy the newer systems alongside the legacy ones. At this point, Robin Ren stepped the customers through some very recent updates on the flash storage market.
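The scaling concern behind the hourglass model is easy to quantify (the arithmetic below is my own illustration, not figures from the session): with direct integrations, every application/storage-platform pair must be managed separately, while an intermediary layer reduces this to one integration per side.

```python
# Back-of-the-envelope illustration of the n-squared manageability
# problem. The counts (50 apps, 8 platforms) are invented examples.

def direct_integrations(apps, platforms):
    # Every app integrated with every platform: multiplicative growth.
    return apps * platforms

def hourglass_integrations(apps, platforms):
    # Each side integrates once with the software-defined layer.
    return apps + platforms

print(direct_integrations(50, 8))     # 400 touchpoints to manage
print(hourglass_integrations(50, 8))  # 58 touchpoints to manage
```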
Robin discussed the fundamental transition from spindles to flash. Three years ago, nobody would have believed that all-flash was an option, but in the last six months we hear people talking about the all-flash data center. Here at VMworld, virtualization has been a primary driver for the adoption of flash. Virtualization in general is a great workload for flash because of randomization: you have large quantities of random small-block IOs, and flash handles these far better than spindles.
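This randomization is often called the "IO blender" effect, and a small sketch makes it concrete (my own illustration, not from the talk): each VM issues sequential IO to its own region, but interleaving many VMs at the hypervisor produces a stream that looks random to the array.

```python
# Sketch of the "IO blender" effect. Addresses and counts are invented.

def vm_stream(base, count, block=8):
    # One VM's IO: sequential logical block addresses from its region.
    return [base + i * block for i in range(count)]

# Three VMs, each perfectly sequential within its own address range.
streams = [vm_stream(base=vm * 10000, count=4) for vm in range(3)]

# Round-robin interleave, as the hypervisor multiplexes VM traffic.
blended = [lba for group in zip(*streams) for lba in group]
print(blended)
# [0, 10000, 20000, 8, 10008, 20008, 16, 10016, 20016, 24, 10024, 20024]
```

Each stream is sequential, but the blended stream jumps across the address space on every IO; a spindle pays a seek for each jump, while flash does not.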
In addition to performance, Robin referenced this year's VMworld 2013 Hands-on Lab infrastructure.
In 2012, the VMworld hands-on lab took 13-14 floor tiles for its spindle-based approach. This year the lab is running 100% on XtremIO, roughly a 30-to-1 efficiency increase. The cost of power and cooling in the US is about $1,000 per rack per month; overseas it can be triple that in countries like Singapore, Germany, and the UK. The hands-on lab savings alone could easily reach $14,000 per month. So while an all-flash array's price tag may seem high for a given application workload, there are significant OPEX savings to consider post-installation.
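The quoted savings figure follows directly from the per-rack cost (assuming roughly one rack per floor tile, which is my simplification, not something stated in the session):

```python
# Sanity check of the quoted OPEX figures. The one-rack-per-tile
# assumption is mine; the per-rack cost and footprint are from the talk.

racks_2012 = 14          # spindle-based lab footprint (13-14 tiles)
cost_per_rack_us = 1000  # USD per rack per month, US figure

monthly_savings_us = racks_2012 * cost_per_rack_us
print(monthly_savings_us)  # 14000 -> matches the ~$14,000/month claim

# Overseas, where per-rack cost can be roughly triple:
print(racks_2012 * cost_per_rack_us * 3)  # 42000
```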
Robin then turned the conversation over to the customers:
- Several customers in the room had already worked with Robin on their XtremIO deployments and mentioned that:
  - they were finally able to satisfy their VDI deployments;
  - they are looking to deploy it for other applications, such as ODS;
  - they would love a VBlock version of it;
  - the level of management complexity is quite low; and
  - the dedup functionality can only be done in the array, and XtremIO does it well.
- One customer with a 10PB Hadoop infrastructure was planning on adding XtremIO to the mix.
- Several customers questioned Robin about the positioning and commitment of EMC across the HDD (e.g. VMAX, VNX) and flash (e.g. XtremIO) storage systems. Robin quoted David Goulden's message that we want overlaps in our product lines as opposed to gaps. The rich feature set of 20+ years of innovation in VMAX and VNX will not go away, and ViPR will be used to bridge between these systems and the new kid on the block (XtremIO).
- Customers would like EMC to spend more time on the "application-aware" aspect of the I/O stream. Much of the traffic between application and storage is "trash I/O", and with a bit of application awareness, a large amount of it could be eliminated. Robin responded that a lot of that intelligence can be solved at the VMware layer. Pat Gelsinger mentioned that it's about app-aware storage and intelligence: the application can work hand in hand with the underlying storage to solve that problem. Because VMware sits higher in the stack, it can understand which type of IO it is seeing and redirect activity accordingly. This is a key aspect of their software-defined storage strategy.
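The inline deduplication the customers praised can be pictured with a toy content-addressed store (a sketch of the general technique only; XtremIO's actual implementation is proprietary): identical blocks hash to the same fingerprint and are stored once, which is why VDI workloads with near-identical images dedup so well.

```python
import hashlib

# Toy content-addressed block store illustrating inline deduplication.
# Class and method names are invented for this example.

class DedupStore:
    def __init__(self):
        self.blocks = {}    # fingerprint -> block data (stored once)
        self.refcount = {}  # fingerprint -> number of logical references

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = block  # only new unique content is stored
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp                    # logical address is the fingerprint

store = DedupStore()
# 100 writes of an identical VDI image block reduce to one stored copy.
for _ in range(100):
    store.write(b"golden-image-block")

print(len(store.blocks))  # 1 unique block stored for 100 logical writes
```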
John addressed the last part of the session by sharing his views on Open Source. His background with Open Source in many other arenas (most notably networking) fuels his strong belief that vendor collaboration is a key ingredient in the outcome of any Open Source initiative. With successful vendor involvement, an Open Source ecosystem can yield a well-reasoned, well-structured architecture that unlocks innovation by enabling a focus on differentiation. He pointed to Swift as a great example of how strong vendor involvement resulted in a great plug-in to the OpenStack framework.
At this point in the conversation Sheryl introduced Andrew Aitken to lead the customers in an Open Source discussion. Andrew highlighted some new ways in which the industry is beginning to innovate with Open Source, including:
- BMW's Open Source framework for applications for cars.
- The Department of Defense and Veterans Affairs leveraging Open Source (MUMPS) for electronic health records.
- Open Source initiatives in the financial service industry where companies like Bank of America, Fidelity, and Citibank are cooperating on leveraging common frameworks that they all use.
Andrew also mentioned that fear of the licensing model is still the primary barrier in many companies.
At this point the customers began sharing their feedback:
- They love the cost and leverage of Open Source, but typically need a support license along with it so they aren't caught flat-footed when something goes wrong.
- The government is starting to be a wholesale adopter of such methods.
- There is a lifecycle of Open Source denial, inventory/discovery, followed by a decision to either be strategic or tactical in enterprise use of Open Source.
Sheryl closed the meeting by stating that this is just the first of many planned EMC CTO Roundtables. Contact Ed or Sheryl (pictured below) to explore setting up this type of meeting in the future.