Monday, April 20, 2009

McKinsey Recommends Virtualization as a First Step to Cloud

In a study released last week, the storied consulting firm McKinsey & Company suggested that moving datacenter applications wholesale to the cloud probably doesn't make sense: reconfiguring the applications is too expensive, and the cloud is no bargain if it is simply substituted for equipment procurement and maintenance costs. I think this conclusion is obvious. They go on to suggest that companies adopt virtualization technology in order to improve the utilization of datacenter servers from the current miserable average of ten percent. I think this is obvious too. The leap they hesitated to make explicitly, but which was tacitly present in the slides, was that virtualization offers the first step to cloud computing, and that a blend of internal plus external resources probably offers the best value to the enterprise. In other words, cloud should not be viewed as an IT alternative; it should be considered an emerging IT architecture.

With virtualization as an underpinning, enterprises not only get the benefit of increased asset utilization on their captive equipment, they also take the first step toward cloud by defining their applications independently of their physical infrastructure (virtual appliances, for lack of a better term). The applications are then portable to cloud offerings such as Amazon's EC2, which is itself based on virtual infrastructure (the Xen hypervisor). In this scenario, cloud is not an alternative to IT. Instead, cloud is an architecture that should be embraced by IT to maximize financial and functional capability while simultaneously preserving corporate policies for managing information technology risk.
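
To make the portability claim concrete, here is a minimal sketch, in Python with the boto3 AWS SDK, of launching a virtual appliance image on EC2. The AMI ID, region, and instance type are my own illustrative placeholders, not anything from the McKinsey study or from rPath.

```python
# Minimal sketch: launching a virtual appliance image on Amazon EC2.
# Assumes AWS credentials are already configured for boto3; the AMI ID,
# region, and instance type below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical appliance image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

The point is the shape of the interaction: once the application is packaged as an image, moving it to a provider is an API call, not a porting project.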

Virtualization as a step to cloud computing should also be viewed in the context of data, not simply application and server host resources. Applications not only need compute capacity, they also need access to the data that defines the relationship of the application to the user. In addition to hypervisor technology such as VMware and Citrix's Xen, enterprises also need to consider how they are going to abstract their data from the native protocols of their preferred storage and networking equipment.

For static data, I think this abstraction will take the form of storage and related services with RESTful interfaces that enable web-scale availability of the data objects, rather than the local network availability associated with file system interfaces like NFS. With RESTful interfaces, objects are abstracted from any particular network resource, making them available wherever on the network they are needed. Structured data (frequently updated information, typically managed by database server technology) is a bit trickier, and I believe solving the problem of web-scale availability of structured data will represent the “last mile” of cloud evolution. It will often be the case that the requirement for structured data sharing among applications is the ultimate arbiter of whether an application moves to the cloud or remains on an internal network.
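
As an illustration of the difference, consider the shape of a RESTful object interface, sketched here in Python with the requests library; the endpoint and object key are hypothetical. Unlike an NFS mount, the object is addressed by URL and is reachable from any host that can reach the endpoint.

```python
# Sketch of RESTful object access: PUT an object, then GET it back from
# anywhere on the network. The endpoint and key are hypothetical.
import requests

ENDPOINT = "https://storage.example.com"   # hypothetical object store
KEY = "reports/q1-summary.pdf"

# Store the object with an HTTP PUT; no file-system mount is required.
with open("q1-summary.pdf", "rb") as f:
    requests.put(f"{ENDPOINT}/{KEY}", data=f).raise_for_status()

# Retrieve it later from any host that can reach the URL.
resp = requests.get(f"{ENDPOINT}/{KEY}")
resp.raise_for_status()
data = resp.content
```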

The company that I founded, rPath, has been talking about the virtualization path to cloud computing for the past three years. Cloud is an architecture for more flexible consumption of computing resources, independent of whether those resources are captive equipment or offered by a service provider for a variable consumption charge. About nine months ago, rPath published the Cloud Computing Adoption Model, which defined this approach in detail, with a corresponding webinar to offer color commentary. In the late fall of last year, rPath published a humorous video cartoon that likewise offered some color on this approach to cloud computing. With McKinsey chiming in with a similar message, albeit an incomplete one, I am hopeful that the market is maturing to the point where cloud becomes more than a controversial sound bite about replacing the IT function and instead evolves into an architecture that provides everyone more value from IT.


Monday, April 13, 2009

Outsourcing Gives Way to Now-Sourcing via Cloud Technology

The theory behind the value of outsourcing, aside from labor arbitrage, was that the outsourcer could deliver IT resources to the business units more cost-effectively than the internal IT staff, thanks to a more highly optimized resource management system. The big problem with outsourcing, however, was the enormous hurdle the IT organization faced in transitioning to the “optimized” management approach of the outsourcer. In many cases this expensive hurdle had to be crossed twice: once when the applications were “outsourced,” and again when the applications were subsequently “in-sourced” after the outsourcer failed to live up to the service level expectations set during the sales pitch. Fortunately, the new architecture of cloud computing enables outsourcing to be replaced with “now sourcing” by eliminating the barriers to application delivery on third-party networks.

The key to “now sourcing” is the ability to decouple applications and data from the underlying system definitions of the internal network while simultaneously adopting a management approach that is lightweight and fault tolerant. Historically, applications were expensive to “outsource” because they were tightly coupled to the underlying systems and data of the internal network. The management systems also presupposed deep access to the lowest level of system structure on the network in order to hasten recovery from system faults. The internal IT staff had “preferences” for hardware, operating systems, application servers, storage arrays, etc., as did the outsourcer, and the two were inevitably miles apart in both brands and structure, not to mention differences in versions, release levels, and the management system itself. Even with protocols that should be a “standard,” each implementation still had peculiarities based upon vendor and release level. NFS is a great example: Sun's implementation of NFS on Solaris was different from NetApp's implementation on their filers, leading to expensive testing and porting cycles in order to attain the benefits of “outsourcing.”

I believe a by-product of the “cloud” craze will be new technology, protocols, and standards that are designed from the beginning to enable applications to run across multiple networks with a much simpler management approach. A great example is server virtualization coupled with application delivery standards like OVF, the Open Virtualization Format. With x86 as a de facto machine standard and virtualization as implemented by hypervisor technology like Xen and VMware, applications can be “now sourced” to providers like Amazon and Rackspace with very little cost associated with the “migration.”
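
To show what the OVF side of this looks like, here is a toy Python sketch that reads the virtual hardware items out of an OVF descriptor using only the standard library. The namespace URIs come from the OVF 1.0 specification, but the descriptor file name is an assumption of mine.

```python
# Toy sketch: list the virtual hardware items in an OVF descriptor.
# Namespace URIs are from the OVF 1.0 spec; the file is illustrative.
import xml.etree.ElementTree as ET

NS = {
    "ovf": "http://schemas.dmtf.org/ovf/envelope/1",
    "rasd": "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
            "CIM_ResourceAllocationSettingData",
}

tree = ET.parse("appliance.ovf")  # hypothetical descriptor file

# Each <Item> in the VirtualHardwareSection describes one resource
# (CPU count, memory, disk, network adapter, and so on).
for item in tree.iter("{http://schemas.dmtf.org/ovf/envelope/1}Item"):
    name = item.findtext("rasd:ElementName", default="?", namespaces=NS)
    qty = item.findtext("rasd:VirtualQuantity", default="", namespaces=NS)
    print(name, qty)
```

Because the descriptor is plain, vendor-neutral XML, any provider's tooling can read the same hardware requirements without knowing anything about the internal network the appliance came from.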

Some will argue that we are simply trading one protocol trap for another. For example, Amazon does not implement Xen with OVF in mind as an application delivery standard. Similarly, VMware imposes special kernel requirements on the virtual machines defined within OVF in order to keep your support agreement valid. Amazon's S3 cloud storage protocol is different from the similar REST protocol associated with EMC's new Atmos cloud storage platform. And the list of “exceptions” goes on and on.

Even in the face of these obvious market splinters, I still believe we are heading to a better place. I am optimistic because all of these protocols and emerging standards are sufficiently abstracted from the hardware that translations can be done on the fly, as with translations between Amazon's S3 and EMC's Atmos. Or the penalty of non-conformance is so trivial that it can be ignored, as with VMware's kernel support requirements, which do not impact actual run-time performance.
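
To suggest why I think the splinters are manageable, here is the rough shape of such an on-the-fly translation, sketched in Python with the requests library. Both endpoints are hypothetical stand-ins, and a production shim would also have to translate authentication headers and metadata conventions, which do differ between S3 and Atmos.

```python
# Sketch of an on-the-fly translation between two REST object stores.
# Both endpoints are hypothetical stand-ins; a real shim would also map
# authentication headers and provider-specific metadata.
import requests

SOURCE = "https://s3-style.example.com/bucket"
TARGET = "https://atmos-style.example.com/rest/namespace"

def translate_object(key: str) -> None:
    """Copy one object from the source store to the target store."""
    obj = requests.get(f"{SOURCE}/{key}")
    obj.raise_for_status()
    content_type = obj.headers.get("Content-Type",
                                   "application/octet-stream")
    # The object body passes through unchanged; only the protocol
    # envelope (URL layout, headers) is translated.
    put = requests.put(f"{TARGET}/{key}", data=obj.content,
                       headers={"Content-Type": content_type})
    put.raise_for_status()

translate_object("reports/q1-summary.pdf")
```

Because both sides speak HTTP over abstracted storage, the translation touches only the envelope, never the hardware underneath.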

The other requirement for “now sourcing” that I mentioned above was a fault-tolerant, lightweight approach to application management. The system administrators need to be able to deliver and manage the applications without getting into the low-level guts of the systems themselves. As with any “new” approach that requires even the slightest amount of “change” or re-factoring, this requirement to re-think the packaging and management of the applications will initially be an excuse for the IT staff to “do nothing.” In the face of so many competing priorities, even subtle application packaging and management changes become the last item on the ever-lengthening IT “to do” list, even when the longer-term savings are significant. But since “now sourcing” is clearly more palatable to IT than “outsourcing” (and more effective too), perhaps there is some hope that these new cloud architectures will find a home inside the IT department sooner rather than later.
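
As a closing illustration, here is a minimal Python sketch of what I mean by lightweight, fault-tolerant management: rather than digging into the guts of a sick system, the tooling polls a health endpoint and replaces the whole appliance when it stops answering. The health URL and the relaunch hook are hypothetical placeholders.

```python
# Sketch of lightweight management: treat appliances as disposable.
# Instead of repairing a sick system in place, poll a health endpoint
# and replace the whole virtual appliance when it stops responding.
# The health URL and the relaunch hook are hypothetical placeholders.
import time
import requests

HEALTH_URL = "https://app.example.com/health"

def relaunch_appliance() -> None:
    """Placeholder: re-run the appliance image via the cloud API."""
    print("replacing unhealthy appliance with a fresh copy of the image")

while True:
    try:
        healthy = requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        healthy = False
    if not healthy:
        relaunch_appliance()
    time.sleep(30)
```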
