Monday, April 20, 2009

McKinsey Recommends Virtualization as first step to Cloud

In a study released last week, the storied consulting firm McKinsey & Company suggested that moving datacenter applications wholesale to the cloud probably doesn't make sense: it's too expensive to re-configure the applications, and the cloud is no bargain if it simply substitutes for equipment procurement and maintenance costs. I think this conclusion is obvious. They go on to suggest that companies adopt virtualization technology to improve the utilization of datacenter servers from the current miserable average of ten percent. I think this is obvious too. The leap they hesitated to make explicitly, but which was implied in the slides, is that perhaps virtualization offers the first step to cloud computing, and that a blend of internal plus external resources probably offers the best value to the enterprise. In other words, cloud should not be viewed as an alternative to IT; it should be considered an emerging IT architecture.

With virtualization as an underpinning, enterprises not only get the benefit of increased asset utilization on their captive equipment, they also take the first step toward cloud by defining their applications independently of their physical infrastructure (virtual appliances, for lack of a better term). The applications are then portable to cloud offerings such as Amazon's EC2, which is built on virtual infrastructure (the Xen hypervisor). In this scenario, cloud is not an alternative to IT. Instead, cloud is an architecture that IT should embrace to maximize financial and functional capability while simultaneously preserving corporate policies for managing information technology risk.

Virtualization as a step to cloud computing should also be viewed in the context of data, not simply application and server host resources. Not only do applications need compute capacity, they also need access to the data that defines the relationship of the application to the user. In addition to hypervisor technology such as VMware and Citrix's Xen, enterprises also need to consider how they are going to abstract their data from the native protocols of their preferred storage and networking equipment.

For static data, I think this abstraction will take the form of storage and related services with RESTful interfaces that enable web-scale availability of data objects, rather than the local network availability associated with file system interfaces like NFS. With RESTful interfaces, objects become abstracted from any particular network resource, making them available to whatever network needs them. Structured data (frequently updated information, typically managed by database server technology) is a bit trickier, and I believe solving the problem of web-scale availability of structured data will represent the “last mile” of cloud evolution. The requirement to share structured data among applications will often be the ultimate arbiter of whether an application moves to the cloud or remains on an internal network.
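To make the contrast with file system protocols like NFS concrete, here is a minimal sketch of object access over a RESTful interface. The endpoint, bucket path, and keys are hypothetical placeholders rather than any particular vendor's API, and a real service would also require authentication on every request, but the point stands: the object is addressed by a URL, not by a mount point on a local network.

```python
# Minimal sketch of RESTful object access; the endpoint and keys below are
# hypothetical placeholders, not a specific provider's API, and real services
# would require authentication on every request.
import requests

BASE_URL = "https://objects.example.com/my-bucket"  # hypothetical endpoint

def put_object(key, data):
    """Store an object by key over plain HTTP; no file system mount required."""
    response = requests.put(f"{BASE_URL}/{key}", data=data)
    response.raise_for_status()

def get_object(key):
    """Fetch the object from anywhere on the network that can reach the URL."""
    response = requests.get(f"{BASE_URL}/{key}")
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    put_object("reports/q1.csv", b"region,revenue\neast,100\n")
    print(get_object("reports/q1.csv"))
```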

The company that I founded, rPath, has been talking about the virtualization path to cloud computing for the past three years. Cloud is an architecture for more flexible consumption of computing resources, whether they are captive equipment or offered by a service provider for a variable consumption charge. About nine months ago, rPath published the Cloud Computing Adoption Model, which defined this approach in detail, with a corresponding webinar to offer color commentary. In the late fall of last year, rPath published a humorous video cartoon that likewise offered some color on this approach to cloud computing. With McKinsey chiming in with a similar message, albeit an incomplete one, I am hopeful that the market is maturing to the point where cloud becomes more than a controversial sound bite for replacing the IT function and instead evolves into an architecture that provides everyone more value from IT.


Monday, April 13, 2009

Outsourcing gives way to Now-sourcing via Cloud Technology

The theory behind the value of outsourcing, aside from labor arbitrage, was that the outsourcer could deliver IT resources to the business units more cost-effectively than the internal IT staff could, thanks to a more highly optimized resource management system. The big problem with outsourcing, however, was the enormous hurdle the IT organization faced in transitioning to the “optimized” management approach of the outsourcer. In many cases this expensive hurdle had to be crossed twice: once when the applications were “outsourced” and then again when the applications were subsequently “in-sourced” after the outsourcer failed to live up to the service level expectations set during the sales pitch. Fortunately, the new architecture of cloud computing enables outsourcing to be replaced with “now sourcing” by eliminating the barriers to application delivery on third-party networks.

The key to “now sourcing” is the ability to de-couple applications and data from the underlying system definitions of the internal network while simultaneously adopting a management approach that is lightweight and fault tolerant. Historically, applications were expensive to “outsource” because they were tightly coupled to the underlying systems and data of the internal network. The management systems also presupposed deep access to the lowest level of system structure on the network in order to hasten recovery from system faults. The internal IT staff had “preferences” for hardware, operating systems, application servers, storage arrays, etc., as did the outsourcer. And they were inevitably miles apart in both brands and structure, not to mention differences in versions, release levels, and the management systems themselves. Even with protocols that should be a “standard,” each implementation still had peculiarities based upon vendor and release level. NFS is a great example. Sun's implementation of NFS on Solaris was different from NetApp's implementation on their filers, leading to expensive testing and porting cycles in order to attain the benefits of “outsourcing.”

I believe a by-product of the “cloud” craze will be new technology, protocols, and standards that are designed from the beginning to enable applications to run across multiple networks with a much simpler management approach. A great example is server virtualization coupled with application delivery standards like OVF. With x86 as a de facto machine standard and virtualization as implemented by hypervisor technology like Xen and VMware, applications can be “now sourced” to providers like Amazon and RackSpace with very little cost associated with the “migration.”
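As a simple illustration of how little friction that “migration” involves once an application is packaged as a machine image, here is a present-day sketch that launches a pre-built appliance image on EC2 with the boto3 library. The AMI ID, region, and instance type are placeholder values; the point is that running the same image on a provider's infrastructure is an API call rather than a re-porting project.

```python
# Sketch of "now sourcing" a pre-built virtual appliance image onto EC2 using
# boto3. The AMI ID, region, and instance type are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical appliance image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

for instance in response["Instances"]:
    print("Launched", instance["InstanceId"])
```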

Some will argue that we are simply trading one protocol trap for another. For example, Amazon does not implement Xen with OVF in mind as an application delivery standard. Similarly, VMware has special kernel requirements for the virtual machines defined within OVF in order to validate your support agreement. Amazon's S3 cloud storage protocol is different from the similar REST protocol associated with EMC's new Atmos cloud storage platform. And the list of “exceptions” goes on and on.

Even in the face of these obvious market splinters, I still believe we are heading to a better place. I am optimistic because all of these protocols and emerging standards are sufficiently abstracted from the hardware that translations can be done on the fly, as with translations between Amazon's S3 and EMC's Atmos. Or the penalty of non-conformance is so trivial that it can be ignored, as with VMware's kernel support requirements, which do not impact actual run-time performance.
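To show what translation on the fly might look like in practice, here is a sketch of a thin object-store abstraction that an application (or a migration tool) could code against. The two back ends in the example are deliberately hypothetical stand-ins, not faithful renderings of the S3 or Atmos protocols; the point is that once the application speaks to an interface this narrow, bridging providers becomes an adapter exercise.

```python
# Sketch of a thin object-store abstraction that permits on-the-fly translation
# between providers. The back end below is a hypothetical stand-in, not a
# faithful rendering of any vendor's protocol.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in back end; a real adapter would speak one provider's REST dialect."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

def migrate(source: ObjectStore, target: ObjectStore, keys):
    """Copy objects between back ends; the application never sees the difference."""
    for key in keys:
        target.put(key, source.get(key))

if __name__ == "__main__":
    a, b = InMemoryStore(), InMemoryStore()
    a.put("backup.tar", b"\x00\x01")
    migrate(a, b, ["backup.tar"])
    print(b.get("backup.tar"))
```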

The other requirement for “now sourcing” that I mentioned above was a fault-tolerant, lightweight approach to application management. The system administrators need to be able to deliver and manage the applications without getting into the low-level guts of the systems themselves. As with any “new” approach that requires even the slightest amount of “change” or re-factoring, this requirement to re-think the packaging and management of the applications will initially be an excuse for the IT staff to “do nothing.” In the face of so many competing priorities, even subtle application packaging and management changes become the last item on the ever-lengthening IT “to do” list, even when the longer-term savings are significant. But since “now sourcing” is clearly more palatable to IT than “outsourcing” (and more effective too), perhaps there is some hope that these new cloud architectures will find a home inside the IT department sooner rather than later.


Friday, October 24, 2008

Can You See the Clouds from Windows?

During our webinar entitled "The Pragmatist's Guide to Cloud Computing: 5 Steps to Real ROI," several of the attendees submitted questions regarding the status of Windows as an environment for cloud applications. In partial answer to those questions, Jeff Barr, a speaker during the webinar and a member of the Amazon Web Services team, responded that a beta implementation of Windows for EC2 was now available. The problem with the notion of “Windows for EC2” is that it perpetuates the broken, legacy model of tying your application to the infrastructure upon which it runs.

In the legacy model, applications became artificially tied to the physical server upon which they ran, and server utilization was low because it is very difficult to run multiple applications on a single instance of a general purpose operating system: each application has unique needs that conflict or compete with the needs of the other applications. Virtualization technology, such as that provided by VMware or Citrix with XenServer, breaks the bond of the application to a physical server by placing a layer of software, called a hypervisor, on the physical hardware beneath the operating system instances that support each application. The applications are “isolated” from one another inside virtual machines, and this isolation eliminates the conflicts.

Amazon embraces this virtualization model by using Xen to enable their Elastic Compute Cloud (EC2) service. So what's the problem? If the OS instances are not tied to the physical servers any longer (indeed, you do not even know which physical system is running your application on EC2, nor do you need to know), why am I raising a hullabaloo over a “broken model”? The reason this new model of Windows for EC2 is broken is that your application is now artificially coupled to EC2. When you begin with a Windows Amazon Machine Image (AMI), install your application on top, configure-test, configure-test, configure-test, configure-test, configure-test to get it right, and then save the tested configuration as a new AMI, the only place you can run this tested configuration of your application is on Amazon's EC2. If you want to run the application on another virtualized cloud, say one provided by RackSpace, or Terremark, or GoGrid, or even your own internal virtualized cloud of systems, you have to install the application yet again, configure-test, configure-test, configure-test, configure-test, configure-test to get it right again, and then save the tested configuration on that other cloud service. Why don't we just stop the madness and admit that binding the OS to the physical infrastructure upon which it runs is a flawed approach when applications run as virtual machine images (or virtual appliances) atop a hypervisor or a virtualized cloud of systems like EC2?

The reason we continue the madness is that madness is all we have ever known. Everyone knows that you bind an operating system to a physical host. Operating systems are useless unless they bind to something, and until the emergence of the hypervisor as the layer that binds to the physical host, the only sensible approach for operating system distribution was to bind it to the physical host. When you buy hardware, you make it useful by installing an operating system as step one. But if the operating system that you install as step one in the new virtualized world is a hypervisor in lieu of a general purpose operating system, how do we get applications to be supported on this new type of host? Here's the answer: what we previously knew as the general purpose operating system now needs to be transformed into just enough operating system (JeOS, or “juice”) to support the application, and it should bind to the application, NOT THE INFRASTRUCTURE.
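To make this less abstract, here is a toy sketch of what binding just enough operating system to the application might look like: a single appliance definition that carries its application plus a minimal package set and is rendered into whichever image format a given hypervisor or cloud expects. Everything in the sketch (the class, the package names, the image formats) is hypothetical; it illustrates the shape of the idea, not rPath's or anyone else's actual build tooling.

```python
# Toy sketch of a JeOS-style appliance: the application and its minimal OS
# travel together, and the one definition is rendered per runtime target.
# The class, package names, and image formats are hypothetical.
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    application: str      # the payload the appliance exists to run
    packages: tuple       # just enough operating system to support it

    def build(self, image_format: str) -> str:
        """Pretend to emit one image per target; the definition never changes."""
        return f"{self.name}.{image_format}"

crm = Appliance(
    name="crm-appliance",
    application="crm-server",
    packages=("kernel", "libc", "python", "crm-server"),
)

# The same appliance definition yields images for different runtimes.
for fmt in ("ami", "vmdk", "xen-img"):
    print(crm.build(fmt))
```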

Virtualization enables the separation of the application from the infrastructure upon which it runs, making possible a level of business agility and dynamism previously unthinkable. Imagine being able to run your applications on demand in any datacenter around the world that exposes the hypervisor (any hypervisor) as the runtime environment. Privacy laws prevent an application supporting medical records in Switzerland from running in an Amazon datacenter in Belgium? No problem, run the application in Switzerland. Need to run the same application in Belgium in support of a new service being offered there next month? No problem, run it on Amazon's infrastructure in Belgium. The application has to support the covert operations associated with homeland security and it cannot be accessed via any Internet connection? No problem, provide it as a virtual appliance for the NSA to run on their private network. Just signed a strategic deal with RackSpace that provides an extraordinary level of service that Amazon is not willing to embrace at this time? No problem, shut down the instances running on EC2 and spin them up at RackSpace. All of this dynamic capability is possible without the tedious cycle of configure-test, if we will simply bind the operating system to the application in order to free it from the infrastructure and let it fly into the clouds.

So why doesn't Microsoft simply allow Windows to become an application support infrastructure, aka JeOS, instead of a general purpose operating system that is bound to the infrastructure? Because JeOS disrupts their licensing and distribution model. Turning a ship as big as the Microsoft Windows licensing vessel might require a figurative body of water bigger than the Atlantic, Pacific, and Indian oceans combined. But if they don't find a way to turn the ship, they may find that their intransigence becomes the catalyst for ever-increasing deployments of Linux and related open source technology that is unfettered by the momentum of a mighty business model. Folks with valuable .NET application assets might begin to consider technology such as Novell's Mono project as a bridge to span their applications into the clouds via Linux.

I can tell you that there are lots of folks asking lots of questions about how to enable Windows applications in the “cloud.” I do not believe the answer is “Windows for EC2” plus “Windows for GoGrid” plus “Windows for RackSpace” plus “Windows for [insert your datacenter cloud name here].” If Microsoft does not find a way to turn the licensing ship and embrace JeOS, the market will eventually embrace alternatives that provide the business agility that virtualization and cloud computing promise.


Tuesday, October 14, 2008

Will the Credit Crunch Accelerate the Cloud Punch?

It's no secret that the days of cheap capital might be over. While it is obvious that startups with lean capital structures are already embracing cloud offerings such as Amazon EC2 for computing and S3 for storage, it seems to me that this trend might accelerate further for both startups and even enterprise customers.

Cloud consumption in the startup segment is poised to accelerate as investors like Sequoia Capital warn their portfolio companies to “tighten up” in the face of this credit crunch. Even well-capitalized SaaS providers might begin reconsidering the “ridiculous” expense of building out their offerings on the classic salesforce.com model of large-scale, proprietary datacenters with complex and expensive approaches to multi-tenancy. They might be better served by a KnowledgeTree model, where on-demand application value is delivered via virtual appliances. In this model, the customer can deploy the software on existing gear (no dedicated server required) because the virtualization model makes for a seamless, easy path to value without setup hassles. Or they can receive the value of the application as a SaaS offering when KnowledgeTree spins up their instance of the technology on Amazon's elastic compute cloud. In either case, both the customer and KnowledgeTree avoid the capital cost of acquiring dedicated gear to run the application.

Large enterprises, too, will be reconsidering large-scale datacenter projects. When credit is tight, everyone from municipal governments to the best-capitalized financial institutions must find ways to avoid outlays of precious capital ahead of the reality of customer collections. More and more of these customers will be sifting through their application portfolios in search of workloads that can be offloaded to the cloud in order to free up existing resources and avoid outlays for new capacity to support high-priority projects. Just as the 9/11 meltdown was a catalyst for the adoption of Linux (I witnessed this phenomenon as the head of enterprise sales at Red Hat), a similar dynamic might emerge for incremental adoption of cloud driven by the credit crunch of 2008. All new projects will be further scrutinized with one question in mind: “Is there a better way forward than the status quo?”

As enterprises of all sizes evaluate new approaches to minimize capital outlays while accelerating competitive advantage via new applications, rPath is offering a novel adoption model for cloud computing that might serve as a convenient bridge across the credit crunch capital gap. For those interested in exploring this new model, please join us in a webinar along with the good folks at Forrester, Amazon, and Momentum SI on October 23rd. If necessity is the mother of invention, we might be poised for some truly terrific innovations in the cloud space, and we will owe a debt of gratitude to the credit crunch for driving the new architecture forward.


Thursday, March 01, 2007

Licensing Hullabaloo

I love using the word “hullabaloo.” I don't get to use it often, but it is a great word to describe the cacophony of voices currently debating virtualization licensing practices.

It began with a New York Times article by Steve Lohr in which he described the competitive challenge that virtualization (and VMware) poses to Microsoft's operating system business. Then VMware posted a white paper describing Microsoft's licensing practices and their potential negative impact on customers. And Microsoft responded with a statement from Mike Neil, its General Manager of virtualization platforms. No doubt the analysts will begin weighing in on the matter soon. Why all the buzz? Why now? Because the impact of virtualization on the software industry promises to be similar to that of the Internet, and everyone wants to find a control point from which to steer their fortunes.

Virtualization is disruptive because it fundamentally changes the relationship between the hardware, the operating system that runs on the hardware, and the applications that run on the operating system. The hypervisor (virtualization platform) becomes the new layer on the hardware, and the operating system simply becomes an extension of the application as part of a software appliance. If the hypervisor becomes the “new” standard operating system, and any application can run on the hypervisor as a software appliance, it stands to reason that Microsoft and the other big operating system vendors such as Red Hat would be threatened by this change because they lose their point of control – how applications get attached to computers.

But what does all this have to do with licensing? Historically, there has been a strong correlation between the number of physical computers owned by a customer and the number of operating system licenses that a customer required. With virtualization, it is conceivable that every application is a software appliance with its own operating system attached. Moreover, these software appliances might come and go on the network based upon demand for the application. For example, a payroll application might run for a couple of days every month, but otherwise, it is not needed. With software appliances, the payroll software appliance would be deployed to a computer (atop the hypervisor) to run during the days before payday, and then be removed from the machine to make room for other applications to run more speedily during the rest of the month. Should the customer pay for a “full time” license to the operating system that is inside the payroll software appliance? Or should they have a “part time” license that more closely reflects the manner in which they use the payroll software appliance?
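As a back-of-the-envelope illustration of the “part time” pattern, here is a toy sketch of the scheduling decision the payroll example implies: the appliance only occupies a host during a short window before payday and is retired for the rest of the month. The payday date, window length, and deploy/retire actions are all hypothetical placeholders, not any real scheduler's API.

```python
# Toy sketch of the "part time" payroll appliance pattern: deploy the appliance
# only in a short window before payday, retire it afterward. The payday date,
# window length, and deploy/retire actions are hypothetical placeholders.
import datetime

PAYDAY_DAY_OF_MONTH = 28
RUN_WINDOW_DAYS = 2   # deploy the appliance this many days before payday

def payroll_should_run(today: datetime.date) -> bool:
    """True only in the few days leading up to this month's payday."""
    payday = today.replace(day=PAYDAY_DAY_OF_MONTH)
    return 0 <= (payday - today).days <= RUN_WINDOW_DAYS

def reconcile(today: datetime.date) -> str:
    if payroll_should_run(today):
        return "deploy payroll appliance atop the hypervisor"   # placeholder action
    return "retire payroll appliance and free the host"         # placeholder action

if __name__ == "__main__":
    print(reconcile(datetime.date(2007, 3, 27)))  # inside the window: deploy
    print(reconcile(datetime.date(2007, 3, 15)))  # outside the window: retire
```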

There is a lot at stake here for Microsoft. If the hypervisor becomes the “full time” operating system on the hardware that enables the “part time” software appliances to arrive and run as needed, Microsoft surely wants that hypervisor to be a Microsoft product, not one from VMware. If the operating system in the software appliances is simply an extension of an application, Microsoft may experience price erosion due to the minimalist nature of the operating system and the “part time” usage scenario.

In any case, Microsoft has a right to license their technology in any manner they see fit, so long as it does not break the law. If the licensing is not in the best interest of customers, customers will vote with their wallets and seek alternatives. Best of all for me, it gives me an opportunity to use a great word like “hullabaloo” to describe the ruckus.
