Tuesday, September 30, 2008

Larry Rains on the Cloud Parade

At Oracle OpenWorld last week, Larry Ellison derided the current “cloud” craze, likening the technology industry's obsession with “fashion” to the women's apparel industry. In a sense, he is right. Everything is being labeled cloud these days. New datacenters from IBM – cloud. New browser from Google – cloud. New strategy from VMware – cloud. I commented to Ben Worthen of the Wall Street Journal that I, too, find the cloud craze a bit “nutty.” At the same time, I believe there is real change underfoot in the industry, and Amazon's Elastic Compute Cloud (EC2) is leading the way in capturing the imagination about what is possible with a new approach.

The reason EC2 has captured the imagination of so many people in the industry is that it offers the possibility of closing the painful gap between application development and production operations. Promoting applications from development to production has typically been a contentious negotiation between the line-of-business application developers and the IT production operations crew. It is a difficult process because the objectives of apps and ops run counter to one another: apps is about delivering new features quickly to respond to market demand, while ops is about compliance, stringent change control, and standardization to assure stability.

With EC2, developers don't negotiate with operations at all. They simply package up the innovations they want inside a coordinated set of virtual machines (virtual appliances, in ISV vernacular), and deploy, scale, and retire them based upon the true workload demands of the market. No requisitions for hardware. No laborious setup of operating environments for new servers. No filling out waivers to use new software components that are not yet production approved. No replacement of components that fail the waiver process, and no re-coding when the approved production components don't work with the new application features. No re-testing. No re-coding. No internal chargebacks for servers that are not really being used because demand for the application has waned. No painful system updates that break the application – even when the system function is irrelevant to the workload. No. No. No.
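
To make that self-service loop concrete, here is a minimal sketch in Python using Amazon's boto3 SDK, purely as an illustration: the AMI ID, key pair name, instance type, and counts are all placeholders, and a real deployment would add networking, security groups, and monitoring.

```python
# Hedged sketch: deploy, scale, and retire an application packaged as a
# machine image (AMI) on EC2. The AMI ID and key name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Deploy": launch instances from the pre-built application image.
launched = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical application appliance
    InstanceType="t3.small",
    MinCount=2,                        # initial footprint
    MaxCount=2,
    KeyName="my-deploy-key",           # hypothetical key pair
)
instance_ids = [i["InstanceId"] for i in launched["Instances"]]

# "Scale": add capacity when workload demand rises.
scaled = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
    MinCount=3,
    MaxCount=3,
)
instance_ids += [i["InstanceId"] for i in scaled["Instances"]]

# "Retire": terminate everything when demand wanes; no idle chargebacks.
ec2.terminate_instances(InstanceIds=instance_ids)
```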

The on-demand, self-service datacenter architecture of Amazon's EC2 is going to put huge pressure on the operations organization to respond with an internal “cloud” architecture – or lose the business of developers who would rather “go to the cloud” than negotiate with ops. Here at rPath, we believe the ops folks are going to need to give the apps folks a release system (rBuilder) and a lifecycle management system (rPath Lifecycle Management Platform) that together enable the self-service capability and rapid promotion of EC2 while preserving compliance with the operating policies that assure stability and security. And if an application really takes off, you don't have to build a new datacenter to respond to the demand. Just scale the workload out onto Amazon, or another provider with a similar cloud architecture. IT operations now has a way to say “yes we can” instead of “no you can't.” Getting to “yes” from your IT ops provider by closing the gap between apps and ops is what the excitement around cloud is all about.


Sunday, September 28, 2008

The Ties that Bind

Last week I attended the MIT Emerging Technology Conference to listen in on the panel on cloud computing. The panel included participants from salesforce.com, Google, and Amazon, as well as Mendel Rosenblum, a co-founder of VMware and a professor at Stanford University. The broad sentiment among the group was that cloud computing is an extension of the trend in which network ubiquity and performance enable a transition of IT capability from fixed costs with a slow rate of change to variable costs with a rate of change that reflects true demand. Software as a service, platform as a service, and infrastructure as a service all qualify under this broad definition. Technology innovations such as rich browser interfaces, virtualization, and SOA are further hastening the trend.

Sitting where I sit at rPath, my favorite quote was one from Mendel Rosenblum about the changing role of the operating system, in response to a question about the technological transformations that will follow these trends. Mendel's quote went something like this:

"An operating system is not very useful by itself. It needs to bind to something to provide value. Historically, you would bind the OS to hardware in order to expose the hardware to the applications. Now, with the widespread adoption of virtualization as the technology that binds to the hardware, the OS needs instead to bind to the application. Every time I say that you guys in the press claim I said the OS is going away. I'm not saying that. I'm just saying that it is going to change."

And this is what we have been saying at rPath for the past three years. When you bind an OS to the hardware in the legacy approach, the tie that binds is the OS installer, which detects and loads drivers, along with the utilities that are useful in managing the hardware lifecycle. When you bind an OS to an application to create a virtual machine, the tie that binds is technology that detects the needs of the application and loads just the right JeOS (Just Enough OS) elements, while also bringing along utilities that are useful in managing the application lifecycle. These are two very different approaches to operating system technology, and Mendel rightly points out that the future OS (and the related business model around the OS) will be different.
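
To make the second kind of binding concrete, here is a minimal sketch of the idea with an entirely hypothetical package map: start from what the application declares it needs, walk the dependency graph, and keep only that closure as the JeOS image, leaving the rest of the general purpose OS behind.

```python
# Hedged sketch: compute a "just enough OS" package set by walking a
# dependency graph from the application's declared requirements.
# Package names and dependencies below are invented for illustration.
DEPENDS = {
    "myapp":  ["python", "libssl"],
    "python": ["libc", "zlib"],
    "libssl": ["libc"],
    "zlib":   ["libc"],
    "libc":   [],
    "cups":   ["libc"],   # present in a general purpose OS,
    "bluez":  ["libc"],   # but irrelevant to this workload
}

def jeos_closure(app):
    """Return the minimal package set (the JeOS) for one application."""
    needed, stack = set(), [app]
    while stack:
        pkg = stack.pop()
        if pkg not in needed:
            needed.add(pkg)
            stack.extend(DEPENDS.get(pkg, []))
    return needed

print(sorted(jeos_closure("myapp")))
# ['libc', 'libssl', 'myapp', 'python', 'zlib']; cups and bluez never make it in
```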

The great news is that this difference will further the move toward a more flexible datacenter - ultimately enabling seamless movement of application workloads across datacenters, à la cloud computing. Binding the OS to the application as part of the application release process means that applications can be delivered to the infrastructure as a pre-defined, pre-tuned, coordinated set of virtual machines. Anyone with experience deploying virtual machines knows that it is much easier to deploy a virtual machine (or a virtual appliance, in the case of an application vendor) than to provision a new server and configure the application atop a new host OS. Metadata standards like OVF, coupled with packaging innovations from companies such as rPath, will free applications from the infrastructure and allow them to run in the clouds.

Needless to say, I was grinning ear to ear when I heard Mendel's comments. I hope they get repeated again and again.

Tuesday, September 16, 2008

VMware Strikes Back

The tech industry has been abuzz lately over the new hypervisor technologies that are emerging to take on VMware's dominant hypervisor product. Microsoft launched Hyper-V with a party last week to upstage VMware's VMworld event this week. Red Hat purchased Qumranet to solidify its control of the KVM hypervisor technology. Now, VMware is striking back at the legacy OS vendors by labeling its new product category the Virtual Datacenter Operating System – a direct attack on the entrenched category of the general purpose operating system. The cold war of spies and covert operations to grab mindshare while outwardly promoting a message of peaceful co-existence has officially escalated into a hot war for the future architecture of the datacenter.

I, for one, am happy to see this rise in hostilities because I believe it will carry the industry to a much better place – and customers will be the primary beneficiaries of the new approach. In the legacy datacenter, a general purpose operating system attempts to serve both the hardware infrastructure, with device drivers, and the applications, with system services. This approach has the disadvantage of artificially coupling applications to physical servers. Any attempt to move an application to another physical server typically requires that the configuration and validation process begin anew, because it is extremely unlikely that the new server is identical to the previous one. This lack of flexibility leads to extreme overspending on capital equipment, because an application in a period of low demand cannot relinquish its hardware resources to an application experiencing high demand. The hardware resources become application specific, and each application owner must size hardware capacity to meet peak demand. Server utilization in the datacenter averages 15–20%, and the general purpose OS is the culprit.

VMware has now declared that they offer an alternative approach to the general purpose operating system. The technology is not new, but marketing it under the category of an operating system is a very different tactic in this war for the datacenter. The conflict is now overt instead of covert, and this change was inevitable as VMware attempts to expand its footprint beyond its bread and butter business of Windows server consolidation and test lab operations. The new objective is the elastic datacenter and ultimately cloud computing.

The datacenter becomes elastic when applications are released by developers as coordinated sets of virtual machines (or virtual appliances, in the case of a vendor release), each with Just Enough Operating System (JeOS or “juice”) attached to provide the system services required by the application. These applications can expand or contract on demand because there is no onerous configuration process to ready a general purpose OS for a specific application. Instead, the hypervisor accepts the virtual machine and allocates resources to it as specified by the OVF metadata included with the image. Applications are up and running in a matter of seconds, and the process is totally repeatable to assure stability, security, and compliance as workloads scale, de-scale, and re-scale to meet the ever-changing demand profiles of the enterprise application portfolio. Infrastructure can become a variable cost under this architecture because the scaling cycle can include hardware resources provided by third parties through a hypervisor layer – aka cloud computing, as popularized by Amazon's Elastic Compute Cloud (EC2) on the Xen hypervisor.
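
As a rough illustration of that hand-off, here is a minimal sketch, not a faithful OVF implementation, of how a hypervisor-side tool might read the CPU count and memory size a packaged virtual machine requests; the descriptor below is heavily abbreviated and the quantities are invented.

```python
# Hedged sketch: pull requested CPU and memory out of (abbreviated,
# illustrative) OVF hardware metadata shipped alongside a VM image.
import xml.etree.ElementTree as ET

RASD = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData"

# A stripped-down stand-in for the VirtualHardwareSection of an OVF descriptor.
OVF_SNIPPET = f"""
<VirtualHardwareSection xmlns:rasd="{RASD}">
  <Item>
    <rasd:ResourceType>3</rasd:ResourceType>        <!-- 3 = processor -->
    <rasd:VirtualQuantity>2</rasd:VirtualQuantity>
  </Item>
  <Item>
    <rasd:ResourceType>4</rasd:ResourceType>        <!-- 4 = memory -->
    <rasd:VirtualQuantity>1024</rasd:VirtualQuantity>
    <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
  </Item>
</VirtualHardwareSection>
"""

def requested_resources(xml_text):
    """Map CIM resource type codes to the quantities the VM requests."""
    root = ET.fromstring(xml_text)
    wanted = {}
    for item in root.findall("Item"):
        rtype = item.findtext(f"{{{RASD}}}ResourceType")
        qty = item.findtext(f"{{{RASD}}}VirtualQuantity")
        wanted[rtype] = int(qty)
    return wanted

print(requested_resources(OVF_SNIPPET))   # {'3': 2, '4': 1024}
```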

The reason I label this new competitive tack by VMware as “warfare” is that the concept of a hypervisor as the infrastructure management layer, with JeOS as the system services layer for applications delivered as virtual machines, destroys the value of the general purpose OS. If a hypervisor provides access to the infrastructure via device drivers, applications receive system services from JeOS, the flexibility of the datacenter improves, the management of applications is simplified, and I can embrace cloud computing for variable cost infrastructure, why would I ever again buy a general purpose operating system? I won't. And customers won't either.

Unleash the dogs of war. Let's get to it so that we can all live happily ever after on the other side of history.


Thursday, September 04, 2008

Red Hat Escalates Hypervisor Wars

Red Hat today announced the acquisition of Qumranet, the company behind the Kernel-based Virtual Machine (KVM) bare metal hypervisor. With this acquisition, Red Hat is escalating the already fierce battle raging for control of the software layer that is rapidly replacing the general purpose OS as the access layer for hardware infrastructure. Qumranet is a very savvy acquisition for Red Hat because it plays to its strength as the primary maintainer of low-level Linux kernel technology. The Linux kernel is a mature, high-performing provider of hardware driver capability, and there is no doubt in my mind that it can become a significant competitor in the bare metal hypervisor space.

Given all of the competitive noise surrounding hypervisors these days – the Microsoft Hyper-V launch is next week, followed by VMworld the week after – the stakes in this game are enormous. It represents a fundamental shift in the architecture for both server and desktop applications. No longer will the general purpose OS be the table stakes for releasing or “certifying” an application to run in the customer's environment. Instead, the hypervisor is going to be the target, and applications will arrive pre-configured and ready to run as virtual machines with Just Enough OS (JeOS or “juice”) attached to provide system services and a connection to the hypervisor. Indeed, this new architecture is one of the driving forces behind the concept of cloud computing. Amazon's Elastic Compute Cloud (EC2) is enabled by the Xen hypervisor, and the new CEO of VMware, Paul Maritz, has sounded off time and again about the importance of a cloud computing architecture since taking the helm.
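
To picture what “the hypervisor is the target” looks like in practice, here is a hedged sketch using the libvirt Python bindings to register and boot an application delivered as a pre-built disk image on a KVM host; the domain XML is pared down, and the name, memory size, and disk path are placeholders.

```python
# Hedged sketch: boot an application delivered as a pre-built disk image
# on a KVM host via libvirt. Name, sizing, and disk path are placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>myapp-appliance</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/myapp-appliance.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the appliance definition
dom.create()                            # start it; no host OS install needed
print(dom.name(), "running:", bool(dom.isActive()))
conn.close()
```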

Red Hat's new, aggressive move in this space is good news for customers who will inevitably embrace this cloud approach for enabling the flexible, elastic datacenter. It means that there will be more competition for the hypervisor design win, which translates to better features, better performance, and lower cost. This type of bare-knuckle competition is what the software market is all about, and customers are the big winners in this fight.