Friday, March 21, 2008

The Patching Dilemma

As virtual appliances pick up steam as an alternative approach to delivering SaaS value, I have seen a few analysts proclaim that the burden of patch delivery and management makes multi-tenancy via virtualization unattractive. They are correct . . . if you attempt to build the customer virtual appliances using a legacy approach: a general purpose operating system with no integrated approach to lifecycle management. They are wrong if you consider a Just enough Operating System (JeOS, or “juice”) approach with robust lifecycle management such as that offered by rPath. Let me provide an example.

rPath maintains a reference implementation of a full featured distribution of the Linux operating system as part of our rPath Appliance Platform offering. Compared with Red Hat and Novell, we probably offer about 80% of the software packages they provide in this reference. The remaining 20% represent desktop technology and other packages that do not matter for our target market but are certainly important to their respective go-to-market strategies. As part of our commitment to maintain a full featured distribution, we have released about 200 security patches over the past 2 years. That is a lot of patches, but it is the reality of maintaining an OS. Keep this number in your head for a moment.

rPath also delivers our products to our customers as virtual appliances. Our ISV customers receive rBuilder and the rPath Appliance Platform as turn-key server applications completely managed by rPath on their network. Thanks to the unique packaging technology we pioneered, the operating system footprint needed to support rBuilder and the rPath Appliance Platform is about 50 MB. When you use a JeOS architecture, you eliminate every package that is not required by the application. Why is this important? Remember the 200 security patches released by rPath over the last 2 years? Only 3 were required to support our product implementation at our customers. That's correct: only 3.
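To make the JeOS idea concrete, here is a minimal sketch in Python of the dependency-closure principle behind it. This is an illustration only, not rPath's actual tooling, and the package names and dependency map are hypothetical.

# A minimal sketch, not rPath's tooling: starting from the application's
# direct requirements, keep only the transitive closure of packages it
# depends on and drop everything else in the distribution.
def required_packages(app_requirements, depends_on):
    """Return the set of packages the application actually needs."""
    needed, to_visit = set(), list(app_requirements)
    while to_visit:
        pkg = to_visit.pop()
        if pkg not in needed:
            needed.add(pkg)
            to_visit.extend(depends_on.get(pkg, []))
    return needed

# Hypothetical example data.
depends_on = {
    "rbuilder": ["python", "httpd"],
    "httpd": ["openssl", "glibc"],
    "python": ["glibc"],
    "openssl": ["glibc"],
    "gnome-desktop": ["x11", "glibc"],  # present in the full distro, never pulled in
}

print(sorted(required_packages(["rbuilder"], depends_on)))
# -> ['glibc', 'httpd', 'openssl', 'python', 'rbuilder']

Every package outside that closure, along with every patch that only affects those packages, simply never enters the appliance.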

Furthermore, because we deliver our patches against a specific, known implementation of the product (i.e., the customer did not assemble it themselves from multiple third-party components, which would make clean application of patches virtually impossible), all of our customers received and applied the patches with no testing burden for them and no customer support burden for rPath.
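A simplified way to see why a fully specified appliance makes patching deterministic: the vendor knows the exact package versions every customer is running, so the update set is just the difference between the shipped manifest and the patched one. The sketch below uses made-up version numbers and is not rPath's manifest format.

# Hypothetical manifests: the appliance as shipped, and the same appliance
# after the vendor publishes a security fix for one package.
shipped = {"glibc": "2.3.6-1", "openssl": "0.9.8-2", "httpd": "2.2.3-5"}
patched = {"glibc": "2.3.6-1", "openssl": "0.9.8-3", "httpd": "2.2.3-5"}

# Because every customer runs exactly the shipped manifest, the update set
# is known in advance and applies cleanly everywhere.
updates = {pkg: ver for pkg, ver in patched.items() if shipped.get(pkg) != ver}
print(updates)  # -> {'openssl': '0.9.8-3'}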

Returning to the analysts who claim patching makes multi-tenancy via virtualization untenable: using a legacy approach with a general purpose operating system inside a snapshot virtual machine would indeed ruin the economics of multi-tenancy via virtualization. With the rPath approach, coupled with rPath's technology for distributing patches to large numbers of systems with minimal administrator labor, you can host 66 customer virtual appliances for the same administrator effort as one virtual machine with the legacy model (3 patches vs. 200). And you avoid the expense of re-architecting and re-writing your code to support multi-tenancy, a VERY expensive proposition. And you avoid changing your business and sales model, because customers can run the virtual appliances on-premises without the headaches of technology integration and multi-party maintenance management.
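The 66-to-1 figure is simple arithmetic on the patch counts above, assuming administrator effort scales roughly with the number of patch events. Here is the back-of-the-envelope version (an illustration, not a measurement):

legacy_patches = 200  # patches for the full general purpose OS over two years
jeos_patches = 3      # patches the JeOS appliance actually required

# Roughly 66 JeOS appliances generate the same patch workload as one
# general purpose virtual machine built the legacy way.
appliances_per_legacy_vm = legacy_patches // jeos_patches
print(appliances_per_legacy_vm)  # -> 66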

Virtual appliances deliver all of the value of SaaS to your customer base without the vendor hassles of changing your technology and your business model. However, simply snapshotting an implementation of legacy components and ignoring the lifecycle management issues will not scale. Taking that approach would be crazy and unprofitable. rPath gives you the best of both worlds.


Tuesday, March 11, 2008

A Big Switch or a Gradual Shift?

I just finished reading Nicholas Carr's new book, The Big Switch. I enjoyed the read, but I found the conclusions just a bit sensational. Not surprising, as all such books seek to be titillating and a bit controversial in order to hold our attention from cover to cover. The basic premise of the book is that there will be a “big switch” from internal application development, deployment, and management to external procurement of application services. The losers will be the skilled developers and IT staff who currently toil away inside the development centers and datacenters of corporations, and the winners will be the application providers such as Google and salesforce.com that deliver applications on demand. I do not believe the “big switch” will be so black and white, but I do believe a gradual shift is underway.

The historical metaphor that Carr uses effectively to demonstrate the likelihood of this pending change is the switch from locally produced electrical power to regionally produced electrical power delivered via a high-performing electrical grid infrastructure. In Carr's metaphor, electricity is analogous to applications and the electrical grid is analogous to the Internet. There are clearly some parallels, but I believe the metaphor is flawed because information applications are more analogous to hair dryers, drill presses, and die stamping machines (i.e., the applications that consume electricity) than to the electricity itself.

Here is a simple example. Both a paper mill and a steel mill need high voltage electricity, but the paper mill applies that electricity to an application that digests wood chips into a slurry suitable for making paper, while the steel mill applies that electricity to the transformation of molten iron ore into various steel products. The paper mill has no use for an application that transforms iron ore, and the steel mill has no use for an application that digests wood chips. Their application requirements are very different, but they use very similar electrical inputs.

It is true enough that all businesses need certain applications that are somewhat universal. Salesforce.com has certainly demonstrated that a single implementation of a customer relationship management and sales force automation application can be applied across a variety of businesses and delivered effectively via the Internet. Perhaps Google will indeed accomplish the same result for basic professional productivity applications such as word processing and spreadsheet analysis. But what is the fate of proprietary applications? Is salesforce.com going to deliver chip design and analysis simulations to Intel? I doubt it. Is Google going to deliver portfolio and risk analysis applications to Goldman Sachs? Unlikely. If these applications are not candidates for the “big switch,” how might their delivery still be improved according to Carr's theory?

Carr identifies two key technology developments in the “big switch” from local to regionally produced power: alternating current (AC) and reliable transformers. Alternating current makes it possible to distribute high voltage over large distances, while transformers reliably “step down” that voltage to levels where it can be safely and reliably consumed by a variety of applications (hair dryers, drill presses, etc.). Fiber optics and broadband switching are clearly the IT equivalent of alternating current, enabling efficient delivery over long distances. I believe that hypervisors coupled with virtual appliances are analogous to the transformer technology of the power system. When applications can reliably plug into a grid to receive “power” in a standardized and repeatable manner, it will be increasingly popular to let someone else deliver the power of the grid while individual companies focus on the “design of the application” (i.e., the drill press, the chip digester, the ore smelter).

Currently, the applications Goldman Sachs uses to perform portfolio and risk analysis are not easily portable to a bank of computers that Intel uses for chip design and simulation. The only way Goldman could reliably “move” these applications to another “power” provider would be to literally unbolt the racks of machines from their datacenter, truck them to another datacenter, re-bolt them to the floor, and re-attach them to power and network. The definition of the applications is hard-coupled to the machines that run them because there was never any thought of running them on a different “grid.” It takes effort to design applications to be totally independent of the computers they run on.

If, however, Goldman Sachs were to “transform” these applications into a coordinated group of virtual appliances, then they could literally “plug” the applications into any set of computers that exposed a standard hypervisor. As standards emerge for reliable “transformation” of applications into virtual appliances, opportunities will emerge for utility providers of variable cost datacenter capacity (i.e., cloud computing services such as Amazon's EC2) to supply the “power” to these applications. I do not believe it will occur as a “big switch,” but I am convinced that we are witnessing the beginning of a gradual shift in the division of labor for application delivery. Companies will increasingly focus their scarce resources on the definition of the application, and the machines that provide “power” to the application will increasingly be purchased as variable cost computing cycles. But I have to agree with Carr that The Big Switch is a much better title for a book than The Gradual Shift.