Friday, January 15, 2010

VMware Spits Into the Wind - Buys Zimbra

What's next? Tugging on Superman's Cape? Pulling the mask off the ol' Lone Ranger? In my opinion, the future of email and collaboration belongs to Google, with Microsoft playing very strong defense shifting folks directly to Azure (which doesn't include Exchange today, but I will bet a nickel it is in the works). If the acquisition of Zimbra is an attempt by VMware to arm the service providers with a similar capability, I sincerely hope VMware is not expecting to make any money along the way.

Don't get me wrong, as I have long been a huge fan of VMware and Zimbra. The former President and CTO of Zimbra, Scott Dietzen, is a friend and sits on the board of rPath, the company that I founded. Zimbra was an rPath customer. rPath was a Zimbra customer. But I think the notion of folks running email and basic collaboration functions at small scale (i.e. any scale that doesn't match Google and Microsoft hosted solutions in the future) is a lost cause – or at least a VERY low margin cause. And I don't think VMware should get into that business (i.e. the hosted solutions business) as it is a completely different beast than selling infrastructure software licenses (ask Microsoft, they haven't got it right but they are willing to spend BILLIONS and still be wrong).

It just makes no sense for anyone in the future to take on the burden of running an enterprise collaboration service (i.e. Exchange, Zimbra) with the software model. I even believe that RIM is going to have problems sustaining the Blackberry business as I witness the integration of Gmail with my iPhone. As I survey this market, I find more and more companies quietly moving their email/calendar/messaging services to Google. If VMware thinks this move is about Microsoft, I think they are wrong. I think it is about Google, and I would not be standing in line to pull the cape or remove the mask after spitting into the wind. VMware is no south Alabama boy named Slim.

Tuesday, November 17, 2009

Windows 7 Diagnosed with Cancer

As I read this news article about a downloadable tool for Windows 7 being "infected" with open source code, I could not help but recall the boisterous statements by Steve Ballmer during the rise of Linux's popularity in the early part of this decade. Ballmer proclaimed Linux and open source to be a "cancer" that would eat away at all that is good in the software industry. Not only the industry, but capitalism itself was at risk due to the contagious and destructive nature of this new approach to software distribution.

I suppose it is now time to begin the radiation and chemo treatments on the Redmond patient. The first tumor has been discovered, and no doubt there are others awaiting detection. As they are discovered, my guess is that the treatment will be far more damaging to the patient than the disease itself. Once you are infected with the potential of open source as a short cut to a better customer experience, paying full price on every line of code becomes challenging. Even for a patient that can spend as heavily as Microsoft on disease treatment.

Thursday, August 27, 2009

Amazon Aims for Enterprises - Poo Poos Internal Clouds

Amazon's announcement yesterday regarding an enterprise feature for linking existing datacenter operations to Amazon's AWS via a Virtual Private Network did not surprise me. It is an obvious extension of their value proposition, and folks had already been accomplishing a similar capability with work-arounds that were simply a bit more cumbersome than Amazon's integrated approach. The more surprising piece of news, in my opinion, is the subtle ratcheting up of the rhetoric by Amazon regarding their disdain for the notion of “internal” cloud. Werner Vogels' blog post explaining the rationale for the new VPN features is a case in point. Here are a few tasty excerpts:

Private Cloud is not the Cloud

These CIOs know that what is sometimes dubbed "private [internal] cloud" does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher. . . .

What are called private [internal] clouds have little of these benefits and as such, I don't think of them as true clouds. . .

[Cloud benefits are]

* Eliminates Cost. The cloud changes capital expense to variable expense and lowers operating costs. The utility-based pricing model of the cloud combined with its on-demand access to resources eliminates the need for capital investments in IT Infrastructure. And because resources can be released when no longer needed, effective utilization rises dramatically and our customers see a significant reduction in operational costs.

* Is Elastic. The ready access to vast cloud resources eliminates the need for complex procurement cycles, improving the time-to-market for its users. Many organizations have deployment cycles that are counted in weeks or months, while cloud resources such as Amazon EC2 only take minutes to deploy. The scalability of the cloud no longer forces designers and architects to think in resource-constrained ways and they can now pursue opportunities without having to worry how to grow their infrastructure if their product becomes successful.

* Removes Undifferentiated "Heavy Lifting." The cloud lets its users focus on delivering differentiating business value instead of wasting valuable resources on the undifferentiated heavy lifting that makes up most of IT infrastructure. Over time Amazon has invested over $2B in developing technologies that could deliver security, reliability and performance at tremendous scale and at low cost. Our teams have created a culture of operational excellence that powers some of the world's largest distributed systems. All of this expertise is instantly available to customers through the AWS services.

Elasticity is one of the fundamental properties of the cloud that drives many of its benefits. While virtualization has tremendous benefits to the enterprise, certainly as an important tool in server consolidation, it by itself is not sufficient to give the benefits of the cloud. To achieve true cloud-like elasticity in a private cloud, such that you can rapidly scale up and down in your own datacenter, will require you to allocate significant hardware capacity. While to your internal customers it may appear that they have increased efficiency, at the company level you still own all the capital expense of the IT infrastructure. Without the diversity and heterogeneity of the large number of AWS cloud customers to drive a high utilization level, it can never be a cost-effective solution.


OK. Let's examine Werner's sales proposition without the pressure to sell anything (as I am not currently trying to sell anyone anything). Clearly, Amazon is now attacking the vendors such as VMware that seem intent on attacking them by proclaiming that Amazon cannot give you enterprise features. Not only is Amazon delivering features targeted at the enterprise, but they are also scaling up the war of words by poo pooing the value proposition of these classic vendors – namely the notion of an internal cloud. Werner makes two assertions in dissing internal clouds:

First, he asserts that an internal cloud is not elastic. Well, why not? Just because your IT department has historically been labeled the NO department doesn't mean that it always must be that way. Indeed, the very pressure of Amazon providing terrific services without the mind-numbing procurement and deployment friction of your IT department is going to lead to massive changes on the part of IT. They are going to virtualize, provide self-provisioning tools, and more closely align business application chargebacks to actual application usage. If the application owners are thoughtful about their architecture, they will be able to scale up and scale back based upon the realities of demand, and their IT transfer costs will reflect their thoughtfulness. Other business units will benefit from the release of resources, and server hoarding will be a thing of the past. All this is not to say that an IT department should “own” every bit of compute capacity they use. They don't. They won't. And there will probably be an increasing shift toward owning less.

But Werner claims that ownership is generally a bad thing in his second assertion that capex is bad and opex is good. Werner writes that cloud eliminates costs by eliminating capital spending. Well, it might - depending on the scenario. But his insinuation that capex is bad and opex is good is silliness. They are simply different, and the measurement that any enterprise must take is one relating to risk of demand and cost of capital. For a capital constrained startup with high risk associated with application demand, laying out precious capital for a high demand scenario in the face of potential demand failure makes no sense at all. However, for a cash rich bank with years of operating history relative to the transaction processing needs associated with servicing customer accounts, transferring this burden from capital expense to operating expense is equally senseless. Paying a premium for Amazon's gross profit margin when demand is fairly deterministic and your cost of capital is low is certainly a losing proposition.
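To put some numbers behind that point, here is a minimal back-of-the-envelope sketch (all figures are hypothetical) showing that demand variability and the cost of capital, not any inherent virtue of opex, decide the question:

```python
# Back-of-the-envelope comparison of owning capacity (capex) versus renting it
# on demand (opex). All numbers are hypothetical illustrations, not real quotes.

def owned_cost(peak_servers, server_price, cost_of_capital, opex_per_server):
    """Annual cost of buying enough servers to cover peak demand."""
    capital = peak_servers * server_price
    return capital * cost_of_capital + peak_servers * opex_per_server

def rented_cost(avg_servers, hourly_rate):
    """Annual cost of renting only what average demand requires."""
    return avg_servers * hourly_rate * 24 * 365

# A capital-constrained startup: spiky demand, expensive capital.
print(owned_cost(peak_servers=100, server_price=3000,
                 cost_of_capital=0.30, opex_per_server=600))  # roughly 150,000
print(rented_cost(avg_servers=10, hourly_rate=0.40))          # roughly 35,000

# A cash-rich bank: steady demand, cheap capital.
print(owned_cost(peak_servers=100, server_price=3000,
                 cost_of_capital=0.06, opex_per_server=600))  # roughly 78,000
print(rented_cost(avg_servers=90, hourly_rate=0.40))          # roughly 315,000
```

Run with spiky demand and expensive capital, renting wins handily; run with steady demand and cheap capital, ownership wins. That is a portfolio decision, not a universal rule.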

The challenge and the opportunity of cloud for any enterprise is moving applications to an architecture that can exercise the cloud option for managing demand risk while simultaneously striking the right balance between capex and opex relative to the cost of capital. I find it funny that Amazon's new VPN feature is designed to make this opportunity a reality, while the blog post of their CTO announcing the feature proclaims that internal operations are too costly. Maybe they are viewing the VPN as a temporary bridge that will be burned when capex to opex nirvana is attained. Personally, I see it as the first of many permanent linkages that will be built to exercise the cloud option for managing demand risk. Lower costs associated with a proper portfolio balance of capex and opex is just icing on the cake.

Monday, August 24, 2009

VMware Springs Big for SpringSource

In a blog post back in May, I described why I believed a SpringSource and Hyperic combination was a good thing. In the new world of virtualized infrastructure and cloud computing, the application delivery and management approach is going to be lightweight and lean. At the time, however, I never imagined lightweight and lean would be worth $420M to VMware. While I have no doubt that a lightweight and agile approach to application delivery and management is going to replace the outdated heavy approach of J2EE and EJB, I am not quite convinced that VMware is getting in this deal what they want us to believe they are getting – general purpose operating system irrelevance.

VMware has done an incredible job abstracting the hardware away from the general purpose operating system. Now they have moved to the other end of the stack in an attempt to abstract the application away from the operating system. If the operating system is not responsible for hardware support and it is likewise not responsible for application support, then it is irrelevant, right? It is a good theory, but it is not quite true.

While the majority of application code will certainly be written in languages that can be supported by SpringSource (Java, Grails), there will remain lots and lots of application utilities and services that are provided by various programs that are not, and will never be, written in Java or the related languages supported by SpringSource. All of these various programs will still need to be assembled into the system images that represent a working application. And while I absolutely believe the general purpose operating system should die an ugly death in the face of virtualized infrastructure and cloud computing, I do not believe that operating systems can be rendered irrelevant to the application. I simply believe they become lighter and more application-specific. I also believe that we are going to see a proliferation of application language approaches, not a consolidation to Java alone.

Acquiring SpringSource puts VMware on the path to providing not only Infrastructure as a Service technology, but also Platform as a Service technology. From what I have seen to date in the market, PaaS lags far, far behind IaaS in acceptance and growth. I have written multiple posts praising the Amazon approach and decrying the Google and Salesforce approach for cloud because the latter requires developers to conform to the preferences of the platform provider while the former allows developers to exercise creativity in the choice of languages, libraries, data structures, etc. That's not to say that PaaS cannot be a valuable part of the application developer toolkit. It's just that the market will be much more limited in size due to the limitations in the degrees of freedom that can be exercised. And if developers love one thing more than anything else, it is freedom.

VMware's acquisition of SpringSource moves them into the very unfamiliar territory of developer tools and runtimes. It is a different sale to a different audience. Developers are notoriously fickle, and it will be interesting to see how a famously insular company like VMware manages to maintain the developer momentum built by the SpringSource team.

Wednesday, July 22, 2009

Microsoft Embraces Linux Virtual Appliances

In a very savvy move aimed at gaining competitive advantage in the cloud computing space, Microsoft yesterday announced that they were contributing source code to the Linux kernel in order to optimize the performance and management of virtual appliances on Microsoft's hypervisor, Hyper-V. One of the goals of cloud computing is elasticity – applications scale up and down based upon the hour by hour demand for the application. Well, you cannot have hour by hour elasticity for an application when it takes days/weeks/months to install, provision, and instrument the application onto the infrastructure. Virtual appliances eliminate this challenge by allowing the application owner to pre-configure the application as a set of virtual machines that are ready to respond to demand. The “set-up” is done “off-line.” Microsoft, realizing that Linux is the de facto underpinning for virtual appliances that run on Amazon's EC2, is now contributing code to Linux that will optimize the performance and management characteristics of Linux-based virtual appliances on Hyper-V – the virtualization technology that underpins Microsoft's Azure cloud.

Microsoft's new status as a Linux kernel contributor sheds light on the amazing shift that is occurring in the battle-lines for IT infrastructure. The operating system is going to split, with one half becoming the control software for the hardware (the hypervisor) and the other becoming the control software for the application (virtual appliances). Given their huge advantage with developers based upon the installed base of applications that run on Microsoft-only application frameworks (.Net, etc.), Microsoft has determined that they need to pull out all of the stops in order to be certain they do not get ripped off the hardware in favor of VMware (the dominant hypervisor) or Xen (the hypervisor that supports Amazon's market leading cloud service). Linux is no longer the biggest threat to Microsoft in the datacenter when datacenters begin embracing a cloud architecture such as Amazon's in order to enable IT-as-a-Service.

If indeed this move enables higher elasticity and simpler management of Linux-based virtual appliances that run atop Hyper-V, the competitive pressure might force VMware to follow suit and make their drivers and tools available as source code that is included in the Linux kernel. To be clear, Microsoft does not currently plan to support Linux virtual appliances on Azure, but that position may be shifting with changes of this type. With Amazon currently holding the dominant position in cloud and with VMware holding the dominant position in the datacenter for virtualization, Microsoft might have lots of crafty tricks up its sleeve to re-assert itself in this new theater of datacenter war where hypervisors and virtual appliances rule the day.

Tuesday, June 30, 2009

IBM Cloud Fizzles

Based on my positive review below of IBM's CloudBurst technology for building internal clouds, I tuned into the IBM webinar for the external cloud companion product with high hopes. I was hoping to hear about a consistent architecture across the two products that would allow an enterprise to federate workloads seamlessly between the internal and external cloud. Boy, was I disappointed.

It seems the IBM external cloud is nothing more than an IBM hosted capability for running virtual appliances of IBM Rational Software products. Among my many disappointments:

- no ability to run virtual appliances defined by me. They don't even publish a specification.

- no federation between internal and external. They are not even the same architecture because one runs Xen and the other runs VMware, and they do not provide a conversion utility.

- private beta (alpha maybe?) for invited customers only. Why make an announcement?

- no timetable for general availability of a product. Why make an announcement?

This announcement was a terrible showing by IBM to say the least. It is obvious to me that the CloudBurst appliance folks (call them “left hand”) and the Smart Business cloud folks (call them “right hand”) were two totally different teams. And the left hand had no idea what the right hand was doing. But each was intent not to be outdone by the other in announcing “something” with cloud in the title. And they were told to “cooperate” by some well meaning marketing and PR person from corporate. And this mess of a situation is the outcome. Good grief!

Monday, June 29, 2009

IBM CloudBurst Hits the Mark

IBM rolled out a new infrastructure offering called CloudBurst last week. Aimed at development and test workloads, it is essentially a rack of x86 systems pre-integrated with VMware’s virtualization technology along with IBM software technology for provisioning, management, metering, and chargeback. I believe IBM, unlike Verizon, has hit the cloud computing mark with this new offering.

First, IBM is targeting the offering at a perfect application workload for cloud – development and test. The transient nature of development and test workloads means that an elastic computing infrastructure with built-in virtualization and chargeback will be attractive to IT staff currently struggling to be responsive to line of business application owners. The line of business application owners are holding the threat of Amazon EC2 over the head of the IT staff if they cannot get their act together with frictionless, elastic compute services for their applications. By responding with a development and test infrastructure that enables self-service, elasticity, and pay-as-you-go chargeback capability, the IT staff will take a step in the right direction to head off the Amazon threat. Moving these dev/test workloads to production with the same infrastructure will be a simple flick of the switch when the line of business owners who have become spoiled by CloudBurst for dev/test complain that the production infrastructure is not flexible, responsive, or cost competitive.

Second, IBM embraced virtualization to enable greater self-service and elasticity. While they do not detail the use of VMware’s technology on their website (likely to preserve the ability to switch it out for KVM or Xen at some future date), IBM has clearly taken an architectural hint from Amazon by building virtualization into the CloudBurst platform. Virtualization allows the owners of the application to put the infrastructure to work quickly via virtual appliances, instead of slogging through the tedious process of configuring some standard template from IT (which is never right) to meet the needs of their application – paying for infrastructure charges while they fight through incompatibilities, dependency resolution, and policy exception bureaucracy. CloudBurst represents a key shift in the way IT will buy server hardware in the future. Instead of shipping either bare-metal or pre-loaded with a bloated general purpose OS (see the complaint about tedious configuration above), systems will come pre-configured with virtualization and self-service deployment capability for the application owners - a cloud-computing infrastructure appliance if you will. Cisco has designs on the same type of capability with their newly announced Unified Computing System.

Third, it appears that IBM is going to announce a companion service to the CloudBurst internal capability tomorrow. From the little information that is available today, I surmise that IBM is likely going to provide a capability through their Rational product to enable application owners to “federate” the deployment of their applications across local and remote CloudBurst infrastructure. With this federated capability across local (fixed capital behind the firewall) and remote sites (variable cost operating expense from infrastructure hosted by IBM), the IBM story on cloud will be nearly complete.

The only real negatives I saw in this announcement were that IBM did not include an option for an object storage array for storing and cataloging the virtual appliances, nor did they include any utilities for taking advantage of existing catalogs of virtual appliances from VMware and Amazon. While it probably hurt IBM’s teeth to include VMware in the offering, perhaps they could have gone just a bit further and included another EMC cloud technology for the object store. Atmos would be a perfect complement to this well considered IBM cloud offering. And including a simple utility for accessing/converting existing virtual appliances really would not be that difficult. Maybe we’ll see these shortcomings addressed in the next version. All negatives aside, I think IBM made a good first showing with CloudBurst.

Thursday, June 18, 2009

Verizon Misses with Cloud Offering

About two weeks back, I was excited to see a headline about Verizon partnering with Red Hat to offer their customers a “new” cloud computing offering. I was hopeful that the details would reveal a KVM hypervisor based elastic compute capability coupled with an OVF based specification for virtual appliances to run on the service. I was also hoping to discover some details on storage as a service, with all of the services accessible via a management capability exposed via RESTful APIs. Boy, was I disappointed. Turns out the new Verizon cloud offering is just the old Verizon hosting offering with a new name.

Why is it so difficult for all of these old school infrastructure providers to understand the new path being blazed by Amazon AWS? Why can't they offer even a reasonable facsimile of the capability provided by Amazon? Surely it is the threat of Amazon that is leading them to re-name the old hosting stuff as the new cloud stuff. Why not go all the way and actually offer something that is competitive? Here is a recipe for any that are interested:

First, provide an X86 hypervisor-based, virtualized compute service that allows the customer to bring their applications with them as pre-packaged, pre-configured virtual machines (virtual appliances). Don't ask them to boot a “standard OS” and then spend hours, days, weeks, months configuring it to work for them (because what you specified as the “standard” is certainly not what they have tested with their applications, and the whole purpose of elasticity is defeated if you can't quickly put images to work on the network in response to application demand). Better yet, let them boot existing Amazon Machine Images and VMware virtual appliances. Providing this capability is not rocket science. It is just work.

Second, provide a simple storage service (see Amazon S3 for what it should do) for storing unstructured data as well as for storing their virtual appliances that boot on the virtualized, elastic compute service. If you don't want to take the time to develop your own, follow AT&T's lead and go buy the capability EMC offers as part of the Atmos product line. You don't even have to think, you just need to write a check and voilà – an Amazon S3 type capability running on your network. What could be easier?

Third, provide a block storage capability for attaching to virtual appliance images that must store state, such as database images. Most of the hosting companies already provide this type of SAN offering, so this part should be a no-brainer. Just price it with a very fine grained, variable cost approach (think megabyte-days, not months).

Fourth, provide access to the infrastructure management services via simple, RESTful APIs. You don't have to go overboard with capability at first, just make certain the basics are available in a manner that allows the services to be run effectively over the Internet without any funky protocols that are specific to your network implementation.

Finally, go sign up partners like rPath and RightScale to offer the next level of manageability and support for the virtual machines that will run on the network. These are the final touches that indicate to your customers that you are serious about providing a terrific capability for the complete lifecycle of your cloud computing offering, instead of asking them to be patient with you while you re-name your hosting offering as a cloud offering in the hopes that it will assuage their bitterness that Amazon-like capability is not available on your network. A rough sketch of what exercising such an offering might look like follows below.
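For the providers keeping score at home, here is that sketch, written from the customer's side. The endpoints, payloads, and image names are hypothetical stand-ins, not any real provider's API:

```python
# Hypothetical customer script against the recipe above: boot virtual
# appliances, attach block storage, store an object, release capacity.
# Every endpoint, header, and identifier here is an illustrative assumption.
import requests

BASE = "https://api.example-cloud.net/v1"
AUTH = {"Authorization": "Bearer <token>"}

# Boot two instances from a pre-packaged, pre-configured virtual appliance.
resp = requests.post(f"{BASE}/instances", headers=AUTH, json={
    "image_id": "appliance-web-frontend",
    "count": 2,
    "instance_type": "small",
})
instance_ids = [i["id"] for i in resp.json()["instances"]]

# Attach a block volume to the instance that must hold state (e.g. a database).
requests.post(f"{BASE}/volumes", headers=AUTH, json={
    "size_gb": 100,
    "attach_to": instance_ids[0],
})

# Store an appliance image in the simple storage service.
with open("web-frontend-appliance.img", "rb") as f:
    requests.put(f"{BASE}/objects/appliances/web-frontend-appliance.img",
                 headers=AUTH, data=f)

# Give the capacity back when demand subsides -- fine-grained, pay-as-you-go.
for iid in instance_ids:
    requests.delete(f"{BASE}/instances/{iid}", headers=AUTH)
```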

Tuesday, June 02, 2009

Federation - The Enterprise Cloud Objective

I know the title to this blog post sounds a bit like a Star Trek episode, but I believe I have a useful point to make with the term federation - even at the risk of sounding a bit corny. I have been watching with interest the lexicon of terms that are emerging to describe the architecture and value of cloud computing. VMware uses the terms Internal/External/Private to describe the distribution of application workloads across multiple networks in a coordinated fashion. Sun uses the terms Private/Public/Hybrid, respectively, to describe the same architecture (although they would argue for Sun branded components in lieu of VMware/EMC branded components). I think both of these term sets are flawed and confusing as descriptors for a cloud architecture that distributes workloads across multiple networks. Rather than simply complaining, however, I am willing to offer a solution.

The term Federation describes the end state of an effective cloud architecture perfectly, and I think we should all begin using it when we attempt to sell our respective goods and services to enable the enterprise cloud. Whether part of an Internal/External/Federation combination or a Private/Public/Federation combination or Network1/Network2/Networkn/Federation, the common term accurately describes the end objective of cloud computing.

First, some attribution. This term was presented to me as a descriptor for cloud value during my work with the cloud infrastructure group at EMC (the folks that own the Atmos product line) over a year ago. It is now my turn to put some greater structure on this enviable original thought that belongs to EMC.

A good general definition for Federation (independent of an IT context) is a union of member entities that preserves the integrity of the policies of the individual members. Members get the benefits of the union while retaining control over their internal affairs.

In the case of a technology infrastructure federation (aka a cloud architecture), the primary benefit of the union is the lower cost and risk associated with a pool of technology assets which are available across a diversified set of independent networks. In other words, application workloads should be distributed to the network with the lowest risk adjusted cost of execution – i.e. based upon the risk policies of the enterprise. If the risk of running a mission critical, enterprise workload on Amazon's AWS network is deemed high (for whatever reason, real or perceived), that workload might stay on a proprietary network owned by the enterprise. Likewise, a low risk workload that is constantly being deferred due to capacity or complexity constraints on the enterprise network might in fact be run for the lowest cost at Amazon or a comparable provider. For a startup, the risk of depleting capital to purchase equipment may dictate that all workloads run on a third party network that offers a variable cost model for infrastructure (Infrastructure as a Service, IaaS).
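A toy sketch makes the placement decision concrete. The networks, rates, and risk premiums below are entirely hypothetical; the point is only the shape of the calculation:

```python
# Federation as a placement decision: each workload runs on the network with
# the lowest risk-adjusted cost. All figures are made-up illustrations.

networks = {
    "internal":  {"cost_per_hour": 0.55, "risk_premium": 0.00},
    "amazon":    {"cost_per_hour": 0.30, "risk_premium": 0.10},
    "provider2": {"cost_per_hour": 0.35, "risk_premium": 0.05},
}

workloads = [
    # (name, hours needed, extra risk premium the enterprise assigns to
    #  running this particular workload off-premise)
    ("batch-analytics",  2000, 0.0),   # low risk, deferred for lack of capacity
    ("customer-billing", 8760, 1.0),   # mission critical, heavy off-premise risk
]

def risk_adjusted_cost(net, hours, workload_premium):
    base = networks[net]["cost_per_hour"] * hours
    premium = networks[net]["risk_premium"]
    if net != "internal":
        premium += workload_premium
    return base * (1.0 + premium)

for name, hours, workload_premium in workloads:
    best = min(networks, key=lambda n: risk_adjusted_cost(n, hours, workload_premium))
    print(f"{name} -> {best}")
# batch-analytics -> amazon (cheapest external network wins)
# customer-billing -> internal (the risk premium keeps it behind the firewall)
```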

Independent of the proprietary calculus for risk that must be undertaken by every enterprise relative to their unique situation, it should become clear to all that the distribution of application workloads across multiple networks based upon the cost/capability metrics of those networks will lower the risk adjusted cost of enterprise computing. The same diversification theories that apply to managing financial portfolio risk also apply to managing the distributed execution of application workloads. The historical challenge to this notion of application workload federation is the lack of an efficient market – the transaction costs associated with obtaining capacity for any given application on any given network were too high due to complexity and lack of standards for application packaging (de facto or otherwise). Now, with virtualization as the underpinning of the network market, virtual appliances as the packaging for workloads, high bandwidth network transit, and web-scale APIs for data placement/access, the time is coming for an efficient market where infrastructure capacity is available to applications across multiple networks. And Federation is the perfect word to describe a cloud architecture that lowers the risk adjusted cost of computing to the enterprise. Enterprise. Federation. Clouds. Star Trek.

Friday, May 15, 2009

Oracle Lowers Expectations

I was floored when Oracle announced the acquisition of Sun after Sun's deal with IBM fell apart. I never saw Oracle buying Sun in a million years – too much reliance on hardware revenue. Then, Oracle announced this week that they intend to purchase Virtual Iron Software. It seems that Oracle is serious about extending its influence into the lower reaches of the software technology ecosystem. They are going after the lower layers of the infrastructure, and they are putting their money where their mouth is in order to put some real assets into the battle for the next generation, virtualized datacenter.

Sun provides some terrific infrastructure assets in the form of the Solaris operating system, the ZFS file system, the Java programming language, and several other lesser known projects that are nonetheless useful technology in assembling a world class datacenter infrastructure. I actually believe that Solaris will matter less as an operating system (it is too difficult to do the driver work for all the variations in X86 hardware) than as a collection of useful system software and engineering expertise for delivering innovation and support. If you have not seen Sun's investment in this area, do some research on the Image Packaging System. It is Sun's implementation of the rPath approach for tailoring the operating system to the needs of an application (JeOS) and providing robust lifecycle management – they even wrote it in Python, not Java, to make it easier to mimic rPath's features.

Virtualization is going to change the notion of the operating system, and Solaris will get torn apart into a series of useful components and support libraries that will be attached as JeOS to various applications that will run on a cloud of virtualized infrastructure (witness Amazon EC2). The lines between Linux, Solaris, and other open system components will blur. This outcome is a good one for Oracle because it is very disruptive to the existing providers of one size fits all, general purpose operating systems. Especially if Oracle follows through on their commitment to virtualization as evidenced by their acquisition of Virtual Iron.

Virtual Iron was an early entrant into the hypervisor space – behind VMware but ahead of XenSource. They never really got out of the gate from a marketing perspective, but they always had some useful technology for managing virtual infrastructure. If Oracle makes a strong commitment to Xen as well as the management interface for controlling virtual infrastructure, they could definitely emerge as the strong contender to take on VMware in this market. I do not believe Citrix has a strong commitment to the datacenter infrastructure market (they prefer the desktop with a strong Microsoft alliance), Red Hat is far behind in market adoption with their late commitment to KVM in lieu of Xen, and Microsoft has so much anxiety over Google that worrying about VMware is likely lower on the list of priorities.

The race to the bottom of the software stack just became more interesting yet again. With Oracle continuing to raise the bar with lower expectations, it will be fun to watch the feeding frenzy for the assets that will win the hearts and minds of those datacenter customers that are certain to embrace virtualization and cloud as the new architecture for scalable, elastic computing.

Wednesday, May 06, 2009

Cloud Application Management - Agile, Lean, Lightweight

The acquisition of Hyperic by SpringSource got me thinking about the next generation of application delivery and management for cloud applications. At first, I was cynical about this combination – two small companies with common investors combining resources to soldier on in a tough capital environment. While this cynical thinking probably has a kernel of truth to it, the more I thought about the combination the more I thought that it makes sense beyond the balance sheet implications. Indeed, I believe the future of application delivery and management will combine agile development with lean resource allocation and lightweight management. This new approach to application delivery and management is one that complements the emerging cloud architecture for infrastructure.

Agile development, with its focus on rapid releases of new application functionality, requires a programming approach that is not overly burdened with the structure of J2EE and EJB. Spring, Rails, Grails, Groovy, Python all represent the new approach – placing a premium on quick delivery of new application functionality. Application functionality takes center stage, displacing the IT infrastructure dominance of the legacy application server oriented approach. Developers will use what works to deliver the application functionality instead of using what works for the IT organization's management framework. The new approach does have implications for scalability, but we will get to that issue in a moment.

Lean is one of the newer terms emerging to describe the future of application delivery. I first referenced lean as an IT concept by relating it to the lean approach for manufacturing operations in a blog post about a year ago. With lean application delivery, applications scale horizontally to consume the infrastructure resources that they require based upon the actual demand that they are experiencing. The corollary is that they also contract to release resources to other applications as demand subsides. This “lean” approach to resource allocation with dynamic scaling and de-scaling is what a cloud architecture is all about – elasticity. Rather than optimizing the code to “scale up” on an ever bigger host, the code remains un-optimized but simple – scaling out with cheap, variable cost compute cycles when the peaks in demand require more capacity. Giving back the capacity when the peaks subside.

With the lean approach for resource allocation, a lightweight management approach that measures only a few things replaces the old frameworks that attempt to measure and optimize every layer in an ever more complex infrastructure stack. If the service is under stress due to demand, add more instances until the stress level subsides. If the service is under extremely light load, eliminate resources until a more economical balance is struck between supply and demand. If an instance of a service disappears, start a new one. In most cases, you don't even bother figuring out what went wrong. It costs too much to know everything. This lightweight approach for management makes sense when you have architected your applications and data to be loosely coupled to the physical infrastructure. Managing application availability is dramatically simplified. Managing the physical hosts becomes a separate matter, unrelated to the applications, and is handled by the emerging datacenter OS as described by VMware or the cloud provider in the case of services like those provided by Amazon AWS.
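A minimal sketch of such a loop might look like the following; the service object and its helpers are hypothetical hooks into whatever virtualization or cloud API is in play, because the point is how little the loop needs to measure or know:

```python
# Lightweight management loop: measure a few things, add or remove instances,
# replace anything that disappears without diagnosing it. The `service` object
# and its methods are hypothetical stand-ins for a real cloud/virtualization API.
import time

HIGH_WATER = 0.75     # average utilization above this -> scale out
LOW_WATER  = 0.25     # average utilization below this -> scale in
MIN_INSTANCES = 2

def manage(service):
    while True:
        instances = service.healthy_instances()

        # An instance vanished? Don't bother figuring out why -- start a new one.
        if len(instances) < service.desired_count():
            service.launch_instance()

        load = service.average_utilization(instances)
        if load > HIGH_WATER:
            service.launch_instance()                  # add capacity under stress
        elif load < LOW_WATER and len(instances) > MIN_INSTANCES:
            service.terminate_instance(instances[-1])  # give capacity back

        time.sleep(60)
```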

Take a look at the rPath video on this topic. I think it reinforces the logic behind the SpringSource and Hyperic combination. It rings true regarding the new approach that will be taken for rapid application delivery and management in a cloud infrastructure environment. Applications and data will be loosely coupled to the underlying infrastructure, and agile development, lean resource allocation, and lightweight management will emerge as the preferred approach for application delivery and management.

Monday, April 20, 2009

McKinsey Recommends Virtualization as first step to Cloud

In a study released last week, the storied consulting company, McKinsey & Company, suggested that moving datacenter applications wholesale to the cloud probably doesn't make sense – it's too expensive to re-configure and the cloud is no bargain if simply substituted for equipment procurement and maintenance costs. I think this conclusion is obvious. They go on to suggest that companies adopt virtualization technology in order to improve the utilization of datacenter servers from the current miserable average of ten percent (10%). I think this is obvious too. The leap that they hesitated to make explicitly, but which was implicit in the slides, was that perhaps virtualization offers the first step to cloud computing, and a blend of internal plus external resources probably offers the best value to the enterprise. In other words, cloud should not be viewed as an IT alternative, but instead it should be considered as an emerging IT architecture.

With virtualization as an underpinning, not only do enterprises get the benefit of increased asset utilization on their captive equipment, they also take the first step toward cloud by defining their applications independent from their physical infrastructure (virtual appliances for lack of a better term). The applications are then portable to cloud offerings such as Amazon's EC2, which is based on virtual infrastructure (the Xen hypervisor). In this scenario, cloud is not an alternative to IT. Instead, cloud is an architecture that should be embraced by IT to maximize financial and functional capability while simultaneously preserving corporate policies for managing information technology risk.

Virtualization as a step to cloud computing should also be viewed in the context of data, not simply application and server host resources. Not only do applications need compute capacity, they also need access to the data that defines the relationship of the application to the user. In addition to virtualization technology such as VMware and Citrix's Xen, enterprises also need to consider how they are going to abstract their data from the native protocols of their preferred storage and networking equipment.

For static data, I think this abstraction will take the form of storage and related services with RESTful interfaces that enable web-scale availability to the data objects instead of local network availability associated with file system interfaces like NFS. With RESTful interfaces, objects become abstracted from any particular network resource, making them available to the network where they are needed. Structured data (frequently updated information typically managed by a database server technology) is a bit trickier, and I believe solving the problem of web-scale availability of structured data will represent the “last mile” of cloud evolution. It will often be the case that the requirement for structured data sharing among applications will be the ultimate arbiter of whether an application moves to the cloud or remains on an internal network.
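To make the contrast concrete, here is a small sketch (with a hypothetical endpoint) of the difference between data reached through a local file system mount and the same data reached as a web-scale object:

```python
# An NFS path only resolves on the network where the mount exists; a RESTful
# object URL resolves from wherever the application happens to be running.
# The mount point, endpoint, and token below are hypothetical.
import requests

# File-system style: tied to a mount on the local network.
with open("/mnt/nfs/reports/q2-summary.pdf", "rb") as f:
    local_copy = f.read()

# Object style: the same content addressed by URL, reachable from any network.
resp = requests.get(
    "https://objects.example-storage.net/reports/q2-summary.pdf",
    headers={"Authorization": "Bearer <token>"},
)
remote_copy = resp.content
```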

The company that I founded, rPath, has been talking about the virtualization path to cloud computing for the past three years. Cloud is an architecture for more flexible consumption of computing resources – independent of whether they are captive equipment or offered by a service provider for a variable consumption charge. About nine months ago, rPath published the Cloud Computing Adoption Model that defined this approach in detail with a corresponding webinar to offer color commentary. In the late fall of last year, rPath published a humorous video cartoon that likewise offered some color on this approach to cloud computing. With McKinsey chiming in with a similar message, albeit incomplete, I am hopeful that the market is maturing to the point where cloud becomes more than a controversial sound bite for replacing the IT function and instead evolves into an architecture that provides everyone more value from IT.

Monday, April 13, 2009

Outsourcing gives way to Now-sourcing via Cloud Technology

The theory behind the value of outsourcing, aside from labor arbitrage, was that the outsourcer could deliver IT resources to the business units in a more cost effective manner than the internal IT staff due to a more highly optimized resource management system. The big problem with outsourcing, however, was the enormous hurdle the IT organization faced in transitioning to the “optimized” management approach of the outsourcer. In many cases this expensive hurdle had to be crossed twice – once when the applications were “outsourced” and then again when the applications were subsequently “in-sourced” after the outsourcer failed to live up to service level expectations set during the sales pitch. Fortunately, the new architecture of cloud computing enables outsourcing to be replaced with “now sourcing” by eliminating the barriers to application delivery on third party networks.

The key to “now sourcing” is the ability to de-couple applications and data from the underlying system definitions of the internal network while simultaneously adopting a management approach that is lightweight and fault tolerant. Historically, applications were expensive to “outsource” because they were tightly coupled to the underlying systems and data of the internal network. The management systems also pre-supposed deep access to the lowest level of system structure on the network in order to hasten recovery from system faults. The internal IT staff had “preferences” for hardware, operating systems, application servers, storage arrays, etc., as did the outsourcer. And they were inevitably miles apart in both the brands and structure not to mention differences in versions, release levels, and the management system itself. Even with protocols that should be a “standard,” each implementation still had peculiarities based upon vendor and release level. NFS is a great example. Sun's implementation of NFS on Solaris was different than NetApp's implementation on their filers, leading to expensive testing and porting cycles in order to attain the benefits of “outsourcing.”

I believe a by-product of the “cloud” craze will be new technology, protocols, and standards that are designed from the beginning to enable applications to run across multiple networks with a much simpler management approach. A great example is server virtualization coupled with application delivery standards like OVF. With X86 as a de facto machine standard and virtualization as implemented by hypervisor technology like Xen and VMware, applications can be “now sourced” to providers like Amazon and RackSpace with very little cost associated with the “migration.”

Some will argue that we are simply trading one protocol trap for another. For example, Amazon does not implement Xen with OVF in mind as an application delivery standard. Similarly, VMware has special kernel requirements for the virtual machines defined within OVF in order to validate your support agreement. Amazon's S3 cloud storage protocol is different than a similar REST protocol associated with EMC's new Atmos cloud storage platform. And the list of “exceptions” goes on and on.

Even in the face of these obvious market splinters, I still believe we are heading to a better place. I am optimistic because all of these protocols and emerging standards are sufficiently abstracted from the hardware that translations can be done on the fly – as with translations between Amazon's S3 and EMC's Atmos. Or the penalty of non-conformance is so trivial it can be ignored – as with VMware's kernel support requirements which do not impact actual run-time performance.
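One way to see why those translations are tractable: if applications talk to a thin object-store interface, each provider's protocol becomes an adapter behind it. The sketch below deliberately leaves the adapter bodies as stubs, since the real S3 and Atmos calls differ; only the shape of the abstraction is the point:

```python
# A thin object-store abstraction with per-provider adapters. The adapter
# internals are intentionally stubbed out; they would hold the provider-
# specific REST calls that differ between, say, S3 and Atmos.
from abc import ABC, abstractmethod
from typing import Iterable

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Adapter(ObjectStore):
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("translate to the S3 protocol here")
    def get(self, key: str) -> bytes:
        raise NotImplementedError("translate to the S3 protocol here")

class AtmosAdapter(ObjectStore):
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("translate to the Atmos protocol here")
    def get(self, key: str) -> bytes:
        raise NotImplementedError("translate to the Atmos protocol here")

def replicate(src: ObjectStore, dst: ObjectStore, keys: Iterable[str]) -> None:
    """Copy objects between providers; the application never sees which is which."""
    for key in keys:
        dst.put(key, src.get(key))
```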

The other requirement for “now sourcing” that I mentioned above was a fault tolerant, lightweight approach to application management. The system administrators need to be able to deliver and manage the applications without getting into the low level guts of the systems themselves. As with any “new” approach that requires even the slightest amount of “change” or re-factoring, this requirement to re-think the packaging and management of the applications will initially be an excuse for the IT staff to “do nothing.” In the face of so many competing priorities, even subtle application packaging and management changes become the last item on the ever lengthening IT “to do” list – even when the longer term savings are significant. But, since “now sourcing” is clearly more palatable to IT than “outsourcing” (and more effective too), perhaps there is some hope that these new cloud architectures will find a home inside the IT department sooner rather than later.

Monday, March 30, 2009

Maintenance Woes Handicap Microsoft's Azure

Last week, Microsoft product manager Steve Martin indicated in his blog that Azure would be a Microsoft-only service – hosted only by Microsoft in Microsoft data-centers. It seems that maintenance challenges, or the ability to “innovate freely,” are limiting the availability of Azure to a pure service play. To be fair, Microsoft has indicated that key technology developed to support Azure will find its way into Microsoft Windows server products, which, of course, can be purchased and deployed by anyone. But I have serious doubts about Microsoft as a service provider because I do not think they have the operational mentality to succeed on the margins that are available to service providers – they are not cheap and efficient like Amazon. Microsoft is demonstrating that the ties to their past technology architecture are already handicapping the potential success of their future cloud endeavors.

Anyone that pays attention to my musings in this blog knows that I am a huge fan of Amazon's web services – particularly the elastic compute cloud (EC2). And I believe Amazon is going to be very successful in this space because they have led the way with an architecture that effectively de-couples the definition of the application from the definition of the compute resources. Plus, they have an operational mentality – they are cheap and efficient. Aside from Amazon, I believe the really big winners are going to be the technology companies that enable existing service providers to respond to Amazon with a technology model that loosely couples applications to the infrastructure - eliminating the complex maintenance challenges that have historically precluded elasticity and enabling seamless application delivery across public and private clouds. Here are some of the potential players and my handicap for their success.

VMware – I believe virtualization technology is going to play a critical role in cloud due to its ability to de-couple the applications from the host computers – enabling elegant maintenance and true elasticity. VMware is certainly the leader in this space, and there is little doubt that they have relevant technology. The key question is going to be their willingness to respond to the “cheapness” and “openness” requirement of the service providers. VMware has historically sold technology to enterprises in a typical perpetual software licensing model. Cloud will require much more sharing of risk/reward in the sales model and a willingness to be flexible in the technology requirements to co-innovate with the service providers and ecosystem partners. VMware is not known for its partnering mentality.

Citrix – Xen is currently the incumbent technology for virtualization at Amazon, and Citrix is clearly aligned with Xen due to their $500M purchase of XenSource. However, cloud to date seems to be all about Linux and related open source infrastructure due to the flexibility of the licensing model, and Citrix has historically been very tightly aligned to Microsoft and its associated technology and licensing model. I think it is an open question whether Citrix is willing to embrace a radically different business model to monetize Xen as part of a service provider cloud ecosystem.

Cisco – the announcement earlier this month about the Cisco unified computing system is all about a new brand of infrastructure that is unencumbered by the legacy approach which tightly couples applications to the compute host resources. I believe Cisco “gets it” much more so than the current mega-vendors HP and IBM – each of which is going to be hampered with the legacy approaches for application delivery and management. But, as is always the case with new ventures like Cisco's unified computing system, the sky is bluest when there is nothing yet to lose. We'll see if they can actually execute in the market with a new model.

Microsoft – While Microsoft certainly has many strengths, I do not think any of them lend themselves particularly well to cloud success. Of all vendors, they have the most to lose, and the least to gain, with this new approach. Their sales and distribution model discourages elasticity, they are way behind the leaders in the market for virtualization technology, and they do not have a mindset to be operationally efficient as a service provider. I believe their biggest opportunity in cloud is to provide a new desktop approach that seamlessly integrates all of the services a user would expect to be effective and productive in a networked world. If Apple doesn't continue to run away with that opportunity, maybe Microsoft will make some hay in this space.

Google – I have criticized Google again and again for their AppEngine model that essentially requires application providers to recode their applications to fit Google's infrastructure. I actually do not think Google wants to be a big cloud player for “infrastructure” services in the manner that Amazon is currently defining the market. Instead I think Google is going to be a next generation application platform provider more in the mold of Apple with its consumer products, but geared more to the needs of the business user. As a giant SaaS platform for applications like email, office productivity, VOIP, calendaring, geographical services, etc., I think Google has a great play with their current approach. Just don't expect them to compete with Amazon in the infrastructure space.

Salesforce.com – I think Salesforce.com is completely out in left field for everything other than applications that need to reside close to their CRM application. I personally feel that Marc should just go ahead and take on SAP and Oracle in the business applications space and forget about trying to morph their highly proprietary application platform into a general purpose application delivery platform. They have a terrific sales team, a terrific customer base, and the competing applications from Oracle, SAP, and others are just awful in terms of the grief customers endure to maintain and support them.

EMC – Some folks may be a bit puzzled by the inclusion of EMC in this list of potential cloud players, but remember that EMC owns nearly 90% of VMware which means that cloud is definitely top of mind at EMC. And they actually have been thinking about the data problem associated with cloud. Namely, how do I ensure my data is available to my applications when they become portable across multiple compute service providers? Their Atmos product is effectively Amazon S3 in a box with some interesting features around policy management and data federation. As an equity play, EMC may be a cheap way to own VMW, and they might make some hay themselves if they execute in the cloud market ahead of the other storage providers.

The rest of the A-list technology providers – HP, IBM, Oracle, Red Hat, Sun – either do not get cloud, are actively bashing it (witness Oracle), have no assets that are currently relevant, or are so far behind in investing that, short of a series of acquisitions, I believe they will remain effectively irrelevant. Of course there will be lots of lip service and cute marketing tricks such as IBM's recent open cloud manifesto, but until there is some substance, I stand by the above handicaps.

Monday, March 16, 2009

Killing the OS Octopus

The inspiration for this blog post title comes from a Paul Maritz (CEO of VMware) quote, during his presentation to financial analysts last week. Paul used the phrase "severing the tentacles of complexity" multiple times when referring to the new level of business flexibility that is possible when applications are liberated from their physical host by encapsulating them inside of a virtual machine with “just enough operating system (or JeOS).” They can be provisioned much more quickly because there is no need to provision physical assets. They can be moved from datacenter to datacenter more quickly because there is no onerous installation and validation process required. Indeed, virtualization enables cloud computing because the applications are no longer defined by the physical computers upon which they run. But, until VMware truly embraces a JeOS approach with their operating system support matrix, they are simply recommending “isolating” the tentacles of complexity. And the result will be a perilous and expensive condition often referred to as VM sprawl.

So what is the difference between “isolating” the tentacles of complexity and “severing” them? Isolating the tentacles means shoving the previous definition of your application running on a physical server into a virtual machine box. When you put a virtual machine box around the octopus, it can no longer create mischief with other application octopi running on the same physical host. Its tentacles are “isolated,” and utilization on the physical host can be much higher. This approach is valuable, and it has catapulted VMware into the spotlight as one of the hottest technology companies on the planet.

However, the octopus is still alive and well inside the box, and system administrators must continue to feed that hungry animal in exactly the same way they did when it was living on a physical server host. The level of maintenance has not been reduced. The level of security vulnerability has not been reduced. Although isolated, it is still a resource hog because those crazy tentacles demand CPU, and memory, and disk to flail and flap as they do. This condition of ever expanding system administration grief associated with the frictionless deployment of virtualized applications whose tentacles of complexity have simply been isolated and not severed is known as VM sprawl. And it will be a nightmare of system administration expense for those that embrace it.

In order to avoid the nightmare of VM sprawl, the tentacles of the complexity octopus must actually be severed, not simply isolated. Application developers and system administrators alike must re-think the category of the operating system in the context of a virtualized datacenter. Since the operating system is no longer the conduit for managing the hardware, it should become a simple shared library for system services required by the application. Two great examples of this approach are rPath and BEA's (now Oracle) Liquid VM technology. With both of these platforms, the operating system is specified in a manner that explicitly supports the needs of the application – without any extra bloat associated with the typical general purpose OS approach. As a result, the OS in both of these cases is 10X or more smaller than the smallest installation option offered by a general purpose OS. In theory, this should lead to a 10X reduction in the scope and scale of administration activities. Severing the tentacles of complexity by re-thinking the OS eliminates the perils of VM sprawl.

But VMware does not currently support this approach. They only support the legacy vendors of general purpose OS technology. Sure, these new approaches have terrific performance and value, and VMware is happy to have them contribute to their virtual appliance market, but their support statement pretty clearly favors “isolation” of complexity over true elimination of complexity. But the winds of change are steadily and surely blowing in favor of this new approach in the market. Red Hat, for example, just announced that they are going to market with a bare metal hypervisor that is directly competitive with the VMware approach, in lieu of their historical product architecture - where virtualization was simply a feature of the general purpose operating system. And Paul Maritz was pretty clear in his presentation that “severing the tentacles of complexity” and a “just enough operating system” approach are important to VMware. Perhaps we are drifting toward the precipice of an all-out war for the definition of the future datacenter operating system. I said it back in 2006, and I'll say it again today -- let's fry up that OS octopus, polvo frito, and serve it with some spicy mango chutney and cold beer.

Wednesday, March 04, 2009

Will Agile drive a Hybrid Cloud Approach?

Some workloads are perfectly suited for cloud deployment. Generally, these are workloads with transient or fluctuating demand, relatively static data (lots of reads, few writes), and no regulated data compliance issues (e.g., patient healthcare records). Testing fits this description perfectly – especially with the growing popularity of Agile methods. With its focus on rapid iteration and feedback to achieve faster innovation and lower costs, Agile demands a flexible and low cost approach for testing cycles. I have no doubt that developers will begin using variable-cost compute cycles from services like Amazon EC2 because of their flexibility and pay-for-what-you-use pricing. But I am also willing to bet that testing with Amazon will put further pressure on the IT organization to respond with a similar, self-service IT capability. I think a hybrid-cloud architecture with complementary internal and external capability will emerge as a productive response to the demand for true end-to-end agility.
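
As a rough sketch of what this looks like from the developer's chair, assuming the modern boto3 SDK (which post-dates this post) and a placeholder machine image ID, the entire "test lab" can be a short script: rent an instance, run the suite against it, then give the capacity back so the meter stops.

    import boto3

    # Placeholder values for illustration; substitute a real region and AMI.
    REGION = "us-east-1"
    TEST_AMI = "ami-0123456789abcdef0"  # hypothetical prebaked image with the test harness

    ec2 = boto3.client("ec2", region_name=REGION)

    # Rent exactly as much capacity as this test run needs.
    run = ec2.run_instances(ImageId=TEST_AMI, InstanceType="t3.micro",
                            MinCount=1, MaxCount=1)
    instance_id = run["Instances"][0]["InstanceId"]

    # Wait until the instance is up before pointing the test suite at it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print(f"Test instance {instance_id} is running; execute the suite against it here.")

    # When the run is finished, hand the capacity back.
    ec2.terminate_instances(InstanceIds=[instance_id])

The self-service IT capability I am arguing for is essentially the internal equivalent of this script.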

Some time ago, I authored a blog post titled “When Agile Becomes Fragile” that outlined the challenge of implementing Agile development methods while attempting to preserve the legacy IT approach. What good is rapid development when the process for promoting an application to production takes several months to absorb even a few weeks of new development? If developers take their Agile methods to the cloud for testing (which they will), it becomes a slippery slope that ultimately leads to using the cloud for production. Rather than the typical, dysfunctional IT response of “don't do that – it's against policy,” I think the IT organization should instead consider implementing production capacity that mimics and complements cloud capability such as that offered by Amazon.

Along with all of the cool technology that is emerging to support Agile methods, new technology and standards are also emerging to support the notion of a hybrid-cloud. The new Atmos storage technology from EMC and the OVF standard for virtualizing applications are two good examples of hybrid-cloud technology. Atmos gives you the ability to describe your data in a manner that automatically promotes/replicates it to the cloud if it has been approved for cloud storage/availability. Whether applications run on an external cloud or on your “internal cloud,” the supporting data will be available. Similarly, OVF has the potential to enable virtualized applications to run effectively externally on the cloud or internally – without significant manual (and error prone) intervention by system administrators (or those developers that play a sysadmin on a TV show). In both cases, the goal is to enable greater flexibility for applications to run both internally and on the cloud – depending on the profile of the application and the availability of resources.
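
To see why the OVF piece matters, here is a simplified sketch, assuming a descriptor file named appliance.ovf, that reads an OVF envelope and lists the virtual systems and packaged files it describes. Because that metadata travels with the appliance, the same package can be deployed internally or handed to an external cloud without a sysadmin re-describing the machine by hand.

    import xml.etree.ElementTree as ET

    def local(tag):
        """Strip the XML namespace, leaving just the local element or attribute name."""
        return tag.rsplit("}", 1)[-1]

    # Path is a placeholder; point it at any OVF descriptor.
    root = ET.parse("appliance.ovf").getroot()

    for elem in root.iter():
        name = local(elem.tag)
        if name == "VirtualSystem":
            ids = [v for k, v in elem.attrib.items() if local(k) == "id"]
            print("virtual system:", ids[0] if ids else "(unnamed)")
        elif name == "File":
            hrefs = [v for k, v in elem.attrib.items() if local(k) == "href"]
            print("  packaged file:", hrefs[0] if hrefs else "(no href)")

Real descriptors carry much more (disks, networks, resource allocations), but even this much shows that the application, not the host, owns its own definition.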

Agile is yet another important shift that is going to pressure IT to evolve, and rPath is sponsoring a webinar series that dives into this topic in some detail. Whether you are a developer, an architect, or a system administrator, these webinars should be interesting to you. For the IT staff, the series may offer a glimpse at an approach for IT evolution that is helpful. In the face of Agile and cloud pressure, the alternative to evolution – extinction – is much less appealing.

Tuesday, February 24, 2009

Red Hat Goes Streaking - Dumps Xen

Yesterday Red Hat announced their revised virtualization strategy. Most interesting to me was Red Hat's declaration that the hypervisor should indeed lie "naked" on the metal. Red Hat also announced end-of-life for Xen support and some stuff regarding desktop virtualization and protocols that I don't pretend to understand. While it was expected that Red Hat would end-of-life support for Xen after their acquisition of the KVM technology from Qumranet, the fact that Red Hat went streaking into the market with a bare metal approach to the hypervisor was a pretty significant strategy reversal.

Until yesterday, Red Hat had always adopted the Microsoft line on the hypervisor - it is simply a feature of a general purpose operating system. Red Hat historically claimed that the hypervisor should not lie naked on the bare metal, but instead it should be wrapped up inside the general purpose OS - a little extra bloating that never hurt anybody. It looks like some combination of market forces - likely VMware's financial success and Amazon's adoption of Xen for their Elastic Compute Cloud - has forced Red Hat to consider a new approach. Now Microsoft stands alone in their contention that a hypervisor is just a new general purpose OS feature, and the rest of the market can move on to the market share land grab for the large-scale Linux datacenters. It should be an interesting race, because all three players - Xen (Citrix), KVM (Red Hat), and VI (VMware) - are coming from different positions of strength - and weakness.

In almost every case I have observed, Xen is the incumbent virtualization technology among the Linux datacenter consumers that have virtualized any production workloads. However, not many have actually virtualized because the historically compelling case for virtualization - server consolidation - falls on deaf ears among the Linux crowd. With Linux, as opposed to Windows, it is possible to run multiple application workloads on a single server without significant instability. Server utilization can be quite high without virtualization - hence no requirement for Linux server consolidation. But the new benefits of virtualization - flexibility, security, and elasticity - apply equally well, if not especially well, to Linux workloads. If application workloads are separated into unique, small-footprint virtual machines (virtual appliances, if you like) that run atop a bare naked hypervisor, then:

- flexibility is better because one application can be managed/administered without interfering with another workload running on the same host

- security is better because vulnerable services like DNS are isolated and easier to maintain/secure due to a smaller footprint

- elasticity is better because application workloads can be scaled quickly when demand increases and also retired quickly when demand recedes (cloud, anyone?)
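
To put the elasticity point in concrete terms, here is a minimal sketch using the libvirt Python bindings against a local KVM host. The appliance name, memory size, and disk path are placeholders, and a real deployment would script this across many hosts rather than one.

    import libvirt

    # A deliberately small appliance definition; names, sizes, and paths are placeholders.
    APPLIANCE_XML = """
    <domain type='kvm'>
      <name>dns-appliance</name>
      <memory unit='MiB'>256</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/dns-appliance.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor

    dom = conn.defineXML(APPLIANCE_XML)    # register the appliance with the host
    dom.create()                           # demand arrives: boot another copy
    print("appliance running:", dom.name())

    dom.destroy()                          # demand recedes: power it off
    dom.undefine()                         # and reclaim the definition entirely
    conn.close()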

Because of these new business benefits, Xen is beginning to take hold in the Linux datacenters. Citrix, however, has not historically been a strong brand among the Linux-savvy crowd, so it is unclear if they have the stomach for pursuing a new market segment. VMware is the 800-pound gorilla in hypervisors generally, but they too have shown very little Linux savvy (their management console only runs on Windows) - probably because 90% of their revenues come from virtualizing Windows workloads. Then there is Red Hat, the 800-pound Linux gorilla who has been in denial about the importance of a bare metal approach for the hypervisor - until now.

Based on this set of circumstances, I would say that the virtualization opportunity for the Linux datacenter market is a wide open race. Now that Red Hat has decided to run the race naked, it should be more fun to watch.

Sunday, January 25, 2009

Is the Cloud Game Already Over?

This is the thought that crossed my mind a few weeks back as I pondered Amazon's beta release of the Amazon Web Services Console. The reason the game might be over is that Amazon is apparently so far ahead of the competition that they can now divert their engineering attention to the management console instead of core platform functionality. To me, this signals a competitive lead so vast that, absent quick and significant re-direction of resources and potential strategic acquisitions of capability, Amazon's competitors are doomed in the cloud space.

I saw this dynamic once before during my time at Red Hat. Red Hat had such a lead in the market, with almost total mind share for the platform (Red Hat Linux, now Red Hat Enterprise Linux), that the company could launch a strategic management technology, Red Hat Network, while others were grasping for relevance on the core platform. Note that in the case of Red Hat, no one else has come close to their lead in the Linux market space. And no one else has really gotten around to building out the management technology that was offered by Red Hat Network 8 years ago.

Consider these other challenges facing Amazon's competitors:

1. Lack of machine image definitions - Amazon published the AMI spec for EC2 about 2 years ago. To my knowledge, all of the competitors that use virtualization (Amazon uses Xen) still require customers to boot a limited set of approved "templates," which must then be configured manually and which subsequently lose their state when retired.

2. Proprietary versus open - when you require the customer to program in a specific language environment that is unique to a particular "cloud" platform (a la Google with Python and Salesforce with Apex), you dramatically limit your market to virtual irrelevance out of the gate. Amazon doesn't care, so long as you can build to an x86 virtual machine.

3. Elastic billing model - until you have a platform for billing based upon the on-demand usage of resources, you don't have a cloud with the key value proposition of elasticity. You simply have hosting. To my knowledge, most competitors are still on a monthly payment requirement. Hourly is still a long way away for these folks.
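
A back-of-the-envelope sketch shows why hourly billing matters for bursty work. The rates below are made-up round numbers for illustration only, not anyone's actual price list.

    # Hypothetical rates, chosen only to illustrate the elasticity argument.
    HOURLY_RATE = 0.10        # dollars per instance-hour, on demand
    MONTHLY_HOSTING = 150.00  # dollars per month for a comparable dedicated server

    # A bursty test workload: 20 instances for 6 hours, twice a week, ~4 weeks a month.
    instance_hours = 20 * 6 * 2 * 4
    elastic_cost = instance_hours * HOURLY_RATE
    hosting_cost = 20 * MONTHLY_HOSTING  # 20 always-on servers sized for the peak

    print(f"elastic, pay-per-hour cost:   ${elastic_cost:,.2f} per month")
    print(f"monthly hosting at peak size: ${hosting_cost:,.2f} per month")

Monthly billing forces you to pay for the peak all month long; hourly billing lets you pay for the burst and nothing more. That is the difference between hosting and a cloud.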

Perhaps I am wrong, but I bet I am not. If I am right, the day will come in the not too distant future (after the equity markets recover) when Amazon spins out AWS as a tracking stock (similar to the EMC strategy with VMware) with a monster valuation (keeping this asset tied to an Amazon revenue multiple makes no sense), and the valuations on the technology assets that help others respond to Amazon go nutty (witness the XenSource valuation on the day VMware went public). I say "Go, Amazon, Go!"

Tuesday, December 23, 2008

Cloud in Plain English

I must take my hat off to Jake Sorofman, who runs marketing for rPath. Jake has done an incredible job distilling a bunch of complex stuff into a consumable and entertaining video. Do yourself a favor, and check out his Cloud Computing in Plain English video. Al Gore never looked so good.

And Happy Holidays!

Friday, December 12, 2008

Is JeOS a Tonic for VM Sprawl?

It seems that everyone is worried about VM sprawl these days. When system capacity is easy to consume because the application is prepackaged as a virtual machine (a virtual appliance in the case of an ISV), the virtual infrastructure capacity is quickly gobbled up by those that have been waiting for IT to get around to provisioning systems for them. No more friction due to virtualization means no more waiting and wanting. It also leads to VM sprawl. But why is VM sprawl bad?

VM sprawl is bad because the scale of the management problem used to be throttled by the capital spending associated with the size of the infrastructure. Now, the scale of the management problem is equal to the true demand for application capacity as represented by the number of application images, or virtual machines, that get deployed. This scale factor associated with application images throws the old yardstick of X system admins per Y server machines out the window. What are we going to do to lower the work profile associated with so many new systems that now need to be managed?

I think at least part of the tonic for VM sprawl is the new acronym coined by Srinivas Krishnamurthi of VMware – JeOS. JeOS stands for Just enough OS and it is pronounced “juice.” In my mind, this liquid pronunciation is apropos given my view of its potential potency as a tonic for VM sprawl. A huge part of the burden of system management is the patching and associated regression testing required to maintain the security and functionality of the general purpose OS. If you can shrink the size of the OS by 90% (which is the typical reduction we have measured for applications built with our rBuilder technology) by eliminating any elements not required by the application, you can eliminate about 90% of the patching burden. More importantly, you eliminate the bigger burden of regression and stability testing that is coupled to the patching process.

With a JeOS approach, the number of virtual machines can theoretically grow to 10X that of the legacy approach without any impact on the cost structure associated with patching and testing OS changes. I suspect a 10X reduction in per-VM patching and testing burden represents a real win for most shops, as the OS patching and testing associated with security and infrastructure performance is a very sizable portion of their management spending. So if you are feeling the pangs associated with VM sprawl, I strongly suggest a healthy slug of JeOS each morning and once again in the afternoon to clear your system of the painful bloating that is brought on by virtualizing the general purpose OS.
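
Here is the back-of-the-envelope math behind that claim, with made-up but plausible round numbers standing in for any particular shop's workload.

    # Illustrative numbers only; substitute your own package counts and effort figures.
    FULL_OS_PACKAGES = 1000  # typical general purpose OS install
    JEOS_PACKAGES = 100      # roughly 90% smaller, per the measurement above
    PATCH_RATE = 0.02        # fraction of packages needing a patch in a given month
    HOURS_PER_PATCH = 0.5    # admin time to apply and regression-test one patch

    def monthly_patch_hours(vm_count, packages):
        """Rough monthly patching and testing effort across a fleet of identical VMs."""
        return vm_count * packages * PATCH_RATE * HOURS_PER_PATCH

    legacy = monthly_patch_hours(vm_count=100, packages=FULL_OS_PACKAGES)
    jeos = monthly_patch_hours(vm_count=1000, packages=JEOS_PACKAGES)  # 10X the VMs

    print(f"100 legacy VMs: {legacy:,.0f} patch/test hours per month")
    print(f"1000 JeOS VMs:  {jeos:,.0f} patch/test hours per month")

With these assumptions the fleet grows by 10X while the patching and testing bill stays flat, which is the whole point of drinking the JeOS.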
