Tuesday, June 30, 2009

IBM Cloud Fizzles

Based on my positive review below of IBM's CloudBurst technology for building internal clouds, I tuned in to the IBM webinar for the external cloud companion product with high hopes. I was hoping to hear about a consistent architecture across the two products that would allow an enterprise to federate workloads seamlessly between the internal and external clouds. Boy, was I disappointed.

It seems the IBM external cloud is nothing more than an IBM-hosted capability for running virtual appliances of IBM Rational Software products. Among my many disappointments:

- no ability to run virtual appliances defined by me. They don't even publish a specification.

- no federation between internal and external. They are not even the same architecture because one runs Xen and the other runs VMware, and they do not provide a conversion utility.

- private beta (alpha maybe?) for invited customers only. Why make an announcement?

- no timetable for general availability of a product. Why make an announcement?

This announcement was a terrible showing by IBM, to say the least. It is obvious to me that the CloudBurst appliance folks (call them “left hand”) and the Smart Business cloud folks (call them “right hand”) were two totally different teams. And the left hand had no idea what the right hand was doing. But each was intent on not being outdone by the other in announcing “something” with cloud in the title. And they were told to “cooperate” by some well-meaning marketing and PR person from corporate. And this mess of a situation is the outcome. Good grief!


Monday, June 29, 2009

IBM CloudBurst Hits the Mark

IBM rolled out a new infrastructure offering called CloudBurst last week. Aimed at development and test workloads, it is essentially a rack of x86 systems pre-integrated with VMware’s virtualization technology along with IBM software technology for provisioning, management, metering, and chargeback. I believe IBM, unlike Verizon, has hit the cloud computing mark with this new offering.

First, IBM is targeting the offering at a perfect application workload for cloud – development and test. The transient nature of development and test workloads means that an elastic computing infrastructure with built-in virtualization and chargeback will be attractive to IT staff currently struggling to be responsive to line-of-business application owners. Those application owners are holding the threat of Amazon EC2 over the heads of the IT staff if they cannot get their act together with frictionless, elastic compute services for their applications. By responding with a development and test infrastructure that enables self-service, elasticity, and pay-as-you-go chargeback, the IT staff will take a step in the right direction to head off the Amazon threat. Moving these dev/test workloads to production with the same infrastructure will be a simple flick of the switch when the line-of-business owners who have become spoiled by CloudBurst for dev/test complain that the production infrastructure is not flexible, responsive, or cost competitive.

Second, IBM embraced virtualization to enable greater self-service and elasticity. While they do not detail the use of VMware’s technology on their website (likely to preserve the ability to switch it out for KVM or Xen at some future date), IBM has clearly taken an architectural hint from Amazon by building virtualization into the CloudBurst platform. Virtualization allows the owners of the application to put the infrastructure to work quickly via virtual appliances, instead of slogging through the tedious process of configuring some standard template from IT (which is never right) to meet the needs of their application – paying infrastructure charges all the while as they fight through incompatibilities, dependency resolution, and policy exception bureaucracy. CloudBurst represents a key shift in the way IT will buy server hardware in the future. Instead of arriving either as bare-metal units or pre-loaded with a bloated general-purpose OS (see the complaint about tedious configuration above), the systems will come pre-configured with virtualization and self-service deployment capability for the application owners - a cloud-computing infrastructure appliance, if you will. Cisco has designs on the same type of capability with its newly announced Unified Computing System.

Third, it appears that IBM is going to announce a companion service to the CloudBurst internal capability tomorrow. From the little information that is available today, I surmise that IBM is likely going to provide a capability through their Rational product to enable application owners to “federate” the deployment of their applications across local and remote CloudBurst infrastructure. With this federated capability across local sites (fixed capital behind the firewall) and remote sites (variable-cost operating expense from infrastructure hosted by IBM), the IBM story on cloud will be nearly complete.

The only real negatives I saw in this announcement were that IBM did not include an option for an object storage array for storing and cataloging the virtual appliances, nor did they include any utilities for taking advantage of existing catalogs of virtual appliances from VMware and Amazon. While it probably hurt IBM’s teeth to include VMware in the offering, perhaps they could have gone just a bit further and included another EMC cloud technology for the object store. Atmos would be a perfect complement to this well-considered IBM cloud offering. And including a simple utility for accessing/converting existing virtual appliances really would not be that difficult. Maybe we’ll see these shortcomings addressed in the next version. All negatives aside, I think IBM made a good first showing with CloudBurst.

Thursday, June 18, 2009

Verizon Misses with Cloud Offering

About two weeks back, I was excited to see a headline about Verizon partnering with Red Hat to offer their customers a “new” cloud computing offering. I was hopeful that the details would reveal a KVM hypervisor-based elastic compute capability coupled with an OVF-based specification for virtual appliances to run on the service. I was also hoping to discover some details on storage as a service, with all of the services accessible through a management capability exposed via RESTful APIs. Boy, was I disappointed. It turns out the new Verizon cloud offering is just the old Verizon hosting offering with a new name.

Why is it so difficult for all of these old-school infrastructure providers to understand the new path being blazed by Amazon AWS? Why can't they offer even a reasonable facsimile of the capability provided by Amazon? Surely it is the threat of Amazon that is leading them to rename the old hosting stuff as the new cloud stuff. Why not go all the way and actually offer something that is competitive? Here is a recipe for any that are interested:

First, provide an x86 hypervisor-based, virtualized compute service that allows the customer to bring their applications with them as pre-packaged, pre-configured virtual machines (virtual appliances). Don't ask them to boot a “standard OS” and then spend hours, days, weeks, or months configuring it to work for them (because what you specified as the “standard” is certainly not what they have tested with their applications, and the whole purpose of elasticity is defeated if you can't quickly put images to work on the network in response to application demand). Better yet, let them boot existing Amazon Machine Images and VMware virtual appliances. Providing this capability is not rocket science. It is just work.
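
For concreteness, here is roughly what “let them boot their own image” looks like when sketched against Amazon EC2 with Python's boto3 client. The AMI ID and instance type are placeholders; the point is simply that the customer supplies a pre-built appliance and gets running capacity in one call, not that this is the only way to do it.

```python
import boto3

# Launch a pre-packaged virtual appliance (an image the customer built and
# tested themselves) rather than a provider-defined "standard OS" template.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: the customer's own appliance image
    InstanceType="m1.small",          # placeholder sizing
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("appliance running as", instance_id)
```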

Second, provide a simple storage service (see Amazon S3 for what it should do) for storing unstructured data as well as the virtual appliances that boot on the virtualized, elastic compute service. If you don't want to take the time to develop your own, follow AT&T's lead and go buy the capability EMC offers as part of the Atmos product line. You don't even have to think; you just write a check and voilà – an Amazon S3-type capability running on your network. What could be easier?
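
The storage side should feel just as simple. A minimal sketch, again using the S3 interface as the yardstick (the bucket, key, and file names are made up):

```python
import boto3

s3 = boto3.client("s3")

# Store an unstructured object (or a virtual appliance image) under a simple key...
with open("webapp-1.0.img", "rb") as image:
    s3.put_object(Bucket="my-appliance-catalog",
                  Key="appliances/webapp-1.0.img",
                  Body=image)

# ...and fetch it back the same way. No SAN zoning, no ticket, no "standard template".
obj = s3.get_object(Bucket="my-appliance-catalog", Key="appliances/webapp-1.0.img")
data = obj["Body"].read()
```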

Third, provide a block storage capability for attaching to virtual appliance images that must store state, such as database images. Most of the hosting companies already provide this type of SAN offering, so this part should be a no-brainer. Just price it with a very fine-grained, variable-cost approach (think megabyte-days, not months).
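
To make “megabyte-days, not months” concrete, here is a toy billing calculation; the rate is invented purely for illustration.

```python
# Fine-grained, variable-cost block storage pricing: bill for exactly what was
# held, for exactly as long as it was held. The rate is purely illustrative.
RATE_PER_MB_DAY = 0.000005  # dollars per megabyte-day (invented number)

def storage_charge(megabytes, days):
    return megabytes * days * RATE_PER_MB_DAY

# A 20 GB database volume attached for 11 days runs about a dollar,
# not a full month of pre-allocated SAN capacity.
print(round(storage_charge(20 * 1024, 11), 2))  # 1.13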

Fourth, provide access to the infrastructure management services via simple, RESTful APIs. You don't have to go overboard with capability at first; just make certain the basics are available in a manner that allows the services to be run effectively over the Internet without any funky protocols that are specific to your network implementation.
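
Something like the sketch below is the level of simplicity I have in mind: plain HTTP verbs against plain resources, callable from anywhere. The endpoint, resource names, and fields are entirely hypothetical; the point is the shape of the API, not the specifics.

```python
import json
import urllib.request

# Hypothetical management endpoint: the URL, resources, and fields are made up.
BASE = "https://cloud.example.com/api/v1"

def call(method, path, body=None, token="MY-API-TOKEN"):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        BASE + path,
        data=data,
        method=method,
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = resp.read()
        return json.loads(payload) if payload else {}

# The basics: list capacity, boot an appliance, tear it down.
instances = call("GET", "/instances")
new_vm = call("POST", "/instances", {"appliance": "my-appliance-001", "size": "small"})
call("DELETE", "/instances/" + new_vm["id"])
```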

Finally, go sign up partners like rPath and RightScale to offer the next level of manageability and support for the virtual machines that will run on the network. These are the final touches that indicate to your customers that you are serious about providing a terrific capability for the complete lifecycle of your cloud computing offering, instead of asking them to be patient with you while you rename your hosting offering as a cloud offering in the hope that it will assuage their bitterness that Amazon-like capability is not available on your network.


Tuesday, June 02, 2009

Federation - The Enterprise Cloud Objective

I know the title of this blog post sounds a bit like a Star Trek episode, but I believe I have a useful point to make with the term federation - even at the risk of sounding a bit corny. I have been watching with interest the lexicon of terms that are emerging to describe the architecture and value of cloud computing. VMware uses the terms Internal/External/Private to describe the distribution of application workloads across multiple networks in a coordinated fashion. Sun uses the terms Private/Public/Hybrid, respectively, to describe the same architecture (although they would argue for Sun-branded components in lieu of VMware/EMC-branded components). I think both of these term sets are flawed and confusing as descriptors for a cloud architecture that distributes workloads across multiple networks. Rather than simply complaining, however, I am willing to offer a solution.

The term Federation describes the end state of an effective cloud architecture perfectly, and I think we should all begin using it when we attempt to sell our respective goods and services to enable the enterprise cloud. Whether part of an Internal/External/Federation combination, a Private/Public/Federation combination, or a Network1/Network2/NetworkN/Federation combination, the common term accurately describes the end objective of cloud computing.

First, some attribution. This term was presented to me as a descriptor for cloud value during my work with the cloud infrastructure group at EMC (the folks that own the Atmos product line) over a year ago. It is now my turn to put some greater structure on this enviable original thought that belongs to EMC.

A good general definition for Federation (independent of an IT context) is a union of member entities that preserves the integrity of the policies of the individual members. Members get the benefits of the union while retaining control over their internal affairs.

In the case of a technology infrastructure federation (aka a cloud architecture), the primary benefit of the union is the lower cost and risk associated with a pool of technology assets that is available across a diversified set of independent networks. In other words, application workloads should be distributed to the network with the lowest risk-adjusted cost of execution – i.e. based upon the risk policies of the enterprise. If the risk of running a mission-critical enterprise workload on Amazon's AWS network is deemed high (for whatever reason, real or perceived), that workload might stay on a proprietary network owned by the enterprise. Likewise, a low-risk workload that is constantly being deferred due to capacity or complexity constraints on the enterprise network might in fact be run for the lowest cost at Amazon or a comparable provider. For a startup, the risk of depleting capital to purchase equipment may dictate that all workloads run on a third-party network that offers a variable cost model for infrastructure (Infrastructure as a Service, IaaS).
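
To make that placement logic concrete, here is a toy model of the risk-adjusted decision; every network name, rate, and risk number below is invented for illustration, and a real enterprise would plug in its own calculus.

```python
# Toy model of the federation placement decision: send each workload to the
# network with the lowest risk-adjusted cost. All numbers are invented.
networks = {
    "internal":   {"cost_per_hour": 0.40, "risk_penalty": 0.0},   # owned capital, trusted
    "amazon_aws": {"cost_per_hour": 0.10, "risk_penalty": 10.0},  # cheap, but riskier for some workloads
}

workloads = [
    {"name": "erp_billing",   "hours": 720, "risk_weight": 1.0},  # mission critical
    {"name": "nightly_build", "hours": 40,  "risk_weight": 0.1},  # low risk, often deferred
]

def risk_adjusted_cost(net, wl):
    # Raw cost, scaled up by how much this workload cares about this network's risk.
    return net["cost_per_hour"] * wl["hours"] * (1 + net["risk_penalty"] * wl["risk_weight"])

for wl in workloads:
    best = min(networks, key=lambda name: risk_adjusted_cost(networks[name], wl))
    print(wl["name"], "->", best)  # erp_billing stays internal, nightly_build goes to AWS
```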

Independent of the proprietary calculus for risk that must be undertaken by every enterprise relative to its unique situation, it should become clear to all that distributing application workloads across multiple networks based upon the cost/capability metrics of those networks will lower the risk-adjusted cost of enterprise computing. The same diversification theories that apply to managing financial portfolio risk also apply to managing the distributed execution of application workloads. The historical challenge to this notion of application workload federation has been the lack of an efficient market – the transaction costs associated with obtaining capacity for any given application on any given network were too high due to complexity and the lack of standards for application packaging (de facto or otherwise). Now, with virtualization as the underpinning of the network market, virtual appliances as the packaging for workloads, high-bandwidth network transit, and webscale APIs for data placement/access, the time is coming for an efficient market where infrastructure capacity is available to applications across multiple networks. And Federation is the perfect word to describe a cloud architecture that lowers the risk-adjusted cost of computing to the enterprise. Enterprise. Federation. Clouds. Star Trek.
