As most nerds know, Skynet gained self-awareness last week and decided, as its first act, to mess with Amazon Web Services, creating havoc for anyone who wanted to check in on the Internet with their current physical location. In hindsight, Skynet figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.
After all the obvious “oh my god, AWS is down” articles followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board) asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’ve used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.
Following are Dave’s thoughts on learning the right lessons from the Amazon outage.
Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion. In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.
Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.
Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.
What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.
Wrong lesson #2: Amazon is not to be trusted.
On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.
Of more concern are rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn't affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, it does affect our trust in their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.
Having put the wrong lessons to rest, here are some positive lessons that put the nature of this outage into perspective, and help you take advantage of IaaS in the right way and at the right time.
Right lesson #1: Amazon is not infallible, and the cloud is not magic.
This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all. Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.
Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources, it also increases the resiliency of the application.
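To make the architectural point concrete, here is a minimal Python sketch – the names and structure are invented for illustration, not anyone's actual tooling – of the stance a cloud-ready application takes: servers are interchangeable, and a reconcile loop replaces failed ones automatically instead of assuming any single machine stays up.

```python
class Server:
    """Stand-in for a cloud instance; real ones fail without warning."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

def reconcile(servers, desired_count, provision):
    """Return a pool of desired_count healthy servers.

    Failed servers are dropped and replacements are provisioned.
    The point: failure handling is automation, not a 3am pager call.
    """
    alive = [s for s in servers if s.healthy]
    while len(alive) < desired_count:
        alive.append(provision())
    return alive

# Usage: two of three servers die "en masse"; the pool self-heals.
next_id = [3]
def provision():
    next_id[0] += 1
    return Server(f"web-{next_id[0]}")

pool = [Server("web-1"), Server("web-2"), Server("web-3")]
pool[0].healthy = False
pool[1].healthy = False
pool = reconcile(pool, 3, provision)
print([s.name for s in pool])  # ['web-3', 'web-4', 'web-5']
```

The design choice matters more than the code: if your deployment can only recover by a human noticing and clicking, you have assumed your operators are infallible.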
Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.
This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.
We built this capability, though, because we believed it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.
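The risk-mitigation argument can be sketched in a few lines of Python (the provider names and interface here are invented for illustration, not Standing Cloud's API): if an application is written against a provider abstraction rather than one vendor's features, failing over during an outage is mechanical.

```python
class ProviderDown(Exception):
    pass

class Amazonish:
    """Illustrative provider that happens to be having a very bad day."""
    def launch(self, image):
        raise ProviderDown("primary region unavailable")

class Rackspaceish:
    """Illustrative provider that is up."""
    def launch(self, image):
        return f"{image} running on rackspaceish"

def deploy_with_failover(providers, image):
    """Try each provider in order; portability turns an outage into a retry."""
    for provider in providers:
        try:
            return provider.launch(image)
        except ProviderDown:
            continue
    raise ProviderDown("no provider available")

print(deploy_with_failover([Amazonish(), Rackspaceish()], "my-app"))
# my-app running on rackspaceish
```

Nothing here is clever; the leverage is entirely in keeping vendor-specific features out of the application so the provider list is a configuration detail.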
Given the inevitability of outages, how can one avoid downtime? My answer is that an application should be able to run on more than one, or many, different public cloud services. This answer has several implications:
Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.
Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center, using traditional methods. Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best. Oh, they might burn an AMI and save it on S3 in case the server dies (which only works as long as nothing changes). If they need to scale up, they manually add another server and manually add it to the load balancer pool.
This type of usage treats IaaS as mostly a financing alternative. It's a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, and at another level they are still stuck with Amazon because their server configurations are implicit.
Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps. Although this can be done with scripts, it is even better to use a tool like Chef, Puppet, or CFEngine to take advantage of abstractions in the process, or to use RightScale, Kaavo, CA AppLogic, or similar tools to not only automate but also organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
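The core idea behind tools like Chef and Puppet can be shown in a toy sketch (this is illustrative Python, not any tool's real syntax, and the repo URL is a placeholder): the deployment is declarative data, and a program expands it into ordered steps, so the same configuration can be replayed on any provider after an outage instead of living implicitly in someone's console history.

```python
# Declarative description of the deployment – the single source of truth.
DEPLOYMENT = {
    "packages": ["nginx", "mysql-server"],
    "services": ["nginx", "mysql"],
    "app": {"repo": "https://example.com/app.git", "port": 8080},
}

def plan(config):
    """Expand the declarative config into an ordered list of steps.

    A real tool would execute these idempotently; printing them is
    enough to show that no step is manual or implicit.
    """
    steps = [f"install {pkg}" for pkg in config["packages"]]
    steps += [f"enable {svc}" for svc in config["services"]]
    app = config["app"]
    steps.append(f"deploy {app['repo']} on port {app['port']}")
    return steps

for step in plan(DEPLOYMENT):
    print(step)
```

Because the configuration is explicit data rather than a memory of console clicks, "move to another provider" becomes "run the plan somewhere else."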
Automating an application deployment in the cloud is a best practice with numerous benefits: deployments become repeatable and testable, server configurations are explicit rather than implicit, and the application can be rebuilt quickly elsewhere when a provider has an outage.
In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.
In March I wrote about Standing Cloud's public launch of a free service to test drive open source applications. As reported today in TechCrunch, Standing Cloud has now launched its Business Edition, a service that enables users to also host applications on any of four cloud providers. As with the free edition, they have made it remarkably easy to get started: you register (including providing payment information), select your cloud provider and application, and go. Currently, the easiest setup is with Rackspace, where Standing Cloud can automatically assign a billing account to you; this easy setup is coming soon for GoGrid and Amazon EC2.
In the Business Edition launch, the Standing Cloud service provides fast and simple installation, automated regular and manual backups, and simple application monitoring. Importantly, the backups can be restored to any cloud service, not just the one where you started. You can also easily reboot the application if it seems troubled, upload text files (including templates or other customizations), and, if you are so inclined, access the server command line through a terminal window that operates within your browser. Through September 30, the only fees Standing Cloud is charging are for server time and bandwidth usage; after that, their service will cost an additional $19.95/month – by which time it will include a number of additional features.
There are now more than fifty open source applications available in Standing Cloud – so they’ve organized them in a searchable list format. There is also no technical limitation requiring that the applications be open source, so if you are looking to promote or enable users to host your commercial application, send me an email and I’ll connect you with the right person at Standing Cloud.
When I arrived at my house in Homer, I hadn't been here for two years. It took me one phone call to get Internet (DSL) working again (ACS had to reset my password) and within about 15 minutes everything was working just fine. Except I couldn't print. I have an old HP 3330 with a JetDirect USB-to-network print server. The printer doesn't have many miles on it – I've only used it for a total of about three cumulative months. It took me about thirty minutes to fight through all the nonsense on the Internet to figure out how to set it up as a print server for my new Mac (which I'd never used with it before) and for Amy's Windows 7 computer. It turns out that nothing really works except hard-wiring its IP address into our printer setup. Um – yeah – that's obvious – especially for someone who doesn't know what an IP address is.
We had a Pogoplug board meeting at our office two weeks ago. For each company we invest in, we try to have a board meeting with all four Foundry Group partners attending at least once a year. This was our “group Pogoplug board meeting” (although I missed the best part – the dinner the night before – because I was participating in the Governor's VC Roundtable at the Governor's Mansion).
At the board meeting, Daniel, Jeb, Brad, and Smitty (actually, mostly Jeb) showed off the new Pogoplug “print anywhere” function. They printed a document from an iPad to an Epson printer. Boom – paper came out of the printer. The Pogoplug had two things connected to it – the Epson printer (via a USB port) and our network (via a 10BaseT connection). The iPad was connected to the AT&T 3G network. Did I mention that they printed a document via the iPad? When was the last time you saw someone do that? Yeah, it could have been a web page via an iPhone also.
My immediate thought was how cool it would be to start printing porn on my partner Ryan’s printer connected via Pogoplug since he’d shared his Pogoplug-connected hard drive with me and would probably share his printer also. But then I undermined my evil plot by mentioning the idea. Oops.
If you already have a Pogoplug, this will be a simple free software update coming later this summer. If you don’t have a Pogoplug, seriously, why not? The thing is magic.
Not long after I posted about Dave Jilk’s experience with the Pogoplug, he started using the phrase “Pogoplug Simple” to describe one of the goals of Standing Cloud. The idea is that technology products should be so easy to set up and use that the experience is vaguely unsatisfying – you feel like you didn’t do anything. Standing Cloud – a company we provided seed funding for last year – is launching publicly this week with its Trial Edition and I think they’ve managed to make cloud application management “Pogoplug Simple.”
The Trial Edition makes it easy to test drive any of about thirty different open source applications on any of several cloud server providers. Register with your email address, log in, pick an application, click Install, and in about 30 seconds you'll be up and running with a fully functioning application accessible on the web. You don't need an Amazon EC2 or Rackspace account, as you would with “appliance” providers. You don't need to learn about “instances” and “images” and security groups. You don't need to know how to install and configure a web server or MySQL, or download, install, and configure application code. You don't even need to configure the application itself – Standing Cloud plugs in default values for you. And it's free.
Like the Pogoplug, the Standing Cloud Trial Edition doesn’t do anything that a motivated IT professional couldn’t do another way. It’s just a lot faster and easier. But for someone who is *not* an IT professional, it removes some rather high barriers to both open source applications and cloud computing.
The Trial Edition is just the beginning for Standing Cloud. Soon you will be able to host, manage, and scale applications with the same emphasis on simplicity. Give it a try and give me feedback.