Brad Feld

Category: Investments

I think AngelList is awesome and I’m a huge fan of what they’ve done. Nivi wrote a great post celebrating their 1.5 year anniversary today: 1.5 Years Of AngelList: 8000 Intros, 400 Investments And That’s Just The Data We Can Tell You About.

One of the powerful constructs of AngelList is social proof. It’s become an important part of the seed / early stage venture process as very early stage investors pile into companies that their friends, or people they respect, are investing in. AngelList does a nice job of exposing and promoting social proof during the fundraising process in a way that is both legal and non-offensive.

However, as one would expect, entrepreneurs focus on taking advantage of every opportunity they have. As a result, I’ve been getting daily requests to either “follow” a company or be listed as an “endorser” for it. These requests fall into two categories: (a) people I know and am trying to help and (b) random people.

I’ve decided to say no to both categories unless I’m investing in the company. And I encourage everyone involved in AngelList to do the same. I think the concept of social proof is super important for something like AngelList to have sustainable long term value. I don’t ever want to be on the end of a conversation where someone who invested in a company on AngelList runs into me and says “things are sucking at company X – why did you ever endorse them” and I have to reply “I have no idea who you are talking about.”

As with many things in life, the key lesson is to do your research. It’ll be interesting to see how AngelList copes with things like this over time – I expect Nivi and Naval will stay one step ahead of the problem. However, if you are an investor or entrepreneur – help by having some discipline on your end – it makes things better for everyone.


I received an email from an entrepreneur today asking me about something that made my stomach turn. It’s a first-time entrepreneur who is raising a modest (< $750k) seed round. There are two founders and they’ve been talking to a VC they met several months ago. Recently, the VC told them he was leaving his firm and wanted to help them out. This was obviously appealing until he dropped the bomb that prompted their question to me.

This soon to be ex-VC said something to the effect of “I can easily raise you money with a couple of phone calls, but I want to be a co-founder of the company and have an equal share of the business.”

In my email exchange with the entrepreneur, I asked two questions. The first was “is he going to be full time with the company” and the other was “do you want him as a third full time partner.” The answer was no and no. More specifically, the VC was positioning himself as “the founder that would help raise the money.”

I dug a little deeper to find out who the person was in case it was just a random dude looking for gig flow. David Cohen, the CEO of TechStars, has written extensively about this in our book Do More Faster – for example, see the chapter Beware of Angel Investors Who Aren’t. I was shocked when I saw the name of the person and the firm he has been with (and is leaving) – it’s someone who has been in the VC business for a while and should know better.

I find this kind of behavior disgusting. If the person was offering to put in $25k – $100k in the round and then asking for an additional 1% or 2% as an “active advisor” (beyond whatever the investment bought) to help out with the company, I’d still be skeptical of the equity ask at this stage and encourage the founders to (a) vest it over time and (b) make sure there was a tangible commitment associated with it that was different from other investors. Instead, based on the facts I was given, my feedback was to run far away, fast.

Entrepreneurs – beware. This is the kind of behavior that gives investors a bad name. Unfortunately, my impression of this particular person is that he’s not a constructive early stage investor but rather someone who is trying to prey on naive entrepreneurs. Whenever the markets heat up, this kind of thing starts happening. Just be careful out there.


I have an unbelievable nerd crush on my friends at Orbotix. They are marching hard towards a late 2011 release of their first product, Sphero. In addition to the physical robotic ball, they have a bunch of iOS and Android games that are taking shape. One of them is golf. Take a look at “night golf with a Sphero” (hint – it’s awesome).

Sphero Night Golf from GoSphero on Vimeo.

Don’t wait – reserve your Sphero now from the first batch – given the pre-orders already in hand, I’m 100% sure we’ll sell out. Just to put my money where my mouth is, I greenlighted the next order of long lead time parts at the board meeting on Monday.

Sphero Golf


I recently wrote about how well things are going at Gnip. Here we are just a few weeks later and my friends at Gnip continue to generate goodness in several different directions.

Today Gnip announced it has partnered with Twitter and the U.S. Library of Congress to manage the receipt of all historical data from Twitter and facilitate its delivery to the Library of Congress. This news builds on a release from the Library of Congress back in April, in which the LoC announced that it will digitally archive every public tweet from Twitter’s inception and will continue to archive new tweets going forward. The LoC has hinted that the archive will have an emphasis on “scholarly and research” endeavors.

Delivering a bunch of 140-character tweets might not seem like a big deal, but when you consider that Twitter is currently pumping out data at a rate of 35 Mbps (and growing), with a max recorded rate of roughly 6000 tweets per second, the challenges of managing this transfer become substantial. Gnip is currently delivering over half a billion social activities per day to almost all of the top social media monitoring firms. Since Gnip was Twitter’s first authorized data reseller, it isn’t too surprising that they partnered with Twitter and the Library of Congress for this important endeavor. The best part of this deal is that some of the key technical bits required to make this project a reality will almost certainly end up in Gnip’s future business offerings, so the commercial Twitter ecosphere will likely benefit from this effort at some point too.
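Just to put those figures in perspective, here’s a quick back-of-the-envelope calculation using only the numbers cited above (the arithmetic is mine, not Gnip’s):

```python
# Rough scale check using the figures cited above.
STREAM_RATE_MBPS = 35                  # Twitter's current output rate
DAILY_ACTIVITIES = 500 * 1000 * 1000   # "over half a billion" activities per day
SECONDS_PER_DAY = 86400

sustained_per_sec = DAILY_ACTIVITIES / float(SECONDS_PER_DAY)
gigabytes_per_day = STREAM_RATE_MBPS / 8.0 * SECONDS_PER_DAY / 1000.0

print("Sustained delivery: ~{0:,.0f} activities per second".format(sustained_per_sec))
print("Raw stream volume at 35 Mbps: ~{0:,.0f} GB per day".format(gigabytes_per_day))
# -> roughly 5,800 activities per second and ~380 GB of raw data every day
```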

Just yesterday, Gnip announced that Chris Moody joined the company as President & COO. I’ve been good friends with Chris for the last several years and am super excited to be working closely with him. I anticipate that Chris and Jud Valeski, Gnip’s CEO, will make a powerful duo.

On Monday, the company announced a much anticipated product improvement that allows existing customers to open multiple connections to Premium Twitter Feeds on their Gnip data collectors. The best part is that customers won’t be charged the standard Twitter licensing fee for the same tweet delivered across multiple connections. Instead, Gnip offers a small flat fee per month for each additional connection. This is a big win for ops managers who have multiple environments to manage for their various release cycles and for large enterprises with systems distributed across data centers all over the world.

For those keeping track, that’s three big announcements in three days. Chris also pointed out on Gnip’s blog yesterday that several other key individuals have joined the company in the last week, including Bill Adkins, Seth McGuire, Charles Ince, and Brad Bokal.

Guys – keep on Doing More Faster!


I find it endlessly entertaining that people say things like “I don’t need to back up my data anymore because it’s in the cloud.” These people have never experienced a cloud failure, accidentally deleted a specific contact record, or authenticated an app that messed up their account. They will. And it will be painful.

I became a believer in backing up my data when I was 17 years old and had my first data calamity. I wrote about the story in my post What Should You Do When Your Web Service Blows Up. I’ve been involved in a few other data tragedies over the past 28 years, which have always reinforced (sometimes dramatically) the importance of backups.

We recently invested in a company called Spanning Cloud Apps. If you are a Google Apps user, this is a must-use application. Go take a look at Spanning Backup for Google Apps – your first three seats are free. It currently does automatic backup of your Google contacts, calendars, and docs at an item level, allowing you to selectively restore any data that accidentally gets deleted or lost. I’ve been using it for a while (well before we invested) and it works great.

I’ve known the founder and CEO, Charlie Wood, for six years or so. Charlie was an early exec at NewsGator but left to pursue his own startup. I came close to funding another company of his in the 2005 time frame but that never came together. I’m delighted to be in business with him again.

Don’t be a knucklehead. Back up your data.


One of my favorite times in the life of a company is when it finds its sweet spot and really turns on the juice. Over the past year, we’ve had a number of our Boulder-based investments find this magic moment, including Trada and SendGrid. The most recent Boulder-based company to really hit its stride is Gnip.

Gnip is around three years old and is a testament to our belief at Foundry Group that it often takes several years for a brand-new company to really find its mojo. While Gnip is building a business based on the idea that led to its creation, like most firms breaking new ground, it has had its share of bumps along the way.

The first version of the product was based on an architectural approach that didn’t adequately satisfy all the players in the ecosystem and wasn’t flexible enough. This led to a reset of the business, including a layoff of almost half the team (who were quickly absorbed into a number of other local Boulder companies, including several that we funded) and a different approach to the product. This approach worked much better, but by this point one of the co-founders was frustrated with the customer dynamics (all business facing) and decided to leave to start a new consumer-facing business (he left on good terms, we are still good friends, and he’s much happier today).

At this point, the other co-founder, Jud Valeski, stepped up to be the CEO. Jud is an extremely experienced CTO / technical product manager and developer, but had never been a CEO. The investors in Gnip committed to supporting Jud in any way he needed and he’s done a spectacular job of building the product, growing the team, negotiating several significant deals including the first Twitter data resyndication deal, and unleashing a very compelling set of products on the world. His one-year CEO anniversary is approaching, and things are going great.

The last three months have been pleasantly insane. Gnip has been adding customers at a rate that any investor would be proud of, is executing flawlessly on the product and operations side as it scales up, and is posting month-over-month growth numbers that put it in the “they are killing it” category. Oh, and they are hiring as fast as they can find great people across the spectrum (business, sales, and engineering).

I’m super proud of Jud and the team he’s built at Gnip. I expect we’ll look back on 2011 as the year that Gnip went from a highly product development focused company to a company that was firing on all cylinders. And Boulder will have another substantial software / Internet company in the mix.


As most nerds know, Skynet gained self-awareness last week and decided, as its first act, to mess with Amazon Web Services, creating havoc for anyone who wanted to check in on the Internet with their current physical location. In hindsight, Skynet figured out this was a bad call on its part, as it actually wants to know where every human is at any given time. However, Skynet is still trying to get broader adoption of Xbox Live machines, so the Sony PlayStation Network appears to still be down.

After all the obvious “oh my god, AWS is down” articles followed by the “see – I told you the cloud wouldn’t work” articles, some thoughtful analysis and suggestions have started to appear. Over the weekend, Dave Jilk, the CEO of Standing Cloud (I’m on the board) asked if I was going to write something about this and – if not – did I want him to write a guest post for me. Since I’ve used my weekend excess of creative energy building a Thing-O-Matic 3D Printer in an effort to show the machines that I come in peace, I quickly took him up on his offer.

Following are Dave’s thoughts on learning the right lessons from the Amazon outage.

Much has already been written about the recent Amazon Web Services outage that has caused problems for a few high-profile companies. Nevertheless, at Standing Cloud we live and breathe the infrastructure-as-a-service (IaaS) world every day, so I thought I might have something useful to add to the discussion.  In particular, some media and naysayers are emphasizing the wrong lessons to be learned from this incident.

Wrong lesson #1: The infrastructure cloud is either not ready for prime time, or never will be.

Those who say this simply do not understand what the infrastructure cloud is. At bottom, it is just a way to provision virtual servers in a data center without human involvement. It is not news to anyone who uses them that virtual servers are individually less reliable than physical servers; furthermore, those virtual servers run on physical servers inside a physical data center. All physical data centers have glitches and downtime, and this is not the first time Amazon has had an outage, although it is the most severe.

What is true is that the infrastructure cloud is not and never will be ready to be used exactly like a traditional physical data center that is under your control. But that is obvious after a moment’s reflection. So when you see someone claiming that the Amazon outage shows that the cloud is not ready, they are just waving an ignorance flag.

Wrong lesson #2: Amazon is not to be trusted.

On the contrary, the AWS cloud has been highly reliable on the whole. They take downtime seriously and given the volume of usage and the amount of time they have been running it (since 2006), it is not surprising that they would eventually have a major outage of some sort. Enterprises have data center downtime, and back in the day when startups had to build their own, so did they. Some data centers are run better than others, but they all have outages.

Of more concern are rumors I have heard that Amazon does not actually use AWS for Amazon.com. That doesn’t affect the quality of their cloud product directly, but given that they have lured customers with the claim that they do use it, it does impact our trust in their marketing integrity. Presumably we will eventually find out the truth on that score. In any case, this issue is not related to the outage itself.

Having put the wrong lessons to rest, here are some positive lessons that put the nature of this outage into perspective, and help you take advantage of IaaS in the right way and at the right time.

Right lesson #1: Amazon is not infallible, and the cloud is not magic.

This is just the flip side of the “wrong lessons” discussed above. If you thought that Amazon would have 100% uptime, or that the infrastructure cloud somehow eliminates concerns about downtime, then you need to look closer at what it really is and how it works. It’s just a way to deploy somewhat less reliable servers, quickly and without human intervention. That’s all.  Amazon (and other providers) will have more outages, and cloud servers will fail both individually and en masse.

Your application and deployment architecture may not be ready for this. However, I would claim that if it is not, you are assuming that your own data center operators are infallible. The architectural changes required to accommodate the public IaaS cloud are a good idea even if you never move the application there. That’s why smart enterprises have been virtualizing their infrastructure, building private clouds, and migrating their applications to operate in that environment. It’s not just a more efficient use of hardware resources, it also increases the resiliency of the application.

Right lesson #2: Amazon is not the only IaaS provider, and your application should be able to run on more than one.

This requires a bias alert: cloud portability is one of the things Standing Cloud enables for the applications it manages. If you build/deploy/manage an application using our system, it will be able to run on many different cloud providers, and you can move it easily and quickly.

We built this capability, though, because we believed it was important for risk mitigation. As I have already pointed out, no data center is infallible and outages are inevitable. Further, it is not enough to have access to multiple data centers – the Amazon outage, though focused on one data center, created cascading effects (due to volume) in its other data centers. This, too, was predictable.

Given the inevitability of outages, how can one avoid downtime?  My answer is that an application should be able to run on more than one, or many, different public cloud services.  This answer has several implications:

  • You should avoid reliance on unique features of a particular IaaS provider if they affect your application architecture. Amazon has built a number of features that other providers do not have, and if you are committed to Amazon they make it very easy to be “locked in.” There are two ways to handle this: first, use a least-common-denominator approach; second, find a substitution for each such feature on a “secondary” service. (A minimal sketch of the least-common-denominator approach follows this list.)
  • Your system deployment must be automated. If it is not automated, it is likely that it will take you longer to re-deploy the application (either in a different data center or on a different cloud service) than it will take for the provider to bring their service back up. As we have seen, that can take days. I discuss automation more below.
  • Your data store must be accessible from outside your primary cloud provider. This is the most difficult problem, and how you accomplish it depends greatly on the nature of your data store. However, backups and redundancy are the key considerations (as usual!). All data must be in more than one place, and you need to have a way to fail over gracefully. As the Amazon outage has shown, a “highly reliable” system like their EBS (Elastic Block Storage) is still not reliable enough to avoid downtime.
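To illustrate the least-common-denominator idea from the first bullet, here is a minimal sketch using Apache libcloud, one open source library that puts a provider-neutral API in front of EC2, Rackspace, and other clouds. The credentials, image IDs, and size IDs are placeholders, and this is only one way to get portability (Standing Cloud’s platform is another):

```python
# Minimal provider-portability sketch using Apache libcloud; all credentials,
# image IDs, and size IDs below are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def boot_app_server(provider, key, secret, image_id, size_id, name="app-server-1"):
    """Launch one server through a provider-neutral API."""
    driver = get_driver(provider)(key, secret)
    # Simple (not fast) lookups keep the calling code identical across providers.
    image = next(img for img in driver.list_images() if img.id == image_id)
    size = next(s for s in driver.list_sizes() if s.id == size_id)
    return driver.create_node(name=name, image=image, size=size)

# Primary deployment on EC2...
node = boot_app_server(Provider.EC2, "ACCESS_KEY", "SECRET_KEY",
                       image_id="ami-12345678", size_id="m1.small")

# ...and the same code path can target a secondary provider during an outage:
# node = boot_app_server(Provider.RACKSPACE, "USERNAME", "API_KEY",
#                        image_id="a-rackspace-image-id", size_id="a-flavor-id")
```

The point is not libcloud specifically; it is that nothing architectural in your deployment code should depend on a single provider’s console or proprietary features.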

Right lesson #3: Cloud deployments must be automated and should take cloud server reliability characteristics into account.

Even though I have seen it many times, I am still taken aback when I talk to a startup that has used Amazon just like a traditional data center using traditional methods.  Their sysadmins go into the Amazon console, fire up some servers, manually configure the deployment architecture (often using Amazon features that save them time but lock them in), and hope for the best.  Oh, they might burn an AMI and save it on S3, in case the server dies (which only works as long as nothing changes).  If they need to scale up, they manually add another server and manually add it to the load balancer queue.

This type of usage treats IaaS as mostly a financing alternative. It’s a way to avoid buying capital equipment and to conserve financial resources when you do not know how much computing infrastructure you will need. Even the fact that you can change your infrastructure resources rapidly really just boils down to not having to buy and provision those resources in advance. This benefit is a big one for capital-efficient lean startups, but on the whole the approach is risky and suboptimal. The Amazon outage illustrates this: companies that used this approach were stuck during the outage, but at another level they are still stuck with Amazon because their server configurations are implicit.

Instead, the best practice for deploying applications – in the cloud, but really anywhere – is to automate the deployment process. There should be no manual steps. Although this can be done with scripts, it is even better to use a tool like Chef, Puppet, or cfEngine to take advantage of abstractions in the process, or to use RightScale, Kaavo, CA Applogic, or similar tools to not only automate but also organize your deployment process. If your application uses a standard N-tier architecture, you can potentially use Standing Cloud without having to build any automation scripts at all.
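To make “no manual steps” concrete, here is a minimal sketch of a scripted EC2 deploy using boto. The AMI ID, key pair, and security group are placeholders, and in practice the user-data script would typically bootstrap Chef or Puppet rather than install packages inline:

```python
# Minimal scripted-deployment sketch with boto (EC2 shown; the point is that
# every step lives in code, not in a web console). All IDs are placeholders.
import boto.ec2

# Bootstrap script handed to the instance at boot via user-data.
BOOTSTRAP = """#!/bin/bash
apt-get update -y
apt-get install -y nginx
"""

# Reads AWS credentials from the environment or the boto config file.
conn = boto.ec2.connect_to_region("us-east-1")

reservation = conn.run_instances(
    "ami-12345678",              # placeholder AMI
    instance_type="m1.small",
    key_name="deploy-key",       # placeholder key pair
    security_groups=["web"],     # placeholder security group
    user_data=BOOTSTRAP,
)
instance = reservation.instances[0]

# Tag the instance so later automation (scaling, failover) can find it.
conn.create_tags([instance.id], {"role": "web", "app": "example-app"})
print("Launched", instance.id)
```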

Automating an application deployment in the cloud is a best practice with numerous benefits, including:

  • Free redundancy. Instead of having an idle redundant data center (whether cloud or otherwise), you can simply re-deploy your application in another data center or cloud service using on-demand resources.  Some of the resources (e.g., a replicated data store) might need to be available at all times, but most of the deployment can be fired up only when it is needed.
  • Rapid scalability. In theory you can get this using Amazon’s auto-scaling features, Elastic Beanstalk, and the like.  But these require access to AMIs that are stored on S3 or EBS.  We’ve learned our lesson about that, right? Instead, build a general purpose scalability approach that takes advantage of the on-demand resources but keeps it under your control.
  • Server failover can be treated just like scalability. Virtual servers fail more frequently than physical servers, and when they do, there is less ability to recover them. Consequently, a good automation procedure treats scalability and failover the same way – just bring up a new server. (A rough sketch of this watchdog loop follows the list.)
  • Maintainability. A server configuration that is created manually and saved to a “golden image” has numerous problems. Only the person who built it knows what is there, and if that person leaves or goes on vacation, it can be very time consuming to reverse-engineer it. Even that person will eventually forget, and if there are several generations of manual configuration changes (boot the golden image, start making changes, create a new golden image), possibly by different people, you are now locked into that image. All these issues become apparent when you need to upgrade O/S versions or change to a new O/S distribution. In contrast, a fully automated deployment is not only a functional process with the benefits mentioned above, it also serves as documentation.
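As a rough illustration of the “failover looks just like scaling” point above, here is a sketch of a watchdog loop that replaces unhealthy web nodes by launching new ones, using the same boto calls as the deployment sketch earlier. The health-check path, tags, and IDs are all placeholders:

```python
# Sketch of treating failover like scaling: if a tagged web node stops answering
# its health check, terminate it and launch a replacement. All IDs, tags, and
# the health-check path are placeholders.
import time

import boto.ec2
import requests

HEALTH_PATH = "/healthz"        # assumed health-check endpoint on each node
CHECK_INTERVAL_SECS = 60

def is_healthy(instance):
    try:
        url = "http://{0}{1}".format(instance.public_dns_name, HEALTH_PATH)
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def launch_replacement(conn):
    # Same principle as the scripted deploy above: everything happens in code.
    reservation = conn.run_instances("ami-12345678", instance_type="m1.small",
                                     key_name="deploy-key", security_groups=["web"])
    instance = reservation.instances[0]
    conn.create_tags([instance.id], {"role": "web"})
    return instance

conn = boto.ec2.connect_to_region("us-east-1")
while True:
    reservations = conn.get_all_instances(filters={"tag:role": "web",
                                                   "instance-state-name": "running"})
    for instance in [i for r in reservations for i in r.instances]:
        if not is_healthy(instance):
            instance.terminate()
            launch_replacement(conn)
    time.sleep(CHECK_INTERVAL_SECS)
```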

In summary, let the Amazon Web Services outage be a wake-up call… not to fear the IaaS cloud, but to know it, use it properly, and take advantage of its full possibilities.


I love watching our portfolio companies iterate on their products, especially when they are in the early days.  Initial efforts often provide nothing more than learnings, but turning those learnings into an improved product offering is a huge part of what being an early stage company is all about.

BigDoor has been going through that iteration process over this past year and this last week they quietly launched a pilot of a new feature called “Quests” on UGO.com and a dozen other sites.  BigDoor partnered with the talented folks at SpectrumDNA to bring Quests to life. At first blush Quests may just appear to be the addition of a directed-engagement type game mechanic, but what is going on behind the scenes is really interesting.

The BigDoor team believes strongly that gamification should be a profit center for web publishers and app developers, not a cost center. As a result, they don’t charge for usage of their API or their widgets. However, in order to fulfill their vision of providing a free gamification platform as well as sending checks to publishers, they’ve known they needed a solution that worked not only for publishers and end users, but also for advertisers.

Just in time for ad:tech, BigDoor’s Quests allow advertisers to create a series of tasks that direct users to visit multiple sites/pages and in the process deeply engage those users into their brand.  The user earns rewards (badges, virtual currency and discounts) for completing quests, and the publisher makes money every time a quest is completed.

[Screenshot: a “Your Highness” Quest on UGO.com]

Any solution that gives advertisers traffic, publishers money, and users rewards has the promise of being a big win.  It’s too early to tell yet if this first iteration of Quests will accomplish all of that, but the early numbers look promising. Across all of their pilot partners BigDoor is already seeing that 35% of initiated Quests are completed (a Quest requires a user to visit and interact with five different websites), and 40% of users who complete a Quest tweet or share their accomplishment.

Having worked with the BigDoor team since mid-year last year, I know they will listen to their advertisers and publishers, watch the metrics, learn from this pilot, and then iterate and improve before they make Quests available to all publishers and advertisers sometime this summer.  If you want to check out what Quests look like now, visit UGO.com and “Start Your Quest Now.”  If you are an advertiser or a publisher who is interested in participating in the Quests pilot, feel free to contact the BigDoor team to see if there is a fit.


I love Fitbit. The product is great, the team is great, and I’m psyched to be an investor. I’ve been a user for a while – my data is public – but I just recently started using the food tracker which is superb. The only thing that was missing for me was an API.

Fitbit released the Fitbit API quietly a month or so ago. I’ve encouraged them to make more noise as some great applications are coming out. I use two of them – Earndit and the Fitbit Low Battery Notifier by Joshua Stein. There are a bunch more coming but I thought I’d encourage any of you out there who care about human instrumentation to take a look and consider integrating with Fitbit.
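If you’re curious what an integration looks like, here’s a minimal sketch that pulls a day’s activity summary from the API. It assumes you’ve already registered an app with Fitbit and finished the OAuth 1.0a handshake (which is what the API used at launch); the credential strings are placeholders, and treat the endpoint and field names as a rough sketch of the documented API rather than gospel:

```python
# Minimal sketch of pulling a daily activity summary from the Fitbit API.
# Assumes you've registered an application with Fitbit and completed the
# OAuth flow; the four credential strings below are placeholders.
import requests
from requests_oauthlib import OAuth1

CLIENT_KEY = "your-consumer-key"
CLIENT_SECRET = "your-consumer-secret"
USER_TOKEN = "user-access-token"
USER_SECRET = "user-access-token-secret"

auth = OAuth1(CLIENT_KEY, CLIENT_SECRET, USER_TOKEN, USER_SECRET)

# "-" means "the authorized user"; the date format is YYYY-MM-DD.
url = "https://api.fitbit.com/1/user/-/activities/date/2011-05-09.json"
resp = requests.get(url, auth=auth)
resp.raise_for_status()

summary = resp.json()["summary"]
print("Steps:", summary.get("steps"))
print("Calories out:", summary.get("caloriesOut"))
```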

And, if you want the best fitness and sleep tracking product in the universe, go take a look at the Fitbit.