Brad Feld

Category: Technology

These days I’m regularly exposed to patent trolls. Sometimes I read about them, sometimes friends email me about them, and sometimes companies I’m an investor in get sued by them. When I read the claims in the lawsuits, I often think that the claim in question is “obvious.” For those of you out there who know how patents are supposed to work, for something to be patentable it needs to be “non-obvious” as well as “novel.” While the specific claims may not be obvious to the patent troll, especially those who are lawyers who own patents they’ve picked up from other people (bankrupt companies, individuals who applied for and got a patent, patent factories), they are often extremely obvious to any software developer.

For a while I was frustrated by software patents. I tried to educate some of my friends in government about this. I was hopeful when the Supreme Court heard Bilski that they would take a stand on it. And I hoped that the people I talked to in the Obama administration, who acknowledged that they understood the issue, would try to do something about it. I hoped that the Patent Reform Act would actually have some teeth in it that would help address the completely messed up dynamics around software patents and my strong belief that this is a huge tax on the innovation process.

I had zero impact. Zero. As I sit here at the end of 2011, the software patent situation has spun completely out of control. In addition to endless patent trolls, who are multiplying like tribbles, large companies are now fighting massive legal battles with each other using patents. Some of the inventors (including a number of amazing software engineers) listed on the patents are finally speaking up against them, but since they’ve assigned the patents to companies they are no longer at – or the company that owns the patent acquired the company they were at – their only recourse (and impact) is to get tangled up in a lawsuit as a witness.

In his 2003 letter to shareholders, Warren Buffett famously called derivatives “financial weapons of mass destruction” that could harm not only their buyers and sellers, but the whole economic system. You may recall that AIG, thanks to its non-transparent and heavy investments in derivatives, was almost bankrupt once the mortgage-backed securities it was insuring began to drop in value. The $85 billion bailout of AIG was the beginning of the government’s response to the financial crisis and we are still feeling the after-effects of that calamity.

Today, we are experiencing a similar threat to innovation, with patents playing the role of “weapons of mass destruction.” Sadly, the America Invents Act, which passed recently and seeks to give the Patent Office tools to operate better, does precious little to address the patent litigation mess.

Like derivatives, there are thousands of software patents that are not transparent and remain available to do damage in the hands of patent trolls – and even respectable companies – who use them in lawsuits that bear little relationship to protecting inventions or spurring innovation. As others have detailed, there are increasingly destructive dynamics at play here and the easiest solution is to abolish patents in areas – most notably, software and business methods – where they are doing more harm than good.

Unlike the financial system, which derivatives helped bring to its knees, it is not clear how our innovation system will get to a breaking point that will require attention from policymakers. The Supreme Court could address the problem, but it missed a golden opportunity in the Bilski case, where it declined to end (by a 5-4 vote) the patenting of business methods. Perhaps the Supreme Court will realize that the situation requires fixing and look for other ways to limit the damage.

There are simple options, such as disclosure: patent applicants should be required to disclose the source code behind their inventions, thereby ensuring that the invention is real and not merely a basis for a future lawsuit, which is what many software patents have become. Indeed, this requirement of the Patent Act (Section 112) is applied with some rigor in the biotech context, but has yet to happen with regard to software. Such a change cannot come soon enough.

At some point the software industry is going to have to do something about this. We seem unable to rely on the government to take action that will effect change. I can only hope there are other leaders in the software industry, especially the amazing developers creating the innovations in the first place, who will take some collective action before it’s too late.


Next week at Defrag I’ll be giving a talk titled “Resistance is Futile”. I’ll be talking about my premise that the machines have already taken over. A few days ago a friend of mine emailed me a perfect image to summarize where we are today. Ponder and enjoy.


I’ve been railing against software patents for a number of years. I believe software patents are an invalid construct – software shouldn’t be patentable.

For a while, I felt like I was shouting alone in the wilderness. While a bunch of software engineers I know thought software patents were bogus, I had trouble getting anyone else to speak out against software patents. But that has changed. In the last few months the issue of software patents – and the fundamental problems with them – has started to be front and center in the discussion about innovation.

There have been two dynamite stories on NPR recently – the first on This American Life titled When Patents Attack! and one on Planet Money titled The Patent War. If you have an interest in this area, the two are well worth listening to.

In the past week, the discussion exploded starting with a post from Google titled When patents attack Android. The word “patent” shows up in 20 of the Techmeme River articles from the last week. Martin Fowler, a software developer, had a well thought out article titled SoftwarePatent. And they kept coming, such as Why Google Is Right Yet Short-Sighted To Complain About Mobile Patents.

But my favorite was Mark Cuban’s post titled If you want to see more jobs created – change patent laws. He starts strong:

“Sometimes it’s not the obvious things that create the biggest problems.  In this case one of the hidden job killers in our economy today is the explosion of patent litigation.”

And he ends strong:

“We need to face the facts, patent law is killing job creation. If the current administration wants to improve job creation, change patent law and watch jobs among small technology companies develop instantly.”

I hope my friends in the White House are listening. And to all the software engineers who are co-authors on patents they aren’t proud of, or think are bogus, or were forced by their company to create, or were paid a bonus to write a patent on nothing, or who now work for a company that is getting sued over a patent they co-authored but aren’t even sure what it says: speak up!


Jeff Clavier is hanging out with Amy and me in Paris for a few days. We had an incredible dinner last night at L’Arpege – we’d been there once before with another friend (Ed Roberto) about five years ago and it was even better than we remembered it to be. We got home five hours after we started dinner which included an epic cheese course and two dessert courses.

Jeff’s been spending a lot of time on Google+ as have I and many of the VCs and tech early adopters that I know (my VC Circle is my largest circle.) Google+ is rumored to have reached 10m users already and shows no sign of slowing. My experience with it has been fascinating – I didn’t do much beyond set up my account, figure out the right login approach since I use Google Apps and Google+ doesn’t yet work with a Google Apps account, and put up a few posts. I’ve got 1,400 followers already who presumably auto-discovered me via Google’s algorithms (they do have a great social graph already given all the Gmail emails and address books.)

Recently I wrote a post titled Rethinking My Social Graph. I’ve struggled to get my Facebook social graph in order (3000 friends later – lots of acquaintances, not that many friends) and pondered how I use LinkedIn (promiscuously – I link with pretty much anyone). Twitter has been my ultimate broadcast tool and when I think about Google+ vs. Facebook, I realize that the power is the “follow” model vs. the “friend” model.

Facebook has become not that useful for me because while it’s the friend model, I’ve treated it as a follow model. As a result, there isn’t that much intimate communication on it for me, or if there is, it’s completely lost in the noise of the people who I’m acquaintances with. I’ve tried to solve this by sorting them into Lists but there are two problems. The UI for doing this is awful / tedious / excruciating and the control over what you do with lists is weak, especially in places where you really want the control (such as the news feed).

In contrast, Google+ nailed this with the follow model, letting anyone that is interested in what I have to say follow me, while I only follow people I’m interested in. While this is the Twitter model, you get much finer control over both consumption and broadcast through the use of Circles. Now that I have enough activity on Google+, I’m starting to understand and see the impact of this. Oh – and I guess I should start calling it G+ like all the cool kids do.

As Jason and I are about to launch our new book Venture Deals: Be Smarter Than Your Lawyer and Venture Capitalist I’ve been once again thinking about communication and promotion via social media. My experience setting up the blog and twitter feed for Startup Marriage reminded me how easy it is to get the tech set up, but how challenging it is to get engagement. And my investment in Gnip is showing me the continued geometric expansion of social data across an ever increasing number of platforms.

Get ready – I think we have now finally “just begun.”


I’ve worn glasses since I was three years old. I was trying to look at something on my iPad yesterday without them on and I heard Amy burst out laughing with “you really can’t see a thing without your glasses.” True – my eyes are defective. I’ve contemplated getting LASIK a few times but chickened out each time – if 42 years of glasses have worked, I expect another 42 will be just fine.

For years I’ve fantasized about getting glasses that have a heads-up display (HUD) integrated into them. This HUD would be connected to a computer somehow, which would of course be connected to the Internet, which would then give me access to whatever I wanted through my glasses. I can’t remember a sci-fi movie over the past decade that didn’t have this technology available and since my jetpack now seems like it’s finally around the corner (I’m hoping to get one for my 46th birthday), I have hope for my HUDglasses.

The pieces finally exist since I’m carrying a computer in my pocket (my iPhone or my Android) that’s always connected to the Internet. My glasses just need Bluetooth to pair with my phone, an appropriate display, a processor, a camera, and the right software. Optimally I could control it via a spatial operating environment like Oblong’s g-speak.

I’m interested in investing in a team going after this. The magic will be on the software side – I want to work with folks who believe the hardware will be available, can integrate existing products, and are comfortable with consumer electronics products, but who are less obsessed with assembling the hardware than with hacking the software.

If this is you, or someone you know, please aim them at me. In the mean time, I tried to hunt down Tony Stark but don’t have his email address.


I was going to write a different post this morning, but I came across this post by Matt Haughey titled Ev’s assholishness is greatly exaggerated and, after reading it, sat for a few minutes and thought about it. Go read it now and come back.

Welcome back. I’m not an investor in Twitter directly (I am indirectly in a tiny amount through several of the VC funds I’m an investor in) but I’m an enormous Twitter fan and user. I also wasn’t an investor in Odeo so, as the cliche goes, I don’t have a dog in the hunt. But I have a few friends who were so I have second hand knowledge about the dynamics around the Odeo to Twitter evolution.

When I read (well – skimmed) the latest round of noise about “how founders behave”, possibly stoked by Paul Allen’s new book on the origins of Microsoft along with his 60 Minutes appearance, I was annoyed, but I couldn’t figure out exactly why. I had a long conversation with a friend about this when I was in Seattle on Tuesday and still couldn’t figure out why I was annoyed.

Matt, who I don’t know, nailed it. As he says in the last sentence of his post, [it’s] just melodramatic bullshit.

Creating companies is extremely hard. I’ve been involved in hundreds of them (I don’t know the number any more – 300, 400?) at this point and there is founder drama in many of them. And non-founder drama. And customer drama. And partner drama. And drama about the type of soda the company gives or doesn’t give away. The early days of any company – successful or not – are complex, messy, often bizarre, complicated, and unpredictable. Some things work out. Many don’t.

We’re in another strong up cycle of technology entrepreneurship. It’s awesome to see (and participate in) the next wave of the creation of some amazing companies. When I look back over the last 25 years and look at the companies that are less than 25 years old that impact my life every day, it’s a long list. I expect in 15 more years when I look back there will be plenty of new names on that list that are getting their start right now.

So, when the press grabs onto the meme of “founders are assholes”, or ex-founders who didn’t stay with their companies whine about their co-founders, or people who didn’t really have any involvement with the creation of a company sue for material ownership in it because of absurd legal claims, it annoys me. It cheapens the incredibly hard and lonely work of a founder and creates tons of noise, but more importantly it becomes a distraction for first time entrepreneurs, who end up getting tangled up in that noise rather than focusing on the hard problems of starting and building their own company.

When I talk to TechStars founders about this stuff, I try to focus them on what matters (their business), especially when they are having issues with their co-founders (e.g. focus on addressing the issues head on; don’t worry about what the press is going to write about you.) When I hear the questions about “did that really happen” or “what do you think about that” or “isn’t it amazing that X did that” or “do you think Y really deserves something” it reminds me how much all the noise creeps in.

I like to read People Magazine also, but I read it in the bathroom, where it belongs, as does much of this. It’s just melodramatic bullshit. Don’t get distracted by it.


A post in the New York Times this morning asserted that Software Progress Beats Moore’s Law. It’s a short post, but the money quote is from Ed Lazowska at the University of Washington:

“The rate of change in hardware captured by Moore’s Law, experts agree, is an extraordinary achievement. ‘But the ingenuity that computer scientists have put into algorithms have yielded performance improvements that make even the exponential gains of Moore’s Law look trivial,’ said Edward Lazowska, a professor at the University of Washington.

The rapid pace of software progress, Mr. Lazowska added, is harder to measure in algorithms performing nonnumerical tasks. But he points to the progress of recent years in artificial intelligence fields like language understanding, speech recognition and computer vision as evidence that the story of the algorithm’s ascent holds true well beyond more easily quantified benchmark tests.”

If you agree with this, the implications are profound. Watching Watson kick Ken Jennings’ ass in Jeopardy a few weeks ago definitely felt like a win for software, but someone (I can’t remember who) had the fun line that “it still took a data center to beat Ken Jennings.”

That doesn’t really matter, because Moore’s Law will continue to apply to the data center, but my hypothesis is that there’s a much faster rate of advancement at the software layer. And if this is true it has broad impacts for computing, and computing-enabled society, as a whole. It’s easy to forget about the software layer, but as an investor I live in it. As a result of several of our themes, namely HCI and Glue, we see first hand the dramatic pace at which software can improve.

In my life as a programmer 20+ years ago, I went through my share of 100x to 1000x performance improvements that came from a couple of lines of code or a change in the database structure. At the time the hardware infrastructure was still the ultimate constraint – you could get linear progress by throwing more hardware at the problem. The initial software gains happened quickly but then you were stuck with the hardware improvements. If you don’t believe me, go buy a 286 PC and a 386 PC on eBay, load up dBase III on each, and reindex some large database files. Now do the same with FoxPro on each. The numbers will startle you.
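The kind of jump I’m talking about is easy to reproduce today. Here’s a toy Python sketch (the data and names are mine, purely illustrative – not a benchmark) of what a “change in the database structure” does: the same lookups done with an unindexed linear scan versus an in-memory index.

```python
import time

# A "table" of 50,000 records, then the same lookups two ways:
# a linear scan (no index) vs. a dict (an in-memory index).
records = [(i, f"row-{i}") for i in range(50_000)]
index = dict(records)  # the one-time "reindex" pass

def lookup_scan(key):
    # O(n) per lookup - what an unindexed table does
    for k, v in records:
        if k == key:
            return v
    return None

def lookup_indexed(key):
    # O(1) per lookup - the "couple of lines of code" fix
    return index.get(key)

keys = list(range(0, 50_000, 1000))  # 50 lookups

start = time.perf_counter()
scan_results = [lookup_scan(k) for k in keys]
scan_time = time.perf_counter() - start

start = time.perf_counter()
indexed_results = [lookup_indexed(k) for k in keys]
indexed_time = time.perf_counter() - start

print(f"scan: {scan_time:.4f}s  indexed: {indexed_time:.6f}s  "
      f"speedup: {scan_time / indexed_time:.0f}x")
```

On any modern machine the indexed version wins by orders of magnitude – the same shape of gain as reindexing those dBase files, with no new hardware involved.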

It feels very different today. The hardware is rapidly becoming an abstraction in a lot of cases. The web services dynamic – where we access things through a browser – built a UI layer in front of the hardware infrastructure. Our friend the cloud is making this an even more dramatic separation as hardware resources become elastic, dynamic, and much easier for the software layer folks to deploy and use. And, as a result, there’s a different type of activity on the software layer.

I don’t have a good answer as to whether it’s core algorithms, distributed processing across commodity hardware (instead of dedicated Connection Machines), new structural approaches (e.g. NoSQL), or just the compounding of years of computer science and software engineering, but I think we are at the cusp of a profound shift in overall system performance and this article pokes us nicely in the eye to make sure we are aware of it.

The robots are coming. And they will be really smart. And fast. Let’s hope they want to be our friends.


As I embarked on my journey to learn python, I began by exploring a number of different approaches.  I finally settled on using “beginner’s mind” (shoshin to those of you out there that know anything about Zen Buddhism).

Rather than just dive in and build on my existing programming skills and experience, I decided to start completely from scratch. Fortunately, MIT’s Introductory Computer Science class (6.00 Introduction to Computer Science and Programming) is available in its entirety – including all 24 lectures – on MIT’s OpenCourseWare.

I fired up Lecture #1 (Goals of the course; what is computation; introduction to data types, operators, and variables) and spent an enjoyable hour remembering what it was like to be in 10-250.  If you want a taste, here’s the lecture.

The lectures are all up on iTunes so I’m going to watch #2 on my way from Keystone to Boulder this morning (Amy is driving). I’ve got plenty of reading to do and I look forward to diving into the problem sets.

While I was watching the lecture, Professor Eric Grimson reminded me that this was not a course about “learning Python”; rather, it was a course aimed at providing students with an understanding of the role computation can play in solving problems. A side benefit is that I will learn Python and – in Eric’s words – “feel justifiably confident of [my] ability to write small programs that allow [me] to accomplish useful goals.”

Beginner’s Mind can be a powerful thing.


January’s Tech Theme of the Month is going to be Python.  I realize it’s still December; I decided to get a head start.

Last month’s tech theme was videoconferencing.  I learned a lot, including the unfortunate continued split between low end and high end and general lack of ability to have a single universal solution.  Oh – and bandwidth still matters a lot.  I expect by the end of January we’ll have much better videoconferencing infrastructure set up at Foundry Group with the single goal of eliminating some travel.

I’ve thought about learning Python for a while.  I don’t code much anymore and I regularly find myself wishing I could do something with a simple UI and heavy back-end processing – mostly to move data between the web services I use via their APIs.  I stopped programming around 1993, although I occasionally had to dive back in and support something I had previously written until the late 1990s, when I stopped for good because I simply had no time.  As a result, the languages I feel like I have mastery over are BASIC, Pascal, Scheme, and Dataflex, and the corresponding environments I’m comfortable developing in end with MS-DOS and Novell Netware.  While I did stuff in plenty of other languages as a result of courses I took (IBM 370 assembler, SAS, Fortran) or projects I had to figure out (PL/SQL + Oracle, Paradox, dBase), I don’t feel like I did enough with these to claim mastery.

Every couple of years, I fuck around with a new language and environment.  PHP is the one that has stuck the best and I can read it and hack around if necessary.  But I don’t really like PHP – it feels sloppy and I’m constantly having to look up syntax because it doesn’t feel comfortable to me.  I went through an “ok – I’ll figure out Ruby on Rails” phase a few summers ago but stalled when I realized that Rails wasn’t a particularly practical environment for what I wanted to play around with.

Python may be a miss for me, but when I look at Python code I feel very comfortable with the syntax.  A few folks I know who are like me (e.g. not developers anymore, but were once, and occasionally bust out an IDE to hack on something) swear by Python.  But the biggest motivation for me was that 6.01 is now taught using Python.

In 1984, I took 6.001: Structure and Interpretation of Computer Programs.  This is the first computer science class most MIT students take.  I had written a lot of software, including several commercially released products, almost entirely in BASIC (with a little Pascal and assembly.)  6.001’s programming language of choice was Scheme (a LISP dialect) and working with it was an epiphany for me.  I continued to write commercial apps in BASIC and started using a 4GL called Dataflex, but my coding approach was heavily influenced by what I learned in 6.001.  The abstractions you could build in BASIC, if you tried, were surprisingly good and Dataflex was a remarkably robust 4GL – very UI constrained, but very powerful (and fast compared to everything else at the time.)

So – if you look at my history, I’m comfortable with imperative languages.  I got a good dose of functional programming from MIT but never did anything with it commercially.  And I stopped developing software before object-oriented programming became popular.  Python feels like a mix of imperative and functional when I look at it and read about it so I’m optimistic that I can use my regular programming muscles without having to fight through the OOP stuff that I don’t really know and doesn’t feel natural to me.
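To make that mix concrete, here’s a trivial illustrative snippet – the same computation written both ways, in the styles I mean:

```python
# Sum of the squares of the even numbers - written twice.
numbers = [1, 2, 3, 4, 5, 6, 7, 8]

# Imperative style: an explicit loop and a mutable accumulator
# (how I'd have written it in BASIC or Pascal).
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: a single expression, no mutation
# (the flavor 6.001 and Scheme drilled into me).
functional_total = sum(n * n for n in numbers if n % 2 == 0)

print(total, functional_total)  # both print 120
```

Python happily supports both, which is exactly why I’m optimistic I can lean on my regular programming muscles without fighting through the OOP machinery first.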

MIT has an IAP course (the MIT January session) titled 6.189: A Gentle Introduction to Programming Using Python.  As with many MIT courses, it’s available on MIT OpenCourseWare so I’m going to take the course over the next month and see how it goes.

If you are a Python expert and have any suggestions for sites, tools, or blogs I should pay attention to, please leave them in the comments.