Yippee – the criticism of software patent stupidity is starting to heat up, and some really smart people are making both useful arguments about the issues and interesting proposals for solutions. In addition, some general articles are starting to appear explaining that while patents (and property rights) have an important role in our society – and in encouraging and supporting innovation and entrepreneurship – there are some well-understood problems that emerge from patenting small components of complex systems, especially when the vector of innovation is steep (as it is, for example, with software or the Internet).
James Surowiecki has a great short article in the New Yorker Magazine titled The Permission Problem. In it, he gives a great example of what Columbia law professor Michael Heller calls the "anti-commons" in his book The Gridlock Economy: How Too Much Ownership Wrecks Markets, Stops Innovation, and Costs Lives.
"In the second decade of the twentieth century, it was almost impossible to build an airplane in the United States. That was the result of a chaotic legal battle among the dozens of companies—including one owned by Orville Wright—that held patents on the various components that made a plane go. No one could manufacture aircraft without fear of being hauled into court. The First World War got the industry started again, because Congress realized that something needed to be done to get planes in the air. It created a “patent pool,” putting all the aircraft patents under the control of a new association and letting manufacturers license them for a fee. Had Congress not stepped in, we might still be flying around in blimps."
The anti-commons is a great reference point for what has happened with software patents. Simply put, if too many people own individual pieces of a valuable asset – especially if those pieces are overlapping and vaguely defined (e.g. software) – you can end up with gridlock instead of innovation. Surowiecki explains:
"When something you own is necessary to the success of a venture, even if its contribution is small, you’ll tend to ask for an amount close to the full value of the venture. And since everyone in your position also thinks he deserves a huge sum, the venture quickly becomes unviable."
So – we have (a) Google Sued For Patent Infringement For Keeping Track Of How Many Ads People Click On. At the same time, we have (b) U.S. Patent Office Rejects All Ninety-Five NeoMedia Patent Claims. For those of you uncertain about my perspective, (a) is bad. (b) is good. Hopefully (b) motivates the folks at Google to fight like hell to invalidate silly patents, rather than take a "let’s retrench and patent everything in sight" position.
Finally, I read an article by Timothy Lee on Ars Technica last week titled Patent Office finds voice, calls for software patent sanity. We need smart people to step up, shout from the rooftop about how fubared the software patent system is, and provide real alternatives. I’m optimistic that this is finally starting to happen.
I’m knee deep in reading The Pixar Touch: The Making of a Company, listening to Pink Floyd, and enjoying a perfect Homer, Alaska evening with Amy. I was an undergraduate at MIT during the Lucasfilm days of what became Pixar and vaguely remember early Media Lab / Project Athena animation stuff and watching super cool computer graphics wizardry post-SIGGRAPHs at the Computer Museum in Boston. Fortunately, YouTube has all the old classic computer animations online, including The Adventures of Andre and Wally B.
Awesome! (the animations and the book). While sneak peeks of this one and other old Pixar shorts are available on the Pixar website, due to the magic of the Internet they are all available on YouTube.
Microsoft is trying to patent a system for recommending contacts. The application is up on PeerToPatent and you can comment on it (and the USPTO will presumably listen to you) if you’d like to help keep the world free of really silly software patents. The abstract follows.
"A method and system for recommending potential contacts to a target user is provided. A recommendation system identifies users who are related to the target user through no more than a maximum degree of separation. The recommendation system identifies the users by starting with the contacts of the target user and identifying users who are contacts of the target user’s contacts, contacts of those contacts, and so on. The recommendation system then ranks the identified users, who are potential contacts for the target user, based on a likelihood that the target user will want to have a direct relationship with the identified users. The recommendation system then presents to the target user a ranking of the users who have not been filtered out."
Dear friends at Microsoft – please stop patenting stuff like this. Just implement it in Outlook – or – even better – Exchange.
I hate writing blog posts like this – it makes me tired. If I’m the guys at Xobni, I’m working on (a) getting my patent filing updated and filed and (b) commenting on the PeerToPatent site about my prior art. Actually, I’m probably just ignoring this and innovating like crazy. But that’s just me.
Today’s Wall Street Journal has an article titled Tech Giants Join Together To Head Off Patent Suits. It describes the efforts of a new organization named Allied Security Trust, whose goal is to "buy up key intellectual property before it falls into the hands of parties that could use it against them." The named companies that have joined Allied Security Trust are Verizon, Google, Cisco, Ericsson, and HP.
Allied Security Trust appears to be an example of the emerging construct of a "patent commons". There are already a number of existing patent commons such as the Patent Commons Project aimed at protecting open source software.
There are two types of patent commons – offensive and defensive. So far the folks putting together patent commons for potentially offensive purposes have kept a very low profile and have often denied publicly that they will use their patent portfolios offensively. However, I’ve heard directly from a number of people involved in some of these organizations that the long term goal is to aggressively license the patent commons once it is large enough.
I’m not a fan of the offensive patent commons. However, I am a huge fan of the defensive patent commons. As I’ve written in the past, I strongly believe that the entire ecosystem around software patents is completely fubared. The courts – especially in the US – are poorly equipped to deal with software patent issues, and the USPTO has demonstrated that it’s either not up to the task or structurally unable to change the way things work. Our government – especially Congress – has demonstrated that it lacks the political will to address the situation. And, while the Supreme Court has finally waded in with a few key decisions, it still has an extremely long way to go if it really wants to address the underlying issues.
Having studied this for the last few years, it’s my strong belief that the software / computer industry has to solve the problem. Recently, I’ve been advocating the idea of defensive patent commons – ones that are organized by clusters of large companies – but open to all that are interested. There are lots of challenges in organizing this, including determining who can join, what the price of admission is, and what the ongoing costs of supporting the organization are, but these are solvable issues if the broad construct is adopted.
I’ll reserve judgement on the Allied Security Trust until I learn more about it, but it seems like it’s a step in the right direction if the brief description in the WSJ is accurate. A key indicator to me will be whether organizations like Allied Security Trust vow to only use their patents defensively. The absence of this will always raise suspicion that it’s a veiled effort to create a mega-patent-troll or that unintended consequences might result from future activity.
Ultimately, a defensive patent commons is analogous to the idea of patent insurance, which is also starting to emerge. I think a defensive patent commons, organized correctly, will be the more powerful mechanism, but the analogy is a useful one for understanding how a defensive patent commons might operate.
There is a great Bill Gates email from January 2003 titled Windows Usability Systematic degradation flame that is making the rounds on the web. I love a good rant and even though this one is dated, Gates says in great detail what a large number of Windows users have summarized over the years as "shit – why won’t my damn computer do <blah>."
I’m a heavy computer user and have some variation of this thought on a daily basis. One of my special talents is finding bugs and breaking things – just ask any of the companies that I’ve invested in who their most "useful" (where useful is a euphemism for "annoying") alpha tester is. Think of me as helping improve software quality on planet earth.
Now – software quality is a complicated thing to measure. Not all bugs are overt ones. Let me give you an example of a particularly pernicious Microsoft one that no one ever seems to prioritize fixing (no – I’m not going to pick on Windows Calculator again, although I could.)
I use a Windows Mobile-based Dash. I expect I’ll try the iPhone again on July 11th now that it actually syncs with Exchange, but until then I’m tethered to my Dash. I love the form factor and have trained my muscle memory to deal with having to press multiple keys to do things that I should be able to do with one keystroke – mostly due to design flaws in Windows Mobile. I’ve used some variant of Windows Mobile for the past eighteen months (I think starting with Windows Mobile 5; I’m currently using Windows Mobile 6.1.) If I were Mr. Windows Mobile UI Designer, I’d change a bunch of things, but it works well for what I need it for, which is primarily email, calendar, tasks, contacts, phone calls, IM, and twitter. And sync. My data needs to transparently sync with my Exchange server without me having to do anything. Oh – and my BlueAnt bluetooth headset. And I’m sure there are a few other things.
Here’s the problem – the search algorithm on contact lookup is terrible. I have a large contact list (5,048 as of today). Searching for "Stan Feld" should be immediate since that’s how it’s listed in the address book. Progressively typing S then T then A then N should bring up "Stan Feld" immediately. Typing "Stan Feld" into the To: field of the email program should be immediate.
Nope. The delay is anywhere from 10 to 30 seconds. At some point I decided to try to figure out the underlying algorithm. My guess is that it’s doing a full table scan of first_name + last_name for each letter typed. There doesn’t appear to be an index – either fixed or dynamic – and as a result the total time for most searches grows roughly linearly with the number of letters typed.
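If that guess is right, the fix is ancient: keep the names sorted once and binary search for the typed prefix instead of rescanning the table on every keystroke. A minimal sketch (assuming, as a simplification, that contacts are stored as "First Last" strings – I have no idea what Windows Mobile's actual schema looks like):

```python
import bisect

# Build once (or maintain incrementally): contacts sorted case-insensitively.
contacts = sorted(["Stan Feld", "Amy Batchelor", "Stanley Kubrick"], key=str.lower)
keys = [name.lower() for name in contacts]

def lookup(prefix):
    """All contacts whose name starts with prefix.
    O(log n) to locate the matches, vs. O(n) for a full table scan."""
    p = prefix.lower()
    lo = bisect.bisect_left(keys, p)
    hi = bisect.bisect_right(keys, p + "\uffff")  # sentinel: everything with that prefix
    return contacts[lo:hi]

print(lookup("Stan"))  # ['Stan Feld', 'Stanley Kubrick']
```

Binary search over 5,048 sorted names is about 13 comparisons per keystroke; a full scan is 5,048.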
Now – if this problem had been in Windows Mobile 5 but fixed in an update, I’d let it slide. I’ve done at least three (I think four) major updates of the software since I’ve had my Dash. There has been virtually no improvement in this feature.
Whenever someone asks me about my Dash / Windows Mobile, I tell them that I generally like it except for this one thing. I then describe the thing. Occasionally I’ll show the thing. And then I feel stupid that I’m still using this phone since I spend so much time looking up contacts or completing names in email fields.
Having written my share of sort and search algorithms, I expect the fix is less than 50 lines of code regardless of the language it’s written in. It’s sophomore-year computer science stuff, not PhD stuff. Optimizing this to improve performance by 10x – 100x is maybe a day or two of a single programmer’s time.
This is not a Microsoft-specific problem. I could have picked on anyone. I’ve got a long list of Apple issues like this, plenty of Google issues including some remarkably silly ones, and – well – don’t get me started on the Yahoo ones. All of the companies I invest in have problems like this. It’s just an endemic part of software. And one that users shouldn’t have to put up with.
It’s also not limited to software. When I filled up my car recently, the gas pump clicked off at $75. I’d noticed this happening periodically, but now it was happening every time. Gas is now over $4 / gallon. Each of my cars has a 20+ gallon gas tank. $75 doesn’t fill the tank in any of them (and in at least one it doesn’t come close.) At some point, someone decided that a way to mitigate credit card fraud at the gas pump was to limit each transaction to $75. Now all that does is inconvenience a large number of customers with a mysterious cutoff point.
If you develop products (especially software) for a living, never forget that people remember the little things.
I was going to call this post "Private Beta is Bullshit" but then I realized I might be wrong. Rather than decide, I’m looking for reasons to change my mind. Please help me. In the spirit of thinking out loud on my blog, I’m going to go through a history lesson from my perspective to frame the problem.
When I started writing commercial software in the 1980s, there was a well-defined "beta process." Your first beta was actually called an alpha – when you had your friends and a few lead users play around with your stuff, which was guaranteed to break every few minutes. But they were a good source of much-needed early feedback and testing. Then came beta – you shipped your disks out to a wider audience, including a bunch of people you didn’t know but who were interested in your product, and they banged away looking for bugs. You had a bug collection and management process (if you were really cutting edge you even had a BBS for this) and while there wasn’t a code freeze, you spent all of your time fixing bugs and hardening / optimizing the code. If you had a complex system, you started shipping release candidates (RCs); less complex systems went straight to a general availability release (GA). Inevitably some bugs were found and a bug fix version (v x.01) was released within a week or two. At this point you started working on the next version (v x+1.0); if there were meaningful bugs still in v x you did small incremental releases (v x.02) on the previous code base.
This approach worked well when you shipped bits on disks. The rise of the commercial Internet didn’t change this approach much other than ultimately eliminating the need to ship disks, since your users could now download the bits directly.
The rise of the web and web-based applications in the late 1990s (1997 on) did change this, as it was now trivial to "push" a new version of your app to the web site. Some companies, especially popular consumer sites and large commercial sites, did very limited testing internally and relied on their users to help shake down the web app. Others had a beta.website.com version (or equivalent) where limited (and often brave) users played around with the app before it went into production. In all cases, the length of the dev/test/production loop varied widely.
At some point, Google popularized the idea of an extended beta. This was a release version carrying a beta label, which was supposed to indicate that it was an early version still evolving. Amazingly, some apps like Gmail (or Docs or Calendar) seem never to lose their beta label, while others like Reader and Photos have already dropped theirs. At some point, "beta" stopped really meaning anything other than "we’ve launched and we probably still have a lot of bugs, so beware of using us for mission critical stuff."
With the rise of Web 2.0 apps, beta became the new black and every app launched with a beta label, regardless of its maturity (in truth, a whole bunch of them were alphas). Here’s where the problem emerged. At some point every beta got reviewed by a series of web sites led by TechCrunch (well – not every one – but the ones that managed to rise above the ever-increasing noise). When they got written up, many of them inevitably ran into The First 25,000 Users Are Irrelevant problem (which builds on Josh Kopelman’s seminal post titled 53,651 – which might be updated to be called 791K). During this experience, many sites simply crashed under the sudden load, as they weren’t built to handle the scale or the peak.
Boom – a new invention occurred. This one is called "private beta" and now means "we are early and buggy and can’t handle your scale, but we want you to sign up anyway and we’ll let you try us when we are ready for you." I’ve grown to hate this, as it’s really an alpha. For whatever reason, companies are either afraid to call an alpha an alpha or they don’t know what an alpha is. For a web app, operational scale is an important part of the shift from alpha to beta, although as we’ve found with apps like Twitter, users can be incredibly forgiving of scale problems (noisy – but forgiving).
So – why not get rid of the "private beta" label and call all of these things alphas? Alphas can be private – or even public – and the label keeps a clear emotional and conceptual boundary between "stuff that’s built but not ready for prime time" (alpha), "stuff that’s getting close but still needs to be pounded on by real users and might go down" (beta), and "stuff that’s released" (v x.0). That seems a lot clearer to me than "private beta," "beta" (which might last forever), and ultimately v x.0.
In the grand scheme of things this might simply end up in "Brad Pet Peeve" land, but recently it’s been bothering me more than my other pet peeves so it feels like there’s something here. Call me out if there isn’t or pile on if there is.
I’ve been fascinated with SEO for a long time. I’ve experienced a lot of random SEO events as a result of my blog, such as all the traffic I get from Google on the word "iBrick" landing on my post titled iBrick (currently #2 in the index – sometimes #1).
A reader sent me a note yesterday that I am the #1 result on Google for the phrase Boulder Justice Center Community Service, which links to my post titled My Afternoon at the Boulder County Justice Center. I decided to take a look at the other search services and see where I landed on this search.
All of this reminded me of the great presentation my friend Micah Baldwin at Lijit did at Tech Cocktail in Chicago last week titled SEO is All Grown Up: My Quest to be the #1 Douche bag.
Micah has been on a Douchebag Quest – I Want To Be The #1 Douche Bag. As of today, he’s #4 on Google. In case you are baffled, Micah’s quest is tongue in cheek, as he uses a deliberately negative phrase to demonstrate how SEO works.
All of my personal iPhone iBrick experiences have been sad. Denied credit, couldn’t get activated, couldn’t sync with Exchange, had to cut my fingernails.
I’m reconsidering my position as a result of Vista Perfection 2.0.
I’ve been bouncing around the world of video conferencing for a while. The guys at Raindance – a company I was on the board of from 1997 to 2002 – knew this stuff cold (including what worked and didn’t work), as they were previously the founders of LinkVTC (one of the first video conferencing bridge service companies). Some of the stuff Oblong (one of our new investments) is doing applies to video conferencing, and the little cameras on top of my computers occasionally get used.
While video conferencing is "ok" (and definitely 10x better today and at least 100x cheaper than it was a decade ago), it still sucks. My reaction to the demo of Cisco’s On-Stage TelePresence holographic video conferencing system was "bitching." It’s pretty amazing to see, even via online video. There are definitely some hacky aspects to it (as my partner Ryan points out, there is some sort of transparent screen being used), but it’s still incredible.
Another reason for airlines to be scared.