Om Malik has a great post up titled Alexa can be injurious to your wealth that describes a number of flaws with Alexa as a way of ranking web-based services. I have never taken Alexa data very seriously – while Om points out some flaws, there are plenty of others that anyone with a basic understanding of web analytics can spot quickly. While it’s potentially useful for either long-term trends or very rapid variance effects (e.g., the First 25,000 Users Are Irrelevant problem or – as Josh Kopelman more succinctly put it – 53,651), I’m always very careful about how I interpret the data (and I usually don’t give it much weight).
Om’s parting comment is “In closing, if you are a startup that brings up your Alexa ranking in a meeting with us and tout that as your shining achievement, it would be time for my smoke break!” I couldn’t agree more (even though I don’t smoke – how about “it’s time for my treadputer break.”) I don’t want to hear about your Alexa ranking – I want to hear about what your users are really doing, how much they are actually using things, and what drives increased adoption.
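As a footnote, the “first 25,000 users” problem is easy to see in a toy simulation. The sketch below is purely illustrative – the sites, traffic figures, and noise model are all made up – but it shows how, when audiences are small, ordinary day-to-day jitter reshuffles a ranking even though nothing about real usage has changed.

```python
import random

random.seed(42)

# 50 hypothetical small sites, each with a "true" daily audience near 1,000.
true_traffic = {f"site{i:02d}": random.randint(800, 1200) for i in range(50)}

def daily_rank(traffic):
    # One day's observed visits: the true audience plus random sampling noise.
    observed = {site: t + random.randint(-200, 200) for site, t in traffic.items()}
    ranked = sorted(observed, key=observed.get, reverse=True)
    return {site: position + 1 for position, site in enumerate(ranked)}

day1 = daily_rank(true_traffic)
day2 = daily_rank(true_traffic)

# How far does each site move overnight, with no real change in usage?
moves = [abs(day1[s] - day2[s]) for s in true_traffic]
print(f"average overnight rank change: {sum(moves) / len(moves):.1f} positions")
```

With numbers this small, a site can move many positions on noise alone – which is exactly why a ranking built on a few thousand users tells you almost nothing.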
My partner Ryan McIntyre has a nice, detailed follow-up post on Intelligence Amplification. Stan James – founder and CTO of Lijit (one of our investments in the Intelligence Amplification theme) – calls it the “Digital Cortex.”
Our friendly neighborhood Supreme Court is having some fun discussing the current legal definition of “patent obviousness.” It sounds like there were some entertaining snippets in the conversation as the Supreme Court considers rewriting it. A change here would have a wide-ranging impact which – in my mind, based on my previously stated view on software patents – would be a hugely positive thing. At a minimum, it’s provocative to think about the potential impact. The best line of the day – apparently offered with complete ironic intent – appears to be Chief Justice Roberts asking an attorney: “Who do you get to be an expert to tell us something’s not obvious? The least insightful person you can find?”
I hate the phrase Web 3.0. I’ve never really liked the phrase Web 2.0 either, but I didn’t notice how much I disliked it until it had worked its way into almost every conversation I had with anyone about what they were working on. As I started making new investments in companies that tried to deal with the TAR problem (such as Me.dium, Lijit, Collective Intellect, and HiveLive), I realized I wanted a name for this. I came up with the lame name “dynamics of information” as a placeholder.
I’ve been searching for a new name for this, and my partner Ryan McIntyre came up with the phrase “Intelligence Amplification,” which I love. It’s especially sweet if you catch the mildly ironic reference to “Artificial Intelligence.” While I still haven’t locked down this label as final for this theme of investing, articles such as “Applying Semantic Web Ideals” from the weblog The Intelligent Enterprise – in addition to highlighting my friend Nick Bradbury as having a major clue around this stuff – reinforce the chocolatey goodness of this name in my non-silicon-based mind.
I’m playing around with “extreme sync” these days – exercising as many platforms as I can simultaneously, just for fun. My central mail / calendar / contacts / tasks live on our Exchange server. I’ve got sync working well across all my Windows devices (Office 2003 and Office 2007), along with my new Dash Windows Mobile PDA. My Macs “mostly work” – everything is doing great (via Entourage) except “tasks.” I’m a heavy Outlook / Exchange Task user, so this is a critical one for me. Anyone out there solve this?
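In the meantime, here’s a minimal read-only workaround sketch – not a real two-way sync – using the third-party Python library exchangelib to pull the Tasks folder over Exchange Web Services into a plain-text file the Mac side can at least read. It assumes the server exposes EWS and that autodiscover works; the credentials and address below are placeholders.

```python
# A read-only workaround sketch, not a real sync: pull Exchange tasks over
# EWS with the third-party exchangelib library and dump them to a file.
from exchangelib import Account, Credentials, DELEGATE

# Assumptions: the Exchange server exposes EWS and autodiscover resolves
# this address. Replace the placeholder credentials with your own.
creds = Credentials("DOMAIN\\username", "password")
account = Account(
    primary_smtp_address="me@example.com",
    credentials=creds,
    autodiscover=True,
    access_type=DELEGATE,
)

# account.tasks is the well-known Tasks folder; each item is a Task.
with open("exchange_tasks.txt", "w") as out:
    for task in account.tasks.all().order_by("due_date"):
        out.write(f"{task.subject}\t{task.due_date}\t{task.status}\n")
```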
My partner Ryan pointed me to Mark Morford’s incredible review of the new MacBook Pro on SFGate. Actually – in Mark’s words – it’s not really a MacBook Pro – it’s a “brand new lick-ready smooth-as-love Apple MacBook Pro Core 2 Duo Super Orgasm Deluxe Ultrahard Modern Computing Device Designed by God Herself Somewhere in the Deep Moist Vulva of Cupertino Yes Yes Don’t Stop Oh My God Yes.” Um, yeah – that’s not exactly how I felt when I got my new Lenovo T60 with Vista on it the other day. At least mine works, compared to Ryan’s.
On Tuesday night, I hung out in LA with my longtime friends and frat brothers John Underkoffler and Kevin Parent, co-founders of Oblong along with Kwin Kramer and Tom Wiley. I hadn’t seen John and Kevin in a while, so it was fun to catch up with them. More fun, however, was seeing and playing with the amazing stuff they are creating.
In the movie Minority Report, Tom Cruise’s character John Anderton is shown interacting with a computer on a wall using a futuristic user interface in which he uses hand gestures to manipulate images and video. Underkoffler created that – including the beginnings of a new UI paradigm and a language for describing it.
Wind the clock forward four years. John, Kevin, Kwin, and Tom are hard at work commercializing this next-generation user interface. It’s magical to watch John control a set of complex applications projected on the screen with his hands. No mouse, no keyboard – just gestures. All that was missing was speech – and, for someone who has spent some time working on speech-related companies, it’s pretty clear where that could fit in.
Pause. Ponder. After a few minutes, John gave the gloves to me and taught me the UI. It took about five minutes for me to get comfortable (probably less time than it takes a window-and-mouse novice to get the hang of the Windows / Mac UI). While I had some trouble with my non-dominant hand (right hand), I could feel the “brain wiring” taking place as I got more and more comfortable working with the applications.
These weren’t trivial applications. A few of them were set up just to demonstrate the UI characteristics. But – there were deeper ones that included a 3-D view of LA (think pan and zoom, along with annotating objects). Or – a time-sequenced example of traffic moving down the street (time forward, time back, pan, zoom). Or – a time-sequenced map of the world showing all flight patterns over an elapsed period of time, including selecting specific origins and destinations to filter the data.
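To be clear about the software side: the filtering in that last demo is conceptually simple – the magic is in driving it with gestures in real time. Here’s a toy sketch of that kind of origin / destination / time-window filter; the records, fields, and airports are invented for illustration and have nothing to do with Oblong’s actual code.

```python
from datetime import datetime

# Hypothetical flight records standing in for the demo's dataset.
flights = [
    {"origin": "LAX", "dest": "JFK", "departed": datetime(2006, 11, 1, 8, 30)},
    {"origin": "LAX", "dest": "SFO", "departed": datetime(2006, 11, 1, 9, 15)},
    {"origin": "DEN", "dest": "JFK", "departed": datetime(2006, 11, 1, 11, 5)},
]

def filter_flights(flights, origin=None, dest=None, start=None, end=None):
    """Yield flights matching the selected origin, destination, and time window."""
    for f in flights:
        if origin and f["origin"] != origin:
            continue
        if dest and f["dest"] != dest:
            continue
        if start and f["departed"] < start:
            continue
        if end and f["departed"] > end:
            continue
        yield f

# Everything leaving LAX during the morning of November 1:
for f in filter_flights(flights, origin="LAX",
                        start=datetime(2006, 11, 1, 6, 0),
                        end=datetime(2006, 11, 1, 12, 0)):
    print(f["origin"], "->", f["dest"], f["departed"])
```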
All of this was running on top of a Mac G5.
We went out for sushi afterwards and talked about it for several more hours. I’m 40 years old. In my life, there have been only two major UI paradigms that I’ve interacted with. The first was character-based ASCII terminals and keyboards (circa 1979). The second was WIMP (ironically, I saw a Xerox Alto around 1980 – so while the Mac was the first popular WIMP UI, I actually saw a WIMP UI before that). Of course, there were punch cards and toggle switches before that – but let’s start in 1977, when the Apple II – arguably the first mainstream personal computer – came out.
So – 1977 – character-based becomes mainstream. 1984 – WIMP appears, but probably doesn’t really become mainstream until Windows 3.0, around 1990. Speech – which has been stumble-fucking around since I was a kid – is still not mainstream (HAL – “I feel much better now, I really do.”). I suppose you could argue that there is a new paradigm for handheld devices, but it’s so poor that it’s hard to consider it an innovation. Twenty years and we’ve got nothing that is a discontinuous UI paradigm.
John, Kevin, Kwin, and Tom are inventing it right now. Awesome.
I’m personally going to boycott the phrase “Web 3.0” since “Web 2.0” makes me tired enough. There have been some great quips going around about this, including Gordon Weakliem’s “I haven’t even gotten around to upgrading to Web 1.0 Service Pack 2”, Michael Parekh’s “Web 2007 versions”, Peter Rip’s “Web 2.0 + 1”, and Nick Bradbury’s “Web 3.0 Does Not Validate.” While I recognize the inevitability of the newest increment of the Web x.0 label, I don’t have to like it.
Dion Hinchcliffe has an interesting post up on ZDNet titled Is IBM making enterprise mashups respectable? Dion enumerates five styles of mashups: presentation, client-side data, client-side software, server-side software, and server-side data. It’s a good segmentation and a useful framework for describing the integration layers between web services.
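To make one of those styles concrete, here’s a sketch of a server-side data mashup – the style where two independent services are fetched and joined on the server before anything reaches the browser. The feeds, field names, and join key below are all hypothetical; inline JSON strings stand in for the real HTTP responses so the sketch runs on its own.

```python
import json

# Inline JSON standing in for two independent web services' responses.
weather_response = json.loads("""
[{"city": "Boulder", "temp_f": 54}, {"city": "Denver", "temp_f": 58}]
""")
events_response = json.loads("""
[{"city": "Boulder", "event": "Startup meetup"},
 {"city": "Boulder", "event": "Trail run"},
 {"city": "Aspen", "event": "Film festival"}]
""")

def mashup(weather, events):
    # Server-side data mashup: join the two feeds on a shared key (city)
    # into one combined document for the presentation layer.
    by_city = {w["city"]: {"weather": w, "events": []} for w in weather}
    for e in events:
        if e["city"] in by_city:
            by_city[e["city"]]["events"].append(e)
    return by_city

print(json.dumps(mashup(weather_response, events_response), indent=2))
```

The client-side data variant would do the same two fetches in the browser instead, trading server load for cross-domain headaches.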