Speech Recognition Is Only Part of the Future
A week or so ago, Fred Wilson dictated a blog post on his Nexus One phone. He then discovered Swype, which now has an unofficial Android app. As usual, the comment threads on AVC were very active, with lots of thoughts about the future (and past) of voice and keyboard input.
When I talk about human-computer interaction, I regularly say that “20 years from now, we will look back on the mouse and keyboard as input devices the same way we currently look back on punch cards.”
While I don’t have a problem with mice and keyboards, I think we are locked into a totally sucky paradigm. The whole idea of having a software QWERTY keyboard on an iPhone amuses me to no end. Yeah – I’ve taught myself to type pretty quickly on it, but when I think of the information I’m trying to get into the phone, typing seems so totally outmoded.
Last year at CES, “gestural input” was all the rage in the major CE booths (Sony, Samsung, LG, Panasonic, …). In CES speak, this primarily meant things like “changing the channel on a TV using a gesture.” This year the silly basic gesture crap was gone, replaced with IP everywhere (very important in my mind) and 3D (very cute, but not important). Elsewhere there was plenty of 2D multitouch, most notably front and center in the Microsoft and Intel booths. I didn’t see much speech, and I saw very little 3D UI stuff – one exception was the Sony booth, where our portfolio company Organic Motion had a last-minute installation, which Sony had requested, showing off markerless 3D motion capture.
So – while speech and 2D multitouch are going to be an important part of all of this, they’re a tiny part. If you want to envision what things could be like a decade from now, read Daniel Suarez’s incredible books Daemon and Freedom (TM). Or watch the following video that I just recorded from my glasses and uploaded to my computer (warning – cute dog alert).