Brad Feld

Tag: HCI

I’ve been hinting about a new conference that we’ve been working on with Eric Norlin that complements Defrag and Glue.  Eric is about to launch it and the splash page for the Blur Conference is up.

If you are familiar with Defrag and Glue, you know they are built around two of Foundry Group’s themes (Protocol and Glue respectively).  Blur is being built around our Human Computer Interaction theme, but with a twist.  Instead of simply being able to “see cool stuff up close”, our goal with Blur will be to create an environment where you can actually use and work with this stuff.  We’ll have user-oriented demos, hackathons, and tons of crazy shit no one has ever seen before.

Plus, we’ll give away a lot of cool toys, have a ton of smart people who are working on the next generation of HCI in one place, and have some fun surprises.  And we are doing it in an environment that is especially tuned for a conference like Blur.

I’m incredibly excited about what Eric has put together for this year’s Glue Conference (as I wrote about the other day).  He’s setting a high bar for Blur, where the goal will now be to have a few brains explode!  Get ready – it’s never dull around here.


If you are a long time reader of this blog, you know that I’m a huge believer that the way we interact with computers in 20 years will be radically different than how we interact with them today.  I’ve put my money where my mouth is as Foundry Group has invested in a number of companies around human computer interaction, including Oblong.

For the past few years, every time someone talks about next generation user interfaces, a reference to the movie Minority Report pops up.  Sometimes the writer gets this right and links it back to John Underkoffler, the co-founder of Oblong, but many times they don’t.  Today the NY Times got it right in their article You, Too, Can Soon Be Like Tom Cruise in ‘Minority Report’.

John Underkoffler, who helped create the gesture-based computer interface imagined in the film “Minority Report,” has brought that technology to real life. He gave a demonstration at the TED Conference in Long Beach, Calif., on Friday.

That’s a picture of John Underkoffler at TED on Friday giving one of his jaw-dropping demos of Oblong’s g-speak spatial operating environment.  Lest you think this is science fiction, I can assure you that Oblong has several major customers, is generating meaningful revenue, and is poised to enter several mainstream markets with g-speak derived products.

The company has been steadily building momentum over the past few years since we invested.  The TechCrunch article The iPad Is Step 1 In The Future Of Computing. This Is Step 2 (Or 3) gives you a little of the history.  More of the history is in Oblong’s post origins: arriving here, which goes back to 1994.  I personally have stories going back to 1984 when I first met John, but we’ll save those for another day.

While there is an amazing amount of interesting stuff suddenly going on around HCI (and we have invested in a few other companies around this), Oblong is shipping step 2 and about to ship step 3 while most are working on step 1.  As John likes to say, “the old model of one human, one machine, one mouse, one screen is passé.”


I weigh 209.4 this morning.  That’s down from 220 when I Declared A Jihad on My Weight on 10/27/08 although it doesn’t look like I’ll make my Anti-Charity goal of 200 by 1/31/09 (more on that in a post on 2/1/09).

I was thinking about my weight this morning as I entered it into the online system at GoWear.  I thought about it again when I entered it into Gyminee.  And then into Daytum.  I’m going for a run in a little while so I’ll enter it again into TrainingPeaks.

Here’s what I’m doing:

  1. Go to the appropriate web site.
  2. Choose the appropriate place to enter the data.
  3. Type 209.4 and press return.

Four times.  Every day.  Pretty ridiculous.  If you reduce the data to its core elements, they are:

  1. Web site id [GoWear, Gyminee, Daytum, TrainingPeaks]
  2. User Id (almost always bfeld)
  3. Timestamp (or two fields – date, time) – automatically generated by my computer
  4. Weight
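
Reduced this way, each entry is the same tiny record typed four times over.  Here’s a quick sketch in Python of what that record looks like — names like `WeightEntry` are hypothetical, purely to make the redundancy concrete:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WeightEntry:
    """One morning's measurement, reduced to its core elements."""
    site_id: str       # which web site: GoWear, Gyminee, Daytum, TrainingPeaks
    user_id: str       # almost always "bfeld"
    weight_lbs: float  # the only value actually measured
    timestamp: datetime = field(default_factory=datetime.now)  # generated automatically

# The same 209.4 gets typed into four separate systems:
entries = [WeightEntry(site, "bfeld", 209.4)
           for site in ("GoWear", "Gyminee", "Daytum", "TrainingPeaks")]
```

Three of the four fields are either constant or machine-generated; only the weight itself carries new information.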

The only actual piece of data that I need to measure is weight.  I measure this by standing on a scale each morning.  The scale is a fancy one – it cost about $100, looks pretty, and has a bunch of extra things I don’t look at such as BMI.  I have an identical scale in my house in Keystone (although the battery is dead and needs to be replaced).

Some day, in the future, I’ll be able to step on the scale.  And that will be it.  My weight will automatically go into whatever online systems I want it to.  I won’t have to do anything else. 

Of course, one of the assumptions is that my scale(s) are “network compatible”.  While you may joke that this is the age-old “connect my refrigerator to the Internet problem” (and it is), I think it’s finally time for this to happen.  As broadband and wifi become increasingly ubiquitous and inexpensive, there is no reason any electronic device shouldn’t be IP connected, in the same way that microprocessors are now everywhere and pretty much everything has a clock in it (even if some of them still blink 12:00.)

So, accept this assumption.  Then, I’m really only talking about a “Brad-centric” data payload.  While I’ll have a lot more data than simply weight that I might want in my payload, let’s start with the simple case (weight).  Right now, we are living in a system-centric world where data is linked first to a system and then to a user.  Specifically, you have to operate in the context of the system to create data for a user.

Why not flip this?  Make things user-centric.  I enter my data (or a machine / device collects my data.)  I can configure my data inputs to feed data into “my data store” (which should live on the web / in the cloud).  Then, systems can grab data from my data store automatically.  All I have to do is “wire them up” which is a UI problem that – if someone is forward thinking enough – could also be solved with a single horizontal system that everyone adopts.
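
As a rough sketch of what that “wire them up” step could look like — all the names here (`PersonalDataStore`, `wire_up`, `record`) are made up for illustration, not any real system:

```python
from datetime import datetime

class PersonalDataStore:
    """A minimal sketch of a user-centric data store: the user (or a
    device) records a value once, and every system wired up to that
    feed gets it automatically. Purely illustrative."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.feeds = {}        # feed name -> list of (timestamp, value)
        self.subscribers = {}  # feed name -> callbacks for "wired up" systems

    def wire_up(self, feed, system_callback):
        """Connect an outside system to one of my data feeds."""
        self.subscribers.setdefault(feed, []).append(system_callback)

    def record(self, feed, value, when=None):
        """Record a measurement once; push it to every wired-up system."""
        when = when or datetime.now()
        self.feeds.setdefault(feed, []).append((when, value))
        for notify in self.subscribers.get(feed, []):
            notify(self.user_id, when, value)

# Step on the scale once; four systems get the number with no retyping.
store = PersonalDataStore("bfeld")
received = []
for site in ("GoWear", "Gyminee", "Daytum", "TrainingPeaks"):
    store.wire_up("weight", lambda uid, ts, v, site=site: received.append((site, v)))
store.record("weight", 209.4)
```

The inversion is the whole point: the data lives with the user, and the systems are just subscribers to it, instead of the data living in four systems with the user retyping it into each.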

Right now there is a huge amount of activity around the inverse of this problem – taking widely diffuse data and re-aggregating it around a single user id.  This deals with today’s current reality of how data is generated (system-centric) but doesn’t feel sustainable to me as user-generated data continues to proliferate geometrically.

Enough.  As I said in my tweet earlier today, “thinking about data.  thinking about running.  thinking about thinking.”  Time to go run and generate some more data.


This year at Sundance, Oblong unveiled Tamper.  The Tamper application is a gestural interface for cinematic design.  It is built on Oblong’s g-speak spatial operating environment and is a fun example of how Oblong’s core technology can be applied to a film editing system.

Tamper is part of the New Frontier on Main exhibit located at 333 Main Street on the lower level.  Oblong has set up a channel on YouTube to show some of the various videos that folks at Sundance are making with Tamper. 

I love working with these guys – they are mind-bendingly creative.