I love data. And I adore playing with it graphically, as I learn a lot from graphing longitudinal data about things I’m involved in. However, I find that almost all of the web services I use suck at providing visualization / graphing tools for their data. For example, I’ve never really found any of the graphing options in any of the running software I use satisfying or useful.
I’ve known about Tableau Software for a long time. The CEO and founder is Christian Chabot – we worked together at Softbank Venture Capital. Tableau has built a significant software company, and when Christian called me up to ask if they could play around with some of my running data as part of the launch of their new web-based services, I agreed.
The hardest part of this exercise was getting granular running data out of the various systems that I keep it in. I use a Garmin watch and have very detailed GPS and heart rate data on every run I do. However, the two primary systems I store this data in (MotionBased and TrainingPeaks) have abysmal data export systems. After fighting with them for a while, I eventually did the equivalent of “scraping” the data by exporting the data underlying a bunch of individual runs.
Once I got the data out, Tableau was pretty amazing. It was extremely easy to use (in comparison – say – to Microsoft Excel where you can spend hours and still not get the format you want.) And – it was extremely fast.
After I played around with it some, the data wizards at Tableau took over and created the widget that you see above. There are a few things to note about it:
Tableau has been around for years and has thousands of customers, but visualizations like these are still in private beta as they make sure they hammer out all the bugs on their latest release. I’m not an investor, but based on what I see I wish I was. Nicely done Tableau (and Christian).
I’ve never been great at remembering names the first time I meet someone. I’ve tried all the usual tricks and haven’t found one that works. For me, it’s a matter of meeting regularly – eventually I’ll start to remember your name.
The most embarrassing part is forgetting the names of the spouses of people I work with. For whatever reason, I only have room in my brain for one of the two names (gender doesn’t matter). I’ve solved this by referring to wives as “Tiffany” and husbands as “Herman”. So far, 100% of the time, I get the correct name out of the spouse, along with a chuckle. I figure that on the day that I don’t, that person’s name actually is either Tiffany or Herman.
For a while I managed to associate names with images of people in my head and use that to help me a little. I’ve realized that all the faces on the Internet have completely ruined this for me. The namespace in my brain is now full of name:image pairs and – in addition to no longer having room for names – I no longer have room for images either. I went to the Apple Store the other day and asked for some extra memory, but they didn’t have a type that was technically compatible with my brain.
So – please don’t be offended when I call you Tiffany or Herman. More importantly, when you come up to me and say hi, if you don’t get a “hey <correct name>” back, remind me of your name and affiliation. If it’s comfortable, I’ll often say “remind me who you are”, although there are many circumstances where this either doesn’t work or I forget.
As Tim discovered the other day, if you say “hi, I’m Tim” enough times, I’ll eventually replace some other name:image pair in my brain with yours.
I heard the line “it’s all about the faces” from someone in the past few weeks. The line stuck with me and I’ve been thinking about it a lot lately. Last night I tried an experiment and changed my twitter avatar to a graphic done by Anthony Dimitre, a really talented local designer. While there was plenty of positive feedback, there was also a lot of “I don’t like it” feedback, including the tweet “i think the avatar makes you seem less accessible than a normal pic” from @joshpayne.
While I like the avatar that Anthony made for me, Josh’s comment rang true and I changed the avatar back to the photo I’ve been using. Now, I might need a new photo (or a new face for that matter), but that’s a different issue. When I think about my experience on the web, there is no question that photos make people feel more real and accessible.
When I got my iPhone, I started taking quick pictures of my friends and family and adding these pictures to their contact record. These photos got synced with Outlook and ended up in the top right corner of my emails from these folks (in addition to showing up on my phone whenever they called me.) This was cool, but it forced me to take pictures of people and go through a convoluted UI experience to get the pictures associated with their contact record.
Even though this was a lot of extra work, the power of the photo matters. I’m happier when I see Amy’s picture pop up on my phone. Or, when my partner Jason calls me, I remember our great dinner at Uchi in Austin a few months ago (his photo was taken in front of the sign late at night.) When I ponder the rise of Facebook and Twitter, and reflect on the early coolness of MyBlogLog, the power of the photo seems very real.
This hit home with me during the most recent two week iteration for Gist. I get the new features between one and two weeks before everyone else (they do a release every two weeks) and an awesome new one has appeared. If a contact record appears without a photo (I guess I should call it an avatar), I have a chance to add a new image from a Google search. Suddenly, between the data Gist imports from Facebook and Twitter, and the photos it is finding for me on the web, many of my most recently used contacts have photos that appear whenever I interact with them. I have a real, positive emotional response to this.
Now, this data isn’t yet syncing back with my email contact list, so I’m only seeing it when I either go into Gist or open up the Gist Dossier in Outlook. That just makes it even more noticeable that it’s missing from within my inbox (which is my most actively used form of communication.) But – that’s just a matter of time.
As the social web continues its extraordinary growth, “faces” seem to be a small, but critically important part of it.
Well – that serves me right. If you requested a Gist beta invite, be patient. I’m grinding through my inbox and you’ll have your invite by tomorrow at the latest. Thanks everyone who requested one, especially for all the kind feedback on the blog.
But that’s not what I’m thinking about this morning. Last week I read an intro O’Reilly book to HCI called Designing Gestural Interfaces: Touchscreens and Interactive Devices. It was ok, but one of the insights – that the public restroom has become a test bed for gestural interface technology – really stuck with me.
I found myself in a restroom at DIA last night before I got in my car for the hour long drive home. I generally hate public restrooms as my OCD kicks into high gear around everyone’s germs. I no longer think that bad things are going to happen to me if I don’t touch every street sign on a walk, nor do I get stuck in my house in the morning because I have to do everything in multiples of three (and – if I blow it, then nines, and, if I blow it again, then 27s – ugh, yuck.) However, I still dislike the idea of the public restroom. But sometimes you’ve just gotta go.
It was pretty late at night and I found myself in a recently cleaned and completely empty restroom at one end of Level 6 at DIA. I decided to perform an experiment – could I go about my business without touching a single thing other than myself or my clothes. I like to wash my hands before I go to the bathroom (You don’t? Think about it for five seconds. You’ve been shaking hands and touching things all day? C’mon.) The soap dispenser spit out soap after I put my hands under it. The sink automatically turned on when I put my hands under it (I had to move them around a little.) I walked up to the toilet, did my thing, and walked away to the sound of a toilet flushing. Back to the sink for a redo of the previous drill. I wandered over to the towel dispenser which automatically dispensed some towels when I waved my hands under it.
The only thing I had to touch was the door. Even that seems easy to solve – automatic opening and closing doors have been around forever. None of the gestures I did were particularly complex and – as I think about it – all were pretty obvious.
Life is a laboratory. Don’t forget to always be exploring and experimenting.
A few weeks ago I wrote a post titled The Maturing of the Implicit Web. In it I talked about new releases from AdaptiveBlue and OneRiot. As I sit here in my hotel room in Seattle waiting for TA McCann (CEO of Gist) to show up for our pre-board meeting run, I’m pondering how much work I’m starting to get the web to do for me.
For example, as part of my morning information routine, I go through my Gist dashboard. This is a list of all the new information that Gist has found from a wide variety of data sources about people and companies in my social network. It derives the social network from my email inbox, integrates it with my Facebook, LinkedIn, and Twitter social graphs, and then presents it to me in a way that is prioritized by what it thinks I find most interesting. The level of relevance to me is amazing now that I’ve had it running for a few months. While Gist is still in closed beta, if you want an invite just email me.
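Gist’s actual ranking logic is proprietary, but the basic idea – prioritize the people you email most often and most recently – can be sketched in a few lines. Everything here (the names, the exponential decay, the 30-day half-life) is my own illustrative assumption, not Gist’s algorithm:

```python
from collections import Counter
from datetime import datetime, timedelta

def rank_contacts(messages, now, half_life_days=30.0):
    """Score each sender by interaction volume, decayed by recency.

    `messages` is a list of (sender, timestamp) pairs. A message loses
    half its weight every `half_life_days`, so people who are both
    frequent and recent float to the top of the dashboard.
    """
    scores = Counter()
    for sender, when in messages:
        age_days = (now - when).total_seconds() / 86400.0
        scores[sender] += 0.5 ** (age_days / half_life_days)  # exponential decay
    return [name for name, _ in scores.most_common()]

now = datetime(2009, 5, 1)
inbox = [
    ("amy", now - timedelta(days=1)),
    ("amy", now - timedelta(days=2)),
    ("jason", now - timedelta(days=200)),
]
print(rank_contacts(inbox, now))  # "amy" ranks first: frequent and recent
```

Decay like this is a common way to keep a dashboard from being dominated by someone you corresponded with heavily a year ago but never talk to now.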
Gist is synthesizing the data for me from a variety of other web services. At our board meeting today we have a long list of potential partners and data services that we prioritize based on (a) quality of data, (b) availability of data, and (c) ability to integrate the data. Exactly one year ago I wrote a post titled No API? You Suck! I feel so strongly about this that if I wrote the post again today I would title it “No API? You Really Suck!”
One of the data sources that has a strong API layer is our company Gnip. They recently announced that they will be integrating data from the Facebook Platform into their data set. This comes on the heels of their announcement about adding WordPress as a data publisher to their system. Gnip now has over 30 data publishers actively flowing through their system and has found rapid adoption from a number of interested customers. Oh – and Gnip’s API is well documented, public, and evolving rapidly.
So – it was with great pleasure that I saw Alex Iskold’s announcement that there is now a Glue API. There is a tremendous amount of interesting semantic data in AdaptiveBlue’s Glue system – the API liberates it for anyone who wants to put some energy into working with it.
But wait, there’s more. OneRiot also just released their API, which – while not public yet – is available by request. OneRiot has a fascinating set of real time data available via a search interface that gets better and more relevant every week. They’ve also demonstrated that they can build to search scale as they have some superb technical search folks on their team.
Gang – thanks for not sucking. Y’all are helping set things up so the web does more of the work for me!
Do you remember when the blink tag appeared on the scene? I do. I was giving a reference on a great CEO I’ve worked with since 1996. The VC I was talking to and I took a trip down memory lane to 1994 or so and the blink tag came up. I wondered if it still worked – let’s see.
I’ve always hated the fucking blink tag. It’s so obnoxious.
Well – that’s interesting – it doesn’t render in Windows Live Writer in either the edit or the preview view. I just looked the blink tag up on Wikipedia. I love the quote by its inventor – Lou Montulli: he considers it to be “the worst thing I’ve ever done for the Internet.”
Follow Up: Blink works in Firefox but doesn’t seem to work in IE 7 or Chrome or Safari on my iPhone. What’s up with that? How can we maintain the integrity of the universe (and subsequently the Internet) if the blink tag vanishes? Anyone know the story on Opera?
I’ve been fascinated with the notion of the Implicit Web since I determined that I was tired of my computer (and the Internet in general) being stupid. I wanted it (my computer as well as the Internet) to pay attention to what I, and others, were doing. Theoretically, “my compute infrastructure” should learn, automate repeated tasks, figure out what information I actually want, and make sure I get it when I want it.
In 20 years, I expect we will snicker at the idea of having to go search for information by typing a few words into a text box on the screen. It’s way better than 20 years ago, but when you step back and think about it, it’s pretty lame. I mean, I’ve got this incredible computer on my desk, a gazillion servers in the cloud, this awesome social network, yet I find myself typing the same stuff into little boxes over and over again. Ok – it’s all pretty incredible given that it wasn’t so long ago that people had to rub sticks together to get fire, but can’t it be amazing and lame at the same time?
Several companies that I’ve got a personal investment in that play in and around the implicit web recently came out with new releases that I’m pretty excited about; each addresses different problems, but does it in elegant and clever ways.
The first – OneRiot – came out with a new twist on using Twitter for search. OneRiot’s goal is to provide a search engine for the real time web. To that end, they’ve historically gotten their data on what people are looking at from a collection of browser-based sensors (anonymous, opt-in only). They’ve built a unique search infrastructure that takes into account a variety of factors, including the number of people on a specific URL in a particular time period, freshness of the content, and typical content weighting algorithms. A little while ago they realized that people were tweeting a huge number of URLs, mostly via URL shorteners (which are loathed by some very smart people.) Twitter search addresses keywords in the tweet, but it doesn’t do anything with the URLs, especially the shortened ones. So, OneRiot built a pre-processor that grabs tweets from Twitter’s API that include a URL, tosses the shortened URL into OneRiot’s search corpus (which expands the URL and indexes the full page text), and then references it back to the original tweet. It also correlates all tweets with the same URL (including re-tweets) across any URL shortening service. Now, imagine incorporating any real time URL data source that has an API, such as Digg. Aha! It’s alpha so forgive it if it breaks – but give it a try.
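The expand-and-correlate step of a pre-processor like that can be sketched roughly as follows. The shortener list and redirect table here are hypothetical stand-ins – a real implementation would follow the actual HTTP redirects and would also fetch and index the target page:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Stand-ins for live redirect resolution; a real pre-processor would
# issue an HTTP request and chase the Location headers instead.
SHORTENER_HOSTS = {"bit.ly", "tinyurl.com", "is.gd"}
REDIRECTS = {
    "http://bit.ly/abc": "http://example.com/post",
    "http://is.gd/xyz": "http://example.com/post",
}

def expand(url):
    """Resolve a shortened URL to its canonical target; pass others through."""
    if urlparse(url).netloc in SHORTENER_HOSTS:
        return REDIRECTS.get(url, url)
    return url

def correlate(tweets):
    """Group tweets (including re-tweets) by the canonical URL they link
    to, regardless of which shortening service was used."""
    by_url = defaultdict(list)
    for author, url in tweets:
        by_url[expand(url)].append(author)
    return dict(by_url)

tweets = [("@a", "http://bit.ly/abc"), ("@b", "http://is.gd/xyz")]
print(correlate(tweets))  # both tweets collapse onto http://example.com/post
```

The key point is that two different shorteners pointing at the same page end up in one bucket, which is exactly what plain keyword search over tweet text can’t do.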
The second – AdaptiveBlue – has released their newest version of Glue. Glue is a contextual network that uses semantic technology to automatically connect people around everyday things such as books, music, movies, stars, artists, stocks, wine, and restaurants. It uses a browser-based plugin to build this contextual network implicitly. When you are on a site such as Amazon, Last.fm, Netflix, Yahoo! Finance, Wine.com, or Citysearch, the Glue bar automatically appears when it recognizes an appropriate object, categorizes it, and lets you take specific action on it if you want. Glue has been evolving nicely and now includes the idea of connected conversations between friends (e.g. talk about whatever you like regardless of the site you are visiting), smart recommendations (e.g. implicit recommendations), and web wide top lists of the aggregated activity of all Glue users.
In addition, we’ve finally found a company that we think is attacking a wide swath of the problem of the Implicit Web the correct way, at least given today’s technology. We hope to close the investment and start talking publicly about it early next month.
For now, I expect the applications around the Implicit Web to continue to fall into the early adopter / you need to see it to believe it category (where it’s harder to explain than just to show). In the near term, if you are interested in this area, try out OneRiot and Glue – they are both evolving and maturing very nicely.
Sorry – I just could not help myself when crafting that title. I wonder if it’ll get good SEO juice. And yes – it was a really sunny day today in Boulder.
By now everyone in the tech universe knows that Oracle has signed an agreement to acquire Sun. I – for one – did not see that coming. There has already been plenty of analysis on the good and the bad of it from a tech industry perspective. However, I haven’t seen much commentary on what it means for the Colorado tech community.
If you live outside Colorado, your first reaction is probably “who cares.” However, did you know that both companies have their second largest US operations in Colorado? Nope, that hadn’t occurred to me either until I met with senior execs at both companies two weeks ago with Colorado Governor Bill Ritter. Think about it – two native Silicon Valley companies have their second largest US operations in Colorado.
During my two day trip to Silicon Valley with Governor Ritter, Don Elliman (head of the Colorado office of economic development), and Mike Locatis (Colorado CIO), we had about a dozen meetings. The Sun and Oracle meetings were uniquely interesting because of the large presence each company has in Colorado. Sun’s comes from a combination of organic growth and their acquisition of StorageTek; Oracle’s comes from organic growth and their acquisitions of PeopleSoft (which had previously acquired JD Edwards), BEA (which had a good sized operation in Boulder that resulted from two other acquisitions), and Hyperion (which had previously acquired Decisioneering). I don’t have the exact number of total employees of both companies in Colorado but I’m guessing it’s around 10,000 with a heavy concentration of them near Boulder, Denver, and Colorado Springs.
In both the Oracle and Sun meetings, the executives that we met with were extremely enthusiastic about their teams in Colorado. Even if you discount their enthusiasm based on the fact they were talking to the governor of Colorado, it was sincere and substantiated by their perspectives on the capability, quality, and loyalty of their Colorado-based workforces. It was clear that regardless of future acquisition activity, both companies had plans to continue to grow their bases in Colorado.
Now, acquisitions are always complex and this one isn’t expected to close until sometime this summer. However, given the existing presence of both companies in Colorado, I expect there will be additional focus on the appropriate integration dynamics. While they will likely include some rationalization of people and facilities, I expect it will be healthy for the long term growth of Colorado as a technology center, especially given the positive experiences each company has had with large workforces in Colorado.
One of the neat things about business is that it runs in cycles. I’ve been involved in the software business since 1985 when I started my first company. Since then, I’ve seen numerous cycles with a wide range of amplitudes. I don’t try to time any of the cycles, the peaks, or the troughs; rather I just invest and work hard all the way through each of the cycles.
Given the negative sentiment across many parts of the business universe (e.g. the NY Times headline this morning Jobless Rate Hits 8.5%; 663,000 Jobs Lost), I’ve been pleasantly surprised by the Q1 performance of many of the companies I’m an investor in. Several had record quarters, most made or beat their Q1 plan (obviously the easiest plan of the year to make since it’s usually baked near the beginning of the quarter), and a couple had extraordinary growth that surprised everyone. Some struggled, but when I look at the overall distribution of behavior across our entire portfolio, it was kind of what you’d expect from a typical economic environment versus one that is either distressed or bubbly.
This morning during my daily information consumption routine, I noticed four things that stuck out.
All of these are nice leading indicators that the exit environment for tech companies, which has been frozen for the past few quarters, is starting to thaw out. It’ll be interesting to see if this is just a warm day mid-winter or actually the beginning of spring.