Irony alert: A lot of this post will be incomprehensible. That’s part of the point.
I get asked to tweet out stuff multiple times a day. These requests generally fit in one of three categories:
Unless I know something about #3 or am intrigued by the email, I almost never do anything with it (other than send a polite reply saying I’m not going to do anything because I don’t know the person). With #1 and #2, I usually try to do something. When it’s in the form of “here’s a link to a tweet to RT” that’s super easy (and most desirable).
There must have been a social media online course somewhere that told people “email all the people you know with big Twitter followings and ask them to tweet something out for you. Send them examples to tweet, including a link to your product, site, or whatever you are promoting.”
Ok – that’s cool. I’m game to play as long as I think the content is interesting. But the social media online course (or consultant) forgot to explain that starting a tweet with an @ does a very significant thing. Specifically, it scopes the audience to the logical AND clause of the two sets of Twitter followers. Yeah, I know – that’s not English, but that’s part of my point.
Yesterday, someone asked me to tweet out something that said “@ericries has a blah blah blah about https://linktomything.com that’s a powerful explanation”. Now, Eric has a lot of followers. And I do also. But by doing the tweet this way, the only people who would have seen this are the people who follow Eric AND follow me. Not OR. Not +. AND.
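To put the nerd version in code: the audience is the intersection of the two follower sets. Here’s a minimal JavaScript sketch – the follower lists are obviously made up for illustration:

```javascript
// Hypothetical follower lists - the names are invented for illustration.
var myFollowers = ["alice", "bob", "carol", "dave"];
var ericsFollowers = ["carol", "dave", "erin", "frank"];

// A tweet that starts with "@ericries ..." is only seen by people who
// follow BOTH of us - the logical AND (set intersection). Not OR. AND.
var audience = myFollowers.filter(function (follower) {
  return ericsFollowers.indexOf(follower) !== -1;
});

console.log(audience); // ["carol", "dave"] - smaller than either set alone
```

Even if both of us have huge follower counts, that intersection can be a tiny fraction of the union.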
Here’s the fun part of the story. When I sent a short email to the very smart person who had asked me to tweet this out, explaining that he shouldn’t start a tweet like this since it would reach the AND clause of my followers and Eric’s followers, he jokingly responded with “that’s great – that should cover the whole world.” He interpreted my comment not as a “logical AND” but as a grammatical AND. And there’s a big difference between the two.
As web apps go completely mainstream, I see this more and more. Minor syntactic things that make sense to nerds like me (e.g. putting an @reply at the beginning of a tweet causes the result set to be the AND clause of followers for you and followers for the @reply) make no sense to normal humans, or marketing people, or academics, or – well – most everyone other than computer scientists, engineers, or logicians.
The punch line, other than don’t use @ at the beginning of a broadcast tweet if you want to reach the widest audience, is that as software people, we have to keep working as hard as we can to make this stuff just work for everyone else. The machines are coming – let’s make sure we do the best possible job with their interfaces while we can still influence them.
I spent this weekend at LindzonPalooza. Once a year Howard Lindzon gets together a bunch of his friends at the intersection of finance, tech, media, and entrepreneurship, we descend on The Del in Coronado, and have an awesome 48 hours together. Many interesting and stimulating things were said, but one I remember was from Peter Pham over dinner. It was a simple line – “why do we teach languages in junior high and high school but not a computer language?” – that had profound meaning for me.
When I was in high school, I had to take two years of a foreign language. I had three choices – French, Spanish, or German. I didn’t really want to learn any of them, so I opted for French. I hated it – rote memorization and endless tedious classes where I didn’t really understand anything. Fortunately, I liked my teacher for the first two years, did fine academically (I got an A), and ended up taking a third year of French.
Year three was a total disaster. I hated the teacher and apparently she hated me. We watched these stupid reel-to-reel movies of French cartoons aimed at English speakers trying to learn French. Beyond being boring, they were incomprehensible, at least to me. Somehow I ended up in the front row and it was my job to change the movie when it finished. One day, when I was sure the teacher was out of the room and I was changing the reel, I muttered “tu es une chienne” (“you are a bitch” – one of the few French phrases I still remember, along with “va te faire foutre,” which is “go fuck yourself.”) I was wrong – she was in the room and, after a trip to the principal’s office (the principal liked me and let me off easy), I dropped the class and took a study hall instead.
Now, before I use the old line of “I have a hard time learning languages,” I’ll call bullshit on myself since during that time I learned BASIC, Pascal, and 6502 Assembler. I was good at learning languages – I was just way more interested in computer languages than romantic European languages.
We didn’t have AP Computer Science at my school so I taught all of this to myself. But today, schools have computer science courses. And, based on what I’ve learned from my work at NCWIT, looking at course curriculums, and talking to a lot of students, most high school computer science courses suck. Part of the problem is the word “science” – they teach computer science theory, how to program in Java, math, logic, and a bunch of other things. But they don’t teach you software development, which is much more useful, and a lot more fun.
When I compare it to French 3, I wanted to learn conversational French. I probably would have enjoyed that. But the teacher, who was French, insisted on grinding us through endless grammar exercises. The movies were sort of conversational, but they obsessed over the different tenses, and we were tested endlessly on when to use tu and when to use vous, even in French 3.
I’m not a language instructor, nor do I have any interest in figuring out the best way to teach a language – computer or otherwise – but it seems to me that we are shifting into a different period, one where learning how to write software is just as important – and probably more so – to a high school student as learning to speak French, at least at the two-year course level where all you remember is a few swear words.
Am I wrong? If so, why? BTW – Google Translate quickly tells me that’s “Ai-je tort? Si oui, pourquoi.” My Babel fish is on order.
By now the blogosphere, twitterverse, and even mainstream media are abuzz with the absurd decision Yahoo has made to sue Facebook over ten software patents, with the assertion that Facebook’s entire business is based on Yahoo’s patented inventions. My partner Jason Mendelson called this on 2/28 when he wrote his post Goodbye Yahoo! It was nice knowing you, and Fred Wilson weighed in this morning with his post Yahoo! Crosses The Line.
My personal view is well known – I don’t think any of these patents are actually valid. Take a look at the analysis on PaidContent of The 10 Patents Yahoo Is Using To Sue Facebook, read the plain English descriptions, and then look at the filing dates. Now, try to make the argument that these are novel, useful, and non-obvious inventions on the part of Yahoo. For a less nuanced view, now read TechDirt’s post Delusions Of Grandeur: Yahoo Officially Sues Facebook, Laughably Argues That Facebook’s Entire Model Is Based On Yahoo.
I’m hopeful this is the beginning of the endgame of massive patent reform around software. It’s time for the entire industry to recognize that we are quickly shifting from a cold war (patents as deterrents) to a nuclear war in which – like the one in War Games – the only winning move is not to play.
I’ve decided to let a week pass while I think about what the right response to this is. Software patents have the same polarizing dynamic that SOPA/PIPA had. Our government, through laws and regulations – many of which make no sense – has created a construct with the legal industry that is untenable. Once again, we see an incumbent (Yahoo – and yes, I recognize the irony of calling Yahoo an incumbent) attacking an innovator (Facebook) with irrational weapons that have huge collateral damage, all in the name of “enhancing shareholder value.”
This is not a winnable game for Yahoo, the Internet, innovation, or society. Like nuclear war, the only winning move is not to play. However, Yahoo has now played. The next few moves are critically important.
Six weeks ago I saw a tweet about a new thing from Codecademy called Code Year. It promised to teach you how to code in 2012. I signed up right away and am now one of 396,103 people who have signed up as of this moment.
Like a good MIT graduate, I’ve procrastinated. When I was an undergrad, I liked to say things like “I want to give all of my fellow students a four to six week handicap.” Yeah – I was the dude who blew off too many classes at the beginning of the semester. I did read everything, so I eventually caught up pretty quickly, and fortunately MIT’s drop date was late in the semester, so I had plenty of option value on bailing if I’d left things too long.
Every week for the past six weeks, an email has diligently arrived from Code Year telling me my next lesson was ready. Tonight, after dinner, I decided to tackle week 1 and see if this was an effective way to learn JavaScript. While I’m able to program in PHP and Python, I’ve never learned JavaScript. I’m not really proficient in either language (PHP or Python) since I don’t write any production code anymore, but I’m comfortable with the syntax and have done my share of simple little things. JavaScript – not so much. And that seems silly to me since so many people that I hang out with eat JavaScript for breakfast.
Week 1 was trivial. I liked the Code Year lessons and the Codecademy UI is very good. I’ve scored 350 points, completed 42 (simple) exercises, and earned four badges. I only did the lessons for week 1; I’ve left the problems for another time to see if the syntax actually sinks in.
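For flavor, the week 1 material is roughly this kind of thing – my reconstruction, not an actual Codecademy exercise:

```javascript
// Roughly the flavor of the week 1 lessons (my reconstruction, not an
// actual Codecademy exercise): variables, strings, and a conditional.
var name = "Brad";
var greeting = "Hello, " + name + "!";

if (name.length > 3) {
  console.log(greeting); // "Brad" has 4 characters, so this branch runs
} else {
  console.log("That's a short name.");
}
```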
I’ve been spending some time with friends in Boulder talking about different approaches to teaching people how to program. One of the initiatives, which is starting to pick up speed, is called the Boulder Software School. Within the group we’ve been pointing at things like Code Year, but I don’t know if any of us have actually given it a shot. So – in the spirit of experimenting, I’m on it. It’ll be interesting to see if any real proficiency with JavaScript emerges, or if I just learn the syntax for another programming language that puts curly brackets in funny places to delimit conditional statements.
In December I wrote a post titled It’s Not About Having The Most Friends, It’s About Having The Best Friends. Since then I’ve been systematically modifying my social networking behavior and cleaning up my various social graphs. As a significant content generator in a variety of forms (blogs, books, tweets, videos) and a massive content consumer, I found that my historical approach of social network promiscuity wasn’t working well for me in terms of surfacing information.
I made two major changes to the way I use various social networks. I went through each one and categorized it on three dimensions: (1) consumption vs. broadcast, (2) public vs. private, and (3) selective vs. promiscuous. These are not binary choices – I can be both a content consumer and a broadcaster on the same social network, but I’ll use it differently depending on where on the spectrum I am.
For example, consider Facebook. I determined I was in the middle of the consumption/broadcast spectrum, public, and selective. With Foursquare, I determined I was closer to broadcast, private, and very selective. With LinkedIn, I was 100% broadcast, public, and promiscuous. With Twitter, I was similar to Facebook, but much wider on broadcast and promiscuous. With RunKeeper, very strong on broadcast, public, but selective.
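If it helps to see the framework as data, here’s the same categorization sketched out – these are my rough positions on each spectrum, not precise values:

```javascript
// My per-network positions on the three dimensions, encoded as data.
// Rough positions on a spectrum, not precise values.
var networks = {
  facebook:   { mode: "consume + broadcast", visibility: "public",  friending: "selective" },
  foursquare: { mode: "mostly broadcast",    visibility: "private", friending: "very selective" },
  linkedin:   { mode: "broadcast",           visibility: "public",  friending: "promiscuous" },
  twitter:    { mode: "consume + broadcast (much wider)", visibility: "public", friending: "promiscuous" },
  runkeeper:  { mode: "strong broadcast",    visibility: "public",  friending: "selective" }
};
```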
I then looked at the tools I was using. Yesterday I noticed Fred Wilson’s post The Black Hole Of Email, and it reminded me that I view email as my primary communication channel for broad accessibility (I try to answer every email I get within 24 hours – if it takes longer you know I’m on the road or got behind) and often respond within minutes if I’m in front of my computer. But I’ve worked very hard to cut all of the noise out of my email channel – I have no email subscriptions (thanks OtherInBox), I get no spam (thanks Postini), I run zero inbox (read and reply / archive immediately), and I’m very selective with the notifications I get via email (i.e. I check Meetup.com daily, but the only email notifications I get are for Boulder Is For Robots.) As a result, I find email manageable and a powerful / simple comm channel for me.
Tuning each social network has ranged from trivial (15 minutes with RunKeeper and I was in a happy place) to medium (Foursquare took an hour to clean up my 800+ friends to 100-ish) to extremely painful (going from 3000 Facebook friends to a useful set seemed overwhelming.) I decided to clean up the easy ones first and then come up with manual algorithms for the harder ones.
My favorite approach is what I’m doing with Facebook. Every day I go into the Events tab and look at the birthday list. I then unfriend the people whose names I don’t recognize or who I don’t want to consume in my news feed. Since Facebook’s social graph is on the public side, people can still follow me (à la a Twitter follow). I view this as a reverse birthday gift which probably enhances both of our lives.
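The routine is simple enough to write down as code. This is purely illustrative – the helpers below are hypothetical stand-ins, not a real Facebook API:

```javascript
// Purely illustrative - these helpers are hypothetical stand-ins, not a
// real Facebook API. The birthday list would come from the Events tab.
var friendsIKeep = ["alice", "bob"]; // hypothetical: names I recognize
                                     // and want in my news feed

function todaysBirthdays() {
  return ["alice", "zork the spambot"]; // hypothetical birthday list
}

function iRecognizeAndWant(person) {
  return friendsIKeep.indexOf(person) !== -1;
}

function unfriend(person) {
  console.log("unfriended: " + person); // stand-in for the real action
}

// The daily routine: scan today's birthday list, keep the people I
// recognize and want to consume, drop the rest (they can still follow me).
todaysBirthdays().forEach(function (person) {
  if (!iRecognizeAndWant(person)) {
    unfriend(person); // prints "unfriended: zork the spambot"
  }
});
```

Run daily, every friend comes up for review once a year, which is why this approach cleans up the whole graph over roughly twelve months.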
In contrast, I’ve continued to just accept all LinkedIn requests except from obvious recruiters or people who look like spambots. I know recruiters can pay to get access to my social graph – that’s fine – I want them to have to pay someone or work a little for it, not just get it for free. But the benefit of having a wide social graph on LinkedIn, for the one time a week I use it to hunt someone down somewhere, far outweighs the pain of being promiscuous.
I’ve continued to find and use other tools for managing all the data. One of my new favorites is Engag.io. Rather than getting a stream of Facebook email notifications, I check it once a day and respond to everything that I see. I’ve noticed that I find comments in other services like Foursquare that I was previously missing, and rather than having a pile of clutter in my inbox, I can interact with it once a day for ten minutes.
When I reflect on my approach, it doesn’t surprise me that it’s very algorithmic. That’s how I’ve always driven my content consumption / content generation world, and it’s part of the reason it doesn’t overwhelm me. Sure – it spikes up at times and becomes less useful / more chaotic (like it did last year when I realized Facebook wasn’t really useful for me anymore.) This causes me to step back, figure out a new set of algorithms, and get it newly tamed. And yes, Facebook is now much more useful and interesting to me after only a few months of cleanup.
I’m always looking for new tools and approaches to this so if you have a great one, please tell me. For example, the “unfriend on birthdays” approach was suggested several times in the comments to one of the posts and after trying a Greasemonkey plugin, manual unfriending on the iPad while watching TV, and other brute force approaches, I just decided I’d clean it up over a year via the birthday approach. So – keep the comments and emails flowing – they mean a lot to me.
I woke up this morning to a post from Fred Wilson titled The Academy For Software Engineering. In it, Fred announced a new initiative in New York City called The Academy For Software Engineering. Fred and his friend Mike Zamansky (a teacher at Stuyvesant High School) helped create it with the support of Mayor Bloomberg’s office, and Fred and his wife Joanne are providing initial financial support for the project. If successful, it will have a profound impact on computer science education in the New York public high school system.
Fred’s looking for additional support. I haven’t talked to Amy yet about magnitude, but I’ve already committed via Fred’s blog and sent him a note separately. If you are interested in education in general and computer science / software education in high school in particular, I’d strongly encourage you to reach out as well.
I’ve been working on this general problem (dramatically improving computer science education, both in K-12 and college) for a while through my work at the National Center for Women & Information Technology. More than ever I believe we have a massive education pipeline problem – whether you call it computer science or software engineering or something else. There are several fundamental problems, starting with the curriculum and lack of teachers, but including a total miss on approach and positioning. I expect efforts like The Academy For Software Engineering to take this on directly.
I’m involved in the nascent stages of two projects in Boulder going by the code names “CodeStars” and “The Software School.” I’m excited about each of them and Fred’s initiative and leadership just pumped up my energy by a notch.
Fred / Joanne / Mike (who I don’t know) – thank you! And Mayor Bloomberg – we need a lot more politicians like you who speak their mind and get things done.
Marc Andreessen recently wrote a long article in the WSJ in which he asserted that “Software Is Eating The World.” I enjoyed reading it, but I don’t think it goes far enough.
I believe the machines have already taken over and resistance is futile. Regardless of your view of the idea of the singularity, we are now in a new phase of what has been referred to in different ways, but most commonly as the “information revolution.” I’ve never liked that phrase, but I presume it’s widely used because of the parallels to the shift from an agriculture-based society to an industry-based one – the shift commonly called the “industrial revolution.”
At the Defrag Conference I gave a keynote on this topic. For those of you who were there, please feel free to weigh in on whether the keynote was great, sucked, if you agreed, disagreed, were confused, mystified, offended, amused, or anything else that humans are capable of having as stimulus-response reactions.
I believe the phase we are currently in began in the early 1990’s with the invention of the World Wide Web and subsequent emergence of the commercial Internet. Those of us who were involved in creating and funding technology companies in the mid-to-late 1990’s had incredibly high hopes for where computers, the Web, and the Internet would lead. By 2002, we were wallowing around in the rubble of the dotcom bust, salvaging what we could while putting energy into new ideas and businesses that emerged with a vengeance around 2005 and the idea of Web 2.0.
What we didn’t realize (or at least I didn’t realize) was that virtually all of the ideas from the late 1990’s about what would happen to the traditional industries that the Internet would disrupt would actually happen, just a decade later. If you read Marc’s article carefully, you see the seeds of the current destruction of many traditional businesses in the pre-dotcom bubble efforts. It just took a while – and one more cycle for the traditional companies to relax and say “hah – once again we survived ‘technology’” – for them to be decimated.
Now, look forward twenty years. I believe that the notion of a biologically-enhanced computer, or a computer-enhanced human, will be commonplace. Today, it’s still an uncomfortable idea that lives mostly in university and government research labs and science fiction books and movies. But just let your brain take the leap that your iPhone is essentially making you a computer-enhanced human. Or even just a web browser and a Google search on your iPad. Sure – it’s not directly connected into your gray matter, but that’s just an issue of some work on the science side.
Extrapolating from how it’s working today and overlaying it with the innovation curve that we are on is mindblowing, if you let it be.
I expect this will be my intellectual obsession in 2012. I’m giving my Resistance is Futile talk at Fidelity in January to a bunch of execs. At some point I’ll record it and put it up on the web (assuming SOPA / PIPA doesn’t pass) but I’m happy to consider giving it to any group that is interested if it’s convenient for me – just email me.
I hate astroturfing. I think it’s the lamest form of promotion and advocacy possible. It’s the opposite of authenticity and the antithesis of the brilliance of Twitter.
Last week I tweeted about one of my investments and a number of people replied. After each reply, the founder of a competitor tweeted out his own message to the @replies that responded to me. After a few I became annoyed since his @replies were both (a) unwelcome and (b) cluttering up my stream. I considered blocking him but then decided to think about it some more.
I’ve seen this strategy a few times. A competitor tries to piggyback on another company and intersect the stream and inject “look at me – my thing is good also” into the mix. I haven’t decided if this is brilliant or stupid, but after chewing on it a little it felt like a derivative of astroturfing to me.
Now, unlike astroturfing, it’s merely a low-grade tactic to get attention one by one in the Twitter stream by intersecting existing interactions. In some cases, I’m sure people will be intrigued by it. In others it will feel spammy. And in others it will be ignored. I also think it’s a stupid competitive approach, which reminds me that I have a bunch of posts to finish up about competition sitting patiently in my drafts folder.
My concern isn’t the one-off dynamic, which I don’t think has much real impact. It’s when this becomes a social media strategy. It’s inevitable that this will scale up and pollute the conversation, changing the signal-to-noise ratio. The awesome thing about Twitter is anyone can follow you. But they can also @reply to you. Of course, they have to follow the other person copied for them to see the message, but that’s an easy thing to do. Once someone builds this into a social media dashboard and automates the “identify-keyword, add, @reply-message” function, it’ll get yucky fast – especially when political campaigns get hold of the idea and really start astro-twitter-turfing.
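To see why it scales so easily, here’s a sketch of that function – searchTweets and sendReply are hypothetical stand-ins for whatever a dashboard would actually call:

```javascript
// A sketch of the "identify-keyword, add, @reply-message" function.
// searchTweets() and sendReply() are hypothetical stand-ins, not a real
// API - the point is how little code automated stream pollution takes.
function searchTweets(keyword) {
  // Stand-in: pretend these tweets matched the keyword.
  return [{ author: "alice" }, { author: "bob" }];
}

function sendReply(message) {
  console.log(message); // stand-in for actually posting the reply
}

function astroTwitterTurf(keyword, pitch) {
  searchTweets(keyword).forEach(function (tweet) {
    // Inject "look at me - my thing is good also" into each conversation.
    sendReply("@" + tweet.author + " " + pitch);
  });
}

astroTwitterTurf("competitorproduct", "check out my thing too!");
```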
James Bessen, Jennifer Ford, and Michael Meurer of BU School of Law have written a phenomenal paper titled The Private and Social Costs of Patent Trolls. Rather than be politically correct and refer to NPEs simply as “non-practicing entities”, they cut through all the noise, define what a patent troll is, and go through a detailed and rigorous analysis of the private and social costs of patent trolls. Some highlights from the paper follow:
Regarding money:
The litigation has distinctive characteristics:
The authors suggest that these lawsuits exploit weaknesses in the patent system. They conclude that the loss of billions of dollars of wealth associated with these lawsuits harms society, and state “while the lawsuits increase incentives to acquire vague, over-reaching patents, they decrease incentives for real innovation overall.”
While I’ve just summarized the executive summary, the paper is extremely well written, the topic rigorously researched, and the conclusions follow from the actual data. The footnotes are a joy to read as they tackle a few previous papers that use completely contorted logic to make their points. My favorite is footnote 6:
“In effect, Shrestha is arguing: A) Valuable patents receive higher citations, and, B) NPE litigated patents receive higher citations, therefore, C) NPE litigated patents are valuable patents. This is a classic logical fallacy.”
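Spelled out formally, the argument has the shape of the undistributed middle: two premises that share a consequent don’t license a conclusion connecting their antecedents.

```latex
% V = valuable patent, H = receives higher citations, N = NPE-litigated patent
% A) V => H and B) N => H do not entail C) N => V
\[
(V \Rightarrow H) \land (N \Rightarrow H) \;\nvdash\; (N \Rightarrow V)
\]
```

Both premises can be true while the conclusion is false – high citations could flow from the litigation itself rather than from the value of the patent.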
It’s a special bonus that the header on each page says “page # – Troll – 9/11”.
My partner Jason and I were talking about exactly this problem the other day as we wondered why so many people have trouble with logic and deductive reasoning. Our world of software patents is rife with this category of problem. It’s awesome that serious academics like Bessen and his colleagues are going deep into this issue.