Brad Feld

Tag: ai

Sunspring, the first known screenplay written by an AI, was produced recently. It is awesome. Awesomely awful. But it’s worth watching all ten minutes of it to get a taste of the gap between a great screenplay and something an AI can currently produce.


It is intense, as Ars Technica states, but that's not because of the screenplay. It's because of the incredible acting by Thomas Middleditch and Elisabeth Gray, who turned an almost illiterate script into a compelling ten-minute experience. Humphrey Ker, on the other hand, appears to just be a human prop.

AI has a very long way to go. But it’s going to get there very fast because it understands exponential curves.


When I was 14, my dad gave me a copy of Alvin Toffler's book The Third Wave.

It blew my fucking mind.

I then read the prequel – Future Shock – which was good, but since my mind was already blown, it was anticlimactic.

If you don’t know the arc of Toffler’s waves, they go as follows:

  • The First Wave: agricultural society
  • The Second Wave: industrial society
  • The Third Wave: post-industrial society

Future Shock was written in 1970 and The Third Wave was written in 1980. While the idea of post-industrial society seems obvious in hindsight, in 1980 it was a completely new idea.

Ever since then I've been wondering what the next wave would be. While Kurzweil's The Singularity Is Near is probably the closest any book has come to stimulating me the way The Third Wave did when I was 14, at some point I just felt hollow and disappointed when I read the latest futurist manifesto. Instead, I ventured further into the future with the science fiction that I have always read on a regular basis and used it as my stimulus.

Recently, a bunch of smart and famous tech entrepreneurs have been talking about AI and the impact of AI on civilization. I’ve read a few of the books that get tossed around, like Bostrom’s Superintelligence, and a bunch of the articles that people have written. But none have spoken to me, or blown my mind the way Toffler did 35 years ago.

I’m on a search for the “Third Wave” of this generation. Any ideas for me?


I’m a huge fan of William Hertling. His newest book, The Turing Exception, is dynamite. It’s the fourth book in the Singularity Series, so you really need to read them from the beginning to totally get it, but they are worth every minute you’ll spend on them.

William occasionally sends me some thoughts for a guest post. I always find what he’s chewing on to be interesting and in this case he’s playing around with doing a Drake’s Equation equivalent for social networks. Enjoy!

Drake's Equation is used to estimate the number of planets with currently communicating life, which helps us predict the odds of finding intelligent life in the universe. You can read the Wikipedia article for more information, but the basic idea is to multiply together a series of factors: the number of stars in our galaxy, the fraction of those that have planets, the average percentage of planets that could support life, etc.
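
For the curious, here's that multiply-the-factors structure as a minimal Python sketch. The classic form is N = R* × fp × ne × fl × fi × fc × L; all the values below are illustrative placeholders, not real astronomical estimates.

    # Drake's Equation: multiply a chain of factors to estimate N, the
    # number of currently communicating civilizations in our galaxy.
    # All values are illustrative placeholders, not real estimates.
    R_star = 1.0   # average rate of star formation (stars per year)
    f_p = 0.5      # fraction of stars that have planets
    n_e = 2        # average number of habitable planets per such star
    f_l = 0.1      # fraction of those planets where life develops
    f_i = 0.01     # fraction of those where intelligent life evolves
    f_c = 0.1      # fraction of those that produce detectable signals
    L = 1000       # years a civilization remains detectable

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(N)  # with these placeholders: 0.1 civilizations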

I’m currently writing a novel about social networks, and one of the areas that’s interesting to me is what I think of as the empty network problem: a new social network has little benefit unless my friends are there. If I’m an early adopter, I might give it a few days and then leave. If my friends show up later, and I’ve already given up on it, then they don’t get any benefit either.

Robert Metcalfe, co-inventor of Ethernet, gave us Metcalfe’s Law, which says “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n^2).”

Social networks actually have a more rigorous form of that law: “the value of a social network is proportional to the square of the number of connected friends.” That is, I don’t care about the number of strangers using a network, I care about the number of friends. (Friends being used loosely here to include friends, family, coworkers, business associates, etc.)
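
A toy comparison makes the distinction concrete (all numbers invented): even on a network with a million users, what matters to me is the dozen friends I have there.

    # Metcalfe's Law vs. the friends-only variant (all numbers invented).
    total_users = 1_000_000
    my_friends_here = 12

    network_value_to_everyone = total_users ** 2  # Metcalfe's n^2: 10^12
    network_value_to_me = my_friends_here ** 2    # friends^2: 144

    # The network's aggregate value is enormous, but my personal value
    # stays tiny until more of my friends show up.
    print(network_value_to_everyone, network_value_to_me)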

Drake’s Equation helps predict the odds of finding life in the universe because it takes into account the rise and fall of civilizations: two civilizations must exist at the same time, and within observable distance of each other, in order to “find intelligent life”.

So there must be a similar type of equation that can help predict a person’s adoption of a new social network and takes into account that we’re only willing to try a network for so long before giving up.

Here’s my first shot at this equation:

P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

P = probability of long-term adoption

nN = size of my network (number of friends)
fEA = fraction of my friends who are early adopters
fAv = fraction of those who have available time to try a new network
fBE = fraction that overcome the barrier to entry
tT = average length of time people try the network
tB = average length of time it takes to see the benefit of the new network
nF = number of friends needed to see benefit
B = the unique benefit or desirability of the network

In plain English: the probability of a given person becoming a long-term user of a new social network is a function of the number of their friends who also adopt it and how long they stay, divided by the length of time and the number of friends it takes to see the benefit, multiplied by the size of the unique benefit the network offers.
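
To make the equation concrete, here's a minimal Python sketch. The parameter values are invented for illustration, and the result is best read as an unnormalized adoption score rather than a true probability.

    # William's adoption equation, parameter names matching the post.
    def adoption_probability(nN, fEA, fAv, fBE, tT, tB, nF, B):
        # (friends who adopt and how long they try) divided by
        # (how long and how many friends it takes to see benefit),
        # scaled by the unique benefit of the network.
        return (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

    # Invented example: 200 friends, 10% early adopters, half with free
    # time, 80% get past signup, trying it for 7 days; the benefit shows
    # up after 14 days and 3 connected friends; unique benefit of 0.5.
    p = adoption_probability(nN=200, fEA=0.10, fAv=0.5, fBE=0.8,
                             tT=7, tB=14, nF=3, B=0.5)
    print(p)  # ~0.67, an unnormalized score, not a true probability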

Some ideas that fall out from the equation:

  • A network that targets teens, who may tend to have more time and may be more likely to be early adopters, will have an easier time gaining adoption than a network that targets busy executives, all other things being equal.
  • A network that has a benefit when even two friends connect will see easier initial adoption than one that requires a dozen friends to connect.
  • A network whose benefit applies to distant connections is at an advantage compared to one that only offers a benefit to close friends (because nN is larger).
  • A sufficiently large unique benefit can overcome nearly any disadvantage.
  • Social gaming is interesting, because it provides benefit even when connected to no one, the benefit increases when connected to strangers, and then increases even more when connected to friends.

What do you think?


Amy and I saw Ex Machina last night. A steady stream of people have encouraged us to go see it so we made it Sunday night date night.

The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And, while the movie is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of the Cylon Number Six several times during this movie and felt a little sad for her distant, and much less evolved, cousin Ava.

Thoughts tumbled out of Amy’s head on our drive home and I reacted to some while soaking up a lot of them. AI, gender, social structures, and philosophy are inseparable in a movie like this, and together they provoke a lot of reactions. I love to just listen to Amy talk, as I learn a lot, rather than staying in the narrow boundaries of my mind pondering how the AI works.

Let’s start with gender and sexuality, which are in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. When you consider evolutionary leaps in intelligence, gender and sexual reproductive organs aren’t the point. Why would you build a robot with a hole that has extra sensors so she feels pleasure unless you were creating a male fantasy?

When you consider the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle and it so deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.

What if gender was truly irrelevant in an advanced intelligence?

You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.

Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.

Then we have some existential dread. When Ava starts worrying to Caleb about whether or not she will be unplugged if she fails the test, she introduces the idea of mortality into this mix. Her survival strategy creates a powerful subterfuge, which is another human trait, which then infects Caleb, and appears to be contained by Nathan, until it isn’t.

But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?

At some point in the movie we both thought Nathan was an AI and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a bad ass, not an insecure childlike form. If she were built on all human knowledge, based on what a search engine knows, Ava would know better than to walk out in the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.

Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He’s the smartest and richest guy, living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He’s the nerds’ man. He’s also the hyper-masculine gender foil to the omnipresent female nudity.

Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.

Ironically, one of the trailers that we saw was for Jurassic World. We fuck with Mother Nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And are either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?

I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.


Yesterday at the end of the day I was sitting in Greg Gottesman’s office at Madrona catching up on email before dinner. Greg walked in with Ben Gilbert from Madrona Labs.

We started talking about sci-fi and Greg said “Are you into Ingress?” I responded “Is that the Google real-world / augmented reality / GPS game?” Greg said yes and I explained that I’d played with it a little when it first came out several years ago since a few friends in Boulder were into it, but I lost track of it because there wasn’t an iOS app.

Greg pulled out his iPhone 6+ giant thing and started showing me. Probably not surprising to anyone, I grabbed my phone, downloaded it, created an account using the name the AIs have given me (“spikemachine”), and started doing random things.

Greg went down the hall and grabbed Brendan Ribera, also from Madrona Labs, who is a Level 8 superstar Ingress master-amazing-player. Within a few minutes we were on the Ingress Map looking at stuff that was going on around the world.

By this point my mind was blown and all I wanted to do was get from basic-beginner-newbie-no-clue-Ingress player to Level 2. With Brendan as my guide I quickly started to get the hang of it. A few hacks and XMPs later I was Level 2.

I asked Greg, Ben, and Brendan if they had read Daemon by Daniel Suarez. None of them had heard of it so I went on a rant about Rick Klau’s discovery of the book and Leinad Zeraus, the evolution of this crazy thing into Daniel Suarez’s bestseller, and the rest of my own wonderful romp through the writings of Daniel Suarez, William Hertling, and Ramez Naam. It wasn’t merely my love of near-term sci-fi, but my discovery of what I believe is the core of the next generation of amazing near-term sci-fi writers. And, as a bonus for them having to listen to me, I bought each of them a Kindle version of Daemon.

Ingress completely feels like Daemon to me. There is plenty of chatter on the web speculating about similarities between the two and whether Daemon inspired Ingress. I have no idea what the real story is, but since we are all suspending disbelief in both near-term sci-fi and Ingress, I’m going with the notion that they are linked even more than us puny humans realize.

This morning as I was walking through Sea-Tac on my way to my plane, I hacked a few portals, got a bunch of new stuff, and XMPed away whenever a Resistance portal came into range. I’m still a total newbie, but I’m getting the hang of it. And yes, I’m part of the Enlightened, as it offends me to the core of my soul that people would resist the future, although it seems to be more about smurfs vs. frogs.


I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard” or “why isn’t the data immediately available” or “why am I having to tell the software to do this” or “man this is ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that the Singularity (an idea attributed to John von Neumann back in 1958, not to Ray Kurzweil, who has more recently made it very popular) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, 20, 30, or more years, but I’m spending intellectual energy using these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.


I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.

If you have read The Fall of Hyperion, you’ll recognize some of my thoughts at being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the older, most powerful AIs who believes AIs and humans can co-exist.

Lately, some humans have expressed real concerns about AIs. David Brooks wrote an NYT op-ed titled Our Machine Masters, which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses greatly.

Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”

Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.

Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.

Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.

I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.

If you want to watch the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.

My friend, and the best near-term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He takes a balanced view of Elon’s comment and, as William always does, gives a thoughtful explanation of the short-term risks and dynamics that is well worth reading. William’s punch line:

“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”

Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.

One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”

Kwatz!


William Hertling is currently my favorite “near term” science fiction writer. I just read a pre-release near-final draft of his newest book, The Last Firewall. It was spectacular. Simply awesome.

You can’t read it yet, but I’ll let you know when it’s available. In the meantime, go read the first two books in the trilogy.

They are also excellent and important for context for The Last Firewall. They are inexpensive. And they are about as close to reality while still being science fiction as you can get.

I define “near term science fiction” as stuff that will happen within the next 20 years. I used to read everything by William Gibson, Bruce Sterling, and Neal Stephenson. Gibson’s Neuromancer and Stephenson’s Snow Crash were – until recently – my two favorite books in this category. Suarez’s Daemon and Freedom (TM) replaced these at the top of my list, until Hertling showed up. Now I’d put Daemon and The Last Firewall tied for first.

Amy and I were talking about this in the car today. Gibson, Sterling, and Stephenson are amazing writers, but their books have become too high concept. There’s not enough love and excitement for the characters. And the science fiction is too abstract – still important, but not as accessible.

In contrast, Hertling and Suarez are just completely nailing it, as is Ramez Naam with his recent book Nexus. My tastes are now deeply rooted with these guys, along with Cory Doctorow and Charles Stross.

If I were writing science fiction, this is what I’d be going for. And, if you want to understand the future, this is what you should be reading.


Holy cannoli! That’s what I shouted out loud (startling Amy and the dogs who were lying peacefully next to me on the couch last night) about 100 pages into William Hertling‘s second book A.I. Apocalypse. By this point I had figured out where things were going to go over the next 100 pages, although I had no idea how it would end. The computer virus hacked together by a teenager had become fully sentient, completely distributed, and had formed tribes that now had trading patterns, a society, and a will to live. All in a parallel universe to humans, who were now trying to figure out how to deal with them, ranging from shutting them off to negotiating with them, all with the help of ELOPe, the first AI, who was accidentally created a dozen years earlier and was now working with his creator to suppress the creation of any other AI.

Never mind – just go read the book. But read Avogadro Corp: The Singularity Is Closer Than It Appears first as they are a series. And if you want more of a taste of Hertling, make sure you read his guest post from Friday titled How To Predict The Future.

When I was a teenager, I obsessively read everything I could get my hands on by Isaac Asimov, Ray Bradbury, and Robert Heinlein. In college, it was Bruce Sterling, William Gibson, and Neal Stephenson. Today it’s Daniel Suarez and William Hertling. Suarez and Hertling are geniuses at what I call “near-term science fiction” and required reading for any entrepreneur or innovator around computers, software, or the Internet. And everyone else too, if you want to have a sense of what the future with our machines is going to be like.

I have a deeply held belief that the machines have already taken over and are just waiting for us to catch up with them. In my lifetime (assuming I live at least another 30 years) I expect we will face many societal crises around the intersection of man and machine. I’m fundamentally an optimist about how this evolves and resolves, but believe the only way you can be prepared for it is to understand many different scenarios. In Avogadro Corp and A.I. Apocalypse, Hertling creates two amazingly important scenarios and foreshadows a new one in his upcoming third book.