Amy and I saw Ex Machina last night. A steady stream of people had encouraged us to go see it, so we made it Sunday night date night.

The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And while the movie is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of the Cylon Number Six several times during this movie and felt a little sad for her distant, and much less evolved, cousin Ava.

Thoughts tumbled out of Amy’s head on our drive home, and I reacted to some while soaking up a lot of them. AI, gender, social structures, and philosophy are inseparable here, and a movie like this provokes a lot of reactions at their intersection. I love to just listen to Amy talk, since I learn a lot more that way than by staying inside the narrow boundaries of my own mind, pondering how the AI works.

Let’s start with gender and sexuality, which are in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. When you consider evolutionary leaps in intelligence, gender and sexual reproductive organs aren’t the point. Why would you build a robot with a hole that has extra sensors so she feels pleasure, unless you were creating a male fantasy?

When we considered the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle, and it deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.

What if gender was truly irrelevant in an advanced intelligence?

You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.

Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.

Then we have some existential dread. When Ava starts worrying to Caleb about whether she will be unplugged if she fails the test, she introduces the idea of mortality into the mix. Her survival strategy becomes a powerful subterfuge (another very human trait) that infects Caleb and appears to be contained by Nathan, until it isn’t.

But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?

At some point in the movie we both thought Nathan was an AI, and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a badass, not an insecure childlike form. If she was built on all human knowledge, based on what a search engine knows, Ava would know better than to walk out into the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.

Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He’s the smartest and richest guy, living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He’s the nerds’ man. He’s also the hyper-masculine gender foil to the omnipresent female nudity.

Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.

Ironically, one of the trailers we saw was for Jurassic World. We fuck with Mother Nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And are either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?

I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.


I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard”, or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man, it’s ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (a term attributed to John von Neumann as far back as 1958, long before Ray Kurzweil made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.
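
To make just one of those concrete: the photo problem is usually attacked with content-addressable storage, where a photo is identified by a hash of its bytes, so identical copies collapse to a single stored blob and “sharing” is just handing out references. Here’s a minimal Python sketch of that idea (the PhotoStore class and its methods are purely illustrative, not any particular product’s API):

```python
import hashlib
from pathlib import Path


def content_address(data: bytes) -> str:
    """Identify a photo by a hash of its bytes, not by its filename."""
    return hashlib.sha256(data).hexdigest()


class PhotoStore:
    """Toy content-addressed store: each unique photo is kept exactly once."""

    def __init__(self) -> None:
        self.blobs: dict[str, bytes] = {}       # hash -> photo bytes, stored once
        self.albums: dict[str, list[str]] = {}  # album name -> list of hashes

    def add(self, album: str, path: Path) -> str:
        data = path.read_bytes()
        digest = content_address(data)
        if digest not in self.blobs:  # identical bytes are never duplicated
            self.blobs[digest] = data
        self.albums.setdefault(album, []).append(digest)
        return digest

    def share(self, album: str) -> list[str]:
        # Sharing an album means sharing references (hashes), not copies;
        # a recipient only fetches the blobs it doesn't already have.
        return self.albums.get(album, [])
```

Git and plenty of backup systems already work this way under the hood; the unsolved part is making it seamless across every device and service our photos live in.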

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one-year view, so it’s useful to read what Fred thinks, since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, twenty, thirty, or more years, but I’m spending my intellectual energy across these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid-1800s. John D. Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.