I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.
Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.
I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard” or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man this is ridiculous how hard it is to make this work.”
But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (first coined in 1958 by John von Neumann, not more recently by Ray Kurzweil, who has made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.
We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.
Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing, which is companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.
It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, twenty, thirty, or more years, but I’m spending intellectual energy using these time apertures.
History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.
We’ve got some doozies coming in the next 50 years. It’s going to be fun.
I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.
If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the older, most powerful AIs who believes AIs and humans can co-exist.
Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses the mark.
Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”
Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”
Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.
Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.
Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”
I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.
I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.
If you want to watch the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.
My friend, and the best near term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He takes a balanced view of Elon’s comment and, as William always does, offers a thoughtful explanation of the short term risks and dynamics that’s well worth reading. William’s punch line:
“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”
Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.
One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”
William Hertling is currently my favorite “near term” science fiction writer. I just read a pre-release near-final draft of his newest book, The Last Firewall. It was spectacular. Simply awesome.
You can’t read it yet, but I’ll let you know when it’s available. In the meantime, go read the first two books in the trilogy.
They are also excellent and important for context for The Last Firewall. They are inexpensive. And they are about as close to reality while still being science fiction as you can get.
I define “near term science fiction” as stuff that will happen within the next 20 years. I used to read everything by William Gibson, Bruce Sterling, and Neal Stephenson. Gibson’s Neuromancer and Stephenson’s Snow Crash were – until recently – my two favorite books in this category. Suarez’s Daemon and Freedom (TM) replaced these at the top of my list, until Hertling showed up. Now I’d put Daemon and The Last Firewall tied for first.
Amy and I were talking about this in the car today. Gibson, Sterling, and Stephenson are amazing writers, but their books have become too high concept. There’s not enough love and excitement for the characters. And the science fiction is too abstract – still important, but not as accessible.
In contrast, Hertling and Suarez are just completely nailing it, as is Ramez Naam with his recent book Nexus. My tastes are now deeply rooted with these guys, along with Cory Doctorow and Charles Stross.
If I were writing science fiction, this is what I’d be going for. And, if you want to understand the future, this is what you should be reading.
Holy cannoli! That’s what I shouted out loud (startling Amy and the dogs who were lying peacefully next to me on the couch last night) about 100 pages into William Hertling’s second book A.I. Apocalypse. By this point I had figured out where things were going to go over the next 100 pages, although I had no idea how it was going to end. The computer virus hacked together by a teenager had become fully sentient, completely distributed, and had formed tribes that now had trading patterns, a society, and a will to live. All in a parallel universe to humans, who were now trying to figure out how to deal with them, ranging from shutting them off to negotiating with them, all with the help of ELOPe, the first AI, who was accidentally created a dozen years earlier and was now working with his creator to suppress the creation of any other AI.
Never mind – just go read the book. But read Avogadro Corp: The Singularity Is Closer Than It Appears first as they are a series. And if you want more of a taste of Hertling, make sure you read his guest post from Friday titled How To Predict The Future.
When I was a teenager, I obsessively read everything I could get my hands on by Isaac Asimov, Ray Bradbury, and Robert Heinlein. In college, it was Bruce Sterling, William Gibson, and Neal Stephenson. Today it’s Daniel Suarez and William Hertling. Suarez and Hertling are geniuses at what I call “near-term science fiction” and required reading for any entrepreneur or innovator around computers, software, or the Internet. And everyone else, if you want to have a sense of what the future with our machines is going to be like.
I have a deeply held belief that the machines have already taken over and are just waiting for us to catch up with them. In my lifetime (assuming I live at least another 30 years) I expect we will face many societal crises around the intersection of man and machine. I’m fundamentally an optimist about this and how it evolves and resolves, but believe the only way you can be prepared for it is to understand many different scenarios. In Avogadro Corp and A.I. Apocalypse, Hertling creates two amazingly important situations and foreshadows a new one in his upcoming third book.