I’ve been reading hard science fiction lately, along with some actual science. The hard sci-fi includes Dragon’s Egg and Starquake by Robert Forward (wow – awesome) and Nova by Samuel Delany (also awesome). The science includes The God Equation: The Quest for a Theory of Everything by Michio Kaku and Fundamentals: Ten Keys to Reality by Frank Wilczek.
In between runs this weekend I finished Nova (I was listening to it on Audible) and Fundamentals (I was reading it on Kindle), and read most of Starquake (it’s only available in physical form). I also started listening to Project Hail Mary by Andy Weir. The only thing that would have made this weekend better would be a third day, instead of the Monday in front of me.
Frank Wilczek is a legendary physicist who, with David Gross and David Politzer, won the Nobel Prize in 2004 “for the discovery of asymptotic freedom in the theory of the strong interaction.” His office at MIT is in the same hallway as Bernard Feld, my MIT namesake (Prof. B. Feld, something I never became). He also happens to be a spectacular writer.
Fundamentals is extremely accessible. After reading Michio Kaku’s The God Equation, I realized that I knew a lot of surface-level physics (and science in general), but there was a layer down, especially from the past 20 years, that was elusive. Kaku presented it in a way that one could understand without any deep quantum physics knowledge, so I went looking for more.
Wilczek delivered. The first part of the book, called “What There Is”, has five chapters.
As I read hard sci-fi, the entanglement of the known science of the time (Nova was published in 1968; Dragon’s Egg in 1980) with speculation about where things were going (each book takes place far in the future) created a contextual backdrop for Fundamentals that helped bring what we know, and what we don’t know, to the surface. Or, more specifically, what we knew (in 1968 and 1980) that was right, and what no longer seems right because it wasn’t yet known or understood.
The shocker is how much is directionally correct. When I read Asimov from the 1950s (I, Robot is a good place to start) or Philip K. Dick from the 1960s (Do Androids Dream of Electric Sheep? is a good place to start), I have the same feeling. Many details are completely wrong (e.g. how data is stored on auxtape) but others are completely correct (e.g. massive underground data centers). Hard sci-fi takes more risks on this dimension, and both Forward and Delany do an amazing job with both the science and the storytelling.
In the last 20 years, I’ve read a lot more sci-fi than science. That’s a miss on my part. Going forward, my diet will include both. And I hope to someday meet a Cheela. And a Shrike.
If you are looking for a great book to read this weekend, I recommend Ian McEwan’s Machines Like Me. I read it last weekend and am still thinking about it.
McEwan is a magnificent writer. When the hardcover ended up on top of my infinite pile of books to read, Amy said, “Wow, you’ll love Ian McEwan’s writing.” Whenever Amy says something like that, I know I’m in for a treat.
The setting is London in 1982. But it’s a parallel universe. Alan Turing chooses jail over chemical castration, lives, and has created massive innovations that are 40 years ahead of their time. Lennon and JFK didn’t die. Jimmy Carter wins a second term. Margaret Thatcher gets booted after botching the Falklands War.
That’s the backdrop for the introduction of our protagonists, Charlie and Miranda. Charlie uses his inheritance to buy an Adam, one of the first 25 production models of artificial humans (13 Eves and 12 Adams are available; the Eves sell out immediately, so Charlie ends up with an Adam).
I love the narrative device of a parallel universe. Amy and I started watching Season 2 of The OA last night, which aggressively jumps to an alternate universe. Some of today’s best near-term sci-fi writers are using this as a basis for their writing, although they are often less explicit about how they are twisting current reality into the alternate universe.
McEwan isn’t subtle about the twists, which makes the book awesome. You quickly feel that this 1982 is the real 1982 and things take off from there. Every time McEwan drops another new reality fragment, more pieces fall nicely into place.
The result is a very provocative journey through the introduction of an artificial human into the evolving relationship of two existing real humans.
If you are a reader, especially one who likes (a) sci-fi and (b) literary fiction, you’ve got a fun weekend ahead of you if you grab Machines Like Me.
If you – like me – love science fiction, I encourage you to support the new sci-fi magazine Compelling Science Fiction. The editor, Joe Stech, lives in the Boulder area and has been actively involved in the Boulder/Denver startup community. He’s using Patreon as a funding model, which I’m a big fan of for new and indie writers.
An annoying thing about Twitter Search is that it’s not good enough to help me find who tweeted at me that I should read Dark Matter by Blake Crouch. After trying to search but failing to figure out how to scope the search to only my mentions (@mentions = bfeld, or maybe my problem is that it should be @mentions == @bfeld), I scrolled through my @mentions until I was annoyed.
Whoever it was – thank you! Dark Matter was awesome. It’s the first book I read Saturday as part of my decompress-from-the-week-and-feel-better-by-eating-yogurt maneuver, which ended up playing out throughout the day.
I love near-term sci-fi. I especially love right-now sci-fi – stuff that happens in current time but incorporates a scientific breakthrough that is currently being explored.
Dark Matter is all about the concept of an infinite number of parallel universes. The scientific breakthrough is the notion of quantum superposition, most easily explained by the Schrödinger’s cat thought experiment.
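For readers curious about the physics behind the plot device, the standard textbook statement (my gloss, not Crouch’s) is that a quantum system can sit in a weighted combination of states until a measurement collapses it to one outcome. In the cat thought experiment:

```latex
% The cat's state before observation: an equal superposition of the
% two classical outcomes. Measurement collapses it to one of them,
% each with probability |1/\sqrt{2}|^2 = 1/2.
\[
  |\psi\rangle = \tfrac{1}{\sqrt{2}}\,|\text{alive}\rangle
               + \tfrac{1}{\sqrt{2}}\,|\text{dead}\rangle
\]
```

The many-worlds reading the novel runs with takes the next step: instead of collapsing, every outcome plays out in its own branch.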
The book is a magnificently fast romp that includes kidnapping, research institutions, love, family, death, religion, the nature of the universe, psychological intrigue, really complex relationship dynamics, and a whole bunch of other stuff that makes a novel impossible to put down. There were a few plot twists that I anticipated or figured out before they came, but generally I rode the wave of the book.
If you are a sci-fi fan or just like a great action adventure novel with nerdy underpinnings, this is for you. And if you are wondering whether we are actually just part of a computer simulation, this book will help you understand that theory better.
If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.
I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book, Kill Process (which is about to be released), is a departure from AI for him. Even though it’s not about AI, it’s amazing, so you should read it also.
I can’t remember who recommended Madeline Ashby and vN to me but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.
It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to have as a mixed society (non-augmented humans, augmented humans, and machines) for a while, and this is the first book I’ve read that feels like it could take place today. Ashby wrote this book in 2012, before the phrase AI got trendy again, and I love that she refers to the machines as vNs (named after von Neumann, with a delicious twist on the idea of a version number).
I found the human/vN (organic/synthetic) sex dynamic overwhelming at times, but it’s a critically important underpinning of one of the major threads of the book. The mixed human/vN relationships, including those that involve parenting vN children, had qualities similar to some of what I’ve read about racially mixed, religiously mixed, and same-sex parents.
I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently – an issue that traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of the Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g. vN body parts, or even an entire vN).
And then it starts to get complicated.
I’m going for a two-hour run this morning, so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.
I’ve decided to read a bunch of old science fiction as a way to form some more diverse views of the future.
I’ve been reading science fiction since I was a kid. I probably started around age ten and was a voracious reader of sci-fi and fantasy in high school. I’ve continued on as an adult, estimating that 25% of what I read is science fiction.
My early diet was Asimov, Heinlein, Harrison, Pournelle, Niven, Clarke, Sterling, and Donaldson. When I was on sabbatical a few years ago in Bora Bora, I read about 40 books, including Asimov’s I, Robot, which I hadn’t read since I was a teenager.
I’m almost done with Liu’s The Dark Forest which is blowing my mind. Yesterday morning I came across a great interview from 1999 with Arthur C. Clarke. A bunch of dots connected in my mind and I decided to go backwards to think about the future.
I don’t think we can imagine what things will be like 50 years from now, and I’m certain we have no clue what a century from now looks like. So, whatever we believe is just random shit we are making up. And there’s no better way to come across random shit that people are making up than by reading sci-fi, which, even when it’s terribly incorrect, often stimulates really wonderful and wide-ranging thoughts for me.
So I thought I’d go backwards 50+ years and read sci-fi written in the 1950s and 1960s. I, Robot, written in 1950, was Asimov’s second book, so I decided to start with Pebble in the Sky (his first book, also from 1950). After landing on Amazon, I was inspired to buy Asimov’s first ten books, which follow.
Pebble in the Sky (1950)
I, Robot (1950)
The Stars, Like Dust (1951)
Foundation (1951)
David Starr, Space Ranger (1952)
Foundation and Empire (1952)
The Currents of Space (1952)
Biochemistry and Human Metabolism (Williams & Wilkins, 1952)
Second Foundation (1953)
Lucky Starr and the Pirates of the Asteroids (1953)
They are all sci-fi except Biochemistry and Human Metabolism, a textbook published by Williams & Wilkins in 1952. I bought it also, just for the hell of it.
I bought them all in paperback and am going to read them as though I were reading them in the 1950s (on paper, without any interruptions from my digital devices) and see what happens in my brain. I’ll report back when I’m finished (or maybe along the way).
If this list inspires you with any sci-fi books from the 1950s or 1960s, toss them in the comments and I’ll grab them.
My idea of a really good afternoon on a three-day weekend is to lie on the couch and read a book. Other than a nap in the middle of the experience, that’s what I did today.
I read Supersymmetry by David Walton. It’s the sequel to Superposition, which I read earlier this year. They are both excellent near-term sci-fi in the action/adventure, save-the-world-while-learning-physics genre.
When I read Superposition, I was on a long airplane flight with Amy on our way home from Paris where we went for our Q2 vacation. We were both exhausted when we left for Paris so we mostly slept, read, walked around the city, and ate a little. Superposition was the last book I read on the trip and I liked it a lot, but the activity of re-entry after a week off the grid swept it quickly from my mind.
On September 1st, I got the following email from David Walton.
Just writing to let you know that Supersymmetry, the sequel to Superposition, comes out today. (It follows the story of the two girls, Alessandra and Alessandra, living separate lives 15 years later.) If you would like a promotional copy, I would be happy to send it to you.
I was so psyched that you read and enjoyed Superposition earlier this year. I’m sorry if I seemed to pester you about it at the time–I was just thrilled that you had picked it up and read it and actually *liked* it so quickly.
I went online, purchased a copy, and told David. I realized that I had never written a review of Superposition, which was a miss on my part as I enjoyed it so much. Within a few chapters of Supersymmetry I remembered why David is such a strong writer. He combines action/adventure with sci-fi with strong female protagonists who have an other-worldly backstory. The writing, like that of my favorite sci-fi writers, including William Hertling, Daniel Suarez, and Ramez Naam, could plausibly happen in my lifetime, but it’s a little distant from today so it takes the scientific leap that good sci-fi forces you to take.
What’s special about David’s work is that it is infused with physics. When I was a freshman at MIT, I thought I was pretty good at physics. In high school, I did well in AP Physics, although I only got a 3 on the AP Physics exam, so I didn’t place out of it, which ended up being a blessing. MIT makes all freshmen take a full year of physics, which for most is 8.01: Classical Mechanics and 8.02: Electricity and Magnetism.

I felt like I was doing ok in 8.01 until I was ten minutes into my first exam and realized I had no idea how to answer any of the questions. Two days later I got my grade, which was a 20. Having never gotten a B on anything in physics before, I did the only rational thing for a 17-year-old freshman at MIT to do after getting a 20 on his first test – I went to my room, locked the door, and cried for an hour. The next day at 8.01 recitation I found out that the class average was a 32, meaning I got a C, which wasn’t great but worked under the pass/fail grading that all freshmen at MIT get. At that moment, my belief that I was good at physics ended, and my understanding of the MIT approach of being a daily assault on one’s self-esteem began.
So, I have nostalgia for physics, even though I have no expertise with it. Superposition and Supersymmetry do a nice job of explaining some concepts in short bursts, unlike the pages that Neal Stephenson unfurls around physics in Seveneves, which I also loved but will admit to skimming in sections that were either tedious or too heavy for me.
It gives me great joy to discover new writers who I know I’ll be sticking with for a long time. I’m psyched to add David Walton to the list.
The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And, while it is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of Cylon 6 several times during this movie and felt a little sad for her distant, and much less evolved, cousin Ava.
Thoughts tumbled out of Amy’s head on our drive home, and I reacted to some while soaking up a lot of them. The intersection of AI, gender, social structures, and philosophy is inseparable from a movie like this and provokes a lot of reactions. I love to just listen to Amy talk, as I learn a lot, rather than staying in the narrow boundaries of my mind pondering how the AI works.
Let’s start with gender and sexuality, which are in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. When you consider evolutionary leaps in intelligence, it isn’t about gender or sexual reproductive organs. Why would you build a robot with a hole that has extra sensors so she feels pleasure unless you were creating a male fantasy?
When you consider the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle and it so deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.
What if gender was truly irrelevant in an advanced intelligence?
You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.
Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.
Then we have some existential dread. When Ava starts worrying to Caleb about whether or not she will be unplugged if she fails the test, she introduces the idea of mortality into this mix. Her survival strategy creates a powerful subterfuge, which is another human trait, which then infects Caleb, and appears to be contained by Nathan, until it isn’t.
But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?
At some point in the movie we both thought Nathan was an AI, and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a badass, not an insecure childlike form. If she were built on all human knowledge – basically what a search engine knows – Ava would know better than to walk out into the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.
Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He’s the smartest and richest guy, living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He’s the nerds’ man. He’s also the hyper-masculine gender foil to the omnipresent female nudity.
Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.
Ironically, one of the trailers we saw was for Jurassic World. We fuck with mother nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And is either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?
I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.
I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.
Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.
I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard,” or “why isn’t the data immediately available,” or “why am I having to tell the software to do this,” or “man, this is ridiculous how hard it is to make this work.”
But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (an idea first attributed to John von Neumann in 1958, not coined more recently by Ray Kurzweil, who has made it very popular) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.
We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.
Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones we anticipate will appear alongside the real emergence of the AIs.
It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, 20, 30, or more years, but I’m spending intellectual energy using these time apertures.
History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.
We’ve got some doozies coming in the next 50 years. It’s going to be fun.