I read The Three-Body Problem last week and loved it. It was a little hard to get into at the beginning, and that ended up being part of the beauty of it. I’m not going to summarize it here – I encourage you to read it – but I want to talk about the thoughts it stimulated.
About two-thirds of the way through the book, I had a thought while lying in bed next to Amy that we don’t have any idea how the universe works. I blurted out something like “We don’t know how the universe, whatever that means, works.” Appropriately, Amy asked “Tell me more,” which is how many of our funnest conversations unfold.
I’ve been listening to the Hyperion Cantos on Audible while I run. I’ve ramped up my training again so I’m almost done with the last book (Rise of Endymion). As I absorb all of them, I think Dan Simmons has written what may end up being the most important science fiction books of our era.
When I toss creative constructs like the void which binds, the conflict between humans and Cylons, and the Trisolarans (including criticism about how stupid they are) into a cauldron and stir it around, the stew that gets made reinforces my view that as humans we have no idea what is actually going on.
The idea that we are the only sentient beings that have ever existed makes no sense to me. Our view of time, which is scaled by a normal human lifespan (now approaching 80 years), sizes the lens through which we view things. Our daily cadence, which is ruled by endless interactions that last under a second and require almost no foreground thought, just reinforces a very short time horizon.
What if our time horizon were 100,000 years? Or 1,000,000 years? Or what if we could travel forward and backward through time at that scale? Or cross physical distances immediately without time debt? Or cross physical distances while varying the time dimension, so we could travel both physically and through time at will? Or maybe travel on a dimension other than distance or time that we haven’t even considered yet?
Over the weekend, I ended up reading a few articles on quantum computing and qubits. As I was trying to piece together the arguments the authors were making about the impact on machine learning and AI, I drifted away to a simple question.
What does time mean anyway?
This might be my favorite part of The Three-Body Problem. I’m planning on reading the second book in the trilogy (The Dark Forest) next week after I finish the third book in the Red Rising Trilogy (Morning Star) to see where it takes me.
I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.
If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the older, most powerful AIs, and he believes AIs and humans can co-exist.
Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, both of which I think miss the mark badly.
Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”
Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”
Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.
Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.
Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”
I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.
I went to watch the actual interview, but there has been a YouTube takedown by MIT – although I suspect, per a tweet I got, that a bot actually did it, which would be deliciously ironic.
If you want to watch the comment, it’s at 1:07:30 on the MIT AeroAstro Centennial Symposium video which doesn’t seem to have an embed function.
My friend, and the best near term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He had a balanced view of Elon’s comment and, as William always does, has a thoughtful explanation of the short term risks and dynamics well worth reading. William’s punch line:
“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”
Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.
One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”