On Monday, I wrote a post titled Look Up and Don’t Give Up that included the 2:48 video of a mama bear and her cub struggling across a steep, snow-covered cliff. More than 20 million people have watched it and, I expect, found inspiration in it, as I did.
I didn’t think very hard about this until this morning when I read The Atlantic article titled The Problem Behind a Viral Video of a Persistent Baby Bear: What appears to be a life-affirming triumph is really a cautionary tale about drones and wildlife.
As I was reading the article, I flashed back to several books from two different authors – William Hertling and Daniel Suarez – that included autonomous drones (and drone swarms) as part of their plots. I remember being incredibly anxious during the sections on killer drones controlled (or programmed) by bad guys, and then even more anxious when the drones appeared to be completely autonomous, just carrying out whatever their mission was while coordinating with each other.
And then I felt pretty uncomfortable about my enthusiastic feelings about the cub and the mama bear. I remembered the moment near the end of the video where the mama bear swats at the cub and then the cub falls down the snow-covered mountain for a long time before stopping and starting the long climb up again. I had created a narrative in my head that the mama bear was reaching out to help the cub, but the notion of the drone antagonizing the mama bear, which responded by trying to protect the cub, rings true to me.
My brain then wandered down the path of “why was that idiot drone pilot sending the drone so close to the bears?” I thought about how the drone wasn’t aware of what it was doing, and the pilot was likely completely oblivious to the impact of the drone on the bears. I thought about how confused and terrified the bears must have been while they scrambled over the snow to try to reach safety. Their dash for cover in the woods took on a whole new meaning for me.
I then thought about what encountering a drone swarm consisting of 100 autonomous drones would feel like to the bears. I then teleported the bears to safety (in my mind) and put myself in their place. That most definitely did not feel good to me.
We are within a decade of the autonomous drone swarm future. Our government is still apparently struggling to get voting machines to work consistently (although the cynical among us expect that the non-working voting machines are part of a deliberate approach to voter suppression in certain places.) At the same time, we can order food from our phone and have it delivered in 30 minutes, no matter what the food is or where we are located. Humans are still involved in the delivery, but that’s only a temporary hack on the way to the future where the drones just drop things off for us.
When I talk to friends about 2030 (and yes, I hope to still be around), most people extrapolate linearly from today. A few of my friends (mostly sci-fi writers like William and Eliot Peper) are able to consistently make the step-function leaps in imagination that represent the coming dislocation from our current reality. I don’t think it’s going to be visitations from aliens, distant space travel enabled by FTL drives, or global nuclear apocalypse. Sure, those are possible and, unless we get our shit together as humans on several dimensions, we’ll continue our steady environmental and ecological destruction of the planet. But that kind of stuff is likely background noise to the change that is coming.
It’s the change you can see through the bears’ eyes (and fear), set against the joy that humans mostly seem to get from observing them without really thinking about the unintended consequences. While the killer AI that smart people scarily predict could be front and center, I think it’s more likely that our inability to anticipate, and react to, unintended consequences is what’s really going to mess us up.
If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.
I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book, Kill Process (which is about to be released), is a departure from AI for him. Even though it’s not about AI, it’s amazing, so you should read it also.
I can’t remember who recommended Madeline Ashby and vN to me but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.
It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to face as a mixed society (non-augmented humans, augmented humans, and machines) for a while, and this is the first book I’ve read that feels like it could take place today. Ashby wrote this book in 2012, before the term AI got trendy again, and I love that she refers to the machines as vNs (named after von Neumann, with a delicious twist on the idea of a version number).
I found the human / vN (organic / synthetic) sex dynamic to be overwhelming at times, but a critically important underpinning of one of the major threads of the book. The mixed human / vN relationships, including those that involve parenting vN children, had qualities similar to some of what I’ve read about racially mixed, religiously mixed, and same-sex parents.
I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently, which traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of the Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g., vN body parts, or even an entire vN).
And then it starts to get complicated.
I’m going for a two-hour run this morning, so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.
I’m a huge fan of William Hertling. His newest book, The Turing Exception, is dynamite. It’s the fourth book in the Singularity Series, so you really need to read them from the beginning to totally get it, but they are worth every minute you’ll spend on them.
William occasionally sends me some thoughts for a guest post. I always find what he’s chewing on to be interesting and in this case he’s playing around with doing a Drake’s Equation equivalent for social networks. Enjoy!
Drake’s Equation is used to estimate the number of planets with currently communicating life, which helps us predict the odds of finding intelligent life in the universe. You can read the Wikipedia article for more information, but the basic idea is to multiply together a number of factors: the number of stars in our galaxy, the fraction of those that have planets, the average percent of planets that could support life, etc.
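To make the structure concrete, here’s a quick Python sketch of the equation. The factor names are the standard ones (N = R* · fp · ne · fl · fi · fc · L), but the values below are purely illustrative guesses:

```python
# Drake's Equation: N = R* * fp * ne * fl * fi * fc * L
# The factor names are canonical; the values are illustrative guesses.
R_star = 10.0   # average rate of star formation in our galaxy (stars/year)
f_p = 0.5       # fraction of stars that have planetary systems
n_e = 2.0       # planets per system that could support life
f_l = 0.3       # fraction of those planets where life actually appears
f_i = 0.1       # fraction of life-bearing planets that develop intelligence
f_c = 0.1       # fraction of intelligent civilizations emitting detectable signals
L = 1000.0      # average lifetime (years) of a communicating civilization

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations in the galaxy: {N:.0f}")  # 30
```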
I’m currently writing a novel about social networks, and one of the areas that’s interesting to me is what I think of as the empty network problem: a new social network has little benefit unless my friends are there. If I’m an early adopter, I might give it a few days and then leave. If my friends show up later, after I’ve already given up on it, then they don’t get any benefit either.
Robert Metcalfe, inventor of Ethernet, coined Metcalfe’s Law, which says “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n^2).”
Social networks actually have a more rigorous form of that law: “the value of a social network is proportional to the square of the number of connected friends.” That is, I don’t care about the number of strangers using a network, I care about the number of friends. (Friends being used loosely here to include friends, family, coworkers, business associates, etc.)
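A toy calculation (with hypothetical numbers) shows why the friends-based variant matters:

```python
# Metcalfe's Law vs. a friends-based variant, with hypothetical numbers.
def metcalfe_value(total_users: int) -> int:
    """Global network value: proportional to total connected users squared."""
    return total_users ** 2

def personal_value(connected_friends: int) -> int:
    """Value to one person: proportional to their connected friends squared."""
    return connected_friends ** 2

print(metcalfe_value(1_000_000))  # 1000000000000: a huge network of strangers
print(personal_value(2))          # 4: what that huge network is worth to me
print(personal_value(30))         # 900: a tiny network where 30 friends hang out
```

By the friends-based measure, a tiny network with 30 of my friends is worth over 200 times more to me than a million-user network where only 2 of my friends show up.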
Drake’s Equation helps predict the odds of finding life in the universe because it takes into account the rise and fall of civilizations: two civilizations must exist at the same time and within observable distance of each other in order to “find intelligent life”.
So there must be a similar type of equation that can help predict a person’s adoption of a new social network and takes into account that we’re only willing to try a network for so long before giving up.
Here’s my first shot at this equation:
P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B
P = Probability of long-term adoption
nN = Size of my Network (number of friends)
fEA = Fraction of my friends who are Early Adopters
fAv = Fraction of those who have Available time to try a new network
fBE = Fraction that overcome the Barrier to Entry
tT = Average length of Time people Try the network
tB = Average length of Time it takes to see the Benefit of the new network
nF = Number of Friends needed to see the benefit
B = The unique Benefit or desirability of the network
In plain English: the probability of a given person becoming a long-term user of a new social network is a function of the number of their friends who also adopt and how long they remain there, divided by the length of time and number of friends it takes to see the benefit, multiplied by the size of the unique benefit offered by the network.
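Here’s a minimal Python sketch of the equation exactly as written above. The inputs are invented, and since the raw formula isn’t normalized to [0, 1], treat the output as a relative adoption score rather than a literal probability:

```python
# The adoption equation from above; all sample values are invented.
def adoption_score(nN, fEA, fAv, fBE, tT, tB, nF, B):
    """P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B"""
    return (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

score = adoption_score(
    nN=150,   # size of my network (number of friends)
    fEA=0.1,  # fraction of my friends who are early adopters
    fAv=0.5,  # fraction of those with time to try a new network
    fBE=0.8,  # fraction that overcome the barrier to entry
    tT=7,     # average days people try the network
    tB=14,    # average days it takes to see the benefit
    nF=5,     # friends needed to see the benefit
    B=2.0,    # unique benefit of the network (arbitrary units)
)
print(score)  # 1.2 with these made-up inputs
```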
Some ideas that fall out from the equation:
What do you think?
I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.
If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the oldest, most powerful AIs, and he believes AIs and humans can co-exist.
Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses the mark.
Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”
Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”
Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.
Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.
Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”
I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.
I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.
If you want to watch the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.
My friend, and the best near term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He takes a balanced view of Elon’s comment and, as William always does, gives a thoughtful explanation of the short term risks and dynamics that is well worth reading. William’s punch line:
“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”
Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.
One of my favorite lines in the Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800’s). Another responds that “200 billion humans use a lot of plastic.”
Kwatz!
Nope – I’m not talking about an Android phone. I’m talking about an amazing book titled Nexus by Ramez Naam.
Ramez sent me a pre-release version last month. I read it over my holiday in Mexico while I was recovering from kidney stone surgery. I saved it for the end, when I was reasonably rested and cogent – it was amazing.
One of my favorite forms of science fiction is what I call “near term scifi.” It’s stuff set two to ten years in the future, usually linked back to current reality. In Nexus’s case, Ramez sets it 20+ years in the future, but I’m going to argue that he’s talking about stuff that’s within a decade. My guess is he chooses 2040-ish given the singularity dynamics – I prefer his post-human definition, when man and machine merge into one.
Ramez combines science, technology, and a thriller in a very accessible and page-turning way. If I ever decide to write fiction, my hope is that I could master the craft of scifi the way Ramez, William Hertling, and Daniel Suarez have. I put him firmly in their league.
If you are looking for a powerfully stimulating book to read over the holidays about where things are going, with a complex hero / protagonist / antagonist structure, plenty of twists and turns, and great scifi that intersects with our reality, go get a copy of Nexus right now.