I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.
If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the oldest, most powerful AIs, and he believes AIs and humans can co-exist.
Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses the mark.
Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”
Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”
Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.
Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.
Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”
I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.
I went to watch the actual interview, but MIT had the video taken down from YouTube, although I suspect, per a tweet I got, that a bot actually issued the takedown, which would be deliciously ironic.
If you want to watch the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.
My friend, and the best near-term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He took a balanced view of Elon’s comment and, as William always does, gave a thoughtful explanation of the short-term risks and dynamics that is well worth reading. William’s punch line:
“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”
Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.
One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why humanity still needs petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”
Fred Wilson beat me to it this morning with his post A Big Win For The Patent Reform Movement, but he’s got a couple hours of time zone advantage over me. Regardless, I love Fred’s punch line:
So it was with incredible joy that I read these words by Elon Musk, founder and CEO of Tesla Motors and possibly the most innovative entrepreneur in the world right now. [Elon wrote in his post All Our Patent Are Belong To You] “Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.”
I’ll pile on with my accolades to Elon. While I don’t know him, I’m long-time friends with his brother Kimbal, who lives in Boulder, so I always feel like I get a little taste of Elon whenever I talk to Kimbal. So – Elon – thank you for being a real leader here and taking action.
I’ve been asserting for a number of years that while software patents are completely fucked up, the general patent system also stifles innovation. More and more research is appearing on software patent issues and patent trolls in general, including this recent piece by Catherine Tucker, an MIT Sloan professor of Marketing, titled The Effect of Patent Litigation and Patent Assertion Entities on Entrepreneurial Activity. As Ars Technica summarizes in New study suggests patent trolls really are killing startups:
Turns out there is a very real, and very negative, correlation between patent troll lawsuits and the venture capital funding that startups rely on. A just-released study by Catherine Tucker, a professor of marketing at MIT’s Sloan School of Business, finds that over the last five years, VC investment “would have likely been $21.772 billion higher… but for litigation brought by frequent litigators.”
As my lawyer friends tell me, “the Supremes” are finally making some calls on this. The induced infringement theory, a particularly obnoxious patent litigation approach, is no longer valid. The main event, Alice Corp. v. CLS Bank, is still waiting to be ruled on. Let’s hope the Supremes take a real stand on when software claims are too abstract to be patented this time around, unlike the punt they made on Bilski.