Category: The Future

Jan 2 2018

The Past, Present, and The Future

The past is ungraspable,
the present is ungraspable,
the future is ungraspable.

– Diamond Sutra

Now that it’s 2018, the inevitable predictions for 2018 are upon us.

I’m not a predictor. I never have been and don’t expect I ever will be. However, I do enjoy reading a few of the predictions, most notably Fred Wilson’s What Is Going To Happen In 2018.

Unlike past years, Fred led off this year with something I feel like I would have written.

“This is a post that I am struggling to write. I really have no idea what is going to happen in 2018.”

He goes on to make some predictions but leaves a lot in the “I have no idea” category.

I mentioned this to Amy and she quickly said:

And that, simply put, is my goal for 2018.

As I read my daily newsfeed this morning, I came upon two other predictions that jumped out at me, both second-order effects of US government policy changes.

The first is “tech companies will use their huge hoards of repatriated cash to buy other companies.” According to Citi, there is a 40% chance Apple will acquire Netflix, and influential tech analyst Gene Munster predicts Amazon will buy Target in 2018. The Apple/Netflix one is clearly linked to “Apple has so much cash – they need to use it.” While the Amazon one is more about “Amazon needs a bigger offline partner than Whole Foods,” it feels like it could easily get swept into the “tons of dollars sloshing around in US tech companies – go buy things!” narrative.

The second is “get those immigrants out of the US, even if they are already here and contributing to our society.” H-1B visa rules: Trump admin considers tweak that may lead to mass deportation of Indians is the next layer down, where the Executive Branch can simply modify existing rules, with potentially massive consequences.

I’ve been reading The Lessons of History by Will and Ariel Durant. Various Cylons on BSG had it right when they said, “All of this has happened before. All of this will happen again.”

Comments
Jul 24 2017

The Pessimists’ Future

An amazing book. But a dark, dark future. Or not, depending on whether or not you believe we are actually living in a computer simulation already.

Comments
Jul 12 2016

Looking Forward to 2025

If we fund an early stage startup company today and it’s hugely successful, it’ll be coming into its own in 2025. Ponder that for a moment.

That’s how our business – and entrepreneurship – really works. With all of the excitement around entrepreneurship in the past few years, there has been a lot of shorter term thinking. I’m seeing and hearing a lot more of it these days. This is dangerous, especially for founders.

I was in Boston at the end of May and had three separate experiences in one day. The first was with a company started in 2011 that is now a real business. The second was Techstars Boston Demo Day, showcasing 14 brand new companies. We started Techstars in 2006 and ran the first Boston program in 2009. The last was dinner with Alex Rigopulos, the co-founder of Harmonix, which he co-founded with Eran Egozy in 1995, sold in 2007, bought back in 2010, and is still running today.

We have three typical units of measure in business today: a month, a quarter, and a year. Many companies measure things on a daily basis, but decision making at this level is particularly difficult, especially as you add people to the mix. Most of the monthly measurements are backward looking (e.g. financial reporting), although some are cadence generating (product release cycles, which can be continuous, but with significance once or twice a month for many companies).

You get a little planning in the mix on a quarterly cycle. If you are on a leadership team, the question “how did the quarter go?” is likely a common refrain you hear four times a year. If your company has a good planning rhythm, you are reflecting on the quarter while simultaneously planning and adjusting for the next one. We are in the second week of Q316 – if you’ve rolled out your Q3 plan or your 2H plan to your team then you know what I mean.

The annual cycle is very predictable and omnipresent. I don’t think it merits much comment here.

While these are all important, none of them matter nearly as much as a long-term aperture. If you limit your thinking to one year, you are screwed in the long term. Humans are particularly bad at non-linear thinking, which is at the core of any innovation process. If you want to understand this better, go soak in Ray Kurzweil’s classic essay about The Law of Accelerating Returns, where he discusses the intuitive linear view versus the historical exponential view.

Now that you’ve spent a few paragraphs thinking about days, months, quarters, and years, consider a decade. Can you even imagine your company over the next decade? While it’s easy to feel like we are compressing time with extreme success cases like Facebook and Twitter, consider Nike from 1964 to 1974 or Starbucks from 1971 to 1981 (Howard Schultz didn’t even join until 1982.) For perspective, explore any successful company’s first decade.

While there is a ton of variability in the trajectories of various successful companies, my favorite personal example is Harmonix, which spent a decade trying to go out of business every year before its “overnight success” of the launch of Guitar Hero. From the epic Inc. Magazine reflective history of the company in 2008:

It all easily might never have happened. “We were on the brink of death, I don’t know, 10 times over those 10 years,” Rigopulos says. Harmonix missed the cash gusher of the Internet bubble almost entirely while it pursued ideas that bombed miserably, one after another. In 1999, the year an online pet store fronted by a sock puppet raised $50 million, Harmonix was laying off staff. Its founders sometimes give the impression of still being a bit shaken. Last year, when fawning organizers of a video game conference asked Rigopulos to give a speech about “living the dream,” he wistfully marked up a PowerPoint chart of Harmonix’s annual profits and losses. He labeled the company’s breakout year, 2006, as “The Dream.” The years 1995 through 2005, shown almost entirely in red ink, were “The Part Before That.”

I turned 50 in December and have been thinking more about the passage of time recently. I’ll be 60 in 2025. That’s a good marker for many of the early stage companies I’m involved in. “You’ll be the real deal when I’m 60” is a powerful way for me to frame the time commitment it takes to create something substantial out of nothing.

The next time we talk, tell me what your company will look like in 2025.

Comments
Mar 30 2016

Figuring Out The Future By Reading Sci-Fi From The Past

I’ve decided to read a bunch of old science fiction as a way to form some more diverse views of the future.

I’ve been reading science fiction since I was a kid. I probably started around age ten and was a voracious reader of sci-fi and fantasy in high school. I’ve continued on as an adult, estimating that 25% of what I read is science fiction.

My early diet was Asimov, Heinlein, Harrison, Pournelle, Niven, Clarke, Sterling and Donaldson. When I was on sabbatical a few years ago in Bora Bora I read about 40 books including Asimov’s I Robot, which I hadn’t read since I was a teenager.

I’m almost done with Liu’s The Dark Forest which is blowing my mind. Yesterday morning I came across a great interview from 1999 with Arthur C. Clarke. A bunch of dots connected in my mind and I decided to go backwards to think about the future.

I don’t think we can imagine what things will be like 50 years from now and I’m certain we have no clue what a century from now looks like. So, whatever we believe is just random shit we are making up. And there’s no better way to come across random shit that people are making up than by reading sci-fi, which, even if it’s terribly incorrect, often stimulates really wonderful and wide ranging thoughts for me.

So I thought I’d go backwards 50+ years and read sci-fi written in the 1950s and 1960s. I, Robot, written in 1950, was Asimov’s second book so I decided to start with Pebble In the Sky (his first book, also written in 1950). After landing on Amazon, I was inspired to buy the first ten books by Asimov, which follow.

Pebble In The Sky (1950)
I, Robot (1950)
The Stars, Like Dust (1951)
Foundation (1951)
David Starr, Space Ranger (1952)
Foundation and Empire (1952)
The Currents of Space (1952)
Biochemistry and Human Metabolism w/Williams & Wilkins (1952)
Second Foundation (1953)
Lucky Starr and the Pirates of the Asteroids (1953)

They are all sci-fi except Biochemistry and Human Metabolism, published with Williams & Wilkins in 1952. I bought it also, just for the hell of it.

I bought them all in paperback and am going to read them as though I was reading them in the 1950s (on paper, without any interruptions from my digital devices) and see what happens in my brain. I’ll report back when I’m finished (or maybe along the way).

If this list inspires you with any sci-fi books from the 1950s or 1960s, toss them in the comments and I’ll grab them.

Comments
Feb 4 2016

Introducing The Dial Telephone

If you are over 80 years old, you experienced the transition from the non-dial telephone to the dial telephone, which included the magic “finger stop.”

If you are 30, imagine what you will be reflecting on 50 years from now.

Comments
Sep 28 2015

The Neurotech Era

The 2015 Defrag Conference is happening on November 11-12. Early bird pricing ends tomorrow.

For a taste of what you’ll get if you attend, following is a guest post by Ramez Naam, the author of five books including the award-winning Nexus trilogy of sci-fi novels. I’m a huge fan of Ramez and his books – they are in my must-read near-term sci-fi category.

A shorter version of this article first appeared at TechCrunch. The tech has advanced, even since then.


The final frontier of digital technology is integrating it into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it’s possible.

You’ve probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all five senses; Augmentation of human memory, attention, and learning speed; Even multi-sense telepathy – sharing what we see, hear, touch, and even perhaps what we think and feel with others.

Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn’t real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he’d felt the heat of the dragon’s breath before.

Sound crazy? It is… and it’s not.

Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

Or consider vision. For more than a year now, we’ve had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at. (They’d do even better with implants in the brain.)

Sound, we’ve been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.

Nor are our senses or motion the limit.

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

Now, let me be clear. All of these systems, for lack of a better word, suck. They’re crude. They’re clunky. They’re low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6’s A8 chip has 2 billion transistors. (Though, let’s be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don’t even have that.

The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces viable only for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We’re in the DOS era, if not even further back.

But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he’d ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what’s happening.

We’ve already seen that deep brain stimulators can do amazing things for patients with Parkinson’s. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you’re running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let’s get hungry for lunch around noon. Turn down the sugar cravings, though.)

Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the “hippocampus chip” that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we’re likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them – just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It’d also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain. Very crude versions of it have been demonstrated. We’ve already emailed verbal thoughts back and forth from person to person. And the field is moving fast. Just this month (after Apex came out) Duke researchers showed that one rat can learn from another, directly via implants in their brains.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. The same Duke research I just mentioned also showed that multiple rats or multiple monkeys working together via brain implants could sometimes achieve results better than a single animal. The mind meld is here.

Neural communication is just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.

Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

“Focus,” Ilya snapped. “Can you erase her memories of tonight? Fuzz them out?”

“Nothing subtle,” he replied. “Probably nothing very effective. And it might do some other damage along the way.”

The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don’t come true, can you imagine what spammers and advertisers would do with an interface to your neurons if it were the least bit non-secure?)

Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

For now, we’re a long way from facing such a choice. In fiction I can turn the neural implant into a silvery vial of nano-particles that you swallow, which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

That’s changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the bloodstream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all.

The pace of change is so fast that every few months brings a new cutting-edge technology. The latest is a ‘neural mesh’ that’s been implanted into mouse brains via a single injection through the skull.

Even so, we’re a long way from having such a device that’s proven to work – safely, for long periods of time – in humans. We don’t actually know how long it’ll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it’ll take decades. Maybe it’ll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

Will we be ready? I, for one, am enthusiastic. There’ll be problems. Lots of them. There’ll be policy and privacy and security and civil rights challenges. But just as we see today’s digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we’ll see much more positive than negative if we ever get to direct neural interfaces.

In the meantime, I’ll keep writing novels about them. Just to get us ready.

Comments
May 4 2015

Reflections on Ex Machina

Amy and I saw Ex Machina last night. A steady stream of people have encouraged us to go see it so we made it Sunday night date night.

The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And, while it is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of Cylon 6 several times during this movie and felt a little sad for her distant, and much less evolved, cousin Ava.

Thoughts tumbled out of Amy’s head on our drive home and I reacted to some while soaking up a lot of them. AI, gender, social structures, and philosophy are inseparable and provoke a lot of reactions in a movie like this. I love to just listen to Amy talk as I learn a lot, rather than staying in the narrow boundaries of my mind pondering how the AI works.

Let’s start with gender and sexuality, which is in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. When you consider evolutionary leaps in intelligence, it isn’t about gender or sexual reproductive organs. Why would you build a robot with a hole that has extra sensors so she feels pleasure unless you were creating a male fantasy?

When you consider the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle and it so deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.

What if gender was truly irrelevant in an advanced intelligence?

You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.

Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.

Then we have some existential dread. When Ava starts worrying to Caleb about whether or not she will be unplugged if she fails the test, she introduces the idea of mortality into this mix. Her survival strategy creates a powerful subterfuge, which is another human trait, which then infects Caleb, and appears to be contained by Nathan, until it isn’t.

But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?

At some point in the movie we both thought Nathan was an AI, and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a bad ass, not an insecure childlike form. If she were built on all human knowledge, based on what a search engine knows, Ava would know better than to walk out in the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.

Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He’s the smartest and richest guy living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He’s the nerds’ man. He’s also the hyper-masculine gender foil to the omnipresent female nudity.

Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.

Ironically, one of the trailers that we saw was Jurassic World. We fuck with mother nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And are either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?

I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.

Comments
Dec 27 2014

Asimov’s I, Robot and Hertling’s The Turing Exception

William Hertling is one of my top five favorite contemporary sci-fi writers. Last night, I finished the beta (pre-copyedited) version of his newest book, The Turing Exception. It’s not out yet, so you can bide your time by reading his three previous books, which will become a quadrilogy when The Turing Exception ships. The books are:

  1. Avogadro Corp: The Singularity Is Closer Than It Appears
  2. A.I. Apocalypse
  3. The Last Firewall

William has fun naming his characters – I appear as a minor character early in The Last Firewall – and he doesn’t disappoint with clever easter eggs throughout The Turing Exception, which takes place in the mid-2040s.

I read Asimov’s classic I, Robot in Bora Bora as part of my sci-fi regimen. The book bears no resemblance to the mediocre Will Smith movie of the same name. In the book, written in 1950, Asimov’s main character, Susan Calvin, has just turned 75 after being born in 1982, which puts his projection into the future ending around 2057, a little later than Hertling’s, but in the same general arena.

As I read The Turing Exception, I kept flashing back to bits and pieces of I, Robot. It’s incredible to see where Asimov’s arc went, based in the technology of the 1950s. Hertling has almost 65 more years of science, technology, innovation, and human creativity on his side, so he gets a lot more that feels right, but it’s still a 30-year projection into the future.

The challenges between the human race and computers (whether machines powered by positronic brains or just pure AIs) are similar, although Asimov’s machines are ruled by his three laws of robotics while the behavior of Hertling’s AIs is governed by a complex reputational system. And yes, each of these constructs breaks, evolves, or becomes difficult to predict over time.

While reading I, Robot I often felt like I was in a campy, fun, Vonnegut-like world until I realized how absolutely amazing it was for Asimov to come up with this stuff in 1950. Near the middle, I lost my detached view of things, where I was observing myself reading and thinking about I, Robot and Asimov, and ended up totally immersed in the second half. After I finished, I went back and reread the intro and the first story and imagined how excited I must have been when I first discovered I, Robot, probably around the age of 10.

While reading The Turing Exception, I just got more and more anxious. The political backdrop is a delicious caricature of our current state of the planet. Hertling spends little time on character background since this is book four and just launches into it. He covers a few years at the beginning very quickly to set up the main action, which, if you’ve read this far, I expect you’ll infer is a massive life and death conflict between humans and AIs. Well – some humans, and some AIs – which define the nature of the conflict that impacts all humans and AIs. Yes, lots of EMPs, nuclear weapons, and nanobots are used in the very short conflict.

Asimov painted a controlled and calm view of the future of the 2040s, one where humans were still solidly in control, even when there is conflict. Hertling deals with reality more harshly since he understands recursion and extrapolates where AIs can quickly go. This got me thinking about another set of AIs I’ve spent time with recently: Dan Simmons’s AIs from the Hyperion series. Simmons’s AIs are hanging out in the 2800s so, unlike Hertling’s, which are (mostly) confined to earth, they have traversed the galaxy and actually become the void that binds. I expect that Hertling’s AIs will close the gap a little faster, but the trajectory is similar.

I, Robot reminded me that as brilliant as some are, we have no fucking idea where things are heading. Some of Asimov’s long arcs landed in the general neighborhood, but much of it missed. Hertling’s arcs aren’t as long and we’ll have no idea how accurate they were until we get to 2045. Regardless, each book provides incredible food for thought about how humanity is evolving alongside our potentially future computer overlords.

William – well done on #4! And Cat totally rules, but you knew that.

Comments
Nov 3 2014

The Future Will Look Different From The Present

I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.

If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the older, most powerful AIs who believes AIs and humans can co-exist.

Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters, which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses greatly.

Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”

Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.

Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.

Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.

I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a tweet I got, that a bot actually did it, which would be deliciously ironic.

If you want to hear the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.

My friend, and the best near-term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He took a balanced view of Elon’s comment and, as William always does, offered a thoughtful explanation of the short-term risks and dynamics, well worth reading. William’s punch line:

“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”

Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.

One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”

Kwatz!

Sep 23 2014

From Punch Cards to Implants

While watching </scorpion> last night, Amy made the comment that we are the bridge generation. I asked her what she meant and she responded that we are the generation that will have gone from punch cards to implants. I thought this was profound.

BTW – </scorpion> was pretty good, although it’s getting crappy reviews according to Wikipedia. It’s not lost on me that the name of the show appears to be “end scorpion” so either someone in Hollywood is being too cute for their own good or they are clueless about HTML.

The first program I wrote was in 1977 in APL on an IBM mainframe (probably an S/360) in the basement of a Frito-Lay data center in downtown Dallas. My uncle Charlie sat me down in a chair in front of a terminal, gave me a copy of Kenneth Iverson’s A Programming Language, and left me alone for a while. He checked on me a few times, showed me the OCR system he’d helped create, and gave me some punch cards, which I promptly folded, spindled, and mutilated.

My second program was on a computer at Richland College shortly thereafter. My parents got me into a community college course on programming, and I was the precocious 12-year-old in the class. I remember writing a high-low game, but I don’t remember the type of computer it was on. My guess is that it was a DEC PDP-something – maybe a PDP-8.
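The high-low game mentioned above is a classic first programming exercise: the computer picks a secret number and tells the player whether each guess is too high or too low. A minimal sketch of that logic in Python (the original would have been written in something like BASIC on the PDP; the function names here are mine, for illustration):

```python
def high_low_feedback(secret, guess):
    """Classic high-low game logic: compare a guess to the secret number."""
    if guess < secret:
        return "too low"
    if guess > secret:
        return "too high"
    return "correct"

def play(secret, guesses):
    """Run a sequence of guesses, stopping once the guess is correct."""
    history = []
    for g in guesses:
        result = high_low_feedback(secret, g)
        history.append((g, result))
        if result == "correct":
            break
    return history
```

For example, `play(42, [50, 25, 42])` returns `[(50, "too high"), (25, "too low"), (42, "correct")]` – a halving strategy like this finds any number from 1 to 100 in at most seven guesses.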

Shortly after, I was introduced to a TRS-80 and then got an Apple II (the original one – not an Apple IIe – I even needed an Integer Card) for my bar mitzvah and was off to the races.

Almost 40 years later I’m still at it, but now investing rather than programming. When I think of what interests me right now, it’s all stuff that is in the “implant” spectrum – not quite there yet, but starting to march toward it with a steady pace. I believe in our AI future, think the Cylons are a pretty good representation of where things are going, am deeply intrigued with Hawking drives and the Shrike, and am ready to upload my consciousness “whenever.”

Assuming I live another 30+ years, I’ll definitely have experienced the bridge from punch cards to implants. And I think that’s pretty cool.
