Tag: ai

Sep 10 2019

AI is the Big Data of 2019

I attended a Silicon Flatirons Artificial Intelligence Roundtable last week. Over the years Amy and I have sponsored a number of these and I always find the collection of people, the topics, and the conversation to be stimulating and provocative.

At the end of the two hours, I was very agitated by the discussion. The Silicon Flatirons roundtable approach is that there are several short topics presented, each followed by a longer discussion.

The topics at the AI roundtable were:

  • Safety aspects of artificial general intelligence
  • AI-related opportunities on the horizon
  • Ethical considerations involving AI-related products and services

One powerful thing about the roundtable approach is that the topic presentation is merely a seed for a broader discussion. The topics were good ones, but the broader discussion made me bounce uncomfortably in my chair as I bit my tongue through most of the discussions.

In 2012, at the peak moment of the big data hype cycle, I gave a keynote at an Xconomy event on big data titled something like Big Data is Bullshit. My favorite quote from my rant was:

“Twenty years from now, the thing we call ‘big data’ will be tiny data. It’ll be microscopic data. The volume that we’re talking about today, in 20 years, is a speck.”

I feel that way about how the word AI is currently being used. As I listened to participants at the roundtable talk about what they were doing with AI and machine learning, I kept thinking “that has nothing to do with AI.” Then, I realized that everyone was defining AI as “narrow AI” (or, “weak AI”) which has a marvelous definition that is something like:

Narrow artificial intelligence (narrow AI) is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that spectrum.

The deep snarky cynic inside my brain, which I keep locked in a cage just next to my hypothalamus, was banging on the bars, shouting things like “So, is calculating 81! defined as narrow AI? How about calculating n!? Isn’t machine learning just throwing a giant data set at a procedure that then figures out how to use future inputs more accurately? Why aren’t people using the phrase neural network more? Do you need big data to do machine learning? Bwahahahahahahaha.”
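To put a fine point on the snark, here, in its entirety, is a program that outperforms every human alive at the “very narrowly defined task” of computing 81! – which, by the definition above, would seem to qualify it as narrow AI. It isn’t.

```python
import math

# The complete "narrow AI": outperforms any human at computing n!
print(math.factorial(81))  # a 121-digit number, computed instantly
```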

That part of my brain was distracting me a lot so I did some deep breathing exercises. Yes, I know that there is real stuff going on around narrow AI and machine learning, but many of the descriptions that people were using, and the inferences they were making, were extremely limited.

This isn’t a criticism of the attendees or anything they are doing. Rather, it’s a warning of the endless (or maybe recursive) buzzword labeling problem that we have in tech. In the case of a Silicon Flatirons roundtable, we have entrepreneurs, academics, and public policymakers in the room. The vagueness of the definitions and weak examples create lots of unintended consequences. And that’s what had me agitated.

At an annual Silicon Flatirons Conference many years ago, Phil Weiser (now the Attorney General of Colorado, then a CU Law Professor and Executive Director of Silicon Flatirons) said:

“The law doesn’t keep up with technology. Discuss …”

The discussion that ensued was awesome. And it reinforced my view that technology is evolving at an ever-increasing rate that our society and existing legal, corporate, and social structures have no idea how to deal with.

Having said that, I feel less agitated because it’s just additional reinforcement to me that the machines have already taken over.

Aug 21 2017

The Link Between Infinite Computing and Machine Learning

At the Formlabs Digital Factory event in June, Carl Bass used the phrase Infinite Computing in his keynote. I’d heard it before, but I liked it in this context and it finally sparked a set of thoughts which felt worthy of a rant.

For 50 years, computer scientists have been talking about AI. However, in the past few years, a remarkable acceleration of a subset of AI (or a superset, depending on your point of view) now called machine learning has taken over as the hot new thing.

Since I started investing in 1994, I’ve been dealing with the annual cycle of the hot new thing. Suddenly, a phrase is everywhere, as everyone is talking about, labeling, and investing in it.

Here are a few from the 1990s: Internet, World Wide Web, Browser, Ecommerce (with both a capital E and a little e). Or, some from the 2000s: Web Services, SOAs, Web 2.0, User-Generated Data, Social Networking, SoLoMo, and the Cloud. More recently, we’ve enjoyed Apps, Big Data, Internet of Things, Smart Factory, Blockchain, Quantum Computing, and Everything on Demand.

Nerds like to label things, but we prefer TLAs. And if you really want to see what the next year’s buzzwords are going to be, go to CES (or stay home and read the millions of web pages written about it.)

AI (Artificial Intelligence) and ML (Machine Learning) particularly annoy me, in the same way Big Data does. In a decade, what we are currently calling Big Data will be Microscopic Data. I expect AI will still be around as it is just too generally appealing to ever run its course as a phrase, but ML will have evolved into something that includes the word “sentient.”

In the meantime, I like the phrase Infinite Computing. It’s aspirational in a delightful way. It’s illogical, in an asymptotic way. Like Cloud Computing, it’s something a marketing team could get 100% behind. But, importantly, it describes a context that has the potential for significant changes in the way things work.

Since the year I was born (1965), we’ve been operating under Moore’s Law. While there are endless discussions about the constraints and limitations of Moore’s Law, most of the sci-fi that I read assumes an endless exponential growth curve associated with computing power, regardless of how you index it.
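As a rough sketch of what that exponential looks like, assuming the textbook simplification of one doubling every two years, the arithmetic since 1965 goes something like this:

```python
# Cumulative doublings under Moore's Law since 1965, assuming one
# doubling every two years (a textbook simplification, not a precise law)
start, now = 1965, 2017
doublings = (now - start) / 2   # 26 doublings
growth = 2 ** doublings         # 2**26, roughly a 67-million-fold increase
print(f"{doublings:.0f} doublings, ~{growth:,.0f}x")
```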

In that context, ponder Infinite Computing. It’s not the same as saying “free computing” as everything has a cost. Instead, it’s unconstrained.

What happens then?

Aug 24 2016

Ants and the Superintelligence

I’ll start with my bias – I’m very optimistic about the superintelligence.

Yesterday I gave two talks in Minneapolis. One was to an internal group of Target employees around innovation. In the other, I was interviewed by my partner Seth (for the first time), which was fun since he’s known me for 16 years and could ask unique questions given our shared experiences.

I can’t remember in which talk the superintelligence came up, but I rambled through an analogy I’ve come up with recently to simply describe the superintelligence, which I first saw in The AI Revolution: Our Immortality or Extinction. I woke up this morning thinking about it, along with one of the questions Seth asked me where my answer left me unsatisfied.

I’ve been reading most of what I could get my hands on about current thoughts and opinions about the superintelligence and the evolution of what a lot of people simply refer to as AI. I’ve also read, and am rereading, some classical texts on this, such as Minsky’s The Society of Mind. It’s a challenging subject, as it sits at the intersection of computer science and philosophy, combined with human efforts to define and describe the unknown.

My ants and the superintelligence rant is a way for me to simply explain how humans will relate to the superintelligence, and how the superintelligence will relate to humans.

If I’m a human, I am curious about and study ants. They have many interesting attributes that are similar to other species, but many that are unique. If you want to learn more in an efficient way, read anything written about them by E. O. Wilson. While I may think I know a lot about ants, I fundamentally can’t identify with them, nor can I integrate them into my society. But I can observe and interact with them, in good and bad ways, both deliberately as well as accidentally. Ponder an ant farm or going for a bike ride and driving over an ant hill. Or being annoyed with them when they are making a food line across your kitchen and calling the exterminator. Or peacefully co-existing with them on your 40 acres.

If I’m an ant, there are giant exogenous forces in my world. I can’t really visualize them. I can’t communicate with them. I spend a lot of time doing things in their shadow but never interacting with them, until there is a periodic overlap that is often tragic, chaotic, or energizing. I get benefit from their existence, until they accidentally, or deliberately, do something to modify my world.

In my metaphor, the superintelligence == humans and humans == ants.

Ponder it. For now, it’s working for me. But tell me why it doesn’t work so I can learn and modify my thinking.

Jun 12 2016

vN – The AI Book That Should Be Turned Into A Movie

If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.

I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book – Kill Process (which is about to be released) – is a departure from AI for him (even though it’s not about AI, it’s amazing, so you should read it also).

I can’t remember who recommended Madeline Ashby and vN to me but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.

It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to have as a mixed society (non-augmented humans, augmented humans, and machines) for a while and this is the first book that I’ve read that feels like it could take place today. Ashby wrote this book in 2012 before the phrase AI got trendy again and I love that she refers to the machines as vNs (named after Von Neumann, with a delicious twist on the idea of a version number.)

I found the human / vN (organic / synthetic) sex dynamic to be overwhelming at times but a critically important underpinning of one of the major threads of the book. The mixed human / vN relationships, including those that involve parenting vN children, had similar qualities to some of what I’ve read around racially mixed, religiously mixed, and same-sex parents.

I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently, which traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of that Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g. vN body parts, or even the entire vN.)

And then it starts to get complicated.

I’m going for a two hour run this morning so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.

Jun 10 2016

AI Screenplay Writing Has a Long Way to Go

Sunspring, the first known screenplay written by an AI, was produced recently. It is awesome. Awesomely awful. But it’s worth watching all ten minutes of it to get a taste of the gap between a great screenplay and something an AI can currently produce.

It is intense, as Ars Technica states, but that’s not because of the screenplay. It’s because of the incredible acting by Thomas Middleditch and Elisabeth Gray, who turned an almost illiterate script into an incredible ten-minute experience. Humphrey Ker, on the other hand, appears to just be a human prop.

AI has a very long way to go. But it’s going to get there very fast because it understands exponential curves.

Oct 12 2015

What Is The “Third Wave” Of This Generation?

When I was 14, my dad gave me a copy of Alvin Toffler’s book The Third Wave.

It blew my fucking mind.

I then read the prequel – Future Shock – which was good – but since my mind was already blown, it was anticlimactic.

If you don’t know the arc of Toffler’s waves, they go as follows:

  • The First Wave: agricultural society
  • The Second Wave: industrial society
  • The Third Wave: post-industrial society

Future Shock was written in 1970 and The Third Wave was written in 1980. While the idea of post-industrial society seems obvious in hindsight, in 1980 it was a completely new idea.

Ever since then I’ve been wondering what the next wave would be. While Kurzweil’s The Singularity Is Near is probably the closest book I’ve read to stimulating me the way The Third Wave did when I was 14, at some point I just felt hollow and disappointed when I read the latest futurist manifesto. Instead, I ventured further into the future with the science fiction that I have always read on a regular basis and used it as my stimulus.

Recently, a bunch of smart and famous tech entrepreneurs have been talking about AI and the impact of AI on civilization. I’ve read a few of the books that get tossed around, like Bostrom’s Superintelligence, and a bunch of the articles that people have written. But none have spoken to me, or blown my mind the way Toffler did 35 years ago.

I’m on a search for the “Third Wave” of this generation. Any ideas for me?

Jun 4 2015

Hertling’s Equation

I’m a huge fan of William Hertling. His newest book, The Turing Exception, is dynamite. It’s the fourth book in the Singularity Series, so you really need to read them from the beginning to totally get it, but they are worth every minute you’ll spend on them.

William occasionally sends me some thoughts for a guest post. I always find what he’s chewing on to be interesting and in this case he’s playing around with doing a Drake’s Equation equivalent for social networks. Enjoy!

Drake’s Equation is used to estimate the number of planets with currently communicating life, which helps us predict the odds of finding intelligent life in the universe. You can read the Wikipedia article for more information, but the basic idea is to multiply together a number of functions: the number of stars in our galaxy, the fraction of those that have planets, the average percent of planets that could support life, etc.

I’m currently writing a novel about social networks, and one of the areas that’s interesting to me is what I think of as the empty network problem: a new social network has little benefit unless my friends are there. If I’m an early adopter, I might give it a few days, and then leave. If my friends show up later, and I’ve already given up on it, then they don’t get any benefit either.

Robert Metcalfe, inventor of Ethernet, coined Metcalfe’s Law, which says “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n^2).”

Social networks actually have a more rigorous form of that law: “the value of a social network is proportional to the square of the number of connected friends.” That is, I don’t care about the number of strangers using a network, I care about the number of friends. (Friends being used loosely here to include friends, family, coworkers, business associates, etc.)
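A toy sketch of the difference (the function names and numbers are mine, invented purely for illustration):

```python
def metcalfe_value(n_users: int) -> int:
    # Metcalfe's Law: network value grows with the square of connected users
    return n_users ** 2

def value_to_me(n_friends: int) -> int:
    # The social-network variant: only my connected friends count
    return n_friends ** 2

print(metcalfe_value(1_000_000))  # 10^12 -- but that's the network's value
print(value_to_me(0))             # 0 -- a million strangers do nothing for me
print(value_to_me(12))            # 144 -- a dozen friends change everything
```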

Drake’s Equation helps predict the success of finding life in the universe because it takes into account the rise and fall of civilizations: two civilizations must exist at the same time and within observable distance of each other in order to “find intelligent life”.

So there must be a similar type of equation that can help predict a person’s adoption of a new social network and takes into account that we’re only willing to try a network for so long before giving up.

Here’s my first shot at this equation:

P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

P = probability of long-term adoption

nN = Size of my network (number of friends)
fEA = Fraction of my friends who are Early Adopters
fAv = fraction of those who have available time to try a new network
fBE = fraction that overcome the Barrier to Entry
tT = Average length of time people Try the network
tB = Average length of time it takes to see Benefit of the new network
nF = Number of Friends needed to see benefit
B = The unique benefit or desirability of the network

In plain English: The probability of a given person becoming a long term user of a new social network is a function of the number of their friends who also adopt and how long they remain there divided by the length of time and number of friends it takes to see the benefit multiplied by the size of the unique benefit offered by the social network.
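Here’s the equation as a quick Python sketch (the function name and sample values are mine, invented purely for illustration). Since the raw result isn’t bounded by 1, it’s best read as a relative adoption score rather than a strict probability:

```python
def adoption_probability(nN, fEA, fAv, fBE, tT, tB, nF, B):
    """P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B"""
    return (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

P = adoption_probability(
    nN=150,   # size of my network (number of friends)
    fEA=0.1,  # fraction of friends who are early adopters
    fAv=0.5,  # fraction with available time to try a new network
    fBE=0.5,  # fraction who overcome the barrier to entry
    tT=7,     # average days people try the network
    tB=14,    # average days it takes to see the benefit
    nF=3,     # number of friends needed to see the benefit
    B=1.5,    # unique benefit or desirability of the network
)
print(P)  # ~0.9375 with these invented numbers
```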

Some ideas that fall out from the equation:

  • A network that targets teens, who may tend to have more time and may be more likely to be early adopters, will have an easier time gaining adoption than a network that targets busy executives, all other things being equal (see the sketch after this list).
  • A network that has a benefit when even two friends connect will see easier initial adoption than one that requires a dozen friends to connect.
  • A network whose benefit applies to distant connections is at an advantage compared to one that only offers a benefit to close friends (because N is larger)
  • A sufficiently large unique benefit can overcome nearly any disadvantage.
  • Social gaming is interesting, because it provides benefit even when connected to no one, the benefit increases when connected to strangers, and then increases even more when connected to friends.
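Plugging invented numbers into the sketch above makes the first idea on that list concrete:

```python
# Teens: more available time, more early adopters (invented numbers)
teens = adoption_probability(nN=150, fEA=0.3, fAv=0.8, fBE=0.7,
                             tT=14, tB=7, nF=3, B=1.0)

# Busy executives: same-size network, less time, fewer early adopters
execs = adoption_probability(nN=150, fEA=0.05, fAv=0.1, fBE=0.7,
                             tT=3, tB=7, nF=3, B=1.0)

print(teens, execs)  # roughly 16.8 vs 0.075 -- teens adopt far more readily
```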

What do you think?

May 4 2015

Reflections on Ex Machina

Amy and I saw Ex Machina last night. A steady stream of people have encouraged us to go see it so we made it Sunday night date night.

The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And, while the movie is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of Cylon 6 several times and felt a little sad for her distant, and much less evolved, cousin Ava.

Thoughts tumbled out of Amy’s head on our drive home and I reacted to some while soaking up a lot of them. The themes of AI, gender, social structures, and philosophy are inseparable, and a movie like this provokes a lot of reactions at their intersection. I love to just listen to Amy talk, as I learn a lot more that way than by staying in the narrow boundaries of my mind pondering how the AI works.

Let’s start with gender and sexuality, which are in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. An evolutionary leap in intelligence has nothing to do with gender or sexual reproductive organs. Why would you build a robot with a hole that has extra sensors so she feels pleasure unless you were creating a male fantasy?

When you consider the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle and it so deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.

What if gender was truly irrelevant in an advanced intelligence?

You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.

Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.

Then we have some existential dread. When Ava starts worrying to Caleb about whether or not she will be unplugged if she fails the test, she introduces the idea of mortality into the mix. Her survival strategy creates a powerful subterfuge – another human trait – which then infects Caleb and appears to be contained by Nathan, until it isn’t.

But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?

At some point in the movie we both thought Nathan was an AI, and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a bad ass, not an insecure childlike form. If she was built on all human knowledge, based on what a search engine knows, Ava would know better than to walk out into the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.

Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He’s the smartest and richest guy, living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He’s the nerds’ man. He’s also the hyper-masculine gender foil to the omnipresent female nudity.

Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.

Ironically, one of the trailers that we saw was Jurassic World. We fuck with mother nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And are either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?

I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.

Jan 16 2015

Discovering Ingress

Yesterday at the end of the day I was sitting in Greg Gottesman’s office at Madrona catching up on email before dinner. Greg walked in with Ben Gilbert from Madrona Labs.

We started talking about sci-fi and Greg said “Are you into Ingress?” I responded “Is that the Google real-world / augmented reality / GPS game?” Greg said yes and I explained that I’d played with it a little when it first came out several years ago since a few friends in Boulder were into it but I lost track of it since there wasn’t an iOS app.

Greg pulled out his iPhone 6+ giant thing and started showing me. Probably not surprising to anyone, I grabbed my phone, downloaded it, created an account using the name the AIs have given me (“spikemachine”), and started doing random things.

Greg went down the hall and grabbed Brendan Ribera, also from Madrona Labs, who is a Level 8 superstar Ingress master-amazing-player. Within a few minutes we were on the Ingress Map looking at stuff that was going on around the world.

By this point my mind was blown and all I wanted to do was get from basic-beginner-newbie-no-clue-Ingress player to Level 2. With Brendan as my guide I quickly started to get the hang of it. A few hacks and XMPs later I was Level 2.

I asked Greg, Ben, and Brendan if they had read Daemon by Daniel Suarez. None of them had heard of it, so I went on a rant about Rick Klau’s discovery of the book and Leinad Zeraus, the evolution of this crazy thing into Daniel Suarez’s bestseller, and the rest of my own wonderful romp through the writings of Daniel Suarez, William Hertling, and Ramez Naam. It wasn’t merely my love of near-term sci-fi, but my discovery of what I believe is the core of the next generation of amazing near-term sci-fi writers. And, as a bonus for having to listen to me, I bought each of them a Kindle version of Daemon.

Ingress completely feels like Daemon to me. There is plenty of chatter on the web speculating about the similarities between Daemon and Ingress and whether one inspired the other. I have no idea what the real story is, but since we are all suspending disbelief in both near-term sci-fi as well as Ingress, I’m going with the notion that they are linked even more than us puny humans realize.

This morning as I was walking through Sea-Tac on my way to my plane, I hacked a few portals, got a bunch of new stuff, and XMPed away whenever a resistance portal came into range. I’m still a total newbie, but I’m getting the hang of it. And yes, I’m part of the enlightenment as it offends me to the core of my soul that people would resist the future, although it seems to be more about smurfs vs. frogs.

Dec 31 2014

Fundamental Software Problems That Haven’t Been Solved Yet

I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard” or “why isn’t the data immediately available” or “why am I having to tell the software to do this” or “man, this is ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (first coined in 1958 by John von Neumann, not more recently by Ray Kurzweil, who has made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing, which is companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, 20, 30, or more years, but I’m spending intellectual energy using these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.
