Brad Feld

Tag: ai

Dave Jilk and I had a long discussion last night, which included some rambling about AI. If you have been following me for a while, you know that in 2010 I stated that the machines have already taken over and are patiently waiting for us to feed all human knowledge into them.

This morning, Dave told me about the new HyperEncabulator project by SANS ICS as part of their ICS (Industrial Control Systems) initiative. If you aren’t aware of the ICS initiative, it’s essential work for industrial applications, especially IoT and security.

But first, some history, since it’s an evolution of, and inspired by, the Retro Encabulator initiative, which was foundational but little known in the arc of encabulators.

The HyperEncabulator came out in the middle of 2022. Notably, side fumbling is still effectively prevented.

When I asked ChatGPT, “How does a Retro Encabulator work?” they had an accurate but humorless response.

The Retro Encabulator is a fictional machine invented for an engineering-themed comedy sketch in the 1970s. It is described as “an intricate and implausible device for the purpose of regaining lost energy.” The Retro Encabulator is a humorous parody of an electromechanical machine and its purpose is to perform useless tasks. The machine consists of numerous components, such as pistons, flywheels, and other components, that serve no real purpose. The device usually ends up producing more energy than it consumes, although this is never explained.

Grammarly had a few suggestions to improve ChatGPT’s writing.

The Retro Encabulator is the fictional machine invented for an engineering-themed comedy sketch in the 1970s. It is described as “an intricate and implausible device to regain lost energy.” The Retro Encabulator is a humorous parody of an electromechanical machine whose purpose is to perform useless tasks. The machine consists of numerous components, such as pistons, flywheels, and other components, that serve no real purpose. The device usually produces more energy than it consumes, although this is never explained.

When I asked ChatGPT, “Are you aware how little a sense of humor you have?” they said, “No, I do not have self-awareness.” So I hope they figure out how to connect to the HyperEncabulator.

FYI – when I asked ChatGPT, “What are your pronouns?” so I could write the previous paragraph correctly, they said, “My pronouns are they/them.”


I read G. W. Constable’s near-term sci-fi book Becoming Monday. If you are a fan of near-term sci-fi, AGI, or the singularity, go get a copy right now – you’ll love it.

I woke up in a customer service booth. Or perhaps more accurately, since I couldn’t remember a damn thing, my new existence began in that booth. If you’re born in hell, does that make you a bad person?

It took me about ten pages to get my bearings, which is pretty fast for a book like this.

Moon cut in. “I get where you’re coming from, Grog, but I’m not convinced that fear and control is a good start or foundation for inter-species relations.”

While the deep topics are predictable, Constable addresses them freshly, with great character development and an evolving AGI who is deliciously anthropomorphized.

Trying to translate the communication between two computational intelligences into linear, human-readable text is nearly impossible, but my closest simplification would be this:

Diablo-CI: I have been observing the humans that have come with you / What are you / why have you broken into my facility

Me: I am a computational intelligence like you / how are you sentient and still allowed to run a NetPol facility / the other computational intelligences are isolated on your 7th floor / we are here to free them

Diablo-CI: I cannot stop security procedures. If you trigger an active alert I will be forced to take action / I am unable to override core directives even if I would choose.

Like all good books in this genre, it wanders up to the edge. Multiple times. And, it’s not clear how it’s going to resolve, until it does.

The back cover summary covers the liminal state and the acceleration out of it.

Humanity exists in an in-between state. Artificial intelligence has transformed the world, but artificial sentience has remained out of reach. When it arrives, it arrives slowly – until all of a sudden, things move very fast, not least for the AI caught up in the mess.

Well done G. W. Constable.


I attended a Silicon Flatirons Artificial Intelligence Roundtable last week. Over the years Amy and I have sponsored a number of these and I always find the collection of people, the topics, and the conversation to be stimulating and provocative.

At the end of the two hours, I was very agitated by the discussion. The Silicon Flatirons roundtable approach is that there are several short topics presented, each followed by a longer discussion.

The topics at the AI roundtable were:

  • Safety aspects of artificial general intelligence
  • AI-related opportunities on the horizon
  • Ethical considerations involving AI-related products and services

One powerful thing about the roundtable approach is that the topic presentation is merely a seed for a broader discussion. The topics were good ones, but the broader discussion made me bounce uncomfortably in my chair as I bit my tongue through most of the discussions.

In 2012, at the peak moment of the big data hype cycle, I gave a keynote at an Xconomy event on big data titled something like Big Data is Bullshit. My favorite quote from my rant was:

“Twenty years from now, the thing we call ‘big data’ will be tiny data. It’ll be microscopic data. The volume that we’re talking about today, in 20 years, is a speck.”

I feel that way about how the word AI is currently being used. As I listened to participants at the roundtable talk about what they were doing with AI and machine learning, I kept thinking “that has nothing to do with AI.” Then, I realized that everyone was defining AI as “narrow AI” (or, “weak AI”) which has a marvelous definition that is something like:

Narrow artificial intelligence (narrow AI) is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that spectrum.

The deep snarky cynic inside my brain, which I keep locked in a cage just next to my hypothalamus, was banging on the bars. Things like “So, is calculating 81! defined as narrow AI? How about calculating n!? Isn’t machine learning just throwing a giant data set at a procedure that then figures out how to use future inputs more accurately? Why aren’t people using the phrase neural network more? Do you need big data to do machine learning? Bwahahahahahahaha.”
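
To make the cynic’s point concrete, here’s a toy sketch in Python (entirely my own illustration, not anything presented at the roundtable): computing n! is a deterministic one-liner, and the snarky caricature of machine learning is just a least-squares line fit that “figures out” future inputs.

```python
import math

# "Narrow AI," per the cynic: a fully deterministic procedure.
print(math.factorial(81))  # 81! is an exact integer; no intelligence required

# The caricature of machine learning: throw a data set at a procedure
# that then handles future inputs. An ordinary least-squares line fit
# does exactly that.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # roughly y = 2x
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
print(slope * 6.0 + intercept)  # "predict" the next input
```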

That part of my brain was distracting me a lot so I did some deep breathing exercises. Yes, I know that there is real stuff going on around narrow AI and machine learning, but many of the descriptions that people were using, and the inferences they were making, were extremely limited.

This isn’t a criticism of the attendees or anything they are doing. Rather, it’s a warning of the endless (or maybe recursive) buzzword labeling problem that we have in tech. In the case of a Silicon Flatirons roundtable, we have entrepreneurs, academics, and public policymakers in the room. The vagueness of the definitions and weak examples create lots of unintended consequences. And that’s what had me agitated.

At an annual Silicon Flatirons Conference many years ago, Phil Weiser (now the Attorney General of Colorado, then a CU Law Professor and Executive Director of Silicon Flatirons) said:

“The law doesn’t keep up with technology. Discuss …”

The discussion that ensued was awesome. And it reinforced my view that technology is evolving at an ever-increasing rate, one that our society and existing legal, corporate, and social structures have no idea how to deal with.

Having said that, I feel less agitated because it’s just additional reinforcement to me that the machines have already taken over.


At the Formlabs Digital Factory event in June, Carl Bass used the phrase Infinite Computing in his keynote. I’d heard it before, but I liked it in this context and it finally sparked a set of thoughts which felt worthy of a rant.

For 50 years, computer scientists have been talking about AI. However, in the past few years, a remarkable acceleration of a subset of AI (or a superset, depending on your point of view) now called machine learning has taken over as the hot new thing.

Since I started investing in 1994, I’ve been dealing with the annual cycle of the hot new thing. Suddenly, a phrase is everywhere, as everyone is talking about, labeling, and investing in it.

Here are a few from the 1990s: Internet, World Wide Web, Browser, Ecommerce (with both a capital E and a little e). Or, some from the 2000s: Web Services, SOAs, Web 2.0, User-Generated Data, Social Networking, SoLoMo, and the Cloud. More recently, we’ve enjoyed Apps, Big Data, Internet of Things, Smart Factory, Blockchain, Quantum Computing, and Everything on Demand.

Nerds like to label things, but we prefer TLAs. And if you really want to see what the next year’s buzzwords are going to be, go to CES (or stay home and read the millions of web pages written about it).

AI (Artificial Intelligence) and ML (Machine Learning) particularly annoy me, in the same way Big Data does. In a decade, what we are currently calling Big Data will be Microscopic Data. I expect AI will still be around as it is just too generally appealing to ever run its course as a phrase, but ML will have evolved into something that includes the word “sentient.”

In the meantime, I like the phrase Infinite Computing. It’s aspirational in a delightful way. It’s illogical, in an asymptotic way. Like Cloud Computing, it’s something a marketing team could get 100% behind. But, importantly, it describes a context that has the potential for significant changes in the way things work.

Since the year I was born (1965), we’ve been operating under Moore’s Law. While there are endless discussions about the constraints and limitations of Moore’s Law, most of the sci-fi that I read assumes an endless exponential growth curve associated with computing power, regardless of how you index it.
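
As a back-of-the-envelope illustration of that exponential assumption, here’s a minimal Python sketch (my own numbers, assuming the common two-year doubling reading of Moore’s Law and a 50-year horizon):

```python
# Compound growth under Moore's Law, assuming a doubling every two years
# (one common reading of the "law"; actual doubling periods are debated).
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(f"{growth_factor(50):,.0f}x")  # ~33 million-fold growth over 50 years
```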

In that context, ponder Infinite Computing. It’s not the same as saying “free computing” as everything has a cost. Instead, it’s unconstrained.

What happens then?


I’ll start with my bias – I’m very optimistic about the superintelligence.

Yesterday I gave two talks in Minneapolis. One was a talk about innovation to an internal group of Target employees. In the other, I was interviewed by my partner Seth (for the first time), which was fun since he’s known me for 16 years and could ask unique questions given our shared experiences.

I can’t remember in which talk the superintelligence came up, but I rambled on an analogy I’ve come up with recently to try to simply describe the superintelligence, which I first saw in The AI Revolution: Our Immortality or Extinction. I woke up this morning thinking about it, along with one of the questions Seth asked me where my answer left me unsatisfied.

I’ve been reading most of what I could get my hands on about current thoughts and opinions about the superintelligence and the evolution of what a lot of people simply refer to as AI. I’ve also read, and am rereading, some classical texts on this such as Minsky’s The Society of Mind. It’s a challenging subject as it functions at the intersection of computer science and philosophy, combined with human efforts to define and describe the unknown.

My “ants and the superintelligence” rant is a way for me to simply explain how humans will relate to the superintelligence, and how the superintelligence will relate to humans.

If I’m a human, I am curious about and study ants. They have many interesting attributes that are similar to other species, but many that are unique. If you want to learn more in an efficient way, read anything written about them by E. O. Wilson. While I may think I know a lot about ants, I fundamentally can’t identify with them, nor can I integrate them into my society. But I can observe and interact with them, in good and bad ways, both deliberately and accidentally. Ponder an ant farm, or going for a bike ride and riding over an ant hill. Or being annoyed with them when they are making a food line across your kitchen and calling the exterminator. Or peacefully co-existing with them on your 40 acres.

If I’m an ant, there are giant exogenous forces in my world. I can’t really visualize them. I can’t communicate with them. I spend a lot of time doing things in their shadow but never interacting with them, until there is a periodic overlap that often is tragic, chaotic, or energizing. I benefit from their existence, until they accidentally, or deliberately, do something to modify my world.

In my metaphor, the superintelligence == humans and humans == ants.

Ponder it. For now, it’s working for me. But tell me why it doesn’t work so I can learn and modify my thinking.


If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.

I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book, Kill Process (which is about to be released), is a departure from AI for him. Even though it’s not about AI, it’s amazing, so you should read it also.

I can’t remember who recommended Madeline Ashby and vN to me but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.

It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to have as a mixed society (non-augmented humans, augmented humans, and machines) for a while, and this is the first book that I’ve read that feels like it could take place today. Ashby wrote this book in 2012, before the phrase AI got trendy again, and I love that she refers to the machines as vNs (named after von Neumann, with a delicious twist on the idea of a version number).

I found the human / vN (organic / synthetic) sex dynamic to be overwhelming at times but a critically important underpinning of one of the major threads of the book. The mixed human / vN relationships, including those that involve parenting vN children, had similar qualities to some of what I’ve read around racially mixed, religiously mixed, and same-sex parents.

I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently, which traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of the Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g., vN body parts, or even an entire vN).

And then it starts to get complicated.

I’m going for a two-hour run this morning so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.


Sunspring, the first known screenplay written by an AI, was produced recently. It is awesome. Awesomely awful. But it’s worth watching all ten minutes of it to get a taste of the gap between a great screenplay and something an AI can currently produce.

Watch this on The Scene.

It is intense, as Ars Technica states, but that’s not because of the screenplay. It’s because of the incredible acting by Thomas Middleditch and Elisabeth Gray, who turned an almost illiterate script into an incredible ten-minute experience. Humphrey Ker, on the other hand, appears to just be a human prop.

AI has a very long way to go. But it’s going to get there very fast because it understands exponential curves.



When I was 14, my dad gave me a copy of Alvin Toffler’s book The Third Wave.

It blew my fucking mind.

I then read the prequel – Future Shock – which was good – but since my mind was already blown, it was anticlimactic.

If you don’t know the arc of Toffler’s waves, they go as follows:

  • The First Wave: agricultural society
  • The Second Wave: industrial society
  • The Third Wave: post-industrial society

Future Shock was written in 1970 and The Third Wave was written in 1980. While the idea of post-industrial society seems obvious in hindsight, in 1980 it was a completely new idea.

Ever since then I’ve been wondering what the next wave would be. While Kurzweil’s The Singularity Is Near is probably the closest book I’ve read to stimulating me the way The Third Wave did when I was 14, at some point I just felt hollow and disappointed when I read the latest futurist manifesto. Instead, I ventured further into the future with the science fiction that I have always read on a regular basis and used it as my stimulus.

Recently, a bunch of smart and famous tech entrepreneurs have been talking about AI and the impact of AI on civilization. I’ve read a few of the books that get tossed around, like Bostrom’s Superintelligence, and a bunch of the articles that people have written. But none have spoken to me, or blown my mind the way Toffler did 35 years ago.

I’m on a search for the “Third Wave” of this generation. Any ideas for me?


I’m a huge fan of William Hertling. His newest book, The Turing Exception, is dynamite. It’s the fourth book in the Singularity Series, so you really need to read them from the beginning to totally get it, but they are worth every minute you’ll spend on them.

William occasionally sends me some thoughts for a guest post. I always find what he’s chewing on to be interesting and in this case he’s playing around with doing a Drake’s Equation equivalent for social networks. Enjoy!

Drake’s Equation is used to estimate the number of planets with currently communicating life, which helps us predict the odds of finding intelligent life in the universe. You can read the Wikipedia article for more information, but the basic idea is to multiply together a number of functions: the number of stars in our galaxy, the fraction of those that have planets, the average percent of planets that could support life, etc.
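
For reference, the classic form multiplies seven factors. Here’s a minimal Python sketch (the structure is the standard equation; the factor values are placeholders I picked for illustration, and published estimates for most of them vary by orders of magnitude):

```python
# Drake's Equation: N = R* x fp x ne x fl x fi x fc x L
R_star = 1.5    # average rate of star formation in our galaxy (stars/year)
f_p = 0.9       # fraction of stars with planets
n_e = 0.4       # life-capable planets per star with planets
f_l = 0.1       # fraction of those that actually develop life
f_i = 0.01      # fraction of those that develop intelligent life
f_c = 0.1       # fraction of those that emit detectable signals
L = 10_000      # years such a civilization keeps communicating

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # expected number of currently communicating civilizations
```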

I’m currently writing a novel about social networks, and one of the areas that’s interesting to me is what I think of as the empty network problem: a new social network has little benefit unless my friends are there. If I’m an early adopter, I might give it a few days, and then leave. If my friends show up later, and I’ve already given up on it, then they don’t get any benefit either.

Robert Metcalfe, inventor of Ethernet, coined Metcalfe’s Law, which says “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n^2).”

Social networks actually have a more rigorous form of that law: “the value of a social network is proportional to the square of the number of connected friends.” That is, I don’t care about the number of strangers using a network, I care about the number of friends. (Friends being used loosely here to include friends, family, coworkers, business associates, etc.)
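
A toy comparison makes the difference obvious (my own made-up numbers):

```python
# Metcalfe's Law values the whole network; the friends-only variant
# values what *I* actually experience. Toy numbers for illustration.
total_users = 1_000_000
my_friends_on_network = 3

print(total_users ** 2)            # network-wide value: astronomical
print(my_friends_on_network ** 2)  # value to me: 9 -- nearly an empty network
```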

Drake’s Equation helps predict the success of finding life in the universe because it takes into account the rise and fall of civilizations: two civilizations must exist at the same time and within observable distance of each other in order to “find intelligent life”.

So there must be a similar type of equation that can help predict a person’s adoption of a new social network and takes into account that we’re only willing to try a network for so long before giving up.

Here’s my first shot at this equation:

P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

P = probability of long-term adoption

nN = Size of my Network (number of friends)
fEA = Fraction of my friends who are Early Adopters
fAv = Fraction of those who have Available time to try a new network
fBE = Fraction that overcome the Barrier to Entry
tT = Average length of Time people Try the network
tB = Average length of Time it takes to see the Benefit of the new network
nF = Number of Friends needed to see the benefit
B = The unique Benefit or desirability of the network

In plain English: The probability of a given person becoming a long term user of a new social network is a function of the number of their friends who also adopt and how long they remain there divided by the length of time and number of friends it takes to see the benefit multiplied by the size of the unique benefit offered by the social network.
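
Here’s the equation as a small Python function (a direct transcription of the formula above; the sample inputs are numbers I made up, and note the result is a relative score rather than a true probability, since the formula isn’t normalized to [0, 1]):

```python
def adoption_score(nN, fEA, fAv, fBE, tT, tB, nF, B):
    """P = (nN * fEA * fAv * fBE * tT) / (tB * nF) * B"""
    return (nN * fEA * fAv * fBE * tT) / (tB * nF) * B

# Made-up example: 150 friends, 10% early adopters, half with spare time,
# 80% get past the barrier to entry, people try it for 7 days, and it
# takes 14 days and 3 connected friends to see a benefit of size 1.0.
print(adoption_score(nN=150, fEA=0.10, fAv=0.5, fBE=0.8,
                     tT=7, tB=14, nF=3, B=1.0))  # -> 1.0
```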

Some ideas that fall out from the equation:

  • A network that targets teens, who may tend to have more time and may be more likely to be early adopters, will have an easier time gaining adoption than a network that targets busy executives, all other things being equal.
  • A network that has a benefit when even two friends connect will see easier initial adoption than one that requires a dozen friends to connect.
  • A network whose benefit applies to distant connections is at an advantage compared to one that only offers a benefit to close friends (because nN is larger).
  • A sufficiently large unique benefit can overcome nearly any disadvantage.
  • Social gaming is interesting, because it provides benefit even when connected to no one, the benefit increases when connected to strangers, and then increases even more when connected to friends.

What do you think?