<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Reid Hoffman on Feld Thoughts</title><link>https://feld.com/tags/reid-hoffman/</link><description>Recent content in Reid Hoffman on Feld Thoughts</description><image><title>Feld Thoughts</title><url>https://feld.com/og-default.png</url><link>https://feld.com/og-default.png</link></image><generator>Hugo -- 0.155.3</generator><language>en-us</language><lastBuildDate>Fri, 12 Sep 2025 12:48:46 +0000</lastBuildDate><atom:link href="https://feld.com/tags/reid-hoffman/index.xml" rel="self" type="application/rss+xml"/><item><title>Reid Hoffman's Superagency</title><link>https://feld.com/archives/2025/09/reid-hoffmans-superagency/</link><pubDate>Fri, 12 Sep 2025 12:48:46 +0000</pubDate><guid>https://feld.com/archives/2025/09/reid-hoffmans-superagency/</guid><description>Reid Hoffman’s new book Superagency: What Could Possibly Go Right with Our AI Future is spectacular and a must-read for every non-technologist about how to think about this “AI thing.”</description><content:encoded><![CDATA[<div style="text-align:center;margin-bottom:24px;"><a href="https://feld.com" style="display:inline-block;"><img src="https://feld.com/images/email-header.png" alt="Feld Thoughts" width="600" style="max-width:100%;display:block;border:0;" /></a></div><p><img alt="A person dressed as a Jedi carries a small green character resembling Yoda on their back, set against a forest backdrop with large trees and vines." loading="lazy" src="/archives/2025/09/reid-hoffmans-superagency/Brad_Feld_Yoda-3.jpeg"></p>
<p>Reid Hoffman’s new book <a href="https://www.superagency.ai/" target="_blank" rel="noopener noreferrer"><em>Superagency: What Could Possibly Go Right with Our AI Future</em></a>
 is spectacular and a must-read for every non-technologist about how to think about this “AI thing.” If you want the short version, the recent <a href="https://every.to/podcast/how-to-prepare-for-agi-according-to-reid-hoffman-96911938-43f0-4f4b-a4fe-fa89f1c51918" target="_blank" rel="noopener noreferrer">AI &amp; I podcast with Reid</a>
 is an excellent way to get a feel for it.</p>
<p>Reid describes his approach as “smart risk taking” rather than blind optimism. “Everyone, generally speaking, focuses way too much on what could go wrong, and insufficiently on what could go right,” he told TechCrunch recently. This resonates with me. I’m tired of the endless AI apocalypse takes.</p>
<p>The book’s central idea is “superagency” – what happens when a technology gives individuals new superpowers and millions of people gain those powers at the same time. Reid uses the car analogy: automobiles were once considered so scary that they required a person walking in front waving an orange flag. Now we can’t imagine life without them.</p>
<p>What I love about the book is how practical it is. Reid and his co-author Greg Beato didn’t use AI to write it, but they used AI to vet it – checking facts, doing research, getting different perspectives. That’s exactly how I think about AI tools. They’re not going to replace my thinking, but they can definitely amplify it.</p>
<p>The timing feels perfect given what’s happening here in Colorado with our AI regulation mess. While Reid is writing optimistically about AI’s potential, Colorado has been having a complete meltdown trying to regulate it. Our state legislature passed SB 24-205 in May 2024, making us the first state to broadly restrict private companies using AI. Governor Polis signed it “with reservations” and within a month <a href="https://coloradosun.com/2024/06/14/colorado-ai-bill-revisions/" target="_blank" rel="noopener noreferrer">Polis, our attorney general Phil Weiser, and the bill sponsor Robert Rodriguez issued an open letter</a>
 stating that “Starting today, in the lead up to the 2025 legislative session and well before the February 2026 deadline for implementation of the law, at the governor and legislative leadership’s direction, state and legislative leaders will engage in a process to revise the new law, and minimize unintended consequences associated with its implementation.”</p>
<p>This is exactly the kind of regulatory approach Reid warns against. In his recent podcast, he explained his philosophy: “I tend to be more regulatory cautious than anti-regulation.” The key difference? Start with measurement rather than prohibition. “When you start having the impulse that maybe there should be regulation, you should start with, well, how do we measure the questions that we’re worried about as harms?”</p>
<p>Colorado did the opposite and went straight to broad restrictions without first understanding what we were actually trying to prevent or how to measure it. The timeline since then has been a comedy of errors – multiple failed attempts to amend the law during the regular session, and most recently, an August special session that ended with lawmakers just pushing the start date from February 2026 to June 2026. That’s it. After over a year of fighting, we got a four-month delay.</p>
<p>Reid believes in “iterative deployment” – getting AI tools into people’s hands and then responding to actual feedback and real problems, not hypothetical ones. Instead, Colorado jumped straight to prescriptive rules based on fears rather than evidence. Reid’s approach would have been: Deploy AI systems, measure actual discrimination outcomes, then iterate on solutions. Our approach was: Assume the worst, regulate preemptively, and figure out implementation later.</p>
<p>The Colorado situation perfectly illustrates Reid’s point about fear-based thinking around AI. <em>Superagency</em> offers a much better framework – one that acknowledges challenges while focusing on AI’s potential to increase individual agency and create better outcomes for society.</p>
<p>Read the book. We need more thoughtful optimism and less regulatory panic. Especially here in Colorado, where we’re supposed to be leaders in technology, not cautionary tales about how fear can paralyze good policy-making.</p>
]]></content:encoded></item><item><title>Reid Hoffman on Bitcoin</title><link>https://feld.com/archives/2019/09/reid-hoffman-on-bitcoin/</link><pubDate>Thu, 05 Sep 2019 11:07:06 +0000</pubDate><guid>https://feld.com/archives/2019/09/reid-hoffman-on-bitcoin/</guid><description>I got the following email from Reid Hoffman this morning. Inspired by Lin-Manuel Miranda’s Hamilton, I produced a battle rap music video about centralized and decentralized currencies, pitting A</description><content:encoded><![CDATA[<div style="text-align:center;margin-bottom:24px;"><a href="https://feld.com" style="display:inline-block;"><img src="https://feld.com/images/email-header.png" alt="Feld Thoughts" width="600" style="max-width:100%;display:block;border:0;" /></a></div><p>I got the following email from Reid Hoffman this morning.</p>
<blockquote>
<p><em>Inspired by Lin-Manuel Miranda’s Hamilton, I produced a battle rap music video about centralized and decentralized currencies, pitting Alexander Hamilton against Satoshi Nakamoto. I hope the video gets more people talking about crypto and its evolving role in global commerce.</em> </p>
</blockquote>
<p>The timing seemed oddly coincidental with Fred Wilson’s post from yesterday, <em><a href="https://avc.com/2019/09/some-thoughts-on-crypto/" target="_blank" rel="noopener noreferrer">Some Thoughts on Crypto</a></em>.</p>
<p>I’m waiting patiently for someone to start talking about Crypto AI.</p>
]]></content:encoded></item></channel></rss>