For the past week, I’ve been asked at least once a day (yesterday I was asked several times, with an R0 of 3) about what I think the coronavirus’ impact will be on the global supply chain.
My perspective is that it’s too early to really know, but guidelines for how to think about it are starting to emerge, especially now that Chinese New Year plus a week has passed (and we are almost at plus two weeks). Theoretically, factories in China are opening next week, but until they open, they aren’t open …
While there is starting to be some macro analysis on the web, it’s classic generic stuff with big company examples such as Charting the Global Economic Impact of the Coronavirus, Coronavirus shakes centre of world’s tech supply chain, and How China’s novel coronavirus outbreak is disrupting the global supply chain.
I find things like the Johns Hopkins CSSE data set and coronavirus map to be much more interesting than these articles, so I sent an email out to our hardware companies last night to see what they were hearing and thinking, and to collect some qualitative data from startups.
It seems like most people are expecting factories to open on 2/10 as planned. However, the expectation is being set that production will take two weeks to ramp back up to normal. And, there is some concern that larger companies will likely exert pressure to be at the front of the line.
Another problem at this point is movement into and out of China. The Chinese border with Hong Kong is only open at a few places and many are afraid to enter China right now for fear that they won’t be able to leave.
Everyone anticipates a big logistics clog once things start shipping, which will introduce delay and cost, although the magnitude of this is unknown.
Finally, the downstream (or upstream – I never get that right) impact of long lead time items will add another wrinkle once people understand the volume and timing constraints when things settle down.
Of course, the coronavirus is not yet contained and the actual shape of the infection and death curve is still evolving, but at this moment it is clearly worse than SARS, so that doesn’t feel very good.
If you have any additional qualitative data or perspective, I’d love to hear it.
As I was reading The Atlantic article Silicon Valley Abandons the Culture That Made It the Envy of the World I kept flashing to the end of Anna Wiener’s awesome memoir Uncanny Valley. And it was no surprise to see this article in the Boulder Daily Camera titled Big tech in hot seat at congressional hearing at CU Boulder.
Readers of this blog and my book Startup Communities know that I’m a huge fan of AnnaLee Saxenian. She has a great quote in The Atlantic article.
This is a full reversal of the language that tech promoters used to sell Silicon Valley–style innovation and competitiveness for decades. Saxenian has noticed the change in how the Valley describes itself, or at least in how the dominant firms do. “Advocacy of the small, innovative firm and entrepreneurial ecosystem is giving way to more and more justifications for bigness (scale economics, competitive advantage, etc.),” Saxenian wrote to me in an email. “The big is beautiful line is coming especially from the large companies (Facebook, Google, Amazon, Apple) that are threatened by antitrust and need to justify their scale.”
Margaret O’Mara, who wrote The Code: Silicon Valley and the Remaking of America, also has a good reminder.
“The story the Valley told about itself has been very much a small-is-beautiful story since the 1970s,” O’Mara told me. “It has a politics—this Vietnam-era rejection of the military-industrial complex, rejection of the mainframe, Big Business, Big Government, big universities.” This led people to take risks and launch new projects and firms. Entrepreneurs from all over the world migrated to a place where people understood why they wanted to start companies. And the idea even embedded itself right near the heart of the Valley, at Google. The company’s slogan, “Don’t be evil”, had a particular meaning when it was adopted around the millennium. In the classic Valley mind-set, “evil is bigness of all kinds,” O’Mara said.
The techlash is in full bloom and Silicon Valley is in the center of it. Ironically, of the three public companies that have > $1 trillion market caps, one of them (Microsoft) is headquartered in Seattle, which is definitely not part of Silicon Valley. Oh, and Amazon … Nonetheless, it’s part of the pending mess that is going to hit all of society very hard in the next few years, as the collision between the various factors—and factions—around innovation is going to be profound.
I expect historians will look back on this time with curiosity. They will wonder why there is such a huge disconnect between what is said, what is wanted, and what is done. Here are a few recent artifacts to ponder.
And, in case you thought the government was the solution, here are a few more to read.
Every time someone tells me they are going to “change the world” or “put a dent in the universe”, I think to myself, “For better, or for worse?”
As you might have seen in an earlier post, Foundry Group is helping to bring the Helium Network to Boulder. Another Helium fan – James Fayal – reached out to me about his effort to do the same in his hometown of Baltimore, as well as DC and Philly.
I’m hopeful that some of the readers of this blog live in Baltimore, DC, or Philadelphia and are interested in participating in the Helium rollout. If you fit this description, fill out the Mid-Atlantic Application.
James wrote me a little more about his background and motivation for doing this, which follows.
While I’m a consumer product founder by trade, I’ve been involved in various crypto projects since 2013. I’m excited about Helium because it is one of the first projects with significant real-world use cases, and the community has grown exponentially since they started selling hotspots earlier this year.
In short, Helium is building a ‘mesh’ network for LongFi data transmission, which can be used by IoT devices to transmit and receive data over long distances. You can see more about the technology here.
We’re looking to work with 15 – 25 locations in or around Baltimore, DC, and Philadelphia to host hotspots. We’ll cover the cost of the unit and work with you to optimize the hotspot’s reach in the area. In return, we’ll provide hosts with a percentage of the network’s tokens ‘mined’ by the hotspots.
If we’re successful, we could be one of the first regions of the United States with comprehensive coverage on the network!
To apply to be a host, fill out the Mid-Atlantic Application. Supply is limited and the Helium company is close to stocking out of their current batch of hotspots, but James will do his best to work with as many hosts in the area as possible.
And, if you are curious about the Boulder rollout, I’ve got 47 unallocated Helium hotspots in my office that are going to be provisioned in the next week. We will then start deploying them around town in the second half of January. While we have more than 47 people interested, if you have an interest and haven’t signed up on the Boulder Helium Hotspot Application, go for it so we know about anyone who wants to participate.
If you are asking, “What’s Helium?” here’s a fun video to get you started along with a deeper explanation of the technology.
As an LP in USV, we are small indirect investors. But, as a way to engage with a particular blockchain-based application/technology that we think has meaningful real-world potential, we thought we’d help enable a network in Boulder and see how it works.
We are looking for about 40 locations throughout Boulder (not just downtown) to set up hotspots. All you have to do is connect the Helium hotspot to the Internet. We’ll handle the rest.
If this is interesting to you, please fill out the Boulder Helium Hotspot Application. We are only choosing 40 locations, and we are going to spread them out as best as we can, so if you aren’t chosen, and you still want to be part of this, you can always buy a Helium Hotspot directly.
My partner Lindel pointed me at the Lux Capital 2019 Annual Dinner Talk. I watched it the other day and thought it was one of the best examples of a VC think piece that I’ve seen in a long time.
Lux’s premise is that technology evolves out of the infinite arms race between deception and its detection. It touches on many contemporary ideas about truth and lies and the use of data in the pursuit of outcomes based on humans’ perceptions of truth and lies.
You don’t need to go very deep to understand how, as humans, we are regularly and continuously manipulated by the way information is presented to us. This isn’t a new phenomenon. What is new is how rapidly technology is evolving both in ways we understand as well as ways we don’t comprehend.
The optimist views these as innovations that will improve our species. The pessimist contemplates a path that will diminish us, subjugate us to machines, or possibly even eliminate us.
Are you an optimist or a pessimist?
I’ve been a Superhuman email fan for a while. I decided a week ago to go try Gmail and see if I still liked Superhuman so much better.
After about two hours, I went back to Superhuman.
Several days later, I tried Gmail again, deciding that I was just grumpy for some reason. I bounced back to Superhuman within an hour.
This time I sat and thought about why I liked Superhuman so much better. It took a little while for it to come to me, but when it did it was painfully obvious.
When I’m looking at Superhuman, I am processing one consistent font. All the time. It’s the same for every email, except for the occasional over-formatted, stylized email-marketing newsletter thingy. My focus stays on the content and the clean screen. I just read and respond.
When I’m in Gmail, there are a zillion random things everywhere. Emails are in different fonts – both types and sizes. My brain is constantly processing multiple inputs that make me tired, distract me, and slow me down.
All I really want to do is get through my email. When I just sit and process it email by email, with no context switching or distractions, it gets done quickly. Superhuman facilitates this; Gmail doesn’t.
Blogging is similar. The newest WordPress editor is delicious. I just type. It’s clean, simple, and always the same.
When I chew on it more, it’s part of why I love reading on a Kindle. The font is always the same, no matter what I’m reading. Suddenly, my brain is not processing different textures when I’m processing text.
It’s kind of clear to me when I type it out, but it wasn’t obvious until I thought about it the other day. We’ve taken the UI to a place of divergence – it’s either consistent and simple or chaotic and complex. I’m all about consistent and simple.
Amy and I had dinner recently with Chris Couch, a friend from MIT who I hadn’t seen in 25 years. Before we had dinner, Chris sent us an email with a link to his High Altitude Photography Platform along with the video from Mission 1 of the HAPP.
Chris has a day job, so this has been his hobby for the past two years. It’s pretty epic – both as a project and a hobby. And, it’s reflective of the kind of brain many of my MIT friends have.
Amy met Chris on a flight from Boston to Dallas. She was flying to Dallas to meet me and my parents for a holiday weekend and Chris was flying to Dallas to meet his parents. They were sitting next to each other and Chris started writing equations on a napkin. Amy asked him what he was calculating and he said: “the amount of fuel the plane will use on this flight.” It was friendship at first sight.
While we hadn’t seen each other in many years, we reconnected as though no time had passed. While we’ve aged, the playful and curious spirit that we all had in our 20s shone through during our long and winding conversation at dinner.
I love that Chris’ hobby (the HAPP) is a reflection of his brain. Is yours?
In Neal Stephenson’s newest book, Fall; or, Dodge in Hell: A Novel, the protagonist Richard “Dodge” Forthrast uses the phrase “The Miasma” to refer to the collection of technology that we commonly call “The Internet.”
When I first came across the phrase, I said out loud, “Brilliant.”
I was poking around on the Miasma this morning looking for a reference to this and found this Slashdot post about an interview with Stephenson from PC Magazine.
Q: How would you describe the current state of the internet? Just in a general sense of its role in our daily lives, and where that concept of the Miasma came from for you.
Neal Stephenson: I ended up having a pretty dark view of it, as you can kind of tell from the book. I saw someone recently describe social media in its current state as a doomsday machine, and I think that’s not far off. We’ve turned over our perception of what’s real to algorithmically driven systems that are designed not to have humans in the loop, because if humans are in the loop they’re not scalable and if they’re not scalable they can’t make tons and tons of money.
The result is the situation we see today where no one agrees on what factual reality is and everyone is driven in the direction of content that is “more engaging,” which almost always means that it’s more emotional, it’s less factually based, it’s less rational, and kind of destructive from a basic civics standpoint… I sort of was patting myself on the back for really being on top of things and predicting the future. And then I discovered that the future was way ahead of me. I’ve heard remarks in a similar vein from other science-fiction novelists: do we even have a role anymore?
I’ve spent the past year struggling with the Miasma. I’ve deliberately disengaged from some of it by deleting my Facebook account, limiting my Twitter usage to broadcast only, and trying to use LinkedIn productively, even though the UX seems to be set up to purposely inhibit you from using it in a way that doesn’t suck you into the LI vortex. I’ve stopped proactively going to websites, have unsubscribed from everything except a limited number of technology-oriented newsletters, and only read the articles that I click through to. I get summaries of what is going on daily through Techmeme’s newsletter, have a set of specific search filters set up in Google Alerts, and scan blogs I’ve subscribed to via Feedly. I try to never, ever, go to news.website.com (whenever I am bored and feel like doing this, I do ten situps instead).
But the Miasma is still – well – the miasma. The relevant definition, if you don’t know it, is:
an influence or atmosphere that tends to deplete or corrupt
a dangerous, foreboding, or deathlike influence or atmosphere
In his book Permanent Record, which I was reading recently, Edward Snowden reflects on the joy of interacting with the Internet of his youth, the Internet of the mid-1990s. He remembers it as an idealized thing that has now become completely and totally corrupted.
When I described this to Amy, she responded with a magnificent rant that was something like “this is a romanticized utopian ideal about a thing that was inhabited by socially inhibited, white male nerds who consider themselves too smart to be misogynistic but, well, often are.”
The Miasma is a mess. It’s always been a mess. And it will always be a mess. Figuring out how to find beauty, usefulness, and joy in the mess is the opportunity. I’ve been thinking about that more lately and have a few ideas I’m going to play around with.
I attended a Silicon Flatirons Artificial Intelligence Roundtable last week. Over the years Amy and I have sponsored a number of these and I always find the collection of people, the topics, and the conversation to be stimulating and provocative.
At the end of the two hours, I was very agitated by the discussion. The Silicon Flatirons roundtable approach is that there are several short topics presented, each followed by a longer discussion.
The topics at the AI roundtable were:
One powerful thing about the roundtable approach is that the topic presentation is merely a seed for a broader discussion. The topics were good ones, but the broader discussion made me bounce uncomfortably in my chair as I bit my tongue through most of the discussions.
In 2012, at the peak moment of the big data hype cycle, I gave a keynote at an Xconomy event on big data titled something like Big Data is Bullshit. My favorite quote from my rant was:
“Twenty years from now, the thing we call ‘big data’ will be tiny data. It’ll be microscopic data. The volume that we’re talking about today, in 20 years, is a speck.”
I feel that way about how the word AI is currently being used. As I listened to participants at the roundtable talk about what they were doing with AI and machine learning, I kept thinking “that has nothing to do with AI.” Then, I realized that everyone was defining AI as “narrow AI” (or “weak AI”), which has a marvelous definition that is something like:
Narrow artificial intelligence (narrow AI) is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that spectrum.
The deep snarky cynic inside my brain, which I keep locked in a cage just next to my hypothalamus, was banging on the bars with things like: “So, is calculating 81! defined as narrow AI? How about calculating n!? Isn’t machine learning just throwing a giant data set at a procedure that then figures out how to use future inputs more accurately? Why aren’t people using the phrase neural network more? Do you need big data to do machine learning? Bwahahahahahahaha.”
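The factorial jab is easy to make concrete: computing 81! (or n! for any n) is a fixed, deterministic procedure with no data and no learning, even though it “outperforms humans in a very narrowly defined task.” A minimal sketch in plain Python (none of this code is from the roundtable or the post; it’s just an illustration of the quip):

```python
def factorial(n: int) -> int:
    """Compute n! with a fixed procedure -- no data, no training, no learning.

    It outperforms any human at this narrowly defined task, yet nobody would
    seriously call it narrow AI, which is the point of the joke.
    """
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(5))              # 120
print(len(str(factorial(81))))   # 121 -- 81! is a 121-digit integer, still just arithmetic
```

The contrast with machine learning is that here the procedure is specified completely in advance; nothing about its behavior is inferred from examples.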
That part of my brain was distracting me a lot so I did some deep breathing exercises. Yes, I know that there is real stuff going on around narrow AI and machine learning, but many of the descriptions that people were using, and the inferences they were making, were extremely limited.
This isn’t a criticism of the attendees or anything they are doing. Rather, it’s a warning of the endless (or maybe recursive) buzzword labeling problem that we have in tech. In the case of a Silicon Flatirons roundtable, we have entrepreneurs, academics, and public policymakers in the room. The vagueness of the definitions and weak examples create lots of unintended consequences. And that’s what had me agitated.
At an annual Silicon Flatirons Conference many years ago, Phil Weiser (now the Attorney General of Colorado, then a CU Law Professor and Executive Director of Silicon Flatirons) said:
“The law doesn’t keep up with technology. Discuss …”
The discussion that ensued was awesome. And it reinforced my view that technology is evolving at an ever-increasing rate, one that our society and our existing legal, corporate, and social structures have no idea how to deal with.
Having said that, I feel less agitated because it’s just additional reinforcement to me that the machines have already taken over.