As I caught up this morning on the posts in my new Startup Community community (over 1,000 people have already joined – that was a pleasant surprise – click here to join us) I noticed one from a founder in Adelaide. It was in response to a prompt from Tom Higley to kick off the Complex Systems topic discussion.
The response (one of many) was:
The video is an excellent short description of Complexity Science with an example of adopting a complexity science mindset to the problem of Urban Greening.
It immediately reminded me of Kimbal Musk and Hugo Matheson’s The Kitchen Community Learning Garden initiative (now Big Green), which transformed a lot of schools in Boulder before expanding around the US.
I’m starting to feel like my answer to any question I get should be “It’s complex.”
A Simple, a Complicated, and a Complex system walk into a bar.
Simple says to the bartender, “Can I have a drink?” The bartender gives Simple a glass of water.
Complicated says to the bartender, “Can I have a Rum Martinez?” The bartender follows a precise, multi-step recipe and hands Complicated the drink.
Complex says to the bartender, “Can I have a Startup Community?” The bartender escorts Complex to the spot behind the bar where the bartender was previously standing and says, “You now have all the tools to make your own drink.”
Ian and I are in Knoxville grinding through getting our draft of The Startup Community Way (now at 65,000 words) into shape. A core part of the construct of the book is the notion of a complex adaptive system, which we are using as the framework for explaining the behavior of a startup community.
To understand how a startup community evolves, you have to understand how complex adaptive systems work. SCW (our TLA for the book The Startup Community Way, as compared to SC1, which is our TLA for the book Startup Communities) has two chapters on this (currently Chapter 5: The Science of Startup Communities and Chapter 6: Practical Implications of Complex Adaptive Systems).
But even before you get to this point, it’s important to understand the difference between Simple, Complicated, and Complex systems. As a starting point, I thought I’d try to describe them in simple language, rather than dig into the extended theory around them.
A Simple system is one that has a single path to a single answer. If you want to get to the solution, there is one, and only one, way to do it.
A Complicated system is one that has multiple paths to a single answer. To get to the answer, you have multiple different choices you can make. However, there is only one correct solution.
A Complex system is one that has multiple paths to multiple answers.
When you toss in the word “adaptive”, you end up with a system that changes based on the choices that you make, and as a result of these choices, the answers change.
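To make the three definitions concrete, here’s a toy sketch (my own illustration, not anything from the book). The simple and complicated functions each land on one answer; the complex adaptive one, loosely modeled on the El Farol bar problem, feeds each round’s choices back into the next round, so attendance never settles on a single answer.

```python
import random

random.seed(42)  # reproducible toy run

def simple(x):
    # Simple: a single path to a single answer -- a fixed recipe.
    return 2 * x

def complicated(items):
    # Complicated: many possible paths (any sorting algorithm will do),
    # but only one correct answer -- the sorted list.
    return sorted(items)

def complex_adaptive(rounds=50, agents=100, capacity=60):
    # Complex adaptive (loosely based on the El Farol bar problem):
    # each round, agents decide whether to show up based on how crowded
    # it was last round. Their choices change the system, and the
    # changed system changes their next choices, so the "answer"
    # (attendance) keeps moving instead of converging.
    attendance = agents  # everyone shows up the first time
    history = []
    for _ in range(rounds):
        crowded = attendance > capacity
        show_up_prob = 0.4 if crowded else 0.8  # agents adapt to crowding
        attendance = sum(1 for _ in range(agents)
                         if random.random() < show_up_prob)
        history.append(attendance)
    return history
```

Run it a few times without the seed and the attendance trace oscillates rather than settling – a small taste of why there’s no single “correct” end state to aim for.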
Startup communities are complex adaptive systems. Ian and I have been wrestling that notion to the ground for a while (I credit him with coming up with the idea) and we are getting closer, even though the answer keeps changing as we learn more about it (see what I did there?).
Complexify is such a delicious, underused word. I’ve been using it a lot lately, hopefully with great effect on people who are on the receiving end.
CEOs and founders struggle with this all the time (as do I). They are executing on a strategy and a plan. A new idea or opportunity comes up. It’s interesting and/or exciting. Energy gets spent against it. Momentum appears. While some people on the team raise issues, suddenly the idea/opportunity starts taking on a life of its own. Things get more complex.
Eventually, there’s a reset. The core of what is going on is good – there’s just a bunch of complicated crap happening that is distracting everyone and undermining the goodness in the business. So, the CEO and the leadership team go on a mission to simplify things. This takes a while, usually involves killing some projects, and often results in some people leaving the company. These aren’t big restructuring exercises but rather focused simplification exercises. The end result is often a much stronger business, with more focus, faster growth, and better economics, especially EBITDA.
This happens regularly in the best companies that are scaling. In my view, it’s a key part of the job of a CEO who is working “on the company” a majority of her time, rather than simply working “in the company.” It’s particularly powerful when a company starts to see its growth rate decline (it’s still growing, but at a slower pace than before) or a company is spending too much money relative to its growth rate.
Six months (or twelve months) later the simplification effort is complete. The company is performing much better. EBITDA has dramatically improved (or the negative EBITDA has gotten a lot smaller.) Growth is happening in an economically justified way. The product is improving faster. Customers are happier. Everyone around the team is enthusiastic.
And then a new idea or opportunity appears. Energy starts being spent against it. Momentum appears. You get where this is going.
I call this complexifying, a word I rarely see in the entrepreneurship literature. Maybe it’ll start creeping in now. All I know is that I’m using it a lot these days.
Yesterday’s post Relentlessly Turning Input Knobs To 0 generated a bunch of interesting private comments. It also generated a few public ones, including a link to the article What is the problem with social media? by Jordan Greenhall, which was extraordinary.
Jordan asserts that the problem with social media can be broken down into four foundational problems:
- Supernormal stimuli;
- Replacing strong link community relationships with weak link affinity relationships;
- Training people on complicated rather than complex environments; and
- The asymmetry of Human / AI relationships.
He then has an essay on each one. The concept of supernormal stimuli is straightforward and well understood already, yet Jordan has a nice set of analogies to explain it. Tristan Harris and his team at the Center for Humane Technology have gone deep on this one – both problems and solutions.
I found the second essay – replacing strong link community relationships with weak link affinity relationships – to resonate with something I’ve been experiencing in real time. As my weak link affinity relationship activity diminishes (through lack of engagement on Facebook and Twitter), all the time I spent on that has shifted to strong link community relationships. Some of these are in person, some by video, some by phone, and some by email, but they are all substantive, rather than shallow (or weak.) I also find that I’m having a wider and deeper range of interesting interactions, rather than a continuous reinforcement of the same self-affirming messages. And, I’m more settled, as I’m not reacting to endless shallow stimuli or interacting with lightweight intention. And, my brain feels like it has more space to roam.
The third essay – training people on complicated rather than complex environments – totally nailed it for me. Ian Hathaway, my co-author on Startup Communities 2, has been working deeply on how startup communities are complex (rather than complicated) systems. This is a central theme of our upcoming book and the contrast between a complicated system (having a finite and bounded (unchanging) set of possible dynamic states) and a complex system (having an infinite and unbounded (growing, evolving) set of possible dynamic states) is a really important one. I loved Greenhall’s conclusion:
“In the case of complexity, the optimal choice goes in a very different direction: to become responsive. Because complex systems change, and by definition change unexpectedly, the only “best” approach is to seek to maximize your agentic capacity in general. In complication, one specializes. In complexity, one becomes more generally capable.”
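Greenhall’s finite-versus-infinite distinction can be sketched in a few lines of toy code (mine, not his): a complicated system’s states can all be enumerated up front, while a complex system’s interactions keep minting states that didn’t exist before.

```python
from itertools import product

def complicated_states(n_switches):
    # A complicated system: a finite, bounded set of possible states --
    # here, every on/off combination of n switches. You can map the
    # whole space in advance, which is why specialization pays off.
    return set(product([0, 1], repeat=n_switches))

def complex_states(steps):
    # A toy complex system: each step, interactions between existing
    # states can produce a state nobody has seen before, so the state
    # set grows and the map is never finished.
    states = {0, 1}
    for _ in range(steps):
        states |= {a + b for a in states for b in states}
    return states
```

Specializing works for complicated_states because the space is fixed; for complex_states, the only durable strategy is, as Greenhall puts it, becoming more generally capable.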
He then goes on to define social media as training humans to navigate a complicated system, taking time away from us “training our sense making systems to explore an open complex space.” His examples of how this works in the context of Facebook are excellent.
While the asymmetry of Human / AI relationships is nothing new, the Ke Jie / AlphaGo / AlphaGo Zero story is a reminder of what we are contending with. I loved:
“The Facebook AI is Alpha Go. The equivalent of Alpha Go Zero is a few minutes in the future. We need to get our heads around the fact that this kind of relationship, a relationship between humans and AI, is simply novel in our experience and that we cannot rely on any of our instincts, habits, traditions or laws to effectively navigate this new kind of relationship. At a minimum, we need to find a way to be absolutely dead certain that in every interaction, these gods of social media have our individual best interests in mind.”
I didn’t expect this treat to come out of my blog post yesterday, but it’s part of why I blog. And I doubt I would have found it scanning my social media feeds.