Complexify is such a delicious, underused word. I’ve been using it a lot lately, hopefully to great effect on the people on the receiving end.
CEOs and founders struggle with this all the time (as do I). They are executing on a strategy and a plan. A new idea or opportunity comes up. It’s interesting and/or exciting. Energy gets spent against it. Momentum appears. While some people on the team raise issues, suddenly the idea/opportunity starts taking on a life of its own. Things get more complex.
Eventually, there’s a reset. The core of what is going on is good – there’s just a bunch of complicated crap happening that is distracting everyone and undermining the goodness in the business. So, the CEO and the leadership team go on a mission to simplify things. This takes a while, usually involves killing some projects, and often results in some people leaving the company. These aren’t big restructuring exercises but rather focused simplification exercises. The end result is often a much stronger business, with more focus, faster growth, and better economics, especially EBITDA.
This happens regularly in the best companies that are scaling. In my view, it’s a key part of the job of a CEO who is working “on the company” a majority of her time, rather than simply working “in the company.” It’s particularly powerful when a company starts to see its growth rate decline (it’s still growing, but at a slower pace than before) or a company is spending too much money relative to its growth rate.
Six months (or twelve months) later the simplification effort is complete. The company is performing much better. EBITDA has dramatically improved (or the negative EBITDA has gotten a lot smaller). Growth is happening in an economically justified way. The product is improving faster. Customers are happier. Everyone around the team is enthusiastic.
And then a new idea or opportunity appears. Energy starts being spent against it. Momentum appears. You get where this is going.
I call this complexifying, a word I rarely see in the entrepreneurship literature. Maybe it’ll start creeping in now. All I know is that I’m using it a lot these days.
Yesterday’s post Relentlessly Turning Input Knobs To 0 generated a bunch of interesting private comments. It also generated a few public ones, including a link to the article What is the problem with social media? by Jordan Greenhall, which was extraordinary.
Jordan asserts that the problem with social media can be broken down into four foundation problems.
- Supernormal stimuli
- Replacing strong link community relationships with weak link affinity relationships
- Training people on complicated rather than complex environments
- The asymmetry of Human / AI relationships
He then has an essay on each one. The concept of supernormal stimuli is straightforward and well understood already, yet Jordan has a nice set of analogies to explain it. Tristan Harris and his team at the Center for Humane Technology have gone deep on this one – both problems and solutions.
I found the second essay – replacing strong link community relationships with weak link affinity relationships – to resonate with something I’ve been experiencing in real time. As my weak link affinity relationship activity diminishes (through lack of engagement on Facebook and Twitter), all the time I spent on that has shifted to strong link community relationships. Some of these are in person, some by video, some by phone, and some by email, but they are all substantive, rather than shallow (or weak). I also find that I’m having a wider and deeper range of interesting interactions, rather than a continuous reinforcement of the same self-affirming messages. And, I’m more settled, as I’m not reacting to endless shallow stimuli or interacting with lightweight intention. And, my brain feels like it has more space to roam.
The third essay – training people on complicated rather than complex environments – totally nailed it for me. Ian Hathaway, my co-author on Startup Communities 2, has been working deeply on how startup communities are complex (rather than complicated) systems. This is a central theme of our upcoming book. The contrast between a complicated system, which has a finite and bounded (unchanging) set of possible dynamic states, and a complex system, which has an infinite and unbounded (growing, evolving) set of possible dynamic states, is a really important one. I loved Greenhall’s conclusion:
“In the case of complexity, the optimal choice goes in a very different direction: to become responsive. Because complex systems change, and by definition change unexpectedly, the only “best” approach is to seek to maximize your agentic capacity in general. In complication, one specializes. In complexity, one becomes more generally capable.”
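The complicated-versus-complex distinction can be made concrete with a toy sketch. This is my own hypothetical illustration, not something from the book or from Greenhall’s essay: the traffic-light machine and the mutating agents are invented examples of the two kinds of system.

```python
import random

# Complicated: a finite state machine. Every state and transition can be
# enumerated up front, and the set of possible dynamic states never changes.
TRAFFIC_LIGHT = {
    "green": "yellow",
    "yellow": "red",
    "red": "green",
}

def step_complicated(state):
    """Advance the bounded system one tick; always lands in a known state."""
    return TRAFFIC_LIGHT[state]

# Complex: agents that rewrite their behavior as they interact. The set of
# reachable system states grows with the history of the run, so it cannot
# be enumerated in advance.
def step_complex(agents):
    """Each agent copies a neighbor's rule and mutates it slightly,
    so new behaviors (states) keep appearing over time."""
    new_agents = []
    for i, _rule in enumerate(agents):
        neighbor = agents[(i + 1) % len(agents)]
        new_agents.append(tuple(r + random.choice([-1, 0, 1]) for r in neighbor))
    return new_agents
```

The point of the sketch matches Greenhall’s prescription: for the traffic light you can specialize (memorize the table and you are done), while for the mutating agents no finite table exists, so the only durable strategy is general responsiveness.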
He then goes on to define social media as training humans to navigate a complicated system, taking time away from us “training our sense making systems to explore an open complex space.” His examples of how this works in the context of Facebook are excellent.
While the asymmetry of Human / AI relationships is nothing new, the Ke Jie / AlphaGo / AlphaGo Zero story is a reminder of what we are contending with. I loved:
“The Facebook AI is Alpha Go. The equivalent of Alpha Go Zero is a few minutes in the future. We need to get our heads around the fact that this kind of relationship, a relationship between humans and AI, is simply novel in our experience and that we cannot rely on any of our instincts, habits, traditions or laws to effectively navigate this new kind of relationship. At a minimum, we need to find a way to be absolutely dead certain that in every interaction, these gods of social media have our individual best interests in mind.”
I didn’t expect this treat to come out of my blog post yesterday, but it’s part of why I blog. And I doubt I would have found it scanning my social media feeds.