Yesterday’s post Relentlessly Turning Input Knobs To 0 generated a bunch of interesting private comments. It also generated a few public ones, including a link to the article What is the problem with social media? by Jordan Greenhall, which was extraordinary.
Jordan asserts that the problem with social media can be broken down into four foundational problems:
- Supernormal stimuli;
- Replacing strong link community relationships with weak link affinity relationships;
- Training people on complicated rather than complex environments; and
- The asymmetry of Human / AI relationships
He then has an essay on each one. The concept of supernormal stimuli is straightforward and already well understood, but Jordan explains it with a nice set of analogies. Tristan Harris and his team at the Center for Humane Technology have gone deep on this one – both problems and solutions.
I found the second essay – replacing strong link community relationships with weak link affinity relationships – to resonate with something I’ve been experiencing in real time. As my weak link affinity relationship activity diminishes (through lack of engagement on Facebook and Twitter), all the time I spent on that has shifted to strong link community relationships. Some of these are in person, some by video, some by phone, and some by email, but they are all substantive, rather than shallow (or weak). I also find that I’m having a wider and deeper range of interesting interactions, rather than a continuous reinforcement of the same self-affirming messages. And, I’m more settled, as I’m not reacting to endless shallow stimuli or interacting with lightweight intention. And, my brain feels like it has more space to roam.
The third essay – training people on complicated rather than complex environments – totally nailed it for me. Ian Hathaway, my co-author on Startup Communities 2, has been working deeply on how startup communities are complex (rather than complicated) systems. This is a central theme of our upcoming book, and the contrast between a complicated system (with a finite, bounded, unchanging set of possible dynamic states) and a complex system (with an infinite, unbounded, evolving set of possible dynamic states) is a really important one. I loved Greenhall’s conclusion:
“In the case of complexity, the optimal choice goes in a very different direction: to become responsive. Because complex systems change, and by definition change unexpectedly, the only “best” approach is to seek to maximize your agentic capacity in general. In complication, one specializes. In complexity, one becomes more generally capable.”
He then goes on to define social media as training humans to navigate a complicated system, taking time away from us “training our sense making systems to explore an open complex space.” His examples of how this works in the context of Facebook are excellent.
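A toy sketch of the distinction, in code (my own illustration, not anything from Greenhall’s essay): in a complicated system you can enumerate every possible state up front, while a complex system keeps producing states that weren’t on any prior list, no matter how long you watch it.

```python
import random

random.seed(0)  # deterministic run for the illustration


class TrafficLight:
    """A "complicated" system: its state set is finite, bounded, and fixed."""

    STATES = ("green", "yellow", "red")  # every possible state, known in advance

    def __init__(self):
        self.index = 0

    def step(self):
        self.index = (self.index + 1) % len(self.STATES)
        return self.STATES[self.index]


class EvolvingCommunity:
    """A "complex" system (hypothetical example): its state set grows as it runs."""

    def __init__(self):
        self.states = {"founded"}  # unbounded: new states keep appearing
        self.current = "founded"

    def step(self):
        if random.random() < 0.5:
            # A genuinely new state appears that no prior enumeration contained.
            new_state = f"state_{len(self.states)}"
            self.states.add(new_state)
            self.current = new_state
        else:
            self.current = random.choice(sorted(self.states))
        return self.current


light = TrafficLight()
seen_light = {light.step() for _ in range(100)}

community = EvolvingCommunity()
for _ in range(100):
    community.step()

print(len(seen_light))        # stays at 3 no matter how many steps you take
print(len(community.states))  # keeps growing the longer the system runs
```

Specializing works for the traffic light because its state space never changes; for the evolving system, the only robust strategy is the general responsiveness Greenhall describes.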
While the asymmetry of Human / AI relationships is nothing new, the Ke Jie / AlphaGo / AlphaGo Zero story is a reminder of what we are contending with. I loved:
“The Facebook AI is Alpha Go. The equivalent of Alpha Go Zero is a few minutes in the future. We need to get our heads around the fact that this kind of relationship, a relationship between humans and AI, is simply novel in our experience and that we cannot rely on any of our instincts, habits, traditions or laws to effectively navigate this new kind of relationship. At a minimum, we need to find a way to be absolutely dead certain that in every interaction, these gods of social media have our individual best interests in mind.”
I didn’t expect this treat to come out of my blog post yesterday, but it’s part of why I blog. And I doubt I would have found it scanning my social media feeds.