According to Daniel Schmachtenberger, we are at the end of one kind of civilization and must transition to another. If we don't transition—if we stick with our current civilizational operating system—then we won't survive an interlocking set of global crises. In Daniel's terminology, our current civilization ("game A") will "self-terminate". The new civilizational setup ("game B") is yet to be invented. He suggests we invent it by starting from a series of requirements (he calls them "generator functions") which he has derived from the threats to our survival.
I pretty much agree with this assessment! I love Daniel's emphasis on redesigning society, rather than on power struggles or mindset shifts. And we both put special emphasis on redesigning institutions like voting and markets (“collective intelligence” is the lingo for this in Daniel’s scene).
But I would amend Daniel's account in one place: Game B is already here, operating, on Earth. His requirements are already met. We needn't reinvent civilization from whole cloth—instead, we can study what is already happening.
- The Left and Right Arm of Civilization
- Strong Right Arm
- Weak Left Arm
- Game B
- 1. Instrumentality
- 2. Types of Knowledge
- Example 1 - Personal Lives
- Example 2 - Court Systems
- 3. Antirivalry
- 4. Wide-Scoped Tech
- 5. Complex Systems
- What We Need Now
- Where We're Headed
The Left and Right Arm of Civilization
To tell this story, I have to start from a very basic idea about human beings—about how we make plans and search for fulfilment. I will say that we have two distinct kinds of problem-solving abilities.
- We solve achievement-problems. There are things that we want to accomplish and "check off". With these, we solve problems about how to achieve them efficiently. We want them to be over as quickly as possible. I want to pay my taxes, and I want to do it as quickly as possible. I may need to fly to another country, which I want to do in the least time possible, etc. We can call these goals, although there are lots of other names you could use, like obligations or outcomes or whatever.
- We also solve practice-problems. There are things that we want to make into ongoing practices and parts of our lives. For instance, I want to practice playing music. I want to practice loving people in a way that really embraces our development together, and how we change over time. Here, I am not trying to achieve something and check it off. Rather, I am solving problems about making my life into a practice space for what's important to me in an ongoing way. Neither of these—the practicing of music or the practicing of loving people—is something I want to do efficiently or in the least time possible. With the things we practice, we are generally not about efficiency. We care about "the process".
Each kind of problem-solving takes a kind of intelligence. We are highly intelligent arrangers of life, both in finding venues for practice and in finding tools for accomplishing our goals. So, when I need to fly to a different country or file my taxes, I can say "I can do that with this tool" or "this person could help me". That's an example of me having agency and intelligence with achievement-problems. And when I'm looking for people to love, I can figure out where in my life I can do that and where I can't, and how to bring new people into my life that I can do that with. That's an example of me having agency and intelligence with practice-problems.
These two skills are like our left and right arms. And what I want to say here is that—as a civilization—one arm has been developed much more than the other.
Strong Right Arm
In particular, we have very sophisticated mechanisms which amplify our powers regarding achievement.
- We have vast matchmaking systems for solving them: we find strangers to solve problems for us and with us, using everything from LinkedIn to classified listings to a variety of professions and professional trainings. Strangers can even form organizations and companies to help people with specific kinds of achievement-problems.
- We also have scalable structures for collaboration, and incentives around them: we set up contracts, and we have all sorts of infrastructure—courts of law, small claims courts, billing, etc—to make sure people deliver on their contracts. We have offices and specialized workplaces of all sorts, project management software, and various pay-for-work schemes.
- Finally, we have knowledge related to achievement-problems: this includes much of science and engineering, vocational schools, textbooks, how-to guides, repair manuals, etc.
Developing in parallel with this social capacity for matchmaking, collaboration, incentives, and knowledge around achievement, we have also gotten better at expressing our achievement-problems and at refining them:
- We have clarity and specificity in naming them: we can list our goals, obligations, desires, impulses, etc. We can type them into Google, select them using Amazon checkboxes, assign them to others in task lists, and so on.
- We've refined and developed our achievement-problems themselves. We have moonshot goals, like going to the moon; super-personal ones, like buying a rare coffeemaker or becoming an engineer; and super-technical ones, like reducing the coefficient of friction on an airplane wing by 0.05%.
All of this structure amplifies our agency with achievement-problems. This is our overdeveloped right arm.
Weak Left Arm
Comparatively, our left arm is weak. To see this, look at what we might want to practice, like the earlier examples: loving people in a way that really embraces our development together. This is a pretty common practice-problem, but consider:
- Matchmaking. How would you find others to help you practice this? Is there a way to filter people online by it? Can you search for it on Google? Mostly, we can only think of people we already know who would be good to practice this with.
- Collaboration and incentives. Sure, you could maybe find a relationship coach who specializes in this, and maybe a meetup group on the topic. But if you compare this to a similarly common achievement-problem (say, repairing a car's windshield), you must admit that the collaboration and incentive mechanisms are sorely lacking.
- Knowledge. There are likely self-help books on this topic, and also literature. But how do you search for them? Is there a Wikipedia page? Are there collected stories of people trying to live this way, and of when it has worked out and when it hasn't?
Also, our practice-problems themselves are, in general, less refined, less developed, and less clearly specified. We cannot list them offhand; we cannot type them into Google or another search box, assign them to people in task lists, or shop for them using checkboxes. We don't usually have moonshot practice-problems, and we are less likely to have super-personal or super-technical ones.
So this is our underdeveloped left arm.
Game B
Here's my big claim: Game B is already here, in the part of our society concerned with wisdom and practice-problems. Large-scale practice-problem solving already meets Daniel's requirements—it just needs to be scaled up.
To support this claim, I'll have to show why large-scale practice-problem solving matches up with Daniel's generator functions. I'll have to convince you of this diagram:
| Game A | Game B |
| --- | --- |
| large-scale achievement-problems | large-scale practice-problems |
| rivalrous | antirivalrous |
| narrow-scoped problems | broad-scoped problems |
| fragile, complicated systems | antifragile, complex systems |
To do this, I'll introduce two intermediate concepts, and then waltz through the generator functions themselves.
So, the rest of this essay will be in five short sections:
- Instrumentality
- Types of Knowledge
- Antirivalry
- Wide-Scoped Tech
- Complex Systems
1. Instrumentality
You can look at anything—say, a car—from the perspective of practice-problems or achievement-problems. From the practice point of view, a car is an environment, or part of one. Is a car an environment to practice good sex in? To play word games with your friends? Is a place with lots of cars around good for meditation? Etc.
When you look at the same car from an achievement point of view, you consider the car as a tool. Is it going to get me laid? Can I get the hell out of Dodge City, Kansas with it?
Note the following:
- When a car is assessed in terms of achievement, all that matters is whether it will get the job done. All the other byproducts are overlooked. This is less true when the car is assessed as an environment.
- If you and I both have goals for which we require the car, we are in conflict. Is the car yours or mine? When we both have things we want to practice in the car, there is less of a conflict. We may even become practice partners!
- This lens of instrumentality can affect us at the smallest scales: we can try to use every moment. We can have great anxiety about whether to use ourselves for one goal or for another. This is a kind of internal rivalry.
- It is not only humans who adopt this lens of instrumentality. It is also part of the nature of employment, contracts, market systems, and other systems we've built for solving achievement-problems. A contract usually specifies an exchange wherein both sides are permitted to use each other as tools for particular projects and within particular bounds, or wherein the right to such use is transferred.
I believe this instrumental view of objects (and people) is at the core of the problem Daniel calls rivalry. I will return to this below, when I discuss antirivalry and practice.
2. Types of Knowledge
Here's something I wrote in 2017:
The 19ᵗʰ and 20ᵗʰ Centuries saw the rise of Science. We built engines to collect, distribute, and certify scientific knowledge — e.g., textbooks, laboratories, and universities. We have also developed methods to verify this knowledge: scholarly debates, laboratory replications, the proofs of mathematics, and so on. But these developments ignored a kind of knowledge that’s more important to human beings: knowledge of how to live well. There's a ton of demand for this kind of knowledge, but no good methods to check or organize it (no wikipedias, scientific archives, citation indexes, etc). Predictably, the demand has been filled with ubiquitous BS. Nonsense authorities—like Gwyneth Paltrow and Deepak Chopra or Sheryl Sandberg—tell us how our lives, relationships, and careers should go. The elderly couple down the street—who have probably learned more about the subject—are ignored. What does organized and vetted wisdom look like? How can the people with hard-earned wisdom—rather than a book to sell—be recognized? What would the engines of wisdom look like?
This passage sketches two types of knowledge. There is technical knowledge, which is knowledge about achievement-problems. And there is wisdom, which is knowledge about practice-problems. These two types of knowledge evolve differently, via different social processes, and in response to different events.
Technical knowledge is knowledge in the shape of achievement-problems, which break down into steps, subgoals, and specializations. If I have a goal like "visit my grandmother", it might include a step like "fly to Delaware", and executing that step well might rest on a specialization of labor spanning me, a pilot, and an aeronautics engineer. Ultimately, very specialized goals—like reducing the coefficient of friction on an airplane wing by 0.05%—may be involved, and the specializations and subfields follow the way goals break down into steps.
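To make that shape concrete, here is a minimal sketch in Python—my own illustration, with hypothetical names like `Goal` and `specialist`, not anything from Daniel's framework—of how an achievement-problem decomposes into subgoals with specialists attached:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Goal:
    description: str
    specialist: Optional[str] = None          # who typically owns this subgoal
    subgoals: list["Goal"] = field(default_factory=list)

# "Visit my grandmother" breaks into steps, and each step can recruit
# ever more specialized labor and ever more technical subgoals.
visit = Goal("visit my grandmother", subgoals=[
    Goal("fly to Delaware", specialist="pilot", subgoals=[
        Goal("reduce the wing's coefficient of friction by 0.05%",
             specialist="aeronautics engineer"),
    ]),
])

def show(goal: Goal, depth: int = 0) -> None:
    owner = f"  [{goal.specialist}]" if goal.specialist else ""
    print("  " * depth + goal.description + owner)
    for sub in goal.subgoals:
        show(sub, depth + 1)

show(visit)
```

Each node in a tree like this invites its own profession—that's where the subfields of technical knowledge come from. Wisdom, as the next paragraphs argue, doesn't decompose this way.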
Knowledge about practice-problems, or wisdom, evolves in a different way. This kind of knowledge concerns the testing and acceptance of new values, approaches, or guidelines, rather than of new models, theories, techniques, or facts.
I will discuss two situations where new values or guidelines arise: first, in our personal lives; second, in court systems. Both have a similar structure, where an existing value or approach is called into question based on an example where it seems to fail, and from the failure a new, modified value is derived.
Example 1 - Personal Lives
In our personal lives, we try to live by one set of values and then we enter a situation where they don't guide us well, and we have emotions of conflict. As I wrote in Emotions, Values, and Wisdom:
A negative feeling signals a conflict between our values that we have to think about:
- Perhaps we were pursuing value B but we forgot about value A. For instance, I was trying to be effective but I forgot that it was also important to me to be kind. This might result in embarrassment.
What comes out of these emotions is often new values—values which reconcile the conflict and can guide us through such situations. In this case, maybe instead of aiming at being effective, I try to act so as to build the capacity of the team. This involves being kind sometimes, and being more effective overall in the long run.
Example 2 - Court Systems
It's kind of similar with court systems! Instead of emotions of conflict, the court considers cases where harm resulted from the old values or approaches—cases that tell us we need new values. A good example is Canterbury v. Spence, from 1972, which established the value of informed consent in medical practice. From Wikipedia:
Until the 1960s, it was conventional medical doctrine to withhold significant information from patients, particularly potentially upsetting information. It was common practice not to tell a patient they were dying, and even to deny it. ... Instead, many practitioners revealed only information that another physician might provide, following a rule known as "the professional standard". Risks, in particular, were often glossed over or omitted entirely. Although the right to consent in medical situations had been recognized for decades, the notion of informed consent was new.
All of this happened because a man, one Jerry Watson Canterbury, was not informed of the risks of a surgery, was paralyzed, and took it to court. This one man's experience led to a new key value in the practice of medicine.
Whether due to a court case or to emotions of conflict, the evolution of values is driven similarly: there is a problem with the previous set of values, and through reflection, deliberation, or experimentation, a new value emerges that guides practice better.
There is a kind of specialization here, but it is not like the specialization of technical knowledge. Different values are relevant to different practices. The practice of medicine is different than the practice of leading a team, which is different than the practice of intimate relationships. Different values have emerged to guide these different practices.
3. Antirivalry
Let's return to the example of a car, and look at it from the perspective of practice-problems.
What happens if we view the car not as a tool, but as part of an environment which might help us with what we want to practice in our lives? In other words, what if we consider the role of cars as practice spaces for our values—values which, as above, are always evolving? A car is then an environment, or part of an environment, in which I am either bringing my values into practice or having them suppressed.
This is a wider understanding of the car and its role. I'll get to that later, in the section on scope. It's also, I believe, a less-rivalrous way to think about cars. Or, to move to another example, to think about tennis courts.
If your family and my family both arrive at the same tennis court at the same time, and we each want to use the court to play tennis, only one of us can win. At best, we have a scheduling problem. At worst, we bargain for the court and the richest family gets it, or we compete in some other way. There are cases where rivalry is inherent in the situation—if there is only one tennis court, it is impossible to build more, and there are millions of players, there will be contests of some sort for its use.
But there are many situations where the rivalry emerges from the problem formulation and the infrastructure of coordination, rather than from the fundamentals. Consider what happens when both of our families come together around the problem of integrating the practice of tennis into our lives. This may be an opportunity rather than a contest. Perhaps we can share the costs of building a court, we can train one another up, or we can form inter-family teams for doubles. And these possibilities often grow exponentially when more such families come together around practice-problems.
These exponential possibilities (also called network effects, or increasing returns) can form around practice-problems even when those coming together have substantially different values. For instance, suppose we both like to play tennis, but you value a fast game close to the net, while I value a slower, mid-court game. It's likely we can still find a way to play tennis together, and additionally that we'll have explored different areas of technique and have a lot to teach one another. Even more so if there's a variety of styles.
What decides whether these network effects dominate is often how the problem is framed: whether it is treated as a practice-problem or as a set of competing achievement-problems. If each family independently breaks its practice-problems (like practicing tennis) into individual family achievement-problems (like renting a court), then a great deal of extra rivalry results from the fact that coordination happens so late, and so individualistically—only at the achievement-problem stage, and not at the practice-problem stage.
In general, there are network effects, or increasing returns, when trying to build practice spaces that are good for our values, that support a range of values and that don't suppress our most important values. If the infrastructure of coordination supports the coming together around practice-problems, rather than merely around achievement-problems, these network effects can be realized, and rivalry is much less of a problem.
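As a back-of-the-envelope illustration of these increasing returns—my own toy arithmetic, not a model from Daniel or the economics literature—compare how the possibilities scale under the two framings. Under the achievement framing, n families produce n rivalrous claims on one court; under the practice framing, the possible practice configurations grow combinatorially:

```python
from math import comb

# Toy comparison (illustrative assumptions only):
# - achievement framing: each of n families files one rivalrous claim
#   on the same court
# - practice framing: any two families can become practice partners, and
#   any four players can split into two doubles teams (3 splits per group of 4)

for n in [2, 4, 8, 16]:
    rivalrous_claims = n
    practice_pairs = comb(n, 2)
    doubles_configs = comb(n, 4) * 3
    print(f"n={n:2d}: claims={rivalrous_claims:2d}, "
          f"pairs={practice_pairs:3d}, doubles={doubles_configs:5d}")
```

The exact numbers don't matter; the point is that rivalrous claims grow linearly while practice configurations grow combinatorially, which is where the increasing returns come from.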
We see this already happening in the areas where people gather around practice-problems: for instance, in jazz and electronic music, hip-hop dance, internet meme culture, and in religious communities concerned with practicing values.
These subcultures lie outside the domain of market-coordination, contracts, and coordination mechanisms which force an achievement-problem frame. Instead, people gather around practice-problems, and the general feeling is anti-rivalrous, full of network effects and creative riffing. Venues for creativity in these domains are open and available, even in poorer places. They are operating as practice spaces, and are assessed by their richness as such.
Contrast this with art subcultures which are more market-oriented, like visual art. In visual art, venues like galleries and museums are seen as tools for advancing an artist's career, and are highly contested, rivalrous spaces. These spaces tend to be less rich, and less generative, because they are assessed as tools, not environments.
4. Wide-Scoped Tech
Anti-rivalry is one of the requirements (or "generator functions") that Daniel has for Game B. Another requirement is that technology be built—and its relevance considered—in another way. Currently, Daniel says, technology is evaluated based on whether it solves a narrow problem, and can be considered relevant and successful even if it makes broader problems worse in unpredictable ways. Here's one of his examples, cholesterol pills:
This is the same with biotech, where I can say the problem is one biometric that I’m trying to address, LDL or whatever it is, and I can give something that lowers that. But it might also do a bunch of things that are negative, which are the side effects of that thing in the overall system, which is why that approach is not a really good approach to medicine.
Other examples are close at hand—for instance, cars.
This problem is closely related to what we have discussed already, the specialization of technical knowledge, and the viewing of objects (including cholesterol pills, cars, and parts of the Earth, like minerals) mainly for their instrumental value as tools.
We already take a wider evaluative view when we consider these things as environments (or parts of environments) and evaluate them as practice spaces for our values. I have written a great deal about how exactly to measure—ultimately using a numerical metric—the success of a technology (and environment) like Facebook or a public school, in terms of how it works out for people's practice-problems. Since I gave that talk ("Is Anything Worth Maximizing"), I've been leading a community which has tested this methodology in many different settings. I believe that if these values-based metrics became common in tech, medicine, education, and so on, this would go a long way towards solving the problems of narrow scope.
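To give a flavor of what such a metric could look like, here is a purely hypothetical sketch—my own invention for illustration, not the actual methodology from "Is Anything Worth Maximizing" or Human Systems—in which people name a value they're trying to practice and rate whether an environment supported or suppressed it:

```python
# Hypothetical values-based metric (illustrative only; not the real
# Human Systems methodology). Users report a value they are practicing
# and a rating from -1 (the environment suppressed it) to +1 (supported it).

from statistics import mean

reports = [
    ("honest conversation", +1),
    ("honest conversation", -1),
    ("playful experimentation", +1),
    ("undivided attention", -1),
    ("undivided attention", -1),
]

def practice_space_score(reports):
    """Average support across all value-practice reports for one environment."""
    return mean(rating for _, rating in reports)

def per_value_scores(reports):
    """Which values does this environment support, and which does it suppress?"""
    by_value = {}
    for value, rating in reports:
        by_value.setdefault(value, []).append(rating)
    return {value: mean(ratings) for value, ratings in by_value.items()}

print(practice_space_score(reports))   # overall score in [-1, 1]
print(per_value_scores(reports))       # e.g. "undivided attention" scores -1.0
```

A score like this assesses the environment as a practice space—its richness for people's values—rather than as a tool for a narrow outcome.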
Whether such metrics can really solve problems like the ones above with cars, cholesterol pills, and mining depends on whether the right values can evolve—values like keeping the overall health of the organism in mind, or zero-emissions, which would anticipate and address the problems caused by these technologies.
I can't promise you that they will. But what I can do is point out that these situations look very similar to situations where values do evolve currently. If we imagine that a car or a pill is wrecking the environment in a way which wasn't foreseen by the pre-existing set of values when it was designed, this is very much like the examples I used when discussing the evolution of values—the personal example of not seeing the harm in driving a team towards effectiveness, or the social value that was missing when the standard of medical practice was "the professional standard."
When we imagine the scaling up of our values-driven, practice-problem left arm, we must also imagine that these mechanisms for the evolution of values, including mechanisms like court opinions and individual discernment, are correspondingly scaled up to handle the additional load. Somehow we would need to do values evolution at the scale of science.
I think it is reasonable to expect that, under those conditions, the right values to redesign cars and cholesterol pills would evolve rapidly and shape how environments are formed and practice-problems are solved. Wisdom at the speed of technological change.
5. Complex Systems
Daniel has a third requirement for Game B: that in place of the complicated, fragile systems, made by experts, that we have today, we move towards complex, regenerative systems, which can repair themselves and respond adaptively to environmental changes in the way that ecosystems can.
Making this shift work, too, has much to do with the dynamics of wisdom versus technical knowledge, and with characterizing problems in terms of environments and practice spaces, rather than in terms of achievement-problems and steps.
What We Need Now
That concludes our tour of Daniel's requirements. I hope the reader feels that I've supported my claim: that the part of our society that's concerned with wisdom and practice-problems is game B, and that we should scale it the fuck up, pronto.
This reframing offers us three lessons.
First, we must become values-articulate. As I mentioned in the introduction, we are very articulate about our goals, other people's goals, and the diversity of goals. When we are typing a Google search or clicking checkboxes on Amazon, we're specifying our goals, and we can be very articulate. It's similar when we're writing a contract. This ability to specify our goals is one of the things that allows us to build scalable systems around goals: it makes it relatively easy to design tools that address specific achievement-problems, and coordination systems that operate on the diversity of goals. We need a similar ability to specify our values.
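What might a specified value even look like, next to a specified goal? Here is a purely hypothetical sketch—my own data shapes, not an existing Human Systems format—contrasting the two:

```python
# Hypothetical data shapes (illustration only). A goal has a completion
# condition and is checked off when done; a value describes attention we
# want to keep paying, and the venues where we want to practice it.

goal = {
    "kind": "goal",
    "description": "repair the car's windshield",
    "done_when": "windshield replaced and sealed",
}

value = {
    "kind": "value",
    "description": "loving people in a way that embraces our development together",
    "attend_to": [
        "how each of us is changing",
        "moments where growth feels unwelcome",
    ],
    "practice_venues": ["long walks", "weekly check-in dinners"],
}
```

Whatever the right format turns out to be, something this explicit is what would let matchmaking, collaboration, and knowledge systems operate on values the way they now operate on goals.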
The second lesson is that we need to develop a new kind of design expertise. We have developed a rich and widespread understanding, in our culture, of how to design systems around goals, of how those systems can go wrong with regard to goals, and of how to make them robust against those failures.
At intimate scales, we have the disciplines of product design, user experience design, and industrial design. These cover the kinds of failures that happen when a poorly designed tool frustrates or confuses people who are trying to solve their achievement-problems. These fields give designers a powerful lens: whenever they look at objects, they see the potential frustrations and confusions that might arise as people try to achieve their goals with them.
At larger scales, we have disciplines like mechanism design, social choice, public choice theory, market design, public health policy, and the like. These fields are concerned with goal-related failures at larger scales—failures like the tragedy of the commons, incentives for fraud, or the prisoner's dilemma. Those who study these fields are primed to anticipate such failures and design around them.
What would it be like to have a group of people the size of the current community of product and UX designers, but focused on practice-problem design failures and on the difficulties people have in living by their values in different environments? Imagine thousands of people who can walk around and see these values-based design failures everywhere they go, just as the best designers now see problematic tools.
What would it be like to have a body of expertise similar to mechanism design, social choice, and public health, but using analytical tools and models that focus on failures to coordinate around practice-problems?
Where We're Headed
The need to develop this expertise leads to the third lesson I see about how to get to Game B. Such expertise doesn't spring out of nowhere just because of some blockchain project or new formal process. In fact, we can expect things to grow in the other direction—starting with informal and local systems and gradually building up formalizations, guiding principles, and textbook fields from our successes.
Daniel often talks as if the defining moment of Game B will be some new kind of blockchain-based crypto-collectivist enclosure: a commons-based society with a new formal system at its core, which manages to capture its network effects and make them available to all within it, accelerating its own progress exponentially and outcompeting Game A.
But this idea, that a formal system will come first, is probably wrong. It's more likely that a variety of informal systems which are working well—which are solving practice problems at new, slightly larger scales—will be studied and formalized.
Consider how things have developed with Game A:
- We now have sophisticated legal paperwork for corporations, but it developed, and was formalized, out of common-law practice, based on what was working.
- We have sophisticated formal systems for the weighing, grading, trading, and logistics of commodities like steel, eggs, and orange juice. These are now managed by international and national panels of experts. But before these systems existed there were local, informal institutions for solving the same problem. It was only by studying what was working that the formal mechanisms were worked out.
So whereas Daniel suggests starting with formal systems, and talks at a very, very abstract level—about civilizational operating systems and currencies and things like that—I don't think that's the right place to start.
I think a better approach is to get in the habit of understanding one's own values, understanding each other's values, and building informal and semi-formal systems that scale up our ability to live by our values and to coordinate around them. Then, once these systems are operating fairly well, we can look at how to formalize them and make them work for more people, more diverse values, more diverse practice-problems. And we can export the design expertise we've gained into textbooks and trainings and the like.
I'm optimistic about this approach, and it's the approach (Our Mission) that Human Systems is taking. If you like it too, consider enrolling in our classes (HS101 Deluxe) in values-articulacy and values-based design.