Subtitle: Practical Techniques to Align Markets, Products, and Orgs with Togetherness & Meaning
- Hard steps from Deep work
- App store spaces list
- Facebook ad
- Review screen
Google and Apple are designing new tokens that share a bit of info about each of us as we browse or use our devices, but they are optimized to sell us stuff. What if they were optimized to get us meaning instead?
- edge sources of meaning
- social situation for each
- a few bits of demographic and geographic info
- Ad token / profile
- Sociality situation for each
- Top HS
- Location / demo
- App store sort
- Ad network
- Suggested for
- You recently downloaded Matter for
slides: Add redesigns of TikTok and Telegram
slides: Section and slides where I pull our hard steps from DEEP WORK, using Bret story and more
slides: Improve wireframes for hard steps re Bret story
slides → talk: Do we need to reformat the hard steps?
talk: Add text for twitter app
talk: Advantages of creative riffing / white swan collaboration
Also, I hope the script will be better! That’s where you come in. Please send me notes, so I can improve it before launch.
Travel back to 2008. The first iPhones were out. I was working at Couchsurfing, and it was growing fast. Meetup.com and Wikipedia were all the buzz. Flashmobs were booming! Anyone remember Improv Everywhere? It was the “sharing economy”! And back then, that meant an economy of gift-giving, not one where everyone rents and no one owns.
It seemed—to me and many others—that the internet was bringing us a better economy, where people work together on giant projects like Couchsurfing, Wikipedia, and Linux. A big step, I thought, in the direction of meaning and togetherness. Love would be the new motivator. Money would fade away. I believed!
By 2012, I’d sobered up. By then, Facebook’s News Feed and YouTube had not only replaced TV, but had added hours of screen-time per person per day. The “attention economy”. In alarm, Tristan Harris and I cofounded the Center for Humane Tech.
Actually, it wasn’t called “the Center for Humane Tech” — it was called “Time Well Spent”. This was part of my proposal to fix the problem. At CouchSurfing, we had tried to maximize the amount of meaningful time our users spent with each other, rather than any measure of transactions. Tristan and I thought that if more people did this—maximized “Time Well Spent” instead of “Time Spent”—it would fix the tech industry.
So, I was pretty naive when I thought that flashmobs were the new economy. But I was still naive in 2012. I didn’t see the larger trends. After Humane Tech, I came to believe that the problem wasn’t with tech, actually.
I started reading economics, economic history, and social theory.
Individual vs Collective
I discovered that modern society has gotten better at giving people individual experiences, but not collective ones.
Now, people do want collective things: they want belonging, connection, community, love; they want adventures together. But if you look at long-term trends, we get fewer of these things year by year.
In most developed countries, people used to hang out on the porch with their neighbors, or in pubs; then they started watching TV as a family; then they switched to multiple TVs, one in each room in the house; finally the TVs got upgraded to smartphones. Each person staring at their own rectangle of glass.
Over the same period, church communities got replaced by individualized yoga classes; dating and friend groups got replaced by swipe-based apps and porn.
Now, this seems quite strange. Over this time period, markets and the internet have massively increased the choices available:
- we can buy pants from around the world,
- or learn moves from an obscure breakdancer in Japan.
You’d think that, because of this explosion of choice, many more people would get exactly what they want out of life. But—at least with collective things—that’s not what happens.
Sure, some apps or websites might be good for community for a moment. I guess Couchsurfing was one. If you work on one of those, like I did, you might feel you’re changing the trend. But zoom out a little and—no, sorry, the trend’s still there.
Taste vs Meaning
And here’s another trend: one way the internet’s increased choice, is via giant marketplaces. Amazon. The App Stores. They have something for everyone. They embrace the differences between people. You can find the right options, just for you, whether you’re a goth teen, a recumbent bicycling enthusiast, or a cake decorator.
But there’s a kind of difference between people they’re not so good at. Everyone has different sources of meaning—you may find it meaningful to be wildly creative; she may find it meaningful to be quietly contemplative; they may find it meaningful to love and be loved deeply. We vary as to what’s meaningful to us, but that’s a difference that markets and the internet don’t identify and serve.
So it’s easier to satisfy your obscure tastes than to make your life meaningful. You end up with a perfect coffee blend, but no creativity, no contemplation, nothing you really want.
Thesis Statement 🙂
So, you might be thinking: perhaps this is okay?
Markets and the internet have given us a lot! Maybe the isolation and meaninglessness are worth the other stuff? Or maybe you’re thinking: coffee blend? This sounds like a first world problem. And yes, in a way it is a first world problem. And in a way the trade-off was worth it, so far. But it’s fast becoming a problem for everyone on Earth, and a very serious one.
It might have been worth it for a while, but not anymore.
- The problem with togetherness has gotten so bad that it’s breaking politics and social trust.
That’s bad for everyone.
- The problem with meaning, I’ll show, is also breaking the engines of progress—like science and democracy—which are needed by everyone.
- In the developed world, these problems are breaking the basic cycles of love and dating, breaking schools, dissolving communities.
So, we should understand why markets and the internet have this bias, and reverse these trends.
In this talk, I'm going to lay out an agenda for doing that.
- To start, I have to dive into the details, and talk about what meaningfulness is—what it has to do with togetherness. That's the first section.
- Next, I'll talk about how, as individuals or entrepreneurs, we can make things more or less meaningful.
- Finally, I’ll use all that to talk about how we can reverse the trend.
🎙️ Chapter 1 - What’s Meaningful?
In this chapter I’ll talk about meaning at three levels of specificity.
Let’s start with these questions:
- What's something that, later today or tomorrow, you want to get done?
- What’s something you hope to get over quickly?
- What's something you hope to accelerate or delegate—to get someone else to do?
Try to answer them in your mind. Everybody have something? Okay.
Next set of questions. Instead of something you want to accelerate…
- What's something in the next day or two you want to linger on?
- Something you want to spend more time with?
- Something to make sure to notice, just because it's meaningful?
- Something you want to celebrate and cherish, or slow down to enjoy?
Everyone have something?
There’s something I’ve noticed, running this experiment. Most people have answers to both sets of questions, but the answers come quicker with the first set. And they’re more specific. They can give paragraphs of information on what they want to get done, offload, or skip.
Answers to the second questions are at a lower resolution.
I think the reason is straightforward. We spend a lot of time making to-do lists, and very little time making slow-down or cherish lists. We practice the first questions a lot; the second questions almost never.
There's something sad about this. By focusing on those first questions, we miss everything worth celebrating; everything that makes it worthwhile to be alive. We see clearly what we want life to be rid of, but not what we want it to be full of.
This difference also affects what we make.
Imagine you’re building an educational platform. Are students there to pass the test or get a certification? Or are they there to follow their curiosity, to face their open questions, or to celebrate together the beauty and complexity of the world?
On top are things your customers want to get done, get over with; on the bottom, the things that make life worth living.
Call the things on the top our goals. We are intimately aware of them. They form a kind of superstructure—smaller goals fit into larger ones, and we are dimly aware of the whole tree. We make lists of goals, at different scales.
Call what’s on the bottom our sources of meaning. Or I often call them our values. We are much less articulate about them. We don’t see the same kinds of patterns in them that we see in our goals.
Part of my goal in this talk is to change that. To sharpen up “sources of meaning”—make them as clearly defined as goals seem to be, and as communicable. I need to do this, to make my argument well.
But we also need sources of meaning to be clearer if we want to build businesses and apps around them. To fix the bias in markets and the internet.
Meso: Funnels, Tubes, and Spaces
See, this difference in articulacy makes things challenging for entrepreneurs. Say you interview your customers for “pain points” or “customer needs”. Since what’s top of mind for both you and your customers are goals, not sources of meaning, you’ll only collect their goals.
So you’ll design for their goals.
You’ll think of yourself as building what I call a funnel or a tube, not a space. Funnels and tubes are goal-driven.
I call something a funnel if it gets everybody to do the same thing, or work on the same goal. By this definition, the checkout area in a supermarket is a funnel. So are many organizations. There's one goal for everyone.
Similarly, I'll call something a tube if it gets people from where they are to their own goal. So Amazon, and all marketplaces, are tubes. So are Google searches. Tubes accelerate everyone to their own goal.
Both are goal-driven things, that everyone involved would accelerate if they could—to get the goal accomplished more quickly.
Many entrepreneurs see all design tasks as funnels and tubes. But that’s a big mistake.
Some things aren't designed around goals at all. Instead, they are about values, or exploration according to values. I'll call those exploratory spaces.
- spaces for exploratory thinking like your whiteboard, your journal, or a research lab.
- spaces for creativity, like jam sessions and brainstorms and creative tools
- spaces for chilling, like your living room
- spaces for vulnerability, like talks around a campfire, or a confession booth
- spaces for celebration like dance clubs, street riots, and festivals
These are not goal-driven. You know something is a space if you don't want it to be over quickly. You would accelerate an Amazon purchase if you could, or an Uber ride, or an organizational goal. The things you do in a space are things you would not accelerate.
Most design tasks involve a mix of these three parts.
Imagine you're making a messaging app, like Telegram or Messenger.
- Sometimes you open the app, search for who you want to send a message to, and send it. Then, the app is a tube, getting you to your goal of sending a message.
- But the messaging app is also a kind of exploratory space—a space for thinking about who you want to stay in touch with, about which kind of correspondence you want to have with who, at which kind of rhythm; a space for being thoughtful about your correspondence, for being vulnerable, and more generally for expressing whatever values you have about keeping in touch. In this sense, the messaging app is a space.
A messenger app can be thought of as a tube, or it can be thought of as a space. And there are many, many things like this.
Same with that educational platform I mentioned before. If you focus on your users’ goals, you’ll think of it as a funnel or tube. If you focus on their sources of meaning, it’s a space.
This is even true with something like advertising analytics: it may seem like your customer has a straightforward goal—to have the most people see their promoted tweets. But you could also design an advertising analytics tool as a space—a space to explore your audience, to find audiences who meet you in certain ways, to build up a certain kind of meaningful rapport in a community.
So designers can often decide whether to make a funnel, a tube, or a space.
Or—as with our messaging app—they might make a mix of all three, where each screen or UI component participates in one or more funnels, tubes, or spaces.
Now, designers have a lot of words for a lot of things. But they don’t have terminology that corresponds to funnels, tubes, and spaces. Designers don’t broadly recognize it yet, but spaces require a different approach.
And there isn’t a special word in entrepreneurship for entrepreneurs who make spaces— even though many space-making entrepreneurs exist.
Of course they do:
- Many game designers like the designers of Roblox and Minecraft are spacemakers.
- The founders of Burning Man are spacemakers.
- The people who set up Bell Labs and Xerox PARC are spacemakers.
These projects aren’t pure spaces. They involve a mix of funnels and tubes, too. But the people who make them clearly focus on spaces. So I’ll call them “spacemakers”.
One thing I’ll advocate for, in chapter 3, when I talk about reversing the long-term trends of isolation and meaninglessness, is the creation of a shared identity for spacemakers.
In a way, this wouldn’t be so special. There are many specialized entrepreneurial communities. Consider organic farming. Or Zebras Unite, a community of alternative tech entrepreneurs. These groups have their own funding and legal structures, their own metrics and certifications, etc.
I believe a community of spacemakers would be substantially more powerful than the organic farmers or the zebras, because life meaning is such a central concern for human beings, and because the rest of the economy has done such a bad job with spaces.
- The public is so incredibly underserved and mis-directed, when it comes to meaning and togetherness.
- Governments, too, are wasting a ton of money investing in democratic, educational, and scientific structures that no longer function because their spaces have decayed.
People are flushing their dollars down the drain.
There are businesses claiming to be about ”community”, “sharing”, “adventure”, or “love”—words that suggest a space. But most of these businesses are smart marketing, slapped on a funnel.
A community of spacemakers could become trusted to fix these problems. To deliver meaning and togetherness instead of mis-marketing funnels. To fix the problems of space-decay in democracy, education, and science.
Together, these might be the largest sources of economic inefficiency the world has ever seen.
A trusted community of spacemakers can help.
Language of Meaning
In chapter 2, I’ll say how spacemakers could operate as businesses, and what their design practice might be. But before I get there, I want to continue trying to clarify what’s meaningful. Let’s go deeper.
So far, I’ve talked about “sources of meaning” with vague words like “creativity”, “vulnerability” and “rapport”. But this won’t do. We need to make “sources of meaning” as clear and communicable as goals, fears, or feelings.
Imagine you're making a space for vulnerability, or an exploratory app for creativity. Or maybe you are expanding a network of farmers markets, and you want them to be good spaces for people to explore localism in food systems.
Well, you have two big problems in orienting your design around vulnerability, creativity, or localism.
- First, these terms are vague. Different people in your team will disagree about what 'localism' means, what 'vulnerability' means, etc.
- And this leads to a second problem: these terms are also untestable. How do you know whether vulnerability is working out? How do you avoid tricking yourself into thinking it is, or at least picking whatever definition of vulnerability is easiest, rather than what would be most meaningful?
I struggled with these problems for many years.
They're quite deep!
It took years of reading to solve them.
Here's some papers that helped me figure them out.
These are by Amartya Sen, an economist who won the Nobel prize.
These are by philosophy professors—Ruth Chang, Charles Taylor, and David Velleman.
In the end, based on this reading, I came up with these values cards.
- The cards are specific. They drill down on a vague word like vulnerability, creativity, or localism. People care about distinct kinds of vulnerability, and you'll pick one or two to focus on in your project.
One key idea, that I got from my reading, is that values show up when we make choices.
They show up in our attention.
If you care about something, you’ll pay attention to it during a choice.
So, the center of each card tries to capture what people pay attention to, and choose by, if they have this source of meaning.
The other key idea is that values are less about concrete outcomes, more about a quality people want in their lives.
So, at the bottom of the card, it says what qualities happen, when you live by the value.
That’s it! The cards define “a value” or “a source of meaning” (we use these terms interchangeably) in these two parts, and give it a name.
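For those who think in code, the two-part card structure can be sketched as a tiny data model. This is purely illustrative: the field names and the example card are my own, not part of any real system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValuesCard:
    """A hypothetical data model for a values card.

    A card names a source of meaning and captures it in two parts:
    what a person attends to when choosing by it (the center of the
    card), and the qualities that show up in their life when they
    live by it (the bottom of the card).
    """
    name: str
    attends_to: List[str] = field(default_factory=list)  # center of the card
    qualities: List[str] = field(default_factory=list)   # bottom of the card

# An illustrative card for one kind of creativity:
card = ValuesCard(
    name="Creative Riffing",
    attends_to=["the right conversational rhythm", "the right companions"],
    qualities=["a flood of exciting ideas that build on each other"],
)
```

The point of the structure is just that a value is defined by attentional paths plus resulting qualities, nothing more.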
Two Kinds of Creativity
To show the power of this, consider these two kinds of creativity.
- Let's say you're into this kind of creativity—about having a lot of exciting ideas which build on each other. Usually together with a brainstorm buddy. To practice it, you need to attend to certain things: for instance, to finding the right conversational rhythm, the right kinds of reactions, and the right companions.
- Contrast that with this other kind of creativity. If you value this one, you’ll focus on different things. On your longest lasting curiosities, how you can study them over time, and where to pursue them deeply.
Even though these are both kinds of creativity, they call for different designs.
- Think about someone who finds meaning in the first kind. They’d be well-served by a social environment that makes it easy to find that buddy, or test different buddies out, and where there’s low stakes, and a lot of quick thinking.
- But someone who values the second kind of creativity may need a quieter environment, or one where any social pairings involve much more context.
Put person B in environment A, and they won't be able to pay attention to those long-lasting curiosities; they won’t have a chance to do deep work.
And vice versa: put person A in environment B, and they won't be able to find someone to brainstorm with and do creative riffing.
That’s an important fact. The attentional paths that are written on these values cards let us differentiate one kind of creativity from another; one kind of vulnerability from another, and one kind of localism from another.
And they let us be rigorous about whether a kind of creativity (or whatever) is happening in our designs. It’s simple: for a value to really be happening, people must be attending to what’s on the card, and choosing by it.
If, in your design, people can attend to these particular types of things, and make their choices by them, congratulations—people are able to live by their value within the space you made.
Emotions and Values
It’s eye opening to find out the sources of meaning for the people around you.
One way you can do that is by asking about the spaces they need in their lives: which spaces are responsible for their most meaningful experiences? What were those spaces good at?
Another way to learn about them is to look into people's emotions. Emotions point to values that are working out, or not working out.
- If I'm angry, one way to interpret that is that something important to me—a way I want to live—is blocked. The thing that's blocked—that way I want to live—can be written as a values card.
- If I'm sad, it might be because some terrible thing has placed me far away from a way I want to live.
- If I'm grateful, some fortunate event has brought me in contact with a way I want to live.
In each case, the way I want to live can be written as values card.
Whatever I'm feeling, whether it's a positive or negative emotion, it will point to something important to me, something that can be written as a values card.
Making these cards, and finding values in feelings—these are two of the best ways I know to get clear on your own values and the values of the people you love. Once you see that each emotion points to a value, your understanding of your sources of meaning sharpens right up.
Universe of Meaning
There’s one other advantage of these values cards. As you collect them, trade them, and make them, you start glimpsing what I’ll call “the universe of meaning” — what the space of values is, with everyone's sources of meaning all together.
It’s interesting! The collection of all sources of meaning—this collection is smaller than the collection of all goals, or the collection of all preferences. It’s smaller but it’s still endlessly surprising, and always changing.
Remember, values cards are limited to things you’ve actually paid attention to, and found meaning in paying attention to, so they always cut right to what’s real, and meaningful, where you’ve already had the experience.
This is different than the collection of all goals, which includes many far-off goals. Goals like “get rich like Elon Musk”. Or goals people only have because they hope they’ll lead to something good later. Goals like “impress so-and-so at the bar.”
Of course, these goals relate to our values. With some goals, we hope accomplishing them brings us closer to our values. For instance, maybe you think that getting rich would allow you to finally spend time playing music.
With other goals, they provide a venue or social context for our values. For instance, if I find meaning in a kind of visual creativity, I might set a goal of publishing a weekly comic with a friend, to create a venue for that source of meaning.
So goals relate to sources of meaning, but vastly outnumber them.
It’s similar with preferences. The collection of all preferences includes taste preferences, like the preference for chocolate cake or goth make-up. And it includes ideological preferences, like the preference for Trump or Biden.
Some preferences are deeply expressive of your sources of meaning. For instance, maybe you love nature and prefer a certain forest that has a quality that opens your heart.
Other preferences might be quite strongly held, but only related to your sources of meaning by a long train of inferences.
Perhaps you think, if Trump wins, things will get worse for your family, and the forests will be cut down. That’s why you prefer Biden. The connection between your source of meaning and the preference is distant.
If you collect sources of meaning directly, instead of preferences, you cut right to what’s precious and immediate about being alive. The basic ways of living, choosing, and attending, which a person finds meaningful. The things we need spaces for.
We're now about halfway through the talk. It might be a good time to take a break, do some stretching, high five your colleagues, or just walk around and look at the building you're in.
Here are two things you can think about, if you want to walk around:
- Which parts of the building are designed as tubes? Which parts are funnels or spaces?
- Or think about your own feelings—do they point to a way you want to live? Could they be written as values cards?
✍️ Chapter 2 - Meaning on Purpose
In Chapter 1, I showed how to make a shared language of meaning. In this chapter, I want to show how powerful this is. I’ll show two ways that a shared language of meaning can change things on a local level.
- First, I’ll show how having a shared language of meaning allows you to design meaningful things, spaces, and make them meaningful on purpose.
- Second, I’ll show how the people inside an institution—for instance, a school, a research lab, or a city—can use that shared language of meaning to advocate for or against change.
So, let’s talk about design. This is the practical art of being a spacemaker.
It begins with listening for people’s sources of meaning. This is how spacemakers find demand for spaces.
Say you talk to Alfred, and you find he has a particular source of meaning. You write it out as a values card and he agrees.
You now have the design criteria for a space.
As you try to serve it, you’ll design a complex structure—an app, a business, various events, etc. As you do so, you’ll want to divide up the funnels, tubes, and spaces.
When you’re iterating on the space parts, you can point to his source of meaning and ask: is this space a great place to attend to those things? To choose in these ways? How could it be better?
It’s way easier to make a good space for him.
Once you’ve made a good space, having the source of meaning at hand will also let you monitor it, to make sure it continues to be meaningful the same way.
And there’s one more thing we can do to make a good space for Alfred. We can ask him what’s hard about living by this value.
We often assume that, if someone isn’t living by their values, it’s because they’re weak or hypocritical.
But, look closely at a values card.
Let’s look at a value of mine. Deep work. Working for many years to make something—something which may not even be successful in the end.
A big part of living by a value is attending to certain things.
With this one, you’ve got to attend to your own curiosities, to experimental timelines, and to potential colleagues.
- Look at curiosities. Curiosities I'm ready to address by experimenting for months or years without quick results.
- Look at this line, about companions for investigating those curiosities.
It takes a kind of introspective skill even just to explore your own curiosities. To categorize them into short term, and longer-term ones.
- You need a good source of potential companions.
- You also need to be able to assess them—to guess which of those people share your lifelong curiosities.
- You need information from those people.
- And once you find a good companion, you have to be able to build a relationship—you probably need some social skills.
Where are you going to find those companions?
There's going to be a process of relationship building there.
We can drill into that, that process of relationship building.
What are the hard steps of building such a companion relationship?
Here’s how I built one of these companion relationships. In this case, it was a kind of mentorship. I was thinking about doing deep work, but I didn't really know what it took.
Then I met my friend Bret. Without Bret, I’m not sure I’d have been able to do deep work.
When Bret and I had dinner, we saw we shared some deep questions about making technology that’s aligned with human values.
He invited me to present to his lab. But I gave a really bad presentation. Bret took me aside and gave me really good feedback. I followed up, improved my talk. Bret helped me think clearer and was a critical audience while my thoughts were still messy, and this was how our relationship formed.
The elements in this story can be reformatted as what we call hard steps.
For each element in the story, we just ask: is there something here that's sometimes hard to do, and often part of living by the value?
So we can replace this one with this and this one with this.
Part of designing spaces is understanding these things. We call them the hard steps of living by a value. We collect hard steps the same way we collect sources of meaning—by talking to people.
We find they come in three flavors.
- There’s information that's hard-to-gather, that's necessary to live by the value.
- There are relationship-building moves that can be hard-to-take—like when Bret offered to host, or gave me tough feedback in a sweet way.
- And there are transitions in settings or environments that can be hard to make—like when Bret took me aside to a quiet place for the feedback.
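The three flavors above can also be captured in a small, purely illustrative sketch (the type names and example steps are hypothetical, drawn from the Bret story):

```python
from dataclasses import dataclass
from enum import Enum

class HardStepKind(Enum):
    """The three flavors of hard steps described above."""
    INFORMATION = "hard-to-gather information"
    RELATIONSHIP_MOVE = "hard-to-take relationship-building move"
    SETTING_TRANSITION = "hard-to-make transition in settings"

@dataclass
class HardStep:
    kind: HardStepKind
    description: str

# Example hard steps for the "deep work" value:
steps = [
    HardStep(HardStepKind.INFORMATION,
             "learn which potential companions share your lifelong curiosities"),
    HardStep(HardStepKind.RELATIONSHIP_MOVE,
             "offer tough feedback in a kind way"),
    HardStep(HardStepKind.SETTING_TRANSITION,
             "move aside to a quiet place for the feedback"),
]
```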
The hard steps explain why people don’t necessarily live by their values.
In general, we’ve found that when the needed information is available in an environment, when the relationship-building moves are relatively easy to make, and when the transitions between settings are relatively easy to make, people are much more likely to be able to live in a way that's meaningful for them.
If somebody would find meaning in deep work, but they're not doing deep work, it could be that any of these hard steps were blockers for them. They can’t find or choose companions, discover their curiosities, etc.
It's not that they're hypocritical or weak! It's that living by the value is hard.
✍️ Good Mentor App?
If you want to make a good space for living by this value, your job is to make it a bit easier. Then more people can live by their value of deep work, and live a meaningful life.
To do that, you better study these hard steps. Study them hard.
You should make sure that, in your space, the information needed is accessible, the moves are easy to make, and the transitions are possible.
The good news is that this reliably leads to design insights.
- For mentors, it could be easier to see who's worth investing in. Could we analyze Twitter data to make a timeline of how someone’s learned? How dedicated they are? What they still suck at, that could be high leverage for them?
- Another thing that seems important to surface is whether people take feedback seriously. I can imagine an app where people go through multiple revisions of something. (A presentation. A video. A tweet.) They have feedback sessions from strangers, and those strangers get to watch the thing evolve.
As soon as you start collecting sources of meaning and hard steps, you realize the world is full of opportunities like that. Everyone has many sources of meaning being blocked by hard steps that are too hard to take. If there were spaces to make the hard steps easier, they’d have more meaningful lives.
My sense is that, if people were more articulate about sources of meaning and hard steps—if this information was as widespread as information about goals—there would be many opportunities to redesign spaces and make things meaningful on purpose.
If this information was widespread, people would also be able to prevent the decline of the spaces they already have. Or fight for their restoration.
Say you're a scientist, or a school teacher, or an elected representative.
Over the past decades, aspects of your job keep changing. Your workplace is less and less meaningful. Your school, research lab, or democratic process used to be a space, but it’s become a funnel.
If you can’t speak in these terms—if you can’t name the sources of meaning being lost, the hard steps made harder by each policy change, if you can’t even name the difference between spaces and funnels… if you can't say any of that, how could you fight those policy changes that slowly funnel-ify your school, lab, or town?
Articulacy has a dramatic effect on our ability to advocate for change.
Look at how things changed as people became articulate about feminism or environmental impact. If you’re in a meeting at work and someone points out that women aren’t being heard, or that some ecological impact could be curbed, something’s likely to change, because people are articulate about the problem.
But if someone points out that one of their sources of meaning is being made harder… they are unlikely to get much support—yet. With no sense of the problem and no shared language, people can’t band together to keep spaces working.
Articulacy about sources of meaning and hard steps would let us advocate to keep things meaningful. And it lets spacemakers make things meaningful on purpose, in the first place.
✍️ Chapter 3 - Meaning at Scale
So far, I’ve shown how a shared language of meaning can help make things meaningful on a local level. People can design for meaning, and they can advocate for meaning inside their institutions.
Sadly, I doubt these local actions will suffice to reverse the long-term trend away from meaning and togetherness.
We live in a world where many forces are at play, besides the individual actions of local entrepreneurs. Businesses live or die based on larger structures—ad networks, design methods, business practices, operating systems, funding networks, recommender systems, and markets.
I’ll make an argument in this chapter that these larger systems are biased towards funnels and tubes, and away from spaces, and that they need to be changed if we’re to live in a world with enough spaces, enough meaning and togetherness.
In this chapter, I’ll show how these larger systems are biased, and what changes might suffice to give spaces a chance. I won’t talk yet about how to actually get those changes made—although in the talk’s conclusion I’ll give some suggestions.
Here, I’ll start with the easier changes—in design methods, business practices, and funding, and work up to the harder ones, which concern systems at the largest scales.
In chapter 2, I told a story where you talked to Alfred, collected his sources of meaning, and made a good space for him. That story skipped several challenges you'd face as a spacemaker.
- First, Alfred can probably answer questions about his goals far more easily than about sources of meaning. And all the processes you learned for detecting demand—customer surveys, user research, and metrics—are designed for detecting goal-related demand. You might have a hunch that Alfred needs a space for exploratory thinking, but end up making a productivity app, just because his productivity-related-goals are easier to capture.
- But let’s say you learned about values cards and can talk to Alfred about his sources of meaning, even when he’s inarticulate about them.
- But okay, let’s say you also know about hard steps. Not only can you collect sources of meaning from the inarticulate, you can design good spaces, using a new design method based on hard steps.
Your design methods may lead you astray. The most dominant design trainings are UX and incentives design. But they’re all about moving people along through funnels, smoothing out their experience, reducing choice, and incentivizing or entertaining them along the way.
Even if you wrote down space-criteria, following these design methods would bring you towards serving funnel demand, not space-demand.
Again, you’re stuck making a productivity app.
Well, you’ll still have to justify your project to colleagues, funders, employees, and other customers besides Alfred.
That's so different from a project based on tastes: imagine you were raising money or assembling a team to open a bubble-tea shop. You can point at an expanding market for bubble tea, talk to bubble-tea lovers about their preferences, etc.
Right now, you can't do that with meaning. You’ll struggle to tell employees what meaning-related targets to hit; or how to plan around meaning, etc.
Your funders will mostly understand funnel and tube successes, and will have a hard time seeing the potential of your space. They’ll want to fund funnels and tubes.
So it will be hard to scale your space.
Here’s the key insight: it’s not enough for demand to be out there. A business has to find that demand, and meet it.
Any new business has to experiment with different offerings, different feature sets, until it hits upon what's sometimes called "product-market fit": the point when its offering resonates with customers, and delivers on their needs.
Content creators also do this—experimenting with styles and formats. App-makers show mock-ups to their friends, and so on.
Funnel and tube entrepreneurs have great practices to accelerate this: everything from Facebook ads, to lean-startup methods, to books on user research and design. This gives funnel- and tube-based businesses a huge advantage.
In Chapter 1, I mentioned the idea of forming a community of spacemakers with a common entrepreneurial identity, just like organic farms or startups have a common identity.
- This community could make space-demand easier to recognize, by changing customer surveys, user research, product success metrics, etc. Ideally, these spacemakers would be able to interview or survey customers about their sources of meaning, even when those customers don’t know themselves.
- They could make that demand easier to meet. Ideally, they'd have special design skills, to parse a project into funnels, tubes, and spaces, and to systematically design for spaces.
- They could have their own funding structures—funding that makes bets on meaning, not on transactions.
- They could have their own product success metrics and design methods—focused on whether people can live well in a space.
- And they could grow their customer base by helping people get in touch with their sources of meaning.
Developing an entrepreneurial community of spacemakers, with its own methodologies, seems key to making meaning at scale.
Unfortunately, there are more problems. Some I haven’t brought up yet.
Let’s go back to the space you’re making for Alfred. Even if you collect the related information, design for it, and somehow scale your funding—you may struggle with advertising and marketing. You’ll likely need to put your business on a social network, recommender system, or two-sided marketplace. You might even need to install it as an app.
Our lives, these days, are structured by recommender systems, algorithms, advertising networks, and operating systems.
To some degree, they decide what we pay attention to, what we download, what we buy, even who we meet.
And there’s something in these structures, too, which is biased towards funnels and tubes, and away from spaces.
Actually there are two sources of this bias, and we’d need to fix both. I’ll take them in turn.
As I’ve hinted, getting clear about values changes how two people can talk to each other.
When I know your sources of meaning, I can try to make things meaningful for you. If I just know your goals and preferences, it's less obvious how to do that.
That also relates to recommender systems, operating systems, ad networks, and markets. These systems involve giant databases, where each of us is profiled. How we’re modeled by the corporations and OSes and market structures affects which recommendations and which matches are made.
But there's a big difference between our interpersonal models of each other and the corporate models of us. On the interpersonal level, our models are loose and evolving: over time, I learn more about my girlfriend's feelings, more about your preferences, and more about my co-founder's goals.
But in our interactions with governments and corporations, their database models of us are rigid, conforming to a fixed schema.
- Google and Amazon know about our goals, based on what we search for.
- Facebook and TikTok know something about our interests, based on how we click and scroll.
So which type of information governments, corporations, and operating systems use really matters.
If they have goal information, that’s going to work for recommending funnels and tubes, but not spaces.
This means they can’t be fully aligned with what’s important to us.
- If you have a source of meaning like community care, a goal-based recommender won't know to recommend a good space for it. Instead, it will try to transform it into a goal. It will tell you that if your teeth were whiter, you might have community, and try to sell you a way to make your teeth whiter.
- If you have a source of meaning like deep work, it might try to sell you a notes app.
- If you have a source of meaning like responding to the world situation, it might try to sell you a conspiracy theory to “really understand what’s going on”.
These goals may advance us towards what’s meaningful, but often they won’t. Even when they do help, there’s often a more direct path the recommenders never see, because they see us through our goals, and recommend funnels and tubes.
There’s a kind of debate at these companies, and in the world of advertising more generally. They are debating: which kind of profile information is best for getting us to click, scroll, and buy?
In this debate, goals information seems very strong. But there are new ideas. For instance, there’s the idea to put emotions or emotional responses in the profiles. So, Facebook would track what videos make you smile.
Information about emotions or emotional response might be good to keep you scrolling. Keep you distracted. But it might not be as good for selling things.
Which kind of profile information is best for getting us to click, scroll, and buy?
Wait. Hold up. That’s a shitty question.
What question should these companies be asking? Wouldn’t this be better?
Which kind of profile information is best, to help us live meaningful lives?
The profile these companies currently have is counterproductive, when it comes to making our lives meaningful.
I guess you already know what I think should be in there: values cards and hard steps!
I know some of you are going to be against corporations having any kind of profile of us. But set that aside for a moment, and imagine with me that they have a kind of meaning profile.
They'd know what our sources of meaning are, and where we have them fulfilled. And they'd know a bit more: do we have some friends who share these sources of meaning, or are we alone in them? Which hard steps are blocking us?
What would this unlock?
- What if we could sort the app store by what will be meaningful?
- What if we got ads or feeds based on a bet on what will be meaningful?
- What if our OS was looking out for us, trying to fill our lives with meaning, and learning in the process?
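To make the idea concrete, here's a toy sketch of what a "meaning profile" and a meaning-based app-store sort might look like as data. Everything in it is hypothetical: the field names, the apps, and the scoring rule are illustrations of the idea, not a real design.

```python
from dataclasses import dataclass, field

@dataclass
class SourceOfMeaning:
    """One entry in a hypothetical 'meaning profile'."""
    name: str                  # e.g. "deep work"
    shared_with_friends: bool  # do some friends share this source of meaning?
    blocking_hard_steps: list[str] = field(default_factory=list)

@dataclass
class App:
    name: str
    eases_hard_steps: set  # hard steps this space claims to make easier

def meaning_score(app: App, sources: list[SourceOfMeaning]) -> int:
    """Toy ranking rule: count the blocked hard steps this app would ease,
    weighting sources the user pursues alone, since they need help most."""
    score = 0
    for source in sources:
        eased = app.eases_hard_steps & set(source.blocking_hard_steps)
        weight = 1 if source.shared_with_friends else 2
        score += weight * len(eased)
    return score

profile = [
    SourceOfMeaning("deep work", shared_with_friends=False,
                    blocking_hard_steps=["protect long blocks of focus"]),
    SourceOfMeaning("community care", shared_with_friends=True,
                    blocking_hard_steps=["find neighbors to check in on"]),
]
apps = [
    App("WhitenMyTeeth", {"whiter teeth"}),
    App("FocusFort", {"protect long blocks of focus"}),
]
# An app-store sort by predicted meaning, instead of predicted purchases:
ranked = sorted(apps, key=lambda a: meaning_score(a, profile), reverse=True)
```

In this sketch, the teeth-whitening app scores zero because whiter teeth ease none of the user's blocked hard steps, while the focus app rises to the top.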
Ideally, we wouldn’t have to trust the recommendations corporations make for us. We don’t want them to use our sources of meaning to delude us, to trick us into buying something which won't actually help.
Ideally, we’ll be able to audit how well the platforms and OSes do at this task.
Imagine we can get that kind of information flowing, and keep it truthful. I think we can create an ecosystem where governments, corporations, markets, and voting structures are all accountable to our need for meaningful lives, in a way they've never been before.
Because these systems are in between us, they limit how we can collaborate. A market or recommender that knows only our goals can relate us in just two ways: we compete for the same resource, or we find a way to transact. Either way, we miss a whole bunch of opportunities that a richer vocabulary would let us discover.
These limited vocabularies of markets create a great deal of economic loss: extra rivalry, in cases where, with a richer vocabulary, or without the intermediating layer, we would have found a way to work together.
(Now, it's not that a richer vocabulary resolves every conflict; it certainly doesn't, sometimes. Take the two types of creativity I mentioned earlier: if land is limited, and one group needs a quiet space while the other needs a social space, there's an inherent rivalry in making both spaces.)
The transactional layer also pushes us to think of what could be spaces as funnels for ramping transactions. In a way, a goal has been created.
For instance, imagine I value creative riffing, which we covered earlier, and you value this one—white swan days—which is about running into people unexpectedly. But we're inarticulate about our sources of meaning; we can only talk about our goals.
So I say I want to start a comedy improv club, and you say you want to start a community space. We end up competing for the same real-estate, and maybe even competing for attention to get the same people to come to our different events and spaces.
If we’re lucky, we might find a transaction we can make together: maybe I rent your community space for an improv event.
That’s how goals work—either we are rivals competing for the same resource, or we find a way to transact.
Now imagine things change, and we can know each other’s sources of meaning. We are likely to see many new advantages of working together.
There’s even more advantage when people discover they share the same sources of meaning.
People get excited about a space when they imagine coming together with other people with similar sources of meaning, and practicing there. If I value a certain kind of vulnerability, I’ll be excited to gather with others who share that value and be vulnerable together. And a group that values a particular creativity will be excited to be creative that way together.
That excitement only spreads when you realize you share a source of meaning. Otherwise, you don’t see the potential.
So already, we're gonna have a hard time thinking of this as a space for two reasons, one, you can't really see how to support people. In a situation that's so transactional, and also everybody's stuck alone.
Okay, so the type of data in the profile creates a bias in these large-scale systems. Unfortunately, there’s another source of bias, and it’s more pernicious.
Something remarkable about these systems—about markets, recommender systems, and operating systems—is that they get in between everything.
Markets and recommenders, in particular, imagine two pools of actors: creators and consumers. The creators are kept separate from one another, and the consumers too. With recommenders, each creator makes their own creation and posts it. Each consumer scrolls the output of the recommender on their own. Creators and consumers are only connected when there’s a match.
What could have been something more like a sharing circle, where creators and consumers riff on each others’ ideas, and applaud one another—what could have been a space—has been split. Individualized.
And this changes our interactions. They become transactions. Posts. Views. Clicks. Purchases.
This is such a deep change! It replaces values with goals. We can no longer interact in the ways that are meaningful. Instead, we ramp transactions.
Markets and recommenders, by their intermediating, individuating structure, turn spaces into funnels.
This is the fundamental reason why people have been turned into consumers and “creators”. Why democracies—which used to be made of spaces—have been turned into funnels for riling up voters.
Why the world is a whirlwind of ideological battles, manipulation, and consumption.
Why there are so many businesses claiming to be about ”community”, “sharing”, “adventure”, or “love”—words that suggest a space. But these businesses, which are subject to markets and recommenders, cannot be spaces. They are funnels, and they can’t deliver these things.
So there’s no way out, without changing the structure of markets and recommender systems.
Luckily, this is possible!
Scientific publishing has a similar function to a recommender system: a scientific paper goes through a process of “legitimation” which surfaces the best science, and sorts papers into different bins, to get the right science to the right scientists. But it's not a market or recommender system, and it doesn't separate people.
We can study scientific publishing and learn how to restructure markets and recommenders.
Let's look at the effort that goes into preparing a scientific paper, and getting it accepted or seen. “Legitimation” is a word for this effort. What steps do you have to take to legitimate a paper? Or, equivalently, what makes a paper legitimate?
Here they are.
When people think about what legitimates a paper, they think of peer review, and maybe acceptance by a leading journal. But that's just the tip of a long legitimation process. Before that, you found collaborators, and did a lot of background reading to ground your claims. Before that, you had to get hired by some institution. And before that, you had a thesis advisor, a thesis, and a degree. If any of these things turned out to be bogus, you might have an illegitimate paper on your hands.
- Papers have authors.
- These authors have institutions. Part of what legitimates the paper is that the authors got past some kind of admissions or hiring process.
- When a paper has multiple authors, there’s an additional legitimation there—because the authors decided to collaborate. And often, a scientific paper is a collaboration between more experienced and less experienced scientists.
- Furthermore, each scientist had, in the past, some kind of thesis advisor, who signed off on their work at the start of their career, and was part of giving them a degree.
- Next is the methods section of the paper—it needs to contain methods which are easily checkable, and common in the field.
- Also important are the references, and the related work section, which act to show the author has engaged with the field, and prove they have done their background reading.
- A legitimate paper will also have a relationship between its claims and its references. Big claims will either be supported directly with data or argument in the paper, or will have footnotes attached, linking to support from other authors.
- Finally, the cherry on top is acceptance by a journal, and peer review.
A legitimate paper is one that has all these qualities.
Scientific publishing is one example of a legitimation process, but they are ubiquitous, and operate at different scales. Local democracy is a legitimation process. Asking someone to marry you involves a legitimation process. Even just sitting next to a stranger at a bar involves a mini-legitimation process of body language and connection building first.
You can think about a legitimation process as a network of semi-private groups with shared norms, with a shared schema for building legitimate contributions.
Science is made of many university departments, research labs, and publications, but legitimating a paper mostly involves the same steps, and everyone in science knows them.
So that’s science. We can imagine if TikTok worked the same way.
Instead of creating alone, TikTok creation could work more like scientific publishing. A contribution would grow, accumulating collaborators and stamps of approval of different kinds.
- Imagine if, when you were posting on TikTok, instead of posting to the recommender, you posted to a group whose job was to help you improve your contribution and which would consider certain criteria.
- A recommender could still be used, but, instead of recommending videos to consumers, it could recommend first viewers of a video—people who could watch your video before everyone else, and sign off on it.
- You could browse these groups, and find a group with criteria you like.
- The group could make suggestions, or even join you in editing.
- Others in the group could make assessments about whether your contribution was ready yet, using the criteria of the group.
- Once enough of the group’s editors like it, your contribution goes forward with metadata attached about the group, and about the criteria the group uses for evaluation.
- For entertainment videos, this may just be a tag like “funny”.
- But for science communication, videos could include criteria from the scientific legitimation process!
- Consumption could also take place in these groups. In fact, the distinction between creation and consumption might evaporate.
I’ve just turned TikTok from a recommender to a legitimation process. There are two key differences: first, recommenders are still used, but they don’t separate people; second, there are known criteria for legitimation, which can be tuned to the sources of meaning of participants.
These two qualities mean the system can be maintained by people who know one another, rather than by some central, all-seeing authority.
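The group-legitimation flow above can be sketched as a small data model. This is a minimal illustration under my own assumptions: the group names, criteria, and approval threshold are all hypothetical, not a proposal for TikTok's actual systems.

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    """A semi-private editing group with public criteria, e.g. 'funny',
    or criteria borrowed from the scientific legitimation process."""
    name: str
    criteria: list[str]
    approval_threshold: int  # how many editors must sign off

@dataclass
class Contribution:
    author: str
    video_id: str
    approvals: dict[str, list[str]] = field(default_factory=dict)  # group -> editors

    def sign_off(self, group: Group, editor: str) -> None:
        self.approvals.setdefault(group.name, []).append(editor)

    def is_legitimated_by(self, group: Group) -> bool:
        return len(self.approvals.get(group.name, [])) >= group.approval_threshold

    def metadata(self, group: Group) -> dict:
        """What travels with the video once it goes forward."""
        return {"group": group.name,
                "criteria": group.criteria,
                "signed_off_by": self.approvals.get(group.name, [])}

comedy = Group("late-night-editors", criteria=["funny"], approval_threshold=2)
clip = Contribution(author="alfred", video_id="v123")
clip.sign_off(comedy, "editor_a")   # one approval isn't enough yet
clip.sign_off(comedy, "editor_b")   # now the clip can go forward, with metadata
```

The key design choice is that the criteria and the sign-offs are attached to the contribution itself, so anyone downstream can see which group vetted it and by what standard.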
Ch 3 Conclusion
So, I’ve argued that larger systems are biased towards funnels and tubes, and away from spaces. In particular:
- Business practices are biased towards funnels and tubes. To fix them, a community of spacemakers must form, with its own success metrics, design methods, and funding structures.
- Recommender systems and markets are biased towards funnels and tubes. To fix them, the user profiles they use must be rebuilt around sources of meaning, and audited externally. And the recommender systems and markets themselves need to be restructured as legitimation processes, to disintermediate people.
That’s what we need to do, to make large-scale systems safe for meaning.
We can do the same thing with a network of private groups. Imagine a process where, as information is forwarded from group to group, it's successively refined. Users inside a Telegram group could subscribe to one of two feeds: information vetted by other participants in the group, or both vetted and unvetted information. Forwarded messages could carry an audit trail of the groups and users that vetted them, and the criteria they used. The same flip can be done with AI, and ideas about paid curators or information markets can also be turned into legitimation processes, though I won't show that here, in the interest of time. The important thing about the modified processes I've just presented is that they can be aligned with values, where the original versions would be much harder to align.
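Here's a minimal sketch of that Telegram-style audit trail, assuming hypothetical group names and criteria. The point it illustrates is that forwarding appends provenance instead of erasing it, and a vetted-only feed becomes a simple filter.

```python
from dataclasses import dataclass, field

@dataclass
class Vetting:
    """One hop in a message's audit trail: who vetted it, and by what criteria."""
    group: str
    vetted_by: list[str]
    criteria: list[str]

@dataclass
class Message:
    text: str
    audit_trail: list[Vetting] = field(default_factory=list)

def forward(msg: Message, group: str, vetters: list[str],
            criteria: list[str]) -> Message:
    """Forwarding appends to the audit trail rather than discarding provenance."""
    msg.audit_trail.append(Vetting(group, vetters, criteria))
    return msg

def vetted_feed(messages: list[Message]) -> list[Message]:
    """The 'vetted-only' feed a group member could subscribe to."""
    return [m for m in messages if m.audit_trail]

msg = Message("New dataset contradicts the standard model of X.")
forward(msg, "physics-readers", vetters=["dana"], criteria=["source checked"])
inbox = [msg, Message("unvetted rumor")]
# vetted_feed(inbox) keeps only the message with an audit trail
```

A reader could then inspect `audit_trail` on any forwarded message to see which groups stood behind it, and by which criteria.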
And they have another advantage. Legitimation processes can potentially include much more of a society.
With recommender systems and information markets, most contributions, most of the time, go nowhere. Someone is lucky if the algorithm gives them 15 minutes of fame, before they’re plunged back below the attentional waterline.
It’s like one of those movies where the rich influencers have a floating city in the sky, and the population below goes unmonitored and unsupported.
Legitimation processes can solve this. The easier parts of a legitimation process can come earlier; the harder parts, later. As you proceed through the process, you can gain relationships and skills for later stages.
Your thesis advisor can help you find collaborators, and become worthy of their time.
Potentially, people in a legitimation process feel an ongoing responsibility to their contacts, to get them a bit closer to contributing.
At the final stage are the excellent contributions, but at earlier stages, less excellent work gets the attention it needs to improve.
This is inclusive and supportive. It’s also efficient: it leads to many more contributions, in the end.
To see what I mean, imagine I interview some scientists about their sources of meaning in their work, and come up with this value: they say one thing that's meaningful is “tiptoeing to the edge of knowledge and then going a little past it”.
I ask them what they’re attending to, when they're doing the tiptoeing.
- I find they’re attending to intuitions they have, intuitions that are hard to model, explain, or test with existing methods.
- They're attending to patterns in the data the current models don't account for.
- They're attending to their field as a timeline, standing on the shoulders of giants who asked questions at the edge, and going a bit further.
This all feels meaningful to them.
Next, I ask about hard steps.
- The scientists say they need to understand how various fields have tackled a question, and how each field evolved through time.
- They need to curate some examples that illustrate limitations of the current models and methods.
- They need rich data where they can find and test new models.
- Once they have a hypothesis, they need rigorous reviewers who will be open to a potential advance.
So with this value, and these hard steps in hand, a door opens. Can we align the legitimation process of science with this source of meaning that scientists have?
If I were going to take this on as a project, I’d try a few tricks.
- First, I’d ask if the process builds the relationships needed for this value.
- Next, I’d ask if this process surfaces the information required for this value.
- Finally, and most importantly, I’d ask if legitimizing a scientific contribution is difficult in the right way.
Looks like we need open-but-rigorous reviewers. Does the process help us find them?
For instance, would a thesis advisor or an anonymous set of reviewers help?
I think the answer here is… maybe? These relationships don’t emphasize this quality of openness to new ideas.
I can imagine an alternate structure that would. Think about how trial lawyers come in pairs—how prosecutors are paired with public defenders. What if thesis advisors or reviewers also came in pairs, like that? And one reviewer’s job was to be open, the other’s, to be rigorous.
One piece of information we need to be at the edge of knowledge is “Data which doesn’t fit the field’s paradigm”.
The scientific record can be poor on this. In the natural sciences, published data is often just the data that fits.
A more-value-aligned legitimation process would have venues for paradigm-busting data.
Legitimizing a scientific paper is hard. That’s not necessarily a bad thing. The difficulty of making a good contribution can be part of what motivates you, gets you to up your game.
So: we don’t want to make contributing easy. What we want, is for it to be hard in the right way.
In particular, we want contributing to be difficult in the same way it's always going to be difficult to make a good contribution. The effort spent to prepare a contribution—or to get it accepted or seen—should be the same effort it takes to make the contribution excellent, according to the values of the field.
Is this the case with scientific publishing? With this value?
What kind of skills do you need to find patterns that current models don't account for? To develop intuitions that break existing methods?
It’s not the same skills as those to publish a paper, at all!
This value is a creative one. To train people in it, the scientific record would need to be more creative, more exploratory, and probably more collaborative. We could imagine if science publishing was more like a breakdance circle—at least at early stages—with more reward for pointing out limitations of existing methods, and surprising counter-examples.
In fact, this gap in the scientific record might explain why, even though science is now distributed all over the world, Nobel Prize winners tend to be mentored by other Nobel Prize winners, and to start off in one of just a few labs where people seem to jam on big ideas. If that's really the cause, then creating more forums for jamming on big ideas in science, rather than only for making grounded, careful claims, could unlock a lot of new scientific successes, and a true decentralization and democratization of science.
The important thing is, I can see how to align a legitimation process with a value. We can inch science closer to being good for “Edge of Knowledge”. I can’t see how we could do that with incentives, with recommender systems, or with networks of independent private groups.
Now, the scientific process isn't perfectly aligned with the values of science and scientists. And by showing how to tweak scientific publishing, I can show how other kinds of legitimation processes might also be aligned with the sources of meaning that would make them go well, the sources of meaning that keep them on the rails.
Notice that “edge of knowledge” isn't just a source of meaning of individual scientists; it's one of the sources of meaning that keep science working. So if scientific publishing is poorly aligned with it, if it suppresses it, then it's also damaging science: people will have a harder time deriving meaning from being at the edge of knowledge, and science as a whole will do worse.
So, with this source of meaning and these hard steps in hand, I can ask: how does the legitimation process of science line up with them? Sure, there are places it lines up. The background reading required to write a good paper encourages people to understand the edifice of knowledge they're building upon, which is one of the hard steps, one of the information requirements here. And the search for collaborators may help, but only to the extent that your collaborators are the open-minded-but-rigorous type. But there are also several places where this value, and the related hard steps, diverge from what scientific publishing supports.
By structuring the legitimation process cleverly, you can make those hard steps easier. A perfect system will set the core values of science, and of scientists, free.
So… this talk is a draft, and chapter 4 isn’t ready. I did a rough cut of it, but Andy didn’t like it, so I’m not going to present it tonight.
But I’ll tell you briefly what’s in it.
One problem that happens as designs scale up is that people stop being able to navigate the diversity of contributions themselves.
There are three common solutions, when this happens:
- Recommender systems
- Paid curators
- And networks of private groups, which share contributions among them.
So this chapter is about how each of these models is difficult to align with values. The problem is that recommender systems and paid curators keep contributors and consumers isolated. They create a barrier with individual consumers on one side and individual contributors on the other—and this is not a good space. Contributors are not just alone but they are also unsupported in making their contributions.
Networks of private groups have another problem—it’s hard to see how to align them with anything.
In chapter 4, I introduce a fourth model that I've found easier to align with values and sources of meaning, which I call the “legitimation process” model.
A legitimation process is like a network of private groups where the groups agree to use common criteria and policies for sharing contributions.
An example of a legitimation process is scientific publishing, where journals and academic departments use common criteria for evaluating scientific work.
So, when chapter 4 is finished, I’ll give examples of how to think in terms of legitimation processes instead of recommenders, curation, and private groups.
I’ll show how TikTok, a recommender system, and Telegram, a network of private groups, can be redesigned as legitimation processes. In these redesigns, the criteria for success are more legible, and creators are more deeply supported in revising and improving their contributions.
I’ll show how to align legitimation processes with sources of meaning, and I’ll argue that much of the societal dysfunction in politics, media, and even in science can only be addressed by legitimation processes rather than by recommenders, curators, or private groups.
This last chapter will focus on the special problem of designing large-scale systems to support values and meaning.
As systems scale up, there are many challenges. Values cards and hard steps can help you monitor meaning, to make sure it’s not just transactions that are increasing, but that meaningfulness also holds strong.
But in this chapter, I’ll focus on one problem that comes with scale: an increase in the amount of information that needs to be processed to surface “the good stuff”. Many kinds of systems have this problem—national media, national politics, global communication.
People taking on such a challenge tend to take inspiration from certain places:
- Some, from machine learning—they imagine training recommender systems or collaborative filtering to find the gems and surface to each person what they need.
- Others take inspiration from markets—they hope clever incentive structures will reward those who contribute or curate the best stuff.
- A third group imagines a network of private chats, where contributors are first invited to a local one, then hopefully discovered and invited into other venues, their excellence gradually surfaced in larger groups.
Each of these models has big problems. As I’ll show, when used for surfacing excellence at large scales, they have undemocratic effects, and are impossible to align with values. They are hostile to meaning and togetherness.
In this chapter, I'll present a fourth way to think about large-scale, excellence-finding systems: the “legitimation process model”.
Instead of taking inspiration from AI, markets, or private chats, the legitimation process model takes inspiration from scientific publishing and democracy.
Legitimation processes are a fourth way to imagine surfacing excellence in large-scale systems.
Let me first show an example of a legitimation process. Then I’ll talk about advantages they have.
I'll take an example of each of these models and show how it could be rejiggered as a legitimation process, starting with recommenders.
Perhaps part of it is that people came to think they need to hustle first.
They think: “Once I get my goals finished and gather enough money, I’ll escape these funnels and tubes. I’ll find a space.”
But for now, they’ve got to focus on funnels and tubes. So spaces feel less important, in the short term.
This is actually just one way in which sources of meaning make things less rivalrous, in which they reduce the hustle. There are others; for those, see Ellie's talk, or just come join us and ask some questions. There's a lot of simulation-modeling and research work to be done to explore these hypotheses, verify them, and share them with the world. We'd love collaborators on this. But now it's time for me to move on.
- A shared language of meaning, and superstructures that let you find and negotiate resources based on things like values cards, would mean there's less rivalry. It would also show people that a meaningful life is much closer than they think. Maybe they don't need to push all their goals through all those funnels and tubes before getting to spaces, so the amount of hustle in society would decline.
So, articulacy about meaning. I think that articulacy about meaning has declined. It was never great: there's never been a society that made values cards, or that had the kind of Wikipedia of meaning I want to build. But there are two ways in which articulacy about meaning has declined in particular. One is that our sense of meaning has become much more personal, so there's less shared language around it. When our societies were more overtly religious and shared a religion, it was easier to discuss sources of meaning in a common language, and you see a more explicit crafting of spaces around sources of meaning that can be shared and communicated. In cultures which are still religious (Orthodox Jewish communities, Catholic communities, monasteries, Buddhist communities, Sufis, and so on) the mystical communities especially are built around personal development along a path. People try to find words for the meaning they're experiencing, and they try to build the types of community that are needed around it. This has crumbled in Western liberal society, because we've lost these shared languages of meaning.
Another factor is nihilism. In the West people often think of meaning as a thin, vague layer of purely-personal paint, atop the hard, shared facts of reality. Or they try to make meaning an eventual state, when there’s finally social justice, when we Occupy Mars, or when America is Great Again. That makes it harder to really believe in one’s sources of meaning, to put them in the center of life, and to pursue spaces in a straightforward way.
Legitimation processes have some advantages of incentive systems, and some advantages of private chats.
We want our social systems to be
universal and accountable. The systems should work the same for everyone, and everyone should be able to see how they work. We imagine markets like this. And we imagine recommenders to have this universality, but not the accountability.
We lose both, with a network of private groups. No one can see how it works, or make it work for everyone. The public can’t critique and improve it.
But private groups do have one big advantage. With markets and recommenders, the relationships are anonymous and transactional: buyer, seller, creator, curator, and so on.
Private groups can have lasting, ongoing relationships.
A legitimation process is
accountable, and also
personable like this! In a legitimation process, people level up in local groups and with lasting relationships, but the process connects the groups in a universal, accountable schema.
So legitimation processes can be universal, inspectable, personable, and broadly inclusive.
But that’s not why I like them.
I like them, because they can be meaningful.
Let’s compare with incentives. When you think in incentives you think people are externally motivated. If you want people to do something, you gotta incentivize them. You imagine a careerist scientist, following the money or the citations. You imagine scientists need a carrot leading them towards good science. In a perfect system, the carrot points in the direction of good science.
When you think in terms of legitimation processes, you can imagine scientists following their own sources of meaning. They want to contribute, but making a contribution is hard. In other words, the hard steps can stop them from doing it.
Starting there—with respect for scientists’ internal motivation—opens up a related possibility: the legitimation process of science can be values-aligned.
Legitimation processes also show why the internet’s been so disruptive. The rise of the internet replaced complex, value-laden legitimation processes with cartoonishly simple ones—often just upvotes or share counts.
The results are unsurprising: even world leaders now discuss what's going viral, rather than what made it through the levels of pre-approval which, in earlier times, would have helped them stick to the plot.
It’s the clear-cutting of legitimation processes that led to our current crises, with politics, media, fake news, etc.
Legitimation processes cannot be replaced with recommender systems, markets, or private groups. These can be used in some places, but not to surface excellence at large scales.
Only by carefully building new, values-based legitimation processes, can we get back to sanity, and return to a situation where science, media, and democracy legitimate the good stuff.
I guess that’s enough for one video.
Throughout this talk, I’ve presented values cards, hard steps, and spaces—what I call “a shared language of meaning”—as the solution to numerous problems.
This might seem implausible. How could one thing solve so many problems?
But it’s not implausible. In fact, it happens every few hundred years. A new literacy transforms society.
One factor that limits social organization is the information that people can pass around. And this changes over time.
In medieval society, few were articulate about goals and preferences. What mattered about someone was not whether they had goal X or Y, but whether they were a king, a peasant, a priest, a woman, or a slave.
The transition to modern society was a transition from articulacy about roles, to articulacy about goals.
Goals and preferences were themselves a breakthrough! When people became articulate about goals, they started to make contracts with each other, to transact, to form temporary coalitions and vote or fight together, to advance their goals.
Nowadays, people have a clear enough sense of their goals that they can use large-scale systems like Google Search, to-do lists, and project management tools to make them happen.
Similarly, the transition from modern to postmodern society was a transition from articulacy about goals, to articulacy about consumer tastes, subcultures, and so on.
The shift I’m proposing is similar, and we can expect it to have similarly radical effects.
This, I believe, is the great clock of history ticking along.
A new way of understanding ourselves spreads, creates a new articulacy, which then allows large-scale systems to be built based on that new vocabulary.
Each tick of this clock involves tremendous effort. People need to learn new ways of interacting, and new systems need to be built.
So they only happen when the previous systems and ways of talking are breaking down.
Well, guess what?
That's what's happening now. The literacies we have about goals and preferences are played out. Their problems are stressing the system. It's time for another tick of the big clock. And some of us have already developed this new articulacy.
I’ll end by telling you what it’s like for me to be there already, to live in this future. And how we hope to get you, and everyone else, into this future too.
So: I live in this future, and it’s good. It’s not mind-blowing. It’s not “enlightenment” or “spiritual awakening”. But it’s good.
I see people develop a shared language of meaning, which allows them
- to collaborate in new ways,
- to understand and honor their own emotions, and the emotions around them
- to see when other motives pull them away from sources of meaning—motives like fitting in, being successful, living up to obligations, etc. They become less attached to their goals, and to the expectations of others.
- to build and justify new kinds of projects, and
- to find agreement on topics where, beforehand, they didn't have the words.
- I've seen couples come to agreement about what's meaningful to them, and change cities, jobs, lifestyles.
- I've seen teams completely change their products.
- I've seen long-standing arguments evaporate, as people see the sources of meaning behind them and are suddenly inspired by one another.
- I've seen people gather sources of meaning from their parents, their children, their friends—and learn from someone they love something that neither could have communicated before. They start admiring people for the meaning they've found, and making space for it.
- They become much more discerning about meaning.
- They can spot funnels that aren’t going to be meaningful—even when they claim to be about “community”, “sharing”, “adventure”, or “love”. They stop using dating apps which are funnels rather than spaces, and drop out of web3 projects which aim to reinvent democracy and community but are stuck in funnel-based, incentive-based designs. Bullshit jobs no longer make the cut. Neither do corporate and organizational structures that don’t support meaningful work.
- They drop out of the attention economy (all the doomscrolling, polarization, and internet outrage) and the “consumption economy” (everything people use their discretionary time or money for, but which leaves them isolated and disempowered).
- They learn about hard steps. They redesign their own lives to make their sources of meaning easier, day-by-day. They make things meaningful on purpose.
- They do the same for their friends. Meaning becomes a part of life to be collectively understood and defended.
- Finally, they get interested in doing the same thing at larger scales. They break down complex products and services into funnels, tubes, and spaces, they design legitimation processes, and so on.
- They become interested in restoring the parts of society where spaces were most important, and were replaced by funnels. What we call democracy now is a system of massive ideological funnels for riling people up—at least nationally. But it used to be a system of spaces for deliberation, discussion, and debate. Something similar happened to science and to education. Space-makers see how to repair organizations for exploratory research (with its research labs and career grants) and deliberative democracy (with its town halls, citizens assemblies, and so on).
I see all this. And it’s enough to convince me that this is the way out. We need to spread this literacy and develop these new systems. How can we do this for everyone, in an orderly way?
Our plan is to fund an incubator, and to incubate four projects.
- First, a social movement. The idea is that, in the near future, any scientist can stand up and say “hey, my research lab used to be a space but this policy has turned it into a funnel”. And people will understand and look into it.
- Project 2 is a maker community. Something like Y Combinator, Organic, Zebras Unite, or On Deck, but for spacemakers.
- The third project, which we haven’t started yet, is a cross-industry working group. Kind of like the W3C, which makes standards for web browsers. Call it the M3C.
- Finally, while we want to work with the tech giants, we know we can’t count on them to drive the meaning economy forward. So we want to start our own tech giant.
To do this, values-articulacy must follow in the footsteps of feminism, climate justice, Effective Altruism—many other social movements that spread new vocabularies.
We can accelerate this by making values-based redesign “kits” that students at a university, scientists at a lab, or citizens in a town, can use to overthrow the funnel-based designs they are stuck inside. To get clear which sources of meaning are oppressed, what needs to change, and to pressure administrators to make those changes.
We can have local chapters and events.
And where the other social movements mostly spread gloom, ours will be immediately empowering for the people in it—because they can make their lives more meaningful and beautiful even before they change the system.
That’s project one, and Ellie and Alexander are leading it.
We need to do whatever we can to dramatically accelerate the education of spacemakers and the infrastructure that exists to support and connect them.
Sam, Ben, and I are in charge of this project. We run the School for Social Design, and this video has touched on most of our curriculum.
We teach people to break designs into funnels, tubes, and spaces. To use values cards as the design criteria for spaces, to detect demand for spaces, and to evaluate and maintain them. To design for hard steps. And to make legitimation processes.
Spacemakers design based on different information. They build different things. And they ask different questions as they prototype.
If you want to learn this, we have a free textbook online. And I’ll personally administer tests while you go through it, to make sure you can write values cards well, interview people about their sources of meaning, identify hard steps, design legitimation processes, and so on.
We want Apple, Google, Mozilla, and so on to join us in drafting the protocols and APIs of a meaning-driven world. The interoperable technologies behind meaning-aligned ad networks, social networks, app stores, and machine learning.
We’re excited to use values cards to
align AI (especially recommender systems).
There’s also economics research to be done. The structure of currency and payments could even change on a deep level, to take people's sources of meaning into account. Imagine if every dollar spent counted more if it was aligned with your sources of meaning, and less if it wasn't. Or if everyone subscribed to a “meaning insurance” provider, with the mandate to make things as meaningful as possible.
We’re looking for the right team for this!
We want to make something like “Amazon Prime for Meaning”. A service that takes your money and makes things meaningful for you and for the people you love.
We’ll start with a direct consumer service, and may eventually branch out to make our own meaning-aligned feeds, ad-networks, or operating systems.
So these are the things we're staffing up now.
The best way to get involved is at meaningsociety.org. Whether you want to run the next tech giant or just learn to make values cards, the links are there when you sign up.
The more talented people we have there, the sooner the great societal clock will tock.
As I mentioned at the start, I’m collecting feedback on this talk. I'd also like help making sure people see it when it's released. Would you be willing to host a launch event? Maybe with your team, your startup, or your friends? Get in touch. I'd be very grateful for everyone's help.
And here’s a third factor: spaces were considered more important a few decades ago, because people saw them as holding society together.
To make society work, you need some kind of social glue. Spaces and meaning are one such glue—often called the “social fabric”, the “civil society”, the “third sector”, “social capital”.
They’re not the only glue though. For instance, when a society has few spaces, it often has more
ideology. When there are lots of spaces, people cooperate because of shared meaning and practices. When there’s lots of ideology, people cooperate because of social pressure, and because of common ideological enemies. Ideological leaders build massive funnels to rile up their base against the other side.
Incentives structures can also be a form of social glue. If everyone’s trying to afford real estate, or to be named employee-of-the-month, that can also cause a limited form of cooperation.
In general, we have a strong need to cooperate, but that need has been directed more towards ideology and incentives, and less towards spaces and shared meaning.
I can now return to the central problem of the talk. The long-term trends, away from togetherness and meaning. Or, in other words, why are there fewer spaces today than we want, especially compared to the progress we’ve made in funnels and tubes?
These shifts are also always underway, smaller ones and bigger ones. Articulacies are always spreading, and some of them create new forms of social cohesion while others don't. Currently, articulacy about oppressions and microaggressions is spreading, and articulacy about feelings is spreading. Each of these vocabularies, as it becomes adopted by a population, shows its potential to form new social structures.
It would also be socially transformative.
A society with many more spaces would change things.
- It would have
more social capital. A stronger “social fabric”, “civil society”, “third sector”, whatever you want to call it. The amount of trust or social capital in a society has a ton of benefits: it correlates with how well institutions function, what some people call “state capacity”. It correlates with how people band together in emergencies, how resilient they are. Increasing social capital is great.
- I think it’d also
decrease the use of substitute social glues. In particular, people would use meaning to connect instead of ideology; that's another way of saying it’d decrease political polarization. That decreases the likelihood of war, civil unrest, and so on, and just makes things better in many ways.
It’d be the start of a “meaning economy”.
Imagine there was a membership organization for space makers and for others who want to bring about the meaning economy.
Such an organization could do so many fun things:
- It could send members on little
missions—they could collect sources of meaning from the people around them, making their lives meaningful on purpose, and so on.
- It could host
conferences and other local events.
Thinking a little bigger,
For instance, so many institutions that should really be spaces have become funnels.
This includes institutions of democracy, science, and education. These have moved away from the sources of meaning that made them work.
- We think
The membership org—the one working towards a meaning-aligned future—this could be really special. We envision it as having aspects of a city-by-city brand, like SoulCycle or Soho House, aspects of a local economy, and aspects of a social movement, like Effective Altruism or Extinction Rebellion. So we’re looking for founders who’ve grown membership-based communities, and we’re looking for movement builders.
Students. I'd like to thank my students at Facebook, Khan Academy, Even.com, Google, Mozilla, and many others, who came to me over the years with their toughest social design problems. They had to deal with many false leads as I slowly figured out which design frames change the game.
Team. I'd like to thank my current team—including Sam, Ben, and Ellie—and previous team members Jacob, Nathan, and Anne, who contributed to this work.
Peers. I'd like to thank the philosophers, psychologists, and sociologists whose work this builds on, most notably Charles Taylor, Amartya Sen, Ruth Chang, David Velleman, and James Gibson.
And colleagues like Jonathan Stray, for many good discussions.
Funding. From Stripe, etc.
- SfSD: Get a free session, check it out.
- Harmony Toolbox