So that's Wise AI.
But Wise AI is only part of the problem of getting AI and humanity working well together.
In this chapter I'll give our thoughts on three other parts of the problem.
So: imagine we had some Wise AIs. We had them in the kind of existing ecosystem of AI labs, hedge funds military the military industrial complex and so on. Probably some individuals would love to use the Wise AIs help them live more meaningful lives, be meeting coordinators and so on. And that’d be great.
But there'll be lots of people who didn't want to use the Wise AIs.
Here's some examples.
- Imagine you run a hedge fund and your job is to make the most money for your investors. Now you could get away as AI but the wise AI will refuse to do a lot of the things will recognize its moral situation as running a hedge fund and will refuse to do a lot of the things that would actually make the most money. So if the Marketplace offers some Wise AIs and some not-so-wise-AIs, that just do as they're told your hedge fund you'd want to get the non-wise AIs.
- Same if you work for the Pentagon you'd want to order or even a country that's or even the military apparatus or country that's in a more precarious situation. You'd want to get the most ruthless you might want the most to be defended by the most ruthless, sociopathic AI you could find.
- Let's say you're the campaign manager for a politician in a country where there's a lot of ideological warfare well, you want to win that game right. Your way of operating so far has been to get the voters angry, scared and then turn that into the idea that the people have spoken and chosen your candidate. If we think about a wise AI at the center of it such a campaign—well it wouldn't want to play along with those particular dynamics. It would want to use its reasons and values, the values of the population it served kind of articulate some course of action, which seemed to take everyone's values into account. And this wouldn't work in several ways that campaign manager probably wouldn't hire this wise AI because (a) it doesn't obviously win the battle that it's that he's in, and (b) all the people might not even understand what the wise AI is doing. They're so used to being scared and outraged. That's their frame of mind. They might not be prepared to follow the reasoning of the wise AI to articulate their values to look at them and so on.
- And because of all of that there's going to be a demand for non YZ eyes. The AI labs will continue their race to make those as well. The race towards super intelligence instead of super wisdom. Super wisdom might be at best, a kind of a niche product or group of people that are already values driven and who want to work with wise AIs.
So, clearly wise AI is not enough and I think the story I've just told says that there are at least three more things that need to be done
- needs to be a way to transition people from thinking in terms of fear and outrage and and their own goals which might be goals. vehicles produced by the system to thinking in terms of values and sources of meaning. This needs this is necessary for for two of the reasons in the story that I just told.
- First of all, it grows the market for wise AI among consumers and for all this coordination that I talked about above and
- second of all, it makes it so that especially political structures can switch to the wise AI kind of politics instead of the outrage and fear kind of politics, and the people will recognize this as actually being more in accord with their interests rather than less. So that situation with the campaign manager and otherwise I will go down differently because people will see their values reflected in the actions of OSI and they will feel like this is actually more of a win for them than the continuation of the ideological battles.
- Then there's the issue of the race. In the AI labs, can we switch the AI labs from racing towards super intelligence to racing towards super wisdom? How do we change these race dynamics? That's the middle layer of full stack alignment.
- And then the top layer concerns those hedge funds that just want to make money those geopolitical actors that just one sociopathic defender a eyes or attacker eyes the campaign managers who want to keep people outraged or scared? How do we lessen or eliminate those dynamics?
So that's the base layer of full stack alignment is a kind of a spreading values awareness among people.
So this will be the structure of this chapter. I'll go through these three layers of full stack alignment starting with preparing people. Then changing the race dynamics labs and finally, on to the financial geopolitical and ideological landscapes.
Before we do this tour. There are some something I want to mention about social change in general because what we're talking about here in full stack alignment is a massive social change. An upgrade of the tire social stack. According to new criteria the thing I want to say is that this is not as crazy utopian, unrealistic as it sounds because this kind of upgrade of the social stack. This happens periodic periodically it happens in big ways and happens in small ways. For instance one of these upgrades was the transition from medieval society. Which was where the social stack was based around the monarchy kings nobleman peasants and so on. And the church clergy etc. This was how society was organized, roughly speaking around social roles and then it switched to being organized around democracies and markets which at the time were more about goals and preferences collective goals and individual goals, collective goals and household goals and preferences. This was another full stack change of society. So that makes the idea that we need full stack alignment shift from goals and preferences to values a little more plausible because we've already seen a shift like that. That's one of the big shifts but I think there's also in history, many smaller shifts. I think the shift I'm talking about is kind of of this order. And these shifts I believe. have certain kinds of ingredients they involve a shift from in house like one of those ingredients. We'll go through several of them when is it things are kind of at a breaking point. I think that's true. another and another and another I should go look at my imaginaries minions essay. But I think actually the most difficult ingredient in making these shifts is that it involves a mindset shift about what is legitimate governance What are legitimate institutions on the part of the people the people have to go from thinking that one kind of thing makes sense to thinking that another kind of thing makes sense and this is actually a very deep change in people because these different kinds of social orders make sense. Because people are thinking of themselves their own identity differently. Like when in the monarchy and then you know, in the church and so on. The person is really deeply aware of their place in society. There's this kind of idea. sociologists call it the or intellectual historians call it the great Chain of Being. It's this idea that everyone has their place in this vast social order designed by God. That's one way of thinking about yourself as something that has its place in a vast social order designed by God and thinking of yourself as having personal or household goals that are expressed through voting and in the market this is very different way to think of yourself. This is a really hard kind of transition to pull off. And that's what we need to pull off in the popular mindset in order to make this full stack transition. That's what we want to focus on when he's talking about the people the bottom layer here. So my colleague le Hain will talk briefly a bit here about how you can make one of these shifts happen on purpose then
Nothing to Be Done
One thing you might find implausible, is, how short this video is, and how big the social problems are. How could a few new ideas (values cards, hard steps, spaces…) solve so many social problems?
That might seem weird. But something like this happens every few hundred years. A new literacy transforms society.
One factor that limits social organization is the information people can pass around. And this changes over time.
In medieval society, few were articulate about goals and preferences. What mattered about someone was not whether they had goal X or Y, but whether they were a king, a peasant, a priest, a woman, or a slave.
The transition to modern society was a transition from articulacy about roles, to articulacy about goals.
Goals and preferences were themselves a breakthrough! When people became articulate about goals, they started to make contracts with each other, to transact, to form temporary coalitions and vote or fight together, to advance their goals.
Nowadays, people have a clear enough sense of their goals that they can use large scale systems like Google search, and to do lists, and project management to make them happen.
Similarly, the transition from modern to postmodern society was a transition from articulacy about goals, to articulacy about consumer tastes, subcultures, and so on.
Here’s some of these transitions. On the right, you see the new institutions that emerged, and in the middle, the new kind of information people became aware of inside themselves, that they were able to talk about and build relationships around. This made them see how society should be changed, and led to these new institutions.
In each case, a new way of understanding ourselves spreads, and creates a new vocabulary. Then large-scale systems get built on that new vocabulary.
It’s the great clock of history, ticking along.
Each tick of this clock involves tremendous effort. People need to learn new ways of interacting, and new systems need to be built.
This only happens when the previous systems and ways of talking are breaking down.
Well, guess what?
That's what's happening now. The vocabularies we have—about goals and preferences—have gotten us a long way, but they’re straining at the seams. It's time for the big clock to strike a new hour.
When this happens, potential new vocabularies get proposed.
Right now, one proposal is feelings-articulacy. Another is oppressions-articulacy. Neither of those is working out. They don’t show us how to rebuild big things.
In this talk, I’ve tried to show how—with meaning articulacy—we can rebuild things, big and small.
That means we’re at a pivotal moment. Like, right before the American and French revolutions, when new ideas about private households, goals, markets, and voting were spreading through the population.
Bottom layer: hearts and minds
I’ve found that learning all of this—seeing the universe of meaning, making values cards from your emotions—it changes a person.
It changed me! Not long ago, I was focused on my goals and fears—including fears of being unloved, or worthless, or bad. Developing this language of meaning made it clear when these other things pulled me away from doing what’s meaningful to me. I started to put meaning at the center.
I’ve watched many people go through this, at the School for Social Design.
- They have a new way to honor their emotions, and the emotions of those around them.
- Teams completely change their products. They put their own sense of meaning at the heart of what they build—in place of functionality or efficacy or some numerical model of impact. I saw Cathrine, Nick, Dara and Greg redesign their products around the kinds of socializing they find meaningful.
- Finally, they advocate for change in their organizations and institutions. I saw Adam, Rhys, and Josh change policies in schools, research labs, and companies—to protect what’s meaningful.
I saw Adam, Ryan, and Ari gather sources of meaning from parents, children, and friends. They start to admire these people for sources of meaning they didn't see before, and they also make new kinds of space for them.
I’ve even seen long-standing arguments evaporate, when people see the sources of meaning behind them. They’re inspired instead of angry!
People also come to agreement about what's meaningful, and change their lives. I saw Wiley and his wife change cities, jobs, and lifestyles.
People’s impressions of their customers changes. They see them beautiful and value-driven. They stop trying to smooth out flows, incentivize, or entertain.
I get to watch all this happen when people go through our course.
But I think the most important change is that, when you put meaning at the center of your own life, it protects you from many of the mindset problems I mentioned in chapter 1.
For one, you hustle a bit less. When you prioritize meaning, it means exploration and curiosity go first. You’re also less drawn in by the attention economy
—the doomscrolling, polarization, and internet outrage. You’re less drawn in by the “consumption economy” or by ideological battles. You return to the core social glue of meaning and shared space.
Middle layer: labs
We can talk about the labs Okay, let's move up a level in full stack alignment to talk about lining the labs to race towards super intelligence. Kind of have two strategies here. One of them is about the people who work at the lab work at the labs and one is about the way they test their models. So starting with the people that work at the labs same kind of thing that Ellie just talked about can be really good for people who work at AI labs. They can change their own identity and see that a values based direction is best that super wisdom is best. Other strategy is to make an evaluation suite for artificial super wisdom, a way of testing a model and seeing how wise it is. Writing that up is a paper published by all the labs and just running comparisons How was his Bard How was ChatGPT4, How was his llama etc. We want to make it so everybody can do this and because why is AI is relevant to several research areas that already have headcount at the labs. It's relevant to capabilities it's relevant to safety, it's relevant to interpretability. We think that we can create a situation where a significant number of people at the labs will want that was the number to go up. Want to make lab they want to make models that are wiser and wiser. So that's what we want to aim to do with the labs.
Top: Wise Collectives
Finally, let's talk about the highest level of the stack those financial geopolitical and ideological actors. Here's where I plan I think is the weakest. I mean, I think it's a better plan than anybody else has. But it still doesn't strike me like it's probably going to work. It's probably not going to work but it's it's definitely worth a try the reasons probably not gonna work because because our plan involves a slow replacement of existing financial and geopolitical actors. With a new kind of actor. And that slow replacement I think can really happen. But it might be too slow to change the game in time to keep people from deploying sociopathic dangerous, non wise AIS at large scales and in very consequential situations. So this plan might be too slow to save us, at least to save us from some of the damage there'll be caused by those guys. But I'll still give my plan you can think of any kind of organization or government company as the thing that's there to take care of certain people. Companies do some taking care of their employees, their shareholders and their users or customers. headphone is mostly just take care of the shareholders and a bit the employees nations are there to take care of the citizens. At least supposedly now, in the section on wise AI and these these organizations do that. By principally by holding assets and running a decision making process so. So hedge funds and companies run a kind of hierarchical decision process to take care of those three groups. Hedge funds run into hierarchical decision process democracies around a different kind of decision process that involves things like voting deliberation in court systems. Are these decision processes result in some kind of collective choice that hopefully takes care of the people the relevant people so why is AI that does social coordination is also a thing that's there to take care of people and that runs the decision process. I mentioned in the wise AI section actually a few different decision processes that you can think of a wise AI doing social coordination is running. One of them is referred to as democratic fine tuning where the wise AI tries to pull the best values from a population and make decisions based on that. That's kind of similar to voting and deliberation. Another decision process I mentioned is so why is AI makes its own decisions with human explainable reasons and values. It's kind of similar to court systems and to business decisions. So the big thing that's missing if all these organizations are made of these three things, a community of care, a decision process and decision processes and assets. The big thing that's missing is assets are what I want to claim in the rest of this section is that well as AI is can end up with a lot of assets these communities around wise AI is gonna end up with a lot of assets. Let's call the community a wise collective. Could a wise collective ever end up with assets comparable in size and power to hedge fund to government today geopolitical actor I think the answer is yes. Why it's collective could end up with asset pools and communities who care at the sizes and especially if such wise is powerful wise collectives. Could make treaties with one another to work together and defend against these other more ruthless fighting financial and geopolitical actors. I think that's how we could start. We could address the top layer of the stack. So how would such a wise collective end up with a large community of care and a large number of assets I think that's in a way the main question. And I'd like to tell a few stories about that. So first of all assets are held by people. So if you can get people to join a wise collective a lot of them that's part of the story. And I do think people would join a wise collective for the following reasons. The other thing I want to say is that I think a wise collective could out compete, or especially networkable as collectives, with treaties between them could compete traditional organizations and nations on several levels, fun IP and media need a structure so that this translated to even more assets inside the wise collective. I think you do this by giving the wise collective temporary stewardship of the assets of people who join it to the ideas if you join a wise collective you give or lend your assets to the collective. If you decide to leave, you can get them back again so that's my theory about the top level the stack, like I said, it's a little more sci fi than the rest might not happen fast enough. But it's the best proposal and it's a little hand wavy, but it's the best proposal I've heard for how to deal with these financial geopolitical actors.
- Against game dynamics — the point system unit is not fixed. one day it’s money, the next day money is useless and people care about something else
- But people, land, and inventiveness — these things are real
- So, under what circumstances would a values-aligned actor win people, land, and inventiveness?
- A new kind of govt actor — that opts out of ideological and security battles, but wins…
- A new kind of financial actor — that returns lower profits but wins people, land, & inventiveness
Conclusion / how to help
So that's full stack alignment. It's actually got four levels values aligned people. Why is AI are strategies to unite the labs and this idea of why is collectives of a network of wise collectives. What makes all of this work is what I covered in chapter one. A new way of talking about flourishing and understanding flourishing and enduring understanding ourselves not as having goals or preferences. Or feelings but as having sources of meaning. This is a massive social change and it will be done by millions and billions of people but we have started a small nonprofit to coordinate this work you can help us by donating joining our teams working with us in various ways. We also have researchers, policy people and so on that are homed in other places in academia and the labs so you can work with us from wherever you already are. Thanks for listening