Quantitative Realism as a Game

From Realpolitik.io

The law of motion is deterministic; it doesn't depend on human decision making or agency. Its consequences follow from the network topology of agents interacting with each other. When two countries engage in protracted conflict with each other, they tend to reduce each other's strength. That is simply a fact about the world that the law of motion describes.

In contrast, the formation of a foreign policy entails volition, calculation, and judgment. It also has a moral aspect because it involves the infliction of negative consequences upon others. So what does it mean to propound a theory about foreign policy formation? Is it a statement about what real world actors actually do (or did, or will do), or is it a statement about what they should do? In other words, is it descriptive or prescriptive?

The answer is: it's neither. It's not descriptive because we do not expect whatever algorithm we come up with to represent how countries actually behave. Life is too complex for that. On the other hand, it would be a mistake to treat the theory as prescriptive because doing so would elevate its dark assumptions about human preferences into moral assertions, turning the theory into a self-fulfilling prophecy.

If our theory of foreign policy formation is neither descriptive nor prescriptive, then what is it?

Gamelike Nature

The most straightforward approach is to think of quantitative realism as a game, like chess. Chess establishes rules: what pieces are used, how they are allowed to move, the consequences of moves, and how the game ends. But the decision about which moves to make is up to each player, and over the centuries elaborate theories have been developed based on the exploration of various lines of moves and countermoves. This theory of chess does not describe what moves players do (or will) make, and it does not tell players what moves they should make. It is neither descriptive nor prescriptive. Instead, it tries to establish general principles through an exploration of the space of all possible moves, seeking to understand the consequences of a player's decisions in light of the opposing player's interests and scope of action.

Quantitative realism, as a game, has the following characteristics:

  1. Multi-player
  2. Simultaneous move (ideally)
  3. Perfect information
  4. Non-zero sum
  5. With a known evaluation function
  6. Played indefinitely (there is no terminal condition; however, it is not an infinitely repeated game like the iterated prisoner's dilemma, because the game differs at each time step)

The "rules of the game" in quantitative realism are: (1) the law of motion, (2) social inertia, and (3) the assumption of ongoing interaction. To work out strategies of play, we lean on the assumptions about how power is pursued and about the tendency for reciprocal relationships to form. We then want to map the space of moves and countermoves to see whether general patterns emerge within the game. We are looking for a function that takes a power structure as input and returns a tactic vector for a given agent, indicating their preferred move. We'll call this function UpdateTactic[s, T, i], where s and T make up the power structure and i is the index of the focal agent. This function is what animates agent behavior.
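As an interface, UpdateTactic can be sketched in code. The sketch below is purely illustrative: it assumes a finite set of candidate tactic vectors and a crude one-step scoring rule standing in for the law of motion. The names `evaluate` and `update_tactic` and the net-transfer projection are assumptions of this sketch, not part of the theory.

```python
import numpy as np

def evaluate(s, T, i):
    # Crude stand-in for the law of motion: project each agent's
    # strength as current strength plus support received (column
    # sum of T) minus support expended (row sum), then score
    # agent i by its share of the projected total.
    projected = s + T.sum(axis=0) - T.sum(axis=1)
    return projected[i] / projected.sum()

def update_tactic(s, T, i, candidates):
    # Greedy one-step search: try each candidate tactic vector as
    # row i of the tactic matrix and keep the one that scores best
    # for agent i. A real UpdateTactic would search far deeper.
    best_tactic, best_score = None, float("-inf")
    for tactic in candidates:
        T_trial = T.copy()
        T_trial[i] = tactic
        score = evaluate(s, T_trial, i)
        if score > best_score:
            best_tactic, best_score = tactic, score
    return best_tactic
```

Under this toy scoring rule, tactics that expend less strength happen to score higher for the focal agent; that is an artifact of the placeholder evaluation, not a claim of the theory.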

We assume for the sake of simplicity that when determining what to do, agents look only at the current power structure. Because they are embedded in a dynamic process that unfolds over time, we could just as well generalize move search to let agents consider not just the present power structure but its history as well. In that case, agents could bear in mind their particular histories with other agents, the 'personalities' of other agents in terms of how they tend to respond to situations, and the pattern of interaction among third parties (see Axelrod 1984). This would make for rich and nuanced reasoning about which foreign policies to adopt, and it is plainly how real-world agents go about their calculations. A full understanding of quantitative realism will eventually require exploring this idea, taking deep interaction history into account when determining player action in the present.

Game Tree Complexity

Even with this simplification, there is an intimidating level of complexity afoot. When agents strategize about what to do next, they have to consider the potential moves of their competitors, anticipate what they might do, and in turn think about what those agents think they might do. As in chess, this implies a chain of reasoning composed of many "if I do this, she'll do that" links, in which numerous game states have to be evaluated mentally before anyone makes their next move. This reasoning process produces a game tree, which starts from an origin state (the current power structure) and branches into the agents' possible moves and countermoves. The resulting tree grows at an exponential rate. An agent who fails to carry out this analysis, or who does so with insufficient rigor, will be at a disadvantage compared with competitors who plan more thoroughly and logically.

[Figure: sketch of a game tree of moves and countermoves]

Unlike chess, which has just two players, a power structure can have any number of agents, and the number of game states to be examined increases at a truly astonishing rate. In chess, each player has around 30 possible moves in a typical middlegame position. Looking four moves ahead in chess, there are about a million board positions to contend with. In contrast, in a power structure, if agents had 30 possible moves at their disposal, and they wanted to look just three moves ahead against a dozen competitors, they would have to consider over [math]\displaystyle{ 10^{3245} }[/math] individual states. This game tree complexity is beyond astronomical — literally, as there are only [math]\displaystyle{ 10^{80} }[/math] atoms in the known universe.
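The chess arithmetic above can be checked directly. The snippet below only illustrates how a fixed branching factor compounds; the way joint moves among a dozen competitors are enumerated (and hence the exact exponent) is an assumption for illustration, not the counting behind the figure above.

```python
def leaf_states(branching, plies):
    # Number of leaf states in a game tree with a fixed
    # branching factor after the given number of plies.
    return branching ** plies

# Chess: ~30 legal moves per position, looking four plies ahead.
chess_positions = leaf_states(30, 4)    # 810,000: about a million

# Power structure: if each of 12 competitors can answer with any
# of 30 moves, a single round of responses alone multiplies the
# branching factor to 30**12, roughly 5.3e17 joint states.
joint_responses = leaf_states(30, 12)
```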

Also unlike chess, in quantitative realism there's no notion of when the game might end, because there's no terminal condition that defines when someone has won a power struggle. Winning just means surviving to play the next round; there is no last round. Agents just continue to interact indefinitely in an unrelenting competition for survival. This indefinite interaction eliminates the possibility of working backwards from some specific goal. For example, in chess, a player can reason backwards from a checkmate, trying to figure out what combination of moves will entrap her opponent. That strategy is not available here.

And yet, despite the apparent impossibility, humans seem to solve this problem every day, in numerous social contexts, without melting their CPUs. Presumably we have developed, by virtue of biological evolution, an instinct for how to navigate power struggles through the intelligent use of constructive and destructive action. It stands to reason that there must be a way to approximate this natural sense computationally.

Despite these and other difficulties that we'll address as we proceed, the approach we will take is conceptually similar to that of a chess engine. Chess engines are based on a board evaluation function, which allows game states to be compared and ranked, and a search algorithm, which plays out hypothetical games to find the most desirable path forward for a given player.
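In skeleton form, the search half of such an engine is the standard minimax procedure, shown below in its two-player negamax form. The helper callables are hypothetical placeholders; extending this recursion to many agents moving simultaneously, with no terminal state to recurse toward, is precisely the difficulty described above.

```python
def negamax(state, depth, evaluate, moves, apply_move):
    # Two-player minimax in negamax form: the value of a state is
    # the best achievable result, assuming the opponent then plays
    # to do the same. `evaluate` scores a state for the side to move.
    if depth == 0 or not moves(state):
        return evaluate(state)
    return max(-negamax(apply_move(state, m), depth - 1,
                        evaluate, moves, apply_move)
               for m in moves(state))
```

For instance, in a toy game whose state is a single number, a depth-0 call simply returns the evaluation, while deeper calls alternate perspective between the two players at each ply.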

In the sections that follow, we develop both of these components, building upon the axioms to see how simulated "games" can help us make generalizations about the trajectories of power struggles.
