Data Structures


Here we show how quantitative realism can be represented formally, using mathematical notation and the Wolfram Language.

Power Structure

A power structure is an object composed of a size vector s and a tactic matrix T. In the Wolfram Language, a power structure is represented using an Association object of the form:

<|"s"→ s, "T"→ T|>

The function PowerStructure[s,T] instantiates a power structure object that takes the form of the expression above.
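A minimal definition consistent with this form would be:

PowerStructure[s_, T_] := <|"s" → s, "T" → T|>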

The variable n represents the number of agents in the power structure. It is often convenient to refer to agents by an index number i that indicates the position of that agent's data in each data structure. (In the Wolfram Language, list indexes start at 1, not 0.)

Size Vector

The variable s, for size, is a vector listing the sizes of all agents in a power structure. An agent's size is a nonnegative number representing the amount of power that agent has: the larger the number, the greater the agent's power. Zero power means that the agent has no influence and is effectively dead. For example, in a power structure with three agents, s = {.8, 1, .5} means that agent #3 has a size of 0.5.

By convention, size vectors are normalized at the beginning of a simulation such that the largest agent has a size of 1, using:

NormalizeS[s_] := s/Max[s]

Normalization ensures that size vectors with the same proportions but at different scales give rise to the same behavior in the course of a simulation. This is necessary due to Axiom 4, which recognizes that agents want their power to increase in an absolute sense: without normalization, two power structures with the size vectors {.8, 1, .5} and {1.6, 2, 1} would not, ceteris paribus, necessarily evolve the same way.
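We can check that proportionally identical vectors normalize to the same result:

NormalizeS[{.8, 1, .5}]    (* {0.8, 1, 0.5} *)
NormalizeS[{1.6, 2, 1}]    (* {0.8, 1, 1/2}, the same sizes *)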

A random size vector can be generated using the function:

RandomS[n_] := NormalizeS[RandomReal[{0, 1}, n]]
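For example:

RandomS[3]    (* e.g. {0.41, 1., 0.67}; the largest agent always has size 1 *)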

Random power structures are useful points of departure for exploring the consequences of quantitative realism.

Tactic Matrix

A tactic matrix T is composed of n tactic vectors. A tactic vector [math]\displaystyle{ \tau }[/math] is expressed as a list of real numbers indicating how an agent's power is to be allocated to the other agents and to itself. The element [math]\displaystyle{ \tau_{j} }[/math] represents the amount of power that the vector's owner allocates to agent j. The slot at the agent's own index number indicates the amount of power that the agent is self-allocating, in other words, keeping to itself rather than transferring to other agents. Constructive action in a tactic vector is represented by a positive number; destructive action by a negative number.

Conventions

There are three conventions related to tactic vectors.

Convention 1

Every element of a tactic vector must lie between -1 and 1, and an agent cannot use more (or less) power than it has. Therefore, the sum of the absolute values of the elements must equal 1. Expressing this convention mathematically, we get:

[math]\displaystyle{ \sum_{j=1}^{n} \left|\tau_{j}\right| = 1, \hspace{1cm} \tau_{j} \in [-1,1] }[/math]

In other words, a tactic vector shows how an agent's power is to be distributed, by percentage. For example, if agent #2's tactic vector is {.1, .8, .04, .06}, it is allocating 4% of its power to agent #3 and 80% to itself. We can test a tactic vector for compliance with these constraints using:

LegalTacticQ[tactic_] := Total[Abs[tactic]] == 1
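For instance:

LegalTacticQ[{.1, .8, .04, .06}]    (* True *)
LegalTacticQ[{.5, .8, .04, .06}]    (* False: the absolute values total 1.4 *)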

A tactic vector can be legalized by normalizing it with:

NormalizeTactic[tactic_] := tactic / Total[Abs[tactic]]
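For example, a vector whose absolute values total 0.8 is rescaled so that they total 1:

NormalizeTactic[{.2, .4, -.2}]    (* {0.25, 0.5, -0.25} *)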

We can use this normalization function to generate random tactic vectors. A simple approach is to pick a random point on a sphere of dimension n-1 and then normalize it:

RandomTactic[n_] := NormalizeTactic[RandomPoint[Sphere[Table[0, n]]]]

Normalization essentially adjusts the length of the vector such that it falls on the surface of a cross-polytope of dimension n. We can see this by generating random tactics and plotting them. For example, randomly generated n=2 tactics form the outline of a two dimensional cross-polytope, or square:

[Image: Cross-polytope 2D.png]
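A plot along these lines can be produced with, for example:

ListPlot[Table[RandomTactic[2], 1000], AspectRatio → 1]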

If the tactics generated above were for agent #1, the x-axis would represent the agent's self-allocation and the y-axis would represent the amount of power allocated to the other agent. For n=3, legal tactics fall upon the surface of an octahedron:

[Image: Cross-polytope 3D.png]

Convention 2

There is a minimum percentage of power that each agent must allocate to itself. This convention helps simulations maintain a higher level of fidelity to the real world by limiting the amount of power that the agents can expend in any given time step. Without such a limit, they could conceivably expend all of their power at once, and the result would be erratic simulations bearing little resemblance to any realistic situation. We impose the limit using an additional parameter ρ, which establishes a minimum self-allocation percentage. For example, when ρ=0.9, agent #2's tactic vector might be {0, 0.9, 0.02, -0.05, 0.03}. The self-allocation percentage controls the tempo of a simulation: the more power that agents allocate to themselves, the less they allocate to others, and therefore the less change occurs from one time step to the next. Experience has shown that suitable values are typically in the range ρ ≥ 0.9.

Convention 3

Agents are assumed not to engage in self-harm and therefore their self-allocation is never negative. This assumption obviates the need to search for self-destructive tactics that an agent will likely never adopt.

Random Generation

It is often useful to randomly generate tactic vectors and matrices, both to initialize simulations and to explore an agent's possible moves.

Random Tactic Vectors

We can generate random tactic vectors that fulfill all of the above conventions using:

RandomTactic[n_, i_, ρ_] := With[{a = RandomReal[{ρ, 1}]}, Insert[(1 - a) RandomTactic[n - 1], a, i]]

This function uses our earlier definition of RandomTactic to build a tactic vector for a given agent i. For instance, to generate a random four-agent vector for agent #2, one would evaluate RandomTactic[4, 2, .9] to get, for example, {-0.0098, 0.9158, 0.0135, -0.0607}.

Note that ρ does not set a fixed amount, but is instead a floor below which the self-allocation percentage cannot fall.

Random Tactic Matrices

The tactic vectors of the agents are aggregated into a tactic matrix in which each tactic vector is a column vector. The tactic matrix encodes all information about how power is allocated within the power structure. Assuming the minimum self-allocation ρ is defined globally, we generate random tactic matrices using the function:

RandomT[n_] := Transpose[Table[RandomTactic[n, i, ρ], {i, n}]]

We are then able to generate random power structures such as:

[Image: Continuous PS - pretty data.png]
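At this point the pieces above can be combined into a single constructor (RandomPowerStructure is an illustrative name, not part of the core definitions):

RandomPowerStructure[n_] := PowerStructure[RandomS[n], RandomT[n]]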

The main diagonal of the tactic matrix indicates the self-allocation percentages of the agents.

Even though the data structures and algorithms in the model are fundamentally numeric, it's easier to comprehend a power structure by looking at a picture of it. Here's a depiction of the power structure above:

[Image: Continuous PS - diagram.png]

Although this is easier to understand than a raw numeric data structure, the diagram is still fairly busy, showing every transaction among the agents. It also violates Heuristic 1's assumption that all relationships are reciprocal in polarity.

Ternary Tactics

For these and other reasons, it is easier to initialize and interpret simulations using ternary matrices, which use 1 to represent constructive power, -1 for destructive power, and 0 for null relationships. Ternary vectors are preferable because they're easier to set up and interpret, and because they dramatically narrow the state space to be explored. (There are infinitely many continuous tactics to consider, even within the limitations of social inertia.) However, ternary tactics fail to capture situations where an agent has some relationships that are more intense than others, because an agent's outgoing power is divided equally among its recipients. This limitation glosses over some behavioral nuances and must be borne in mind when evaluating the apparent preferences of agents in a given simulation.

Conversion

Tactic vectors and matrices composed only of the elements {-1, 0, 1} can be converted to legal (continuous) tactic objects, with power being allocated equally among the downstream agents. A ternary tactic matrix can be legalized using the function:

NormalizeTernaryTactic[tactic_, i_] := ReplacePart[NormalizeTactic[ReplacePart[tactic, i → 0]] * (1-ρ), i → ρ]

This function can be applied to each column of a tactic matrix to convert it from a ternary to a continuous matrix. (Here ρ is the global minimum self-allocation parameter introduced in Convention 2.) By convention, an agent's self-allocation in ternary is always 0.
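For example, with ρ = 0.9, a ternary tactic for agent #1 with three active relationships becomes:

NormalizeTernaryTactic[{0, 1, -1, 1}, 1]    (* {0.9, 0.0333, -0.0333, 0.0333} *)

Note that the remaining 10% of the agent's power is divided equally among the three relationships.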

Conversely, a continuous tactic matrix can be converted to ternary using:

ToTernary[T_] := ReplacePart[Sign[T], {i_, i_} → 0]
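For example, given a continuous matrix whose columns are legal tactics:

ToTernary[{{0.9, -0.05, 0.03}, {0.04, 0.9, 0.07}, {-0.06, 0.05, 0.9}}]
(* {{0, -1, 1}, {1, 0, 1}, {-1, 1, 0}} *)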

Using this function, the tactic matrix in the power structure above becomes:

[Image: Continuous PS - pretty T.png]

Symmetrization

Notice that the matrix above is not symmetric about the main diagonal, as is also evident from its visualization. We can symmetrize a matrix based on the concept in Heuristic 1 that one party in a relationship can singlehandedly start a conflict, whereas cooperation requires both parties:

SymmetrizeTernaryT[T_] := MapThread[Min, {T, Transpose[T]}, 2]

This function symmetrizes the tactics used by each pair of agents by taking the lower of the two tactical values. For example, using ternary, if one agent is giving constructive power (+1) and the other agent is neutral (0), the symmetrized relationship becomes neutral (0). If one agent is attacking (-1) and the other is neutral, the symmetrized relationship becomes a mutual conflict (-1). Symmetrizing the matrix above would yield:

[Image: Continuous PS - symmetric T.png]
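As a small hypothetical example in code:

T = {{0, 1, -1}, {1, 0, 0}, {0, -1, 0}};
SymmetrizeTernaryT[T]    (* {{0, 1, -1}, {1, 0, -1}, {-1, -1, 0}} *)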

The power structure above, converted to ternary and symmetrized, is now:

[Image: Continuous PS - symmetrized pretty.png]

and it looks like:

[Image: Continuous PS - symmetrized diagram.png]

This interpretation tells a simpler story: an alliance of three agents attacking a fourth.

Reciprocalization

A more sophisticated way to symmetrize a ternary tactic matrix is reciprocalization, which takes into account the size differences of the various agents by trying to equalize the amount of pairwise flow. The algorithm for achieving this is described in the code, but a visual example should convey the idea:

[Image: Reciprocalized tactics.png]

Reciprocalization is the standard way to balance an asymmetric ternary matrix.

"Gravitized" Tactics

As noted above, when ternary tactics are normalized with NormalizeTernaryTactic, an agent's outgoing power is allocated equally among its downstream recipients. An alternative approach is to make the outgoing allocations proportional to the sizes of the recipients, thereby emulating the relationship structure of a gravity model of trade. There are two ways to gravitize tactics, one which varies the amount of power allocated from one agent to another, and one which allocates a fixed amount.

Variable Allocations. A ternary tactic can be "gravitized" with the function:

GravitizeTernaryT[T_, s_] := NormalizeTernaryT[s*T]

where NormalizeTernaryT is just NormalizeTernaryTactic mapped over each column in the tactic matrix. A gravitized ternary tactic would look something like this, where the amount of power transferred is proportional to the thickness of each arrow:

[Image: Gravitized tactics.png]
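For concreteness, a plausible definition of NormalizeTernaryT, mapping NormalizeTernaryTactic over the columns as described above, would be:

NormalizeTernaryT[T_] := Transpose[MapIndexed[NormalizeTernaryTactic[#1, First[#2]] &, Transpose[T]]]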

If a particular relationship is null, the power that would have gone to it is allocated among the remaining relationships - hence the description of this as a "variable allocation" methodology.

Fixed Allocations. An alternative way to gravitize tactics is to have each relationship represent a fixed amount of potential power that is proportional to the other agents' sizes. The tactic vector is then normalized by allocating all unused power back to the agent itself:

GravitizeTernaryTactic[tactic_, s_, i_] := InsertSelfAllocation[tactic*s*(1 - ρ)/Total[s], i]
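InsertSelfAllocation is not defined above; a plausible sketch, consistent with returning all unused power to the agent's own slot (assumed to hold 0 on input), would be:

InsertSelfAllocation[tactic_, i_] := ReplacePart[tactic, i → 1 - Total[Abs[tactic]]]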

With this approach, agents would have an incentive to diversify their relationships: the more cooperative relationships, the faster they would grow. There would still be jealousy when a third party gets empowered, as well as a disincentive to two-front wars, which would rapidly wear down an agent.

In summary, there are some intricacies in how tactic vectors and matrices are defined, manipulated, and transformed. These finer points generally follow from the requirements imposed by the axioms and by our need to create realistic simulations that are reasonable to initialize and interpret.

Time

We model time t discretely. The model proceeds in time steps, and the other parameters together control how much activity can occur at each step. For example, we could configure the parameters such that a lot of power is transferred at each time step, or only an incremental amount; this might correspond to the system changing every decade or every year, respectively. Such decisions are not merely about the "speed" at which events in the simulations unfold: they may yield qualitatively different results. Ultimately, parameters must be chosen so as to give rise to intuitive behavior that corresponds with the real-world phenomena being modeled.

It is possible to devise a continuous-time model of quantitative realism, replacing the difference equations below with differential equations. However, for computational purposes, discrete time models are more manageable.

Information

It is assumed that agents have complete information, meaning that they know the entire power structure. This is a significant simplification, one that is almost never true in the real world, where the management of information is the primary art of statecraft and politics. In reality, agents may not know the entire power structure with certainty: their knowledge is approximate, gleaned from signals, and often subject to misinformation, deception, and cognitive errors. Nonetheless, in the international context, it is generally reasonable to assume that actors for the most part operate with a shared understanding of the salient power relationships.

Deviations from perfect information can be explored by assuming that each agent has their own model of the power structure, and that they make decisions based upon that subjective representation.


