# PrinceRank

PrinceRank is a metric that quantifies an agent's level of satisfaction within a given power structure. It is named after The Prince, Machiavelli's handbook for aspiring power-mongers. This page discusses the rationale for and definition of PrinceRank. A subsequent page applies PrinceRank to a variety of power structures.

## Motivation

The utility function considers only the sizes of the agents in a power structure. It is clearly onto something important and interesting in the way it enables the comparison of different power structures. However, it would be very useful to have a metric that also took the agents' interrelationships into consideration. That way, we would be able to see how the alliance structure of the entire network affects each agent's preferences. For example, is an agent in a beneficial position within the network? Or is its security being compromised, with one of its allies being attacked by another agent? We have no way to know these things from the utility function alone.

To illustrate, the four power structures below all have the same size vector, s = {1, 1, 1, 1, 1}, and therefore the utility function gives the agents the same utility rating in each case. Yet the agents in these structures are in very different predicaments, and the utility function is simply blind to these distinctions.

What we need is a metric that quantifies an individual agent's preference for a particular power structure while taking the entire structure into account: both the size vector and the tactic matrix. We can achieve this objective by combining the utility function with the law of motion.

## Definition

We define an agent's PrinceRank $\displaystyle{ \mathbf{p}_{i} }$ as follows: For a given power structure, run the law of motion, find the agents' utility at each time step, and then add up the total utility over time, weighting future time steps less heavily than those closer to the present. Expressed mathematically:

$\displaystyle{ \mathbf{p}_{i} = (1-\delta) \sum_{t=0}^{\infty} \delta^t \mathbf{u}_{i}(t). }$

This "intertemporal utility" represents an agent's naive appraisal of utility as it accrues over time, naive in the sense that the agent assumes implausibly that all of the agents, including itself, will hold their tactics constant over those future time steps. The equation is a common method of discounting utility, used in the economics literature in the context of infinitely repeated games (see Ratliff 1996).
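A consequence of this weighting scheme is worth making concrete: the weights $(1-\delta)\delta^t$ form a geometric series summing to one, so an agent whose utility stays constant over time receives a PrinceRank equal to that constant utility. A quick numeric check in Python (illustrative only, using nothing beyond the formula above):

```python
# Numeric check of the discounting scheme: the weights (1 - delta) * delta^t
# form a geometric series summing to 1, so an agent whose utility is constant
# at u over time receives a PrinceRank of exactly u.
delta = 0.9
weights = [(1 - delta) * delta**t for t in range(1000)]  # effectively t -> infinity
print(round(sum(weights), 6))                # 1.0

u = 3.5                                      # constant per-step utility
prince_rank = sum(w * u for w in weights)
print(round(prince_rank, 4))                 # 3.5
```

This also confirms the normalization role of the leading $(1-\delta)$ factor: without it, a constant utility of u would be inflated to u/(1-δ).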

PrinceRank produces a complete ordering of all possible power structures, reflecting the preferences of a given agent. Equivalently, it also provides a preferential ordering of all agents within a single power structure, making it possible to know how happy each agent is with its position in the structure, relative to the others.

To make it easy to see the agents' PrinceRank in a power structure diagram, we color the agents' vertices along a blue-green-yellow spectrum. Yellow agents have the highest PrinceRank, while blue agents have the lowest. Green agents are somewhere in the middle.

PrinceRank can be implemented as:

```mathematica
PrinceRank[s_, T_] := (1 - δ) Sum[δ^t * Utility[Simulate[s, T, t + 1]], {t, 0, Infinity}]
```

which has the same structure as the equation above. Note that this implementation is rather inefficient due to the repeated invocation of Simulate. Also, for practical purposes, Infinity in the above function can be replaced with Floor[Log[δ, x]], which gives the number of time steps required to approximate PrinceRank by ignoring negligible future states, with the degree of negligibility determined by x (0.1 should suffice).
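The inefficiency can be avoided by advancing the simulation one step at a time and accumulating discounted utility as it goes, instead of re-simulating from t=0 at every term. A sketch in Python, where `step` and `utility` are placeholders for the law of motion and the utility function defined elsewhere in the text (their exact signatures are assumptions):

```python
import math

def prince_rank(s, T, step, utility, delta=0.9, x=0.05):
    """Approximate the PrinceRank vector by running the law of motion
    incrementally rather than re-simulating from t = 0 at each term.

    step(s, T)  -- advances the size vector one time step (assumed signature)
    utility(s)  -- returns the per-agent utility vector (assumed signature)

    Time steps beyond Floor[Log[delta, x]] contribute negligibly
    and are dropped.
    """
    horizon = math.floor(math.log(x) / math.log(delta))
    ranks = [0.0] * len(s)
    state = list(s)
    for t in range(horizon + 1):
        u = utility(state)
        for i in range(len(s)):
            ranks[i] += (1 - delta) * delta**t * u[i]
        state = step(state, T)       # one application of the law of motion
    return ranks
```

Each time step now applies the law of motion exactly once, so the cost is linear rather than quadratic in the number of steps simulated.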

## Use of Symmetrization

The usefulness of PrinceRank is not immediately obvious if one is focused on asymmetric tactics like this one:

In the diagram above, two agents are allocating constructive power to a focal agent who is not reciprocating. While the focal agent is clearly in a strong position, that good fortune is unlikely to persist because the other agents are not going to keep giving away their power without getting something in return. This may at first seem like a flaw in PrinceRank: it should not rate the focal agent's position so highly, because the assumption that the others will not change their tactics is tenuous (Heuristic 1).

For this reason, it is preferable to correct such asymmetries before applying PrinceRank, using the SymmetrizeTernaryT function. Symmetrizing the power structure above and then computing PrinceRank, one now gets:

which is a much sounder appraisal of the situation at hand.

## Notable Characteristics

PrinceRank has a few other characteristics worth mentioning.

### Some Mathematical Properties

Here we mention a few properties of PrinceRank. A more intuitive discussion of PrinceRank's behavior is on the next page.

First, consider the three plots below. On the left is the discounting function, which weighs near-term values more heavily than values farther in the future. In the middle is the utility of an agent in a random simulation of the law of motion. And on the right is discounted utility, which is what you get when you multiply the utility curve by the discounting function. The area under the discounted utility curve, multiplied by 1-δ, is the agent's PrinceRank.

Next, consider the simplest case of two equally sized agents. In the plot below, the PrinceRank payoffs for each agent are shown as a function of the power that each agent allocates to the other. Agent #1's tactic is $\displaystyle{ \mathbf{T}_{2,1} }$ and agent #2's is $\displaystyle{ \mathbf{T}_{1,2} }$. The blue surface represents agent #1's PrinceRank and the orange surface #2's PrinceRank.

Note that the plot above assumes that there is no preset self-allocation percentage: an agent can allocate 100% of its power.

### Network Centrality Measure

PrinceRank is a kind of network centrality measure, vaguely similar to degree, Katz, Bonacich, and PageRank centrality (Valente 2008). These other measures generally move in the same direction when the graph links are positive. When there are negative edges in the graph, PrinceRank goes its own way, emphasizing different aspects and always keeping the result nonnegative. PrinceRank also takes into account information about graph vertices (the size vector), which these other measures do not. These differences are probably not so significant that one of these other measures couldn't be used as a coarse position evaluation function. However, PrinceRank derives from an axiomatic foundation, which gives it a meaning consistent with the metric's purpose and results, and a conceptual coherence about what power is and how it behaves.

### Computational Shortcut

PrinceRank can be computationally intensive to calculate, especially when δ requires it to simulate a significant number of time steps. For instance, when δ=0.9, PrinceRank has to compute 29 steps, each of which applies the law of motion and the utility function. Raising δ to 0.95 requires 59 time steps. While a single PrinceRank calculation can be done in a thousandth of a second, game trees grow exponentially and require PrinceRank to be run at each leaf state, so we want to be frugal.
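The step counts quoted above follow from the Floor[Log[δ, x]] cutoff mentioned earlier, counting steps t = 0 through the cutoff inclusive; the value x = 0.05 used below is an assumption chosen because it reproduces those counts:

```python
import math

# Steps implied by truncating the sum at Floor[Log[delta, x]]: time runs
# from t = 0 through the cutoff inclusive, hence the +1. x = 0.05 is an
# assumed cutoff that reproduces the counts quoted in the text.
def steps_required(delta, x=0.05):
    return math.floor(math.log(x) / math.log(delta)) + 1

for d in (0.9, 0.95, 0.99):
    print(d, steps_required(d))   # 0.9 -> 29, 0.95 -> 59, 0.99 -> 299
```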

To make PrinceRank more efficient, we can approximate it by limiting the number of steps run to a preset parameter k. This way, even if δ=0.99 (299 steps), we need only compute the first, say, k=25 steps of it. Even though the result for an individual agent may be different when we truncate the simulation, experiments suggest that the agents' approximate PrinceRanks relative to each other will be similar enough to be workable:
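The claim can be sanity-checked under toy dynamics. In the sketch below the "law of motion" is a stand-in that simply drifts each agent's size toward the group mean, and utility is taken to be size; the real dynamics and utility function are defined elsewhere in the text:

```python
# Toy check that truncating the sum at k steps preserves the agents'
# relative ordering. toy_step is a stand-in for the law of motion, and
# size itself stands in for utility.
def toy_step(s):
    mean = sum(s) / len(s)
    return [0.9 * v + 0.1 * mean for v in s]

def truncated_prince_rank(s, delta, k):
    ranks = [0.0] * len(s)
    state = list(s)
    for t in range(k):
        for i, v in enumerate(state):
            ranks[i] += (1 - delta) * delta**t * v
        state = toy_step(state)
    return ranks

s0 = [5.0, 3.0, 1.0, 2.0]
full = truncated_prince_rank(s0, 0.99, 299)    # near-exact horizon
approx = truncated_prince_rank(s0, 0.99, 25)   # truncated at k = 25

# The ranking induced by the truncated computation matches the full one.
order = lambda r: sorted(range(len(r)), key=lambda i: -r[i])
print(order(full) == order(approx))            # True
```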

Each of the plots above shows the approximated PrinceRank as a function of the agent index, i.e., the agent's slot in the PrinceRank vector. Each graph is the "signature" of that vector. As these plots suggest, the signature has basically the same contours starting around k=20, and anything beyond that is simply wasted computation. By cutting off the long tail of the discount function, we are dropping only trivial values.

A caveat is that k should be at least as large as the graph radius of the power structure being analyzed. Because power flows through the graph one step at a time, an agent will not feel the effects of an upstream agent that is farther away than the number of time steps elapsed. So if k is too small, PrinceRank won't reflect all of the interlinkages among the agents.
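The graph radius is the minimum, over all vertices, of a vertex's maximum BFS distance to any other vertex, so it can be computed directly and used as a floor on k. A sketch, assuming the power structure's links are supplied as an undirected adjacency list of a connected graph:

```python
from collections import deque

def graph_radius(adj):
    """Radius of a connected undirected graph given as {vertex: [neighbors]}:
    the smallest eccentricity (maximum BFS distance) over all vertices."""
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        return max(dist.values())
    return min(eccentricity(v) for v in adj)

# A 5-agent path graph has radius 2 (from the middle vertex),
# so k should be at least 2 for this structure.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
k = max(25, graph_radius(path))    # enforce the caveat on a chosen k
print(graph_radius(path))          # 2
```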

### Handling an Edge Case in the Utility Function

Another notable aspect of PrinceRank is that it helps correct a deficiency in the utility function. An agent can get a high utility score by being large and alone. But due to the law of motion, an agent would not want to be the only one in existence, because then it would stagnate or shrink (Axiom 3), depending on how λ was set:

PrinceRank recognizes that a lone agent is destined to decay, because it looks ahead whereas the utility function looks only at the present state.

The next section uses visualizations to illustrate PrinceRank in a less technical manner.

<< Prev | Next >>