The Price of Anarchy in flow networks as a function of node properties

Many real-world systems such as traffic and electrical flow are described as flows following paths of least resistance through networks, with researchers often focusing on promoting efficiency by optimising network topology. Here, we instead focus on the impact of network node properties on flow efficiency. We use the Price of Anarchy, P, to characterise the efficiency of least-resistance flows on a range of networks whose nodes have the property of being sources, sinks or passive conduits of the flow. The maximum value of P and the critical flow volume at which it occurs are determined as a function of the network's node property composition, and are found to have a particular morphology that is invariant with network size and topology. Scaling relationships with network size are also obtained, and P is demonstrated to be a proxy for network redundancy. The results are interpreted for the operation of electrical micro-grids, which possess variable numbers of distributed generators and consumers. The highest inefficiencies in all networks are found to occur when the numbers of source and sink nodes are equal, a situation which may occur in micro-grids, while the highest efficiencies are associated with networks containing a few large source nodes and many small sinks, corresponding to more traditional power grids.

P has been studied in a variety of contexts, such as in network growth games [6], job scheduling [7], resource allocation in public services [8], supply chains [9], and in network traffic flows where a cost (i.e., travel time) is incurred for traversing edges [10,11]. If each individual driver comprises only a very small fraction of the overall flow, then the flow can be treated as a continuous quantity. Such flows also serve as a model for electrical current, comprising infinitesimally small particles, following paths of least resistance [12]. The Nash equilibrium corresponds to all routes on the network between an arbitrarily chosen source-sink pair having equal cost [13], or local voltage drop in the case of an electrical network, such that no change in flow pattern or routing can lower the cost. In [14] the upper bound on P was found to be 4/3 if the edge cost functions are linear functions of flow volume. Although these worst-case values of P are independent of network topology, depending only on the class of edge function, values of P that differ from these extremes are strongly influenced by topology, flow volume, placement of sources and sinks, and the distribution of parameters in the cost functions [4,11]. For example, [15,16] considered the case of a lattice network and revealed how P is affected by the size, aspect ratio and total flow through the lattice.
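The equal-cost condition can be made concrete with a toy computation (our own illustrative sketch, not from the letter): for two parallel links with linear per-unit costs, the Nash (Wardrop) split equalises the costs of the links that carry flow.

```python
def wardrop_two_links(F, a1, b1, a2, b2):
    """Split total flow F across two parallel links with linear
    per-unit costs c_i(f) = a_i * f + b_i so that both used links
    have equal cost (the Nash/Wardrop equilibrium condition)."""
    # Interior solution of a1*f1 + b1 = a2*(F - f1) + b2
    f1 = (a2 * F + b2 - b1) / (a1 + a2)
    f1 = min(max(f1, 0.0), F)  # clip: one link may carry everything
    return f1, F - f1

# Example: equal slopes, link 2 has a fixed extra offset of 4.
f1, f2 = wardrop_two_links(10.0, 1.0, 0.0, 1.0, 4.0)
c1, c2 = 1.0 * f1 + 0.0, 1.0 * f2 + 4.0
assert abs(c1 - c2) < 1e-9  # both used links have identical cost
```

Here no driver can lower their cost by switching links, which is exactly the equilibrium condition described above.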
In some cases, the addition of new edges into a network can cause a counterintuitive increase in the cost of the flow due to the inefficiency of the Nash equilibrium. This is referred to as Braess's paradox [17,18], and has been studied in traffic networks [11,19], where the addition of a road can increase average travel time, and electrical circuits [12]. Variants of this phenomenon have also been reported in supply chains [20] and oscillator networks [21,22]; refer to [23] for an overview.
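The paradox can be shown arithmetically with the classic four-road construction (textbook numbers, not taken from this letter): adding a free shortcut raises everyone's travel time at equilibrium.

```python
# Classic Braess example: 4000 drivers travel from A to B.
# Road costs (minutes): A->C: x/100, C->B: 45, A->D: 45, D->B: x/100,
# where x is the flow on that road.
N = 4000

# Without the shortcut: by symmetry, 2000 drivers per route.
t_without = 2000 / 100 + 45        # 20 + 45 = 65 minutes per driver

# With a zero-cost shortcut C->D, every driver prefers A->C->D->B,
# since x/100 <= 45 on each congestible road for any x <= N.
t_with = N / 100 + N / 100         # 40 + 40 = 80 minutes per driver

# The mixed routes now cost 45 + N/100 = 85 >= 80, so nobody deviates:
# the all-shortcut pattern is the Nash equilibrium, yet it is slower.
assert t_with > t_without
```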
Previous studies of the Price of Anarchy have considered sources and sinks of flow only in specific arrangements. In [14] a single source-sink pair was considered, whereas [11] treated ordered source-sink pairs with characteristic flows along overlapping paths occurring between them. The present letter first establishes a connection between P and the efficiency and redundancy of least-resistance network flows, and then investigates the dependence of P on the relative and absolute numbers of flow source and sink nodes, to ascertain whether, for a given network, the configuration of node types can be altered to change efficiency. This is of importance to the design and control of electrical micro-grids, which typically have varying numbers of low-output intermittent sources of electrical power distributed throughout their structure. As the drive towards smaller, distributed generators becomes more urgent in order to mitigate climate change, understanding the impact of variable generation on electrical networks presents a pressing interdisciplinary challenge [24].
Network flow model. -We consider flows through graphs G = (V, E), with n = |V| nodes and m = |E| edges, wherein n_s nodes have the property of being sources of flow, n_d are sinks and the remaining n_p are passive or empty. Each edge e ∈ E has a linear cost function c_e(f_e) = α_e f_e + β_e, where f_e is the volume of flow or electrical current on that edge. The functions c_e can be interpreted as the voltage drop across the edge, while the coefficients α_e and β_e represent Ohmic resistance and flow-independent voltage drops, respectively. For a flow vector f ∈ R^m the total cost across the network is C(f) = Σ_e c_e(f_e)f_e, representing total power loss. The global optimum flow f^GO is then the flow pattern that minimises this cost:

f^GO = arg min_{f≥0} Σ_e c_e(f_e)f_e subject to Ef = b, (1)

where E ∈ R^{n×m} is the node-edge incidence matrix [13] and b is the flow injection vector with components

b_v = F(1 + ξ_v)/n_s for source nodes, b_v = -F(1 + ξ_v)/n_d for sink nodes, b_v = 0 otherwise, (2)

with F being the total flow or current injected into the network, and ξ_v being random noise. The condition Ef = b enforces conservation of flow at nodes, equivalent to Kirchhoff's current law. The Nash equilibrium flow f^Nash is given by the optimisation problem [13]

f^Nash = arg min_{f≥0} Σ_e ∫_0^{f_e} c_e(q) dq subject to Ef = b. (3)

The optimisation problems in (1) and (3) are both convex and solved using subgradient projection methods [25]. The Price of Anarchy is then

P = C(f^Nash)/C(f^GO) ≥ 1. (4)

Nash equilibria conditions are equivalent to Kirchhoff's voltage law. -A physical interpretation of the Nash equilibria obtains from a consideration of Kirchhoff's voltage law (KVL), which states that voltages around closed cycles in an electrical network sum to zero. If there is a cycle embedded in a network, then there will be at least two distinct paths between a pair of source and sink nodes. At the Nash equilibrium, each arm of the cycle must have equal cost; hence the cost of any traversal around the cycle is zero, and so the Nash equilibrium condition is equivalent to KVL.
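Problems (1) and (3) can be solved numerically; the sketch below is our own illustration (using an off-the-shelf constrained solver rather than the subgradient projection method of [25]) on the smallest interesting case, a two-edge parallel network with c_1 = f_1 and c_2 = 1.

```python
import numpy as np
from scipy.optimize import minimize

# Two parallel edges: c1(f) = f (alpha=1, beta=0), c2(f) = 1 (alpha=0, beta=1).
alpha = np.array([1.0, 0.0])
beta = np.array([0.0, 1.0])
F = 1.0  # total flow injected at the source node

def total_cost(f):        # objective of (1): sum_e c_e(f_e) * f_e
    return float(np.sum((alpha * f + beta) * f))

def nash_potential(f):    # objective of (3): sum_e int_0^{f_e} c_e(q) dq
    return float(np.sum(0.5 * alpha * f**2 + beta * f))

# Flow conservation (Ef = b) reduces here to f1 + f2 = F.
cons = {"type": "eq", "fun": lambda f: np.sum(f) - F}
bnds = [(0.0, None)] * 2

f_go = minimize(total_cost, [F / 2, F / 2], bounds=bnds, constraints=cons).x
f_nash = minimize(nash_potential, [F / 2, F / 2], bounds=bnds, constraints=cons).x

poa = total_cost(f_nash) / total_cost(f_go)   # the ratio in (4)
assert abs(poa - 4 / 3) < 1e-3   # attains the 4/3 bound of [14] at F = 1
```

The Nash objective is the Beckmann-style potential whose minimiser equalises route costs, while (1) minimises power loss directly; their ratio gives P.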
The Nash flow therefore necessarily satisfies both Kirchhoff's current and voltage laws and is thus a physically legitimate electrical flow for an electrical network in stable operation with matched supply and demand. The relative inefficiency of this flow, resulting in P > 1, stems from the constraints of Kirchhoff's conservation laws that define the Nash equilibrium.
Relationship with network redundancy. -P measures the disparity between the costs associated with the Nash and GO flows. In an electrical context the GO would correspond to a flow being able to violate KVL in order to minimise total power loss; such an equilibrium would nevertheless be desirable because it minimises the power consumed by the network. Therefore, P remains a useful metric for assessing efficiency in networks with flows following paths of least resistance, and also serves as a measure of topological redundancy, as we now show.
Consider the network shown in fig. 1(a), first introduced by Pigou [26], being the smallest graph admitting a value of P > 1, and which serves as the canonical example to demonstrate the Price of Anarchy [11,13]. Edge 1 has variable cost c_1 = f_1, whereas edge 2 has fixed cost c_2 = 1. F units of flow enter on the left and exit on the right. Figure 1(b) shows the value of P in this network as a function of F. For 0 < F ≤ 1/2, indicated by the unshaded area, all flow is routed over edge 1 under both the Nash and GO equilibria, with identical costs C = F^2; consequently P = 1. For 1/2 < F ≤ 1 (light-gray area), f_1 = F under the Nash flow, so C_Nash = F^2. The GO minimises its cost when f_1 = 1/2, f_2 = F − 1/2, and the total cost is then C_GO = F − 1/4, giving P = F^2/(F − 1/4). For F > 1 (dark-gray area), the Nash equilibrium routes all flow in excess of 1 through edge 2, giving C_Nash = F, whereas the GO remains unchanged; hence, P = F/(F − 1/4). We now establish a qualitative relationship between P and network redundancy. Recall that the Nash equilibrium condition and KVL are equivalent in electrical networks. It is possible to drive the Nash flow, with cost C_Nash, towards the GO by manipulating the network such that excess flow is transferred from edge 1 to edge 2. This is achieved by reducing the capacity on edge 1. This excess capacity is given by the difference between the flow on edge 1 for each equilibrium, i.e., f_1^Nash − f_1^GO. The equilibrium on this modified network has cost C′_Nash ≤ C_Nash. This means that edge 1 provides redundant capacity that can be removed. Defining this edge redundancy in terms of the costs obtains

R_1 = (C_Nash − C′_Nash)/C_Nash, (5)

which is the relative decrease in cost available by removing capacity from edge 1. No relative decrease in cost is possible by removing any capacity from edge 2.
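The piecewise expressions above can be checked numerically (an illustrative sketch); the peak sits at F = 1, where P attains the 4/3 worst-case bound for linear cost functions.

```python
import numpy as np

def pigou_poa(F):
    """Piecewise Price of Anarchy for the Pigou network of fig. 1(a)."""
    if F <= 0.5:
        return 1.0                 # both equilibria use edge 1 only
    if F <= 1.0:
        return F**2 / (F - 0.25)   # Nash: all flow on edge 1; GO splits
    return F / (F - 0.25)          # Nash overflow routed onto edge 2

# Scan the flow volume and locate the peak numerically.
Fs = np.linspace(0.01, 3.0, 3000)
vals = [pigou_poa(F) for F in Fs]
F_star = Fs[int(np.argmax(vals))]

assert abs(F_star - 1.0) < 0.01        # critical flow volume F* = 1
assert abs(max(vals) - 4 / 3) < 0.01   # peak value P* = 4/3
```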
In order to generalise this measure to larger networks it is averaged over both edges to give R := ⟨R_e⟩, which is the mean decrease in cost attainable by removing capacity from an edge. Figure 1(b) shows R, whose form emulates P. For larger and more complex networks, such as the small-world network depicted in fig. 1(c), this correspondence between P and R prevails, as shown in fig. 1(d).
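A minimal sketch of this redundancy measure on the Pigou network (our own illustration; each edge is capped at its GO flow, which is the optimal cap for this example):

```python
import numpy as np
from scipy.optimize import minimize

# Pigou edges: c1(f) = f, c2(f) = 1; total flow F = 0.8 (light-gray regime).
alpha, beta, F = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.8

def cost(f):       return float(np.sum((alpha * f + beta) * f))
def potential(f):  return float(np.sum(0.5 * alpha * f**2 + beta * f))

cons = {"type": "eq", "fun": lambda f: np.sum(f) - F}

def nash(caps):
    """Nash flow on the network with per-edge capacity caps."""
    bnds = [(0.0, c) for c in caps]
    f0 = np.minimum([F / 2, F / 2], caps)
    return minimize(potential, f0, bounds=bnds, constraints=cons).x

f_go = minimize(cost, [F / 2, F / 2], bounds=[(0.0, None)] * 2,
                constraints=cons).x
c_nash = cost(nash([np.inf, np.inf]))   # unconstrained Nash cost

# Cap each edge in turn at its GO flow and record the relative saving R_e.
R_e = []
for e in range(2):
    caps = [np.inf, np.inf]
    caps[e] = f_go[e]
    R_e.append(max(0.0, (c_nash - cost(nash(caps))) / c_nash))

R = float(np.mean(R_e))   # mean saving available per edge
```

Capping edge 1 drives the Nash flow onto edge 2 and recovers the GO cost, so R_1 > 0, whereas capping edge 2 changes nothing, so R_2 = 0, in line with the discussion above.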
The correspondence between P and R is also observed for real-world networks such as the Austrian power grid, displayed in fig. 2(a), where the flow has been computed using the network flow model outlined above. The peaks in R and P clearly coincide, as shown in fig. 2(b), as they do for 118-bus test networks [28,29]. Here the peak values of P are ∼1.035, corresponding to a value of R indicating an average 0.4% increase in efficiency available to the whole system from reducing the capacity of a single edge; as this is a per-edge value, it reveals a substantial amount of inefficiency across the network as a whole.
Key to what follows is that the maximum values of P and R occur at the same flow volume F. Determination of R is computationally onerous, requiring the evaluation of a convex optimisation problem for each of a network's edges, rendering it impractical for all but the smallest of networks. Evaluating P therefore provides a simple computational proxy for identifying regimes of relative redundancy, enabling very large networks of complex topology and composition to be investigated. The algorithm for computing R in a complex network is presented below.
Computation of R. -Recall that R is defined as the mean relative increase in flow efficiency attainable by capping the capacity of an edge in the network. This requires computing the optimal amount by which each edge should be capped, which can be evaluated analytically for the network in fig. 1(a). However, R is not analytically tractable in the general case of complex networks with overlapping paths from sources to sinks; therefore, the method outlined in algorithm 1 is used.
This algorithm takes a graph G = (V, E, c), comprising a set of nodes and edges, V and E respectively, together with a set of edge cost functions c, and compares the Nash flow volume on each edge to the GO flow volume on that edge in order to determine by how much its capacity should be capped. A new Nash flow, with cost C′_Nash, is then computed on the capped network.

Dependence of P on flow and network composition. -We first consider networks whose source and sink nodes have homogeneous flow outputs and inputs respectively, given by the case where ξ_v = 0 for all v in eq. (2). For a total flow volume F, the dependences of P on network structure and composition are obtained from an ensemble of 1000 such random small-world network realisations [30,31]. These networks are parameterised by the rewiring probability q, initial degree k, and the number of nodes n, comprising n_s, n_d and n_p source, sink and passive nodes, respectively, whose locations are randomly allocated. The edge cost coefficients α_e and β_e are both uniformly distributed random variables in the range [0, 1]. At the microscopic scale in the network, fig. 3(a) shows that the individual edge costs are exponentially distributed. Unsurprisingly, at the macroscopic scale the total Nash and GO costs (representing total power loss) are gamma-distributed, with probability density function P(C) = ((C − σ)/μ)^(ν−1) exp(−(C − σ)/μ)/(μΓ(ν)), since they are formed from an ensemble of exponentially distributed edge costs. This is shown in fig. 3(b), (c) and confirmed by Kolmogorov-Smirnov tests (see the supplementary material (SM) for more detail). For each F, the mean of the resulting distribution of P, denoted ⟨P⟩, is shown in fig. 3(d). With increasing flow, ⟨P⟩ rapidly rises to a maximum P* at flow volume F*, before declining to unity. How the values of P* and F* depend on the network configuration, defined by n_s, n_d and n_p, is now considered.
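The gamma form of the ensemble cost distribution can be illustrated with a toy experiment (synthetic data, not the paper's ensemble): a sum of i.i.d. exponential edge costs is exactly gamma distributed, which a Kolmogorov-Smirnov test confirms.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, samples, scale = 20, 1000, 1.0

# Each "network cost" is the sum of m exponentially distributed edge costs.
costs = rng.exponential(scale, size=(samples, m)).sum(axis=1)

# A sum of m i.i.d. Exp(scale) variables is Gamma(shape=m, scale=scale),
# so the KS statistic against that gamma law should be small.
stat, p = stats.kstest(costs, "gamma", args=(m, 0.0, scale))
assert stat < 0.1   # consistent with the gamma form
```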
The condition n_s + n_d + n_p = n constrains the space of possible network configurations to a triangular simplex whose vertices each touch one of the n_s, n_d, n_p axes, as depicted in fig. 4(a). The variations of P* and F* for constant n are then projected onto this simplex, as shown in fig. 4(b), (c), respectively. The contours are symmetric about a line bisecting the simplex, corresponding to networks with n_s = n_d and shown by section (i) in fig. 4(b). Along this line the value of P* decreases monotonically with increasing n_s, as shown by the plot in fig. 4(d). Section (ii) is a slice across the simplex at whose midpoint n_s = n_d. P* increases monotonically as this point is approached from either direction, as shown in fig. 4(e), revealing that inefficiency and average edge redundancy are maximised as the numbers of source and sink nodes become equal. Figure 4(f) shows P* ∼ a + b n_s^(−1/2) on section (iii), along which n_s increases (and n_p decreases) with n_d = 1. The morphology of the contours shown in fig. 4(b), (c) remains invariant with q, meaning that these results pertain to both small-world and random Poisson (q > 0.6) networks, as demonstrated in fig. 5. This invariant property also persists (see SM) when considering scale-free networks [33], whose topology is quite distinct from the small-world and Poisson classes.
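A scaling form such as P* ∼ a + b n_s^(−1/2) can be extracted by linear least squares in the variable x = n_s^(−1/2); the sketch below uses synthetic values of a and b (hypothetical, purely to illustrate the fitting procedure, not values from the letter).

```python
import numpy as np

# Hypothetical coefficients, used only to generate synthetic P* data.
a_true, b_true = 1.02, 0.30
ns = np.array([4, 9, 16, 25, 36, 64, 100], dtype=float)
p_star = a_true + b_true / np.sqrt(ns)   # synthetic P*(n_s) values

# The model is linear in x = n_s^{-1/2}, so fit [1, x] by least squares.
X = np.column_stack([np.ones_like(ns), 1.0 / np.sqrt(ns)])
a_fit, b_fit = np.linalg.lstsq(X, p_star, rcond=None)[0]

assert abs(a_fit - a_true) < 1e-8 and abs(b_fit - b_true) < 1e-8
```

The same two-parameter fit applied to measured P* values along section (iii) would yield the empirical a and b.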
In practice, sources and sinks may be expected to have heterogeneous levels of output and input, such as an electrical grid containing a range of generators with different output capacities. To account for this, ξ_v in eq. (2) is now set to be a normally distributed random variable with mean 0 and variance 0.2. This represents a substantial amount of heterogeneity whilst typically still preserving the types of the nodes, and therefore the location on the simplex. Figure 6 demonstrates the effect of this heterogeneity. The linear scaling of F* with n shown in fig. 7(b) can be explained as follows. F* corresponds to a threshold beyond which the network flows adjust such that the two equilibrium costs begin to converge. To exceed the threshold the total flow must increase linearly, because the expected density of flow decreases with increasing n.
Conclusion. -This letter has investigated how the inefficiency of flows occurring on different classes of random network, as gauged by the Price of Anarchy P, is affected by the network structure and the function of its nodes. It has also established a correspondence between P and measures of network redundancy, an important consideration in addressing issues of network resilience and cost-effectiveness. This is primarily motivated by understanding properties associated with flows of current in electrical micro-grids, wherein nodes are either sources or sinks of current, or are passive conduits. Poisson, scale-free and small-world networks are used to establish the generality of the results with respect to network topology; this reveals a predictable dependence of P upon node composition for networks of arbitrary structure.
The simplex plots fig. 4(b) and (c) and their symmetry and invariance properties, when taken in conjunction with the system size scalings shown in fig. 7, provide an operating space that defines maximal inefficiency and redundancy for an ensemble of networks with general topology and with variable node composition. With application to micro-grids, a given network's composition will change both diurnally and seasonally, traversing a trajectory through this configuration space. This path will depend on the nature of the sources of power and the load consumed by the sinks -features that will vary with population behaviours and the variable outputs from renewable power sources, for example. This information can be exploited to aid in the dynamic design and management of smart networks so as to constrain trajectories to preferred regions on the simplex. Insofar as redundancy is related to resilience [34-36], this aspect of the system's performance can be manipulated dynamically via the network's node type configuration and edge costing. A striking feature is that the greatest values of inefficiency (or redundancy) occur when the numbers of sources and sinks are equal, as apparent in fig. 4(b) and fig. 5(a), a situation that is prevalent for small renewable energy networks where the numbers of generators and consumers are comparable. By contrast, the results show that a centralised electrical distribution grid comprising a few sources but many sinks has a low P*, indicating that it is efficient but lacks redundancy. Equivalent plots can be constructed that are particular to an individual network's structure and composition, with which its performance can be gauged.
These findings have established that even for simple linear edge functions, network topology and flow conservation laws are sufficient to induce inefficiency that depends predictably on the configuration of nodes. An interesting extension to this work would be the consideration of nonlinear cost functions, for which the values of P may be substantially larger [11,14].
The inefficiency associated with redundancy is only one metric with which to assess performance, and it is inefficient networks that will generally also be the most resilient to faults or attack. Redundancy may also give networks the flexibility to operate in a variety of conditions; however, since inefficiency and redundancy coincide, as we show in our results, optimising a network's structure and composition purely for efficiency may result in a loss of useful redundancy. Hence, in using the simplex to aid network design, it is likely that options will be constrained to an operating space offering an acceptable efficiency-resilience trade-off. * * * OS acknowledges the support of the Leverhulme Trust via the "Modeling and Analytics for a Sustainable Society" doctoral training scheme.