This Week
This week has been occupied with questions of truth. Broadly speaking, QV/QF and plural mechanisms like COCM [1] can be considered “adjustment protocols” or even “correction protocols” in that they take some input and adjust it to a state which the protocol implementer believes will better serve some goal. In QV, imbalances in voice credits between holders are deemed too high, so all values are square-rooted to mitigate this. In COCM, further interaction terms and penalties are applied in an attempt to discount contributions from correlated voters.
Two broad categories of questions arise:
- Is the protocol implementer correct that the adjustment is warranted? What if that situation changes?
- Does the protocol correctly capture the true state of the network it uses for adjustment? What damage is done if it doesn’t?
It’s the second category which interests us right now. To take QV as a simple example, for QV to be useful, the square-rooting adjustment needs to be warranted AND the voice credit values before the reduction need to reflect the “true” state among the underlying voters. The well-known Sybil issue is an obvious example of where this deviation from the true state may occur and even be encouraged.
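To make the square-rooting adjustment concrete, here is a minimal sketch with entirely hypothetical credit balances:

```python
import math

# Hypothetical raw voice-credit balances before adjustment
raw_credits = {"alice": 100, "bob": 25, "carol": 1}

# QV's correction: square-root each balance to compress imbalances
adjusted = {name: math.sqrt(credits) for name, credits in raw_credits.items()}

# alice's 100x advantage over carol shrinks to 10x after the adjustment
print(adjusted)  # {'alice': 10.0, 'bob': 5.0, 'carol': 1.0}
```

Note that the adjustment is only as good as its inputs: if alice splits her 100 credits across ten Sybil accounts, each account is barely compressed at all.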
EIP-7716
A brief detour into the bowels of Ethereum: EIP-7716, which proposes anti-correlation penalties on validators who miss attestations (EIP-7716: Anti-correlation attestation penalties). In short, validators whose errors are correlated are likely to be run by large operators with unified infrastructure. These operators benefit from economies of scale, but they introduce centralization and therefore risk: a fault which affects one node is more likely to affect others. Identifying these validators and adding extra penalties when they make errors could help to mitigate this. This is work by dapplion based on a suggestion by Vitalik Buterin (Supporting decentralized staking through more anti-correlation incentives - Proof-of-Stake - Ethereum Research)
We won’t go into too many details here, but initial analysis suggests this is a good adjustment protocol: a simple rule set based on a clear metric which serves as a good proxy for the underlying network state, with no obvious ways to game the system without supporting the aims of the protocol designer (e.g., a large centralized player could decentralize their architecture to avoid penalties. This wouldn’t satisfy the simplistic goal of breaking up these large players in favour of smaller ones, but it would still reduce risk, increase costs for those players and make the playing field more even for smaller stakers).
We’ll investigate this EIP further in future weeks, because we feel it makes a nice comparison to the more human complexities that arise from QV and COCM.
What if the protocol does not correctly capture the true state of the network?
Returning to COCM, this week we embarked on a simulation to assess the discrepancies and risks that arise when the identity data used by the protocol deviates from the true state of the relationships between participants. COCM’s utility and anti-collusion properties rest on the assumption that this true state is captured accurately, both in selecting the identity dimensions to be gathered and correlated, and in the accuracy of the data gathering itself.
First, we simulate a network that defines relationships between agents and call this the “true” network. Second, we gradually remove relationships between agents to simulate situations where the implementer/overseer does not capture all information about the underlying network. Third, we calculate the plurality scores using the “true” and the networks with missing relationships. Finally, we define the difference between the obtained scores as the impact that a violation has on the plurality score.
We generate the “true” network using the well-known G(n,p) random graph model [2], which produces a so-called Erdős–Rényi (ER) graph. The graph above illustrates an ER graph with parameters n=100 and p=0.05, where n specifies the number of nodes and p specifies the probability that an edge exists between any two nodes. The algorithm to generate the ER random network works as follows:
1. Take the first node and the second node.
2. Generate a random number between 0 and 1.
3. If the number is smaller than or equal to p=0.05, connect the two nodes via an edge; otherwise, do not.
4. In the next iteration, take the first and the third node, and repeat this process until all node combinations are exhausted.
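The generation procedure above can be sketched in a few lines of Python (stdlib only; the function name and seed are our own choices):

```python
import itertools
import random

def erdos_renyi_edges(n, p, seed=None):
    """Generate the edge list of a G(n, p) random graph: visit every
    unordered node pair once and keep the edge with probability p."""
    rng = random.Random(seed)
    edges = []
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() <= p:
            edges.append((u, v))
    return edges

edges = erdos_renyi_edges(n=100, p=0.05, seed=42)
# With n=100 there are 4950 candidate pairs, so we expect ~247 edges on average
```

A library such as networkx offers the same model ready-made (`nx.gnp_random_graph`), but the loop above mirrors the step-by-step description directly.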
To calculate the plurality score of the network generated by the ER model, we must define groups and votes. We assume that agents who share an edge are in the same group, which seems natural given how we generated the network. In this initial simulation, we also assume a uniform distribution of votes: each agent votes 10 on a hypothetical option. This eliminates any interaction between the voting distribution and the underlying network, so we can cleanly evaluate the effect of the network alone. Evaluating that interaction (if any) might be interesting, but it is not part of this exercise. Under these assumptions, the plurality score of the “true” ER network depicted above is 207.584. Let’s put this number into perspective.
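One useful sanity check on the no-groups baseline: under classic QV with no relationship information, each of the 100 agents contributes the square root of their 10 votes, which is where the 316.2278 figure for the fully stripped network comes from. (This is only the classic QV sum, not the COCM formula itself.)

```python
import math

n_agents = 100
votes_per_agent = 10

# Classic QV with no group structure: each agent contributes sqrt(votes)
qv_score = sum(math.sqrt(votes_per_agent) for _ in range(n_agents))
print(round(qv_score, 4))  # 316.2278
```

The gap between this baseline and the “true”-network score of 207.584 is the attenuation contributed by the group structure.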
We gradually remove edges from the network at random and recalculate the plurality score. By doing this, we transform the “true” ER network into a “partially true” version and compare the resulting plurality scores. The figure above illustrates the results of this exercise. The x-axis denotes the percentage of edges that we randomly remove from the “true” network: 0 means we remove no edges (i.e. the “true” network with a plurality score of 207.584), 0.1 means we remove 10% of edges at random, and so on. Note that the case where we remove 100% of edges, with a score of 316.2278, corresponds to the score one would obtain under the classic quadratic voting model: no edges implies no groups. The y-axis denotes the mean plurality score, and the band around the mean for each share of removed edges displays the standard deviation. For each share of removed edges, we simulate 100 networks where we randomly remove the given percentage of edges and calculate the plurality score. The result displayed in the figure is robust: for a given share of removed edges, it does not matter much which particular edges are removed.
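The sweep itself can be sketched as a small harness (names are ours; `score_fn` is a placeholder for whatever scoring function is plugged in, since we do not reproduce the COCM formula here):

```python
import random

def remove_fraction(edges, fraction, rng):
    """Return a copy of `edges` with the given fraction removed at random."""
    kept = list(edges)
    rng.shuffle(kept)
    k = round(len(kept) * fraction)
    return kept[k:]

def removal_sweep(edges, score_fn, fractions, trials=100, seed=0):
    """Mean score per removal fraction over `trials` random removals,
    mirroring the experiment described above."""
    rng = random.Random(seed)
    means = {}
    for f in fractions:
        scores = [score_fn(remove_fraction(edges, f, rng)) for _ in range(trials)]
        means[f] = sum(scores) / trials
    return means
```

Plugging a COCM-style `score_fn` into `removal_sweep` over fractions 0.0 to 1.0 would reproduce the figure; with a trivial `score_fn` such as `len`, the sweep simply reports mean surviving edge counts.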
In summary, the exercise we performed today clearly demonstrates that the plurality score depends on the available information about agents’ pre-existing relationships. The collusion resistance that COCM provides might therefore be in jeopardy, because participants could collude by hiding (e.g. not reporting) existing relationships. Moreover, the incentive to collude could be substantial, as the potential difference in plurality scores resulting from partial information about the “true” network appears to be significant. However, further evaluation is needed to show that this difference is robust to, for example, changes in p, n, or the network model itself.
Next Week
Next week is Protocol Week at Edge Esmeralda. We’ve submitted Plurality vs Authority as our tension, which we hope will spark a discussion on the pressures between gathering relevant social information and how this is curated / enforced.
Protocol Musings
People may enjoy the recent edition of the Epicenter Podcast, featuring Glen Weyl and Audrey Tang discussing the success of plural tools in Taiwan: x.com
References
[1]: Miller, J., Weyl, E. G., & Erichsen, L. (2022). Beyond Collusion Resistance: Leveraging Social Information for Plural Funding and Voting. Available at SSRN 4311507.
[2]: Gilbert, E. N. (1959). Random graphs. The Annals of Mathematical Statistics, 30(4), 1141-1144.