I want to define a number of concepts I’ve been thinking about via in-line usage in an argument I’m constructing about a notion of “sufficient interoperability.”
Thesis: Protocols have a fundamental bias towards decentralization and the distribution of agency and authority across nodes, even when they structure interactions with centralized and/or monolithic things (eg a protocol for interacting with royalty), because codifying procedure tends to empower the weak nodes and constrain the strong ones.
So for eg a barbarian royal court can have a monarch with unchecked power who can capriciously go “off with his head.”
A protocolized court checks monarchical power and protects commoners who might be in court to appeal to the monarch for something.
Modern example: healthcare protocols weaken the arbitrariness of doctors and the moral hazard of negligence, and empower patients. Traditional open-source BDFLs are checked by project protocols in the same way.
I’m using centralized and monolithic interchangeably here because:
Thesis 2: Centralization tends to create monoliths, defined as structures that are hard or impossible to unbundle without losing much of the value. So mainframes are monoliths but equivalent server clusters are centralized without being as monolithic.
AT&T and Standard Oil were less monolithic than FAANG. It’s easy to see how to unbundle a large telephone network into small regional ones without much value loss (in fact value gain). Harder to see how to unbundle Google or Facebook.
Blockchain-like protocols have a subtler version of the monolith tendency/risk. If there is low node diversity (or client, or hub, or whatever the building blocks are called; I'll use node as the generic term), they simply relocate monolith risk to the level of node design. Note that node diversity is different from node variety. Variety is the number of different node types (eg hubs and clients in Farcaster, clients and wallets in a blockchain). Diversity is the number of unique and independently maintained interoperable variants of a given node type.
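To make the variety/diversity distinction concrete, here is a minimal sketch. The ecosystem data is hypothetical and the implementation names are illustrative, not a real census of clients:

```python
from collections import defaultdict

# Hypothetical ecosystem: (node_type, independently maintained implementation).
# Names are illustrative placeholders, not actual client-share data.
ecosystem = [
    ("client", "geth"), ("client", "nethermind"), ("client", "besu"),
    ("wallet", "walletA"), ("wallet", "walletB"),
]

impls = defaultdict(set)
for node_type, impl in ecosystem:
    impls[node_type].add(impl)

variety = len(impls)                                # number of distinct node types
diversity = {t: len(s) for t, s in impls.items()}   # independent variants per type

print(variety)    # → 2
print(diversity)  # → {'client': 3, 'wallet': 2}
```

High variety with low diversity (many node types, each with one dominant implementation) is exactly the configuration where monolith risk reappears at the node-design level.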
Non-blockchain protocols have this problem too (for example, I assume nearly all DNS servers run one of only a few implementations?), but there is less associated risk because the incentives to take control are looser and weaker. Lots of people might want to steal Ethereum, but only a few state actors have a good reason to attack the DNS network. This is a low-confidence argument for me.
In either case, exploiting node-level monolith risk is very high cost, with both blockchains and non-blockchain systems, compared to exploiting it at the platform level. If Facebook tweaks its internal code to improve ad yield with more intrusive user tracking, nobody knows. If you try to insert exploitative code into an Ethereum client, many users might detect it.
Unlike generic informal protocols like the “handshake,” which are simple enough that every actor essentially maintains a unique but sufficiently interoperable version of the “node code,” blockchains are sufficiently complex that one or a few flavors of each node type become necessary, and the people building them turn into de facto third parties with some inevitable residual centralization/monolith risks.
You could say the “handshake” protocol is simple enough that every user can essentially write their own “light client” simply by observing and trying to participate for the first time. After that you have zero dependency on a “handshake client” maintained by a third party (unless it’s a secret handshake codified and maintained by a secret society).

Nodes light enough to be few-shot learned by a new user, by observing the input/output behaviors of existing users, and subsequently maintained by that user, can be called self-maintainable nodes, by analogy to self-custody keys: nodes that have exactly one user, who also maintains them. This is maximal diversity: every node of every type is unique to a specific user and maintained by them at the code level. There is no meaningful notion of “open source” because there is no need to share; it’s easier for users to write their own node code than to copy or reuse others’. The protocol is then entirely defined by its interoperability constraints. It’s a pure protocol: a protocol for which the specification of interoperability constraints is sufficient for the protocol to exist sustainably. There are no “nodes” as such, only mutual observation and learning of input/output behaviors.
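The “self-maintainable node” idea can be caricatured in code: a newcomer watches a few input/output exchanges of existing participants, builds a personal lookup-table “node,” and is thereafter interoperable with no third-party dependency. All signal names here are invented for illustration:

```python
# A newcomer few-shot learns the "handshake" protocol purely from
# observed input/output pairs of existing participants.
observed_exchanges = [
    ("extend_hand", "grip"),
    ("grip", "shake"),
    ("shake", "release"),
]

def learn_node(observations):
    """Build a personal, self-maintained 'node': nothing but the
    observed I/O behavior, with no shared code dependency."""
    table = dict(observations)
    def my_node(signal):
        # Interoperability constraint: respond as others were seen responding.
        return table.get(signal, "ignore")
    return my_node

node = learn_node(observed_exchanges)
print(node("extend_hand"))  # → grip
print(node("bow"))          # → ignore (outside the learned constraints)
```

The point of the caricature: the “protocol” exists entirely as the constraint set implicit in `observed_exchanges`; there is no reference implementation anywhere to depend on or capture.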
There is also a possibility that there are no-free-lunch dynamics here, and any protocolization within a scope will catalyze a countervailing monolithic tendency just outside the scope, at the boundary. With blockchains we see this with CEXes, Infura, dominant wallets, L2 sequencers, etc.
This leads me to a definition of robust protocols. They are protocols that satisfy the following conditions:

- Node behaviors are few-shot learnable and result in the local installation of extremely light self-maintained nodes of every required type.
- The protocol is defined by interoperability constraints that are loose enough that all or most nodes can be maintained to be sufficiently interoperable with all or most others at low cost.
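One way to read the second condition: the spec pins down only a small set of constraints, and any two independently written nodes that satisfy them count as sufficiently interoperable, however different their internals. A toy sketch, with invented constraint and node names:

```python
# Two independently maintained "greeter" nodes with different internals.
def node_alice(msg):
    return "hello, " + msg.strip().lower()

def node_bob(msg):
    return "hello, %s" % msg.lower().strip()

# The "pure protocol": a loose set of interoperability constraints,
# not a reference implementation.
constraints = [
    lambda node: node("WORLD").startswith("hello"),  # must greet
    lambda node: "world" in node(" World "),         # must normalize input
]

def sufficiently_interoperable(node):
    return all(check(node) for check in constraints)

print(sufficiently_interoperable(node_alice))  # → True
print(sufficiently_interoperable(node_bob))    # → True
```

Loosening `constraints` is what lets independently self-maintained nodes coexist cheaply; tightening it is what forces everyone onto one or two shared implementations, which is the monolith tendency restated.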
I think getting to these conditions is the holy grail of any protocol, and the key is to minimize interoperability constraints so as to maximize self-maintained nodes.