Monoliths, Decentralization, Robust Protocols, and Sufficient Interoperability

I want to define a number of concepts I’ve been thinking about via in-line usage in an argument I’m constructing about a notion of “sufficient interoperability.”

Thesis: Protocols have a fundamental bias towards decentralization and the distribution of agency and authority across nodes, even when they structure interactions with centralized and/or monolithic things (e.g., a protocol governing interactions with royalty). This is because codifying procedure tends to empower the weak nodes and constrain the strong ones.

So, for example, a barbarian royal court can have a monarch with unchecked power who can capriciously go "off with his head."

A protocolized court checks monarchical power and protects commoners who might be in court to appeal to the monarch for something.

Modern example: healthcare protocols weaken the arbitrariness and negligence moral hazard of doctors, and empower patients. Traditional open-source BDFLs are a similar case.

I’m using centralized and monolithic interchangeably here because:

Thesis 2: Centralization tends to create monoliths, defined as structures that are hard or impossible to unbundle without losing much of the value. So mainframes are monoliths, but equivalent server clusters are centralized without being as monolithic.

AT&T and Standard Oil were less monolithic than FAANG. It's easy to see how to unbundle a large telephone network into small regional ones without much value loss (in fact, with a value gain). It's harder to see how to unbundle Google or Facebook.

Blockchain-like protocols have a subtler version of the monolith tendency/risk. If there is low node diversity (or client diversity, or hub diversity, or whatever the building blocks are called; I'll use node as the generic term), low diversity simply catalyzes monolith risk at the node design level. Note that node diversity is different from node variety. Variety is the number of different node types (e.g., hubs and clients in Farcaster, clients and wallets in a blockchain). Diversity is the number of unique and independently maintained interoperable variants of a given node type.
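To make the variety/diversity distinction concrete, here is a toy sketch in TypeScript; the node types, team names, and counts are made up for illustration, not drawn from any real protocol.

```typescript
// Hypothetical sketch: node *variety* (number of node types) vs.
// node *diversity* (independently maintained implementations per type).
// All names here are illustrative.

type NodeType = "hub" | "client" | "wallet";

interface NodeImplementation {
  type: NodeType;     // which role this implementation plays
  maintainer: string; // who independently maintains it
  name: string;
}

// Variety: how many distinct node types the protocol defines.
function variety(impls: NodeImplementation[]): number {
  return new Set(impls.map((i) => i.type)).size;
}

// Diversity of a given type: how many independently maintained
// interoperable variants exist for that type.
function diversity(impls: NodeImplementation[], type: NodeType): number {
  return new Set(
    impls.filter((i) => i.type === type).map((i) => i.maintainer)
  ).size;
}

const ecosystem: NodeImplementation[] = [
  { type: "client", maintainer: "teamA", name: "client-a" },
  { type: "client", maintainer: "teamB", name: "client-b" },
  { type: "hub", maintainer: "teamA", name: "hub-a" },
];

console.log(variety(ecosystem));             // 2 node types
console.log(diversity(ecosystem, "client")); // 2 independent clients
console.log(diversity(ecosystem, "hub"));    // 1 hub: monolith risk lives here
```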

Non-blockchain protocols have this problem too (for example, I assume almost all DNS servers run one of only a few implementations?), but there is less associated risk because the incentives to control them are looser and weaker. Lots of people might want to steal Ethereum, but only a few state actors have a good reason to attack the DNS network. This is a low-confidence argument for me.

In either case, exploiting node-level monolithization risk is very high cost, in both blockchain and non-blockchain systems, compared to exploiting it at the platform level. If Facebook tweaks its internal code to improve ad yield with more intrusive user tracking, nobody knows. If you try to insert exploitative code into an Ethereum client, many users might detect it.

Unlike generic informal protocols like "handshake," which are simple enough that every actor essentially maintains a unique but sufficiently interoperable version of the "node code," blockchains are sufficiently complex that one or a few flavors of each node type become necessary, and the people building them turn into de facto third parties with some inevitable residual centralization/monolith risk.

You could say the "handshake" protocol is simple enough that every user can essentially write their own "light client" simply by observing and trying to participate for the first time. After that, you have zero dependency on a "handshake client" maintained by a third party. Unless it's a secret handshake codified and maintained by a secret society.

Nodes light enough to be few-shot learned by a new user, by observing the input/output behaviors of existing users, and subsequently maintained by that user, can be called self-maintainable nodes, by analogy to self-custody keys: nodes that have exactly one user, who also maintains them. This is maximal diversity. Every node of every type is unique to a specific user and maintained by them at the code level. There is no meaningful notion of "open source" because there is no need to share; it's easier for users to write their own node code than to copy or reuse others'.

The protocol is then entirely defined by its interoperability constraints. It's a pure protocol. A pure protocol is a protocol for which the specification of interoperability constraints is sufficient for the protocol to exist sustainably. There are no "nodes" as such, only mutual observation and learning of input/output behaviors.
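As a toy illustration of a self-maintainable node, here is a hypothetical TypeScript sketch of a participant who few-shot learns a handshake-like protocol purely from observed input/output pairs. Everything here (the class, the stimulus/response vocabulary) is invented for the example.

```typescript
// Toy sketch of a self-maintainable "handshake" node: a newcomer few-shot
// learns the protocol from observed input/output pairs of existing
// participants, then runs and maintains their own unique copy.

type Observation = { stimulus: string; response: string };

class SelfMaintainedNode {
  private learned = new Map<string, string>();

  // Few-shot learning: record observed stimulus/response pairs.
  observe(examples: Observation[]): void {
    for (const { stimulus, response } of examples) {
      this.learned.set(stimulus, response);
    }
  }

  // Participation: reproduce the learned behavior. The protocol exists
  // only as the interoperability constraint "respond as others respond."
  respond(stimulus: string): string | undefined {
    return this.learned.get(stimulus);
  }
}

const newcomer = new SelfMaintainedNode();
newcomer.observe([
  { stimulus: "extend-hand", response: "grip-and-shake" },
  { stimulus: "fist-out", response: "fist-bump" },
]);
console.log(newcomer.respond("extend-hand")); // "grip-and-shake"
```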

There is also a possibility that there are no-free-lunch dynamics here, and that any protocolization within a scope will catalyze a monolithic counter-tendency just outside that scope, at the boundary. With blockchains we see this with CEXes, Infura, dominant wallets, L2 sequencers, etc.

This leads me to a definition of robust protocols. They are protocols that satisfy the following conditions:

  1. Node behaviors are few-shot learnable and result in the local installation of extremely light self-maintained nodes of every required type.

  2. The protocol is defined by interoperability constraints that are loose enough that all/most nodes can be maintained to be sufficiently interoperable with all/most others at low cost.

I think getting to this condition is the holy grail of any protocol, and the key is to minimize interoperability constraints to maximize self-maintained nodes.
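One way to picture condition 2 is to treat the protocol as nothing but an interface, with every node an independently written implementation that is checked only against observable behavior. A minimal hypothetical sketch (all names invented):

```typescript
// Minimal sketch of "protocol = interoperability constraints only":
// the protocol is just an interface; every "node" is an independent,
// never-shared implementation of it.

interface GreetingProtocol {
  greet(peer: string): string; // the entire interoperability constraint
}

// Two independently written implementations with different internals...
const aliceNode: GreetingProtocol = {
  greet: (peer) => `hello, ${peer}`,
};
const bobNode: GreetingProtocol = {
  greet: (peer) => `hello, ${peer}`.toUpperCase().toLowerCase(),
};

// ...count as "sufficiently interoperable" if their observable behavior
// agrees on the cases that matter, at low verification cost.
function sufficientlyInteroperable(
  a: GreetingProtocol,
  b: GreetingProtocol,
  probes: string[]
): boolean {
  return probes.every((p) => a.greet(p) === b.greet(p));
}

console.log(sufficientlyInteroperable(aliceNode, bobNode, ["world"])); // true
```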


On Warpcast I mentioned: “Code is harder to cheat, and easier to fork”

The main sentiment here is that control (of a protocol or process) is somewhat delegated to the public when it is encoded for public review. The control aspect seems to have two components: audit and exit.

Audit, as in "code is harder to cheat," means that anyone can review the rules of the game. More so: if you prove you are running ~that code~ in a "trusted environment," then by definition you also have to execute ~that code~.
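As a hedged sketch of what the audit leg could look like mechanically, the snippet below checks a node binary against a hash published with the audited source release. It assumes reproducible builds, and the published-hash constant is a placeholder, not a real value.

```typescript
// Sketch of the "audit" leg: anyone can check that a node binary matches
// the publicly reviewed source build. Assumes reproducible builds.

import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical hash published alongside the audited source release.
const PUBLISHED_SHA256 = "<hash from the audited release notes>";

function matchesAuditedBuild(binaryPath: string): boolean {
  const digest = createHash("sha256")
    .update(readFileSync(binaryPath))
    .digest("hex");
  return digest === PUBLISHED_SHA256;
}

// Usage: matchesAuditedBuild("./client-binary")
// A trusted-execution environment goes further, attesting at runtime
// that this verified code is actually what is being executed.
```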

Exit, as in "easier to fork," relates to being able to adapt code to a particular context (e.g., via GitHub-style version control) should the original context no longer be to your liking. More so: forking tends to highlight which parts of the code ~actually matter~ to the public, and to auditability.

The combination of audit and exit then creates a sort of power balance by default. Personally, I've become more and more of a fan of doing everything "200% onchain," because it enables audit and exit.

Law, Markets, and Norms are all very hard to audit, because they tend not to be fully encoded. Instead they are semi-encoded, which allows agents to capture power in the ambiguity (and then encode the bias to their advantage; see regulatory capture). Architecture tends to be easier to audit (and even then, walls are not transparent, and buildings can span larger spaces than any average Joe can effectively build monitoring tools for). But Architecture is hard to exit: physical mobility is no small task.

In any case, protocols seem to enable audit and exit by allowing strict encoding. That said, audit and exit are definitely costly, like decentralization itself. This reminds me of the blockchain trilemma of cost, safety, and scale.


DNS has a reasonably robust set of implementations, since it's widely used inside compute clusters for routing, not just on the public Internet. Most languages either have robust libraries for it or have it baked in, e.g., for node.js: DNS | Node.js v21.7.1 Documentation

I think BIND is what most, if not all, root DNS servers run: BIND 9 - ISC
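As a quick illustration of how baked-in DNS is, here is a short node.js example using the dns/promises API from the documentation linked above (the pinned server and domain are arbitrary choices):

```typescript
// DNS resolution is one import away in node.js.

import { Resolver } from "node:dns/promises";

const resolver = new Resolver();
// Optionally pin which server answers, e.g. a local cluster resolver:
resolver.setServers(["1.1.1.1"]);

const addresses = await resolver.resolve4("example.com");
console.log(addresses); // e.g. [ "93.184.215.14" ]
```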

On a related note, I was re-reading Matter from the Culture series, and there was an incident where some non-Culture AIs were taken over by a malevolent alien force, but the Culture ships were resilient: each ship's AI was grown from scratch, leading to many idiosyncrasies in the OS that made the ships very difficult to attack.


An attempt at mapping the concepts to the protocol flexibility vs cultural adaptability 2x2:

(1) Adaptive-Experimental: “Sufficient interoperability” + “The key is to minimize interoperability constraints to maximize self-maintained nodes.”

(2) Adaptive-Experimental: "Nodes light enough to be few-shot learned by a new user, by observing the input/output behaviors of existing users, and subsequently maintained by that user, can be called self-maintainable nodes, by analogy to self-custody keys: nodes that have exactly one user, who also maintains them."

(3) Tradition-Constrained: "If there is low node diversity (or client diversity, or hub diversity, or whatever the building blocks are called; I'll use node as the generic term), low diversity simply catalyzes monolith risk at the node design level."

(4) Structured-Evolving: "A protocolized court checks monarchical power and protects commoners who might be in court to appeal to the monarch for something. Modern example: healthcare protocols weaken the arbitrariness and negligence moral hazard of doctors, and empower patients."

(5) Structured-Evolving: "They are protocols that satisfy the following conditions: Node behaviors are few-shot learnable and result in the local installation of extremely light self-maintained nodes of every required type; the protocol is defined by interoperability constraints that are loose enough that all/most nodes can be maintained to be sufficiently interoperable with all/most others at low cost."

(6) Procedure-Bound: "Blockchain-like protocols have a subtler version of the monolith tendency/risk. If there is low node diversity (or client diversity, or hub diversity, or whatever the building blocks are called; I'll use node as the generic term), low diversity simply catalyzes monolith risk at the node design level."

(7) Procedure-Bound: "Unlike generic informal protocols like 'handshake,' which are simple enough that every actor essentially maintains a unique but sufficiently interoperable version of the 'node code,' blockchains are sufficiently complex that one or a few flavors of each node type become necessary, and the people building them turn into de facto third parties with some inevitable residual centralization/monolith risk."
