Blockchain Protocol Problems Braindump

The blockchain space makes heavy use of protocols, evolves rapidly and is relatively open for new observers and contributors to join. These attributes make it a fertile ground for protocol research.

This post highlights three interesting protocol-related areas of research in that domain, with an emphasis on Ethereum. These could potentially be the basis for SoP 2024 PIGs, either as-is within Ethereum (assuming good experiments can be devised around them) or as broader themes to be considered in other domains. I’ve ordered them from simplest to most complex.

Internal vs. External Functionality

Most blockchains support Turing-complete computation and can therefore “implement anything” in-protocol. That said, not everything should live inside the protocol, and not all internal functionality should be exposed. Depending on the protocol, the same functionality may be exposed differently, and a specific piece of functionality can be absorbed or carved out of a protocol over time.

For example, in Ethereum’s case, timestamps have become more deeply ingrained in the protocol after the network’s transition from proof-of-work to proof-of-stake. Although in both modes the network primarily relies on a “native” counter, the block number, for ordering, timestamps have always been included alongside it.

In proof-of-work, the constraints around timestamps were minimal: as long as a block had a larger timestamp than its parent, it was valid. Under proof-of-stake, block times are fixed, so a valid block’s timestamp is fully determined by its slot: the genesis time plus a multiple of the block time. This has allowed timestamps, which were originally introduced as convenient metadata, to become critical activation triggers for protocol upgrades.
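As a rough illustration, the two validity regimes can be sketched as follows. The constants mirror Ethereum’s beacon chain parameters, but the function names and simplified checks are illustrative, not actual client code:

```python
GENESIS_TIME = 1_606_824_023  # beacon chain genesis (Dec 1, 2020, UTC)
SLOT_TIME = 12                # seconds per slot under proof-of-stake

def valid_pow_timestamp(parent_timestamp: int, timestamp: int) -> bool:
    """Proof-of-work: a block's timestamp only had to exceed its parent's."""
    return timestamp > parent_timestamp

def valid_pos_timestamp(slot: int, timestamp: int) -> bool:
    """Proof-of-stake: the timestamp is fully determined by the slot number."""
    return timestamp == GENESIS_TIME + slot * SLOT_TIME
```

Because the proof-of-stake rule is an equality rather than an inequality, timestamps become predictable far in advance, which is what makes them usable as upgrade activation triggers.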

By contrast, the Ethereum protocol has no internal notion of ETH’s fiat-denominated exchange rate, such as the ETH/USD price. From a distance, one can make the case that both the current time and the current price are “nice to have” within the protocol itself. Yet, in practice, the Ethereum community has adopted only the former and left price reporting as a fully external concern.

More interesting cases of internalizing vs. externalizing functionality can be seen in the protocol’s most complex areas. For example, Ethereum’s scaling roadmap currently relies on Layer Two constructions which are treated like any other application by the protocol. In other words, they are fully “external”. Conversely, when specific applications grow and concerns emerge around their dominance, the Ethereum community will often bring up the idea of “internalizing” (a.k.a. “enshrining”) them.

While this has not happened on Ethereum, other blockchain ecosystems have launched with, say, enshrined token exchange contracts. Notably, Bitcoin, whose limited scripting language rules out most complex features, has an enshrined multisignature wallet feature; Ethereum instead relies on applications built on top of the protocol for this.

A middle ground between individual data points and full applications is “precompiles”. These are specially deployed contracts which provide complex functionality (e.g. cryptographic operations) at a subsidized cost relative to implementing the same functionality directly in the EVM.
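A minimal sketch of the idea, assuming a toy dispatch table: the addresses and gas costs below mirror two real Ethereum precompiles (SHA-256 and identity), but the dispatch code itself is illustrative:

```python
import hashlib

def sha256_precompile(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def identity_precompile(data: bytes) -> bytes:
    return data

# address -> (native implementation, base gas, gas per 32-byte word)
PRECOMPILES = {
    0x02: (sha256_precompile, 60, 12),
    0x04: (identity_precompile, 15, 3),
}

def call_precompile(address: int, data: bytes) -> tuple[bytes, int]:
    """Run a precompile natively and charge its fixed, subsidized gas schedule."""
    impl, base, per_word = PRECOMPILES[address]
    words = (len(data) + 31) // 32
    return impl(data), base + per_word * words
```

The key point is that the implementation runs as native code in the client, while the gas schedule is a flat formula fixed by the protocol rather than the cost of executing equivalent EVM bytecode.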

What should a protocol internalize vs. externalize? When and how? At what scale? etc. are all deeply interesting questions to explore in both the context of Ethereum and other domains.

Here are two recent blog posts by Vitalik Buterin exploring this:

Complexity Management vs. (Composable) Backwards Compatibility

Most upgrades to blockchain protocols introduce new features or functionality, for example, adding a new opcode to the Ethereum Virtual Machine. Over time, the default outcome is for the protocol to become more complex, with incremental changes needing to consider a growing number of edge cases.

In traditional software development, the problem of backwards compatibility is handled with versioning, where different versions of the same software may become incompatible with each other. While blockchains can rely on versioning to an extent, their public, composable state makes this harder.

When a program is deployed to, say, Ethereum, the default behavior is to keep it publicly accessible forever. This means that while protocol changes can restrict future applications’ behavior, it’s always possible for previously deployed code to be called. The public nature of Ethereum applications means they are “composable”: a newly deployed application A can interact with an older application B, possibly through an intermediate call to another application C, and so on.
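This “code stays callable forever” property can be illustrated with a toy registry, where all names and the registry itself are hypothetical:

```python
DEPLOYED = {}  # address -> callable; entries are never removed

def deploy(address, code):
    DEPLOYED[address] = code

def call(address, *args):
    # Any deployed code remains reachable, no matter how old it is.
    return DEPLOYED[address](*args)

# "Old" application B, deployed long ago.
deploy("B", lambda x: x * 2)
# Intermediate application C forwards to B.
deploy("C", lambda x: call("B", x) + 1)
# Newly deployed application A composes with B via C.
deploy("A", lambda x: call("C", x))
```

Deprecating B would silently change the behavior of C and A, which is why “just remove the old code” is rarely an option.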

This constraint reduces the degree to which complexity can be managed and functionality can be fully deprecated. For a specific example, see the case of SELFDESTRUCT’s deactivation in Ethereum, which was ultimately done via EIP-6780.
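The post-EIP-6780 semantics can be sketched with a simplified balance-only state model; the function and variable names here are illustrative, not actual client code:

```python
def selfdestruct(state, contract, beneficiary, created_this_tx, eip_6780=True):
    """state: dict mapping address -> balance; deletion removes the key."""
    # The balance transfer happens in all cases.
    state[beneficiary] = state.get(beneficiary, 0) + state.get(contract, 0)
    state[contract] = 0
    # Under EIP-6780, the account is only deleted if SELFDESTRUCT runs
    # in the same transaction that created the contract.
    if not eip_6780 or created_this_tx:
        del state[contract]
```

Restricting deletion to the creation transaction preserved the opcode’s remaining legitimate uses while removing the hardest-to-support behavior, which is the kind of compromise this constraint tends to force.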

Are there better ways to approach this problem? Have other protocols where “legacy users” must be accommodated managed to reduce complexity over time?

Long Term Adaptability vs. Ossification

Blockchain protocols are designed to enforce specific claims over a relatively long time horizon (see “Hardness”). At the most basic level, if someone holds a blockchain’s native asset, such as ETH or BTC, they expect the protocol to enforce this ownership “forever”. With general purpose blockchains such as Ethereum, the commitments enforced by the network are more diverse and complex. For example, if a user runs an Ethereum proof-of-stake validator, they expect the rewards and penalties they obtain to be predictable. If developers deploy an application to a blockchain, they expect its behavior to be fixed.

The simplest way to meet these expectations is to never make changes to the protocol, which is typically referred to as “ossification”. The downside of this approach is that the protocol can no longer adapt to a changing environment. For example, computer architectures and cryptography both continuously evolve, and “ossifying” the Ethereum Virtual Machine would imply that these advances are never leveraged on Ethereum, potentially weakening its value proposition.

Even if a blockchain protocol commits to never making changes which break existing functionality, the possibility of changes in and of itself creates uncertainty. What if future changes go against important norms or negatively impact “external” functionality?

As protocols grow more complex, most potential changes involve tradeoffs: they may be net improvements but almost always come at a cost somewhere. As more gets built on a protocol, the set of stakeholders affected by changes to it grows both in diversity and absolute size. Does this imply all successful blockchain protocols will ossify by default?

While there almost certainly isn’t a “silver bullet” to address this problem, what are better and worse ways to think about it? Have other protocols successfully navigated this tradeoff over long periods of time? Are there short experiments that could be done to “speedrun” a protocol going through its full lifecycle?


Are you looking for a collaborator? Maybe I could also work on this topic. Your proposal seems to be a purely theoretical discussion.

Do you consider applying for a grant to research on this?

Tim runs the program along with me. He is offering these as application suggestions.