The Questions of Protocol Scales

TL;DR: Proposing a “protocol-Hilbert” problem focused on the challenge of designing protocols that operate effectively across various scales (temporal, spatial, complexity, etc.) while remaining comprehensible and manageable for humans.

Disclaimer: I’m not part of Summer of Protocols. If this message is not appropriate, please remove.


Intro

Motivated by the post Hilbert Program/Problems for Protocols?, I would like to propose a possible “protocol-Hilbert” problem (or set of problems): the questions of scales (note the plurals).

In the context of protocols, “scale” refers to the various dimensions along which systems operate and interact with humans: temporal, spatial, complexity, informational, organizational, energetic, cognitive, …

Implicit in this is that we humans have direct access to only a limited range along each of those dimensions.

Examples

  • Temporal scale: Time-lapse videos of plants growing show how speeding up slow processes lets us perceive changes that are invisible in real time, while videos played too fast blur into noise. An example of change too slow to notice is language evolution: most people perceive ongoing change as “speaking wrong” because they think of language as static. For changes too fast to process, microsecond operations in financial markets blur into incomprehensibility when watched in real time; only “in slow motion”, analyzing the logs after the fact, can we understand what happened. [EDITED]
  • Complexity scale: Legal systems often reach levels of intricacy that no single human can fully comprehend, yet they must be implemented and followed by individuals and organizations. [EDIT, I missed the other extreme:] When protocols are too simple (e.g., waiting lines), we may not notice that there is a protocol at all, let alone one we could change.

Examples for other dimensions are left as an exercise for the reader.

Problem Statement

As systems and protocols increasingly operate at scales beyond human perception and cognition, how can we design protocols that remain effective, comprehensible, and manageable across all relevant dimensions of scale (temporal, spatial, complexity, etc.) while maintaining human agency and understanding?

Ideas to explore

  • Fractal Protocols: Designing protocols that maintain similar structures and principles across different scales.
  • Scale-Bridging Interfaces: Developing intermediary systems that translate between human-scale interactions and larger or smaller scale processes (a minimal sketch follows this list).
  • Adaptive Complexity: Creating protocols that can dynamically adjust their complexity based on the scale of interaction or the user’s capacity.
  • Scale-Invariant Metrics: Identifying or developing measures of protocol effectiveness that remain meaningful across different scales.
  • Perceptual Augmentation: Exploring ways to enhance human perception and cognition to better interact with protocols at different scales. How does inequality factor into this?
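
To make “Scale-Bridging Interfaces” (and, to a lesser degree, “Adaptive Complexity”) more concrete, here is a minimal Python sketch. Everything in it (the `ScaleBridge` name, the event shape, the window-based summarization) is my own illustrative assumption, not an established design:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "scale bridge" that re-renders the same underlying
# events at a level of detail matched to the observer's time scale.

@dataclass
class Event:
    timestamp_us: float  # event time in microseconds
    description: str

class ScaleBridge:
    """Aggregates machine-scale events into observer-scale summaries."""

    def __init__(self, events: list[Event]):
        self.events = sorted(events, key=lambda e: e.timestamp_us)

    def summarize(self, window_us: float) -> list[str]:
        """Group events into windows of `window_us` microseconds.

        A human reviewing a trading log might pass window_us=1_000_000
        (one second); an automated auditor might pass window_us=1.
        """
        if not self.events:
            return []
        summaries: list[str] = []
        bucket: list[Event] = []
        start = self.events[0].timestamp_us
        for e in self.events:
            if e.timestamp_us - start >= window_us:
                summaries.append(f"{start:.0f}us..+{window_us:.0f}us: {len(bucket)} events")
                bucket, start = [], e.timestamp_us
            bucket.append(e)
        summaries.append(f"{start:.0f}us..+{window_us:.0f}us: {len(bucket)} events")
        return summaries

# Usage: 20,000 microsecond-scale events collapse into 5 one-second lines.
bridge = ScaleBridge([Event(t, "order") for t in range(0, 5_000_000, 250)])
print(bridge.summarize(window_us=1_000_000))
```

The design choice worth noting: the underlying events never change, only the rendering resolution does, so the same protocol trace can serve both machine-speed auditing and human-speed review.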

Questions to consider

  • What are the fundamental limits of human perception and cognition in relation to different scales of system operation?
  • What are the ethical implications of protocols that operate beyond human perceptual and cognitive limits? Do countries, corporations, or social groups have agency or ethics of their own?
  • Can we develop a framework for translating between human-scale processes and system-scale dynamics?
  • How can we ensure accountability and governance for protocols that span multiple scales of operation?

See also

  • Timothy Morton’s idea of hyperobjects. From Wikipedia: “objects that are so massively distributed in time and space as to transcend spatiotemporal specificity, such as global warming, styrofoam, and radioactive plutonium.”
  • The excerpt “Pattern Recognition” from McLuhan’s 1967 talks, where I found the idea that accelerating certain processes enables and facilitates their perception.
  • With ergod’s help, I explored the more “doomish” aspects of accelerating the temporal scale in the comments of the post Cryonics law.

I tried to relate the question to some of Hilbert’s problems. The first one (the continuum hypothesis *) concerns different scales of infinity, but the connection seemed too forced. Cantor’s theorem ** fits a bit better, because it provides a method for generating “infinite infinities” (infinitely many qualitatively different scales?), but it is not one of Hilbert’s problems and I think it confuses more than it helps. [Added in an EDIT]

* Is there a set whose cardinality is strictly greater than that of the integers but smaller than that of the reals?
** For any set A, the set of all subsets of A (the power set of A) has a strictly greater cardinality than A itself.
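
For reference, both statements in standard notation (textbook set theory, nothing specific to protocols):

```latex
% Continuum hypothesis (Hilbert's first problem): no set lies strictly
% between the naturals and the reals in cardinality.
\[
  \neg \exists S : \; \aleph_0 = |\mathbb{N}| < |S| < |\mathbb{R}| = 2^{\aleph_0}
\]
% Cantor's theorem: the power set is always strictly larger, so iterating
% it generates an endless ladder of ever-larger infinities.
\[
  \forall A : \; |A| < |\mathcal{P}(A)| < |\mathcal{P}(\mathcal{P}(A))| < \cdots
\]
```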

Disclaimer-2: This post was written assisted by the use of Claude.ai.


From John Cutler, TBM 314: Using Enabling Constraints for Situational Awareness

Assuming for a moment that some incoherence and inefficiency will always accompany rapid growth and scale, one has to ask: “Is there anything you can do about it? (…)”

While we can’t prevent this dynamic completely, I think you can do things as a team (…).
An area I’ve been thinking a lot about is the role of enabling constraints in making sense of potentially incoherent situations during periods of rapid growth, uncertainty, etc. (…)

Let’s start with what we tend to observe during periods of rapid growth:

  • Hiring rapidly, including throwing people at problems instead of more graceful fixes
  • Spinning up new initiatives quickly, with minimal oversight
  • An exponential increase in dependencies, process overhead, etc.
  • Focusing on either near-term revenue or lofty, far-off “innovation.”
  • Feedback loops degrade, factions form. Breakdown in systems, tools, etc.
  • Rapid information/context loss (people leaving and the % of people who lack core context).

(…)

Some examples of potentially graceful enabling constraints:

  • Some set of gates to prevent teams from hiring people to patch problems. You must battle the perverse incentives for large teams.
  • Any guardrails around the customer experience, outages, onboarding times, etc. Trip the guardrail—pay attention. For teams to show up at product review, they need to have nominated ~5 core health metrics (along with thresholds) that they revisit regularly.
  • Anything that can help limit work in progress. One option might be a combination of a WIP limit and a “finish before you start” policy for large, high-dependency projects. You might also limit when certain types of efforts can start quarterly to have fewer “whoops, we somehow agreed on that mid-quarter without anyone weighing in” effects.
  • Whenever you have an initiative involving more than N people and >N teams, you must run it through a more rigorous, multi-perspective vetting process. When a team is pulled between more than N efforts, it triggers an immediate “tie resolution” meeting, whereby they get outside support to help them decide on a single effort to focus on.
  • Do a mandatory, well-facilitated retro on things involving a certain amount of $ and capacity. Invite execs.
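
As a toy illustration of the WIP-limit plus “finish before you start” constraint quoted above (my sketch, not Cutler’s; the class name and limit are hypothetical):

```python
class WipGate:
    """Toy enabling constraint: refuse new large projects at the WIP limit."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.in_progress: set[str] = set()

    def try_start(self, project: str) -> bool:
        # "Finish before you start": reject new work at the limit
        # instead of silently queueing it.
        if len(self.in_progress) >= self.wip_limit:
            return False
        self.in_progress.add(project)
        return True

    def finish(self, project: str) -> None:
        self.in_progress.discard(project)

gate = WipGate(wip_limit=2)
assert gate.try_start("billing-migration")
assert gate.try_start("new-onboarding")
assert not gate.try_start("ai-initiative")  # blocked until something finishes
gate.finish("billing-migration")
assert gate.try_start("ai-initiative")      # now allowed
```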

I’m reminded of Reid Hoffman’s point in Blitzscaling that every problem has to be solved anew at every scale.

My null hypothesis is that protocols spanning more than a couple of space-time scales rapidly accumulate complexity that overwhelms their value, which grows much more slowly.
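
One toy way to state that hypothesis (the functional forms below are my own illustrative assumption, not a measured claim):

```latex
% Illustrative model only: n = number of space-time scales spanned.
% Complexity assumed to grow exponentially, value only linearly.
\[
  C(n) = c\,e^{\beta n}, \qquad V(n) = v\,n, \qquad
  \mathrm{Net}(n) = V(n) - C(n)
\]
% Setting V'(n) = C'(n) gives the peak of net value:
\[
  n^{*} = \frac{1}{\beta}\,\ln\frac{v}{c\,\beta}
\]
% i.e. under these assumed forms, net value peaks after only a few
% scales before complexity dominates.
```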
