I woke up with a thought recently: Does protocolization emerge at a specific scale? Like if a pattern or issue persists for long enough across enough people? And if so, is there some kind of minimum boundary? Furthermore, is there an upper boundary beyond which something other than protocolization happens?
This initial question now seems malformed, but the kernel of the idea is about the relationship between protocolization and scale. Scale may mean time, participants, space, etc.
I pitched this question to a few of the initial Summer of Protocols participants and we had a good conversation.
Seth, on a requirement of awareness and a reframing to opportunity:
S: “I think your intuition about protocols being associated with certain scales is spot on, but I don’t think protocols necessarily auto-emerge based on reaching a scale threshold. I think about it more as a protocol opportunity. But action/designed protocol requires awareness of things being suboptimal… But while there’s a vague awareness that things suck, there’s almost never actionable categories into which they can productively direct effort. There is margin that can be captured. Protocol is predicated on awareness of things being off. Er, protocolization as response. But this is often missing, e.g. think of people in line at a sandwich shop in an airport, spilling into the concourse and blocking traffic. Easy protocol response, but no awareness, or maybe no sense of agency. Anyway I’d bet money there are formal relationships that follow familiar models for when protocolization shows up in a domain.”
V: “No stake. A queue customer who tries to do something will be labeled a Karen (kinda correctly). An employee herding the queue will have the legitimacy.”
Venkat and Seth, on protocolization effects and finding the sweet spot:
V: “‘Protocolization effects’ seems worthy of a glossary entry… Suggested definition: when the failure rate of a system scales sublinearly with throughput. Normally it would scale linearly or superlinearly. So on a per-passenger-mile basis, air travel even on a Boeing 737 MAX is safer than driving.”
S: “That might sound trivial but in my experience the opposite is true—orgs often fail to scale either from inadequate protocolization, or (more often over time) too much protocolization.”
V: "Moore’s law is a good analogy. It’s true but doesn’t happen automatically. Thousands of engineers and physicists work really hard to apply new scaling ideas and shrink previous node ideas. And integrated circuits are protocolized circuitry. The tighter you pack, the more systematic the design. Right now we’re doing chiplets, which are kinda “Lego for chips”
Seth, Venkat, and Rafa, on protocolization boundaries:
S: “As far as what happens at the max boundary, I think that’s just end-state ossification. You’re all-in on some market assumptions, and eventually those shift and you are hoisted by your own petard.”
V: “Protocolizing the nth level to go to the n+1 level has to happen at the optimal point. Too early or too late and the stack will be unstable. This is partly what happened to Intel with Moore’s law. They mistimed their use of some scaling levers and broke their scaling. Then TSMC overtook them. E.g., arguably they switched from copper to cobalt conductors a bit too early, which was part of why their 10nm process broke.” (See: “Intel 4 Process Scales Logic with Design, Materials, and EUV.”)
S: “Very strongly agree. What is often lost (and captured here) is not just a sense of progression through time, but the relational sense of protocol, metric, and domain (e.g. chip production) over time. Over time, each element of this loop affects the others.”
R: “I think maybe at that end is oozification per Venkat’s definition, unless it’s broken down into specialized protocols. Like, I think there’s something of a specific protocolization level that supports a specific scale? And if there’s a mismatch, you end up in a weird spot (e.g. trapped or oozed).”
V: " “Sufficiently advanced protocols are indistinguishable from ooze… Though… as the stack grows it’s the lowest, oldest layers that oozify the most"