The wilderness boundary in protocol-space

Been thinking about two propositions:

Proposition 1: Protocols are the basis for a behavioral dual of structural ontologies of the world

Proposition 2: The fallacious structural boundary between “natural” and “artificial” (or, equivalently, “wilderness” and “civilization”) is somewhere between weak and absent in protocol-space.

I.e., protocols are the categorical extension of “natural laws” into “artificial” or “civilized” structural spaces.

The propositions suggest that protocols serve as a bridge between natural laws and artificial constructs, blurring the line between what we consider natural and artificial. While protocols may draw inspiration from or attempt to model natural phenomena, their potential to cause harm when misapplied, I would argue, sets them apart.

As a counter-proposition, I’ll introduce the concept of protocol-induced harm (p-iatrogenesis), borrowing from the notion of e-iatrogenesis in healthcare. E-iatrogenesis refers to patient harm caused by the application of health information technology.

P-iatrogenesis draws attention to the potential for protocols to cause harm, a characteristic not associated with natural laws. Natural laws, such as gravity or thermodynamics, operate consistently and universally, independent of human intervention or interpretation. In contrast, protocols, whether in computer science or human systems, can fail, be misapplied, or produce unintended consequences. For instance, rigidly applied bureaucratic protocols can lead to unfair outcomes when context and individual circumstances are not considered, as in the sketch below.
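To make that failure mode concrete, here is a minimal sketch in Python. The names (`Applicant`, `benefits_protocol`, the income ceiling) are hypothetical, invented purely for illustration: the protocol executes exactly as specified, yet harms an applicant whose circumstances it was never designed to see.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    documented_income: float  # the only field the protocol consults
    actual_need: float        # context the protocol never sees

def benefits_protocol(applicant: Applicant) -> bool:
    """A rigid protocol: one uniform rule, no appeal path, no context."""
    INCOME_CEILING = 20_000.0
    return applicant.documented_income < INCOME_CEILING

# An applicant whose documented income overstates their real situation:
edge_case = Applicant("informal worker",
                      documented_income=21_000.0,
                      actual_need=0.9)

# The protocol executes flawlessly and still produces harm; the harm
# lives in the gap between the rule and the unmodeled context.
print(benefits_protocol(edge_case))  # False: denied despite high actual need
```

The point of the toy example is that nothing malfunctions: the harm is a property of the protocol's scope, not of any bug, which is what distinguishes p-iatrogenesis from ordinary failure.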

As we move towards more complex, AI-mediated protocols, I would argue that keeping this distinction in mind will be essential to minimizing p-iatrogenesis.


I’m not sure I buy that natural laws are somehow morally neutral. I’ve always held that the universe is slightly evil, and that in fact the best way for humans and human tech to be is slightly evil.

Your reply reminded me of the concept of pharmakos or pharmakotocols. However, the idea of the universe or technology being “slightly evil” seems to create a false dichotomy.

I’m curious about how this relates to the concept of p-iatrogenesis I mentioned earlier. If we accept that our tech, including both human-devised and AI-mediated protocols, might inherently carry a touch of “evil”, how does that change our approach to mitigating unintended consequences? Does it make us more vigilant, or could it lead to a kind of fatalism?

I’m also wondering about the anthropocentric nature of ascribing moral qualities like “evil” to natural laws or AI systems. Are we perhaps limiting our understanding by viewing these through such a human-centric lens?

In a way, this seems related to some of the ideas in Heidegger’s “The Question Concerning Technology”, where he argues technology is neither good nor bad, but rather a way of revealing the world. Perhaps what you’re suggesting is that this revelation isn’t always comfortable or aligned with human values?

I’m also curious about the practical implications of your view. How do you think embracing this idea of being “slightly evil” impacts our protological approaches? Could it lead to more robust systems, or might it risk becoming a self-fulfilling prophecy?

It’s partly tongue-in-cheek for me, a check against the tendency towards utopianism or dystopianism.

It’s not anthropocentrism the way I use it. It’s simply a literary-figurative technique for exploring the nature of those systems.


Phew, I am going to have to sit with this. Big if true!