ROTW: Neither project nor platform, but a third, more interstitial thing

I’m coming off the weekend having wrapped up Object-Oriented Ontology: A New Theory of Everything by Graham Harman, and have been thinking a lot about how close he comes to describing protocols.

The book is definitely too long for ROTW, but it reminded me of a great article by Dominic Hofstetter on how complex systems and challenges require something more than just projects and programs…something that instead takes up, and becomes, the space in between these things.

In the second half of the book, Harman references Niklas Luhmann, a German sociologist and philosopher perhaps best known now for the zettelkasten method of note-taking he popularized, but also for his systems theory. In Luhmann’s version of systems theory (there are many others), he posits that the most basic unit of social systems isn’t the individual, but the communication (read as: protocols?) between individuals.

Reading Hofstetter’s article with Harman’s and Luhmann’s thoughts (and war/genocide) in the background brought up some thoughts on agent-centric architecture, and on how it might become possible if protocol thinking were employed well:

We currently live in a very data-centric world. Blockchain is a perfect example of a data-centric ontology, as every node needs to agree on a shared global state. Data-centric architecture requires that there is a single room temperature, even if I’m sitting by the air conditioner and am cold, while the person across from me sits by the sunny window and feels warm.

The question that comes up for me is whether protocolization can help drive agent-centric ontologies. Each person, node, etc. experiences a reality/truth determined by their position in the social system and the linkages between them; each is therefore experiencing a different, but equally valid, perspective of reality. Can we create social and technological infrastructure that accurately reflects our reality of multiple truths and the numerous local states that exist? This is something I think can happen naturally if we shift our focus to the spaces in between things – how things are linked and communicating – rather than the things themselves.
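To make the contrast concrete, here is a minimal, hypothetical sketch (no particular blockchain or framework is assumed, and all the names are mine) of the two ontologies: a data-centric design that forces one global room temperature, versus an agent-centric one where each node keeps an equally valid local reading and only messages cross between nodes.

```python
from dataclasses import dataclass

# Data-centric: every participant must converge on one shared global value,
# the way every blockchain node must agree on a single global state.
class DataCentricRoom:
    def __init__(self) -> None:
        self.temperature_c = 21.0  # the single "true" temperature all nodes must accept

# Agent-centric: each agent keeps a local reading shaped by its position;
# what crosses between agents is communication, not a shared global record.
@dataclass
class Agent:
    name: str
    position: str
    felt_temperature_c: float  # local state, valid for this agent's vantage point

    def report(self) -> dict:
        return {"from": self.name, "at": self.position, "feels_c": self.felt_temperature_c}

room = DataCentricRoom()
print(f"data-centric: everyone must agree it is {room.temperature_c} C")

agents = [
    Agent("A", "by the air conditioner", 18.5),
    Agent("B", "by the sunny window", 24.0),
]

# No reconciliation into one number: the system's picture of the room is just
# the set of situated reports and the links that carry them.
for agent in agents:
    print(f"agent-centric: {agent.report()}")
```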

There are also some good thoughts in the article about the kinds of projects and funding required to deal with complex challenges in complex systems. Messiness, or as Hofstetter calls it, “fuzziness”, isn’t a bug or a negative externality in the process of finding a solution, or even within the solution itself; it’s a requirement.


@keikreutler’s research has me wondering whether, for certain datasets completely covered by the memories of a set of agents, agents and data are basically interchangeable.

I got to a parallel conclusion by considering the intelligence of an LLM to be a function of its data rather than its processing.


I think the presence of attention and weights makes equating agents with data more difficult. Data is more like conscious and unconscious experience, and agents are more like memory, attributing varied attention to that experience with a limited ability to alter that attention.

To note, I guess it could easily be said that weights are part of data, but I would argue that they become agential only through processing.

I’ve been soft-exploring the concepts of place and space for a while now. Reading across architecture and philosophy, and drawing on my own scientific and professional work, I’ve found a structure starting to crystallize:

Places: Anything that has a boundary and a function, and that enables exploration (utility) of an area, can be thought of as a place. Places come together, like nodes in a graph, to describe the area that can be explored through their functions. So a garage is a place where the space of car maintenance is explored, but it is limited by the tools (places) available in the garage.

Space: The area being explored. Exploration of a space is really only defined by the places that surround it and are able to address it. Spaces themselves are usually much bigger (and more nebulous) than their explored regions. One could think of spaces as unexplored potential, which upon exploration becomes a place and returns utility.

Staplers and the space of binding paper: As a very simple example, a stapler is a place that enables the exploration of the space of binding papers together. It does this specifically by using a sharp pin that folds. Because of its inherent limitations it doesn’t allow you to explore ALL the ways that paper can be bound, so one could say that a stapler enables the exploration of the space of binding papers with folded sharp pins. If the stapler is broken, or in a world where paper doesn’t exist, the stapler is not a place, unless redefined, perhaps as a weapon.
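One way to see the structure is as a tiny graph-like model. This is just a hypothetical sketch of the idea (the Place/Space types and the explore method are my inventions, not a formal ontology):

```python
from dataclasses import dataclass

@dataclass
class Space:
    """An area of unexplored potential, defined only by the places that address it."""
    name: str

@dataclass
class Place:
    """A bounded node whose function explores (returns utility from) a space."""
    name: str
    space: Space   # the space this place addresses
    boundary: str  # the inherent limitation on how this place can explore it

    def explore(self) -> str:
        # A stapler explores "binding paper", but only the region of that
        # space reachable with folded sharp pins.
        return f"{self.name} explores '{self.space.name}' within '{self.boundary}'"

binding = Space("binding papers together")
stapler = Place("stapler", binding, "folded sharp pins")
print(stapler.explore())

# If the paper disappears (or the stapler breaks), the node stops being a
# place for that space unless it is redefined against another space.
weaponry = Space("inflicting harm")
stapler_as_weapon = Place("stapler", weaponry, "blunt heavy object")
print(stapler_as_weapon.explore())
```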

In the context of agent-centric ontologies: Every agent is a place, and their individual background and experiences enable the exploration of a space, like the space of experiencing temperature in a given room.

In the context of machine learning models: Every model is a place defined by the datapoints that go into making it. Each datapoint is also a place in itself. One could think of a model as a rough map of a place based on limited data. The model allows the exploration of the space within its bounds, and a slightly different model allows a slightly different mode of exploration. Think of the different rocket and spaceship designs across the world and the functions they enable in the exploration of the universe.

Combining the agent-centric and the machine learning views: Agents communicating in the world are a lot like ensemble models in machine learning, where the predictions (or sometimes the weights) of different models are combined to get a better predictor. I wouldn’t say that communication is the basic unit of social systems, because social systems enable the exploration of the space of prosperity (or survival), and that exploration is really a function of the places in that social system. So having specifically trained people, machines, etc. drives the utility of the social system in a way that simply communicating would not.
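For concreteness, here is a minimal ensemble sketch (hand-rolled models, no library assumed; it uses prediction averaging, the most common ensemble form). Each “agent” model occupies a slightly different place, and combining them tends to explore the space better than any one alone.

```python
import random

random.seed(0)

# The underlying relationship the "agents" are trying to learn: y = 2x + 1.
def truth(x: float) -> float:
    return 2.0 * x + 1.0

# Each agent is a slightly different linear model: the same space explored
# from a slightly different place.
class LinearAgent:
    def __init__(self) -> None:
        self.w = 2.0 + random.uniform(-0.5, 0.5)
        self.b = 1.0 + random.uniform(-0.5, 0.5)

    def predict(self, x: float) -> float:
        return self.w * x + self.b

agents = [LinearAgent() for _ in range(10)]

def ensemble_predict(x: float) -> float:
    # Averaging the agents' predictions tends to cancel out their
    # individual idiosyncratic errors.
    return sum(a.predict(x) for a in agents) / len(agents)

x = 3.0
single_errors = [abs(a.predict(x) - truth(x)) for a in agents]
print(f"mean single-agent error: {sum(single_errors) / len(single_errors):.3f}")
print(f"ensemble error:          {abs(ensemble_predict(x) - truth(x)):.3f}")
```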

At some level this whole thing decomposes into the parable of the blind men and the elephant, at which point one must ask what the utility is of the elephant (the space) being explored.

Thanks for bringing up Luhmann. I think his theory is very interesting when it comes to protocols, but I’m not sure that his concept of communication can be treated as such.

In Luhmann’s systems theory, which is based on George Spencer-Brown’s calculus of indications, communications are understood as the unity of the difference between utterance, information, and understanding. Importantly, information is such only when it brings something new (in line with Bateson’s “difference that makes a difference” and the law of calling from the calculus of indications, and also not very far from Shannon’s amount of surprise). Communications communicate in an autopoietic network that creates the system’s identity and the distinction between system and environment. People are structurally coupled with the social systems they enable, but they are not part of those social systems; they are part of their environment. There are specific social systems, like organizations, which are created by a closed network of a specific kind of communication: decisions. Decisions are by their nature paradoxical, since they communicate what they exclude, and they are such only if used as the premise for future decisions. There is also the insightful idea that decisions both happen in time and create time, as the distinction between before and after a decision is made.
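As a small aside, the “amount of surprise” mentioned above has a standard formalization in information theory. The surprisal of an event $x$ with probability $p(x)$ is

$$I(x) = -\log_2 p(x)$$

so a fully expected event ($p(x) = 1$) carries zero information, which matches the idea that information is only information when it brings something new.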

This rough sketch doesn’t really do justice to the depth and rigor of Luhmann’s theory (not to mention its volume: 1,464 works, of which over 70 are published books), but I thought it might bring some value as clarification and hopefully trigger more curiosity.