See, if you go to a regular trusts and estates lawyer, she will ask you questions like “if your spouse and children die before you, whom do you want to inherit your estate,” but if you go to a science fiction trusts and estates lawyer, she will ask you questions like “if your frozen head cannot be attached to a fresh body and reanimated in 200 years, but your consciousness can be cloned in a computer simulation, would you like your estate to go to the cloned consciousness or stay with the frozen head?”
(…)
questions that at times seem more like prompts in a philosophy class.
Can money live indefinitely?
Are you dead if your body is cryonically preserved?
Are you considered revived if you have only your brain?
And if you’re revived, are you the same person?
EDIT: I realized it may not be obvious why I see this as related to protocols.
The broader topic would be “the impact of technology on protocols”. In a sense, this happens all the time, but one can also consider that some technologies are so disruptive (such as cryogenic resurrection or consciousness cloning) that they seem like a different topic, beyond the usual adaptations.
Problem-solving agents will try to advance solutions to a particular problem that other entities will benefit from. However, complications arise when it is not clear what to do when the flows of two solutions collide.
from You Never Truly Solve a Problem - Thoughfolio (symplectic.link).
In the sense that we have two types of tech, cryonic preservation and consciousness transfer, and since both are viable ways of persisting and branch into two flows, it is not clear which takes precedence in terms of inheritance.
I agree, it fits the framework: you would need something to coordinate the two conflicting strategies.
Reading your post, I realized that you don’t need a specific technology to be extremely disruptive; the rate of change of the system could be very fast without any single highly disruptive instance:
If you want to solve a problem in an open world, your main challenge is accounting for its ever-changing nature.
It’s obvious that a faster-changing environment makes it more difficult to adapt. The question is: for real human societies, are there any tipping points in the speed of change?
This is a quote from a post on Maven (I posted there mainly to test the vibes of the site):
One extreme interpretation would be that the speed of progress has accelerated so much that not only are we unable to catch up (develop mental models about what has happened before “the next thing” happens), but the more time passes, the further behind our mental models lag.
Yes, I think I see it as: if the environment changes too fast (and “too fast” is not only about physical speed), a problem-solving agent lags when it fails to detect or internalize the change or its effects.
We can have situations where:
We don’t detect that something changed. Maybe it’s too early.
Something changed but we can’t pinpoint the cause. We detected some weird effect that suggests something shifted.
We know the cause, but our modelling fails to explain the first-order effects.
A problem-solving agent works this way: (1) it needs an up-to-date model, (2) it tries to describe the problematic situation, and (3) it starts solving using (1) and (2).
A PSA first needs to internalize the change and reflect it in its understanding (which acts as the lens through which it views the world), and see how that fits with many of our alignments; only then will it be able to detect the problems. Otherwise, there is no way to see them coming.
That is why I mentioned sense-making agents, whose role is to go deep into interpreting the changes and suggest mental models to work with. They are just specialized problem-solvers with the particular task of generating the mental models that other PSAs then work with.
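To make those three steps concrete, here is a minimal Python sketch of the loop, with a sense-making agent as the model generator. All names here (SenseMaker, ProblemSolvingAgent, update_model, and so on) are hypothetical, invented only for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SenseMaker:
    """Specialized problem-solver whose only task is generating mental models."""
    def interpret(self, changes: list[str]) -> dict:
        # In reality this is the hard part: turning raw observations
        # into a model that other PSAs can work with.
        return {"known_changes": list(changes)}

@dataclass
class ProblemSolvingAgent:
    model: dict = field(default_factory=dict)

    def update_model(self, sense_maker: SenseMaker, changes: list[str]) -> None:
        # Step (1): internalize the change first; the model is the lens
        # through which the agent views the world.
        self.model = sense_maker.interpret(changes)

    def describe_problem(self, observation: str) -> str | None:
        # Step (2): a problem is only visible if the model covers it.
        if observation in self.model.get("known_changes", []):
            return f"problem related to: {observation}"
        return None  # the agent lags: the change was never internalized

    def solve(self, problem: str) -> str:
        # Step (3): solve using the model from (1) and the description from (2).
        return f"tentative solution for '{problem}'"

agent = ProblemSolvingAgent()
agent.update_model(SenseMaker(), ["consciousness cloning"])
desc = agent.describe_problem("consciousness cloning")
print(agent.solve(desc) if desc else "change not internalized: problem invisible")
```

The point the sketch tries to capture is the ordering: if update_model is skipped, describe_problem returns nothing, and the solving step never even starts.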
Speaking of “fast-changing environments”, I have this idea of the world as an “all-encompassing” problem-solving entity, where environmental changes could be viewed as shifts in its understanding, the same way we experience shifts upon digesting a new idea that has the capacity to alter many of our beliefs. In the same way, the world’s beliefs are like “local invariants” that cease to hold once the locality has been altered. So a fast or slow changing environment just tells us about the current sensitivity of the world, a partial view into its “mental” model.
This is helping me understand what I’m really trying to address here. It’s related to “quantitative changes become qualitative when the quantity is large enough”. And I’m talking about the “change in the speed of change”, not “changes in technology”. If the world becomes slightly “faster” (or slower) in its rate of change, there are ways to deal with that change. But we are bounded by the scales at which we operate (space and time scales), and what lies above and below those thresholds is not accessible to us.
McLuhan’s idea * is that we cannot perceive processes that are too slow; it’s like those time-lapse videos of plants growing. We need to speed the video up to “hours per second” to see how the plants move. It’s a quantitative change (the video just plays faster) that produces a qualitative change (it lets us see what we weren’t able to see before).
* linked in John Robb’s first tweet from the previous message - EDIT: link changed to an archived version because it’s broken right now.
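To put toy numbers on that quantitative-to-qualitative jump (both rates below are made-up assumptions, not measurements): a plant growing at 1 mm per hour moves at roughly 0.0003 mm/s, far below anything we would notice, but at a time-lapse of one hour per second the apparent speed becomes 1 mm/s and crosses the threshold:

```python
# Toy numbers only: an assumed growth rate and an assumed perception threshold.
growth_rate_mm_per_hour = 1.0     # assumed real-world plant motion
perception_threshold_mm_s = 0.1   # assumed slowest motion we can notice

for speedup in (1, 60, 3600):     # 1x, one minute per second, one hour per second
    apparent_mm_s = growth_rate_mm_per_hour / 3600 * speedup
    print(f"{speedup:>5}x playback: {apparent_mm_s:.4f} mm/s, "
          f"visible: {apparent_mm_s >= perception_threshold_mm_s}")

# Only the 3600x playback crosses the threshold: the same process, merely
# played faster (a quantitative change), becomes perceivable (a qualitative one).
```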
I’m thinking about the other extreme, when changes become so fast that they go beyond our “fastest rate to process changes” and it all becomes blurry, with no signs of slowing down. The problem here is not any specific change and how we adapt to it. The problem is the ever-increasing part of reality that we have no time to process.
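One crude way to picture that ever-growing unprocessed part: if changes arrive faster than an agent can process them, the backlog grows without bound, no matter how the agent schedules its work. A minimal sketch, with arbitrary rates chosen only for illustration:

```python
def backlog_over_time(arrival: float, capacity: float, steps: int) -> list[float]:
    """Unprocessed changes per step when `arrival` new changes appear
    and at most `capacity` of them can be processed each step."""
    backlog, history = 0.0, []
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival - capacity)
        history.append(backlog)
    return history

print(backlog_over_time(arrival=3.0, capacity=5.0, steps=5))  # stays at 0: we keep up
print(backlog_over_time(arrival=5.0, capacity=3.0, steps=5))  # grows by 2 each step: the lag compounds
```

Below the capacity threshold, any temporary lag drains away; above it, the unprocessed part of reality grows linearly forever, which is the blurriness described above.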