Your reply reminded me of the concept of the pharmakos — or, in this context, “pharmakotocols”. That said, the idea of the universe or technology being “slightly evil” seems to set up a false dichotomy between neutral and malevolent systems.
I’m curious how this relates to the concept of p-iatrogenesis I mentioned earlier. If we accept that our technology, including both human-devised and AI-mediated protocols, might inherently carry a touch of “evil”, how does that change our approach to mitigating unintended consequences? Does it make us more vigilant, or could it slide into a kind of fatalism?
I’m also wondering about the anthropocentric nature of ascribing moral qualities like “evil” to natural laws or AI systems. Are we perhaps limiting our understanding by viewing these through such a human-centric lens?
In a way, this seems related to some of the ideas in Heidegger’s “The Question Concerning Technology”, where he argues that the essence of technology is not itself anything technological, but rather a mode of revealing the world. Perhaps what you’re suggesting is that this revelation isn’t always comfortable or aligned with human values?
Finally, I’m curious about the practical implications of your view. How do you think embracing this idea of being “slightly evil” would affect our protological approaches? Could it lead to more robust systems, or might it risk becoming a self-fulfilling prophecy?