PIG: Autonomous Realities



“A dream you dream alone is only a dream. A dream you dream together is reality.”
― Yoko Ono

Apple’s release of the Vision Pro marks the beginning of the spatial computing era. The increasing prominence of Mixed Reality (MR) head-mounted displays (HMDs) suggests a future where MR HMDs are as ubiquitous as smartphones [1]. In this protocol research work, we propose three interconnected studies in three chapters, exploring how human society, spatial computing, and programmable cryptography intertwine to shape our future reality.

Hyper-Reality (2016) by Keiichi Matsuda

Chapter 1: “A Dream You Dream Alone is Only a Dream”

In this chapter, we aim to predict the near-term future of the spatial computing era and develop a media theory called “Permissionless Realities” that reflects the inherent nature of mixed reality media [2]. The concept aligns with the permissionless character of blockchain. We envision a future where reality becomes asymmetric, layered with numerous entangled mixed realities, all without needing anyone’s permission [3]. Proponents of the blockchain world hold that “the protocol is permissionless”; yet, based on our literature review, mixed reality media has not been theorized from such a protocolized, permissionless angle. The theory stems from observations made during my previous CHI work, MOFA, a magic-dueling AR game played on public streets [4]. We plan to explore this under-researched area further and summarize our findings in an academic paper that shares our new media theory with the digital art or SIGGRAPH community.

Multiplayer Omnipresent Fighting Arena, i.e. MOFA.ar (2023) by Botao Amber Hu

Chapter 2: “Allow Me Into Your Dream: From Dream Alone to Dream Together”

Chapter 1’s media theory posits that “reality is asymmetric”. Typically, users of Mixed Reality (MR) Head-Mounted Displays (HMDs) use Augmented Reality (AR) apps privately, perceiving their own version of reality without requiring others’ approval. But what happens when two HMD users meet in person? How do they share their mixed reality? Inspired by the “Secret Handshake” of gangster culture, we propose an embodied protocol called “FingerSync”. It allows two users to synchronize and share their mixed realities simply by touching fingertips, akin to the famous Sistine Touch, thus creating layered mixed realities (imagine, for example, a shared “Wizarding World”). It may sound like unbelievable magic, but we have indeed created a prototype with two Vision Pros. The final deliverable of this chapter will be an academic paper submitted to the SIGCHI community.
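As a rough illustration of how a fingertip touch could bootstrap a shared session, here is a minimal sketch in Python. All names, the 0.25 s threshold, and the hash-based session-ID derivation are our assumptions for illustration, not the actual Vision Pro prototype: each headset reports the moment it detects the touch, near-simultaneous reports are treated as one handshake, and both sides derive the same session identifier.

```python
import hashlib

# Max allowed skew between the two devices' touch reports (assumed value).
TOUCH_WINDOW_S = 0.25

def touches_match(t_a: float, t_b: float, window: float = TOUCH_WINDOW_S) -> bool:
    """Two fingertip-touch timestamps count as one handshake if nearly simultaneous."""
    return abs(t_a - t_b) <= window

def shared_session_id(device_a: str, device_b: str, touch_time: float) -> str:
    """Both devices compute the same ID by sorting their identifiers first."""
    first, second = sorted((device_a, device_b))
    payload = f"{first}|{second}|{int(touch_time)}".encode()
    return hashlib.sha256(payload).hexdigest()[:16]

# Example: two headsets detect the touch 0.1 s apart and join one session.
if touches_match(100.0, 100.1):
    sid_a = shared_session_id("visionpro-A", "visionpro-B", 100.0)
    sid_b = shared_session_id("visionpro-B", "visionpro-A", 100.0)
    assert sid_a == sid_b  # both sides derive the same shared-reality session
```

In a real system the agreed touch time would come from a coordination step between the devices, and the resulting session ID would seed the shared coordinate frame and content synchronization.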

The famous “Sistine Touch” is part of “The Creation of Adam” (1512) by Michelangelo.

FingerSync is inspired by the secret handshake culture.

Chapter 3: “A Dream You Dream Together is Reality”

How can layered mixed realities persist without permission, even when there are no HMD wearers to perceive them? Can they evolve and operate autonomously? This mirrors nature, which needs no permission for a tree or an animal to grow.

With the rise of Programmable Cryptography (ProgCrypto) technologies like zero-knowledge Machine Learning (zkML) and Decentralized Physical Infrastructure Networks (DePIN), we foresee blockchain evolving into a powerful “planetary-scale computation megastructure,” a term coined by philosopher Benjamin H. Bratton. This structure can permissionlessly and autonomously execute complex AI tasks, such as digital physics, as discussed in Dante’s talk at the AW Assembly. Following this vision, we propose a meta-protocol called “Autonomous Realities” for persisting the mixed reality layer. This conceptual protocol could underpin many protocols for future mixed realities. It (1) relies on “permissionless autonomy,” using the existing Autonomous Worlds infrastructure to persist the rules, digital physics, and virtual objects of mixed reality layers; and (2) implements a kind of “permissionless permission,” building on Chapter 2’s work on “sharing mixed reality by permitting others.” The protocol integrates with Zupass, a zero-knowledge-proof-based digital passport for network state citizens, so that only those with access to an Autonomous Reality layer can see its virtual objects, such as buildings, pets, and items within the realm of the network state. It could potentially evolve into what we’ve termed an “Experiential Network State”.
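To make the “permissionless permission” idea concrete, here is a minimal sketch in Python. The Zupass verification is mocked with a plain flag, since real Zupass proofs are zero-knowledge and cryptographically verified; every name below is hypothetical. A reality layer reveals its virtual objects only to holders of a valid citizenship proof.

```python
from dataclasses import dataclass, field

@dataclass
class RealityLayer:
    """A mixed reality layer whose objects are gated by a citizenship proof."""
    name: str
    objects: list = field(default_factory=list)

def verify_citizenship_proof(proof: dict) -> bool:
    """Stand-in for Zupass ZK verification; here we only check a flag."""
    return proof.get("valid_for") == "zuzalu"

def visible_objects(layer: RealityLayer, proof: dict) -> list:
    """Citizens see the layer's objects; everyone else sees an empty reality."""
    return list(layer.objects) if verify_citizenship_proof(proof) else []

layer = RealityLayer("Zuzalu", ["ZuzaluEmblem", "virtual pet"])
assert visible_objects(layer, {"valid_for": "zuzalu"}) == ["ZuzaluEmblem", "virtual pet"]
assert visible_objects(layer, {}) == []  # invisible to non-citizens
```

The point of the real construction is that the verifier learns only membership, not identity: the proof shows the holder is a citizen without revealing which one.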

Zuzaland (2023) by Botao Amber Hu. ZuzaluEmblem is a virtual monument located in the pop-up city. In reality, it stands in a hotel bay in front of the residences of the co-living citizens in Montenegro. The monument symbolizes key topics of Zuzalu, revolving around its logo in text: topics like “longevity”, “network state”, and “zero-knowledge proof” are all represented, as these are subjects the citizens care about.


Drifting Whispers (2023) by Botao Amber Hu and Fangting. The virtual monument is situated at the co-living space of ZuConnect, Istanbul, during the DevConnect conference. Every flying star symbolizes a participant in this co-living event.


What is the existing target protocol you are hoping to improve or enhance?

We envision a future in the era of spatial computing where MR HMDs become more prevalent. This project involves protocol research and design work based on four existing protocols.

  • Secret Handshake or ‘Gang’ Handshake

    A “Secret Handshake” is a private, prearranged gesture or series of gestures used by individuals to identify membership in a group or to grant mutual recognition.

  • Session-Sharing Protocol and Technology for Collocated Mixed Reality

  • Zupass, a Digital Passport based on Zero-knowledge proof

    Zuzalu’s digital passport, similar to Apple’s Wallet, houses Zuzalu Passport Cards to verify the holder’s identity. This system is specifically designed for the storage and management of Proof-Carrying Data, which is data whose authenticity and structure can be cryptographically verified.

  • Autonomous Worlds

    It’s a meta-protocol that treats the blockchain as an autonomous substrate, similar to nature. This substrate runs its own digital physics with composability.
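A minimal sketch of how these four building blocks might compose. All class and parameter names are hypothetical, and persistence is modeled as an in-memory dict standing in for an Autonomous Worlds substrate: the layer’s state exists independently of any viewer, a handshake-style gesture opens a session, and a proof gates membership.

```python
class AutonomousLayer:
    """A mixed reality layer that persists independently of any viewer,
    in the spirit of Autonomous Worlds (here a dict stands in for the
    on-chain substrate)."""

    def __init__(self, name: str):
        self.name = name
        self.state = {}       # persists whether or not anyone is watching
        self.members = set()

    def join(self, user_id: str, handshake_ok: bool, proof_ok: bool) -> bool:
        """A FingerSync-style handshake opens the session; a Zupass-style
        proof gates who may enter the layer."""
        if handshake_ok and proof_ok:
            self.members.add(user_id)
        return user_id in self.members

layer = AutonomousLayer("Zuzalu")
layer.state["monument"] = "ZuzaluEmblem"   # exists with no viewers present
assert layer.join("alice", handshake_ok=True, proof_ok=True)
assert not layer.join("bob", handshake_ok=True, proof_ok=False)
```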

We remix these protocols to innovate on theory, interaction, and protocol design, speculating on a future vision of how human society, spatial computing, and programmable cryptography might interweave. The philosophical thinking behind this is encapsulated by the famous quote:

“A dream you dream alone is only a dream. A dream you dream together is reality.”
― Yoko Ono

What is the core idea or insight about potential improvement you want to pursue?

Chapter 1. Permissionless Realities

We introduce a new media theory, “Permissionless Realities,” which offers fresh perspectives on understanding the nature of mixed reality in the era of spatial computing.

Chapter 2. FingerSync

Envisioning a future where MR HMDs are prevalent, we’re curious how two collocated wearers can share their mixed realities in the moment. We’ve invented a new embodied protocol, “FingerSync,” to seamlessly, almost magically, establish sessions for sharing collocated mixed reality.

Chapter 3. Autonomous Realities.

We have proposed a new meta-protocol, “Autonomous Realities”, which is based on the concept of “Autonomous Worlds”. This envisions permissionless realities independent of users, thereby creating layers of new ‘nature’.

What is your discovery methodology for investigating the current state of the target protocol?

Chapter 1. Permissionless Realities

  1. Literature Review.
  2. Write a peer-reviewed paper.

Chapter 2. FingerSync

  1. Research Through Design with N=30 user studies.
  2. Write a peer-reviewed paper.

Chapter 3. Autonomous Realities

  1. Formative Studies with Expert Interviews
  2. Compose an essay to receive feedback from the AW community.

In what form will you prototype your improvement idea? Eg: Code, reference design implementation, draft proposal shared with experts for feedback, A/B test of ideas with a test audience, prototype hardware, etc.

Chapter 1. Permissionless Realities

Write the theory paper.

Chapter 2. FingerSync

We plan to prototype the protocol on multiple Vision Pros and make the code open source.

Chapter 3. Autonomous Realities

We will create a prototype based on Zupass and the MUD engine.

How will you field-test your improvement idea? Eg: run a restricted pilot at an event, simulation, workshop, etc.

Chapter 1. Permissionless Realities

We plan to host an academic workshop exploring the permissionless aspect of mixed reality media. The event will engage expert communities from spatial computing, human-computer interaction, and media theory. Possible venues include academic conferences such as CSCW, CHI, and SIGGRAPH.

Chapter 2. FingerSync

We use a Research Through Design approach to explore various augmented “secret handshakes” as our protocol for spontaneous sharing of mixed reality. This involves iterating and testing across 20-30 user groups, along with conducting interviews and tests with mixed reality experience designers.

Chapter 3. Autonomous Realities

We will host an academic workshop titled “Autonomous Realities” within the Autonomous Worlds community, using formative studies.

Who will be able to judge the quality of your output? Ideally name a few suitable judges.

Chapter 1. Permissionless Realities

Experts from the media art and media theory community. We intend to compile our discoveries into an academic paper describing our new media theory and to submit it for peer review at relevant academic venues, such as SIGGRAPH Art Papers.

Chapter 2. FingerSync

Experts from the spatial computing and HCI community. We plan to submit the paper for peer review at related academic conferences such as SIGCHI and ISMAR.

Chapter 3. Autonomous Realities

Experts from the Autonomous Worlds community, or zkML community.

How will you publish and evangelize your improvement idea?

Chapter 1. Permissionless Realities

  1. Publish a paper at the SIGGRAPH conference and in media art related events.

Chapter 2. FingerSync

  1. Publish the open-source code related to our paper on GitHub.
  2. Publish a paper at a SIGCHI-related conference.
  3. Organize workshops at universities for students with a background in interaction design, encouraging them to use our open-source code as a base for creating multiplayer mixed reality Vision Pro apps.

Chapter 3. Autonomous Realities

  1. Publish an essay in the Autonomous Worlds community.
  2. Publish an essay in the Network State community.

What is the success vision for your idea?

Chapter 1. Permissionless Realities

  1. The theory paper is accepted by peer-reviewed publications.
  2. The theory influences the paradigm of future mixed reality and Autonomous Worlds.

Why me

In SoP23, I truly enjoyed collaborating with Fangting on the speculative design + design fiction project “Composable Life”. The process of envisioning blockchain as an unstoppable force of nature was truly mind-blowing. We designed a speculative protocol, ERC42424: “Inheritance Protocol for On-Chain AI Agents”, to reflect the uncontrolled future of on-chain artificial life.

In SoP24, I will introduce a meta-protocol titled “Autonomous Realities,” envisioning a future through a term I proposed: “Permissionless Realities.” The term conceptualizes the permissionless nature of both mixed reality and programmable cryptography, building on the rapid advancement of spatial computing technology and the theory of Autonomous Worlds.

Why me? For the past 10 years, I have worked in the spatial computing industry, witnessing how rapidly this technology is integrating into our daily lives. I invented an open-source mixed reality headset, HoloKit, and maintain its open-source community; the vision is to be an ‘Arduino’ for mixed reality education. Beyond hardware, my interactive XR artworks have earned numerous accolades at prestigious venues like SIGGRAPH. I also worked on the award-winning social mobile game Sky.

Last year, I participated in the pop-up city Zuzalu in Montenegro, where I created a first-of-its-kind AR monument symbolizing Zuzalu as an experimental network state. This digital building was based on Augmented Reality and Zupass, a Zero-Knowledge Proof (ZKP) passport system. The digital layer is exclusive to Zupass holders, the citizens of Zuzalu, making it invisible to non-citizens.

During the Autonomous Worlds Assembly, I found inspiration in the book “Istanbul: A Tale of Three Cities,” mentioned in Venkatesh Rao’s talk “Towards a Metaphysics of Worlds.” The idea of three versions of a city overlapping in the same physical location, as seen in Istanbul, resonates with the multi-reality concept I aim to express in this protocol project and through “Autonomous Realities”: embodying the permissionless nature of mixed reality.

In this protocol improvement project, I am going to collaborate with more thinkers in the SoP community to propose a detailed, implementable protocol for our Experiential Network State future.

  1. L.-H. Lee, T. Braud, S. Hosio, and P. Hui, “Towards Augmented Reality Driven Human-City Interaction: Current Research on Mobile Headsets and Future Challenges,” ACM Comput. Surv., vol. 54, no. 8, p. 165:1-165:38, Oct. 2021, doi: 10.1145/3467963. ↩︎

  2. Manifest.AR Artist Group, “The AR Art Manifesto.” Accessed: Apr. 09, 2024. [Online]. Available: https://manifest-ar.art/ ↩︎

  3. J. W. Chung, X. J. Fu, Z. Deocadiz-Smith, M. F. Jung, and J. Huang, “Negotiating Dyadic Interactions through the Lens of Augmented Reality Glasses,” in Proceedings of the 2023 ACM Designing Interactive Systems Conference, in DIS ’23. New York, NY, USA: Association for Computing Machinery, Jul. 2023, pp. 493–508. doi: 10.1145/3563657.3595967. ↩︎

  4. B. Hu, Y. Zhang, S. Hao, and Y. Tao, “MOFA: Exploring Asymmetric Mixed Reality Design Strategy for Co-located Multiplayer Between Handheld and Head-mounted Augmented Reality,” in Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, in CHI EA ’23. New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–4. doi: 10.1145/3544549.3583935. ↩︎



I’m deeply fascinated by the discussions around “Autonomous Realities”. Diving into the concept of digital self-governance and growth, I recommend checking out Balaji Srinivasan’s “The Network State: How To Start a New Country.” It delves into how autonomy could shape the future of network governance, community building, and cultural evolution, exploring how technology carves out new forms of sovereignty that essentially act as ‘protocols’ for maintaining social order and contracts. He treats the network state as the new Leviathan, with encryption becoming the most powerful force in the world. The book highlights the pivotal role of encryption in ensuring data security, free expression, and secure communication: it enables the safe exchange of information without external intrusion, reshaping the balance of power and control, much like the zkML and ProgCrypto you mentioned.

Regarding “FingerSync” and its use of cultural elements like the secret handshake, I love the idea of sharing mixed realities through a simple touch. But interaction protocols for virtual reality in physical space also need careful evaluation. For example, have you thought about the varying degrees of acceptance of physical contact across cultures, and how this might affect the protocol’s universal applicability? It’s also worth considering the protocol’s impact on cognitive load, especially in complex shared experiences and multi-user interactions (though I admit that “FingerSync” is a very simple interaction). It would be interesting to evaluate this interaction protocol from an ‘Embodied Cognition’ standpoint: in particular, how users’ physical engagement and perceptions affect their experience and interaction when sharing virtual content, and how it shapes the ‘Sense of Presence.’ These could be shown during the prototype testing phase.

If you are still looking for a teammate, I’d love to have a talk:)



Thanks for your interest. I’ve slightly adjusted the scope to “Autonomous Realities”; FingerSync is more of an HCI work that forms part of this meta-protocol.

Still looking for collaborators.

Such a grandiose project, bravo Amber! I wonder what type of collaboration you are seeking here? I’m in awe of your past works as well.
