[PIG] RFC: Consent Protocols



Team member names

Riley Wong (they/them), Emergent Research
Val Elefante (she/they), Metagov

Short summary of your improvement idea

Co-design, develop, and implement consent protocols inspired by cultural frameworks for consent (e.g., feminist, sex-positive, kink/BDSM, and LGBTQIA+ frameworks). Publish research laying out a framework for improving consent in collective data governance (e.g., data co-ops, trusts, and coalitions) and release open-source prototypes of composable UX features for consent interfaces.

What is the existing target protocol you are hoping to improve or enhance?

We want to improve consent protocols and interfaces for collective data governance.

Examples of existing data collection, sharing, and governance protocols that we hope to improve include: traditional terms-of-service and privacy-policy agreements (e.g., Big Tech's); opt-in vs. opt-out models; cookie consent interfaces; requests to be forgotten; Creative Commons; GPLv3 licensing; regulatory frameworks such as the GDPR in the EU and the CCPA in California; and the socio-technical approaches being taken by nonprofits such as Open Data Manchester, Open Referral, and the Open Data Institute.

What is the core idea or insight about potential improvement you want to pursue?

How can clear, effective, and fluid models of consent be built into digital public infrastructure and data governance?

We believe that current data governance protocols can be improved with the insights, practices, and embodied wisdom of intersectional feminist, sex-positive, BDSM/kink, and LGBTQIA+ communities’ frameworks for consent. We want to extend these cultural frameworks of consent into protocols that can be applied to digital space and collective governance.

Some examples of cultural frameworks for consent we will explore include: Betty Martin’s Wheel of Consent; university Title IX policies of affirmative consent; “Yes Means Yes”; informed consent; Sociocracy’s “consent decision-making” and “good enough for now, safe enough to try”; willing vs. wanting; safewords; the role of community guardians; non-verbal consent; and more. These frameworks have proven effective at promoting agency for individuals and communities. We are eager to explore how they may be adapted in different contexts, broken down, and/or blended together to improve individual and collective decision-making about data.

Diagram of the Wheel of Consent (source)

What is your discovery methodology for investigating the current state of the target protocol? Eg: field observation, expert interviews, historical data analysis, failure event analysis

  1. Field research: conducting an analysis of data collection and privacy policies across major platforms, both commercial and nonprofit, such as Reddit, Wikipedia, Signal, Arena, Facebook, Instagram, Google, and more.

  2. Expert interviews: interviewing experts in consent and governance, such as those working at or with: Creative Commons; Open Data Manchester; Open Referral; GPLv3 licensing; Wikipedia and commons-based peer production; trauma-informed and somatic consent practices; the GDPR in the EU, the CCPA in California, and privacy law; cookie consent interfaces; and data collectives like the Open Data Institute. Also interviewing experts in sex work, transgender care, and reproductive justice, e.g. Danielle Blunt from Hacking//Hustling; Lips.social; Digital Defense Fund; Red Canary Song; the Trans Revolutionary Action Network; and more.

  3. Participatory co-design: recruiting community members, especially those most vulnerable and impacted by digital consent practices, including sex workers, community members seeking or advocating for transgender care, and community members seeking or advocating for reproductive justice. Using Miro to create interactive experiences for participants to brainstorm, deliberate, and interact with design elements (speaking, writing, and drawing) and documenting the co-design process in blog posts.

In what form will you prototype your improvement idea? Eg: Code, reference design implementation, draft proposal shared with experts for feedback, A/B test of ideas with a test audience, prototype hardware, etc.

  1. Research: Drafting a white paper and report laying out a framework for improving consent in collective data governance, shared with identified experts for feedback.

  2. UX mockups: Co-designing and prototyping with members of the targeted communities including sex workers, BIPOC and LGBTQIA+ community members, and reproductive justice advocates.

How will you field-test your improvement idea? Eg: run a restricted pilot at an event, simulation, workshop, etc.

Conduct virtual co-design sessions inviting small groups to interact with the designs, engage in discussion, offer feedback, and collaboratively iterate on changes.

Who will be able to judge the quality of your output? Ideally name a few suitable judges.

  • Una Lee, The Consentful Tech Project
  • Coraline Ada Ehmke, The Social Compact
  • Danielle Blunt, Hacking//Hustling
  • adrienne maree brown, Pleasure Activism
  • Zahra Stardust, Data Governance Legal Scholar
  • Betty Martin, Wheel of Consent
  • David Jay, Relationality Lab
  • Glenn Brown, Creative Commons
  • Katherine Bertash, Digital Defense Fund
  • Lisa Featherstone, University of Queensland

How will you publish and evangelize your improvement idea? Eg: Submit proposal to a standards body, publish open-source code, produce and release a software development kit etc.

We will publish a white paper, report, and series of blog posts detailing our research, co-design, and prototype testing processes for academic and mainstream audiences.

As members of both Metagov, a governance research collective, and DWeb, a community for the networked and distributed web, we share research and collaborate with leading experts working on: modular and composable governance; exit to community; the digital commons; digital public infrastructure; platform coops; federated networks; interoperable protocols; personal data stores (e.g. Solid); decentralized identity; data trusts; and more. These networks provide ample opportunities for feedback and collaboration through Metagov seminars, research presentations, and community discussions.

We will also publish and open-source the UX prototype library on GitHub or on our own project website, which we will design and launch for sharing all public materials as well as forming future partnerships and collaborations with targeted organizations.

What is the success vision for your idea?

A successful execution of this project will include the following deliverables:

  1. completing multiple facilitated co-design sessions and interviews with community members and experts;
  2. publishing research and writing on consent protocol frameworks in the form of a white paper, report, and blog posts; and
  3. publicly releasing prototypes for consent-based interfaces.

This is tight – I really like this recombinant approach to developing new data protocols you’re taking here (Ghost in the Shell comes to mind haha). AI-driven data sovereignty considerations, and therefore data protocols, seem like a big missing piece right now, especially around things like community data trusts, data co-ops, DAOs, etc. The proposed work has lots of important applications!


This is such a meaningful goal. I’ve often wished for a similar protocol to point folks to that wasn’t focused on sexual relations. I’m excited to see this work be made available to a broader variety of interactions, especially online where power dynamics can easily become baked in and ossified.


Hey, this is a great project. I think exploration, conversation and articulation in this area are badly needed and I really like your proposal. With all this recent emphasis on “trustless” protocols and systems it’d be refreshing to get some balance in more “trustful” directions.

In the Secure Scuttlebutt (SSB) community there was a longstanding challenge around the issue of data deletion being technically unenforceable and potentially confusing in peer-to-peer asynchronous contexts. For a long time, the creators of SSB and popular clients avoided implementing any kind of deletion/editing functionality so as not to give people a false sense of control, and to respect peer autonomy in storing and sharing information.

(From conversations with people during this time, it was clear that not everybody had the same awareness of just how damaging a complete lack of any deletion functionality could be. It took courage for people to communicate around this issue, especially in a medium where you can’t edit or delete what you post!)

After a lot of epic community conversation, people finally came up with something like consent protocols for deletion and modification.

For interactions where trust might be necessary, should we just give up? What if we could instead make the bold decision to rely on and cultivate trust, to embrace it? I think there’s a lot of room to build out trust-based protocols.

In the medium post you linked for the wheel of consent diagram, the author Adam says:

The Wheel is really about uncovering inner autonomy.

To motivate your project, I’d love to see some critical exploration of just what “inner autonomy” means in digital contexts. More specifically, if consent is about navigating together boundaries between self and other, what exactly do digital boundaries of self and other even look like? How do they arise? Who/what defines them?

Though the “user model” of operating systems and social media platforms has given us illusions of digital selves as these clean, straightforward projections of some idealized offline self into magical digital spaces, the actual substrates that host our “user selves” are messily distributed for a huge variety of reasons.

Consent protocols could be indispensable tools for facilitating collaborative realization of individual and collective self/other boundaries in digital contexts, so that these boundaries can be articulated by the people and communities most affected by them in ways that promote, cultivate, and rely on interpersonal autonomy and trust, rather than being mandated in ways that erode or atrophy autonomy and trust.


This is a really exciting project. It resonates with work I’ve been trying to do for sharing governance practices across domains, and of course it brings all the challenges and power dynamics that domain-crossing work often involves.

My first thought is around those power dynamics. Given the history of extraction from sex-work communities for tech innovation (see the latter half of this paper), I wonder if this project could start from a logic of giving back to the source communities, ensuring that they benefit from the circulation of their wisdom into wider spaces. How can you ensure that credit and value (or even just consent!) flows back to the real architects of these protocols? Is there a way this work could help make the internet safer for people who have done so much to deepen the possibility space of consent?

Second, I wonder if we could move beyond the co-design outlined here to identify specific living communities where these kinds of protocols could be explored and tested. For instance, are there experimental platforms that might be willing to implement the interfaces you explore? Which are the kinds of communities where consent might be especially essential? I think of communities like An Archive of Our Own or Ravelry or Metalabel or even Arena—creator spaces that are designed around creator communities and not yet subject to corporate capture.

I’m really looking forward to learning from this work. I’d also love to explore implementing your findings in my own experimental platforms like CommunityRule.info and Modpol.net.


TYSM for all the thoughtful comments!!


TYSM Micah!!

Yes – revoking consent is such an interesting problem, and thanks so much for sharing this example from SSB! I like this idea of deletion and consent revocation as a collective agreement. Also agree with you about this idea of leaning into trust-building over “trustlessness” and thinking about protocols as trust-based. I also really like this idea of thinking about “the actual substrates” that mediate these agreements.

I’ve also been thinking about whether a “proof of data deletion” could be [mathematically? technically?] possible… e.g. if I revoke consent, can I provide a proof or guarantee that, say, a model was retrained with that piece of data removed from its training set? Or can I provide a proof that, when consent is revoked, the piece of data in question is removed from the dataset? Not quite sure how that would work yet as far as mathematical or technical implementation, but it has been an idea I’ve been simmering on for a while.
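To make the "proof of deletion" idea a bit more concrete: one minimal sketch (all names here are hypothetical, and this only illustrates the commitment side of the problem) would have a data steward publish a hash commitment over the dataset's record hashes, then publish a fresh commitment after a revocation, so anyone holding the record hashes can check that the revoked record is gone:

```python
import hashlib

def record_hash(record: bytes) -> str:
    # Hash each record so raw data never has to be revealed to verifiers.
    return hashlib.sha256(record).hexdigest()

def dataset_commitment(records: list[bytes]) -> str:
    # Commit to the dataset as a whole: a hash over the sorted record hashes,
    # so the commitment is independent of storage order.
    hashes = sorted(record_hash(r) for r in records)
    return hashlib.sha256("".join(hashes).encode()).hexdigest()

def revoke(records: list[bytes], revoked: bytes) -> tuple[list[bytes], str]:
    # Drop the revoked record and publish a fresh commitment.
    remaining = [r for r in records if r != revoked]
    return remaining, dataset_commitment(remaining)

# Toy usage: three records, one consent revocation.
data = [b"alice-post", b"bob-photo", b"carol-note"]
before = dataset_commitment(data)
data, after = revoke(data, b"bob-photo")
assert before != after
assert after == dataset_commitment([b"alice-post", b"carol-note"])
```

Of course, this only proves what the steward *claims* the dataset now is; binding the commitment to what is actually stored, or to what a model was actually retrained on, is the much harder (and still open) part of the question.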

Re: boundaries between self and other, I’m reminded of Salomé Viljoen’s “A Relational Theory of Data Governance” and some of the ways they lay out ideas around autonomy and data relations, too!


Thank you so much Nathan!!

Yes definitely agree re: value flows and working by and with community. I know @yogival and I have been thinking about community and participatory co-design processes, especially given our proximity to and membership in a lot of the communities in question, too. I’m also interested in ways of empowering community beyond just compensation for participation, and definitely hope to further explore this with community throughout the process.

Re: exploration and testing, we have an existing collaboration with the queer and sex worker community of Lips, of which Val is a co-founder. Would love to explore additional partnerships (thinking about art.coop and other coop communities now too) and appreciate these suggestions for further possibilities to collaborate with the greater ecosystem! And would for sure love to explore implementing findings on CommunityRule and Modpol together :slight_smile:


Thank you Rithikha! Yes, exactly re: AI data sovereignty. With all the lawsuits happening right now around copyrighted data being used to train the big LLMs, it feels like a really important area to explore.

As an alternative to the big ones (Midjourney, DALL-E), Getty created an image generator that is trained only on data it already holds and compensates artists for their contributions to the tool as it’s used. Right now, this is a fixed model that determines 1) what proportion of the training set a certain artist’s content represents and 2) how that content performed in the Getty licensing world over time (as a kind of proxy for quality and quantity).
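As a rough illustration of how those two factors might combine (this is hypothetical arithmetic, not Getty's actual formula, which isn't public at this level of detail):

```python
def artist_share(training_fraction: float, performance_score: float,
                 weight: float = 0.5) -> float:
    # Blend (1) the artist's proportion of the training set with
    # (2) a historical licensing-performance score, both in [0, 1].
    # The 50/50 weighting is an assumption for illustration only.
    return weight * training_fraction + (1 - weight) * performance_score

def payout(revenue_pool: float,
           artists: dict[str, tuple[float, float]]) -> dict[str, float]:
    # Normalize the blended shares so the whole pool is distributed.
    raw = {name: artist_share(f, p) for name, (f, p) in artists.items()}
    total = sum(raw.values())
    return {name: revenue_pool * share / total for name, share in raw.items()}

# Toy usage: two artists splitting a $1,000 revenue pool.
splits = payout(1000.0, {"ada": (0.7, 0.4), "grace": (0.3, 0.6)})
# ada: 0.5 * 0.7 + 0.5 * 0.4 = 0.55 of the pool; grace: 0.45.
assert abs(splits["ada"] - 550.0) < 1e-6
```

The 50/50 weighting and the performance proxy are exactly the parameters a fixed, platform-side model hard-codes.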

Our project would explore more creator-centered versions of this: commons-oriented, even cooperatively owned, with more levers to pull around consent, transparency of use, and values- and impact-alignment.


Thanks so much for your thoughtful comment Nanomonkey :slight_smile: I could not agree more. I think there’s so much evolution that feminist, queer, and other marginalized communities have made on the concept of consent, specifically in sexual contexts, given life experiences of oppression and domination. But these concepts can and should be applied to other contexts to strengthen communities and organizations online and off.


Yes, thank you Micah!

On your point “It took courage for people to communicate around this issue.”

Wow. Yes. I love the idea of building software (socio-technical systems) that give people courage - like literally features that lower the bar to signaling your real, honest preferences, deliberating and evaluating tradeoffs, and making decisions but with more fluidity and room to experiment, make mistakes, try again, etc. Thank you for sharing that.

Also, to your point about boundaries: boundaries are absolutely crucial. Visualizing them, exploring them, playing with them, again experimenting! Seeing what is individual data and what is relational data, and exploring how we can negotiate for ourselves and together.

I think the co-design sessions will be extremely useful here - in my experience they really inspire creative and imaginative thinking, visualizing, and ultimately prototyping. The co-design sessions are themselves explorations of the socio-technical process we are trying to “protocolize”. This is where the negotiations happen within the self and between others. It is in the deliberations, conversations, and self-actualizations where the answers (the multitude of answers and possibilities, which we will transform into templates) will appear.


Hey Riley

Quick shout of encouragement! and +1 to ‘this is tight.’

Re: “a framework for improving consent in collective data governance (i.e. data co-ops, trusts, coalitions, etc.)”

Independent of your proposal here, a group of folks thinking through artist coalition work recently determined a need for protocol development to support collaboration amongst NYC artists (and beyond).

Needless to say, it’s super encouraging to find your proposal here, already taking shape, with such a solid research outline for moving a consent framework forward. Thank you for that! :green_heart:

Something I’m curious about… It seems the co-design of the consent framework itself will have unique considerations compared to the co-design work underpinning the UX interface design. I find myself wondering whether the timeline of the grant allows enough opportunity to tackle both in complete form, especially given the challenges of coordinating participatory design sessions and the complexity of the considerations. In particular, I wonder about the UX of consent approached through a disability justice lens. Multi-modal consent as a participatory design exploration might warrant an entire summer of work. I also wonder about possibilities for field testing the protocol itself, independent of interfaces, with a summer of human to human interactions. That sounds like fun. :slight_smile:

Publishing the consent framework you propose would itself be hugely valuable and would (almost certainly) lead to opportunities for a future co-design phase inclusive of disability perspectives on the UX of consent.

I’d suggest - as someone who really wants to see this work happen - that the biggest risk to your proposal is an evaluator questioning whether you could get everything you’ve outlined completed in a single summer. If you agree that’s a fair critique, I’d advise considering a softening of the commitment to the composable UX library. Maybe it’s a modest revision to say something like, ‘If time permits, upon completion of the consent framework, we will begin work to advance co-design of a composable UX framework of consent to be published as an open-source library for continued development.’ This would put you in a position to overdeliver with UX prototypes if it flows from the consent framework work, but the added flex also makes sure that you have all the time you need for the framework itself.

Independent of whether this feedback is useful :slight_smile: we would be thrilled to explore possibilities of collaborating this summer and including artists from the disability justice community in co-design work alongside your own research. We’re currently working out of the Brooklyn Arts Exchange (bax.org). All the best!


Yes, indeed @yogival! That is, not to dismiss all the hard work that has been done by folks towards improving our understanding of sexual consent. Or suggest that there isn’t still work to be done there.

I’m just happy to see it put into practice in any situation where there is a transfer of power, and therefore the possibility of vulnerability. Once used in “mundane life” improving our understanding and capabilities to negotiate and find meaningful interactions, one would hope we could use this familiarity to make it easier to apply in our private lives too.

Looking forward to seeing where this goes.


Tysm!! Val and I are actually both NYC based :slight_smile: Really appreciate the input esp with disability justice focus. Would love to hear more about BAX + artist coalition work :0 as I’m involved with the DIY and community music scene here too!

This is awesome. I have a project we could apply this to. Would love to chat more.