PIG - Revisiting the Source: improving the SourceCred credit attribution protocol

Better measuring and rewarding community value creation with SourceCred.


Alec LaLonde and Seth Benton


SourceCred is a software suite that helps measure and reward community value creation. It was originally built and dogfooded in the “Credsperiment”, which ran from mid-2019 to 2022. During that time, over $2M was distributed via SC to several different communities in a largely automated fashion by tracking activity and contributions on GitHub, Discourse, and Discord. It has continued to see usage, despite losing funding and being unable to pay developers. We see enormous potential for SourceCred to address existing issues in new and creative ways, helping communities more accurately value contributions and surface promising new contributors.

What is the existing target protocol you are hoping to improve or enhance? Eg: hand-washing, traffic system, connector standards, carbon trading.

The SourceCred protocol, which surfaces valuable contributors within decentralized communities.

What is the core idea or insight about potential improvement you want to pursue?

SourceCred (SC) has proved effective at creating valuations of contributions that communities find meaningful and useful. Using a mixture of ‘objective’ activity metrics (e.g. GitHub PRs, forum posts, Discord messages), ‘subjective’ valuations (emoji reactions, likes, replies to posts, etc.), and a sophisticated algorithm (PageRank over a graph of contributions), SC creates valuations that are intersubjective, achieving rough consensus on what a community finds valuable. If Cred scores are used to distribute tokens, engagement reliably increases.

However, after initial excitement, communities often grow frustrated with SC’s limitations. On platforms that allow less structured contributions, such as Discord and Discourse, problematic social dynamics can be rewarded and amplified. Some contributions get more Cred than most in the community believe they should. Communities can tweak Cred scores via high-level parameters in the algorithm, but many struggle to do so. This is due to a number of factors, including lack of technical expertise, inability to achieve desired outputs via tweaking, and inability to reach consensus on parameter changes, despite initial effort on the part of the algorithm’s designers to ease governance of the parameters. SC also has a plugin system allowing communities to create their own plugins to express their own ideas about valuing contributions, but, in part due to high technical barriers, few have done so.
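To make the mechanism concrete, here is a minimal sketch of PageRank-style cred flow over a tiny contribution graph. The node names, edge types, and weights are purely illustrative, not SourceCred's actual graph schema or weight defaults:

```python
# Contributions and contributors form a weighted graph; authorship, likes,
# and references become edges, and a PageRank-style random walk distributes
# "cred" along those edges until scores converge.

def pagerank(edges, nodes, damping=0.85, iters=100):
    """Weighted PageRank via power iteration. edges: {src: {dst: weight}}."""
    n = len(nodes)
    scores = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - damping) / n for v in nodes}
        for src, outs in edges.items():
            total = sum(outs.values())
            for dst, w in outs.items():
                nxt[dst] += damping * scores[src] * w / total
        scores = nxt
    return scores

nodes = ["alice", "bob", "pr#1", "post#2"]
edges = {
    "pr#1": {"alice": 2.0},               # authorship: the PR flows cred to its author
    "post#2": {"bob": 1.0, "pr#1": 1.0},  # bob's post references the PR
    "alice": {"post#2": 1.0},             # alice liked bob's post
    "bob": {"pr#1": 1.0},                 # bob reviewed the PR
}
scores = pagerank(edges, nodes)
```

Because valuation emerges from the whole graph rather than raw activity counts, a like or reference from a high-cred contributor moves scores more than one from a newcomer; this is also why the high-level weight parameters are both powerful and hard to tune.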

With deep experience of SC, gained from dogfooding it within the SC community and from working with communities using it, we see numerous promising directions to address common complaints and allow communities to evolve SC. These include:

  • Governance minimization: reduce the number of parameters that communities need to tweak to ‘steer’ the system. Potentially introduce protocols that aid in the steering of existing parameters.
  • Ease of implementation: reduce the technical complexity of launching, customizing, and maintaining an SC instance. Make it easier to implement new plugins. Create frameworks and playbooks that guide communities through the design and implementation of new ideas.
  • Scope reduction: reduce the scope of activity SC analyzes to contributions that are less controversial, e.g. more structured contributions such as code contributions on GitHub; indeed, a recently launched project (Amitrage) is taking SC in this direction, augmenting SC with AI and more subjective heuristics.
  • Hardening for interoperability: harden the algorithm and explore its use as a noisy but still useful input to other mechanisms, e.g. as an input to reputation algorithms such as EigenTrust, which are more resilient when seeded with robust initial estimates of trust. Karma3 Labs’ onchain implementation of EigenTrust, for example, is designed around the assumption of such ‘seed vectors’, and could see obvious benefit from something like SC. SC could also prove useful to projects incorporating private voting via ZKPs.

By sharing our experience and knowledge of this problem space with protocol experts, we hope to leverage their expertise to help us narrow down the solution space to the most promising directions.

What is your discovery methodology for investigating the current state of the target protocol? Eg: field observation, expert interviews, historical data analysis, failure event analysis

We both have deep experience engaging as major contributors to two projects (SourceCred and MetaGame) that, over several years, used SourceCred as the primary evaluation tool for reputation and compensation.

Additionally, the dogfooding experiment was documented in depth in an ethnographic study, with the community’s blessing: The CredSperiment: An Ethnography of a Contributions System

This base of experience and knowledge could be augmented by additional analysis depending on which direction seems most promising to pursue.

In what form will you prototype your improvement idea? Eg: Code, reference design implementation, draft proposal shared with experts for feedback, A/B test of ideas with a test audience, prototype hardware, etc.

Initially, we would like to distill the improvement ideas above to one or two of the most promising. Then, we would like to build out a lo-fi reference implementation that is sufficiently usable to be tested in communities.

How will you field-test your improvement idea? Eg: run a restricted pilot at an event, simulation, workshop, etc.

We will run an experiment in at least one community eager to try out this enhanced SourceCred.

Who will be able to judge the quality of your output? Ideally name a few suitable judges.

Community members are typically able to easily discern how accurately Cred scores represent a given member’s contributions. Quality could be measured by various means, such as public discourse around scores, interviews with leaders, surveys of participants, or data analysis of participants’ engagement.

How will you publish and evangelize your improvement idea? Eg: Submit proposal to a standards body, publish open-source code, produce and release a software development kit etc.

SourceCred is and will continue to be open source, and we will build in the open throughout the SoP and beyond. If the results of the experiments are sufficiently promising, we intend to scale up an organization capable of operating sustainably and supporting SC-using organizations. If not, we will aim to drum up additional funds to continue experimentation.

What is the success vision for your idea?

We ultimately want to see more effective decentralized organizations, both within and beyond web3. There is tremendous room for improvement on the status quo, and we would love to see SourceCred provide meaningful data to help these organizations be more effective with recruiting, onboarding, and compensation.

If successful in our work here, a community will be able to surpass existing known limitations of SC and leverage new affordances to realize a breakthrough in decentralized valuation.