Trust Paradigm Shift: An Epistemic Trust Protocol in AI

Team member names

Xiaoou He
Xiaomeng Hu

Short summary of your improvement idea

Background. With the wide adoption of Large Language Models (LLMs), more people are embracing AI-based tools to simplify and improve their work. But in the course of human-machine interaction, can AI be trusted? And to what extent should humans trust AI? These recurring questions attract the attention of governments, industry, and users. Moreover, even if an AI system is presented as particularly trustworthy by certain institutions, companies, or individuals, the question of whether individuals or groups are willing to take the plunge and grant their trust remains open.

In our study The Trust Construction of Large Language Models, we proposed a framework of epistemic trust in AI, which comprises the dynamic interaction of technical trust and interpersonal trust and rests on reasonable trust established through effective oversight. In fact, most capital- or government-led Trustworthy AI protocols (including bills, standards, evaluation systems, and regulations) focus on technical aspects and expect AI to exhibit characteristics such as safety and robustness, explainability, non-discrimination and fairness, privacy, environmental well-being, and accountability and auditability.

Project Description. However, in terms of how they are applied, these Trustworthy AI protocols tend to serve large and powerful organisations in a centralised, black-box, static, and single-valued way. Our epistemic trust framework, by contrast, represents a paradigm shift: it focuses on how an individual or community develops trust in AI in a decentralised, transparent, dynamic, and diverse way. As the term suggests, epistemic trust is essentially a protocol for the ongoing distribution and adaptation of trust.

Our project aims to implement this paradigm shift and develop an epistemic trust protocol in AI for a decentralised autonomous organisation (DAO). We will present a prototype of the protocol to be used in a DAO.

Questions and answers

Q: What is the existing target protocol you are hoping to improve or enhance?

A: We are hoping to improve centralised Trustworthy AI protocols, such as the Ethics Guidelines for Trustworthy AI by the European Commission's High-Level Expert Group on Artificial Intelligence, and the Holistic Evaluation of Language Models (HELM) by Stanford CRFM.

Q: What is the core idea or insight about potential improvement you want to pursue?

A: We believe that diverse individual preferences deserve recognition in the AI age. While most capital- or government-led Trustworthy AI protocols tend to serve large and powerful organisations in a centralised, black-box, static, and single-valued way, we advocate a trust paradigm shift. We propose the concept of epistemic trust, which comprises the dynamic interaction of technical trust and interpersonal trust and rests on reasonable trust established through effective oversight. We focus on how an individual or a DAO develops trust in AI in a decentralised, transparent, dynamic, and diverse way.
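To make the idea concrete, here is a minimal sketch of how such a protocol could track trust in an AI tool. All names, weights, and update rules are our own illustrative assumptions, not part of any published standard: each member holds a technical trust score (updated by oversight events such as audits) and an interpersonal trust score (updated by peer endorsements), and the community's trust is a transparent aggregate over all members rather than a single central authority's verdict.

```python
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    """Hypothetical per-member ledger of trust in one AI tool.

    Blends a 'technical' component (oversight/audit outcomes) with an
    'interpersonal' component (peer endorsements). Scores live in [0, 1];
    the weights and update rules below are illustrative assumptions.
    """
    technical: dict = field(default_factory=dict)      # member -> score
    interpersonal: dict = field(default_factory=dict)  # member -> score

    def record_audit(self, member: str, outcome: float, alpha: float = 0.3):
        # Exponential moving average: trust stays dynamic, with recent
        # oversight outcomes weighted more heavily than old ones.
        prev = self.technical.get(member, 0.5)
        self.technical[member] = (1 - alpha) * prev + alpha * outcome

    def record_endorsement(self, member: str, peer_score: float, alpha: float = 0.3):
        prev = self.interpersonal.get(member, 0.5)
        self.interpersonal[member] = (1 - alpha) * prev + alpha * peer_score

    def member_trust(self, member: str, w_tech: float = 0.6) -> float:
        # Each member's view blends both components; the blend weight
        # could itself be a per-member preference in a real protocol.
        t = self.technical.get(member, 0.5)
        i = self.interpersonal.get(member, 0.5)
        return w_tech * t + (1 - w_tech) * i

    def community_trust(self) -> float:
        # Decentralised aggregate: a simple mean of every member's view,
        # computed transparently from the public ledger.
        members = set(self.technical) | set(self.interpersonal)
        if not members:
            return 0.5  # neutral prior before any evidence
        return sum(self.member_trust(m) for m in members) / len(members)

# Example: two DAO members record their evidence about one AI tool.
ledger = TrustLedger()
ledger.record_audit("alice", outcome=1.0)          # alice's audit passed
ledger.record_endorsement("alice", peer_score=0.8) # a peer endorsed her view
ledger.record_audit("bob", outcome=0.0)            # bob's audit failed
print(round(ledger.community_trust(), 3))
```

The point of the sketch is the shape, not the numbers: trust is per-member (diverse), recomputable from recorded events (transparent), updated as new oversight arrives (dynamic), and aggregated without a central scorer (decentralised).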

Q: What is your discovery methodology for investigating the current state of the target protocol?

A: One team member has already conducted a thorough survey of the current state of existing Trustworthy AI protocols and is designing an epistemic trust protocol from an AI governance perspective through interviews, field observations, and policy analysis. We will extend this research and experiment with reference implementations of existing protocols.

Q: In what form will you prototype your improvement idea?

A: We will prototype our idea as a reference implementation of the protocol design.

Q: How will you field-test your improvement idea?

A: We will run simulations of the protocol within a DAO and hold at least two online workshops for DAO users to build and develop their epistemic trust in AI.

Q: Who will be able to judge the quality of your output?

A: Sam Altman (OpenAI)

Paul Christiano (Alignment Research Center)

Bingzhe Wu (Tencent AI Lab)

Q: How will you publish and evangelize your improvement idea?

A: We will publish our protocol at one of the workshops, and release the protocol toolkit for DAOs.

Q: What is the success vision for your idea?

A: Simply put, our epistemic trust protocol will be implemented by DAOs, empowering diverse communities. In the long term, it will overturn existing government and industry perceptions of Trustworthy AI, attract more AI-based tools to the Web3 community, and further facilitate the organic integration of emerging technologies.
