Incentive Registry

Summary: An incentive registry records the rewards that correspond to the ethical behavior and decision outcomes of an AI system, thereby motivating ethical behavior and decisions across the AI system's ecosystem, including its AI components, end users, and developers.

Type of pattern: Product pattern

Type of objective: Trustworthiness

Target users: Architects, developers

Impacted stakeholders: Data scientists

Relevant AI principles: Human, societal and environmental (HSE) wellbeing, human-centred values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: There are serious concerns about AI systems’ ability to behave and make decisions responsibly.

Problem: How can we motivate AI systems and the stakeholders in the AI system ecosystem to perform tasks in a responsible manner?

Solution: Incentive mechanisms are effective in motivating AI systems, and in encouraging the stakeholders involved in the AI system ecosystem, to execute tasks responsibly. An incentive registry records the rewards that correspond to the AI system's ethical behavior and decision outcomes [1], e.g., rewards for path planning without ethical risks. The incentive mechanism can be formulated in various ways, for example, through reinforcement learning rewards, or by building it on a publicly accessible data infrastructure such as a blockchain [2, 3].
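As an illustrative sketch of this solution (the class and method names below are hypothetical, not prescribed by the pattern), an incentive registry can be modeled as an append-only log of reward entries, where hash-chaining the entries mimics the tamper-evident auditability a blockchain-backed deployment would provide:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class RewardEntry:
    """One immutable record: who earned which reward, for which behavior."""
    actor: str        # AI component or stakeholder identifier
    behavior: str     # e.g. "path planning without ethical risks"
    reward: float
    prev_hash: str    # hash of the previous entry, forming a tamper-evident chain

    def entry_hash(self) -> str:
        payload = json.dumps(
            {"actor": self.actor, "behavior": self.behavior,
             "reward": self.reward, "prev": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


class IncentiveRegistry:
    """Append-only registry of rewards for ethical behavior and decisions."""

    def __init__(self) -> None:
        self._entries: list[RewardEntry] = []

    def record(self, actor: str, behavior: str, reward: float) -> RewardEntry:
        prev = self._entries[-1].entry_hash() if self._entries else "genesis"
        entry = RewardEntry(actor, behavior, reward, prev)
        self._entries.append(entry)
        return entry

    def total_reward(self, actor: str) -> float:
        return sum(e.reward for e in self._entries if e.actor == actor)

    def verify_chain(self) -> bool:
        """Detect tampering: every entry must reference its predecessor's hash."""
        prev = "genesis"
        for e in self._entries:
            if e.prev_hash != prev:
                return False
            prev = e.entry_hash()
        return True
```

A registry like this could then serve both AI components and human stakeholders, e.g. `registry.record("planner-v2", "path planning without ethical risks", 1.0)`, with `total_reward` feeding a reputation score and `verify_chain` supporting audits.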

Traditional incentive mechanisms for human participants are reputation-based or payment-based. However, formulating the form of rewards is challenging in the context of responsible AI because the ethical impact of an AI system's decisions and behavior can be hard to measure for some ethical principles (such as human values). Furthermore, all the stakeholders, who may hold different views on the ethical impact, need to agree on the incentive mechanism. In addition, there may be trade-offs between different principles, which makes the design harder.
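One way to make the trade-offs between principles concrete (a minimal sketch; the principle names and weights below are illustrative assumptions, not part of the pattern) is to have stakeholders agree on a weight per principle up front, and then collapse per-principle ethical-impact scores into a single scalar reward:

```python
def aggregate_reward(scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Combine per-principle ethical-impact scores (each in [0, 1]) into one
    scalar reward using stakeholder-agreed weights. Scoring a principle that
    has no agreed weight raises an error, surfacing disagreement explicitly."""
    missing = set(scores) - set(weights)
    if missing:
        raise ValueError(f"no agreed weight for principles: {sorted(missing)}")
    total_weight = sum(weights[p] for p in scores)
    return sum(weights[p] * scores[p] for p in scores) / total_weight


# A decision that protects privacy well but scores poorly on fairness:
scores = {"privacy": 0.9, "fairness": 0.4}
# Stakeholders agreed to weight privacy twice as heavily as fairness:
weights = {"privacy": 2.0, "fairness": 1.0}
reward = aggregate_reward(scores, weights)  # (2.0*0.9 + 1.0*0.4) / 3.0
```

A weighted sum is only one aggregation choice; it bakes the trade-off into the weights, which is exactly why all stakeholders must agree on them before the registry starts paying out rewards.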

Fig.1 Incentive registry


Benefits:

  • Increased motivation for ethical behavior and decisions: An incentive mechanism motivates the AI components and the stakeholders in the AI system's ecosystem to behave and make decisions ethically.


Drawbacks:

  • Limitations of the incentive design: Many stakeholders within the ecosystem of an AI system might have conflicting interests and values. Depending on its design, an incentive mechanism might not motivate all stakeholders toward ethical behavior. Moreover, an incentive mechanism provides motivation but cannot enforce ethical behavior or decisions.
  • Potential privacy breach risk: The incentive registry may store sensitive data about stakeholders and their behavior.

Related patterns:

  • Federated learner: An incentive registry can be applied to a federated learner to incentivize more devices to join the learning process.
  • Continuous RAI validator: An incentive registry can work with a continuous RAI validator, which validates the ethical impact of the behavior and decisions of the AI system and the stakeholders within the ecosystem.

Known uses:

  • The open science rewards and incentives registry incentivizes the development of an academic career structure that fosters outputs, practices and behaviors to maximize contributions to a shared research knowledge system.
  • FLoBC is a tool for federated learning over blockchain that uses a reward/punishment policy to incentivize legitimate training and to punish and hinder malicious trainers.
  • OpenAI’s GPT-4 model combines reinforcement learning from human feedback (RLHF) with rule-based reward models (RBRMs) to reward desired behaviors such as appropriately refusing harmful requests and to discourage undesired ones such as inappropriate hedging.


[1] Weng, J., et al. DeepChain: Auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Transactions on Dependable and Secure Computing, 2019. 18(5): p. 2438-2455.

[2] Hacker, P., et al. Explainable AI under contract and tort law: legal incentives and technical challenges. Artificial Intelligence and Law, 2020. 28(4): p. 415-439.

[3] Mökander, J. and L. Floridi. Ethics-based auditing to develop trustworthy AI. Minds and Machines, 2021. 31(2): p. 323-327.