RAI Law and Regulation

Summary: RAI laws and regulations are enforceable rules and policies issued by an executive authority or regulatory agency of a government to ensure the responsible development and use of AI systems within its jurisdiction.

Type of pattern: Governance pattern

Type of objective: Trustworthiness

Target users: RAI governors

Impacted stakeholders: AI technology producers and procurers, AI solution producers and procurers, RAI tool producers and procurers

Lifecycle stages: All stages

Relevant AI ethics principles: Human, societal and environmental wellbeing, human-centered values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, accountability

Mapping to AI regulations/standards: EU AI Act, ISO/IEC 42001:2023 Standard.

Context: A number of laws and regulations already apply directly or indirectly to AI systems. However, the processes and requirements for ensuring compliance are not always clear, and some existing laws may need to be updated. There is an urgent need for clear guidance to ensure that AI systems are developed and used responsibly and in compliance with existing and upcoming laws (e.g., anti-discrimination laws).

Problem: How can we ensure that AI systems are developed and used responsibly and in compliance with applicable laws?

Solution: Enforceable AI laws and regulations aim to ensure that AI systems are developed and used in a way that is ethical and beneficial to society at large within the jurisdiction: citizens and their rights are protected while innovation is supported. Jurisdictions around the world, including the European Union and Canada, have been working to establish comprehensive regulatory frameworks for AI to reach these goals. Organizations that build AI systems must meet certain criteria before they can enter the market, for example, building AI systems with an RAI-by-design methodology and having governance capabilities in place to ensure ongoing compliance with laws and to implement standards and ethical codes. The specific criteria may vary with the level of risk or the domain in which the AI systems are used.
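
To make the idea of risk-tiered criteria concrete, the following is a minimal Python sketch of how an organization might check its governance controls against a risk tier before deployment. The tiers, control names, and mapping are illustrative assumptions loosely inspired by risk-based frameworks such as the EU AI Act; they do not encode any actual legal text.

    from enum import Enum

    class RiskTier(Enum):
        # Hypothetical risk tiers, loosely modelled on risk-based AI regulation.
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Illustrative (assumed) mapping from risk tier to the governance controls an
    # organization might need to evidence before deploying a system in that tier.
    REQUIRED_CONTROLS = {
        RiskTier.UNACCEPTABLE: None,  # prohibited: no set of controls permits deployment
        RiskTier.HIGH: {"risk_management", "data_governance", "human_oversight",
                        "transparency_documentation", "conformity_assessment"},
        RiskTier.LIMITED: {"transparency_documentation"},
        RiskTier.MINIMAL: set(),
    }

    def compliance_gaps(tier, implemented_controls):
        # Return the controls still missing for the given tier, or None if the tier is prohibited.
        required = REQUIRED_CONTROLS[tier]
        if required is None:
            return None
        return required - set(implemented_controls)

    if __name__ == "__main__":
        gaps = compliance_gaps(RiskTier.HIGH, {"risk_management", "human_oversight"})
        print("Missing controls before deployment:", sorted(gaps))

Such a check is only a starting point: in practice, the required controls for each tier would come from legal and compliance experts interpreting the applicable regulation, not from a hard-coded table.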

Fig. 1. AI regulation.

Benefits:

  • Compliance: RAI laws and regulations can help ensure that AI systems are developed and used in adherence to ethical principles and in alignment with human values.
  • Risk mitigation and public trust: RAI laws and regulations can mandate practices that reduce AI risks, such as processes for limiting bias and protecting privacy, and can increase public confidence in the responsible development and deployment of AI.

Drawbacks:

  • Long time to enact: Regulation commonly takes years to come into effect after it is first proposed. The delay can stem from a variety of factors, such as the need to consult a wide range of stakeholders and to carefully consider the potential consequences.
  • Lack of interoperability and portability: AI systems are often developed and deployed across multiple jurisdictions, yet RAI laws and regulations from different jurisdictions may be inconsistent or lack interoperability.

Related patterns:

  • Regulatory sandbox: Since RAI laws and regulations can take a significant amount of time to enact, regulators can introduce an agile regulatory sandbox as an interim measure. This sandbox allows innovative AI products to be tested in a live environment for a limited time and within a specified space under the supervision of a regulator.
  • RAI governance via APIs: AI technologies can be delivered as cloud-based services, and interaction with these services can be governed through APIs to help enforce RAI laws and regulations (a minimal sketch follows this list).
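
As a rough illustration of the API-based governance idea, the following Python sketch gates each call to a cloud-hosted AI service behind a jurisdiction-specific policy check. Every name in it (the rule keys, evaluate_request, governed_invoke, and the payload fields) is a hypothetical placeholder, not an existing regulatory or vendor API.

    from dataclasses import dataclass, field

    @dataclass
    class PolicyDecision:
        allowed: bool
        reasons: list = field(default_factory=list)

    def evaluate_request(payload, jurisdiction_rules):
        # Evaluate an inbound AI-service request against jurisdiction-specific RAI rules.
        # Both the payload schema and the rule set are assumed, simplified placeholders.
        reasons = []
        if jurisdiction_rules.get("require_purpose") and not payload.get("declared_purpose"):
            reasons.append("missing declared purpose of use")
        if payload.get("use_case") in jurisdiction_rules.get("prohibited_use_cases", set()):
            reasons.append("prohibited use case: " + payload["use_case"])
        if jurisdiction_rules.get("require_human_oversight") and not payload.get("human_in_the_loop"):
            reasons.append("human oversight flag not set")
        return PolicyDecision(allowed=not reasons, reasons=reasons)

    def governed_invoke(payload, jurisdiction_rules, model_endpoint):
        # Gate every call to the underlying AI service behind the policy check above.
        decision = evaluate_request(payload, jurisdiction_rules)
        if not decision.allowed:
            return {"status": "rejected", "reasons": decision.reasons}
        return {"status": "ok", "result": model_endpoint(payload)}

    if __name__ == "__main__":
        eu_style_rules = {
            "require_purpose": True,
            "require_human_oversight": True,
            "prohibited_use_cases": {"social_scoring"},
        }
        fake_model = lambda p: "model output for " + p["declared_purpose"]
        print(governed_invoke({"use_case": "credit_scoring"}, eu_style_rules, fake_model))

In a real deployment, the rule set would be derived from the applicable laws of each jurisdiction, and the gateway would sit in front of the provider's actual service API, logging its decisions to support audit and accountability.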

Known uses: