The Race for Artificial Intelligence Governance

By Nigel Douglas - MAY 13, 2024

As AI adoption becomes increasingly integral to all aspects of society worldwide, there is a heightened global race to establish artificial intelligence governance frameworks that ensure its safe, private, and ethical use. Nations and regions are actively developing policies and guidelines to manage AI’s expansive influence and mitigate associated risks. This global effort reflects a recognition of the profound impact that AI has on everything from consumer rights to national security.

Here are seven AI security regulations from around the world that are either in progress or have already been implemented, illustrating the diverse approaches taken across different geopolitical landscapes. For example, China and the U.S. have prioritized safety and governance guidance, while the EU has prioritized binding regulation and fines as a way to ensure organizational readiness.


In March 2024, the European Parliament adopted the Artificial Intelligence Act, the world’s first comprehensive horizontal legal framework dedicated to AI.

1. China: New Generation Artificial Intelligence Development Plan

Status: Established

Overview: Launched in 2017, China’s New Generation Artificial Intelligence Development Plan (AIDP) outlines objectives for China to lead global AI development by 2030. It includes guidelines for AI security management, the use of AI in public services, and the promotion of ethical norms and standards. China has since introduced various standards and guidelines focused on data security and the ethical use of AI.

The AIDP aims to harness AI to enhance administrative, judicial, and urban management, strengthen environmental protection, and address complex social governance issues, thereby advancing the modernization of social governance.

However, the plan lacks enforceable regulations, as there are no provisions for fines or penalties regarding the deployment of high-risk AI workloads. Instead, it places significant emphasis on research aimed at fortifying the existing AI standards framework. In November 2023, China entered a bilateral AI partnership with the United States. However, Matt Sheehan, a specialist in Chinese AI at the Carnegie Endowment for International Peace, remarked to Axios that there is a prevailing lack of comprehension on both sides: neither country fully grasps the AI standards, testing, and certification systems being developed by the other.

The Chinese initiative advocates upholding principles of security, availability, interoperability, and traceability. Its objective is to progressively establish and refine foundational AI standards covering interoperability, industry applications, network security, privacy protection, and other technical areas. To foster an effective artificial intelligence governance dialogue in China, officials will need to move beyond broad principles and address specific priority issues comprehensively.

2. Singapore: Model Artificial Intelligence Governance Framework

Status: Established

Overview: Singapore’s framework stands out as one of the first in Asia to offer comprehensive and actionable guidance on ethical AI governance practices. On Jan. 23, 2019, Singapore’s Personal Data Protection Commission (PDPC) unveiled the first edition of the Model AI Governance Framework (Model Framework) for broader consultation, adoption, and feedback. Following its initial release and the feedback received, the PDPC published the second edition of the Model Framework on Jan. 21, 2020, further refining its guidance and support for organizations navigating the complexities of AI deployment.

The Model Framework delivers specific, actionable guidance to private sector organizations on addressing key ethical and governance challenges associated with deploying AI solutions. It includes resources such as the AI Governance Testing Framework and Toolkit, which help organizations ensure that their use of AI is aligned with established ethical standards and governance norms.

The Model Framework seeks to foster public trust and understanding of AI technologies by clarifying how AI systems function, establishing robust data accountability practices, and encouraging transparent communication.

3. Canada: Directive on Automated Decision-Making

Status: Established

Overview: This directive governs the use of automated decision-making systems within the Canadian federal government. Parts of it took effect as early as April 1, 2019, with the compliance portions kicking in a year later.

This directive includes an Algorithmic Impact Assessment (AIA) tool, which Canadian federal institutions must use to assess and mitigate risks associated with deploying automated technologies. The AIA is a compulsory risk assessment tool, structured as a questionnaire, designed to complement the Treasury Board’s Directive on Automated Decision-Making. The assessment evaluates the impact level of an automated decision system based on 51 risk assessment questions and 34 mitigation questions.
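To make the mechanics concrete, here is a minimal sketch of how a questionnaire-driven assessment of this kind can map answers to a tiered impact level. The normalization, the 80% mitigation threshold, the 15% discount, and the level cut-offs below are illustrative assumptions for this sketch, not the official AIA scoring rubric.

```python
# Illustrative sketch of a questionnaire-driven impact assessment.
# NOTE: the normalization, the 80% mitigation threshold, the 15%
# discount, and the level cut-offs are assumptions for illustration,
# not the official AIA scoring rubric.

def impact_level(risk_answers, mitigation_answers, max_risk, max_mitigation):
    """Map questionnaire scores to an impact level from I (1) to IV (4)."""
    raw = sum(risk_answers) / max_risk  # normalized raw impact score, 0.0-1.0

    # Assume strong mitigation (>= 80% of attainable points) discounts
    # the raw impact score by 15% -- illustrative, not the real rule.
    if sum(mitigation_answers) / max_mitigation >= 0.80:
        raw *= 0.85

    # Bucket the final score into four bands of equal width.
    for level, ceiling in enumerate((0.25, 0.50, 0.75, 1.00), start=1):
        if raw <= ceiling:
            return level
    return 4


# Example: a moderately risky system with thorough mitigation answers.
print(impact_level([3, 2, 4], [2, 2, 3], max_risk=12, max_mitigation=8))  # -> 3
```

The design point this sketch captures is that mitigation answers do not erase risk; they only discount it, so a genuinely high-impact system still lands in a tier that carries meaningful obligations.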

Non-compliance with this directive can lead to measures deemed appropriate by the Treasury Board under the Financial Administration Act, depending on the specific circumstances. The discipline is corrective rather than punitive, intended to motivate employees to adopt the rules and standards of conduct the organization needs to achieve its goals and objectives. For detailed information on the potential consequences of non-compliance with this artificial intelligence governance directive, consult the Framework for the Management of Compliance.

4. United States: National AI Initiative Act of 2020

Status: Established

Overview: The National Artificial Intelligence Initiative Act (NAIIA) was signed into law to promote and coordinate a national AI strategy. It includes efforts to ensure the United States is a global leader in AI, to enhance AI research and development, and to protect national security interests. While it is less focused on individual AI applications, it lays the groundwork for the development of future AI regulations and standards.

The NAIIA states its goal is to “modernize governance and technical standards for AI-powered technologies, protecting privacy, civil rights, civil liberties, and other democratic values.” With the NAIIA, the U.S. government intends to build public trust and confidence in AI workloads through the creation of AI technical standards and risk management frameworks.

5. European Union: AI Act

Status: In progress

Overview: The European Union’s AI Act is one of the world’s most comprehensive attempts to establish artificial intelligence governance. It aims to manage risks associated with specific uses of AI and classifies AI systems according to their risk levels, from minimal to unacceptable. High-risk categories include critical infrastructure, employment, essential private and public services, law enforcement, migration, and the administration of justice.

The EU AI Act reached a provisional agreement on Dec. 9, 2023, and was adopted by the European Parliament in March 2024, though it still awaited final approval at the time of writing. The legislation categorizes AI systems with significant potential for harm to health, safety, fundamental rights, and democracy as high risk. This includes AI that could influence elections and voter behavior. The Act also lists banned applications to protect citizens’ rights, prohibiting AI systems that categorize biometric data based on sensitive characteristics, perform untargeted scraping of facial images, recognize emotions in workplaces and schools, implement social scoring, manipulate behavior, or exploit vulnerable populations.
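As a rough illustration of the Act’s tiered model, the sketch below maps a few example systems to risk tiers. The tier names mirror the Act’s minimal-to-unacceptable scale, but the example use-case mappings are simplified assumptions for illustration, not legal guidance.

```python
from enum import Enum

# Toy illustration of the EU AI Act's tiered risk model. The tier names
# mirror the Act's minimal-to-unacceptable scale, but the example
# use-case mappings below are simplified assumptions, not legal guidance.

class RiskTier(Enum):
    MINIMAL = 1        # e.g., spam filters: largely unregulated
    LIMITED = 2        # e.g., chatbots: transparency obligations
    HIGH = 3           # e.g., hiring, critical infrastructure: strict duties
    UNACCEPTABLE = 4   # e.g., social scoring: banned outright

EXAMPLE_CLASSIFICATION = {
    "email_spam_filter": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "government_social_scoring": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: obligations scale with {tier.name} risk")
```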

Comparatively, the U.S. National AI Initiative Office, established under the NAIIA, focuses predominantly on standards and guidelines, whereas the EU’s AI Act enforces binding regulations whose violation would incur significant fines and other penalties without further legislative action.

6. United Kingdom: AI Regulation Proposal

Status: In progress

Overview: Following its exit from the EU, the UK has begun to outline its own regulatory framework for AI, separate from the EU AI Act. The UK’s approach aims to be innovation-friendly, while ensuring high standards of public safety and ethical considerations. The UK’s Centre for Data Ethics and Innovation (CDEI) is playing a key role in shaping these frameworks.

In March 2023, the UK government published its AI regulation white paper, setting out initial proposals to develop a “pro-innovation regulatory framework” for AI. The proposed framework outlined five cross-sectoral principles for the UK’s existing regulators to interpret and apply within their remits:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

This proposal also appears to lack clear repercussions for organizations that abuse trust or compromise civil liberties with their AI workloads.

While this in-progress proposal is still weak on taking action against general-purpose AI abuse, it does state clear intentions to work closely with AI developers, academics, and civil society members who can provide independent expert perspectives. The UK’s proposal also signals an intention to collaborate with international partners ahead of the second global AI Safety Summit, scheduled for South Korea in May 2024.

7. India: AI for All Strategy

Status: In progress

Overview: India’s national AI initiative, known as AI for All, is dedicated to promoting the inclusive growth and ethical use of AI in India. The program primarily functions as a self-paced online course designed to enhance public understanding of artificial intelligence across the country.

The program is intended to demystify AI for a diverse audience, including students, stay-at-home parents, professionals from any sector, and senior citizens — essentially anyone keen to learn about AI tools, use cases, and security concerns. Notably, the program is concise, consisting of two main parts: “AI Aware” and “AI Appreciate,” each designed to be completed within about four hours. The course focuses on making use of AI solutions that are both secure and ethically aligned with societal needs.

It’s important to clarify that AI for All is neither a regulatory framework nor an industry-recognized certification program. Rather, it exists to help unfamiliar citizens take their first steps toward embracing an AI-inclusive world. While it does not aim to make participants AI experts, it provides a foundational understanding of AI, empowering them to discuss and engage with this transformative technology effectively.

Conclusion

Each of these initiatives reflects a broader global trend towards creating frameworks that ensure AI technologies are developed and deployed in a secure, ethical, and controlled manner, addressing both the opportunities and challenges posed by AI. Additionally, these frameworks continue to emphasize a real need for robust governance — be it through enforceable laws or comprehensive training programs — to safeguard citizens from the potential dangers of high-risk AI applications. Such measures are crucial to prevent misuse and ensure that AI advancements contribute positively to society without compromising individual rights or safety.
