Capella Alliance

Capella Blogs


The Global Race to Regulate AI: A Patchwork of Laws and Policies Emerges

As artificial intelligence (AI) continues to advance and permeate various aspects of our lives, governments around the world are scrambling to develop regulatory frameworks to ensure the responsible development and deployment of this powerful technology. From the EU’s comprehensive AI Act to the US’s piecemeal approach, a patchwork of laws and policies is emerging to govern the use of AI. Let’s take a closer look at some of the key initiatives:

The EU AI Act: Pioneering a Comprehensive Regulatory Framework

The European Union has taken a bold step forward with the proposed AI Act, which aims to establish the world's first comprehensive legal framework for AI. The Act seeks to foster trustworthy AI by ensuring that fundamental rights, safety, and ethical principles are respected. It takes a risk-based approach, classifying AI systems into tiers (unacceptable, high, limited, and minimal risk) and imposing obligations that scale with the potential for harm, while also addressing the risks posed by powerful AI models.

The US AI Bill of Rights: A Guiding Principle

In contrast to the EU’s legislative approach, the US has opted for a more advisory route with the Blueprint for an AI Bill of Rights, issued by the White House Office of Science and Technology Policy (OSTP). This non-binding document outlines five principles to protect individuals from harmful AI systems, including protection from algorithmic discrimination and the ability to opt out of automated systems in favor of a human alternative where appropriate.

NYC Local Law 144: Regulating AI in Employment Decisions

New York City has taken a targeted approach with Local Law 144, which regulates employers’ use of automated employment decision tools (AEDTs), including AI, in hiring and promotion decisions. The law requires employers to subject such tools to an annual independent bias audit, make a summary of the results publicly available, and notify candidates and employees before the tools are used.

The American Data Privacy and Protection Act: Addressing AI Risks

While not a dedicated AI law, the American Data Privacy and Protection Act (ADPPA), currently under consideration in the US Congress, includes a section specifically addressing AI risks. Section 207 of the ADPPA would require companies to conduct impact assessments for high-risk AI systems and take measures to mitigate potential harms.

Canada’s Artificial Intelligence and Data Act: A Balanced Approach

Canada has also joined the global effort with the Artificial Intelligence and Data Act (AIDA), which aims to strike a balance between managing the risks of AI and encouraging responsible innovation. The Act establishes a framework for regulating high-impact AI systems while assuring researchers and innovators that the government’s intent is not to stifle good-faith efforts.

As the world grapples with the challenges and opportunities presented by AI, this patchwork of laws and policies is likely to continue evolving. While each approach has its merits, the lack of global harmonization may create compliance challenges for multinational companies and limit the potential of AI to benefit society as a whole. Nonetheless, these initiatives represent important steps towards ensuring that AI development and deployment remain safe, ethical, and aligned with human values.


References

European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act).

Government of Canada. (2022). Artificial Intelligence and Data Act.

New York City Commission on Human Rights. (2022). Automated Employment Decision Tools (AEDTs).

New York City Council. (2021). Local Law 144 of 2021: Automated Employment Decision Tools.

US House of Representatives. (2022). American Data Privacy and Protection Act.

White House Office of Science and Technology Policy. (2022). Blueprint for an AI Bill of Rights.