AI regulations around the world: Trends, takeaways & what to watch heading into 2025
Automation, algorithms, large language models, tools like ChatGPT and more are transforming how people live and work around the world. As they do, they’re bringing with them thorny concerns in areas from data privacy and intellectual property protection to labor rights and bias.
How have the countries around the world reacted to generative AI in terms of guidelines and regulations? How do approaches differ, and dovetail, across jurisdictions? Here’s an AI legislation tracker of the highlights.
AI regulations in North America
AI in the U.S.
In contrast to the EU’s centralized approach to AI regulation, and differing sharply from the national AI frameworks in Japan and Singapore, the United States is pursuing a decentralized regulatory framework. The CHIPS and Science Act of 2022 lists AI among its key technology areas. Securities and Exchange Commission Chair Gary Gensler has called for AI guardrails, and a White House Executive Order has laid out key principles for responsible AI development and deployment, with a focus on transparency and worker protection.
While California has been leading the way on state-level AI regulation — proposing several laws to increase business accountability, combat discrimination, and regulate how businesses use data — Colorado became the first US state to enact comprehensive AI legislation. In May 2024, the Colorado AI Act established rules for the developers and deployers of AI systems, with a focus on algorithmic discrimination and “high-risk” systems — AI tools and platforms active in essential areas like housing, healthcare, education and employment.
Developments in Canada
In Canada, the government launched the world’s first national AI strategy in 2017, but AI governance has been a work in progress ever since. As in the US, the federal government has issued guidelines for responsible use, with current measures covering specific industries including health and finance. A sweeping Artificial Intelligence and Data Act (AIDA) has been proposed, which would enact penalties for unlawful data use, “reckless deployment” and “fraudulent intent” resulting in “substantial economic loss,” and would monitor compliance through the office of an AI and Data Commissioner.
AI regulations in Europe
EU AI legislation
On the other side of the Atlantic, the EU AI Act has been making headlines — and shaping corporate agendas — both inside and outside of Europe.
While the United States pursues a patchwork of federal, state and industry frameworks, and national legislation in Canada remains in the discussion stage, the European Union has united its 27 member nations under a common set of overarching regulations.
These outline four levels of AI risk, alongside rigorous transparency and data governance obligations for AI providers. The regulations also lay out detailed compliance and monitoring protocols for those deploying AI systems, with enforcement overseen through the European AI Office.
Download our in-depth analysis of the EU AI Act here or start with our EU AI Act cheat sheet.
Want more help complying with the EU AI Act? Speak to an expert and ask about our EU AI Act Toolkit, designed to provide a comprehensive solution to streamline governance, compliance and ongoing monitoring efforts.
What’s happening in the U.K.?
The UK's AI regulatory strategy, in marked contrast, has been putting AI oversight in the hands of existing regulators under overarching principles of safety, security, transparency, accountability and fairness. It’s a pro-innovation approach grounded in arguments that these officials know best how to regulate AI in their sectors. The end goal is to spur responsible AI development that makes the UK a global leader in this area.
Additional country-by-country initiatives in the EU
Pro-innovation AI initiatives are also emerging in individual EU nations. France, for example, has launched a National AI Strategy focused on research and economic development, proposed AI-related amendments to its intellectual property code and created a Generative AI Committee that brings cultural, economic, technological and research stakeholders together to help inform government decisions.
The German government is also actively working to stimulate AI innovation — particularly for startups, SMEs and environmental technology — and is offering sector-specific guidelines for ethical AI. One example is the ForeSight project led by the Federal Ministry for Economic Affairs and Climate Action, which encompasses the development and application of smart living services.
AI regulations in the Asia-Pacific
Singapore is a global AI-governance leader
In 2019, Singapore became the first nation in the world to launch a Model AI Governance Framework, and it has remained a global and regional leader in AI regulation in the years since. In the same year, it introduced the first edition of its National AI Strategy.
As AI has evolved, Singapore’s policy has been quick to respond. Following the release of ChatGPT in late 2022, Singapore updated its national strategy in 2023, and in January 2024, its Info-communications Media Development Authority issued a Proposed Model AI Governance Framework for Generative AI.
Japan’s human-centered principles
Japan is also a front-runner in AI regulations around the world. In 2019, Japan’s government published its Social Principles of Human-Centered AI, aiming to help shape the world’s first “AI-ready society” through guidelines grounded in respect for human dignity, widespread sustainability and a society where diverse backgrounds support individual wellbeing.
It’s interesting to note that both Singapore and Japan augment their national AI guidance with sector-specific oversight and the use of existing laws and authorities — an approach echoed by some, but not all, of their Asian-Pacific neighbors.
Australia is developing its own guidelines
In September 2024, the Australian government released proposed guardrails for AI use in high-risk settings and a voluntary standard for AI safety. These two measures are designed to complement existing legal frameworks, including privacy, consumer protection and corporate governance laws.
China’s more centralized approach
China, by contrast, has traditionally taken a centralized approach to AI development and a strong stance on regulatory oversight. While its governance approach differs from those of other Asia-Pacific nations, and many AI regulations around the world, its guidance has been similarly quick to reflect new developments, from an August 2023 law designed to regulate generative AI to a September 2024 proposal for standardizing the labelling of AI-generated content.
AI regulations in Latin America
As our tour of AI regulations around the world reaches Latin America, there’s much to take note of. In May 2024, Buenos Aires hosted UNESCO’s first Regional Summit of Parliamentarians on Artificial Intelligence and the Latin America Agenda. The gathering introduced nine approaches to AI oversight — innovation-centered, standards-based, transparency-focused and more — that could be customized and adapted to each nation’s unique environment.
Brazil incorporates many of these approaches in its proposed AI regulations, which spell out obligations for the providers and operators of AI systems, introduce different categories of risk, and set goals of protecting fundamental rights “for the benefit of the human person and the democratic regime” and supporting scientific and technological development.
Similar legislation has been proposed in Mexico, with a dual emphasis on human rights and national AI advancement. Mexico’s policy would include three categories of risk, continuous monitoring and other obligations for the developers of AI systems, and intellectual property considerations such as clear labelling of AI-generated content and a requirement to secure consent for data use.
AI regulations in the Middle East
In recent years, the Middle East has been a focal point for advancements in AI, with regulations gaining significant attention. Saudi Arabia and the United Arab Emirates (UAE) are at the forefront of this movement, setting benchmarks for AI governance and innovation. Saudi Arabia’s National Strategy for Data & AI and the UAE’s swift tech adoption highlight their leadership in this arena.
AI regulations in Saudi Arabia
Saudi Arabia's National Strategy for Data & AI aims to transform the kingdom into a global AI leader by 2030. This strategy encompasses a range of initiatives to harness AI's potential, including developing a robust AI ecosystem and promoting talent through specialized training programs. Ethical AI practices are also a cornerstone, ensuring that AI is used responsibly and transparently.
The Saudi strategy aims to boost economic growth by integrating AI across various sectors, such as healthcare, education and transportation, to enhance efficiency and innovation. By focusing on comprehensive data governance and AI application, Saudi Arabia seeks to create a sustainable and inclusive AI-driven economy. The kingdom's commitment to ethical AI also sets a high standard for responsible AI deployment, influencing policy-making in the region.
The UAE's commitment to AI investment and regulation
The UAE has consistently demonstrated a remarkable commitment to integrating AI into its national fabric. With substantial investments in AI research, infrastructure and education, the country aims to spearhead technological advancement. The National Artificial Intelligence Strategy 2031 outlines ambitious targets, such as embedding AI into government services and fostering economic diversification.
The UAE has established specialized research centers and launched AI accelerators to support innovation and development. These initiatives underscore the nation's determination to become a global AI hub. Moreover, the UAE's proactive approach to creating a conducive environment for AI development is evident through its robust policy frameworks and regulatory measures designed to support and monitor AI technologies effectively.