Japan’s AI regulations: Agile governance in action
As the EU’s Artificial Intelligence Act establishes a comprehensive regulatory framework for its member states and China adopts notably restrictive rules aimed at social stability and state control, how is AI governance taking shape in Japan?
Japan’s approach to AI regulation can be summed up in two words: agile governance. Here’s a more detailed overview:
In April 2024, Japan's Ministry of Internal Affairs and Communications and Ministry of Economy, Trade and Industry released the AI Guidelines for Business Ver 1.0, aligned with international discussions and trends. The guidelines aim to offer clear, accessible, voluntary guidance for AI developers, providers and users, balancing societal and individual rights while fostering innovation and the adoption of AI technologies.
While Japan’s guidelines are not legally binding, adherence could influence outcomes in AI-related legal disputes. So, it’s important to understand not only the guidelines themselves but the principles and philosophies in which they’re grounded.
Japan's human-centered approach to AI regulation
In 2019, Japan’s government published the Social Principles of Human-Centered AI. According to this document, Japan's goal is to realize the world’s first “AI-ready society” through a set of principles:
- AI must not infringe upon fundamental human rights
- Basic education and literacy must be ensured
- Privacy must be protected, security ensured, and a fair, competitive environment guaranteed
- Fairness, accountability and transparency are necessary
- AI innovation will be promoted through the close collaboration and cooperation of domestic and international stakeholders
These principles play out across three core philosophies:
- Respect for human dignity
- A sustainable society
- A society where diverse backgrounds support individual wellbeing
The Center for Strategic and International Studies (CSIS) points out similarities between Japan’s Social Principles and the AI Principles set by the Organisation for Economic Co-operation and Development (OECD), which acknowledge AI’s potential alongside its risks and place welfare and wellbeing alongside frequently cited AI benefits such as innovation, productivity and economic growth.
Importantly, with the introduction of the guidelines, Japan aims to achieve its Social Principles through the use of AI, not through restrictions on the ever-evolving technology.
Current Japanese AI regulations
Apart from the guidelines, Japan regulates AI primarily through existing laws rather than AI-specific legislation. This means that the development, deployment and use of AI are subject to existing statutes, including:
- The Copyright Act
- The Personal Information Protection Law
- The Unfair Competition Prevention Act
- The Antimonopoly Act
- The Economic Security Promotion Act
Further, business actors involved in advanced AI systems, namely AI developers, AI providers and AI business users, are encouraged to adhere to the Hiroshima Process International Guiding Principles for All AI Actors, as well as the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.
Finally, it’s important to note that under the Civil Code, tort claims can be brought against a person who has used AI to produce and publish defamatory content.
Adapting to sector-specific AI regulations
In terms of sector-specific regulations, CSIS, in its 2023 report, states, “None of the regulations prohibit the use of AI per se, but rather require businesses to take appropriate measures and disclose information about risks.” Looking ahead, however, organizations should keep an eye on both regulations of AI, which manage its risks, and regulations for AI, which promote its implementation.
In May, the government’s AI Strategy Council met to discuss regulating the development of generative AI, with further consultations anticipated shortly, the Japan Times reported. The regulations being envisaged would aim to mitigate risks from applying generative AI in medical equipment and self-driving vehicles, and to prevent the development of AI technologies that could be used in weapons production, crime, terrorism and human rights violations.
Nikkei Asia noted that stakeholders will look to AI regulations in Europe and the U.S. for guidance, along with draft policy from the ruling Liberal Democratic Party, and that these talks reflect a shift: “Until now, Japan has let companies self-regulate based on government-issued artificial intelligence guidelines in order to bolster growth.”
What should organizations do as these developments unfold?
Strive to integrate Japan’s human-centric philosophies into your AI activities, from developing, providing and using AI systems to creating business value.
Connecting the dots: AI regulatory trends across Asia-Pacific and beyond
Similar to Japan, the regulatory landscape across the APAC region is buzzing, with each jurisdiction introducing AI frameworks and guidelines that push businesses to adopt privacy-friendly and ethical practices in the development and use of AI. China and Singapore are leading with comprehensive measures and strategic updates to manage AI's impact and innovation. Hong Kong focuses on data protection, while Vietnam and Indonesia establish legal and ethical guidelines for AI applications, particularly in emerging sectors like fintech.
The Philippines is preparing a regional AI legal framework, aiming for implementation by 2026, underscoring a commitment to regional cooperation. Australia is also contributing to the regulatory landscape by forming an expert group to guide AI policies focused on transparency and accountability. These efforts across APAC signify a unified approach to ensuring AI's safe and ethical deployment.
The emerging AI regulations across Japan and the rest of APAC are examples of what governance and compliance leaders need to know and track. For forecasts and insights for the months ahead, download the most recent edition of our regulation and legislation roundup.