
AI governance: What it is & how to implement it

The 2024 Microsoft and LinkedIn Work Trend Index found that while 79% of leaders agree AI adoption is critical to competitiveness, 60% worry their company lacks a vision and plan to implement it. How do organizations accelerate AI to support transformational objectives and manage risk and opportunity? AI governance.
“Boards are racing to harness AI’s potential, but they must also uphold company values and safeguard the hard-earned trust of their customers, partners, and employees,” says Dale Waterman, Principal Solution Designer at Diligent.
Here, we’ll explain how to develop an AI governance approach that helps you harness innovation without exposing your organization to undue risk, including:
- What AI governance is and why it’s important
- AI governance frameworks
- The value of technical standards for AI
- Challenges in governing AI today
- Ethical guidelines for responsible AI governance
- AI governance policies (with a template)
- AI governance best practices
What is AI governance?

AI governance encompasses the frameworks, policies and practices that promote the responsible, ethical and safe development and use of AI systems.
Boards will collaborate with key technology and risk stakeholders to set guidelines for transparency, accountability and fairness in AI technologies to prevent harm and bias while maximizing their benefits operationally and strategically. Responsible AI governance considers:
- Ethical standards: AI governance policies should promote human-centric and trustworthy AI and ensure a high level of protection for health, safety and fundamental human rights.
- Regulations and policies: Boards should also consider compliance with applicable legal frameworks that govern AI usage where they operate, or intend to operate, such as the EU’s AI Act.
- Accountability and oversight: Organizations should assign responsibility for AI decisions to ensure human oversight and prevent misuse.
- Security and privacy: Chief technology officers, risk officers, chief legal officers and their boards must develop a governance approach that protects data, prevents unauthorized access and ensures AI systems don’t become a cybersecurity threat.
Why is AI governance important?
Corporate governance more broadly arose to balance the interests of all key stakeholders — leadership, employees, customers, investors and more — fairly, transparently and for the company's good. AI governance is similarly important because it prioritizes ethics and safety in developing and deploying AI.
“The corporate governance implications of AI are becoming increasingly understood by boards, but there is still room for improvement,” says Jo McMaster, Regional Vice President of Sales at Diligent.
Without good governance, AI systems could lead to unintended consequences, from discrimination and misinformation to economic and social disruptions. Having a strong AI governance approach:
- Prevents bias: AI models can inherit biases from training data, leading to unfair hiring, lending, policing and healthcare outcomes. Governance proactively identifies and mitigates these biases (a simple audit check is sketched after this list).
- Prioritizes accountability: When AI makes decisions, who is responsible? Governance holds humans accountable for AI-driven actions, preventing harm from automated decision-making. PwC’s Head of AI Public Policy and Ethics Maria Axente says, “We need to be thinking, ‘What AI do we have in the house, who owns it and who’s ultimately accountable?’"
- Protects privacy and security: AI relies on vast amounts of data, a particular risk for healthcare and financial organizations handling sensitive information. Governance establishes guidelines for data protection, encryption and ethical use of personal information.
- Prepares for AI’s environmental, social and governance (ESG) impact: Generative AI has a significant environmental footprint, consuming substantial electricity and water with every query. It is also reshaping job markets and corporate operations. Governance helps create policies that balance AI’s opportunities with its ESG risks.
- Promotes transparency and trust: Many AI systems are considered “black boxes” with little insight into their decision-making. Governance encourages transparency and helps users trust and interpret AI outcomes.
- Balances innovation and risk: While AI holds immense potential for progress in healthcare, finance and education, governance weighs innovation alongside possible ethical considerations and public harm.
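To make the bias-prevention point concrete, here is a minimal sketch of the kind of check a fairness audit might run: comparing positive-outcome rates across groups. The data, group names and four-fifths threshold below are illustrative assumptions, not a prescribed methodology.

```python
# Minimal fairness-audit sketch: compare positive-outcome rates across
# groups. All data and thresholds here are invented for illustration.

def selection_rate(outcomes):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def bias_audit(outcomes_by_group, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the common 'four-fifths' rule of thumb)."""
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: {"rate": round(r, 2),
                "disparate_impact": round(r / best, 2),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical hiring-model decisions (1 = advance, 0 = reject):
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}
print(bias_audit(decisions))
# group_b's rate is half of group_a's, so it would be flagged for review.
```

In practice, audits like this run against production decision logs, cover multiple protected attributes at once and feed their findings back into model retraining.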
What does AI mean for the boardroom?
Master the five factors influencing AI governance today to help your board navigate the complex interplay between innovation and risk.
Discover more

Major AI governance frameworks
Just as the European Union has the General Data Protection Regulation (GDPR) while the U.S. has no federal equivalent, AI governance frameworks vary by region. Countries often take different approaches to what it means for AI to be ethical, safe and responsible.
“The issue of competing values is not a new one for governments or the technology sector,” says Waterman. “During a time of regulatory uncertainty and ambiguity, where laws will lag behind technology, we need to find a balance between good governance and innovation to anchor our decision-making in ethical principles that will stand the test of time when we look back in the mirror in the years ahead.”

Global AI regulations currently lack harmonization. Countries like the United States and the UK currently emphasize guidelines, focusing on innovation and maintaining a competitive edge on the global stage. In contrast, the EU’s AI Act is a comprehensive law that places greater emphasis on assessing and mitigating the risks AI poses, with the aim of fostering trustworthy AI and protecting fundamental human rights.
Some significant frameworks around the world include:
- European Union: The EU AI Act, passed in 2024, classifies AI systems into risk categories based on their intended use and how they are developed and deployed. All AI applications under the act are subject to transparency, accountability and data protection requirements. (A simplified sketch of the act’s risk tiers follows this list.)
- United Kingdom: In 2023, the UK published an AI regulation white paper. Rather than instituting a single law, the UK took a pro-innovation and sector-based approach to AI. The document encourages the self-regulation of ethical AI practices in industry, focusing significantly on safety, transparency and accountability in AI development.
- China: The New Generation Artificial Intelligence Development Plan anchors one of the most detailed AI regulatory systems, including strict AI controls, safety standards and facial recognition regulations. China also implemented the Interim Measures for Generative AI Services in 2023 to ensure AI-generated content aligns with Chinese social values.
- India: Developed by the think tank NITI Aayog, India’s National Strategy for Artificial Intelligence focuses on the ethical adoption of AI in sensitive industries like healthcare, agriculture and finance. It proposes self-regulation and public-private partnerships for AI governance.
- Australia: The Australian AI Ethics Principles include eight guidelines, covering fairness, accountability and more. The country also passed the Online Safety Act in 2021, which addresses harmful online content, including AI-generated misinformation.
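To give the EU AI Act’s risk-based approach above a concrete shape, the sketch below maps the act’s four broad risk tiers to the general character of their obligations. Tier assignment for a real system depends on the act’s detailed annexes, so treat this as a simplified summary rather than legal guidance.

```python
# Simplified sketch of the EU AI Act's four risk tiers. The descriptions
# paraphrase the act's broad structure and are not legal advice.

RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g., social scoring by public authorities)",
    "high": "Conformity assessments, risk management, human oversight, logging",
    "limited": "Transparency duties (e.g., disclosing that users interact with AI)",
    "minimal": "No specific obligations; voluntary codes of conduct encouraged",
}

def obligations_for(tier: str) -> str:
    """Look up the broad obligations attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown tier {tier!r}; expected one of {list(RISK_TIERS)}")
    return RISK_TIERS[tier]

print(obligations_for("high"))
```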
AI regulations around the world
See how the regulatory and governance response to AI’s opportunities and concerns has varied globally.
Discover more

The importance of technical standards for AI
Beyond regulations, industry bodies and standards organizations have developed technical AI governance guidelines. While voluntary, complying with relevant technical standards can help your organization foster quality, safe and efficient AI-powered products, services and innovation.

Most of these standards attempt to strike a balance between interests that can easily conflict: safety and quality on one hand, efficiency and innovation on the other. They include:
- The National Institute of Standards and Technology (NIST) AI Framework: This flexible, voluntary AI Risk Management Framework focuses on addressing bias, explainability and security in AI systems. It has taken on added weight since the 2023 White House Executive Order on AI, which reinforced the federal focus on AI safety.
- ISO/IEC JTC 1/SC 42: The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) established this joint committee in 2017 to develop international AI standards covering data management, algorithmic transparency and security. It has published 34 standards, with roughly 40 more under development.
- IEEE Standards Association: This industry-led group established a committee on AI in early 2021. Its work focuses on technical standards that enable AI governance within specific sectors.
- International Telecommunication Union (ITU): The ITU has conducted focus groups to assess the need and requirements for future AI standards. These efforts have focused on AI for digital agriculture, natural disaster management, health, environmental efficiency and autonomous and assisted driving.
Emerging challenges with AI governance
Despite the value of AI governance, getting it right can be difficult. Standards must evolve as rapidly as technology does and consider the distinct regulatory approaches across jurisdictions — not to mention ethical concerns.
Boards working to govern AI may also need to confront:
- Technological advancements outpacing regulations: AI is growing at an unprecedented rate, making it difficult for policymakers and regulators to keep up. With regulations one step behind innovation, organizations can easily expose themselves to the misuse of AI, lack of accountability or unforeseen ethical dilemmas.
- Lack of consensus on AI governance: Different countries have varying perspectives on AI regulation, privacy and data security. The EU, for example, has taken a strict regulatory approach with its AI Act, while the U.S. leans toward industry self-regulation. These variations make it challenging to anchor governance to any universal standard.
- Limited explainability: It is often extremely difficult to understand how complex AI systems reach their decisions. This lack of transparency erodes trust in AI and makes it harder to govern. How do you know, for example, that a healthcare AI system is making fair and unbiased decisions based on the data available to it? Those developing AI governance frameworks must consider how to balance advancing AI systems with public accountability. (One common probing technique is sketched after this list.)
- Unclear liability: Determining responsibility when AI causes harm is complex. Is the developer, the user or the organization responsible? Current legal frameworks don’t clearly define AI accountability, particularly in cases where autonomous systems make independent decisions.
- Data privacy, security and risk management considerations: AI systems require vast amounts of data, raising concerns about how personal information is collected, stored and used. This exposure to data also raises the stakes for cybersecurity. Non-Executive Director and Founder of C Squared Consulting Caroline Cartellieri says, “So it’s almost like today boards talk a lot about cybersecurity. Just add that to the power of X because now the risks are becoming so much bigger because nobody quite understands what Gen AI does, its capabilities, and how powerful it can be.”
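To illustrate the explainability challenge mentioned above, here is a minimal sketch of one widely used black-box probing technique, permutation importance: shuffle one input feature at a time and measure how much the model’s output moves. The `model` function and data below are invented stand-ins for an opaque production system.

```python
# Permutation-importance sketch: probe a black-box model by shuffling one
# feature at a time and measuring how much its output changes.

import random

def model(row):
    """Stand-in for an opaque model: a fixed weighted score."""
    return 0.7 * row["income"] + 0.2 * row["tenure"] + 0.1 * row["age"]

def permutation_importance(rows, feature, trials=50):
    """Average absolute change in output when `feature` is shuffled."""
    baseline = [model(r) for r in rows]
    total = 0.0
    for _ in range(trials):
        values = [r[feature] for r in rows]
        random.shuffle(values)
        perturbed = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
        total += sum(abs(b - model(p))
                     for b, p in zip(baseline, perturbed)) / len(rows)
    return total / trials

data = [{"income": random.random(), "tenure": random.random(),
         "age": random.random()} for _ in range(100)]
for feature in ("income", "tenure", "age"):
    print(feature, round(permutation_importance(data, feature), 3))
# "income" dominates here, mirroring its 0.7 weight inside the black box.
```

Techniques like this do not open the black box, but they give auditors and regulators a defensible account of which inputs drive a system’s decisions.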
Ethical guidelines for responsible AI governance
AI governance should encompass more than specific processes for developing and using AI. Frameworks today should also consider five ethical principles to ensure AI is developed and deployed in a way that benefits society while minimizing harm. The principles below are also the foundation for emerging AI ethical guidelines.
- Fairness: AI systems should be designed to prevent discrimination and bias. This includes ensuring diverse representative training data, auditing algorithms for bias and implementing fairness-aware machine learning techniques. The OECD AI Principles are an intergovernmental AI standard promoting trustworthy AI that respects human rights.
- Transparency: AI models should be explainable and understandable to users. Organizations should disclose how AI systems make decisions, particularly in high-stakes areas like finance, healthcare and law enforcement. The EU AI Act is at the forefront of AI transparency, requiring certain disclosures for high-risk AI systems.
- Accountability: Determining who is responsible for AI decisions is challenging. Developers, businesses and policymakers should collaborate to ensure AI systems align with consistent ethical and regulatory standards. This is a pillar of the U.S. Blueprint for an AI Bill of Rights, which was released in October 2022.
- Privacy: AI systems must follow strict data protection regulations to safeguard users’ privacy. This includes requiring informed consent and robust security measures (a pseudonymization sketch follows this list). Google’s AI Principles are a compelling example of guidelines for the AI development process that put humans first.
- Security: All AI systems should be designed to prevent vulnerabilities and cyber threats. Developers must implement safeguards against breaches, attacks and unauthorized access. The UK’s National Cyber Security Centre, which publishes guidance on secure AI system development, examines AI security closely.
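As a small illustration of the privacy principle above, the sketch below shows pseudonymization: replacing a direct identifier with a salted hash before records enter an AI pipeline. This is one assumed technique among many and does not, on its own, constitute GDPR compliance.

```python
# Pseudonymization sketch: replace a direct identifier with a salted hash
# so downstream AI systems never see the raw value. Illustrative only.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # store separately from the data; never log it

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a truncated, salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

record = {"email": "jane@example.com", "age_band": "35-44", "score": 0.82}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # the identifier is no longer directly readable
```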
AI governance policy
An AI governance policy clearly outlines what an organization considers the acceptable development and use of AI systems. These guidelines should be clear, easy for employees to follow and align with compliance and risk management measures.
What these policies mandate can vary by organization. Some may prohibit entering proprietary information into AI systems; others may specify which tasks AI can support and which it can’t. Whatever the requirement, though, AI governance policies are important because they:
- Promote and help prove compliance with existing and emerging regulations or standards, such as the EU’s AI Act or the NIST Risk Management Framework.
- Support ethical AI development, keeping in mind the five aforementioned principles.
- Enhance public trust and confidence in AI-driven services that put responsible use first.
- Drive business and innovation goals by balancing business interests, ethical considerations, and the need to adapt to the future of AI.
Template for an AI governance policy
Given the quick pace of AI evolution, writing a governance policy can feel daunting. What does it look like to manage AI proactively and ethically? Here’s a template to get you started:
Effective Date: [MM/DD/YYYY]
Last Updated: [MM/DD/YYYY]
Owner: [AI ethics and compliance team]
1. Purpose
This AI Governance Policy outlines the principles, guidelines and responsibilities for the ethical development, deployment and management of AI within [Organization Name]. We aim to promote the responsible, fair and transparent use of AI while aligning with legal and ethical standards.
2. Scope
This policy applies to all AI systems [Organization Name] develops, procures or deploys, including machine learning models, automated decision-making tools and AI-driven analytics in business operations.
3. Governance principles
[Organization Name] commits to the following:
3.1 Fairness and bias mitigation
- AI systems must be designed to prevent discrimination based on race, gender, age or other protected attributes.
- Regular audits will be conducted to identify and mitigate bias in AI models.
3.2 Transparency and explainability
- AI-driven decisions must be understandable and interpretable by users.
- Clear documentation on AI functionality and decision-making processes will be maintained.
3.3 Accountability and oversight
- An AI Ethics & Compliance Team must monitor and manage AI-related risks.
- Human oversight will be required for high-risk AI applications.
3.4 Privacy and data protection
- AI systems must comply with GDPR and other applicable data protection laws.
- Personal data collection must be minimized and anonymized where possible.
3.5 Security and risk management
- AI systems must follow best practices for cybersecurity, including encryption and adversarial testing.
- Incident response protocols will be in place to address AI-related security threats.
4. Compliance and legal standards
This policy aligns with the following regulatory frameworks:
- EU AI Act
- NIST AI Risk Management Framework
- OECD AI Principles
5. Roles and responsibilities
- AI Ethics & Compliance Team: Oversees AI governance implementation and compliance.
- Data Science & Engineering Team: Ensures AI models adhere to ethical and technical standards.
- Legal & Risk Management: Evaluates AI risks and ensures compliance with laws.
- End Users & Customers: Report concerns related to AI ethics, bias or transparency.
6. AI risk assessments and audits
- AI systems will undergo annual risk assessments to ensure ethical compliance.
- Third-party audits may be conducted for high-risk AI applications.
7. Continuous monitoring and policy updates
- AI governance policies will be reviewed and updated annually to reflect evolving regulations and best practices.
- An internal AI ethics training program will be mandatory for all employees working with AI technologies.
8. Reporting and incident response
- Employees and external stakeholders can report AI-related concerns to [Compliance Email/Portal].
- An incident response team will investigate and address AI system failures or ethical breaches.
9. Enforcement and consequences
- Noncompliance with this AI governance policy may result in disciplinary action, including termination or legal consequences.
10. Contact information
- Please contact [AI Governance Team Email] for questions regarding this policy.
Implementing AI governance
Effective AI governance starts with an implementation strategy that unites stakeholders across the board as well as executive and governance teams. Here are the steps you can take to embed AI into your governance policies:
- Establish an AI governance framework: Define specific governing principles for AI. These may be the above ethical principles or others unique to your organization. Consider how these principles could align AI with other essential functions like IT, legal and risk management, and support your compliance efforts.
- Define leadership responsibilities: Assign specific roles and responsibilities to avoid duplication and reduce the chance that anything AI-related will fall through the cracks. For example, the chief technology officer leads AI development and deployment, the chief information officer implements data governance policies, and the chief risk officer conducts risk assessments. At the same time, legal counsel advises on AI compliance with local and international regulations.
- Implement key AI governance policies: Begin to roll out the pillars of your AI governance approach. This could include regular bias and fairness audits, implementing reporting mechanisms for AI decisions, assigning human oversight to high-risk systems and ensuring AI complies with data protection laws and regulations.
- Create an AI ethics and compliance committee: Though the board oversees AI governance activities, creating a committee with representatives from technology, legal, risk management, and leadership teams can make policies more rigorous. The committee can define a review process for any new AI developments and create AI training programs for employees and other stakeholders.
- Monitor, audit and improve AI governance: The chief risk officer should lead regular risk assessments using tools like the NIST AI Risk Management Framework to identify emerging threats. Establishing a centralized dashboard for real-time AI monitoring can also help identify risks before they cause harm (a simple drift check is sketched after this list). Reviewing AI ethics and compliance updates quarterly creates space to adapt governance policies, ensuring they keep pace with technology and the business landscape.
- Foster a culture of responsible AI: Train employees on AI ethics and responsible AI usage, engaging them in protecting your organization. Establishing clear AI whistleblower and reporting channels can also encourage mutual accountability, and AI ethics advisory boards can provide independent guidance to shape your organization’s broader point of view on AI.
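As a concrete illustration of the monitoring step above, here is a minimal sketch of a drift check that could feed such a dashboard: compare recent model outputs against a reference window and alert when the shift exceeds a threshold. The scores and threshold are invented for illustration.

```python
# Drift-check sketch: alert when recent model outputs shift away from a
# reference window. Window sizes, scores and threshold are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def drift_score(reference, recent):
    """Shift in mean output between windows, normalized by reference spread."""
    spread = (max(reference) - min(reference)) or 1.0
    return abs(mean(recent) - mean(reference)) / spread

def check_drift(reference, recent, threshold=1.0):
    score = drift_score(reference, recent)
    return {"drift_score": round(score, 3), "alert": score > threshold}

baseline_scores = [0.42, 0.47, 0.45, 0.51, 0.44, 0.48]  # outputs at launch
latest_scores = [0.61, 0.66, 0.58, 0.70, 0.64, 0.67]    # outputs this week
print(check_drift(baseline_scores, latest_scores))
# The recent window has shifted well above the baseline, so an alert fires.
```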
Navigate AI Governance Complexity
Join our comprehensive webinar series to learn how to leverage AI to advance your governance strategies.
Discover more

AI governance best practices
Effective AI governance goes beyond ethical principles and requires structured policies, operational controls and continuous monitoring. Organizations working toward best-in-class AI governance should consider the following practices:
- Define success: What does it look like to succeed with AI? Paint a clear picture among the board and executives about a positive future with AI, then craft the policies and procedures that will help you realize it.
- Establish metrics: The future you envision should also be measurable. Consider how you can evaluate the efficacy of your AI governance program, both quantitatively and qualitatively. This could include fairness and bias metrics, the share of AI outputs that can be explained to users, how effectively AI models adhere to specific regulations and more. (A simple scorecard sketch follows this list.)
- Craft governance policies for every stage of the AI lifecycle: AI may need different management at different stages. Consider how your governance framework can specifically address the risks and opportunities AI presents during development, testing and validation, deployment, monitoring and auditing.
- Assign clear roles for AI governance: Making specific individuals responsible for specific aspects of AI systems strengthens accountability. For example, the chief risk officer should answer for AI risk mitigation, a compliance officer for regulatory compliance and legal counsel for legal exposure. Keeping humans abreast of critical AI decisions ensures AI remains in your control.
- Create robust AI incident responses: AI development, deployment and usage won’t always go according to plan. Establish fast-acting responses to model failures, security breaches, ethical concerns and other high-risk scenarios. Building user feedback loops into your AI approach can also help identify harms proactively.
- Foster cross-functional and external collaboration: Diverse perspectives on AI can strengthen and safeguard your approach. Engage regulators, industry experts and others — as feasible — and ensure your AI governance teams represent a breadth of expertise across your organization. The chief risk officer may surface an issue the chief technology officer hadn’t considered; this back-and-forth will ultimately strengthen your AI governance.
- Promote AI literacy: The most well-written AI governance policy can fall flat if employees aren’t prepared to uphold it. Conduct AI ethics and governance training for AI developers and end users and create AI transparency reports to communicate the impact of their efforts. The more you can engage employees in the responsible use of AI, the better.
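To show what measurable governance success might look like, here is a minimal scorecard sketch that rolls a few illustrative metrics into one view a board could track over time. The metric names, values and targets are assumptions to replace with your organization’s own.

```python
# Governance-scorecard sketch. Every metric, value and target below is a
# placeholder to swap for your organization's real measurements.

GOVERNANCE_METRICS = [
    # (name, current value, target, higher_is_better)
    ("models_with_completed_bias_audit_pct", 78.0, 100.0, True),
    ("high_risk_systems_with_human_oversight_pct", 100.0, 100.0, True),
    ("mean_days_to_close_ai_incidents", 12.0, 7.0, False),
    ("staff_completed_ai_ethics_training_pct", 64.0, 90.0, True),
]

def scorecard(metrics):
    for name, value, target, higher_is_better in metrics:
        on_track = value >= target if higher_is_better else value <= target
        status = "on track" if on_track else "needs attention"
        print(f"{name:<45} {value:>6.1f} (target {target:g})  {status}")

scorecard(GOVERNANCE_METRICS)
```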
Govern AI Ethics responsibly
Join Diligent Institute's AI Ethics & Board Oversight Certification to navigate the complex landscape of ethics and compliance surrounding AI with confidence and integrity.
Discover more

Future-proof AI governance
AI governance is in its infancy. Yet, getting it right from the start is essential to empowering responsible, ethical innovation.

Stay ahead of AI risks. Diligent’s EU AI Act Toolkit ensures your organization adheres to the EU’s comprehensive AI regulations, respects fundamental rights and maintains safety. For practical tips on integrating AI, discover our AI Unleashed report, which is packed with expert insights for boosting efficiency, mitigating risks, and gaining a competitive edge.
Related resources

AI is here. AI regulations are on the way. Is your board ready?
Your legal team is in the perfect position to help prepare your organization for new AI regulations. Here’s how — and how technology can help.

Harnessing AI’s power, and assuming the responsibility, with Diligent’s Nonie Dalton
We asked Nonie Dalton, VP of product management, to share what her team is working on and thinking about in terms of AI development and governance.

Using AI for enhanced decision-making: 9 innovative ways to boost board efficiency and effectiveness
After grappling with artificial intelligence from a governance perspective, have you thought about using this transformative technology in your own daily activities — automation to streamline administrative tasks, AI to facilitate better decision-making and so forth?

Top corporate governance trends for 2025 & beyond
Corporate governance is an ever-changing topic that has substantially increased in importance over the years. Discover the top corporate governance trends for 2025 and beyond.