Uncover your AI governance gaps: How to run a ‘safe-to-fail’ AI exercise
![Entrepreneur giving presentation on interactive screen in meeting room](https://cdn.sanity.io/images/33u1mixi/production/7abf4b7ea7f49bf1e62bd4cab7f1f38ceea276fb-8192x5464.jpg)
AI is everywhere in modern business. From AI-powered chatbots handling customer queries to predictive analytics driving sales strategies, most organizations are adopting AI faster than you can say “algorithm.” But there’s a problem: many of these tools are being used with little to no governance in place.
According to a recent study by global law firm DLA Piper, 83% of companies today use AI tools in some shape or form, yet just 86% of those have adopted an AI code of ethics, leaving around 14% of organizations leveraging AI with no governance framework at all. In some jurisdictions the picture is even more alarming: over half of companies reportedly permit the use of AI without any policies in place to govern it.
For senior governance professionals like general counsel and corporate secretaries, C-suite leaders such as CIOs and CROs, and ultimately the board of directors, this isn't just a potential compliance headache; it's a risk landmine. Without oversight, AI tools can expose your organization to data breaches, algorithmic bias, and regulatory blowback from new legislation like the EU AI Act.
So, how do you bridge the gap? One sure-fire way to kickstart your AI governance strategy is to run a safe-to-fail AI tool assessment. This structured, low-risk experiment can help you:
- Take stock of where AI is already in use within your organization
- Uncover governance gaps and high-risk areas
- Set the stage for more robust oversight and policies that ensure compliance, accountability, and ethical AI use
Here’s an overview of how to run such an exercise, step by step:
Step 1: Map your AI landscape
Before you plan the exercise, take stock of where AI is already being used in your business. Chances are, more tools are in play than you think.
Conduct an AI audit
- Inventory all AI-powered tools: These could range from customer service chatbots to automated recruitment systems or tools used for financial forecasting.
- Assess governance status: For each tool, ask:
  - Who owns it?
  - Who has access to it?
  - How is it being used?
  - What risks (data privacy, bias, compliance) have been identified?
Example: Your marketing team may be using AI to generate content, while HR is screening candidates using AI-powered tools. Many of these may have been adopted independently, with little oversight or alignment with organizational policies.
Quick Tip: Use this audit to create a centralized record of all AI applications currently in use.
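To make this concrete, here's a minimal sketch of what such a centralized record might look like in code. The schema, field names, and sample entries are illustrative assumptions drawn from the audit questions above, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a centralized AI inventory (illustrative schema)."""
    name: str                      # e.g., "Marketing content generator"
    owner: str                     # accountable business owner
    users: list[str]               # teams or roles with access
    purpose: str                   # how the tool is actually used
    risks: list[str] = field(default_factory=list)   # e.g., "data privacy", "bias"
    governance_reviewed: bool = False                 # passed a governance review?

# Hypothetical entries reflecting the marketing and HR examples above
inventory = [
    AIToolRecord("Content generator", owner="Marketing", users=["Marketing"],
                 purpose="Draft campaign copy", risks=["IP", "accuracy"]),
    AIToolRecord("Candidate screener", owner="HR", users=["HR", "Recruiting"],
                 purpose="Rank applicants", risks=["bias", "data privacy"]),
]

# Surface tools that have never been through a governance review
ungoverned = [tool.name for tool in inventory if not tool.governance_reviewed]
print("Needs governance review:", ungoverned)
```

Even a shared spreadsheet works; the point is a single source of truth recording owner, access, purpose, and known risks for every tool.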
Step 2: Define the scope of your experiment
Your safe-to-fail exercise should target one specific AI use case. Think of it as a microcosm of your broader governance challenges.
- Start small: Choose an AI tool or application with limited business impact. For example, a tool that categorizes employee feedback or automates routine data entry tasks.
- Set objectives: What do you want to learn? Examples:
  - Does the tool comply with data privacy regulations?
  - Could its outputs be biased or inaccurate?
  - Are users equipped to oversee and interpret its outputs?
Goal: Use this exercise to uncover governance gaps that may apply across other AI tools.
Step 3: Bring together key stakeholders
AI governance is a team sport. You’ll need the right mix of technical, legal, and operational expertise to run a meaningful experiment.
- Governance professionals (e.g., General Counsel, Corporate Secretary): To ensure AI aligns with ethical standards, corporate policies, and emerging regulations.
- Chief Information Officer (CIO): To manage technical aspects like setting up a sandbox environment, evaluating system integration risks, and ensuring cybersecurity protocols.
- Chief Risk Officer (CRO): To assess potential operational, reputational, and regulatory risks and create mitigation strategies.
- Compliance and legal teams: To verify that tools meet data privacy laws and industry-specific regulations, while evaluating contractual obligations, intellectual property concerns, and liability risks.
- Business unit leaders: To provide insights on how AI is being used across teams and its real-world business impact.
Step 4: Run the experiment
Here’s how to test the AI tool safely and effectively:
- Create a sandbox environment: Test the AI in a controlled setting using anonymized or synthetic data to avoid real-world consequences.
- Monitor outputs: Analyze outputs for accuracy, bias, or any ethical concerns. For example, does the AI favor certain data patterns unfairly?
- Set guardrails: Clearly define what the AI can and cannot do during the experiment. For instance, ensure it does not make decisions without human review.
Example: If you’re testing an AI recruitment tool, analyze how it scores candidates and whether its recommendations show bias or reflect business priorities.
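As one hedged illustration of what that bias analysis might look like in the sandbox, the sketch below scores synthetic candidates with a stand-in model and applies the widely used "four-fifths rule" as a rough disparate-impact screen. The scoring function, groups, and threshold are all placeholders, and a real fairness audit would go much further:

```python
import random

random.seed(42)  # reproducible synthetic data

# Synthetic candidates only: no real personal data enters the sandbox
candidates = [
    {"group": random.choice(["A", "B"]), "score": random.uniform(0.0, 1.0)}
    for _ in range(1000)
]

THRESHOLD = 0.6  # hypothetical shortlisting cutoff

def selection_rate(group: str) -> float:
    """Share of a group's candidates the tool would shortlist."""
    members = [c for c in candidates if c["group"] == group]
    selected = [c for c in members if c["score"] >= THRESHOLD]
    return len(selected) / len(members) if members else 0.0

rates = {g: selection_rate(g) for g in ("A", "B")}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates by group: {rates}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")

# Four-fifths rule: a ratio below 0.8 is a common red flag worth escalating.
# Guardrail: flag for human review rather than letting the tool decide.
if impact_ratio < 0.8:
    print("Potential adverse impact detected: route to human review")
```

Note the guardrail at the end: the sandbox run only flags a concern and escalates to a person, in line with the principle that the tool should not make decisions without human review.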
Step 5: Analyze results and identify governance gaps
The experiment isn’t just about the tool — it’s about what it reveals about your governance approach. After the test, hold a debrief to answer:
- Were there any unexpected risks, such as bias or compliance concerns?
- Are the tool’s outputs transparent and explainable?
- What governance gaps exist in the procurement, use, or oversight of AI tools?
Use this step to document insights that can inform your broader AI governance framework.
Step 6: Build your AI governance framework
With lessons from the exercise, it’s time to go from reactive to proactive. Develop a framework that addresses:
- Accountability: Define ownership for AI tools and clarify who is responsible for their outputs.
- Policies: Establish guidelines for selecting, implementing, and monitoring AI tools.
- Risk management: Build processes to identify and address ethical, legal, and operational risks.
- Training: Equip employees to understand AI’s capabilities and limitations.
Quick Tip: Consider forming a cross-functional AI governance committee to oversee these efforts and update policies regularly.
Step 7: Scale and monitor
AI governance isn’t a one-off task. Use what you’ve learned to scale oversight across all AI tools and build ongoing monitoring processes.
- Regular audits: Conduct periodic reviews of all AI applications to ensure they remain compliant and effective (a simple overdue-review check is sketched below).
- Iterative updates: Adjust governance policies as new AI technologies and regulations emerge.
Example: As your AI use evolves, your governance framework should adapt to cover new risks, such as emerging privacy laws or advances in AI capabilities.
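As a small, hedged sketch of what that ongoing monitoring could look like in practice, the snippet below extends the hypothetical inventory idea from Step 1 with a last-reviewed date and flags tools overdue for re-audit. The 90-day cadence and the sample entries are assumptions; use whatever interval your risk profile and regulators demand:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed audit cadence; adjust to policy

# Hypothetical inventory: tool name -> date of last governance review
last_reviewed = {
    "Content generator": date(2025, 1, 15),
    "Candidate screener": date(2024, 9, 1),
    "Sales forecaster": date(2024, 6, 30),
}

today = date.today()
overdue = {
    name: (today - reviewed).days
    for name, reviewed in last_reviewed.items()
    if today - reviewed > REVIEW_INTERVAL
}

# Report the most overdue tools first
for name, days in sorted(overdue.items(), key=lambda item: -item[1]):
    print(f"{name}: last reviewed {days} days ago; schedule a re-audit")
```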
Final thoughts: From chaos to control
AI adoption is no longer a choice; it's an imperative for staying competitive. But without governance, it's fraught with risk. By running a safe-to-fail experiment, you can take control of your AI landscape, uncover governance gaps, and create a strategy that ensures accountability, compliance, and long-term success.
Whether you’re a senior GRC or legal professional, C-suite executive, or board director, your leadership is pivotal in building AI governance that safeguards trust, innovation, and value. As you take the first steps in running a safe-to-fail AI experiment and building smarter governance, remember that you don’t have to go it alone.
Drawing on years of hands-on experience with directors and GRC professionals — and backed by Diligent Institute's market-leading research — we’ve developed a comprehensive suite of best-practice educational resources to help you execute your experiment effectively.
Available exclusively through the Diligent One Platform, our Education & Templates Library includes:
- The AI Ethics and Board Oversight Certification — the most up-to-date and comprehensive AI governance qualification on the market.
- An AI Use Case Checklist to help boards and legal teams assess potential AI applications, identify risks, and evaluate mitigation measures.
- An AI Policy Template to serve as a foundation for governing AI use across your organization.
- A host of eLearning courses, templates, and instructional videos offering actionable strategies to help you govern confidently across a range of GRC topics, from cyber risk and climate leadership to everyday board and committee business.
Equip your organization with the tools it needs to confidently navigate the complexities of AI governance. With the right resources and guidance, you’ll move from experimentation to oversight with ease.
Get your AI governance starter kit
This essential starter kit provides the tools you need to lead AI policy discussions with confidence and conviction, including a SWOT analysis template and kick-off meeting guide.
Download for free