AI governance: Ex-Google and Microsoft business leaders share tips for boards
As you’re already well aware, artificial intelligence (AI) is rapidly reshaping industries worldwide, making responsible adoption, oversight, and governance more crucial than ever for businesses.
What is less commonly understood, however, is how boardrooms can confidently and effectively approach this modern governance challenge in such a new and constantly evolving field.
With this in mind, we recently invited an expert panel of AI business leaders to share their unique perspectives and actionable insights on some of the most pressing AI questions facing organizations today.
The discussion — summarized below — explored how boardrooms can harness the potential of AI while navigating its associated risks, emphasizing the importance of strategic experimentation, ethical governance and cultural adaptation. It also addressed the need for proactive risk management and staying ahead of regulatory changes to integrate AI responsibly.
This fascinating roundtable, which took place at the Diligent Elevate conference on September 10, 2024 in Houston, Texas, was chaired by Richard Barber, CEO of MindTech Group, and featured two respected panelists:
- Keith Enright, Partner, Gibson Dunn (ex-VP and Chief Privacy Officer, Google)
- Sophia Velastegui, Director and Committee Chair, BlackLine (ex-Chief AI Officer at Microsoft)
Here are the key highlights of their conversation.
What should boardrooms consider when evaluating AI’s potential value?
Keith Enright: “We are on the front end of something with massively disruptive consequences across every industry. The opportunity is very real, but so are the risks. It’s essential for boards to raise the temperature in the room for their C-suites, ensuring they engage in safe-to-fail experiments to understand both the potential and pitfalls of AI.”
Sophia Velastegui: “The biggest risk is doing nothing. AI is not a passing trend — it’s here to stay. Boards must ensure AI strategy aligns with business strategy, incorporating a framework for safe and ethical engagement.”
How can corporate leaders ensure AI governance aligns with ethical and operational standards?
Richard Barber: “Boards need to add AI to their risk register and begin by implementing policies that guide the use of AI. It’s essential to understand the data governance behind AI tools, whether developed in-house or outsourced.”
Sophia Velastegui: “Board leaders should create an environment where AI initiatives align with company values and ethical standards. This includes frameworks that encourage innovation while maintaining safety and oversight.”
What steps can boards take to manage AI risks and integrate guardrails?
Keith Enright: “Approach AI risk as you would other operational risks: with clear training, policies, and awareness programs. Ensure that AI use aligns with your organization’s broader strategy and communicate this vision from the top down.”
Sophia Velastegui: “Remember that implementing AI doesn’t guarantee perfect safety. An example from my time at Apple was introducing fingerprint sensors, which led to unexpected issues. This taught us that safe failures are part of progress, paving the way for future improvements.”
How do regulations influence AI strategy, and what should leaders prioritize?
Keith Enright: “Smart regulation is not just beneficial but necessary. It’s crucial that AI’s development is balanced with responsible oversight to avoid stifling innovation. This requires collaboration among technologists, policymakers, and global entities to ensure consistent, informed policy-making.”
Sophia Velastegui: "Regulations should create a safe space for innovation while ensuring ethical boundaries. Like early car regulations that spurred the growth of the auto industry, AI needs clear rules of engagement.”
What educational strategies can support future AI leaders?
Sophia Velastegui: “Education shouldn’t just be about learning existing technology but developing the ability to learn and adapt continuously. My own path — from semiconductors to AI — demonstrated that understanding complex topics and the capacity to keep learning are the most critical skills for adapting to technological change.”
Keith Enright: “Equipping students to navigate change confidently is essential. AI is amplifying human creativity and productivity, not replacing it. Students should be encouraged to understand their creative value, as technology alone can’t replicate that human element.”
How can the boardroom and executive leaders foster a company culture that integrates AI effectively?
Sophia Velastegui: “Leadership needs to demonstrate change through their actions, not just words. At Microsoft, we embraced ‘dogfooding’ — testing new technology within the company before a wider rollout. This allowed teams to understand AI’s potential firsthand, refine its applications, and align the tech with business goals.”
“Ultimately, we shifted from a ‘know-it-all’ culture to a ‘learn-it-all’ mindset to embrace AI. This cultural shift empowered us to be more adaptable and receptive to new ways of integrating AI into our operations.”
Richard Barber: “Change management principles, such as those in the Kotter model, can help ensure the transition to an AI-integrated culture. Building cross-functional teams that identify opportunities and monitor AI’s implementation is critical. Remember, responsible AI requires human oversight to verify outcomes and maintain trust.”
“Organizations need to channel the fear of change into productive innovation. This means adopting a mindset of ‘responsible boldness,’ encouraging teams to engage with AI responsibly while fostering a culture that isn’t paralyzed by anxiety.”
Finally, what would your advice be to companies that want to prepare for the future of AI without falling behind?
Sophia Velastegui: “Companies must actively engage with AI, even if that means starting with small, manageable projects. Building internal expertise and collaborating with external advisors can help bridge any capability gaps. The key is to keep moving forward.”
Keith Enright: “Even organizations that are highly capable, like Google, have faced challenges in navigating AI’s rapid changes. Showing grace to yourself and your teams during these uncertain times can foster a productive environment. Top talent will be attracted to projects that promise growth and relevance to core strategic goals.”
A huge thanks to our panelists for providing a comprehensive look at how boardrooms can proactively manage the complex terrain of AI. With the right blend of governance, culture change, and strategic experimentation, businesses can navigate this powerful technological shift while aligning with their core missions and values.
At Diligent, we provide a host of resources to help boards and executive leaders confront this governance challenge with confidence.
- Download our Guide to Demystifying AI, which unpacks AI risk management, ethical AI use and global AI regulations.
- Learn more about our AI Ethics & Board Oversight Certification.
- Discover our purpose-built, AI-powered board management software, Diligent Boards.
Discover more AI governance resources
Putting AI to work for better board management
Learn how cutting-edge AI is reshaping corporate governance, making it easier than ever to prepare, track performance and make data-driven decisions — before, during, and after your board and committee meetings.
Unleashing GenAI's full potential: A balancing act for boards
Explore how boards can effectively balance the opportunities and challenges presented by Generative AI within the corporate landscape.
Incorporating gen AI into business strategy
In this episode of The Corporate Director Podcast, we talk with Sophia Velastegui, a renowned advisor on AI business strategy with experiences at Microsoft, Google, and Apple.
Governing AI: Obligations, ownership and oversight
Discover key takeaways from our panel discussion on introducing AI tools into your business and establishing effective AI governance.