Governing AI: Obligations, ownership and oversight
Introducing artificial intelligence (AI) tools into your business is a major challenge, but establishing effective AI governance is a whole new story. Imminent regulations, such as the EU AI Act and its equivalents around the world, aim to put in place guardrails for safe and ethical AI use, but there’s a difference between achieving compliance and gaining a competitive advantage. Boards need to provide oversight and guidance that satisfies dual goals: empowering the company to maximise the potential of AI, while also ensuring its safe, secure and ethical deployment.
Governance should be seen as an enabler rather than a blocker in this scenario — the “good brakes” that allow companies to drive fast in their urgency to realise AI benefits. However, this is still a very new area for businesses and boards — as our recent seminar explored — and there are more questions than answers.
In the second seminar on this topic, John Davies, governance director at Beyond Governance, acknowledged this problem, saying: “The board will look for clarity on what [the business is] seeking to achieve, because it wants an objective that you can measure against — that’s what the board is about.”
The AI visibility challenge
Getting that clarity to inform objective-setting is not easy. AI adoption is fast-paced and — partly down to the easy accessibility of GenAI tools — many organisations lack visibility of how extensively AI is being used, and how.
Maria Axente, Head of AI, Public Policy and Ethics at PwC, believes that, before organisations talk about risk and regulation, the first step is a comprehensive discovery exercise to identify exactly what AI is in use, what it is expected to do, and whether it is successfully doing it.
She advises that businesses subsequently apply governance best practices to AI, focusing on key pillars: “The first is strategic, covering ethics and policy; the second is control, covering governance, risk and compliance; and the third is responsible practices, looking at bias, explainability, privacy and security.”
However, exercises such as establishing controls to mitigate risk and assigning accountability are academic if the business doesn’t have a complete inventory of what applications use AI, how they use it, and crucially, who owns it.
Maria notes that AI is currently, in most cases, created and implemented by engineers who aren’t tasked with risk management and compliance and don’t operate within a risk management culture. As such, they cannot be the accountable “owners” of AI in the organisation.
Legal team or CEO? Who should own AI?
The bottom-up way in which AI is often deployed in businesses also adds to the challenge of determining who “owns” it, or who should. Ultimately, of course, the buck stops with the board, but directors can feel behind the curve on a technology that is so new and powerful.
Caroline Cartellieri, Non-Executive Director and digital transformation specialist, compares it to the elevation of cybersecurity to a board concern, saying: “Today boards talk a lot about cybersecurity, but just add that to the power of X because now the risks are becoming so much bigger because nobody quite understands what GenAI does, what its capabilities are and how powerful it can be.”
Diligent’s Dale Waterman believes that the General Counsel has a key role to play on AI due to the ethical issues involved with the legislation: “We’re in this world of ethics, unlike other laws around cybersecurity and data protection where there is a lot of maturity, we’re in this new space where we are not quite sure who should own this. I think the General Counsel or Company Secretary is going to play an important role […] because you have to marry the ambition of the [AI] strategy and objectives with the reality of what governance will involve.”
PwC’s Maria Axente disagrees, believing that AI ownership must sit at CEO level because that is the only role with ultimate authority over the business. She explains that there’s a sense that “everyone and no one” should own AI, but the CEO is the only person in a position to challenge the status quo across every area, from governance, risk and compliance issues to more existential and human topics, such as how employees should be expected to work with AI.
Beyond Governance’s John Davies believes the CEO needs to advocate for AI to counter the board’s natural caution in the face of AI regulation: “We are hearing quite a lot of stick here and not much carrot […] maybe you need someone who’s able to flip that a bit as I worry that boards may focus too much on the risks of what can go wrong.”
Caroline agrees, adding: “It’s the culture set by the CEO and it is top down.” She notes that even with rules in place there can be breaches, citing the Post Office and Volkswagen, saying: “I’m not saying you don’t need rules and frameworks, but ultimately the buck stops with the CEO and with the culture in any organisation.”
Maria believes it is essential to look at who is in the room and has most influence over AI strategy. “You have to be able to look at your culture, the current structure, and say who is best placed and who else needs to be in the room. […] You advocate very much in terms of AI to have three lines of defence and […] an accountability structure that brings everyone together with checks and balances.” She continued: “Ultimately someone needs to orchestrate everyone to sing the same tunes and that’s the CEO.”
Dale emphasised the many roles that will be touched by AI accountability, citing the increased responsibilities many Chief Data Protection Officers are bearing, and how CISOs are becoming more closely involved in privacy programmes.
Diligent toolkit offers guidance as businesses integrate AI
As the robust debate in our seminar demonstrates, there is a lot of work to do to establish lines of responsibility and accountability that will enable effective board oversight of AI. It’s a fast-moving landscape and there is a lot to learn at all levels from directors to GCs and executive team members.
Diligent has a range of resources for customers and their employees to support AI learning and increase familiarity with the issues around AI integration and use. These include certifications for board directors and, at a more comprehensive level, our AI toolkit, which helps organisations assess where AI risk and compliance issues lie.