Phil Lim
Director, Product Management

Harnessing the potential of AI: 4 key focus areas for boards

May 27, 2024

After decades in which artificial intelligence (AI) seemed to exist largely in the realm of science fiction, it exploded into the corporate consciousness in 2022 with the launch of generative AI tools such as ChatGPT and DALL·E. The immediate frenzy of citizen experimentation highlighted both the immense potential and the current limitations of such tools.

Two years on, businesses are exploring realistic applications of AI in their own operations. At this important stage, boards need to stay up to speed with a rapidly changing landscape if they are to cut through the hype and help their businesses safely reap the benefits of AI without falling prey to its risks.

Here are four key areas that boards should understand:

1. AI has innate features that challenge adoption and regulatory approaches

In a recent webinar chaired by the UK Chartered Governance Institute, Chris Burt, Principal at Halex Consulting Ltd, summed up the two key features of AI that make it powerful enough to solve decades-old scientific challenges, but also dangerous enough to represent risks to human rights, safety, fairness and privacy.

The first is its adaptivity. Because AI systems are built on machine learning, it can be difficult to explain the logic or intent behind a system’s outputs. AI’s processing and analytical power far exceeds that of humans, but that same superhuman efficiency means it is not always easy to look at the outcomes an AI generates and understand its chain of reasoning. This is sometimes called the ‘black box problem’ and it can undermine trust in AI-powered decisions.
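
To make the black box problem concrete, here is a minimal sketch (a hypothetical illustration assuming scikit-learn, not something discussed in the webinar) that trains an opaque model and uses permutation importance to estimate which inputs drive its predictions. The probe reveals which features matter, but not why the model combined them as it did:

```python
# Minimal sketch of the 'black box problem': the model predicts accurately,
# but its internal logic is not directly readable. Permutation importance is
# an after-the-fact probe: it estimates which inputs matter, not why.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real business inputs (an assumption).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```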

The second challenge is AI’s autonomy, which makes it difficult to assign accountability for the system's actions.

These innate AI features, plus the inherently borderless nature of technology solutions, pose challenges for companies and regulators. As Burt neatly expressed it: “Adaptivity plus autonomy equals complexity.”

To date, there is widespread agreement that AI use should be regulated, but little consensus on what approach to take.

The U.K. is not currently planning new regulations around AI, instead opting for a principles-based approach focused on outcomes and monitored via existing sector regulators. The aim is to foster the flexibility to adapt as the technology changes. The five key principles are:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

In Europe, the EU AI Act was adopted in March 2024 and will be fully applicable by 2026. It establishes different rules for different AI risk levels, with a particular focus on transparency requirements.

The U.S. approach is currently focused at an individual industry level, which carries the risk of producing siloed regulatory structures.

At this stage, boards should be aware that the regulatory landscape is evolving and ensure that it receives sufficient focus within the relevant business divisions.

2. Assessing AI competitiveness is not straightforward

One thing boards are likely keen to determine is their competitive position on AI. However, this is not easy. The hype around the technology means many companies are rushing “AI-powered” solutions to market that may in fact rely on simpler forms of machine learning and robotic process automation. There is little visibility into what businesses are doing to integrate AI into business processes and strategic planning, making direct comparisons difficult. Furthermore, a wave of citizen experimentation across business divisions and among third-party suppliers can lead to AI being introduced into the business in an ungoverned way.

Boards should examine their governance processes around AI to address the risks of unsanctioned use, asking: “Do we have policies on AI use? Is AI embedded in our third-party risk assessments?” AI should be embedded into existing legal, security, third-party risk and compliance procedures, not considered separately.
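
As a purely illustrative sketch (the tool names, register and policy rules below are hypothetical assumptions, not a Diligent feature or anything prescribed in the webinar), such a policy can be codified so that AI tools are checked against an approved register inside existing approval workflows:

```python
# Hypothetical sketch: encoding an AI acceptable-use policy as a register
# check, so AI governance sits inside existing approval workflows rather
# than in a separate process. All names and rules here are illustrative.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    vendor_risk_assessed: bool    # covered by third-party risk assessment?
    handles_sensitive_data: bool

# Illustrative register of tools already cleared by legal and security.
APPROVED_REGISTER = {"internal-copilot", "vendor-x-summariser"}

def is_use_permitted(tool: AITool) -> bool:
    """Apply the policy: approved tools only, and any tool handling
    sensitive data must have a completed vendor risk assessment."""
    if tool.name not in APPROVED_REGISTER:
        return False
    if tool.handles_sensitive_data and not tool.vendor_risk_assessed:
        return False
    return True

print(is_use_permitted(AITool("public-chatbot", False, True)))   # False: unapproved
print(is_use_permitted(AITool("internal-copilot", True, True)))  # True
```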

Brunel University’s Ashley Braganza notes that several self-audit tools are emerging that allow organisations to assess their own AI readiness. He also notes that AI adoption faces the same challenges as any other technology.

“The actual technology implementation is often the easiest bit. Much harder is strategically exploiting the technology when you need to change the culture, processes and structure of the business.” — Ashley Braganza, Dean, Brunel Business School

Burt takes a different perspective on assessing AI competitiveness, asking whether organisations should focus instead on using AI in the right way for their own business. He notes that “businesses often seek psychological safety in being ‘middle of the pack,’” which is where benchmarking tools can be helpful, but he believes the biggest challenge facing most organisations is the quality of their data. Without robust data on which to train AI for proprietary use cases, businesses lack the foundations they need to really leverage the technology. Boards should therefore consider their data strategy an integral component of their AI strategy.
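
To illustrate why data quality is foundational, the sketch below (column names and thresholds are illustrative assumptions) shows the kind of baseline completeness and duplication checks that would gate any proprietary AI training effort:

```python
# Hypothetical sketch: baseline data-quality checks that gate AI training.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise completeness and duplication for a candidate training set."""
    return {
        "rows": len(df),
        "missing_pct": float(df.isna().mean().mean() * 100),   # overall missingness
        "duplicate_pct": float(df.duplicated().mean() * 100),  # exact duplicate rows
    }

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "revenue": [100.0, None, 250.0, 250.0],
})
report = data_quality_report(df)
print(report)

# A simple gate: do not train proprietary models on data that fails thresholds.
if report["missing_pct"] > 5 or report["duplicate_pct"] > 1:
    print("Data fails the quality gate; remediate before training.")
```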

3. AI skills are not necessarily a board composition requirement — yet

Gone are the days when directors’ primary experience was in the financial sector. Board skillsets have diversified to include ESG and cybersecurity, so should boards be seeking directors with AI skills?

Not necessarily, or perhaps, not yet. There is not a huge pool of director-level personnel with advanced AI experience in a business context — it’s simply too new. A more realistic approach might be to ensure that the board has access to expertise, and that directors and senior management receive regular training and updates on the evolution of the sector.

Burt suggests that boards apply the same established methodologies they use for other strategic planning activities, such as horizon scanning and scenario analysis, to assess how AI could impact the organisation. These exercises need to include consideration of how the business will manage ethical issues such as the potential for bias.

In future, directors with AI skills may serve a board well, but their scarcity means it is unlikely to be practical to recruit them in the near term.

4. AI introduces additional cybersecurity risks

The security of data and systems is an important risk area that the board should incorporate into risk management from two different perspectives.

First, the business must ensure AI use is governed responsibly, to prevent, for example, employees from uploading sensitive company data to public AI tools such as ChatGPT. The ease of adoption of publicly available AI tools means organisations may have a significant shadow AI estate that employees are using without guidance or protection.
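
One concrete control for this first perspective is to redact sensitive identifiers from prompts before they reach a public AI tool. The sketch below uses hypothetical, deliberately simplistic patterns; it is an illustration, not a complete data loss prevention solution:

```python
# Hypothetical sketch: redact sensitive identifiers from a prompt before it
# is sent to a public AI tool. Patterns are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # rough card-number shape
    "PROJECT": re.compile(r"\bProject\s+[A-Z][a-z]+\b"),  # internal codenames
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with labelled placeholders before upload."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarise Project Falcon results and email jane.doe@corp.com"))
# -> Summarise [PROJECT REDACTED] results and email [EMAIL REDACTED]
```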

Second, the business needs to be aware that bad actors are equally adept at using AI tools to create sophisticated phishing campaigns using deepfake visual and audio information. As a result, external cybersecurity risks are rising and should be managed accordingly.

To say that AI and its regulation are fast-moving areas is undoubtedly an understatement. Boards need to ensure they have the knowledge they need to guide the business as it adapts to the risks and capitalises on the opportunities AI offers.

Watch the replay of the CGI UK & I webinar here.

And to learn more about how Diligent can support a dynamic GRC environment that moves at the pace of AI, visit our website.
