Elizabeth Gidez
Associate General Counsel

How legal departments can be champions of artificial intelligence and help customers navigate AI risk

April 5, 2024

In-house legal departments play a critical role in helping organizations navigate the increasingly complex world of artificial intelligence (AI) from both adoption and risk standpoints.

A company’s legal department is a key stakeholder in the exploration of adopting AI-powered technology within an organization. These technologies may be used as internal tools (to support legal, marketing, HR, sales or customer support units, to name a few), or within a company’s own products or services. A balance must be struck between the responsible adoption of this technology and the inherent risks of doing so.

The legal team’s role in AI oversight

As champions of AI, in-house legal teams can provide much-needed guidance and oversight regarding the adoption of these technologies within the organization. Legal teams must stay up to date on current regulatory guidance and frameworks, particularly as they relate to data privacy and data protection, intellectual property and security.

"In the fast-paced world of AI, organizations face a dual challenge: the risk of falling behind by moving too slowly and the hazards of hasty adoption. Legal and security departments are crucial in guiding organizations to strike a balance, ensuring that the integration of AI tools is innovative and secure. By including these departments as part of the AI adoption lifecycle, organizations can better navigate the complex landscape of regulatory compliance, ethical considerations, and security risks. This approach not only mitigates risks but also secures a competitive edge in utilizing AI for internal and external users." - Greg Kowalski, Senior Director of Engineering, Data and AI Platform, Diligent

By staying current with the evolving legal landscape surrounding AI, legal departments can help organizations avoid both legal disputes and regulatory penalties, which can be costly from both a monetary and reputational standpoint. Further, legal teams must work with various internal stakeholders, including security, privacy, compliance, product and IT teams, to determine the business’s risk appetite for adopting and implementing AI technology throughout the organization.

"I've really leaned on our legal team to help us decide how to react over both the quickly evolving regulatory aspects of AI, as well as the complex customer requests that come in regarding our orientation towards AI. Of course, I could've plugged these requests into ChatGPT, but as far as I know, ChatGPT doesn't give me privilege. Moreover, to get appropriate responses would require me to input a whole lot more context over our history, domain, and circumstance that the legal team has at their fingertips," shares Phil Lim, Director of Product Management, Analytics & AI at Diligent.

Legal departments also play a vital role in drafting, reviewing and negotiating contracts and policies related to AI technologies. These documents could relate to vendors and customers, as well as internal policies and procedures. Legal teams should work to provide a clear framework for an organization’s responsible adoption of AI.

Vetting AI vendors

A thorough process for vetting AI vendors is crucial to mitigating risk and ensuring AI solutions are deployed responsibly within an organization.

Legal departments must have strong due diligence checks in place when selecting and onboarding AI vendors to ensure they comply with privacy and legal requirements. This process starts with asking AI vendors important questions about their approach to AI and understanding whether their values are in line with those of the organization. This includes checking to what extent the vendor has access to the data that is input, and how that data is used and processed for training purposes.

AI vendors can be held accountable by negotiating clear contractual terms and conditions, implementing strict checks throughout the lifecycle of the contract and having processes in place to ensure regular evaluations of how the data is used. The terms and conditions to focus on include data security and confidentiality provisions, which outline the vendor’s handling of sensitive data, encryption protocols and data access controls. These provisions should also dictate the notification obligations the vendor must abide by in the case of a data breach. Further, it is imperative to clearly define Service Level Agreements (SLAs) during contract negotiations, as they delineate metrics such as accuracy rates for AI models and training data usage. Incorporating penalties or remedies for failing to meet SLA targets is essential to incentivize the AI vendor to uphold high performance standards.
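
To make the idea concrete, the sketch below shows how SLA targets like these could be tracked programmatically during a contract review. This is a minimal, hypothetical Python example; the field names, thresholds and penalty figure are assumptions for illustration, not terms from any actual contract.

```python
from dataclasses import dataclass

@dataclass
class AISla:
    """Hypothetical SLA terms negotiated with an AI vendor."""
    min_accuracy: float            # contractual floor for model accuracy
    max_breach_notice_hours: int   # deadline for notifying a data breach
    penalty_per_violation: float   # agreed remedy for each missed target

def check_sla_compliance(sla: AISla, measured_accuracy: float,
                         breach_notice_hours: int | None) -> list[str]:
    """Return the SLA violations found in this review period."""
    violations = []
    if measured_accuracy < sla.min_accuracy:
        violations.append(
            f"Accuracy {measured_accuracy:.1%} is below the contractual "
            f"floor of {sla.min_accuracy:.1%}")
    if (breach_notice_hours is not None
            and breach_notice_hours > sla.max_breach_notice_hours):
        violations.append(
            f"Breach notified after {breach_notice_hours}h; the contract "
            f"requires notice within {sla.max_breach_notice_hours}h")
    return violations

# Example review: the vendor reported 91% accuracy and a 96-hour breach notice.
sla = AISla(min_accuracy=0.95, max_breach_notice_hours=72,
            penalty_per_violation=10_000.0)
for violation in check_sla_compliance(sla, measured_accuracy=0.91,
                                      breach_notice_hours=96):
    print(violation)
```

Each violation found this way maps back to the penalty or remedy clause negotiated in the contract.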

Lastly, understanding the source of the vendor’s AI model and the type of training data the model uses is crucial to ensuring that the model is trained on accurate data.

Privacy and data accuracy are also crucial factors when vetting an AI vendor. It is important to check whether the vendor uses personal data as part of its training data and whether that data is anonymized. From a liability standpoint, organizations should avoid working with vendors that do not comply with applicable privacy regulations in their use and processing of personal information. Moreover, inquiring into what measures an AI vendor is taking to prevent breaches and ensure data safety is key to understanding whether the vendor is operating in line with the organization's data protection policies.

AI policy considerations & training

"Within the realm of AI, ethical and responsible practices vary. Recognizing these nuances and understanding the implications of each deployment is paramount in both technological and business value evaluations. Informed decisions pave the path to responsible innovation and risk mitigation, through proactive governance addressing emerging challenges and gaps in AI governance," explains Arthur Miyazaki, Director, BT Enterprise Architecture. Having an internal AI use policy that outlines the guidelines and procedures for the responsible integration and use of AI technologies helps to ensure compliance with regulations and maintain ethical principles when processing and inputting confidential information. The policy should regulate what type of data can be inputted when using an AI tool, especially when dealing with personal data or confidential and privileged business information.

Educating and training stakeholders on AI risk and regulation also goes a long way in managing reputational and legal risk. This involves understanding the level of AI use within an organization, the departments involved and how they are leveraging such technologies. The first step is to assess what AI-related knowledge gaps exist within the organization, then develop training materials tailored to each department’s needs. Providing workshops on data literacy and privacy awareness helps foster a culture of responsible AI use.

When vetting AI solutions offered by vendors, consider the following (a sketch of how to record the answers follows the list):

  • Has the AI vendor provided sufficient information on its approach to AI? Who at the company can use the data I input? What can they do with it? What principles are being used to create an AI product?
  • Is there a way to gauge the accuracy of the output?
  • How is the AI being trained? Which data is being used?
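
To make these answers auditable rather than ad hoc, the vendor’s responses can be captured in a structured due-diligence record. The sketch below is a minimal Python illustration with hypothetical field names; it simply encodes the checklist above so that unanswered items can be flagged before onboarding.

```python
from dataclasses import dataclass

@dataclass
class VendorDueDiligence:
    """Structured record of the vetting questions above (hypothetical fields)."""
    vendor_name: str
    ai_approach_documented: bool | None = None   # sufficient info on its approach to AI?
    data_access_scope: str | None = None         # who can use the input data, and how?
    output_accuracy_method: str | None = None    # how can output accuracy be gauged?
    training_data_sources: str | None = None     # which data is used to train the model?

    def open_items(self) -> list[str]:
        """Names of the questions the vendor has not yet answered."""
        return [name for name, value in vars(self).items()
                if name != "vendor_name" and value is None]

# Example: two questions answered, two still open before onboarding.
record = VendorDueDiligence(vendor_name="ExampleAI Inc.",
                            ai_approach_documented=True,
                            training_data_sources="Licensed corpora only")
print(record.open_items())  # ['data_access_scope', 'output_accuracy_method']
```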

When drafting internal AI policies, consider the following (an illustrative policy sketch follows the list):

  • What policies does my company have around the use of AI for my job duties?
  • What kinds of information can I put into an AI tool? Does this include confidential information? Personal data? Client data?
  • What commitments have we made to our customers about our use of their data?
  • Do our contractual positions need to change?
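
One lightweight way to operationalize such a policy is to encode its data-input rules so they can be checked before information is sent to an AI tool. The Python sketch below is purely illustrative; the data categories, tool classes and rules are assumptions, not any organization’s actual policy.

```python
# Hypothetical policy: which data categories may be entered into which
# class of AI tool. A default-deny stance covers anything unlisted.
AI_INPUT_POLICY = {
    "public":        {"approved_vendor_tool": True,  "consumer_tool": True},
    "internal":      {"approved_vendor_tool": True,  "consumer_tool": False},
    "confidential":  {"approved_vendor_tool": False, "consumer_tool": False},
    "personal_data": {"approved_vendor_tool": False, "consumer_tool": False},
}

def may_input(data_category: str, tool_class: str) -> bool:
    """Return True only if the policy explicitly permits this combination."""
    return AI_INPUT_POLICY.get(data_category, {}).get(tool_class, False)

assert may_input("public", "consumer_tool")
assert not may_input("personal_data", "approved_vendor_tool")
assert not may_input("unknown_category", "consumer_tool")  # default deny
```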

Legal teams as AI champions

In-house legal departments are well-positioned within an organization to provide guidance on the risks and considerations regarding the use of AI.

By drawing on the questions outlined above, legal teams can provide a framework of best practices for evaluating AI vendors and adopting AI technology.

Note: Maryam Khan, Paralegal at Diligent, contributed to this blog post.
