Jay Cameron
Senior Manager, Product Marketing

De-risking business: What GRC professionals need to know about emerging AI regulations

January 3, 2024

The world’s first legal framework for AI is here: the EU’s proposed Artificial Intelligence Act.

Given the steep penalties for non-compliance—fines of up to 35 million euros or 7% of an organization’s global annual turnover, whichever is higher—top executives and leaders have been paying attention. What do their organizations need to know? What next steps should they take to prepare?

“It’s not like it’s the Wild West out there,” says Renee Murphy, Distinguished Evangelist at Diligent. “All those regulations you’re tracking now related to your privacy and your security? They apply to your AI. All of the stuff that you’re doing related to risk management and strategic risk? That applies to your AI as well.”

Risk-based rules of the road

According to Murphy, “AI is just a faster way of doing business, and it requires a lot more oversight than you probably are giving it.”

The AI Act responds to this reality by providing a regulatory framework with risk management at its heart. It classifies AI systems based on their level of potential harm: from minimal-to-no risk, to high risk, to prohibited applications.
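To make those tiers concrete, here is a minimal sketch in Python of how a GRC team might catalogue its AI systems by risk tier. The tier names, classes and example entries are illustrative assumptions for this sketch, not terminology or a schema taken from the Act itself.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers mirroring the Act's classification; the names
# and structure are assumptions for this sketch, not the Act's own text.
class RiskTier(Enum):
    MINIMAL = "minimal-to-no risk"
    HIGH = "high risk"
    PROHIBITED = "prohibited application"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory entries a GRC team might maintain.
inventory = [
    AISystem("spam-filter", "email triage", RiskTier.MINIMAL),
    AISystem("cv-screener", "candidate shortlisting", RiskTier.HIGH),
]

# High-risk systems trigger the Act's additional obligations.
for system in inventory:
    if system.tier is RiskTier.HIGH:
        print(f"{system.name}: needs risk management, human oversight and conformity assessment")
```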

High-risk systems require a host of risk management processes, including the use of relevant and representative data for training, validation and testing, plus human oversight and assurances of robustness, accuracy and cybersecurity.

To further support risk management, AI systems must be designed to perform reliably for their intended purpose. Systems must meet conformity requirements, and throughout a system’s life, providers must actively and systematically collect, document and analyze relevant data on reliability, performance and safety.

Transparency is emphasized throughout. Organizations must label “deep fakes,” notify people when they’re interacting with an AI system like a chatbot, and let people know if emotion recognition or biometric categorization systems are being applied to them. The providers of AI systems must also maintain up-to-date technical documentation, register the system in the EU’s database, and monitor the system after it has entered the market.

Parallel guidance on the other side of the Atlantic

While AI policy in the United States is still moving from recommended guidance to official mandate, the overall direction is similar, with risk management at its core.

The White House has issued a Blueprint for an AI Bill of Rights and a handbook for putting the following five principles into practice:

• Safe and effective systems

• Algorithmic discrimination protections

• Data privacy

• Notice and explanation

• Human alternatives, consideration and fallback

Similar to the EU AI Act, a White House Executive Order on AI mentions testing, evaluations and performance monitoring to ensure that systems are functioning as intended, plus labeling and content provenance for identifying AI-generated content.

White House guardrails also dovetail with EU regulations regarding banned AI applications. For example, just as the AI Act prohibits AI that uses subliminal manipulation, like sonic tools that push truck drivers beyond the point of exhaustion, the White House Executive Order makes a similar statement: AI should not worsen the quality of jobs or otherwise disrupt the workforce.

Caveats for corporations

While much of the AI Act covers the providers of AI systems, companies that use these tools have responsibilities and obligations as well.

For starters, the AI Act requires users of AI systems to operate these tools as intended and follow instructions as they do so. Human oversight is required throughout. Users must also monitor operations for possible risks and inform the provider or distributor if the system malfunctions or if there’s a serious incident.
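As a rough illustration of that monitoring duty, the sketch below records an operational incident and flags when the provider or distributor must be informed. The record fields and escalation logic are hypothetical, not a reporting schema defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical incident record for a deployed AI system; the fields are
# illustrative assumptions, not a schema mandated by the AI Act.
@dataclass
class AIIncident:
    system_name: str
    description: str
    serious: bool  # serious incidents must be escalated to the provider
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_incident(incident: AIIncident) -> None:
    # Users monitor operations and escalate malfunctions or serious incidents.
    if incident.serious:
        print(f"[ESCALATE] Inform provider/distributor: {incident.system_name}: {incident.description}")
    else:
        print(f"[LOG] Record for internal review: {incident.system_name}: {incident.description}")

handle_incident(AIIncident("support-chatbot", "model returned unsafe advice", serious=True))
```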

And some uses are prohibited outright, like social scoring, or the use of generative AI applications with certain data, such as sensitive personal information, or in certain industries like health care.

“If you were hoping to use ChatGPT-style technology to make your job easier, you might not be legally allowed to do so,” reporters Saqib Shah and Mary-Ann Russon wrote in the Standard’s coverage of the issue.

Cautions for mission-driven organizations 

Organizations in public education, the nonprofit space and local government will need to exercise similar diligence in their AI use. 

Are you exploring emotion recognition tools for student learning and progress? Are you using social scoring to home in on ideal donors, or biometric categorization systems to target outreach by race, sexual orientation, or political or religious views?

All could be considered banned applications under the EU’s new rules due to the potential risk. And while the act includes exceptions for law enforcement, building facial recognition databases through the untargeted scraping of facial images from CCTV footage or the internet is also forbidden, which is something local governments and public safety agencies need to keep in mind.

A broad purview — and tight deadline

The AI Act is industry-agnostic. It applies to all sectors, and its purview extends across the AI value chain.

Do you manufacture or provide AI systems? Are you an importer, distributor or authorized representative in the commercial value chain? Will the system or its output be used in the EU?

Like the GDPR before it, the AI Act can have an extraterritorial effect on US companies.

Time is of the essence, so it’s critical for all organizations to start digesting these new regulations and to put their own oversight and governance in place.

After the EU Parliament and Council formally adopt the act, organizations will have six to 36 months to transition into compliance, depending on their risk level, with industries such as insurance and banking considered higher risk.

Stay ahead of fast-changing regulations with a centralized platform

AI regulations are moving fast, and the technology itself is evolving every day. To keep up and stay ahead of risks, you need a consolidated view of governance, risk and compliance across your organization.

The Diligent One platform centralizes your GRC data for a unified perspective on risks and impactful insights that guide better decision-making.

See how Diligent One can help you ensure AI compliance and streamline your risk management processes. Schedule a demo today.
