The Colorado AI Act: What you need to know now
![People discover more about the Colorado AI Act](/_next/image?url=https%3A%2F%2Fcdn.sanity.io%2Fimages%2F33u1mixi%2Fproduction%2F789b3b8d03d670bb7874510de668e1215d177fe9-7992x5331.jpg&w=2048&q=75)
On May 17, 2024, lawmakers in Colorado established the Colorado AI Act, making the state a front-runner in shaping governance around artificial intelligence (AI) alongside California and New York. The new legislation will take effect on February 1, 2026.
How can board members, general counsel, CISOs and other senior leaders make the most of the intervening months? Guidance follows, starting with critical context on the Act’s purpose and objectives.
The Colorado AI Act vs. the EU AI Act
For busy tech and leadership teams eager to streamline their AI governance efforts, one initial question may be: Will policies and practices developed to comply with the EU AI Act satisfy the new requirements on the other side of the Atlantic?
Yes and no.
The purpose of the Colorado AI Act is broadly aligned with many governance frameworks emerging worldwide. It aims to protect consumers from potential harm caused by AI systems. Like the EU AI Act, central focus areas are risk, transparency and accountability, with obligations for deployers and providers alike.
The two frameworks do diverge, however, when it comes to the details. While the EU AI Act outlines four levels of AI risk across a wide range of areas, the Colorado AI Act concentrates its regulatory effect on “high-risk AI systems.” These are defined as any system that makes or is a substantial factor in a “consequential decision” — that is, decisions exerting a significant impact on education, employment, essential government services, healthcare, housing, insurance and legal services.
Within this purview, the Colorado AI Act narrows its focus even further to “algorithmic discrimination,” where use of a high-risk AI system results in “unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status or other classification protected under [Colorado] or federal law.”
Another important parameter of the law is its scope of application: individuals, corporations and other legal or commercial entities “doing business in Colorado.” While the Act does not define this exact phrase, international law firm White & Case suggests it applies to parties “who solicit business and receive orders from Colorado residents by any means,” adding that the firm’s lawyers “expect the phrasing to be interpreted broadly.”
Get the full picture
Stay ahead of regulation with our global AI regulation guide. Learn how countries are shaping AI's future.
Read the guide

Requirements for developers and users of AI systems
In keeping with the Act’s objectives of ensuring transparency and accountability, parties who’ve developed or “intentionally and substantially modified” a high-risk AI system will be required to:
- Thoroughly test AI systems for bias
- Document what data is used to train each AI system
- Describe known risks of discrimination and actions taken to address these risks
- Share this documentation with users
For their part, organizations deploying these high-risk AI systems must implement, and regularly review and update, a risk management policy that aligns with recognized standards, including the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the International Organization for Standardization’s ISO/IEC 42001.
They will also be required to complete an annual impact assessment detailing:
- The system’s purpose, intended use, deployment context and benefits
- The system’s data output and the data it uses for input
- An analysis of the system’s risks related to algorithmic discrimination
- Performance metrics
- Transparency measures, including post-deployment monitoring and user safeguards
Deployers can enlist a third party to complete these assessments, and they may combine assessments covering similar systems, or assessments required under multiple laws, into a single undertaking. They must retain their findings for at least three years.
When the AI system contributes to a negative or adverse decision, the Colorado AI Act requires that the system’s deployer send affected individuals an accessible, “plain language” notice. The notice must:
- Disclose how the system contributed to the decision, including the types of data it processed and where the data were sourced
- Provide an opportunity to correct inaccurate personal data that factored into the decision
- Provide an opportunity to appeal the adverse decision, with time for human review
Colorado AI Act monitoring, enforcement and exemptions
The Colorado AI Act includes detailed documentation requirements, with which organizations must be prepared to demonstrate compliance upon request from the office of the Colorado Attorney General.
Violations are considered unfair trade practices under Colorado state statute, with punishments including fines or injunctive relief.
When do these rules not apply? The Colorado AI Act notes several exceptions, including for businesses with fewer than 50 employees that don’t use their own data to train AI systems, entities already subject to strict federal standards, federally contracted projects, and research not used in high-risk areas like employment and housing.
Getting your business ready for what’s next
The Colorado AI Act should be high on your compliance agenda, despite the 14-month window before it takes effect and even if your company does not currently do business in Colorado. The law is most important for its role as a major precedent: “The Act represents the first comprehensive AI legislation in the U.S., and other states are likely to follow suit if the federal government cannot move quickly to pass a comprehensive nationwide AI bill,” White & Case notes.
Get started by:
- Thoroughly reviewing AI systems, policies and practices
- Adopting risk management frameworks based on NIST and ISO guidelines
- Documenting the data used to train AI systems and measures taken to prevent discrimination, in compliance-ready comprehensive detail
- Continuously testing AI systems for bias and proactively correcting any issues you detect
Stay ahead with the latest in AI oversight, compliance and more
Another important next step in preparing for the Colorado AI Act and other evolving AI legislation: educating practitioners, directors and leadership across your organization. Set clear expectations, treating training sessions as business-as-usual for employees involved in AI development and deployment. Equip those involved in big-picture decisions with the skills to capitalize on valuable opportunities while applying responsible, strategic oversight.
Purpose-built resources can improve the quality of this work, such as the AI learning track in our Education & Templates Library in the Diligent One Platform. It offers an introduction to AI along with educational material on risk management, ethics, governance, board oversight and more.
Diligent AI Ethics & Board Oversight Certification offers a more comprehensive decision-making toolkit for AI governance and oversight. Developed in partnership with the Volkov Law Group, its four constituent courses — 15 hours in total — cover the basics of AI ethics and compliance programs, different governance and oversight models, team collaboration and other areas of understanding that are essential for long-term AI strategy and compliance.
Learn more about making AI governance part of your board’s continuing education agenda, for the Colorado AI Act and beyond. Schedule a demo today.