On May 17, Colorado Governor Jared Polis signed into law SB 205, known as the Colorado Artificial Intelligence (AI) Act, which takes effect February 1, 2026, and simultaneously released a signing statement contextualizing his approval of the bill. The Colorado legislature is also reportedly intending to study and possibly revise the bill before that date.
Colorado's law is the first state legislation to broadly address the use of AI, particularly its use in high-risk activities. It allows the Attorney General's office to promulgate regulations but does not contain a private right of action. The legislation originated from a bipartisan AI workgroup of about 30 state lawmakers organized by the Future of Privacy Forum. A similar bill passed the Connecticut Senate but was blocked from a vote in the House after Governor Ned Lamont indicated that he would veto the legislation.
The Colorado AI Act is at its core anti-discrimination legislation, focusing on bias and discrimination caused by AI in the context of a consequential decision. It creates duties for those developing and deploying AI systems to use reasonable care to avoid algorithmic discrimination. The law exempts entities covered under the Health Insurance Portability and Accountability Act (HIPAA) if they provide AI-generated recommendations that require a health care provider to take action to implement that recommendation.
The Colorado law has several specific new definitions, including:
- “Developer” is defined as a person doing business in Colorado that develops or intentionally and substantially modifies an artificial intelligence system.
- “Deployer” is defined as a person doing business in Colorado that deploys a high-risk artificial intelligence system.
- “Person” is defined as an individual, corporation, business trust, estate, trust, partnership, unincorporated association, or two or more thereof having a joint or common interest, or any other legal or commercial entity.
- “Artificial Intelligence System” is defined as any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
- “High-Risk Artificial Intelligence System” is defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision,” although it excludes several types of activities, such as anti-fraud technology that does not use facial recognition technology, anti-malware, data storage, databases, and artificial intelligence-enabled video games and chat features, so long as they do not make, and are not a substantial factor in making, a consequential decision.
- “Consequential Decision” is defined as “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.”
- “Algorithmic Discrimination” is defined as any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.
Under the new law, AI system developers have a general duty of care to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.”
The Colorado AI Act requires developers and deployers to post what is essentially an algorithmic discrimination statement on a website or “in a public use case inventory” that summarizes how the entity manages risks of algorithmic discrimination that may arise from the development, “intentional and substantial modification,” or deployment of covered AI systems.
Developers must make available to deployers or other developers of high-risk artificial intelligence systems:
- A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system.
- Documentation disclosing things such as a high-level summary of the type of data used to train the high-risk AI system and known or reasonably foreseeable limitations of the system.
- Documentation describing things such as how the system was evaluated for performance and mitigation of algorithmic discrimination and the intended outputs of the high-risk system.
- Any additional documentation reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the system for risks of algorithmic discrimination.
- Documentation and information necessary for a deployer to complete an impact assessment.
Developers must also make available an algorithmic discrimination statement on their websites or in a public use case inventory, including:
- A statement summarizing the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer.
- How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise.
Additionally, if a developer learns that its high-risk artificial intelligence system has been deployed and has caused, or is reasonably likely to have caused, algorithmic discrimination, it must disclose that to the Attorney General and all known deployers or other developers within 90 days of discovery.
Deployers are required to:
- Implement a risk management policy and program to govern their deployment of a high-risk artificial intelligence system (the requirements of which are outlined in the bill).
- Complete an impact assessment for the high-risk artificial intelligence system or contract with a third party to complete that assessment (the requirements of which are outlined in the bill).
- Notify consumers when the deployer uses a high-risk artificial intelligence system to make, or as a substantial factor in making, a consequential decision concerning a consumer, and provide the consumer with a statement disclosing information such as the purpose of the system, the nature of the consequential decision, and, if applicable, the right to opt out of profiling under the Colorado Privacy Act.
- If a high-risk artificial intelligence system is used to make a consequential decision that is adverse to the consumer, provide the consumer certain information regarding that decision and an opportunity to appeal it, which must, if technically feasible, allow for human review.
- Make available on their websites a statement summarizing information such as the types of high-risk artificial intelligence systems that are currently deployed by the deployer and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination.
- Like developers, notify the Attorney General within 90 days if the deployer discovers that a high-risk artificial intelligence system has caused algorithmic discrimination.
Violations of the Colorado AI Act are treated as violations of Colorado’s general consumer protection statute, which provides for a maximum civil penalty of $20,000 for each violation per consumer or transaction involved. In any enforcement action brought by the Attorney General’s office, a developer, deployer, or other person has an affirmative defense if it discovers and cures the violation and is otherwise in compliance with NIST’s Artificial Intelligence Risk Management Framework, another nationally or internationally recognized risk management framework for artificial intelligence, or a risk management framework designated by the Attorney General.
For questions or more information about the Colorado AI Act, please contact Jim Potter, CHC Executive Director, at jpotter@cohealthcom.org.