Author

Government Relations

Topic

  • Artificial Intelligence

On May 17, 2024, Colorado Governor Jared Polis signed into law SB 205, known as the Colorado Artificial Intelligence (AI) Act. The first comprehensive AI law of its type in the U.S., the Colorado AI Act takes effect February 1, 2026. Changes to the law are still possible before the enforcement date, as the Colorado legislature reportedly intends to study and possibly revise the bill.

The Colorado AI Act is at its core anti-discrimination legislation, focusing on bias and discrimination caused by AI in the context of a consequential decision. It creates duties for those developing and deploying AI systems to use reasonable care to avoid algorithmic discrimination. The new law regulates “developers” (i.e., entities or individuals who create or substantially modify AI systems) and “deployers” (i.e., entities or individuals who use AI systems to make decisions or assist in decision-making) who develop or deploy “high-risk” AI systems. An AI system is considered “high-risk” if it “makes, or is a substantial factor in making, a consequential decision.” In turn, a “consequential decision” is any decision that can significantly impact an individual’s legal or economic interests, such as decisions related to employment, housing, credit, lending, educational enrollment, legal services, and insurance. The law specifically excludes certain technologies, such as cybersecurity technologies and spam filtering, from the definition of high-risk AI systems when they are not making, or a substantial factor in making, consequential decisions.

The Colorado AI Act mandates that developers and deployers explain how they prevent algorithmic bias. This statement can be posted to a website or “in a public use case inventory” and must include a summary of how the entity manages risks of algorithmic discrimination that may arise from the development, “intentional and substantial modification,” or deployment of covered AI systems. The law applies to certain AI systems and covers the entire process, from development through any major changes to how the AI is ultimately used.

Developers of high-risk AI systems have several specific obligations under the law, including:

  • Making available a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system.
  • Providing documentation disclosing things such as a high-level summary of the type of data used to train the high-risk AI system and known or reasonably foreseeable limitations of the system.
  • Documenting and issuing information describing things such as how the system was evaluated for performance and mitigation of algorithmic discrimination and the intended outputs of the high-risk system.
  • Supplying additional documentation reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the system for risks of algorithmic discrimination.
  • Furnishing documentation and information necessary for a deployer to complete an impact assessment.
  • Making available an algorithmic discrimination statement on its website or in a public use case inventory that includes:
      • A statement summarizing the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer.
      • Information on how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise.

Deployers of high-risk AI systems have obligations under the law, including:

  • Implementing a risk management policy and program to govern their deployment of a high-risk artificial intelligence system (the requirements of which are outlined in the bill).
  • Completing an impact assessment for the high-risk artificial intelligence system or contract with a third party to complete that assessment (the requirements of which are outlined in the bill).
  • Notifying consumers if the deployer uses a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, and providing the consumer with a statement disclosing information such as the purpose of the system and the nature of the consequential decision and, if applicable, information regarding the right to opt out of profiling under the Colorado Privacy Act.
  • In the event the high-risk artificial intelligence system is used to make a consequential decision that is adverse to the consumer, providing the consumer with certain information regarding that decision and an opportunity to appeal it, which must, if technically feasible, allow for human review.
  • Making available on their websites a statement summarizing information such as the types of high-risk artificial intelligence systems that are currently deployed by the deployer and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination.

Both developers and deployers are required to disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of a high-risk AI system. This disclosure is mandatory and must occur within 90 days after a developer or deployer: (1) discovers that the system has been deployed and has caused or is likely to have caused algorithmic discrimination; or (2) receives a credible report indicating such an occurrence.

The law vests enforcement authority with the Colorado Attorney General’s office, which has the ability to promulgate regulations to implement the law. The law explicitly does not contain a private right of action. Violations of the Colorado AI Act are treated as violations of Colorado’s general consumer protection statute, which provides for a maximum civil penalty of $20,000 for each consumer or transaction involved. In any enforcement action brought by the Attorney General’s office, there is an affirmative defense if the developer, deployer, or other person discovers and cures the violation and is otherwise in compliance with NIST’s Artificial Intelligence Risk Management Framework, another nationally or internationally recognized risk management framework for artificial intelligence, or a risk management framework designated by the Attorney General.

Worth noting, a similar bill to the Colorado AI Act passed the Connecticut Senate but was blocked from a vote in the House after Governor Ned Lamont indicated that he would veto the legislation.

Questions or want more information about the Colorado AI Act? Please contact Alison Pepper, 4As EVP of Government Relations & Sustainability.

Visit the 4As AI Hub for More