What is SB24-205?
In the 2024 legislative session, Colorado passed a first-in-the-nation law regulating artificial intelligence. What exactly is in the law? Why is it so controversial? Why is it so hard to change?
These questions are all too complicated for a simple post, but over the coming weeks, I’ll be exploring them. This exploration will take us on a deep dive into not only what is unique about the Colorado law but also what it looks like to build these AI systems: what is and is not technically feasible, what some of the existing regulations are, and how Colorado extends them.
Today, we’ll provide a high-level overview of the law and discuss some of the controversy surrounding it.
SB24-205 In the Details
SB24-205 (the Act) aims to ensure that an artificial intelligence system does not discriminate against protected classes, including traits such as racial identity, gender, or disability status. The bill defines several key terms:
- an artificial intelligence system is a machine-based system that, given inputs and an objective, can infer outputs such as content, decisions, predictions, or recommendations.
- a consequential decision is one that materially affects education, employment, financial or lending services, essential government services, health care services, housing, insurance, or legal services.
- a high-risk artificial intelligence system is an artificial intelligence system that makes, or is a substantial factor in making, a consequential decision.
- a deployer is a person who deploys a high-risk artificial intelligence system.
- a developer is a person who develops or substantially modifies a high-risk artificial intelligence system.
The Act carves out several exclusions from the high-risk category, such as everyday uses of artificial intelligence like fraud detection and cybersecurity. It also explicitly excludes more general software, such as databases, spell checkers, and computer networking.
For high-risk artificial intelligence systems, the Act requires the developer to provide deployers with information about the potential biases of the artificial intelligence system. These disclosures include:
- how the artificial intelligence system was evaluated for potential bias (a minimal sketch of one such check follows this list).
- what data governance exists for the training data used to develop the artificial intelligence system.
- what the intended outputs are, and how the system should be used.
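To make the first of these disclosures concrete, here is a minimal, purely illustrative sketch of one bias check a developer might document: the widely used “four-fifths” disparate-impact ratio applied to a hypothetical hiring model’s decisions. The data, column names, and 0.8 threshold are assumptions for the example, not anything the Act prescribes.

```python
# Illustrative disparate-impact check for a hypothetical hiring model.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of positive decisions within each group."""
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(df, group_col, decision_col)
    return rates.min() / rates.max()

# Hypothetical model outputs: 1 = recommended to advance, 0 = not recommended.
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advance": [1,   1,   0,   1,   1,   0,   0,   0],
})

print(selection_rates(decisions, "group", "advance"))
ratio = disparate_impact_ratio(decisions, "group", "advance")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the commonly cited four-fifths threshold
    print("Potential adverse impact; investigate before deployment.")
```

A real evaluation would cover far more than a single ratio, but even a toy check like this illustrates the kind of evidence a deployer could reasonably expect a developer to hand over.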
The Act also requires developers to disclose known risks of algorithmic discrimination to the Attorney General. Developers must also adopt a risk management framework, such as the NIST AI Risk Management Framework, to monitor their systems for post-deployment behavior that may be discriminatory.
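Post-deployment monitoring can follow the same idea on live traffic: recompute group selection rates over a rolling window of production decisions and flag large disparities. A minimal sketch, with the window size, threshold, and data layout invented for illustration:

```python
# Illustrative post-deployment monitor for a hypothetical decision system.
from collections import deque

WINDOW = 500        # number of recent decisions to consider
ALERT_RATIO = 0.8   # flag if min/max group selection rate drops below this

recent = deque(maxlen=WINDOW)  # holds (group, decision) pairs

def record_decision(group: str, advanced: bool) -> None:
    """Log one production decision and check the rolling disparity."""
    recent.append((group, 1 if advanced else 0))
    counts: dict[str, list[int]] = {}
    for g, d in recent:
        total_pos = counts.setdefault(g, [0, 0])
        total_pos[0] += 1
        total_pos[1] += d
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    if len(rates) >= 2 and max(rates.values()) > 0:
        ratio = min(rates.values()) / max(rates.values())
        if ratio < ALERT_RATIO:
            print(f"ALERT: rolling disparate-impact ratio {ratio:.2f} below {ALERT_RATIO}")
```

In practice this would feed a logging or alerting pipeline rather than a print statement, but it shows that the kind of monitoring a framework like the NIST AI RMF calls for can start from very ordinary engineering.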
The Act also requires disclosure to a consumer that they are interacting with an artificial intelligence system, unless it is obvious to the consumer that they are interacting with one.
Enforcement falls to the Attorney General, who regulates deployers and developers whose systems discriminate by bringing civil actions; only the Attorney General can bring these actions. It is an affirmative defense that the developer or deployer either cures the violation once it is discovered or maintains an adequate artificial intelligence risk management program.
Criticisms
Broadly speaking, the concerns surrounding the Act fall into three areas: explainability, excessive bureaucracy, and the division of responsibility between developers and deployers.
One of the key concerns, but also a key motivating factor behind the Act, is the role of explainability in modern artificial intelligence systems. While some systems, based on statistical models and other classical techniques, have solid methods for explaining the factors that influence a particular decision or recommendation, systems based on neural networks, including Large Language Models (LLMs) like ChatGPT, do not. Explainability gets even more complex when the input is not the traditional data we might store in a spreadsheet or a database but is audiovisual, as with radiological models that attempt to diagnose cancers, or with more sophisticated models that interact directly with job applicants as an avatar.
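To see the gap concretely, compare a linear model, whose per-feature coefficients read directly as an explanation, with a neural network, whose fitted weights do not. A minimal sketch using scikit-learn and an invented dataset (the feature names and data are assumptions for illustration only):

```python
# Contrast in explainability: logistic regression vs. a small neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = ["years_experience", "credit_score", "num_prior_claims"]
X = rng.normal(size=(200, 3))
# Synthetic labels driven by a known combination of the features.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

# Logistic regression: each coefficient states how a feature pushes the decision.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(features, linear.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")

# Neural network: the weights exist, but no single number explains why a
# particular applicant was accepted or rejected.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)
print("MLP weight matrix shapes:", [w.shape for w in mlp.coefs_])
```

Post-hoc techniques such as SHAP or LIME can estimate feature importance for the neural network, but they produce approximations rather than the direct accounting the linear model gives, and they become far harder to apply to images, audio, or free-form conversation.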
Developers worry that, rather than validating a model’s effectiveness, they will need to spend excessive time demonstrating that the model is unbiased. Broadly, the Act’s requirements are achievable but may be new to many developers. Developers of financial services, banking, and insurance applications have existing regulations to guide their treatment of bias, such as Colorado’s SB21-169 or the Fair Housing Act. Developers of artificial intelligence systems in health care may be familiar with the FDA’s Software as a Medical Device standards, though those standards do not explicitly require testing for algorithmic bias. Developers of artificial intelligence systems focused on hiring or government services, however, may not be accustomed to working within a regulatory regime that requires bias detection.
Finally, as more systems incorporate general-purpose LLMs, the line between a deployer and a developer becomes increasingly blurry. If a developer uses natural language as the primary interface, who is responsible when a deployer can craft a discriminatory system through a set of prompts? And if a deployer only uses AI to access its own data in its preferred manner, but that interface allows any user to modify or use the system in a discriminatory way, will the disclosures actually be meaningful?