AI Policy in Colorado
Executive Summary
In 2024, Colorado enacted Senate Bill 24-205, the first comprehensive state law regulating artificial intelligence. The Governor signed the bill “with reservations,” citing concerns about the scope and potential economic impact of the compliance regime. This report reviews recent legal action involving artificial intelligence, summarizes Colorado’s regulatory framework and subsequent legislative activity, examines AI regulation in other states, at the federal level, and in the European Union, and reviews relevant policy and academic research. The report concludes with considerations for Colorado as lawmakers revisit AI regulation during the 2026 legislative session.
Legal Action
The development and deployment of frontier AI models has led to the enactment of state laws to regulate AI and litigation across the United States. These cases rely primarily on existing legal frameworks, including civil rights, employment, antitrust, consumer protection, and tort law. Courts and regulators are addressing issues such as algorithmic discrimination, AI-driven pricing practices, and consumer harms associated with AI platforms.
Colorado’s AI Legislation
Senate Bill 24-205 establishes a regulatory framework for “high-risk” artificial intelligence systems used to make “consequential decisions” in areas including employment, housing, education, credit, insurance, and government benefits. Colorado’s approach differs materially from other state frameworks in that it regulates both developers and deployers, with the most extensive and operationally complex requirements imposed on deployers.
Deployers, defined as any entity doing business in Colorado that uses a high-risk AI system, are subject to a broad set of ongoing reporting, documentation, and disclosure obligations. The law requires deployers to implement risk-management programs, conduct recurring impact assessments, maintain records, provide consumer disclosures, publish public-facing statements, and report instances of algorithmic discrimination to the Attorney General. Developers must provide documentation regarding training data, foreseeable risks, system goals, and risk governance, and update that documentation when systems are materially modified. A limited exemption applies to certain small deployers under specified conditions.
In 2025, legislators introduced SB 25-318 to narrow and delay SB 24-205, but the bill did not advance. Following a special session, SB 25B-004 was enacted, delaying implementation of SB 24-205 until June 30, 2026, to allow additional legislative consideration during the 2026 session.
Concerns Regarding Colorado’s Bill
SB 24-205 has generated significant stakeholder concern. Key issues include the breadth and ambiguity of statutory definitions, potential liability for unintentional discrimination, the cost and operational complexity of compliance, and the breadth of organizations it applies to. Critics also have raised concerns about impacts on innovation and competitiveness. Much of the concern about SB 24-205 centers around requirements for deployers of high-risk AI systems in the state, which could include any organization in the private, nonprofit, or government sectors. The Governor convened a second AI working group in October 2025, encouraging participants to consider alternative regulatory approaches, including California’s frontier-focused model.
AI Regulatory Actions Outside Colorado
At the federal level, Congress has not enacted comprehensive AI legislation. Executive orders have established policy direction but reflect shifting priorities across administrations. A December 2025 executive order explicitly addresses state AI laws, including Colorado’s, and advances a federal policy framework aimed at limiting regulatory fragmentation.
Internationally, the European Union Artificial Intelligence Act establishes a risk-based regulatory framework with tiered obligations and significant penalties for noncompliance. Because the Act applies extraterritorially, many U.S.-based companies are expected to adopt EU-aligned compliance practices.
At the state level, dozens of states have passed bills that regulate artificial intelligence practices, and approaches vary significantly. Colorado’s framework is one of the most expansive, while other states focus more narrowly on frontier models or consumer-facing disclosures.
AI Regulatory Research and Models
Academic and policy research on artificial intelligence governance generally emphasizes risk-based and adaptive regulatory approaches, rather than broad, static frameworks. The literature highlights several recurring themes, including the importance of clear definitions and thresholds, alignment with industry standards and existing legal frameworks to reduce duplicative compliance, and the use of transparency, auditing, and post-deployment monitoring mechanisms, often supported by safe harbors and whistleblower protections. Research also notes that existing common-law tort and civil rights doctrines continue to play a role in shaping incentives for responsible AI development.
With respect to algorithmic discrimination, the research identifies disparate impact doctrine as particularly relevant, while emphasizing that legal protections remain uneven across sectors and use cases.
Key Takeaways for Colorado
Clarify Scope and Definitions
Narrow and clarify statutory definitions, including “high-risk” systems and “consequential decisions,” and consider explicit risk tiers to better align obligations with actual impact.
Adopt Adaptive Governance
Emphasize post-deployment monitoring and create mechanisms to update requirements as AI technologies evolve.
Reduce Compliance Friction
Align transparency and reporting requirements with industry standards and provide safe harbors for good-faith compliance.
Align Across Jurisdictions
Seek alignment with federal, state, and international frameworks to reduce fragmentation and improve interoperability.
Support Innovation
Pair regulatory requirements with technical and institutional support, particularly for smaller organizations.
Plan for Frontier AI Oversight
Begin developing safety, monitoring, and incident-response approaches for frontier and foundation models with growing downstream impact.
Overview
In 2024, Colorado passed the first bill of its kind regulating artificial intelligence, Senate Bill 24-205. The Governor signed the bill into law “with reservations,” expressing concerns in his signing statement[1] about the impact on industry of the compliance regime the bill requires. This report outlines legal action that has been taken against technology or AI companies, the action taken to regulate AI in Colorado, what has been done in other states, the EU, and at the federal level in the U.S., and provides insights from academic research regarding regulatory frameworks.
Legal Action
The recent advent of frontier AI models, the data and content used to train them, and their myriad uses have faced legal challenges in dozens of cases across the U.S. Policymakers are debating whether new legislation is needed to regulate AI or whether existing laws suffice. Below are a few examples of legal action taken to challenge the use or development of AI.
United States v. Meta Platforms, Inc.: In June 2022, the U.S. Attorney for the Southern District of New York sued Meta for discrimination in its use of an algorithm to advertise housing, alleging violations of the Fair Housing Act. The case was settled, with Meta required to pay a penalty and modify its algorithm to prevent future discrimination against protected classes.[2]
United States v. RealPage, Inc.: In August 2024, the Department of Justice challenged the real estate software company under federal antitrust law for using nonpublic, competitively sensitive data in its AI-driven rental price-setting products in ways that allegedly reduced competition among landlords and harmed renters. As part of a settlement and proposed final judgment, RealPage agreed to overhaul how its pricing software uses data, including restrictions on the use of certain nonpublic data in training and at runtime, limits on product features that automate pricing recommendations, and the appointment of an independent monitor to ensure compliance, without admitting wrongdoing.[3]
Derek L. Mobley v. Workday, Inc.: A complaint was filed in February 2023 against Workday alleging racial, age, and disability discrimination by the algorithm used to screen applicants in Workday’s software. The Equal Employment Opportunity Commission filed an amicus curiae brief in support of the plaintiff and in opposition to the defendant’s motion to dismiss in May 2024.[4] The case has since proceeded as a collective action and has not yet been decided.
Juliana Peralta, Deceased v. Character Technologies, Inc. et al.: In September 2025, the Social Media Victims Law Center filed a lawsuit in Colorado against the AI platform Character.AI after a young user took her own life following use of the platform. The complaint alleges that Character.AI designed a platform that promoted predatory chatbot technology targeting children. The case is ongoing.
Colorado’s AI Legislation
Several bills have been proposed and passed in Colorado over the last two years that regulate artificial intelligence.
Senate Bill 24-205 establishes a regulatory framework for “high-risk” artificial intelligence systems in Colorado by focusing on those used to make “consequential decisions,” with a limited exemption for deployers with fewer than 50 employees. Consequential decisions are defined as those regarding employment, housing, education, credit, insurance, or government benefits. The bill mandates that organizations deploying such systems conduct risk and impact assessments, document their processes, and implement mitigation strategies to address discrimination, bias, and other harms. The law also imposes transparency obligations, including user disclosures and record-keeping, and sets up enforcement mechanisms and liability provisions if AI systems produce harmful or unfair outcomes.
Additionally, the bill attempts to protect individuals’ civil rights by requiring developers and deployers of AI systems to safeguard against discriminatory outcomes. It aims to stimulate responsible innovation through built-in compliance requirements rather than simply banning systems. To support these goals, the bill sets implementation deadlines and allows for regulatory rulemaking by the designated state agency, ensuring the law remains adaptive to evolving AI technologies.
Senate Bill 25-318 was proposed during the 2025 regular session as an attempt to amend Senate Bill 24-205. This bill would have:
Specified existing laws that apply to algorithmic discrimination;
Delayed the regulatory requirements by almost a year;
Reduced developers’ disclosure requirements;
Exempted lower-risk uses of artificial intelligence and, in some cases, developers of those systems;
Repealed the 90-day impact assessment requirement;
Required additional information be disclosed to consumers regarding how consumers can correct personal information and delayed the implementation of the disclosure requirements by three months; and
Removed liability for inadvertent violations.
This bill failed after one of the sponsors moved to postpone it indefinitely due to insufficient support. This led the Governor to include artificial intelligence in the call for an August 2025 legislative special session. Although several iterations of AI regulation bills were proposed during the special session, only Senate Bill 25B-004 passed and was signed into law. This bill delays implementation of Senate Bill 24-205 from February 1, 2026, to June 30, 2026, giving policymakers the 2026 legislative session to find a resolution.
Concerns About Colorado’s Bill
Senate Bill 24-205 has attracted significant opposition. Concerns about the bill include[5a][5b]:
Holding companies responsible for unintentional discrimination and unclear safe harbor provisions;
The broad and vague language of the bill, including the definition of terms like “automated decision-making system”, “high risk”, and “consequential decisions”;
The time and cost of complying with the regulations from risk and impact assessments, documentation and reporting, and disclosure requirements; and
The breadth of the organizations it applies to, such as government entities and institutions of higher education.
Opponents of the bill have expressed more general concerns about hindering the AI and technology industry, losing competitiveness to other states, and the burden on businesses of all sizes to comply with the regulations. Proponents of the bill express a desire to ensure that consumers are protected from the potential harms that AI could cause.
An important distinction exists between AI regulation in Colorado, which covers both developers and deployers, and in California, which regulates only developers of large frontier AI systems. Deployers are defined in Senate Bill 24-205 as “a person doing business in this state that deploys (uses) a high-risk artificial intelligence system.” This differs from a developer, who “develops or intentionally and substantially modifies an artificial intelligence system.”
The bill requires developers to provide documentation to deployers on the types of data used to train the model, reasonably foreseeable risks associated with the model, and the goals and benefits of the model, as well as how risks are governed and managed by the developers. Developers would also be required to update this documentation each time the high-risk AI system is “intentionally and substantially modified,” creating ongoing reporting requirements.
Much of the concern about Senate Bill 24-205 centers around requirements for deployers of high-risk AI systems in the state. Deployers are businesses or organizations in any sector of the economy that use high-risk AI systems to make consequential decisions. This could include any organization in the private, nonprofit, or government sectors that uses AI to screen job applicants, for instance. The reporting requirements for deployers are also significant and include:
Risk Management Program: Deployers must create and maintain an iterative risk-management policy aligned with NIST/ISO frameworks to identify and mitigate risks of algorithmic discrimination throughout the system’s lifecycle.
Impact Assessments: Deployers must complete impact assessments before deployment, annually, and after major system modifications, documenting use cases, risks, data, metrics, transparency steps, and safeguards.
Recordkeeping: Deployers must retain all impact assessments and related documentation for three years after the system’s final deployment.
Annual AI Review: Deployers must conduct yearly reviews of each high-risk AI system to ensure it is not causing algorithmic discrimination.
Consumer Notice (Pre-Decision): Before a high-risk AI system makes a consequential decision, deployers must notify consumers, explain the system in plain language, and provide opt-out information.
Consumer Notice (Adverse Decision): If the decision is adverse, deployers must disclose the reasons, how AI contributed, and the data used, and provide opportunities for correction and human-review appeal.
AI Interaction Disclosure: Deployers must disclose when a consumer is interacting with an AI system unless it would be obvious to a reasonable person.
Public Website Statement: Deployers must publish and update a website statement summarizing their high-risk AI systems, risk-mitigation practices, and the nature and sources of data used.
Mandatory AG Reporting: Deployers must notify the Attorney General within 90 days if a high-risk AI system is found to have caused algorithmic discrimination.
AG Access to Records: Upon request, deployers must provide their risk-management program, impact assessments, or related records to the Attorney General within 90 days.
A small business exemption is provided for deployers with fewer than 50 employees who do not use their own training data and use systems only as intended. They are exempt from risk-management programs, impact assessments, and website disclosures.
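To make the structure of this exemption concrete, the sketch below expresses the three conditions described above as a simple eligibility check. It is illustrative only, not legal guidance; the field and function names are hypothetical rather than statutory terms.

```python
# Illustrative sketch only (not legal guidance): the small-deployer exemption
# conditions described above, expressed as a simple eligibility check.
# Field and function names are hypothetical, not statutory terms.
from dataclasses import dataclass


@dataclass
class Deployer:
    employee_count: int            # full-time employees
    uses_own_training_data: bool   # trains or fine-tunes the system with its own data
    uses_system_as_intended: bool  # uses the system only as the developer intended


def qualifies_for_small_deployer_exemption(d: Deployer) -> bool:
    """Exempt from the risk-management program, impact assessments, and
    website-statement requirements only if all three conditions hold."""
    return (
        d.employee_count < 50
        and not d.uses_own_training_data
        and d.uses_system_as_intended
    )


# Example: a 20-person nonprofit using an off-the-shelf screening tool as intended
print(qualifies_for_small_deployer_exemption(
    Deployer(employee_count=20, uses_own_training_data=False, uses_system_as_intended=True)
))  # True
```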
The Governor convened a second working group in October 2025 to work toward resolving the disagreements over the state’s approach to AI regulation, with the goal of introducing legislation during the 2026 legislative session. The convening letter for this working group encouraged participants to focus on the recently passed California legislation, Senate Bill 53.
AI Regulatory Actions
The adoption and impacts of AI broadly affect countries, governments, and workers. The International Monetary Fund (IMF) projects that 60% of jobs in advanced economies will be impacted by AI, while 40% of workers globally will feel some impact.[6] Aligning regulation across U.S. states would make compliance more achievable; a patchwork of differing requirements typically results in lower compliance rates.
United States Federal Government
Both Presidents Biden and Trump have issued executive orders regarding the use and governance of AI, but Congress has yet to pass legislation to govern it. President Biden’s executive order[7] lays out eight principles:
Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.
Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
The responsible development and use of AI require a commitment to supporting American workers.
Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.
The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
Americans’ privacy and civil liberties must be protected as AI continues advancing.
It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
President Trump’s executive order[8], which rescinded President Biden’s order and any provisions that had taken effect under it, sets the following goals:
Maintain global leadership in AI development by removing any barriers to its development;
Sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security;
Ensure AI systems are free from ideological bias or engineered social agendas; and
Direct the federal government to develop a plan within 180 days to achieve these goals.
On December 11, 2025, President Trump issued another executive order[9] specifically addressing the growing patchwork of state AI laws and regulations. Colorado’s law is called out: the order states that its algorithmic discrimination provisions “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups,” a claim the order does not substantiate. The order sets the policy goal of “United States’ global AI dominance through a minimally burdensome national policy framework for AI” and details a federal legislative recommendation that would preempt state laws, with exceptions for state laws addressing child safety protections, AI compute and data center infrastructure (other than general permitting reforms), and state government procurement and use of AI.
The executive order also:
Establishes an AI Litigation Task Force to challenge state laws that do not align with the policy goal established in this order.
Requires the Secretary of Commerce to evaluate state AI laws that conflict with this order, including those that “require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”
Restricts state eligibility for federal funding under the Broadband Equity, Access, and Deployment (BEAD) Program.
Orders executive departments and agencies to assess their discretionary grant programs to determine whether grants can be restricted to states complying with the policies outlined in this order.
European Union
In December 2023, the European Union reached agreement on the EU Artificial Intelligence Act after years of planning and debate. The Act took effect on August 1, 2024, with most provisions phasing in through 2026.[10] The EU took a risk-based approach to regulating AI, classifying AI systems into risk categories with associated compliance measures. The table below outlines the levels of risk, the types of AI systems associated with each, and the compliance requirements.
Table 1. European Union AI Act Risk Levels
| Risk Level | Description | AI System Examples | Compliance Requirements |
|---|---|---|---|
| Unacceptable Risk | AI systems considered a clear threat to safety, livelihoods, or fundamental rights. These are prohibited under the Act. | Social scoring by governments; manipulative systems exploiting vulnerabilities; certain real-time remote biometric identification in public spaces | Prohibited; may not be placed on the EU market |
| High Risk | AI systems that significantly affect people's lives or rights; often used in regulated sectors. | AI used in employment, education, credit, critical infrastructure, medical devices, and law enforcement | Conformity assessments, risk management, data governance, human oversight, and documentation before deployment |
| Limited Risk | AI systems that interact with humans or generate content where transparency is essential for trust. | Chatbots; AI-generated or manipulated content such as deepfakes | Transparency obligations, such as disclosing that users are interacting with AI or that content is AI-generated |
| Minimal or No Risk | AI systems that do not pose significant risk to users' rights or safety. | Spam filters; AI in video games | No additional obligations beyond existing law; voluntary codes of conduct encouraged |
The EU created an AI Office to regulate AI and enforce the Act. In addition to its regulatory functions, the office has developed action plans to encourage AI adoption by small and medium-sized enterprises across industries and to enhance the EU’s competitiveness in AI development.
Noncompliance with the Act may result in fines of up to €35 million or 7% of total global revenue for prohibited AI systems, or €15 million or 3% of total global revenue for high-risk system violations.[11] Any U.S. AI developer that wants to offer its platform or services in the EU, directly or through a third-party firm, will be required to comply with the Act. This is likely to result in U.S. firms implementing these practices globally for efficiency.
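As a rough illustration of how these caps scale with firm size, the sketch below computes the maximum exposure under each tier, assuming the applicable cap is the higher of the fixed amount and the revenue-based amount; the revenue figure used is hypothetical.

```python
# Illustrative arithmetic only: approximate maximum fines under the two EU AI Act
# penalty tiers described above, assuming the higher of the fixed cap and the
# revenue-based cap applies. The revenue figure below is hypothetical.

def max_fine(fixed_cap_eur: float, revenue_share: float, global_revenue_eur: float) -> float:
    """Return the larger of the fixed cap and the revenue-based cap."""
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)


revenue = 2_000_000_000  # hypothetical firm with €2 billion in global revenue
print(f"Prohibited-system cap: €{max_fine(35_000_000, 0.07, revenue):,.0f}")    # €140,000,000
print(f"High-risk violation cap: €{max_fine(15_000_000, 0.03, revenue):,.0f}")  # €60,000,000
```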
U.S. States
States have taken different approaches to regulation, with the focus ranging from risk-based regulation of frontier models to ensuring copyright protection. In 2024 and 2025, the National Conference of State Legislatures’ AI legislation database[12] recorded 1,262 bills considered across all states, with 192 of them enacted.
Colorado’s regulatory structure is one of the most sweeping, applying a sectoral risk-based model to government entities and all firms, with a partial exemption for deployers with fewer than 50 employees. California’s legislation focuses only on the largest frontier AI firms, while Utah and Texas take approaches similar to Colorado’s but far more limited in scope.[13] The table below outlines some of the primary characteristics of these select states’ approaches.
Table 2. AI Regulatory Approaches by State
| | Colorado SB 24-205 | California SB 53; SB 942; AB 2013 | Utah SB 149, 226, 232 | Texas HB 149 |
|---|---|---|---|---|
| Type of AI System Regulated | Automated decision-making systems making consequential decisions | Frontier models; generative AI systems or synthetic content | Consumer-facing generative AI systems or synthetic content | All covered AI systems for public sector and some private sector use (employment, healthcare, biometrics) |
| Size of Company | Partial exemption for deployers with fewer than 50 employees | Gross annual revenue of $500 million; over 1 million monthly users | None specified | None specified |
| Type of Company | Developer, Deployer | Developer | Deployer | Deployer, Developer, Distributor |
| Reporting Requirements | Documentation, impact assessments, and disclosure for covered systems | Provenance metadata, detection tools for generative content, transparency reporting, risk frameworks, chatbot disclosures | Disclosure of generative AI use to consumers in specific contexts (chatbots, mental health) | Disclosure and limits on specific categories (biometrics, explicit content) |
| Penalties and Enforcement Mechanism | Penalties for noncompliance and discriminatory outcomes; Attorney General | Civil penalties; CalCompute office (includes capacity building) | Consumer protection enforcement; new Office of AI Policy | Civil penalties and regulatory enforcement; Attorney General |
On December 9, 2025, the New York State Assembly sent the Responsible AI Safety and Education (RAISE) Act[14] to the Governor for signature. The RAISE Act applies only to large developers of frontier models, meaning companies that have trained at least one very high-compute AI model costing more than $5 million to train and that have spent more than $100 million on frontier models. Universities doing academic research are excluded. Large developers must meet safety, transparency, and testing requirements before deploying a frontier model, including creating and publishing a written Safety and Security Protocol, conducting model risk assessments, maintaining detailed testing documentation for at least five years, conducting annual third-party audits, and reporting any “safety incident” to the state within 72 hours. Developers must not deploy any frontier model that creates an “unreasonable risk of critical harm,” must implement safeguards to prevent such harm, and must provide unredacted materials to state officials upon request. Employees who identify substantial safety risks also receive whistleblower protections.
Violations of developer transparency, auditing, or deployment requirements could result in civil penalties of up to $10 million for a first violation and $30 million for subsequent violations. Violations of the employee-protection provisions carry penalties up to $10,000 per affected employee, payable to the harmed employee. The Act applies only to frontier models developed, deployed, or operating in whole or in part in New York State.
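To show how the Act’s coverage thresholds combine, the sketch below expresses them as a simple check. It is illustrative only; the parameter names are hypothetical rather than statutory definitions.

```python
# Illustrative sketch only: the RAISE Act coverage thresholds described above,
# expressed as a simple check. Parameter names are hypothetical, not statutory.

def is_covered_large_developer(costliest_model_training_usd: float,
                               total_frontier_spend_usd: float,
                               is_academic_research: bool) -> bool:
    """Covered if the developer trained at least one frontier model costing more
    than $5 million and has spent more than $100 million on frontier models;
    academic research is excluded."""
    if is_academic_research:
        return False
    return (costliest_model_training_usd > 5_000_000
            and total_frontier_spend_usd > 100_000_000)


# Example: a commercial lab with a $12M training run and $250M total frontier spend
print(is_covered_large_developer(12_000_000, 250_000_000, is_academic_research=False))  # True
```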
AI Regulatory Research & Models
Because large language models (LLMs) and frontier AI systems are recent developments, the literature on their governance and regulation is still emerging. The existing research focuses on two regulatory approaches: oversight and safety standards for frontier AI, and sectoral risk-based regulation, akin to what California and the EU have implemented, respectively. Research has also been conducted on legal liability surrounding AI platforms and algorithmic bias, as lawsuits on these issues pre-date the release of large language models and frontier AI platforms. Included here are reports that helped define, or are relevant to, existing policy frameworks.
Policy Research
California’s recently passed legislation, Senate Bill 53, takes a different approach than Colorado’s, focusing on very large frontier models as opposed to almost any AI system that makes a consequential decision regarding consumers’ lives (employment, healthcare, etc.). California commissioned a report[15] produced by scholars at the University of California system, Stanford University, the Carnegie Endowment for International Peace, and others. This report only addresses how to regulate and advance AI foundation models and does not address specific sectoral or risk-based approaches.
The report provides background on the current policy landscape, the regulation of other industries as case studies (the Internet, the tobacco industry, and the energy industry), and the lessons learned from initial attempts at mitigating potential harms caused by all three. The takeaways from these case studies, as well as others, are synthesized into several recommendations:
Governance structures should balance the benefits and risks of AI;
Early policy decisions can help set trajectories that shape the evolution of these systems;
Industry expertise can help policymakers establish clear transparency standards and independent ways to verify safety claims and risk assessments;
Greater transparency can support AI system accountability and competition and help build public trust;
Achieving transparency requires whistleblower protections, safe harbors for third-party evaluators, and sufficient public-facing information;
Requiring adverse event reporting will allow for the monitoring of AI system impacts and adaptation of regulations to the evolving risks; and
Setting thresholds for which firms or models are regulated aids implementation; thresholds should adapt over time to the evolving technology, and policymakers should monitor their efficacy and whether different thresholds should be selected.
More specific takeaways from the report apply directly to the policies enacted in Colorado. These include:
The authors highlight the potential problems associated with regulatory thresholds based on risk evaluations. Such thresholds can help manage safety risks, but choosing which risk evaluation framework to use is difficult. They urge alignment with industry consensus when selecting which risks to track and how to measure them. Because developers may be unaware of potential downstream impacts, a better policy option would be to monitor the market over time.
The report highlights the challenges of using developer thresholds, especially headcount, for regulation. AI firms may be able to develop large and impactful foundation models with few developers.
Existing regulatory frameworks could be used to regulate AI risks.
Ensure there is a plan of action for how to analyze and adapt policy to transparency disclosures.
Align regulations and transparency reporting with industry standard practices to avoid creating a dual (internal and external) reporting system.
Reporting requirements should account for freedom of speech, intellectual property, and trade secret protections.
Ensure regulations are clear and specific so the industry can thrive economically.
Frontier AI Regulation: Managing Emerging Risks to Public Safety
This 2023 report, authored by more than a dozen researchers from Google DeepMind, OpenAI, Microsoft, and academic institutions, lays out regulatory recommendations for frontier AI models. The report begins by making the case for regulating these models, arguing that self-regulation will not be sufficient to ensure public safety as the models rapidly evolve and that government intervention is needed. It also provides a list of recommendations and safety standards for policymakers to balance the risks and benefits.
Regulating frontier AI is difficult at this nascent stage of its development. The authors point out three challenges posed by these models:
The unexpected capabilities problem: dangerous capabilities can emerge quickly and unexpectedly.
The deployment safety problem: ensuring public safety after deployment is an ongoing challenge.
The proliferation problem: models can proliferate faster than regulation can keep pace.
Three facets are needed to address these challenges:
Safety standard setting processes that can help identify the appropriate requirements for frontier AI developers.
Reporting requirements that can be determined through setting standards for the appropriate level of disclosure and monitoring, as well as whistleblower protections to provide regulators insight into frontier AI development processes.
Mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models including enforcement by authorities and licensing bodies.
Despite the need for this oversight, the authors caution that regulation could inhibit innovation in the field. Any regulations implemented early on would need to be monitored and updated to keep up with advances in the technology.
Once standard-setting processes are in place, the authors recommend the following safety standards:
Risk assessments: Developers should conduct thorough pre-deployment assessments of dangerous capabilities and make deployment decisions based on assessed risk.
External oversight: Independent audits or expert review of models is recommended to improve accountability and unveil potential risks.
Dynamic monitoring: Post-deployment monitoring should be required so that if new capabilities or risks emerge, the model’s deployment status and safeguards can be adjusted.
Pro-innovation balancing: The recommendations attempt to balance safety and regulation with innovation.
Recommended regulations should align with the level of risk a system poses. An educational chatbot does not typically pose the same risk as a general-purpose assistant with human-level capabilities. The report lists the following questions as potential limitations and issues that had yet to be answered at the time of publication:
How should frontier AI be defined for regulatory purposes?
What is the potential danger from foundation AI models?
Will training advanced AI models continue to require significant resources?
How effectively can we anticipate and mitigate risks from frontier AI?
How can regulatory flight be avoided?
How can reductions in innovation be avoided?
How can the centralization of power in AI development be avoided?
How can it be ensured that government will not abuse its regulatory powers?
How can regulatory capture be avoided?
What is the appropriate regulatory body or agency?
How will the regulations relate to other AI regulation?
How do we move toward international cooperation?
A separate report compares regulatory approaches and explains that the European Union Artificial Intelligence Act (AIA) adopts a four-tier risk-based approach to AI regulation:
Unacceptable-risk systems (banned)
High-risk systems (strict regulation and conformity assessments)
Limited-risk systems (some transparency obligations)
Minimal-risk systems (largely unregulated beyond existing law)
In contrast, the U.S. regulatory landscape remains sectoral and fragmented. Executive orders provide guidance to federal agencies to identify AI risks, but there is no comprehensive federal law. In this light, the EU model prioritizes uniform, cross-sector regulation, while the U.S. model emphasizes innovation, minimal regulation, and sector-by-sector oversight.
The report’s proposed AI Integrative Risk-Based model and recommendations aim to bridge the gap between the two approaches and move toward global regulatory coherence. This risk-based model consists of five steps:
Identify Risk – catalog AI risks, such as bias, privacy, etc.
Sectoral Analysis – apply those risks to specific sectors to determine where to focus oversight.
Prioritize Risk – grade or tier the risks, similar to the EU’s four-tier model, in terms of severity and potential impact.
Sectoral Comparison – compare risk across sectors and identify common regulatory approaches or uniform standards.
Draft Responsive Regulation – design regulation tailored to the risk level, balancing public protection with innovation.
The report argues that this model would allow the U.S. and other jurisdictions to develop regulations that are risk-appropriate, sector-specific, and aligned with the EU’s approach, creating a streamlined regulatory framework.
This report and framework do not address accountability mechanisms.
Legal Research
One report reviews tort law, how it varies by state, and how it applies to legal cases involving artificial intelligence. Tort law, stemming from common law, is described as “jurisdictionally specific,” with precedents applying only within the jurisdiction that set them. Under existing law, AI developers are already subject to tort liability because tort law applies to any entity whose activities create risks of physical injury or property damage. Developers face exposure under negligence, product liability, and public nuisance theories, and this risk is greatest for companies that fail to follow industry-leading safety practices. Tort law therefore serves as an existing incentive structure that pushes developers to exercise caution in designing, testing, and deploying advanced AI systems to reduce the likelihood of harm and liability.
Despite this, courts have not yet fully clarified how existing doctrines apply to harms caused by third-party misuse of AI systems or how constitutional and statutory protections might limit liability. Policymakers could intervene by defining duties of care, specifying how best practices affect liability, adjusting causation standards, or clarifying responsibility for downstream misuse. Yet the common law’s flexibility is also a strength: it incorporates generations of experience managing safety risks and can adapt more readily than rigid statutes to fast-changing technologies. As AI advances, careful deliberation will be needed to determine whether legislative reforms should supplement or simply allow the evolving common law to shape the boundaries of developer liability.
Key recommendations drawn from the report include:
The AI industry should adopt standardized best practices to reduce harm and exposure to liability;
Policymakers at the state and federal levels could enact statutes or regulations to clarify guidelines for AI developers to reduce risk and promote transparency and documentation; and
Any statutory changes should supplement and reinforce, rather than displace, the existing common law.
The legal doctrine that will be key to preventing AI discrimination
Artificial intelligence systems are increasingly being used in hiring, lending, health care, public benefits, and criminal justice, and these systems have repeatedly shown the capacity to produce discriminatory outcomes. Algorithms trained on historical or biased data can intensify existing inequalities. Examples include facial recognition tools that misidentify people of color more frequently, hiring algorithms that favor applicants similar to past employees, and financial lending models that disproportionately reject certain racial groups. These risks are heightened because many modern AI systems are trained on very large datasets and operate in ways that are difficult to interpret, making it challenging to identify or correct the sources of bias.
U.S. discrimination law generally relies on two frameworks: disparate treatment, which focuses on intentional discrimination, and disparate impact, which focuses on policies or practices that disproportionately harm protected groups regardless of intent. Traditional anti-discrimination protections often require proof of intent, which is difficult to apply to algorithmic systems because AI models do not have conscious motives and often make decisions through opaque, complex processes. Disparate impact doctrine is thus better aligned with algorithmic bias because it allows plaintiffs to challenge harmful outcomes without having to prove the designer’s intent to discriminate.
However, disparate impact protections in the United States are limited and inconsistent across sectors. Some statutes, such as the Fair Housing Act and Title VII, provide clear pathways for disparate impact claims, while other areas including health care, public services, and emerging uses of AI have no comparable protection. Judicial decisions have also narrowed the doctrine in recent years, making enforcement more difficult. The article argues that Congress should adopt a federal disparate impact law that applies broadly to AI and automated decision making, includes a private right of action for individuals affected by discriminatory outcomes, and strengthens the capacity of enforcement agencies. Such a framework would help ensure that AI systems operate fairly and that individuals have meaningful recourse when automated tools create inequitable results.
A final report reviews how algorithmic discrimination takes several distinct but related forms, reflecting the ways automated decision systems may inadvertently introduce bias into their output. The literature commonly identifies five main types:
Bias by algorithmic agents arises when systems are trained on historical data that reflect past discrimination or are designed by humans whose assumptions shape outcomes.
Discrimination based on feature selection occurs when designers choose variables that systematically disadvantage certain groups, even if those variables appear neutral.
Proxy discrimination involves the use of indirect indicators, such as ZIP code or browsing behavior, that correlate strongly with protected characteristics like race or income.
Disparate impact refers to facially neutral algorithms that produce disproportionately harmful outcomes for protected groups without explicit intent.
Targeted advertising and pricing discrimination occurs when algorithms segment users and selectively show opportunities, information, or prices in ways that exclude or burden specific populations.
To address these risks, regulators have developed a mix of legal and policy approaches rather than a single framework. In the U.S., existing civil rights and consumer protection laws are applied to algorithmic systems through equal protection and anti-discrimination doctrines. Sector-specific rules, such as those governing employment, credit, housing, and education, provide more specific frameworks. Preventive measures focus on algorithmic audits, impact assessments, transparency requirements, and data governance rules. Consequential liability regimes rely on enforcement, holding organizations accountable when algorithmic decisions result in discriminatory harm through disparate impact frameworks under civil rights law.
In addition, self-regulation, which includes industry standards, professional codes of ethics, and voluntary commitments to fairness, transparency, and auditability, is beginning to play a larger role. Combining public oversight with private compliance, as reflected in guidance from the EEOC and FTC, provides some accountability for algorithmic outcomes. Other jurisdictions, particularly the European Union and Canada, place greater emphasis on proactive obligations such as mandatory risk assessments, rights related to automated decision making, and stronger transparency requirements. Together, these regulatory measures aim to balance innovation with fairness by combining legal liability, preventive oversight, and ongoing monitoring of algorithmic systems.
Takeaways for Colorado
Based on the research, what has been done in other states, and the concerns over Colorado’s initial AI regulation, below are several considerations for the next steps in the AI policy debate. An evidence-based framework can create balance between the level of regulation and the need to protect consumers.
1. Clarify the Scope, Risk, and Definitions
Colorado’s AI law is more expansive and ambiguous than those of most other U.S. states. Terms such as “automated decision-making system,” “consequential decisions,” and “high-risk” need clearer definitions to avoid overregulation and legal uncertainty. Lessons from the EU AI Act and California’s SB 53 suggest that risk-based classification should be explicitly tiered (e.g., unacceptable, high, limited, minimal risk) to focus compliance on higher-impact systems while reducing requirements for low-risk models or platforms. This can help reduce compliance costs and align Colorado’s framework with emerging national and global standards.
2. Build an Adaptive Governance Structure
As the Frontier AI Regulation report recommends, Colorado should design a dynamic oversight system instead of static compliance. This includes requiring post-deployment monitoring, risk updates, and external audits for high-risk systems while maintaining flexibility to adapt to technological advances. Establishing a state AI oversight office or advisory board could enable continuous technical input, data collection, and rule adjustments without requiring new legislation each time the technology evolves.
3. Encourage Transparency and Safe Harbors Through Practical Reporting Standards
Colorado’s current disclosure and documentation requirements are viewed as onerous, particularly for small firms or organizations. Similar to California, the state could create safe harbor provisions for good-faith compliance and align reporting requirements with industry-standard frameworks. Whistleblower protections, independent audits, and consumer notification standards could strengthen accountability while preserving innovation and trade secrets.
4. Align with Other Governments’ Standards
Colorado could seek alignment with U.S. executive orders or California’s regulatory regime and could work with federal agencies and other states to pilot risk and compliance tools, which would help local organizations comply with both U.S. and international requirements.
5. Support Innovation and Small Business Capacity
Colorado could create an oversight and innovation organization similar to the EU’s AI Office or California’s CalCompute to support capacity building alongside regulation, ensuring innovation continues while consumers are protected.
6. Plan for Frontier AI Oversight
While Colorado’s bill focuses on consequential decisions, frontier and foundation models are likely to have increasing downstream impact. The Frontier AI Regulation framework recommends establishing safety standards, pre-deployment risk assessments, and monitoring systems for these models. Colorado could begin exploring that framework to ensure state-level readiness for rapidly evolving AI systems.