AI Regulations in the US: What You Need to Know in 2025

AI regulations in the US have undergone a significant transformation in 2025, creating both opportunities and challenges for businesses operating artificial intelligence systems. Unlike the EU, which adopted a single comprehensive statute in the EU AI Act, the United States has developed a multi-layered regulatory framework that combines federal executive orders with pioneering state legislation, such as the Colorado AI Act. This patchwork approach means organisations must navigate an increasingly intricate web of requirements that vary across jurisdictions.

As AI technology continues to reshape industries from healthcare to finance, regulatory bodies at both the federal and state levels are working to address emerging risks while maintaining American leadership in AI innovation. The stakes couldn’t be higher, with approximately 40% of Americans now using AI tools daily, and projections suggesting that 40% of jobs may be displaced or transformed by artificial intelligence. Achieving compliance is now a business imperative.

This detailed guide will help you understand the current regulatory environment, learn more about key compliance requirements, and prepare your organisation for the evolving world of AI governance. Whether you’re deploying generative AI systems, implementing automated decision systems, or simply using AI tools in your operations, understanding these regulations is crucial for sustainable business success.

Key Takeaways

The United States employs a multi-layered regulatory approach to AI, combining federal executive orders, agency guidance, and diverse state laws, creating a complex compliance landscape for businesses.

State-level legislation, such as the Colorado AI Act and California AI Transparency Act, leads AI regulation efforts by focusing on high-risk AI systems, transparency, and consumer protection.

Organisations should implement strong AI governance strategies, including risk assessments, transparency measures, and ongoing monitoring.

Current State of AI Regulations in the United States

The regulatory environment for artificial intelligence in the United States reflects a balance between innovation and oversight. Unlike jurisdictions that have enacted comprehensive AI legislation, the US relies on a combination of executive orders, agency guidance, and state laws to regulate AI systems.

The Multi-Layered Regulatory Approach

Currently, no federal legislation explicitly governs the development and deployment of AI across all sectors. Instead, the regulatory framework consists of:

Federal executive orders and agency guidelines that provide broad principles for trustworthy AI
Sector-specific regulations enforced by agencies like the FTC, EEOC, and CFPB
State AI laws that address specific use cases and high-risk AI systems
Industry standards such as the NIST AI Risk Management Framework

This approach has created what experts describe as a “regulatory patchwork” where businesses must comply with varying requirements depending on their location, industry, and specific AI applications.

Key Timeline of 2025 Developments

The regulatory landscape shifted in early 2025 with several significant developments:

January 2025: President Trump signed the executive order “Removing Barriers to American Leadership in Artificial Intelligence,” which rescinded many of the Biden administration’s AI safety measures. This order prioritised economic competitiveness and technological leadership over regulatory scrutiny.

Early 2025: State legislatures accelerated their own efforts, with the Colorado AI Act (enacted in May 2024 and scheduled to take effect in February 2026) setting the template for regulating high-risk AI systems in employment and consumer contexts.

April 2025: President Trump signed Executive Order 14277, "Advancing Artificial Intelligence Education for American Youth," which directs federal agencies to update education strategies to incorporate AI literacy.

July 2025: The White House released "Winning the AI Race: America's AI Action Plan," outlining three pillars for federal AI policy: accelerating innovation, building AI infrastructure, and leading international diplomacy.

The Business Challenge

Organisations deploying AI systems face the challenge of managing this regulatory environment while maintaining operational efficiency. The lack of uniform federal standards means businesses operating across multiple states must develop compliance strategies that account for varying state requirements, federal guidelines, and industry-specific regulations.

Recent legislative activity, including a proposed provision of H.R. 1 (the "One Big Beautiful Bill Act") that would have suspended state and local AI regulations for a decade, demonstrates the ongoing tension between federal and state authority in AI governance. This uncertainty makes it crucial for businesses to stay informed about regulatory developments and maintain flexible compliance strategies.

Federal AI Governance Framework

The federal government’s approach to regulating AI has evolved significantly, particularly following the Trump administration’s emphasis on removing regulatory barriers to promote American leadership in the field of AI. Understanding the current federal framework is essential for any organisation deploying artificial intelligence technology.

Current Federal Approach

Rather than enacting comprehensive AI legislation, the US government regulates AI through existing laws and agency guidance. This approach relies on principles-based frameworks and sector-specific enforcement actions to address AI risks while preserving incentives for innovation.

The federal strategy focuses on three main areas:

Risk management through voluntary standards and best practices
Civil rights enforcement using existing anti-discrimination laws
National security measures to protect critical infrastructure and competitive advantages

2025 Executive Order on AI Leadership

The executive order “Removing Barriers to American Leadership in Artificial Intelligence” marked a significant shift in federal AI policy. This order directed federal agencies to:

Review and revoke policies that allegedly impede AI innovation
Prioritise American competitiveness in global AI dominance
Ensure federal procurement of AI systems is free from ideological bias
Fast-track permits for AI infrastructure, including data centres and semiconductor facilities

The order also mandated the creation of an Artificial Intelligence Action Plan within 180 days, resulting in a strategy released in July 2025.

NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF 1.0), which provides voluntary guidance for managing AI risks. This framework has become the de facto standard for many organisations, even though compliance is not legally required.

The framework emphasises four core functions:

Govern: Establishing organisational AI governance and risk management policies
Map: Understanding AI system contexts and identifying potential impacts
Measure: Assessing and testing AI systems for reliability, safety, and bias
Manage: Implementing controls and monitoring systems throughout the AI lifecycle

Many state laws, including the Colorado AI Act, reference NIST standards, making familiarity with this framework important for compliance across jurisdictions.
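
To make those four functions concrete, the sketch below shows one way an organisation might structure an internal risk register around them. It is a minimal Python illustration; the class, field names, and example entries are our own assumptions, not artefacts of the NIST framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskRegisterEntry:
    """One register entry, organised by the four NIST AI RMF functions."""
    system_name: str
    govern: list[str] = field(default_factory=list)   # policies, accountability
    map: list[str] = field(default_factory=list)      # context, potential impacts
    measure: list[str] = field(default_factory=list)  # reliability, safety, bias tests
    manage: list[str] = field(default_factory=list)   # controls, lifecycle monitoring

entry = AIRiskRegisterEntry(
    system_name="resume-screening-model",  # hypothetical system
    govern=["AI use policy approved by legal", "named system owner"],
    map=["screens job applicants", "consequential employment decisions"],
    measure=["quarterly bias audit", "accuracy benchmark on held-out data"],
    manage=["human review of rejections", "rollback plan for drift alerts"],
)
print(entry.system_name, "->", len(entry.measure), "measurement activities")
```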

Agency Enforcement Actions

Federal agencies continue to enforce existing laws against discriminatory or deceptive AI practices, even without the passage of specific legislation regarding AI. Key enforcement areas include:

Federal Trade Commission (FTC): The FTC has taken action against companies for deceptive AI claims and algorithmic bias. Notable cases include enforcement actions against Rite Aid for the improper use of facial recognition technology, which resulted in false accusations and discriminatory impacts on customers.

Equal Employment Opportunity Commission (EEOC): The EEOC actively investigates AI-related employment discrimination, particularly in automated hiring and evaluation systems. The agency has issued guidance on how existing civil rights laws apply to AI tools in the context of employment decisions.

Consumer Financial Protection Bureau (CFPB): The CFPB monitors the use of AI in financial services, particularly to ensure fair lending compliance and protect consumers in automated decision-making processes.

Congressional Activity

While comprehensive AI legislation remains stalled, Congress continues to consider various measures:

S.2551 AI Training Act: Proposed legislation to require training for federal employees working with AI systems and establish government-wide AI governance standards.

House and Senate hearings: Regular oversight hearings examine the risks, benefits, and regulatory approaches of AI, with Republicans in both chambers expressing concerns about regulatory overreach while supporting innovation.

The path to comprehensive federal legislation remains uncertain, with ongoing debate about the appropriate balance between innovation and regulation.

State-Level AI Regulations and Key Laws

State governments have become the primary drivers of AI regulation in the United States, with 38 states enacting approximately 100 AI-related measures in 2025 alone. This state-by-state approach has created a diverse landscape of requirements that businesses must navigate.

Overview of State AI Regulatory Activity

State AI laws typically focus on specific use cases rather than regulating all artificial intelligence systems. Common areas of state regulation include:

Employment and hiring decisions using automated decision systems
Consumer protection and transparency requirements
Biometric data collection and facial recognition
Healthcare AI applications
Government use of AI technologies

The rapid pace of state legislation reflects growing awareness of AI risks and the absence of federal legislation to address these concerns.

Colorado AI Act (SB24-205)

Colorado became the first state to enact comprehensive AI legislation with the passage of SB24-205, known as the Colorado AI Act. Signed in May 2024, the law takes effect February 1, 2026, and establishes a state-level framework for regulating high-risk AI systems.

Key Provisions:

Scope: Applies to developers and deployers of AI systems that make consequential decisions in employment, education, financial services, healthcare, housing, insurance, and legal services
Risk assessments: Requires impact assessments for high-risk AI systems before deployment
Algorithmic discrimination: Prohibits the use of AI systems that result in unlawful discrimination
Consumer rights: Provides consumers the right to know when AI systems make decisions affecting them
Disclosure requirements: Mandates disclosure of AI system capabilities, limitations, and known risks

The Colorado law references the NIST AI Risk Management Framework, encouraging deployers to implement these voluntary standards. Organisations that comply with equivalent AI standards may receive reduced penalties for violations.

California AI Legislation

California has enacted multiple AI-related laws, effective January 2026, that establish requirements for AI transparency and consumer protection.

SB-942 AI Transparency Act: This legislation requires businesses to disclose when consumers interact with generative AI systems and provide clear labelling of AI-generated content. The law applies to any company operating in California that uses AI to interact with consumers or create content for public consumption.

AB 2013: Requires developers of generative AI systems to publicly document the data used to train their models, including whether the training data contains copyrighted or personal information.

The California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) also contain provisions that apply to AI systems, particularly regarding automated decision-making and consumer rights to explanation.

Other Notable State Laws

Illinois HB 3773: Taking effect January 1, 2026, this law prohibits discrimination in employment decisions made by artificial intelligence systems. It requires employers to provide notice when AI is used in hiring and allows job applicants to request information about the AI system’s decision-making process.

Tennessee ELVIS Act (HB 2091): Effective July 1, 2024, the Ensuring Likeness, Voice and Image Security (ELVIS) Act protects individuals from unauthorised use of their voice or likeness in AI-generated content. Named with a nod to Elvis Presley, the law addresses AI-generated deepfakes and provides civil remedies for affected individuals.

Utah SB 149 – AI Policy Act: Effective May 1, 2024, this law requires disclosure when consumers interact with generative AI, most stringently in regulated occupations, and establishes an Office of Artificial Intelligence Policy, including a regulatory "learning laboratory," to coordinate state AI governance.

New York City Local Law 144: While technically a local ordinance, NYC’s bias audit requirement for automated hiring tools has become a model for other jurisdictions. Effective July 5, 2023, the law requires employers to conduct annual bias audits of AI systems used in hiring decisions.

Emerging State Legislation

Several states are considering additional AI regulations for 2026 and beyond:

Washington: Proposed AI regulation similar to Colorado’s approach
Connecticut: Bills addressing AI in healthcare and government services
Massachusetts: Legislation focusing on AI in criminal justice and policing
Texas: Proposed measures addressing AI in education and employment

The trend toward state-level regulation shows no signs of slowing, with many states viewing AI governance as essential for protecting residents while federal legislation remains uncertain.

Core Compliance Requirements and Best Practices

Navigating AI regulations requires understanding common compliance themes and implementing sound risk management strategies. While specific requirements vary by jurisdiction, several core principles emerge across federal guidelines and state AI laws.

Common Regulatory Themes

Most AI regulations, whether federal or state, focus on similar concerns:

Risk Assessment and Impact Evaluation: Nearly all AI regulations require some form of risk assessment for high-risk AI systems. These assessments typically evaluate potential impacts on civil rights, consumer protection, and safety. Organisations must identify automated decision systems that could significantly affect individuals and conduct impact evaluations before deployment.

Transparency and Disclosure: Regulations consistently require organisations to inform users when they interact with AI systems. This includes disclosing when AI makes decisions affecting consumers, employees, or other stakeholders. The specific disclosure requirements vary, but the principle of transparency remains constant across jurisdictions.

Bias Testing and Algorithmic Auditing: Many state laws mandate regular testing of AI systems for discriminatory impacts. This includes both pre-deployment testing and ongoing monitoring to ensure systems don’t produce biased outcomes against protected classes.
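
As a rough illustration of what such an audit computes, the following Python sketch compares selection rates across demographic groups and flags any group whose impact ratio falls below the four-fifths threshold, the kind of figure reported in bias audits under NYC Local Law 144. The numbers and function names are hypothetical, and the four-fifths rule is the EEOC's traditional rule of thumb rather than a universal statutory cutoff.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants if applicants else 0.0

def impact_ratios(group_stats: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in group_stats.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# (selected, applicants) per demographic group -- illustrative numbers only
stats = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in impact_ratios(stats).items():
    flag = "review" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```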

Specific Compliance Requirements

Colorado AI Act Requirements:

Conduct impact assessments for high-risk AI systems before deployment, at least annually thereafter, and after any substantial modification (a scheduling sketch follows this list)
Implement reasonable care standards to protect against algorithmic discrimination
Provide consumers with notices when AI systems make consequential decisions
Maintain documentation of AI system governance and risk management processes
Notify the Colorado Attorney General upon discovering that a high-risk AI system has caused algorithmic discrimination
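
As flagged above, here is a minimal Python scheduling sketch for that assessment cadence. It assumes a simple annual-refresh-plus-modification rule; the statute's actual triggers and deadlines should be confirmed with counsel.

```python
from datetime import date, timedelta

def assessment_due(last_assessment: date | None,
                   modified_on: date | None,
                   today: date) -> bool:
    """True when a new impact assessment should be initiated (illustrative rule)."""
    if last_assessment is None:
        return True                              # never assessed: do it pre-deployment
    if today - last_assessment > timedelta(days=365):
        return True                              # annual refresh is overdue
    if modified_on is not None and modified_on > last_assessment:
        return True                              # substantial change since last review
    return False

# Assessed in January, substantially modified in June: a reassessment is due.
print(assessment_due(date(2025, 1, 10), date(2025, 6, 1), date(2025, 9, 1)))  # True
```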

California Disclosure Requirements:

Label AI-generated content clearly and conspicuously
Disclose when consumers interact with generative AI systems
Provide transparency reports for large-scale AI models, including training data documentation
Implement safety measures and impact assessments for generative AI systems
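
For illustration, the sketch below shows one simple way a publisher might pair a visible label with a machine-readable provenance record for AI-generated content. The label wording, function names, and record fields are assumptions, not language drawn from the California statutes.

```python
AI_DISCLOSURE = "This content was created or edited with the assistance of AI."

def label_ai_content(body: str) -> str:
    """Prepend a visible, conspicuous disclosure notice to AI-generated text."""
    return f"[{AI_DISCLOSURE}]\n\n{body}"

def provenance_record(model: str, created: str) -> dict:
    """A latent, machine-readable disclosure to store alongside the content."""
    return {"ai_generated": True, "model": model, "created": created}

print(label_ai_content("Quarterly market summary ..."))
print(provenance_record(model="example-model-v1", created="2025-09-01"))
```

Keeping the visible label and the latent record separate mirrors the common practice of embedding provenance metadata alongside published content rather than only in the text itself.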

Federal Agency Expectations:

Follow NIST AI Risk Management Framework principles
Ensure AI systems comply with existing civil rights and consumer protection laws
Maintain human oversight for automated decision-making in sensitive contexts
Document AI system testing and validation procedures

Industry-Specific Considerations

Different sectors face unique regulatory requirements based on existing industry regulations and AI application contexts:

Healthcare: AI systems in healthcare must comply with HIPAA privacy requirements, FDA medical device regulations, and state medical practice laws. Healthcare AI developers must consider patient safety, data privacy, and clinical validation requirements to ensure the effective implementation of AI in healthcare.

Financial Services: Financial institutions using AI must comply with fair lending laws, consumer protection regulations, and banking oversight requirements. The CFPB actively monitors the use of AI for discriminatory lending practices and deceptive marketing.

Employment: AI tools used in hiring, evaluation, or workplace decisions are subject to scrutiny under civil rights laws, state employment regulations, and emerging AI-specific employment protections. The EEOC’s guidance on AI in employment provides valuable insights into compliance.

Government: Public sector AI use typically faces the highest scrutiny, with requirements for public transparency, due process protections, and constitutional compliance. Many local governments have enacted specific policies for the procurement and deployment of AI.

Practical Compliance Steps

Organisations can take several concrete steps to ensure compliance across multiple jurisdictions:

AI System Inventory: Maintain an inventory of all AI tools and automated decision systems used in operations. This inventory should include the system’s purposes, data sources, decision-making capabilities, and the populations it affects.
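
A minimal sketch of what an inventory record and a first-pass risk screen might look like follows. The field names and screening logic are illustrative assumptions; the risk domains are borrowed from the consequential-decision areas named in laws like the Colorado AI Act.

```python
# Domains where automated decisions are commonly treated as high-risk.
HIGH_RISK_DOMAINS = {
    "employment", "education", "financial_services", "healthcare",
    "housing", "insurance", "legal_services",
}

# Hypothetical inventory entries with purpose, domain, and affected populations.
inventory = [
    {"name": "chat-support-bot", "purpose": "customer FAQ", "domain": "support",
     "makes_decisions_about_people": False},
    {"name": "loan-underwriting-model", "purpose": "credit decisions",
     "domain": "financial_services", "makes_decisions_about_people": True},
]

def needs_impact_assessment(system: dict) -> bool:
    """Flag systems that likely qualify as high-risk and warrant formal review."""
    return (system["makes_decisions_about_people"]
            and system["domain"] in HIGH_RISK_DOMAINS)

for s in inventory:
    print(s["name"], "->", "assess" if needs_impact_assessment(s) else "track")
```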

Risk Assessment Process: Implement standardised procedures for evaluating AI system risks before deployment. Consider adopting the NIST framework as a baseline, then add jurisdiction-specific requirements as needed.

Documentation Standards: Establish strong documentation practices covering AI system development, testing, deployment, and monitoring. This documentation proves essential for compliance audits and regulatory inquiries.

Training and Awareness: Ensure employees understand AI regulations relevant to their roles and responsibilities. Regular training helps prevent compliance violations and promotes responsible AI practices.

Vendor Management: Develop AI vendor evaluation processes that assess suppliers' compliance capabilities and contractual protections. Many organisations rely on third-party AI systems, making vendor compliance a critical business risk.

Compliance Area | Key Requirements | Applicable Regulations
Risk Assessment | Impact evaluation before deployment | Colorado AI Act, NIST Framework
Transparency | User notification and disclosure | California laws, FTC guidance
Bias Testing | Regular algorithmic auditing | NYC Local Law 144, Colorado AI Act
Documentation | System governance records | Multiple state and federal requirements
Human Oversight | Meaningful human review | EEOC guidance, best practices

Monitoring and Ongoing Compliance

Compliance with AI regulations requires ongoing attention rather than one-time implementation:

Regular Auditing: Establish periodic reviews of AI system performance, including bias testing, accuracy assessments, and impact evaluations. Many regulations require annual or ongoing monitoring.

Regulatory Tracking: Stay informed about evolving AI regulations through legal counsel, industry associations, and regulatory monitoring services. The pace of AI regulation continues to accelerate across jurisdictions.

Incident Response: Develop procedures for addressing AI system failures, bias discoveries, or compliance violations. A quick response and remediation demonstrate good faith in compliance efforts.

Stakeholder Engagement: Maintain open communication with affected communities, employees, and customers regarding AI system use and impacts. Proactive engagement can prevent regulatory issues and build trust.

Preparing for the Future of US AI Regulation

The regulatory environment for artificial intelligence continues evolving rapidly, with significant developments expected in the coming years. Organisations must adopt forward-thinking strategies to operate in this changing environment while maintaining competitive advantages in AI innovation.

Expected Regulatory Developments

Expanding State Legislation: Industry analysts predict that most states will introduce some form of AI regulation by 2026. This expansion is likely to focus on specific high-risk applications rather than comprehensive AI governance, resulting in an increasingly complex patchwork of requirements.

Current trends suggest states will prioritise:

Employment and hiring AI systems regulation
Consumer protection for AI-generated content
Healthcare AI safety and efficacy standards
Government transparency in AI procurement and deployment
Educational AI applications and student privacy

Federal Legislative Prospects: While the outlook for comprehensive federal legislation remains uncertain, several factors may accelerate congressional action:

International pressure from the EU AI Act and other global standards
High-profile AI incidents or failures that demonstrate regulatory gaps
Economic competitiveness concerns that regulatory fragmentation hinders innovation
Constituent pressure for consistent consumer protections

The legislative process faces challenges from competing priorities between innovation promotion and risk reduction, as well as partisan disagreements about the appropriate federal role in AI governance.

International Regulatory Influence

The EU AI Act continues to influence US regulatory thinking and business practices, particularly for multinational companies. Key areas of influence include:

Global Standards Harmonisation: US companies operating internationally must consider EU requirements, creating pressure for similar domestic standards. This dynamic may accelerate the adoption of risk-based regulatory approaches in the United States.

Competitive Positioning: The Trump administration’s focus on global AI dominance includes efforts to export American AI frameworks and standards to allied nations. This international competition may shape future US regulatory approaches to maintain technological leadership.

Cross-Border Enforcement: As AI systems increasingly operate across jurisdictions, regulatory coordination becomes essential. US agencies are developing frameworks for international cooperation on AI oversight and enforcement.

Emerging Areas of Regulatory Focus

Agentic AI Systems: As AI systems become more autonomous and capable of independent action, regulators are beginning to address the unique risks posed by agentic artificial intelligence. These systems require new approaches to accountability, control, and safety oversight.

Generative AI Content: The proliferation of AI-generated content across media, marketing, and communications continues to raise concerns about misinformation, intellectual property, and consumer deception. Expect expanded labelling and disclosure requirements for AI-generated content.

Critical Infrastructure: AI deployment in critical sectors like energy, transportation, and telecommunications faces increasing regulatory scrutiny. The federal government is developing specialised frameworks for AI safety in infrastructure applications.

AI in Democratic Processes: Growing concerns about AI’s impact on elections, political communications, and civic engagement are driving the development of new regulatory approaches. This includes measures addressing AI-generated deepfakes in political content and automated influence operations.

Business Preparation Strategies

Legal Team Coordination: Organisations should ensure their legal, compliance, and technology teams work closely together on AI governance. This coordination becomes increasingly essential as regulations become more technically complex and enforcement intensifies.

Vendor Evaluation Processes: Develop vendor assessment procedures that evaluate AI suppliers’ compliance capabilities, security measures, and regulatory alignment. Many compliance failures occur through third-party AI tools rather than internal systems.

Policy Development: Create internal AI governance policies that exceed minimum regulatory requirements. Proactive policy development provides flexibility to adapt to new regulations while demonstrating good faith compliance efforts.

Cross-Jurisdictional Planning: For organisations operating across multiple states or internationally, develop compliance strategies that satisfy the most stringent requirements across jurisdictions rather than the minimum standards of each.

Staying Compliant Across Jurisdictions

Monitoring Tools and Resources: Implement systematic approaches to tracking regulatory developments:

Subscribe to regulatory monitoring services that track AI legislation across jurisdictions
Join industry associations that provide regulatory updates and advocacy
Establish relationships with legal counsel specialising in AI and technology law
Participate in regulatory comment processes to influence policy development

Adaptive Compliance Programs: Design compliance programs that can quickly adapt to new requirements:

Build flexibility into AI system design and deployment processes
Maintain modular documentation that can be updated for new requirements
Develop standard operating procedures that can accommodate varying jurisdictional needs
Train staff on regulatory principles rather than just specific current requirements

Risk-Based Prioritisation: Focus compliance efforts on the highest-risk AI applications and most likely regulatory scenarios:

Prioritise compliance for AI systems affecting employment, healthcare, and financial services
Invest in bias testing and fairness measures for customer-facing AI tools
Ensure transparent practices for any AI systems making decisions about individuals
Maintain strong data governance for AI training and operation

Industry-Specific Preparation

Different sectors should focus on specific areas of likely regulatory expansion:

Technology Companies: Prepare for increased scrutiny of AI model development, training data usage, and safety testing. Consider adopting voluntary standards that may become mandatory requirements.

Healthcare Organisations: Focus on patient safety, privacy protection, and clinical validation for AI tools. Regulatory agencies are developing specialised frameworks for medical AI applications.

Financial Services: Emphasise fair lending compliance, consumer protection, and risk management for AI-driven financial decisions. Expect increased enforcement of existing regulations applied to AI systems.

Employers: Prepare for expanded employment-related AI regulations covering hiring, evaluation, scheduling, and workplace monitoring. Implement human oversight and bias testing for employment AI tools to ensure fairness and accuracy.

Conclusion and Next Steps

The future of AI regulations in the US will be shaped by ongoing tensions between innovation and oversight, as well as between federal and state authority, and domestic and international considerations. Organisations that proactively address these challenges while maintaining flexible compliance strategies will be best positioned for success in this evolving regulatory environment.

Immediate Action Items:

1. Conduct a comprehensive inventory of all AI systems currently in use
2. Assess current compliance gaps against existing federal guidelines and applicable state laws
3. Implement risk assessment procedures based on NIST framework principles
4. Develop internal AI governance policies and training programs
5. Establish monitoring systems for regulatory developments in relevant jurisdictions

Long-term Strategic Considerations:

Build compliance capabilities that can scale with regulatory expansion
Invest in AI safety and fairness technologies that exceed current requirements
Participate in industry standards development and regulatory policy discussions
Develop competitive advantages through responsible AI practices that build consumer trust

Start preparing your organisation today by conducting an AI inventory, adopting risk management frameworks, and developing adaptive compliance policies that can evolve with the rapidly changing regulatory environment. The future of AI regulation may be uncertain, but the need for proactive preparation is evident.

FAQs

What is the current federal approach to AI regulation in the US?

The US federal government regulates AI primarily through existing laws, executive orders, and agency guidance rather than comprehensive federal legislation. This approach emphasises voluntary risk management, civil rights enforcement under current anti-discrimination laws, and national security measures, while promoting innovation and American leadership in AI.

What are the key provisions of the Colorado AI Act?

The Colorado AI Act applies to developers and deployers of high-risk AI systems in sectors like employment, healthcare, and financial services. It requires impact assessments before deployment, prohibits algorithmic discrimination, mandates consumer disclosures when AI makes consequential decisions, and encourages adherence to standards like the NIST AI Risk Management Framework.

How do state AI laws impact businesses operating across multiple states?

State AI laws vary significantly, resulting in a complex patchwork of regulations. Businesses must develop compliance strategies that account for differing state requirements, including transparency, bias testing, and disclosure obligations. Staying informed about evolving state legislation and adopting flexible AI governance policies is essential for multi-state operations.