The journey toward regulating artificial intelligence (AI) in the United States gained initial momentum with the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (EO 13960), issued in December 2020. Although not a binding law, this Executive Order provided guidance for federal agencies, emphasizing the importance of transparency, fairness, and accountability in AI systems. It established principles for the ethical deployment of AI in government operations but did not impose specific legal obligations. Instead, it served as a critical first step in raising awareness about AI risks and set the stage for more robust regulatory frameworks.
The Executive Order helped establish foundational values for AI governance and signaled the need for future legislation, prompting state lawmakers to begin crafting their own AI regulations to address concerns at a more localized level.
Building on these principles, state lawmakers across the U.S. have taken the lead in addressing AI regulation. Recognizing that federal action on AI remains slow, states have started developing their own rules to govern AI’s use in both public and private sectors. These laws focus on mitigating algorithmic discrimination, ensuring transparency, and protecting consumers in their interactions with AI technologies.
For instance, in 2024, Connecticut and Colorado led the charge in AI legislative efforts. Connecticut Senator James Maroney introduced a bill aimed at preventing discrimination in AI decision-making systems. Although it passed the state Senate, it stalled in the House after Governor Ned Lamont indicated he would veto the measure.
Meanwhile, Colorado passed the Colorado AI Act, the first comprehensive state law designed to mitigate algorithmic discrimination by high-risk AI systems. The law, which takes effect in 2026, targets AI systems used in consequential decisions such as employment, housing, and healthcare. Colorado also created a working group to refine the legislation and adapt it to future technological developments before it becomes fully operational.
Other states, such as Utah and Illinois, have passed narrower AI-related laws. Utah’s law regulates the private sector’s use of generative AI, while Illinois amended its Human Rights Act to prevent AI systems from discriminating in employment decisions.
California has emerged as a major player in regulating AI at the state level. In 2024, California lawmakers passed several significant AI bills, including:
• AB 2013: Requires developers of generative AI systems to provide transparency about the data used to train their AI models.
• AB 2885: Establishes a uniform legal definition of AI in California law, providing a foundation for transparency and accountability requirements in AI system operations.
• SB 942: Known as the California AI Transparency Act, requires covered generative AI providers with more than one million monthly users to offer a free AI detection tool and to include disclosures identifying AI-generated content.
• SB 896: Mandates responsible use of generative AI in state operations, requiring transparency in AI communications, privacy safeguards, infrastructure risk assessments, and workforce development partnerships to promote ethical, secure AI applications.
• AB 2655: The Defending Democracy from Deepfake Deception Act of 2024 requires large online platforms to block or label false, AI-generated election content in California and allows candidates or officials to seek legal action for non-compliance.
• AB 1836: Known as "Use of Likeness: Digital Replica," establishes liability of at least $10,000 or actual damages for using a digital replica of a deceased person's voice or likeness in expressive works without consent, with exceptions for news, commentary, satire, and biographical content. The law expands protections for the likeness of deceased individuals in California.
• AB 3030: Titled "Health Care Services: Artificial Intelligence," requires health facilities, clinics, and physician offices that use generative AI for patient communications to include a clear disclaimer that the information was AI-generated, along with instructions for reaching a human healthcare provider, promoting transparency and accountability in healthcare AI.
However, not all proposed legislation made it into law. Governor Gavin Newsom vetoed SB 1047, which aimed to regulate large AI models by requiring the development of security protocols and a “kill switch” for covered systems, reflecting ongoing debates about how far AI regulations should go.
Across the states, several recurring themes have emerged in AI regulation: mitigating algorithmic discrimination, mandating transparency about AI systems and AI-generated content, and protecting consumers in their interactions with AI technologies. These initiatives show that state lawmakers are not waiting for federal action and are instead taking a proactive stance on AI governance.
As states forge ahead with their AI regulations, collaboration among them will be essential to create a cohesive framework. Without some level of standardization, companies operating across state lines may face a patchwork of laws that complicate compliance. This underscores the importance of dialogue among legislators to share best practices and learn from each other’s experiences. Moreover, engaging with technologists, ethicists, and community stakeholders can help ensure that these regulations are not only effective but also equitable, addressing the needs and concerns of diverse populations.
The landscape of AI regulation is likely to continue evolving as technology advances and societal attitudes shift. As more states implement their own laws, we could see a ripple effect that prompts federal lawmakers to take action. This ongoing dynamic creates an opportunity for informed public discourse about the ethical use of AI, encouraging citizens to voice their perspectives on how AI should shape our lives. With active participation from all stakeholders – businesses, consumers, and regulators alike – the path forward can lead to a balanced approach that fosters innovation while safeguarding public interests.
While the federal government has yet to pass comprehensive AI legislation, state-level efforts are paving the way for a more regulated AI landscape. The groundwork laid by the 2020 Executive Order has been built upon by these state initiatives, which address the unique challenges and risks posed by AI technologies. Over time, these state experiments could shape the future of AI governance at both the national and international levels.
For businesses and developers, keeping pace with these rapidly changing regulations is essential for staying compliant and ensuring responsible AI innovation. States are taking bold steps to protect consumers and promote transparency, and these legislative efforts are likely just the beginning of a broader push for comprehensive AI governance in the U.S.
Disclaimer: This blog post is intended solely for informational purposes. It does not offer legal advice or opinions. This article is not a guide for resolving legal issues or managing litigation on your own. It should not be considered a replacement for professional legal counsel and does not provide legal advice for any specific situation or employer.