
States Take the Lead in Regulating American AI Development

State laws will set the tone for AI innovation moving forward.



Before President Trump signed his “One Big Beautiful Bill” into law on July 4, the Senate unexpectedly removed a 10-year prohibition on states enacting or enforcing their own AI laws. 

Cutting the AI regulation moratorium from the bill opened the floodgates for states to set their own rules, a movement that has picked up speed across the country. The volume of laws and potential laws, however, makes it difficult for enterprises to keep up, presenting challenges with fragmentation, duplication and governance, experts said. 

“A fragmented AI regulatory map across the US will emerge fast,” said Daniel Gorlovetsky, CEO at TLVTech. “States like California, Colorado, Texas and Utah are already pushing forward.” 

State AI legislation is tackling a wide range of issues, from high-risk AI applications to digital replicas, deepfakes and public sector AI use, said Andrew Pery, ethics evangelist at ABBYY.

Colorado’s AI Act targets “high-risk” systems to prevent algorithmic discrimination in different sectors, with penalties of up to $20,000 per violation. California, meanwhile, is moving forward with myriad AI bills that focus on data transparency, impact assessments and AI safety, especially around consumer-facing applications. 

Texas is zeroing in on AI-generated content and safety standards in public services through HB 149, known as the Texas Responsible AI Governance Act. Utah’s SB 149, the Artificial Intelligence Policy Act, requires companies to disclose the use of AI when they interact with consumers, said Gorlovetsky.

Additionally, Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act, passed in 2024, “imposes strict limits on the use of AI to replicate a person’s voice or image without consent, targeting unauthorized digital reproductions,” Pery said. 

New York is expected to take the lead on financial sector regulations and push for mandatory risk disclosures from large AI developers, said Patrick Murphy, founder and CEO at Togal.AI. Additionally, the state’s legislature passed the Responsible AI Safety and Education Act that requires large AI developers to prevent widespread harm. 

So what does this patchwork of legislation mean for enterprises? The impact will depend on size and AI maturity levels.

  • Small businesses will need to start thinking about compliance before using off-the-shelf AI tools in customer-facing roles, said Gorlovetsky, while midsized companies will need to consider legal and data governance strategies state by state. 
  • Large enterprises will be forced to build compliance into their architecture and devise modular AI deployments that can toggle features depending on local laws, Gorlovetsky said. 
  • “Bottom line, if you’re building or deploying AI in the US, you need a flexible, state-aware compliance plan — now,” Gorlovetsky said.

Despite the challenges, the regulations do not necessarily translate into innovation loss. Rather, they can be leveraged to build safer and better AI. Enterprises can stay compliant by maintaining inventories of all components involved in the development, training and deployment of their AI systems.

“If the US wants to stay ahead in the global AI race, it’s not just about building smart tools,” said Murphy. “It’s about proving we can govern AI responsibly without holding back innovation and building confidence in AI without crushing startups or overwhelming smaller firms.”
