Communications Daily is a service of Warren Communications News.
'Rapidly Moving Train'

Policymakers Disagree With Industry on Letting States Be Labs for AI Rules

States have good ideas about AI regulation that the federal government shouldn’t try to block, said two state officials and a U.S. Senate Democratic staffer during a Federal Communications Bar Association panel Tuesday. In a separate session, however, telecom industry officials suggested that a “patchwork” of state AI requirements could be more difficult for businesses than dealing with today’s array of state privacy laws.

Regardless, passing either a privacy or AI bill nationally remains challenging, particularly in an election year, said FCBA panelists. Rudy Brioche, a former Hill staffer and now a professor at George Washington University, said that on a scale of 1 to 10 on how difficult such bills would be to pass -- with 10 being the most difficult -- "AI legislation is like a 12," and privacy is “an 8.5."

The government panel discussed efforts by President Donald Trump and Congress to stop state regulation of AI, including Trump’s recent executive order (see 2512120042 and 2601070046).

States with AI laws, such as Colorado, California, Texas, Utah and New York, have “great ideas” that "are worth consideration to inform the federal debate,” said Edgar Rivas, senior policy advisor to Sen. John Hickenlooper, D-Colo. States filling the gaps in federal policy is "a feature of our system, not a bug."

Rivas said that “allowing states to move forward does not preclude the federal government from also establishing guardrails.” Having clear rules of the road can provide more investment certainty, he added.

Connecticut state Sen. James Maroney (D) agreed that "states can be the laboratories of democracy and bubble up the ideas" to the federal government. "Not allowing the states to act" is "malpractice." Maroney noted that he would support a federal standard if it’s “more of a floor than a ceiling.” The "worst-case scenario" would be preemption by a "loose" federal AI law that merely seeks voluntary compliance, he said.

Maroney, author of Connecticut’s comprehensive privacy law, also said he will try again this year to pass a similarly sweeping AI bill. The 2026 edition will focus on areas such as chatbots, pricing disclosures and employers’ use of automated decision-making technology, he said.

"Good, clear regulations are actually pro-innovation," the lawmaker argued. Innovators "want to know what they can and can't do, and it's always cheaper to do things right the first time."

Massachusetts Assistant Attorney General Kaitlyn Karpenko agreed that there shouldn’t be “an inherent conflict between … state and federal work here.” To support enforcers, she said, “the biggest thing that Congress can do ... is to not issue a moratorium.”

Also, Karpenko said she sees a misconception that AI requires new regulations because it’s a new technology. She said offices like hers, which is headed by AG Andrea Campbell (D), can regulate AI through existing laws. However, she added, she’d like to see more clarity around who's responsible for AI when it does something illegal.

Maroney likewise said that some existing laws can be applied to AI. “But in some ways, they need to be updated.” For example, because of "the black-box nature of the algorithms" and how they make decisions, "there are certain transparency requirements we need to put in law."

A privacy law provides a good foundation for AI regulation, the state senator argued. "When you're looking at AI, really the key is the data," he said. "Many states who don't have a data privacy law want to rush to regulate AI before looking at the data, which is the basis of the decisions."

AG offices need resources to enforce privacy and AI, said Karpenko. Maroney agreed, saying that’s why Connecticut added $600,000 to its data privacy budget last year. “We’ll see what we are able to do this year.”

Meanwhile, Rivas said he expects the Colorado legislature will soon return to negotiations to revise the Colorado AI Act before it takes effect in June. State lawmakers tried to make substantive changes last summer but ended up simply punting the effective date by a few months (see 2601080057 and 2508290018).

“There's no secret that Colorado and our leadership have been evaluating how the Colorado AI Act moves forward,” Rivas said. The “primary goal” is giving businesses the “certainty to invest and innovate in Colorado and making sure that consumers in Colorado are able to use AI for the best potential that they can.”

ISPs Seek to Avoid AI Patchwork

Officials from associations representing big telecom companies said they don’t want to see a patchwork of state AI regulations, as they have experienced with privacy.

David Valdez, vice president of privacy, cybersecurity and AI for wireless association CTIA, said that because AI platforms “operate at scale, it is a much more complicated undertaking for companies to create these various frameworks that have to be applied on a state-by-state basis.”

With “such a rapidly moving train” like AI, efforts to “codify restrictions” specific to one technology “could be very, very problematic,” he added.

Likewise, USTelecom is “concerned about repeating that pattern that we've seen in the privacy world with AI at a point when the technology is still evolving,” said Paul Eisler, the broadband association’s vice president of cybersecurity. "A fragmented regulatory environment increases uncertainty" and could slow the "deployment of beneficial use cases."

"Broadband providers already operate under a combination of general consumer protection laws, as well as sector-specific privacy and security obligations," Eisler added. "So a national approach to AI can promote consistency and consumer understanding."

Also, while the USTelecom official agreed that providing transparency around AI is important, he said "it needs to be implemented in a way that is practical and also secure."

“Meaningful transparency can focus on how the systems are used, their intended purpose, the types of data involved and what are the limitations of those systems, rather than requiring disclosure of proprietary algorithms or source code,” Eisler said. USTelecom supports "a risk-based approach that scales transparency obligations to the nature of the use case."

Brioche, who also worked at Comcast and advised the FCC, said dealing with the array of state privacy laws has "been, for the most part, manageable" for most companies. “But then when you layer on top of it … dealing with AI potential regulation, that makes it very difficult.”

The question of preemption has long been the fight when it comes to passing a national privacy law, noted Tatiana Rice, the Future of Privacy Forum's director for U.S. legislation. Most agree there should be a federal law, she said, "but the question is: Should that be a floor or a ceiling?"

On whether it will be hard to pass an AI law in an election year, Rice said getting such legislation through Congress “will be challenging, period.” A “core issue that the federal government needs to resolve is whether they want to regulate AI or not. I think it's pretty split at this point.”