
Rosenworcel: FCC AI Definition Needs Fine-Tuning, No to 'Department of AI'

FCC Chairwoman Jessica Rosenworcel acknowledged Friday that the agency’s definition of AI may need fine-tuning and rejected the idea of a dedicated AI regulatory agency. Speaking at the 7th Annual Berkeley Law AI Institute, she also discussed the end of Chevron deference.


“We should start with disclosure to understand the scope of what is happening” when generative AI is used in political ads, robocalls and texts, Rosenworcel said. Requiring disclosures now will create the expectation of transparency around AI use, she said. “We should start with disclosure to make sure we have a legal and social norm -- that the public at large expects to be told.”

The FCC’s proposed AI disclosure requirements are a first step that could help raise awareness of the threat of AI deepfakes and create transparency about the extent of generative AI use, Rosenworcel said. “The power to manipulate voice, images and video now is unlike anything we've seen in history," she said. Agencies should also come down hard on early AI abusers as a deterrent to future violations, she said. “When we have this happen at large scale, in big public ways, it's incumbent on federal actors, the FCC included, to be noisy about their enforcement activity.” The agency issued a $6 million penalty Thursday over January robocalls that used an AI facsimile of President Joe Biden’s voice in an attempt to discourage Democrats from voting in the New Hampshire primary (see 2409260026).

Rosenworcel conceded that the scope of the FCC’s definition of AI in its proceeding on political ads might be too broad. Asked whether the definition could sweep in commonly used video editing techniques alongside actual deepfakes, Rosenworcel said the agency started the rulemaking “to invite comment on just that kind of thing.” The FCC purposely used a broad definition, she said. “I'll be the first to suggest that [the definitions] could probably use some fine-tuning for issues just like that one.” Numerous commenters in the recent round of filings in the proceeding argued that the agency’s policy would require disclosures on nearly all content produced using a computer (see 2409200054).

“We don’t need a Department of AI,” Rosenworcel argued. AI is “akin to software” and in coming years will likely permeate many parts of the economy and areas that multiple regulatory agencies oversee, she said. Instead of a dedicated AI agency, federal actors should “harmonize” their AI policies. “We should be in the business of locking arms and working together,” she said. “Transparency is a good place to start.” Commenting on AI policies emerging in many states, Rosenworcel said it would be “useful” for Congress and the federal government to “set a standard” and then “respect the work of the states that might also help us reach that standard.”

The FCC can continue regulating without the backstop of Chevron deference because the agency's decisions are often highly technical, Rosenworcel said. While the loss of Chevron will likely be “challenging for a lot of different regulatory actors going forward in Washington,” the FCC’s work “involves not just lawyers, but economists and engineers.” The FCC makes decisions “based on the engineering, the economics and the thesis of the words in the statute,” she said. “We're going to continue to do that, and I have some confidence if we do that well, we're going to be able to continue our work.”