'Premature'

Industry and Consumer Groups at Odds on Need for FCC's Proposed AI Rules

The U.S. Chamber of Commerce said the FCC should avoid taking the steps proposed in an AI NPRM. Commissioners approved the NPRM 5-0 in August. Conversely, consumer groups supported the proposed rules. Comments were due last week and posted Thursday and Friday in docket 23-362.

New rules would be premature and likely harm consumers, the Chamber said. “The record … contains limited evidence that the impact of AI-enabled robocalls and robotexts require new regulations,” the group said: “At this point, existing regulations and frameworks are sufficient to address any new risks.” The Chamber said “given the evolving nature of AI … it is unclear to what extent new regulations would be effective.”

But consumer groups, led by the National Consumer Law Center and the Electronic Privacy Information Center, said consumers need the NPRM's proposed protections. Callers “must obtain explicit consent from called parties to receive calls that use this technology” and AI-generated calls “must include a disclosure at the beginning of each call,” the groups argued. Calls to wireless and wireline phones should be treated the same, and the rules should cover texts that include AI-generated voices, they added. Twenty-four other groups endorsed the filing.

“With that information, consumers can end the call if they do not wish to proceed. … If the disclosure is short and to the point, this is information that consumers can understand and act on,” said Consumer Reports.

CTIA said collaboration with industry is the best and safest approach. The FCC’s proposals “to define ‘AI-generated calls’ and subject those calls to specific pre- and on-call disclosure requirements are not the most effective way to protect consumers from illegal AI-generated calls, particularly considering the Commission’s existing efforts,” CTIA said: “At worst, these proposals risk causing significant confusion for callers, providers, and consumers alike and ultimately chilling innovative uses of AI that would benefit callers and consumers.” The group urged the FCC to align its approach with that of other federal agencies, including the National Institute of Standards and Technology, which “are establishing the frameworks that will guide a risk-based approach to the use of AI.”

USTelecom agreed that a cautious approach to new rules is needed and stressed the importance of flexibility. Bad actors, it predicted, will likely ignore new consent requirements. “New calling industry boilerplate that also asks for consent to make AI-generated calls when asking for consent to make autodialed and/or prerecorded or artificial calls will not provide more meaningful information to consumers, nor will it further empower consumer choice,” USTelecom said: “A consumer’s consent to receive a prerecorded or artificial voice call should be sufficient to receive an AI-generated call.” The FCC should also “take all steps” necessary “to ensure individuals with disabilities can take advantage of potentially transformative AI-calling technologies without” risk of violating the Telephone Consumer Protection Act.

The American Association of Political Consultants (AAPC) was among those warning that the proposed rules would have little real-world benefit. The proposed “definition of ‘AI-generated call’ and its requirements are overly broad and impose significant burdens on political campaigns and consultants,” AAPC contended. Additional disclaimer requirements “on top of those already required under federal and state laws -- will compel campaigns and consultants to expend additional resources on legal advice, data handling, and data management” and “increase the potential for lawsuits,” the group said.

NCTA cautioned “against moving too quickly to adopt new disclosure requirements specific to such calls, before the scope of how these technologies are being and could be used is understood.” Much remains unknown about how bad actors will use AI, NCTA said: “An understanding of the scale and types of malicious calling using AI technologies is critical to ensuring that any new rules -- to the extent they are needed -- are effective.”

Commenters also questioned the FCC’s legal authority to impose the proposed rules. The FCC concludes in the notice that it “has legal authority subject to section 227 of the TCPA to adopt its disclosure proposals because the Act authorizes the Commission to ‘make technical and procedural standards for systems that are used to transmit any artificial or prerecorded voice message via telephone,’” said Incompas and the Cloud Communications Alliance. That provision doesn’t “confer authority to regulate any technology that may be tangentially related to telecommunications,” they said.