Researchers Propose New Framework to Manage Export Control Risks From AI Systems

A new draft report issued this month by the nonprofit Law Reform Institute examines how frontier AI systems may soon be able to generate instructions, designs and code subject to U.S. export controls, and whether the U.S. will need to restrict that capability through new controls on AI developers.

The report, authored by Law Reform Institute president Tim Schnabel and legislative director Joe Khawam, said export control frameworks, such as the Export Administration Regulations and the International Traffic in Arms Regulations, were created to control traditional transfers of technology or information between two parties. But AI models pose a challenge to that framework because they “can create customized outputs on demand for users,” the report said.

“As capabilities advance, the likelihood that users can elicit controlled content rises,” the report said. “This risk creates an untenable choice for AI developers and government agencies: accept systemic national security risks from frontier AI models or enforce restrictions against developers in ways that would undermine American technological competitiveness.”

The report said “public-facing” AI models are “difficult to police with precision.” The authors said EAR and ITAR controls partly depend on the “nationality and location of the recipient,” and these are “facts that models cannot reliably verify when prompted. Overbroad refusals would suppress legitimate educational and commercial content; narrower safeguards risk repeated, unobservable violations of export control rules with unknown consequences.” The report also said non-U.S. workers may be able to “elicit ITAR or EAR controlled outputs” from AI models “during routine work,” which could trigger deemed export licensing requirements.

The authors recommend a new control framework “tailored” to AI models. This could be a “voluntary approach pairing a limited safe harbor with baseline internal compliance systems and government-run capability evaluations.” An interagency committee composed of officials from the Commerce, State, Energy and Defense departments, along with intelligence agencies, would “evaluate models upon request by developers who maintain government-approved [technology control plans] and robust internal logging.”

That committee would assess whether a model poses a high or low risk of hurting U.S. national security, the report said. “Models deemed low-risk would be eligible for a safe harbor that limits liability for developers. For higher-risk models, the Committee would work with developers to strengthen safeguards.”

The report includes draft legislative language to create this framework, although it said “much of the framework could be established under existing authorities.” The authors are accepting comments at tim@lawreforminstitute.org and joe@lawreforminstitute.org.