DOJ, the Commerce Department, the State Department and other agencies must improve “security and diversity of chip supply chains” to protect U.S. AI leadership, the White House said Thursday in a national security memorandum. President Joe Biden addressed the memo to DOJ, Commerce, State, the Office of Science and Technology Policy, the CIA, the Director of National Intelligence and several other agencies. “Our competitors want to upend U.S. AI leadership and have employed economic and technological espionage in efforts to steal U.S. technology,” a White House fact sheet said. The memo “makes collection on our competitors’ operations against our AI sector a top-tier intelligence priority” and directs U.S. agencies to give AI developers “the timely cybersecurity and counterintelligence information necessary to keep their inventions secure.” A senior administration official told reporters Wednesday that the memo addresses “the importance of protecting advanced AI technologies so that they’re not used against us by adversary militaries or intelligence services.” The memo warns of the risks of failing to do enough to protect and harness AI. If the U.S. government doesn’t “act with responsible speed and in partnership with industry, civil society, and academia to make use of AI capabilities in service of the national security mission -- and to ensure the safety, security, and trustworthiness of American AI innovation writ large -- it risks losing ground to strategic competitors,” it said. “Ceding the United States’ technological edge would not only greatly harm American national security, but it would also undermine United States foreign policy objectives and erode safety, human rights, and democratic norms worldwide.”
Possible collusion and improper data sharing in AI-related markets require vigilant, vigorous and timely competition enforcement, DOJ and the FTC said Friday in a joint statement with international partners. DOJ’s Antitrust Division and the FTC participated in the G7 Competition Authorities and Policymakers Summit in Rome, where they discussed AI competition concerns. The group included enforcers from Italy, France, Germany, Canada, the U.K., Japan and the European Commission. They issued a communiqué laying out guiding principles for promoting competition and innovation. The principles focus on contesting harmful conduct by dominant companies and ensuring fair access for competitors, consumer choice, interoperability and AI-model transparency. Sharing American “experiences and perspectives and reflecting on best practices alongside international enforcers helps us better apply the U.S. antitrust laws to unlock economic opportunity for the American people,” DOJ Principal Deputy Assistant Attorney General Doha Mekki said in a statement.
A federal judge blocked California’s AI deepfakes law about two weeks after Gov. Gavin Newsom (D) signed the measure. In a Wednesday order (case 2:24-cv-02527), U.S. District Judge John Mendez of the Eastern District of California granted a preliminary injunction barring Attorney General Rob Bonta (D) from enforcing AB-2839, which was signed Sept. 17 (see 2409180024). The law would prohibit people and companies from sharing election campaign content containing “materially deceptive and digitally altered or digitally created images or audio or video files” with the intent of influencing elections. Plaintiff Christopher Kohls, under the alias “Mr. Reagan,” uses AI to edit and create satirical content about politicians. He challenged the law the same day it was signed, saying it would allow politicians and others to sue him for damages and injunctive relief during election season. Kohls argued that the law violates the First and Fourteenth Amendments because it limits his free speech and is unconstitutionally vague. “AB 2839,” Judge Mendez wrote, “does not pass constitutional scrutiny because the law does not use the least restrictive means available for advancing the State’s interest here.” The judge continued, “As Plaintiffs persuasively argue, counter speech is a less restrictive alternative to prohibiting videos such as those posted by Plaintiff, no matter how offensive or inappropriate someone may find them.”
The FTC on Wednesday announced enforcement actions against five companies it said engaged in AI-driven deception online. One case resulted in a monetary settlement, another in a nonmonetary settlement, and three are in ongoing litigation. Republican Commissioners Andrew Ferguson and Melissa Holyoak voted against filing one of the complaints; the other four were filed by 5-0 votes. DoNotPay agreed to pay $193,000 to settle claims over its “robot lawyer” service. The company’s “product failed to live up to its lofty claims that the service could substitute for the expertise of a human lawyer,” the agency said. The Republicans voted against filing the complaint against Rytr. The agency said the company violated the FTC Act by providing subscribers with tools to “generate false and deceptive written content for consumer reviews.” Rytr agreed to a nonmonetary settlement barring it from similar conduct in the future. Holyoak noted in a dissenting statement that the complaint doesn’t allege any of the company’s customers actually “posted any draft reviews” online. Ferguson said: “Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense. And it threatens to turn honest innovators into lawbreakers.” The agency also announced ongoing litigation against three companies whose business schemes have been blocked in federal court. The FTC is suing Ascend Ecom, which the agency said falsely claimed to offer AI tools that could deliver customers thousands of dollars a month in passive income on e-commerce platforms like Amazon; the complaint alleges the company has cheated consumers out of at least $25 million since 2021. The FTC is suing Ecommerce Empire Builders on similar allegations: the company promised consumers thousands of dollars in monthly income from online storefronts. A separate lawsuit alleges FBA Machine cheated consumers out of more than $15.9 million by offering services that promised guaranteed income from online storefronts.
Sen. Ed Markey, D-Mass., on Tuesday introduced legislation that would impose civil penalties on companies whose AI services violate consumers’ civil rights. The legislation authorizes the FTC, state attorneys general and individual consumers to enforce the law’s provisions. The bill applies to algorithms and other technologies responsible for “consequential decisions” affecting employment, banking, health care, criminal justice, public accommodation and government services. States can seek civil penalties of up to $15,000 per violation or 4% of a company’s average gross annual revenue over the preceding three years, whichever is greater. Markey introduced the AI Civil Rights Act with Sen. Mazie Hirono, D-Hawaii.
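As an illustration of how that penalty cap works, here is a minimal Python sketch of the greater-of calculation described above; the function name and example figures are hypothetical, not drawn from the bill text:

```python
def penalty_cap_per_violation(three_year_revenues: list[float]) -> float:
    """Hypothetical illustration of the bill's per-violation ceiling for
    state enforcers as described: the greater of $15,000 or 4% of a
    company's average gross annual revenue over the preceding three years."""
    avg_annual_revenue = sum(three_year_revenues) / len(three_year_revenues)
    return max(15_000.0, 0.04 * avg_annual_revenue)

# Example: for a company averaging $50M in annual revenue, 4% ($2M)
# exceeds $15,000, so the revenue-based figure sets the ceiling.
print(penalty_cap_per_violation([45e6, 50e6, 55e6]))  # 2000000.0
```

For a small firm, the $15,000 floor dominates; for large companies, the revenue-based figure quickly becomes the binding cap.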
Tech companies are buying small AI startups without antitrust scrutiny, which could have long-term negative effects on consumers, Public Knowledge said Monday in comments to the FTC and DOJ. Tech associations countered that empirical evidence shows no competition concerns in the sector and said antitrust enforcers should rely on statistics, not conjecture. DOJ and the FTC on Friday closed public comment on their inquiry into “serial acquisitions and roll-up strategies” that they believe harm competition. Public Knowledge, in joint comments with the Responsible Online Commerce Coalition, cited strategic investments by companies like Microsoft, Google and Amazon. Companies in recent years have purchased hundreds of small tech startups, including those offering AI services, and the deals are often so small they don’t trigger antitrust review. “This has allowed Big Tech to shape numerous digital markets and expand their dominance unchallenged,” PK said. Tech companies already enjoy dominant positions in their respective markets, and purchasing AI companies further entrenches that dominance, PK said: “The lack of competition in technology ecosystems can lead to stagnation in innovation and service improvement and presents significant hurdles for consumers seeking to explore different products.” The Computer & Communications Industry Association said in comments that enforcers failed to show “how and why these business strategies raise particular competitive concerns.” The agencies’ annual Hart-Scott-Rodino report for fiscal 2022 showed enforcers don’t identify competition concerns in “most notified mergers”: they requested additional information on 47 of the 3,029 notified merger transactions in the report, fewer than 2% of the deals, CCIA said. NetChoice urged enforcers to keep their focus on “demonstrable consumer harm rather than abstract structural concerns or protection of competitors.” The association recommended the agencies rely on “grounded analysis in rigorous economic evidence rather than anecdotes or political considerations.”
FCC Chairwoman Jessica Rosenworcel will visit the University of California, Berkeley School of Law on Sept. 27 to address the Berkeley Law AI Institute, the agency said. The FCC has made AI a top focus under Rosenworcel (see 2404040040). The event starts at noon.
The Irish Data Protection Commission is investigating whether Google performed a required assessment before it began processing EU and European Economic Area (EEA) citizens’ personal data for its AI model Pathways Language Model 2 (PaLM 2). Under the country’s data protection act, such assessments can be required to ensure that people’s rights are protected when data processing is likely to result in a high risk, the DPC said. The cross-border inquiry is part of a wider effort by the DPC and its EU counterparts to regulate personal data processing as AI models and systems develop, it said. A Google spokesperson said in an email that the company takes “seriously our obligations under the [EU general data protection regulation] and will work constructively with the DPC to answer their questions.” Earlier this month, the privacy watchdog announced that X agreed to permanently stop using EU/EEA users’ personal data from public posts to train its AI tool Grok (see 2409040001).
The U.S. is among the first 10 signatories of a Council of Europe treaty on AI, said the 46-member organization, which promotes democracy, the rule of law and human rights. The agreement is a legal framework covering the entire lifecycle of AI systems and applies to public authorities and private actors, the CoE said. Among other things, it requires signatories to ensure that AI systems comply with fundamental principles such as respect for privacy and personal data protection. It requires risk and impact assessments to ensure that AI systems protect rights, along with prevention and mitigation measures. Moreover, it gives authorities the power to ban some AI applications. Signatories must also ensure that remedies, safeguards and procedures are in place for challenging AI systems. The treaty takes effect three months after the date on which five signatories, including at least three CoE members, have ratified it. Signatories so far comprise seven CoE members, including the U.K.; two nonmembers (the U.S. and Israel); and one international organization (the EU).
The Cybersecurity and Infrastructure Security Agency on Thursday named its first chief artificial intelligence officer, promoting Lisa Einstein, who has served as a senior adviser on AI for the past year. Einstein also served as executive director of the CISA Cybersecurity Advisory Committee in 2022.