FTC Issues Policy Statement Against Deceptive AI Practices
The FTC will consider regulatory action against companies using artificial intelligence products “even before consumers are harmed,” FTC Chair Lina Khan said Thursday. The commission voted 3-0 to issue a policy statement detailing how the agency plans to use existing consumer protection authorities to protect consumer privacy in connection with AI and biometric technology.
The Senate Judiciary Committee and the House Judiciary Committee held hearings this week on the harms of artificial intelligence (see 2305170045 and 2305160074). Senate Majority Leader Chuck Schumer, D-N.Y., said Thursday he met with a bipartisan group of senators Wednesday to discuss comprehensive AI legislation. He said the meeting included another Democratic senator and two Republicans. Senate Privacy Subcommittee Chairman Richard Blumenthal, D-Conn., and ranking member Josh Hawley, R-Mo., led Wednesday’s hearing with Senate Judiciary Committee members, OpenAI CEO Sam Altman and IBM Chief Privacy and Trust Officer Christina Montgomery. “Our group agreed that because AI technology is developing so quickly, Congress has to move fast,” said Schumer. “I look forward to continuing to work with a wide range of senators, from many committees and from both sides of the aisle, as we move forward.”
The FTC Act's flexible nature positions the agency to protect consumers, regardless of changing technologies, said Khan during Thursday’s meeting. The policy statement lays out how traditional consumer protection authorities apply to biometric risks, she said. Businesses need to address those risks proactively, instead of releasing the technology into the “wild” and cleaning up after the fact or expecting consumer advocates to do the cleanup work, she said. The policy statement notes the FTC Act allows the agency to act “even before consumers are harmed,” though the law requires the agency to “show the likelihood of harm,” said Khan: The FTC is going to “continue to keep that in mind as we continue to take a close look at these technologies.”
The policy statement focuses on risks associated with discrimination and civil rights abuses, said Commissioner Rebecca Kelly Slaughter. She said she hopes the statement shows industry the FTC is “well-prepared” to use its FTC Act Section 5 authority prohibiting unfair and deceptive acts or practices. Commissioner Alvaro Bedoya said companies making claims about the accuracy or real-world performance of their biometric technologies will need proof to back up those claims. He noted biometric technology is often flawed in identifying individuals, particularly women, children and minority populations. “If you’re a company using biometric technology, you need to think about how biases in that technology will affect the public, real people, all of whom have those different characteristics,” he said. “And you need to address any substantial consumer hardship, as you said, Chair Khan, proactively before those harms occur.”
The commission voted 3-0 to issue an NPRM to amend the health breach notification rule. Proposed amendments would “help clarify technologies and entities covered by the Rule, facilitate greater electronic breach notices to consumers, and expand the required content of the notices, among other changes,” the FTC said.