Vermont’s comprehensive data privacy bill “creates an unnecessary and avoidable level of risk,” said Gov. Phil Scott (R) Thursday as he vetoed H-121. It was a win for tech industry opponents (see 2405300038) and a setback for consumer group supporters of the bill that would have made Vermont the first state with a broad private right of action (PRA). It's possible, however, for lawmakers to override Scott’s veto with a two-thirds vote in each chamber. Also Thursday, the Rhode Island legislature approved a privacy bill that consumer groups say is too weak.
Incorporating kids’ privacy language is complicating the House Commerce Committee's effort to move ahead with a comprehensive bill, members told us in interviews last week. A full committee markup is possible when the chamber returns the week of June 24.
Four tech industry groups on Tuesday joined in opposing a kids’ social media legislative proposal advancing in Pennsylvania, despite support from their member Google (see 2406060062). The Computer and Communications Industry Association, NetChoice, TechNet and Chamber of Progress oppose the Online Safety Protection Act (HB-1879). Pennsylvania’s House Children and Youth Committee voted 15-9 to pass the bill Tuesday, with one Republican in favor. The legislation would require that online platforms consider the “best interests of children” when developing products and features “children are likely to access.” Violators would face potential civil penalties enforced by the attorney general. CCIA and NetChoice have argued similar measures passed in California, Maryland and Vermont are unconstitutional, given the free speech implications for children. Committee staff on Tuesday listed Google as a supporter and the four associations as opponents. Google previously declined comment on why it supports the measure, and the company didn’t comment Tuesday. Chair Donna Bullock (D), who wrote the bill, successfully passed an amendment Tuesday with new language meant to address critics’ concerns about “vague” wording outlining what keeping children’s “best interests” in mind means. However, Rep. Charity Grimm Krupa (R) said the amendment fails to address concerns from Attorney General Michelle Henry (D) about enforceability. Krupa said she agrees with ranking member Barry Jozwiak (R), who previously said the bill is unenforceable due to its “overly broad” terms and definitions. The measure's intent is “good,” but sponsors haven’t addressed issues raised by Jozwiak, Henry and the industry groups, she said. Henry’s office didn’t comment Tuesday. Bullock said parents have an obligation to show children how to use social media platforms safely, but they can’t “do it alone.” Parents don’t understand every aspect of the technology and what’s “happening behind the scenes,” she said.
Platforms should make these services “age-appropriate” and prioritize the safety of children over profits, she added.
The FCC’s Oct. 25 declaratory ruling authorizing E-rate funding for Wi-Fi on school buses (see 2312200040) “is both appropriate and lawful,” the National Education Association, the American Federation of Teachers and eight other educational groups said in a 5th U.S. Circuit Court of Appeals amicus brief Monday (docket 23-60641) in support of the commission's ruling.
New York will soon require social media platforms to obtain parental consent when using algorithms to sort feeds for minors.
ISPs and industry groups told the FCC that while competition and access remain strong in the broadband marketplace, additional regulation could harm future investment and deployment. Those views were included in feedback the FCC sought about its biennial State of Competition in the Communications Marketplace report to Congress (see 2404220050). In comments, some wireless groups urged making additional spectrum available. MVPDs and broadcasters said the FCC should recognize the increasing competition they face from streaming video and accordingly relax regulations. Comments were posted Thursday and Friday in docket 24-119.
Google is unwilling to publicly support a kids’ social media proposal in Pennsylvania, despite the House Children and Youth Committee announcing the company’s backing Wednesday (see 2406050055).
The FTC wants to make “clear” that sharing certain types of sensitive data is “off-limits,” and the agency is paying close attention to AI-driven business models, Consumer Protection Director Samuel Levine said Wednesday. Speaking at the Future of Privacy Forum’s D.C. Privacy Forum, Levine highlighted instances where the FTC has reached settlements with data privacy violators that include prohibitions on sharing certain types of data. He noted five cases where the FTC banned sharing of health information in advertising, another case banning sharing of browsing data for advertising and at least two other cases in litigation in which the agency wants to ban sharing sensitive geolocation data. “We have made clear in our words, in our cases, complaints that certain uses of sensitive data can be off-limits.” FTC Chair Lina Khan has made similar remarks in the past (see 2401090081 and 2208290052). Levine said banning those practices will depend on the agency’s three-part FTC Act test for unfairness. Data sharing practices violate the FTC Act if they cause or are likely to cause substantial consumer injury, can’t be reasonably avoided by consumers and the potential harm isn’t outweighed by “countervailing” benefits to consumers or competition. So much of how “people experience” social media platforms and how data is handled is driven by behavioral advertising business models, said Levine. Some companies are clear about the business model incentives for AI, while other companies are “not being as clear,” he said. “It’s not illegal to want to make money. We want that in this country, but we do want to think about how these business models shape development of the technology and contribute to some of the harms we’ve seen.” It makes sense the director has a “strong view” there’s a “wide range” of statutory authority for the FTC when it comes to AI-driven data practices, said FPF CEO Jules Polonetsky. 
The FTC already has a “substantial ability” to enforce against AI-related abuse under its consumer protection regulations, Polonetsky told us. However, hard societal questions surround the technology that only Congress can answer, and that starts with a federal data privacy law, he said.
An age-appropriate social media design bill that Pennsylvania lawmakers are considering is unenforceable because of its vague language about protecting children, House Children and Youth ranking member Barry Jozwiak (R) said Wednesday. The committee planned to vote on the Online Safety Protection Act (HB-1879) but postponed the motion over Jozwiak's technical objections. Introduced by Chair Donna Bullock (D), HB-1879 would require that companies designing platforms a child will “likely" access do so with the “best interest” of children in mind. In addition, it would require age-appropriate design standards similar to provisions included in California’s enjoined social media design law (see 2311160059). Committee staff said Google supports the legislation in Pennsylvania. Google didn’t comment Wednesday. Jozwiak said he has received three pages of questions and concerns from Pennsylvania Attorney General Michelle Henry (D) about the bill’s “overly broad” terms and definitions. The measure is “essentially unenforceable” against entities that don’t gather “actual knowledge” of ages, and the AG lacks the resources to enforce it as written, he said. He formally filed to have the legislation tabled. That motion failed on a 14-11 party-line vote. Committee members had several weeks to file amendments and work with sponsors, Bullock said. Jozwiak argued consideration of the legislation would be out of order because a Bullock amendment was received at 1:22 p.m. Tuesday, and committee rules dictate that the deadline is 1 p.m. Bullock conferred with committee staff and ultimately tabled the bill. Her amendment would alter some language, including terms like “best interests of a child.” The amendment would extend the effective date of the legislation to December 2025.
Banning social media platforms from using algorithms to tailor content could expose children to harmful content, the Computer & Communications Industry Association said Wednesday. CCIA sent a letter opposing a bill the New York State Legislature is considering. Introduced by Sen. Andrew Gounardes (D) and sponsored by Sen. Patricia Canzoneri-Fitzpatrick (R), the Stop Addictive Feeds Exploitation (SAFE) for Kids Act (S-7694A) would ban platforms from using algorithms that provide “addictive feeds” to minors. California is also looking into regulations for kids’ algorithms (see 2405210019). “These algorithms also protect teens from harmful content and help tailor online content for younger users,” CCIA said. “While the intent is to reduce the potential for excessive scrolling, eliminating algorithms could lead to a random assortment of content being delivered to users, potentially exposing them to inappropriate material.” CCIA Northeast Regional Policy Manager Alex Spyropoulos said there are “numerous tools” that prevent excessive screen time and content consumption without compromising protective measures. The New York bill’s age-verification provisions could also “increase the amount of sensitive information users must share with companies and potentially cut off users from the online communities they depend on,” CCIA said.