California appropriators last week halted multiple telecom-related bills meant to help vulnerable communities. Assemblymember Mia Bonta (D) blamed the broadband industry after the Senate Appropriations Committee held back her bill that would have banned digital discrimination as the FCC defines it (AB-2239). However, that committee and its Assembly counterpart advanced several other telecom and privacy bills to final floor votes.
The FTC unanimously finalized a rule that will allow it to seek civil penalties against companies sharing fake online reviews, the agency announced Wednesday. Approved 5-0, the rule will help promote “fair, honest, and competitive” markets, Chair Lina Khan said. Amazon, the Computer & Communications Industry Association and the U.S. Chamber of Commerce previously warned the FTC about First Amendment and Section 230 risks associated with the draft proposal (see 2310030064). The rule takes effect 60 days after Federal Register publication. It allows the agency to seek civil penalties under its unfair and deceptive practices authority in the FTC Act. It bans the sale and purchase of fake social media followers and views and prohibits fake, AI-generated testimonials. The rule includes transparency requirements for reviews written by people with material connections to businesses. Moreover, it bans companies from misrepresenting the independence of reviews. Businesses are also banned from “using unfounded or groundless legal threats, physical threats, intimidation, or certain false public accusations to prevent or remove a negative consumer review,” the agency said.
The FCC "must point to clear congressional authorization" before claiming it can reclassify broadband as a Title II telecom service under the Communications Act, a coalition of industry groups told the 6th U.S. Circuit Court of Appeals in its challenge of the commission's net neutrality rules. The court granted a temporary stay of the rules earlier this month (see 2408010066). The petitioners -- ACA Connects, CTIA, NCTA, USTelecom, the Wireless ISP Association and several state telecom associations -- said in their opening brief filed late Monday (docket 24-7000) that the "best reading of the federal communications laws forecloses the commission’s reclassification."
House Judiciary Committee Chairman Jim Jordan, R-Ohio, should investigate potential political misinformation against Vice President Kamala Harris on X, ranking member Jerry Nadler, D-N.Y., wrote the chairman Monday. Nadler cited allegations that Grok, X's AI chatbot, shared inaccurate information about Harris. Grok told users Harris missed ballot deadlines in “nine states and suggested that she was ineligible to appear on the presidential ballot in the 2024 election,” Nadler said. The platform removes misinformation against Republican politicians but doesn't apply the same standard to Democrats, he added. Given Jordan’s “extensive” focus on social media censorship claims, his office should investigate this issue, he said. A spokesperson for Jordan said Monday: “No one is doing more for free speech on the internet than Elon Musk and his platform is working better than ever.” Jordan has led various committee efforts probing alleged social media censorship against conservatives (see 2405010079).
New Mexico Attorney General Raul Torrez (D) is working with state lawmakers on legislation aimed at holding social media platforms more accountable for disseminating deepfake porn, he told us Wednesday.
Companies like Meta intentionally target children and must be held more accountable for social media-related harm, attorneys general from New Mexico and Virginia said Wednesday. New Mexico AG Raul Torrez (D) and Virginia AG Jason Miyares (R) discussed potential solutions to online child exploitation during the Coalition to End Sexual Exploitation Global Summit that the National Center on Sexual Exploitation and Phase Alliance hosted. Torrez said the tech industry received an “extraordinary grant” through Communications Decency Act Section 230, which Congress passed in 1996 to promote internet innovation. Section 230 has been a hurdle to holding companies accountable, even when they knowingly host illegal activity that’s harmful to children, Torrez added. Miyares said AGs won't wait for legislators in Washington to solve the problem, noting state enforcers' success in the courts. Tech companies shouldn’t be able to use Section 230 as a shield from liability while also acting as publishers and removing political content they disfavor, Miyares added. Torrez acknowledged he and Miyares disagree on many things, but they agree on the need to increase liability and accountability of tech platforms when it comes to children.
TikTok “flagrantly” violated children’s privacy law when it let kids open accounts without parental consent and collected their data, DOJ and the FTC alleged Friday in a lawsuit against the Chinese-owned social media app. TikTok violated the Children’s Online Privacy Protection Act (COPPA) when it knowingly allowed children younger than 13 to maintain accounts, DOJ said in a complaint filed on behalf of the FTC. The company purposefully avoided obtaining parental consent and delivered targeted advertising to underage users, the agencies alleged. The department cited internal communications from a TikTok employee acknowledging the conduct could get the company “in trouble” because of COPPA. TikTok let children bypass age restrictions and create accounts without age verification, DOJ said. Moreover, TikTok classified millions of accounts with an “age unknown” status, the filing said. “TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” FTC Chair Lina Khan said in a statement. “The FTC will continue to use the full scope of its authorities to protect children online.” Principal Deputy Assistant Attorney General Brian Boynton said the complaint will “prevent the defendants, who are repeat offenders and operate on a massive scale, from collecting and using young children’s private information without any parental consent or control.” In a statement Friday, TikTok said it disagrees with the allegations, “many of which relate to past events and practices that are factually inaccurate or have been addressed.” TikTok offers “age-appropriate experiences with stringent safeguards,” proactively removes “suspected underage users” and has “voluntarily launched features such as default screentime limits, Family Pairing, and additional privacy protections for minors,” the company said. The FTC is seeking a permanent injunction and civil penalties of up to $51,744 per instance of violation. 
The commission voted 3-0 to refer the complaint to DOJ, with Commissioners Melissa Holyoak and Andrew Ferguson recused.
Maurine and Matthew Molak, who sued the FCC for its decision authorizing funding of Wi-Fi on school buses (see 2406260006), filed a petition at the agency seeking reconsideration of last month’s 3-2 order allowing schools and libraries to use E-rate support for off-premises Wi-Fi hot spots and wireless internet services (see 2407180024). Pleading cycle deadlines will be set in a Federal Register notice, the FCC said Friday. “Petitioners urge the FCC to reconsider and rescind the Report and Order because it is contrary to law,” the petition said. The Molaks argue that the Telecom Act didn’t provide the FCC authority to use the E-rate program to pay for internet service and connections, “such as the Wi-Fi service and equipment at issue.” An agency “cannot exercise authority it does not have,” the petition argued: “If the FCC wishes to move forward with this proposal, it must first obtain proper authority from Congress.” The Molaks, whose 16-year-old son died by suicide after he was cyberbullied, argued that the school bus ruling would give children and teenagers unsupervised social media access. That case is before the 5th U.S. Circuit Court of Appeals. Meanwhile, Schools, Health & Libraries Broadband Coalition Executive Director John Windhausen told us the group is mostly pleased with the Wi-Fi order and Further NPRM that the FCC posted last week. Windhausen saw no big surprises. “We're glad the FCC clarified a few issues and teed up additional questions in the further notice,” he said. SHLB's webinar on Wednesday “showed that there is a high level of interest in this new initiative, so we're excited to see how schools and libraries use this opportunity,” he said. SHLB plans additional webinars to answer questions about the program. Several changes were made between the draft and final version of the item, based on our side-by-side comparison.
One question before the vote was whether the item would be tweaked to address fixed wireless access and partnerships with nontraditional providers (see 2406270068). The order clarifies that Wi-Fi hot spots “must be for use with a commercially available mobile wireless Internet service, rather than for use with [citizens broadband radio service] or other private network services.” The FNPRM adds language, as sought by Commissioner Geoffrey Starks, on cybersecurity issues. The final order includes a new paragraph on cybersecurity risk management. “Recognizing the critical needs of schools and libraries to protect their broadband networks and sensitive student, school staff, and library patron data, we seek comment on how to ensure that using E-Rate support for Wi-Fi hotspots does not introduce additional vulnerabilities or risks to cyberattacks,” the FNPRM says: “Specifically, we seek comment on whether service providers … should be required to implement cybersecurity and supply chain risk management plans.”
New York state on Thursday started the process to implement two kids’ online safety laws. Attorney General Letitia James (D) released an advance NPRM for each. The bills are the Stop Addictive Feeds Exploitation (Safe) for Kids Act and the Child Data Protection Act. While not part of the formal rulemaking process under the state’s administrative procedures act, the ANPRMs let the state seek information before proposing rules, the AG office said. Comments are due Sept. 30. “New Yorkers are looking to this office to protect children on social media apps and online, and the rules we are drafting will do precisely that,” James said. “By offering everyone, supporters and opponents of the recently signed legislation, the opportunity to submit comments and information, my office will ensure that we can better address concerns and priorities.” The Safe Act requires obtaining parental consent when using algorithms to sort feeds for minors, while the kids’ privacy bill bans websites from collecting and sharing minors’ personal data without informed consent. In the Safe Act ANPRM, the AG office asked how it should identify commercially reasonable and technically feasible age-verification methods, how it should implement a parental consent mechanism and how to determine whether a social media platform is addictive. In the kids’ privacy bill ANPRM, the AG office asked what factors are relevant to determining that a website is primarily directed at minors, young teenagers and older teens. Among many other questions, the office asked if there should be any exceptions to the definition of a data “sale” and how rules should account for “anonymized or deidentified data that could potentially still be re-linked to a specific individual.” Gov. Kathy Hochul (D) applauded the process to implement the bills she signed in June (see 2406200069). Citing the U.S. Senate's passage of two children’s internet safety bills Tuesday (see 2407300042), Hochul said, “Our efforts in New York are accelerating a national conversation on youth mental health and social media.”
The 5th U.S. Circuit Court of Appeals should lift a preliminary injunction against Mississippi’s social media age-verification law, Mississippi Attorney General Lynn Fitch (R) argued in a filing Thursday (docket 24-60341) (see 2407290008). HB-1126 requires that social media platforms obtain parental consent to allow minors to access their services. NetChoice sued to block HB-1126 on free speech grounds and won a preliminary injunction from the U.S. District Court for the Southern District of Mississippi on July 1 (see 2407160038). District Judge Halil Suleyman Ozerden on July 15 denied Fitch’s request to lift the injunction, finding NetChoice is likely to succeed on the merits of its First Amendment challenge. Fitch argued before the appeals court Thursday that the injunction rests on “facial claims that NetChoice failed to support.” Nothing in the law “facially” violates the First Amendment because it regulates online conduct, not online speech, said Fitch: The law’s “coverage turns on where harmful conduct toward minors online is most likely: the interactive social-media platforms that allow predators to interact with and harm children.”