The Internet Association and TechNet, both of which count several sharing-economy companies as members, praised a Thursday FTC report on such peer-to-peer platforms (see 1611170017). The report summarized a 2015 workshop on the topic and didn't issue any recommendations. "Consumers enjoy increased competition, lower prices, and better quality services when sharing economy platforms are able to compete," said IA General Counsel Abigail Slater in a statement. "Consumer protections are hardwired into sharing economy platforms and the internet industry welcomes the FTC’s recognition of this important attribute." TechNet President Linda Moore said in a statement that such platforms produce flexible job opportunities, encourage competition and help the economy. She said the study "demonstrates why it’s critical to advance policies that allow sharing economy platforms to compete on a level playing field with traditional industry participants.” Airbnb, Etsy, Instacart, Lyft and Uber are some major sharing economy companies that are members of either IA, TechNet or both.
It's time to abandon "over the top" as a way to describe broadband video, with the term OTT "a relic of a previous time," The Diffusion Group analyst Eric Grab wrote in a post Thursday. While OTT describes delivery of video over networks not owned by the distributor, "the modern video ecosystem is now inseparable from the Internet," Grab said, adding that "even the name 'Internet Video' doesn't seem to fit the bill." Internet-delivered video itself is less important than the services the internet enables, Grab said. While OTT will be used for years to come, he said, "Gradually the industry will recognize the need to replace the term with ... something that captures the essence of the quantum video and the complete entertainment experience."
The Department of Homeland Security issued IoT security principles aimed at helping manufacturers and other stakeholders make better decisions about how they develop, build, implement and use such technologies and systems. “The growing dependency on network-connected technologies is outpacing the means to secure them,” said DHS Secretary Jeh Johnson in a Tuesday news release. "Securing the Internet of Things has become a matter of homeland security." The DHS principles emphasize integration of security measures at the design phase, vulnerability management, use of tested security practices, prioritization of security measures based on potential disruptions or failures, greater transparency across the IoT ecosystem, and consideration of what should be connected to the internet and what shouldn't. Wednesday, CTA issued an IoT white paper (see 1611160017).
The Information Technology and Innovation Foundation sent President-elect Donald Trump advice on how his administration can boost competitiveness, innovation and productivity in 2017. ITIF proposed 36 policies that can be achieved via executive authority and legislation in a 14-page open memo to Trump, the group said in a Wednesday news release. ITIF President Robert Atkinson said doing so means going beyond "outdated" economic theories and addressing "quarterly capitalism." He said the memo proposes "relatively easy, bipartisan steps" that Trump can take. For example, ITIF said Trump should create a digital infrastructure council of federal agency representatives to discuss how artificial intelligence, data analytics and the IoT can improve power grids, roads, water systems and other infrastructure. The group said the Department of Agriculture's rural broadband support mechanisms should be updated and that access, permitting and leasing of federal land for fiber and wireless network buildouts should be streamlined.
Reports of social media censorship by Facebook, Google, Instagram and Twitter indicate that users are becoming increasingly frustrated with those platforms' content moderation policies, Onlinecensorship.org reported Wednesday. The Electronic Frontier Foundation/Visualizing Impact project based its report on an analysis of user-generated content takedown reports between April and November. About 76 percent of 230 reports of social media content takedowns between March and October concerned Facebook, while 17 percent involved Twitter and about 4 percent concerned Instagram, the report said. Facebook this year faced criticism from congressional lawmakers and others over claims the website censored conservative news as well as the publication of several fake news stories (see 1605100032, 1605240059, 1605110048 and 1610310038). Onlinecensorship.org researchers also analyzed "content takedowns" from Google+ and YouTube users. The report said 36 percent of the reports related to account shutdowns. About 26 percent involved a takedown of a post, 19 percent involved a photo takedown and 6 percent involved a video, Onlinecensorship.org said. Most users don't have a clear understanding of why their content was removed; only 60 reports included a reason, the project said. Election-related censorship complaints in particular showed that users wanted to speak their minds about the presidential contest between Republican President-elect Donald Trump and Democratic presidential nominee Hillary Clinton. “These companies have enormous impact on the public sphere, yet they are still private entities with the ability to curate the information we see and the information we don’t see at their sole discretion,” said Jillian York, Onlinecensorship.org co-founder, in a news release. “The user base is what powers these social media tools, yet users are feeling like they don’t have any control or understanding of the system.” Onlinecensorship.org recommended that social media platforms create best practices for content moderation, including a commitment to transparency in how such policies are enforced. Researchers also recommended improving the systems for appealing content takedown decisions when they're made in error. Facebook, Google, Instagram and Twitter didn't immediately comment.
Facebook said Wednesday that it's fixing more errors in its view count metrics, including the view counts for content posted on businesses' and individuals' professional profile pages. The visible view counts were showing the sum of all views, rather than counting views by unique visitors. Once the metrics are fixed in the coming weeks, the seven-day view count will drop by about 33 percent, while the 28-day summary view count will drop by 55 percent, Facebook said in a news release. The social media company said it had also undercounted views for videos watched to completion. Fixes to that metric will cause video view counts to increase by 35 percent, Facebook said. The company disclosed in September that it had overestimated video view counts for two years, drawing criticism from the Association of National Advertisers (see 1609290075).
As a way to help lawmakers and others find common ground and advance the encryption debate, BSA|The Software Alliance released an analytical tool for examining legislation and policy proposals, the industry group said in a Tuesday news release. The tool's criteria center on: improving data security; raising the capabilities of law enforcement; promoting privacy; protecting sensitive government data; encouraging innovation of data security tools free of government mandates; protecting critical infrastructure such as banking and health; understanding the impact around the world; and increasing transparency of any legislative proposal.
Distributed denial-of-service (DDoS) attacks increased 71 percent year-over-year in Q3, Akamai reported Tuesday. DDoS attacks greater than 100 Gbps increased 138 percent, including two attacks attributed to the Mirai botnet. Mirai originated the October attacks against DynDNS, which caused outages and latency for major U.S. websites (see 1610210056). The DynDNS attacks have resulted in significant congressional interest in the cybersecurity of connected devices (see 1610260067). The House Communications and Commerce Trade subcommittees are set to hold a hearing Wednesday on IoT cybersecurity, partially in response to the DynDNS attacks (see 1611090063). In contrast, Akamai found that web application attacks decreased 18 percent, with U.S.-originating attacks down 67 percent.
The Trustworthy Accountability Group launched an anti-malware certification seal program for buyers, sellers and intermediaries in the digital advertising supply chain, the ad industry initiative said in a news release Tuesday. TAG, which was formed by the American Association of Advertising Agencies, Association of National Advertisers and Interactive Advertising Bureau, said it's also building and hosting an information-sharing hub to disseminate real-time intelligence about malware attacks to the industry and law enforcement. Adform, AppNexus, Google and RocketFuel are some ad companies and agencies that agreed to participate in the certification program, which entails complying with certain guidelines and best practices for scanning content for malware. The program "uses a multi-prong approach that includes consumer education, industry best practices, information sharing, and law enforcement to shut down malware distributors and protect the advertising supply chain," said TAG CEO Mike Zaneis. The threat-sharing hub will permit some companies to get data on the most recent infections, "serving as a type of immune system for the industry in helping it respond to new and emerging threats," TAG said. The anti-malware effort is the fourth and final program in the initiative's mission to stop fraudulent traffic, fight malware, combat internet piracy and promote transparency.
Addressing challenges in preventing abuse, bullying and harassment on Twitter, the social media website said in a Tuesday blog post that it's made progress in the areas of controls, reporting and enforcement to better help users. One feature called "mute," which enables users to stop seeing tweets from certain accounts, is being expanded, it said. "We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days." The company will also give users a "more direct way" to report specific conduct that targets people based on their age, disability, disease, ethnicity, gender and gender identity, national origin, race, religious affiliation and sexual orientation. "This will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter," the company said. The site said it's "retrained" its support team on policies, "including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program." Twitter said it also improved internal tools and systems to effectively address abusive conduct when reported.