Distracted by ‘Fries’

Consumers Confused About Privacy Terms, Ad Icon, Unofficial Meeting of NTIA Participants Hears

Privacy researchers painted a starkly different picture of consumers’ grasp of privacy policies related to advertising and data-sharing than that of the Digital Advertising Alliance (DAA), in dueling presentations Tuesday. In a conference call organized by consumer and privacy groups participating in the NTIA’s mobile privacy stakeholder proceeding, researchers said consumers had markedly different understandings of common privacy terms than those intended by companies, and barely interacted with the DAA’s “Ad Choices” icon, which when clicked directs users to information about behavioral or “interest-based” advertising. DAA statistics released by the group show millions of visits since the effort kicked off earlier this year.

Marketers use words “aimed at calming” the public over privacy concerns in privacy policies and related disclosures, but use those terms “quite differently from how the public understands them,” said Joseph Turow, professor in the University of Pennsylvania’s Annenberg School for Communication, on the NTIA participants’ call. His new paper on the “non-transparency of key online words” (http://xrl.us/bnzhxx) said more than half of consumers falsely believe a privacy policy stops a company from sharing their information with other companies, a consistent result going back to 2005. Majorities oppose being served tailored ads for products, news and especially politics, with 85 percent saying they would be angry if Facebook served them political ads based on private profile information, he said. “People don’t simply give us mechanical answers” but show “subtlety” depending on the scenario -- a majority, for example, favored receiving tailored discount offers, Turow said.

The Ad Choices icon barely registers with consumers: 55 percent said they didn’t know whether a “small triangle” with such a marking existed next to tailored ads, and only 20 percent recognized the marker, Turow said: “They have no idea perhaps what we're even talking about.” More than half also didn’t know whether Ad Choices lets them tell the particular company to stop sending them “types” of ads, he said: “This idea that billions of these things are served as impressions” to consumers evidently isn’t having much effect. He referred to a GAO study that found mobile sites don’t explain well what they or advertisers do with consumers’ location data, or who the “affiliates” are that receive such data: That’s dangerous because marketers are intensely focused on developing contextual ads that take into account “social relationships” at a particular location. If consumers fundamentally misunderstand a given term as laid out in a privacy policy and it continues to be used, “we should consider that a deceptive phrase” and also prohibit companies from describing such notices as “privacy policies,” Turow said.

Users in general have “very little knowledge” about behavioral advertising, said Carnegie Mellon University professor Lorrie Cranor, citing her lab and other studies from the past year (http://xrl.us/bnzhzg). Third-party cookies are a “complete mystery” to them, they associate behavioral advertising initially with identity theft, and their first response to stopping tracking is to delete their cookies -- which also deletes opt-out preferences, she said. Consumers in her study had “almost no familiarity” with tracking companies, didn’t know how to decide which trackers to keep enabled, and didn’t even realize the Ad Choices icon was clickable. Because they knew so little about the ad preferences enabled through the icon, they “assumed the worst” about it. Opt-out tools are “full of jargon” and the default settings “surprised” users, especially when changing a setting broke an item on a Web page, she said. Her team in December tested variations of a New York Times home page on 1,500 online users, using different icons and “taglines” to measure comprehension. “'Ad Choices’ is about the worst tagline you could use” based on the results, she said. In the mobile realm, users didn’t understand Android app permissions and thought the Android brand itself meant they were protected from “bad apps,” Cranor said.

The DAA separately said Tuesday its YourAdChoices.com education site clocked 12.7 million visits through early November, or 1.3 million per month, “which demonstrates both interest and traction” (http://xrl.us/bnzhz3). Including its AboutAds.info site, visits have “increased 10-fold” over visits to AboutAds.info alone in 2011, reaching 17.4 million for 2012 so far, it said. The group plans to expand to additional countries and regions following its EU launch and is working on “how to translate this unprecedented program into mobile app environments,” it said: Its icon “has become ubiquitous online -- and is a signal to millions” of users about “the presence of an interest-based advertisement.” DAA didn’t immediately respond to our query about the privacy researchers’ claims.

Giving users more control and information about privacy doesn’t necessarily protect them, but can actually make them more reckless with disclosure, Alessandro Acquisti, Carnegie Mellon associate professor of information technology and public policy, said on the conference call. In his research, not yet published, Acquisti found that users were more likely to provide “arbitrary amounts of personal disclosures” based on trivial differences in how requests were made. College students given a survey that included sensitive questions, such as whether they had sex in public, actually approved of the release of their answers at higher rates when the surveys included a “publication permission” checkbox, he said. Acquisti got statistically significant differences in responses when students were told survey results would be shared with a student committee alone versus with a faculty committee as well -- but that difference was erased when his team introduced a “15-second delay” between being told of the sharing parameters and giving their consent, he said. Even asking an irrelevant question between giving conditions and getting consent -- “'Do you want fries with that?'” -- seems to make survey subjects forget their reticence.

TechFreedom President Berin Szoka asked what kind of response to the DAA campaign would be considered adequate, if it wasn’t a “terribly good campaign” in the first place, as Cranor had claimed on the call. That’s difficult to say, Cranor said -- perhaps opinion surveys taken before and after a major informational campaign would help. The problem is that the “essential purpose” of the DAA educational materials is to convince users that tracking “is not a problem” and that “it’s silly not to get relevant advertising,” Turow said. Studies show “a miniscule percentage of people really ends up doing anything about” their ad preferences when they actually visit a DAA site, he said. That prompted Szoka to ask “methodologically” how a researcher could differentiate between people who know but don’t “care” and those who don’t know at all. “The percentage of ‘don’t knows’ was over 50 percent” in the survey of Ad Choices recognition, Turow said.

Cranor’s earlier research suggesting people would pay only 50 cents for their privacy has been misunderstood, she said in response to a question: It had to do with how much of a “premium” a person would pay for a $10 item if it came from a privacy-protecting e-commerce site. Another situation that shows the power of “framing” came from Acquisti’s research, he said: His team offered mall shoppers either a $10 “anonymous” gift card or a $12 card that could be traced to them. Shoppers tended to keep whichever card they were handed first -- choosing the $10 card when it was offered first and the $12 card when that one was offered first -- which shows the “enormous power” of the default setting, he said.

FTC cases show that “the design of disclosure matters,” said Maneesha Mithal, associate director of the agency’s privacy and identity protection division. That’s why it pursued P2P software maker Frostwire for pre-checking boxes on a setup screen that by default would share the entire contents of a user’s computer, she said. Mobile developers shouldn’t think they'll escape FTC scrutiny if they simply don’t post a privacy policy, because any disclosure -- whether in a policy or not -- will be enforced, as will failure to disclose “material” information to consumers, she said. An FTC staffer in the agency’s San Francisco office has been making the rounds to inform developers of their responsibilities, Mithal said.

Pressed on the FTC’s willingness to pursue failure to disclose, Mithal said the agency investigated an analytics company for failing to tell consumers using its tracking software in a rewards program that the software would also scoop up bank account and other sensitive data. She told another questioner that the agency’s fraud actions generally start as consumer complaints, but its privacy investigations usually arise from press reports, advocacy groups and congressional inquiries, since consumers don’t recognize privacy violations as readily. The FTC also does “our own independent research” in opening some investigations, she said.

The agency has “absolutely encouraged” developers to devise short-form notices in addition to longer privacy policies, Mithal told a questioner: They shouldn’t worry about disclosing too little in short-form notices “as long as they're not used to sort of omit or hide” relevant information in the longer policy. In determining what’s material or not for disclosure, “I think consumer testing is certainly something we would encourage” companies to do, she said.