I tried to read all my app privacy policies... it was 1 million words

This writer read every single privacy policy for the apps on his phone (like we are told to do)... the total ran to nearly twice as many words as War and Peace. Picture by Towfiqu Barbhuiya/Unsplash

Published Jun 1, 2022


By Geoffrey A. Fowler

Twitter simplified its privacy policy earlier this month, encouraging us to read it by turning parts into a video game. Yes, a game - it's called the Twitter Data Dash.

In it, you use keyboard arrows to take a dog named Data to the park while dodging cat ads and battling trolls - all while learning about Twitter's new 4,500-word privacy policy.

OK, who are we kidding: No one has time for that.

I applaud Twitter for putting effort into being more understandable. The same goes for Facebook, which last week rewrote its infamous privacy policy to a secondary-school reading level - but also tripled its length to 12,000 words. The deeper I dug into them, the clearer it became that understandability isn't our biggest privacy problem. Being overwhelmed is.

We the users shouldn't be expected to read and consent to privacy policies. Instead, let's use the law and technology to give us real privacy choices. And there are some very good ideas for how that could happen.

There's a big little lie at the centre of how we use every website, app and gadget. We click "agree," saying we've read the data policy and agree to the terms and conditions. Then, legally speaking, companies can say we've given them consent to use our data.

In reality, almost nobody actually reads these things and almost nobody feels in control. A 2019 Pew survey found that only 9% of Americans say they always read privacy policies.

It's not like you have a choice, anyway. When you're presented with one of these "agree" buttons, you usually can't negotiate the terms. You could decline to use apps or websites - but it's increasingly hard to participate in the world without them.

What's the harm? You might be clicking away a company's right to mine the contents of your tax return. Your phone could collect evidence that you've sought an abortion in a state where it's suddenly illegal. Or you could be sharing data that will be used to discriminate against you in job applications or buying a home.

Still, I don't blame anyone whose eyes glaze over when they see a privacy notice. As an experiment, I tallied up all of the privacy policies just for the apps on my phone. It totalled nearly one million words. "War and Peace" is about half as long.

And that's just my phone. Back in 2008, Lorrie Cranor, a professor of engineering and public policy at Carnegie Mellon University, and a colleague estimated that reading and consenting to all the privacy policies on websites Americans visit would take 244 hours per year. She hasn't updated the tally since, but tells me that now you'd have to add in not only apps and connected gadgets such as cars, but also all the third-party companies that collect data from the technology you use.

Some government efforts have made things worse. Thanks to a recent European law, lots of websites also now ask you to "opt in" to their use of tracking technology, throwing a bunch of dials on the screen before you can even see whether the site is worth looking at.

Many people, including a generation setting up their first tablets and smartphones, just click "agree" to everything because they think privacy is a lost cause. "We're teaching everyone the wrong thing," said Mika Shah, co-acting general counsel of the tech non-profit Mozilla.

So in my hunt for ways to make tech work better for us, I called up one of the top officials responsible for policing all one million of those words on my phone: Commissioner Rebecca Kelly Slaughter of the Federal Trade Commission.

Turns out, she thinks privacy policies are broken, too. "That system is premised on the flawed assumptions that the information will be digestible, intelligible, usable for people, and that they will have meaningful choice," she said.

"I have four children between the ages of two and 9," Slaughter told me. "I literally couldn't - even if I didn't have a job - micromanage each piece of technology they interact with. But when we live in a universe that says we're given a 'choice,' I feel like I'm failing as a parent if my kid's data is shared because I have given 'consent' and I probably should have been watching more carefully. That's an incredibly punishing burden."

So then what's a less punishing way to protect our privacy? What I discovered: We're going to need laws - and some new technology to read and manage all those privacy policies for us.

- - -

For the past decade or so, one idea has dominated efforts to fix privacy policies: Make them simpler. Twitter's big reset may well be the peak example of this thinking. It tried not only to be simple, but also fun.

Twitter's chief privacy officer, Damien Kieran, was open with me about what went into developing the company's new policy and game - and also open to criticism about where it fails.

"We did a bunch of independent research around the world to understand our privacy practices, including our privacy policy," Kieran told me. "That confirmed our working assumption: Much of this stuff was very difficult to understand."

To be sure, at some companies, that's by design. Laws might require consent, but most don't require meaningful consent. So companies use the vaguest possible legalese, which lets them gobble up as much data as possible. Some, like the credit cards I've investigated, go out of their way to obscure whom they're selling your data to. (Just last week, Twitter had to pay a $150 million fine for having "deceptively collected" email addresses and phone numbers to target ads between 2014 and 2019.)

Kieran said the goal of Twitter's new privacy policy really was clarity, and getting us to use controls many people don't even know exist.

The new policy offers short summaries of topics, and links throughout to settings pages. And, of course, there's the game, which mixes a spoonful of dopamine with the medicine of learning about data use.

So how much better off is the privacy of Twitter users?

The game is cute, but do even people who love arcade games love them enough to play one about a privacy policy? (Not to mention everyone else: My parents reported they couldn't get far enough in the game to learn anything about privacy.) And in the new privacy policy itself, there's terminology that only a lawyer's mother could love. There are 11 references to "affiliates" and six to "certain" - as in, Twitter shares "certain information," which is certainly vague.

Kieran said Twitter used some of this language because explaining things further would have made the policy even longer. (Facebook, for one, said it dealt with the bloat in its simplified policy by presenting the information in layers of complexity, with sub-menus and pop-outs.)

Twitter's first idea was to make a privacy policy that could also be read as a series of tweets. But companies, Kieran said, get conflicting messages from regulators who want them to be both simpler and also convey more detail.

There may be a middle road, but it's also rocky. Cranor at Carnegie Mellon has experimented with making privacy policies that look like the nutrition labels on packaged food. A label, she says, not only communicates quickly, but also makes it easier to compare the practices of different websites and apps.

In January, a bipartisan group of lawmakers even introduced legislation that would require sites to make easy-to-digest summaries of their privacy terms. They called it the TLDR Act, a nod to the saying "Too long, didn't read."

But the devil is in the details. Few companies have made privacy labels that Cranor thinks actually do the job. "What's most important to show to users is the stuff that will surprise them - the stuff that's different than what every other company does," she said. Both Apple and Google now offer app store privacy labels, but they're not particularly clear or, as I discovered, always even accurate.

"I'm sympathetic to the idea that it is challenging for companies to figure out how to say everything without saying too much and being confusing," the FTC's Slaughter told me. "That's why we shouldn't just be relying on companies to offer disclosures."

Case in point: For all of Twitter's efforts to make privacy simple and fun, its recent reboot didn't actually change anything about how much of our data it takes - or what it does with it. Your direct messages still aren't encrypted.

The same is true at Facebook, where its new policy hasn't changed any of its awful default settings.

- - -

So here's an idea: Let's abolish the notion that we're supposed to read privacy policies.

I'm not suggesting companies shouldn't have to explain what they're up to. Maybe we call them "data disclosures" for the regulators, lawyers, investigative journalists and curious consumers to pore over.

But to protect our privacy, the best place to start is for companies to simply collect less data. "Maybe don't do things that need a million words of explanation? Do it differently," said Slaughter. "You can't abuse, misuse, leverage data that you haven't collected in the first place."

Apps and services should only collect the information they really need to provide that service - unless we opt in to let them collect more, and it's truly an option.

I'm not holding my breath that companies will do that voluntarily, but a federal privacy law would help. While we wait for one, Slaughter said the FTC (where Democratic commissioners recently gained a majority) is thinking about how to use its existing authority "to pursue practices - including data collection, use and misuse - that are unfair to users."

Second, we need to replace the theater of pressing "agree" with real choices about our privacy.

Today, when we do have choices to make, companies often present them in ways that pressure us into making the worst decisions for ourselves.

Apps and websites should give us the relevant information and our choices in the moment when it matters. Twitter actually handles this kind of just-in-time notice better than many other apps and websites: By default, it doesn't collect your exact location, and it only prompts you to share it when you ask to tag your location in a tweet.

Even better, technology could help us manage our choices. Cranor suggests data disclosures could be coded to be read by machines. Companies already do this for financial information, and the TLDR Act would require consistent tags on privacy information, too. Then your computer could act kind of like a butler, interacting with apps and websites on your behalf.
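
To make the "butler" idea concrete, here is a minimal sketch in Python. Everything about the format is hypothetical - the field names, the preferences, the disclosure itself - since no standard schema exists yet; the point is only to show how software could compare a site's machine-readable tags against your stated preferences and flag mismatches before you ever see an "agree" button.

```python
# Hypothetical sketch: a "privacy butler" that compares a site's
# machine-readable data disclosure against the user's preferences.
# The JSON schema and field names below are invented for illustration;
# no standard like this exists yet.
import json

# What the user is willing to tolerate (set once, like browser settings).
my_preferences = {
    "precise_location": False,   # don't collect exact GPS location
    "sell_data": False,          # don't sell or share data with brokers
    "targeted_ads": False,       # don't use my data for ad targeting
    "retention_days_max": 90,    # delete my data within 90 days
}

# What a site discloses, in a hypothetical machine-readable format.
site_disclosure = json.loads("""
{
  "service": "example.com",
  "precise_location": true,
  "sell_data": false,
  "targeted_ads": true,
  "retention_days": 540
}
""")

def review(disclosure: dict, prefs: dict) -> list[str]:
    """Return the practices that conflict with the user's preferences."""
    conflicts = []
    for practice in ("precise_location", "sell_data", "targeted_ads"):
        if disclosure.get(practice) and not prefs[practice]:
            conflicts.append(f"{practice} is used, but you opted out")
    if disclosure.get("retention_days", 0) > prefs["retention_days_max"]:
        conflicts.append(
            f"data kept {disclosure['retention_days']} days "
            f"(you allow {prefs['retention_days_max']})"
        )
    return conflicts

if __name__ == "__main__":
    problems = review(site_disclosure, my_preferences)
    if problems:
        print(f"Before you tap 'agree' on {site_disclosure['service']}:")
        for p in problems:
            print(" -", p)
    else:
        print("This service matches your privacy preferences.")
```

The hard part isn't the software - it's standardising the tags, which is exactly the kind of thing a law like the TLDR Act would have to pin down.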

Picture Siri as a butler who quizzes you briefly about your preferences and then does your bidding. The privacy settings on an iPhone already let you tell all the different apps on your phone not to collect your location. For the past year, they've also allowed you to ask apps not to track you.

Web browsers could serve as privacy butlers, too. Mozilla's Firefox already lets you block certain kinds of privacy invasions. Now a new technology called Global Privacy Control (GPC) is emerging that would interact with websites and instruct them not to "sell" our data. It's grounded in California's privacy law, which is among the toughest in the nation, though it remains to be seen how the state will enforce GPC.
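
Under the hood, the GPC signal is simple: a participating browser or extension attaches a "Sec-GPC: 1" header to each web request, and it's up to the site to honour it. Here is a minimal sketch in Python, using the Flask web framework, of what honouring it could look like; the disable_data_sale function is a hypothetical stand-in for whatever opt-out machinery a real site would have.

```python
# Minimal sketch of a site honouring the Global Privacy Control signal.
# Browsers that support GPC send the request header "Sec-GPC: 1".
# Flask is used for illustration; disable_data_sale() is a hypothetical
# stand-in for a site's real opt-out machinery.
from flask import Flask, request

app = Flask(__name__)

def disable_data_sale(user_session: dict) -> None:
    # Hypothetical: flag this visitor's session so their data
    # is never passed to ad-tech or data-broker partners.
    user_session["do_not_sell"] = True

@app.route("/")
def home():
    session = {}  # stand-in for a real session store
    if request.headers.get("Sec-GPC") == "1":
        disable_data_sale(session)
        return "GPC detected: we will not sell or share your data."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run(port=5000)
```

The appeal of this approach is that you state your preference once, in the browser, instead of clicking through a consent screen on every site you visit.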

Cranor and her partners are even plotting how technology might be able to protect our privacy in a world of connected devices like surveillance cameras. One idea: If there's a common way for devices to wirelessly broadcast their presence, your phone could read the signal and warn you if you're entering an area with surveillance. The need for that has become frighteningly evident with the rise of devices like Apple's AirTags, which have been misused to stalk people.
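
No such broadcast standard exists today, but the mechanics wouldn't be exotic. As a purely hypothetical sketch in Python - the bleak Bluetooth library is real, but the "surveillance device" service UUID is invented - a phone or laptop could scan nearby wireless advertisements and warn you when a device announces itself as a camera.

```python
# Hypothetical sketch: warn the user when a nearby device announces itself
# as surveillance hardware. The bleak library and BleakScanner.discover()
# are real; SURVEILLANCE_SERVICE_UUID is invented, since no such broadcast
# standard exists yet.
import asyncio
from bleak import BleakScanner

# Invented UUID a future standard might reserve for "I am a camera/recorder".
SURVEILLANCE_SERVICE_UUID = "0000feed-0000-1000-8000-00805f9b34fb"

async def scan_for_surveillance(seconds: float = 10.0) -> None:
    devices = await BleakScanner.discover(timeout=seconds, return_adv=True)
    for device, adv in devices.values():
        if SURVEILLANCE_SERVICE_UUID in (adv.service_uuids or []):
            print(f"Warning: surveillance device nearby "
                  f"({device.name or 'unnamed'}, {device.address})")

if __name__ == "__main__":
    asyncio.run(scan_for_surveillance())
```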

Of course, tech-based solutions will always have to keep pace with the new ways our data is being harvested and sold.

But just imagine it: We could use technology to protect our privacy, not just invade it.